# lesa

- **Name:** lesa
- **Version:** 0.1.0.7
- **Summary:** A CLI tool to converse with any document locally using Ollama.
- **Home page:** https://github.com/shxntanu/lesa
- **Author:** Shantanu Wable
- **License:** Apache-2.0
- **Requires Python:** >=3.10.14, <3.13
- **Keywords:** lesa, rag pipeline, document chatbot
- **Uploaded:** 2025-01-03 07:23:41
            ![Banner](https://github.com/shxntanu/lesa/raw/master/assets/banner-v3.png)

<div align="center">

[![Python](https://img.shields.io/badge/python-3.10%2B-blue)](https://www.python.org/downloads/)
![PyPI - Version](https://img.shields.io/pypi/v/lesa)
![PyPI Downloads](https://static.pepy.tech/badge/lesa)

</div>

<div align="center">

**_lesa_**
`[lee - saa]` • **Old Norse** <br/>
(v.) to read, to study, to learn

<!-- <div align="center">
  <sub>Prepared by <a href="https://github.com/shxntanu">Shantanu Wable</a> and <a href="https://github.com/omkargwagholikar">Omkar Wagholikar</a> </sub>
</div> -->

</div>

`lesa` is a CLI tool built in Python that allows you to converse with your documents from the terminal, completely offline and on-device using **Ollama**. Open the terminal in the directory of your choice and start a conversation with any document!

## Usage

To start a conversation with a document (`.pdf` and `.docx` for now), simply run:

```bash
lesa read path/to/your/document [--page <page_number>]
```

Or, to start a conversation with an already-embedded directory, run:

```bash
lesa chat
```

### Embed

To embed all files from your current working directory, run:

```bash
lesa embed
```

This creates a `.lesa` config folder in your current working directory that stores the embeddings of all the documents in the directory.
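As a rough illustration of what an embedding step like this involves (this is *not* lesa's actual on-disk format, which isn't documented here), documents are typically split into overlapping text chunks before being embedded and persisted locally. A minimal, dependency-free sketch:

```python
import json
from pathlib import Path


def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks -- the usual first step before embedding."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks


def embed_directory(docs: dict[str, str], out_dir: str = ".lesa") -> Path:
    """Chunk each document and persist the chunks to a local config folder.

    Illustrative only: in a real pipeline each chunk would also be embedded
    (e.g. via an Ollama embedding model) and stored in a vector index.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    index = {name: chunk_text(text) for name, text in docs.items()}
    (out / "chunks.json").write_text(json.dumps(index))
    return out / "chunks.json"
```

The overlap between consecutive chunks helps retrieval later: a sentence cut at a chunk boundary still appears whole in one of the two chunks that share it.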

<!-- ## Features

-   🖥️ **Completely On-Device**: Uses Ollama under the hood to interface with LLMs, so you can be sure your data is not leaving your device.
-   📚 **Converse with (almost) all documents**: Supports PDF, DOCX and Text files.
-   🤖 **Wide Range of LLMs**: Choose the Large Language Model of your choice. Whether you want to keep it quick and concise, or want to go all in with a huge context window, the choice is yours. -->

## Setup

`lesa` uses [Ollama](https://ollama.com/) under the hood to run large language models locally.
To install and set up Ollama, run the setup script [`setup-ollama.sh`](scripts/setup-ollama.sh):

```bash
curl -fsSL https://raw.githubusercontent.com/shxntanu/lesa/master/scripts/setup-ollama.sh | bash
```

This script installs the Ollama CLI and pulls the default model (`llama3.1:latest`) for you. Once Ollama is set up, install the package itself using pip.

## Installation

Simply install the package using pip:

```bash
pip install lesa
```

To upgrade to the latest version, run:

```bash
pip install -U lesa
```

## Contribute

We welcome contributions! If you'd like to improve `lesa` or have any feedback, feel free to open an issue or submit a pull request.

## Credits

1. [Typer](https://typer.tiangolo.com/) and [Rich](https://github.com/Textualize/rich): CLI library and terminal formatting.
2. [Ollama](https://ollama.com/): On-device language model inference.
3. [Langchain](https://langchain.com/): Pipeline for language model inference.
4. [FAISS](https://github.com/facebookresearch/faiss): Similarity Search and Vector Store library from Meta AI.
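For intuition about the retrieval step that FAISS provides in a stack like this, nearest-neighbour search over stored embeddings boils down to ranking vectors by similarity to the query vector. A dependency-free sketch of that idea (not lesa's actual code, which uses FAISS directly):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def top_k(query: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the names of the k stored embeddings most similar to the query."""
    ranked = sorted(store, key=lambda name: cosine_similarity(query, store[name]),
                    reverse=True)
    return ranked[:k]
```

A library like FAISS does the same ranking, but with optimized indexes that stay fast as the number of stored chunk embeddings grows into the millions.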

## License

Apache-2.0

