chatdocs

- Name: chatdocs
- Version: 0.2.6
- Summary: Chat with your documents offline using AI.
- Home page: https://github.com/marella/chatdocs
- Author: Ravindra Marella
- License: MIT
- Keywords: chatdocs, ctransformers, transformers, langchain, chroma, ai, llm
- Upload time: 2023-09-03 07:59:15
            # [ChatDocs](https://github.com/marella/chatdocs) [![PyPI](https://img.shields.io/pypi/v/chatdocs)](https://pypi.org/project/chatdocs/) [![tests](https://github.com/marella/chatdocs/actions/workflows/tests.yml/badge.svg)](https://github.com/marella/chatdocs/actions/workflows/tests.yml)

Chat with your documents offline using AI. No data leaves your system. An internet connection is required only to install the tool and download the AI models. It is based on [PrivateGPT](https://github.com/imartinez/privateGPT) but has more features.

![Web UI](https://github.com/marella/chatdocs/raw/main/docs/demo.png)

**Contents**

- [Features](#features)
- [Installation](#installation)
- [Usage](#usage)
- [Configuration](#configuration)
- [GPU](#gpu)

## Features

- Supports GGML/GGUF models via [CTransformers](https://github.com/marella/ctransformers)
- Supports 🤗 Transformers models
- Supports GPTQ models
- Web UI
- GPU support
- Highly configurable via `chatdocs.yml`

<details>
<summary><strong>Show supported document types</strong></summary><br>

| Extension       | Format                         |
| :-------------- | :----------------------------- |
| `.csv`          | CSV                            |
| `.docx`, `.doc` | Word Document                  |
| `.enex`         | EverNote                       |
| `.eml`          | Email                          |
| `.epub`         | EPub                           |
| `.html`         | HTML                           |
| `.md`           | Markdown                       |
| `.msg`          | Outlook Message                |
| `.odt`          | Open Document Text             |
| `.pdf`          | Portable Document Format (PDF) |
| `.pptx`, `.ppt` | PowerPoint Document            |
| `.txt`          | Text file (UTF-8)              |

</details>
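
The table above is essentially a lookup from file extension to document format. A minimal sketch of that dispatch idea in Python (the mapping mirrors the table; the actual loaders ChatDocs uses internally are not shown):

```python
from pathlib import Path

# Supported extensions and their formats, taken from the table above.
SUPPORTED_FORMATS = {
    ".csv": "CSV",
    ".docx": "Word Document",
    ".doc": "Word Document",
    ".enex": "EverNote",
    ".eml": "Email",
    ".epub": "EPub",
    ".html": "HTML",
    ".md": "Markdown",
    ".msg": "Outlook Message",
    ".odt": "Open Document Text",
    ".pdf": "Portable Document Format (PDF)",
    ".pptx": "PowerPoint Document",
    ".ppt": "PowerPoint Document",
    ".txt": "Text file (UTF-8)",
}

def detect_format(path: str) -> str:
    """Return the document format for a file, or raise for unsupported types."""
    ext = Path(path).suffix.lower()
    try:
        return SUPPORTED_FORMATS[ext]
    except KeyError:
        raise ValueError(f"Unsupported document type: {ext}") from None

print(detect_format("report.PDF"))  # Portable Document Format (PDF)
```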

## Installation

Install the tool using:

```sh
pip install chatdocs
```

Download the AI models using:

```sh
chatdocs download
```

The tool can now run offline, without an internet connection.

## Usage

Add a directory containing documents to chat with using:

```sh
chatdocs add /path/to/documents
```

> The processed documents will be stored in the `db` directory by default.
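
`chatdocs add` ingests the documents into a local vector store (the project's keywords suggest Chroma). A rough, stdlib-only sketch of the chunking step that typically precedes embedding — the chunk size and overlap values here are illustrative assumptions, not ChatDocs defaults:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each piece fits the embeddings model.

    `size` and `overlap` are illustrative; real ingestion pipelines usually
    also try to split on sentence or paragraph boundaries.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars of context
    return chunks

chunks = chunk_text("a" * 1200, size=500, overlap=50)
print(len(chunks))  # 3
```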

Chat with your documents using:

```sh
chatdocs ui
```

Open http://localhost:5000 in your browser to access the web UI.

It also has a nice command-line interface:

```sh
chatdocs chat
```

<details>
<summary><strong>Show preview</strong></summary><br>

![Demo](https://github.com/marella/chatdocs/raw/main/docs/cli.png)

</details>

## Configuration

All the configuration options can be changed using the `chatdocs.yml` config file. Create a `chatdocs.yml` file in some directory and run all commands from that directory. For reference, see the default [`chatdocs.yml`](https://github.com/marella/chatdocs/blob/main/chatdocs/data/chatdocs.yml) file.

You don't have to copy the entire file; add only the config options you want to change, and they will be merged with the default config. For example, see [`tests/fixtures/chatdocs.yml`](https://github.com/marella/chatdocs/blob/main/tests/fixtures/chatdocs.yml), which changes only some of the options.
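
Conceptually, a partial `chatdocs.yml` is overlaid on the defaults with a recursive dictionary merge. A minimal sketch of that idea (the merge function and the sample configs below are illustrations, not ChatDocs' actual implementation or defaults):

```python
def merge_config(default: dict, override: dict) -> dict:
    """Recursively overlay `override` on `default`, keeping unspecified defaults."""
    merged = dict(default)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)  # descend into nested sections
        else:
            merged[key] = value  # leaf values in the override win
    return merged

# Hypothetical defaults and a partial user config, for illustration only.
defaults = {
    "embeddings": {"model": "hkunlp/instructor-large", "model_kwargs": {"device": "cpu"}},
    "llm": "ctransformers",
}
user = {"embeddings": {"model_kwargs": {"device": "cuda"}}}

config = merge_config(defaults, user)
print(config["embeddings"]["model"])         # hkunlp/instructor-large
print(config["embeddings"]["model_kwargs"])  # {'device': 'cuda'}
```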

### Embeddings

To change the embeddings model, add and change the following in your `chatdocs.yml`:

```yml
embeddings:
  model: hkunlp/instructor-large
```

> **Note:** When you change the embeddings model, delete the `db` directory and add documents again.

### CTransformers

To change the CTransformers (GGML/GGUF) model, add and change the following in your `chatdocs.yml`:

```yml
ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_file: Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin
  model_type: llama
```

> **Note:** When you add a new model for the first time, run `chatdocs download` to download the model before using it.

You can also use an existing local model file:

```yml
ctransformers:
  model: /path/to/ggml-model.bin
  model_type: llama
```

### 🤗 Transformers

To use 🤗 Transformers models, add the following to your `chatdocs.yml`:

```yml
llm: huggingface
```

To change the 🤗 Transformers model, add and change the following in your `chatdocs.yml`:

```yml
huggingface:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-HF
```

> **Note:** When you add a new model for the first time, run `chatdocs download` to download the model before using it.

To use GPTQ models with 🤗 Transformers, install the necessary packages using:

```sh
pip install chatdocs[gptq]
```

## GPU

### Embeddings

To enable GPU (CUDA) support for the embeddings model, add the following to your `chatdocs.yml`:

```yml
embeddings:
  model_kwargs:
    device: cuda
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions [here](https://pytorch.org/get-started/locally/).

### CTransformers

To enable GPU (CUDA) support for the CTransformers (GGML/GGUF) model, add the following to your `chatdocs.yml`:

```yml
ctransformers:
  config:
    gpu_layers: 50
```

You may have to install the CUDA libraries using:

```sh
pip install ctransformers[cuda]
```
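
`gpu_layers` controls how many of the model's layers are offloaded to the GPU; the rest run on the CPU. A stdlib sketch of that split (the layer count and clamping behavior are illustrative assumptions about GGML-style runtimes):

```python
def split_layers(total_layers: int, gpu_layers: int) -> tuple[int, int]:
    """Return (layers on GPU, layers on CPU) for a given offload setting.

    Requesting more GPU layers than the model has simply puts everything
    on the GPU; negative values are treated as zero.
    """
    on_gpu = min(max(gpu_layers, 0), total_layers)
    return on_gpu, total_layers - on_gpu

# A 7B LLaMA-style model has 32 transformer layers (illustrative).
print(split_layers(32, 50))  # (32, 0): more than the model has -> all on GPU
print(split_layers(32, 20))  # (20, 12)
```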

### 🤗 Transformers

To enable GPU (CUDA) support for the 🤗 Transformers model, add the following to your `chatdocs.yml`:

```yml
huggingface:
  device: 0
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions [here](https://pytorch.org/get-started/locally/).

## License

[MIT](https://github.com/marella/chatdocs/blob/main/LICENSE)



            
