pygpt4all

Name: pygpt4all
Version: 1.1.0 (PyPI)
Summary: Official Python CPU inference for GPT4All language models based on llama.cpp and ggml
Upload time: 2023-05-02 20:12:23
Author: Abdeladim Sadiki
Requires Python: >=3.8
License: MIT
# PyGPT4All
Official Python CPU inference for [GPT4All](https://github.com/nomic-ai/gpt4all) language models, based on [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml).

[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![PyPi version](https://badgen.net/pypi/v/pygpt4all)](https://pypi.org/project/pygpt4all/)

<!-- TOC -->
* [Installation](#installation)
* [Tutorial](#tutorial)
    * [Model instantiation](#model-instantiation)
    * [Simple generation](#simple-generation)
    * [Interactive Dialogue](#interactive-dialogue)
* [API reference](#api-reference)
* [License](#license)
<!-- TOC -->
# Installation

```bash
pip install pygpt4all
```

# Tutorial

You first need to download the model weights:

| Model     | Download link                                            |
|-----------|----------------------------------------------------------|
| GPT4All   | https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin   |
| GPT4All-J | https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin |
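As a sketch, the weights can be fetched with the Python standard library (assuming the direct links above remain valid). The `model_filename` and `download_model` helpers are illustrative names, not part of pygpt4all:

```python
import os
import urllib.request
from urllib.parse import urlparse

def model_filename(url: str) -> str:
    """Derive the local file name from a model download URL."""
    return os.path.basename(urlparse(url).path)

def download_model(url: str, dest_dir: str = ".") -> str:
    """Download the weights into dest_dir, skipping if already present."""
    dest = os.path.join(dest_dir, model_filename(url))
    if not os.path.exists(dest):  # avoid re-downloading multi-GB files
        urllib.request.urlretrieve(url, dest)
    return dest

print(model_filename("https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin"))
# ggml-gpt4all-l13b-snoozy.bin
```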

### Model instantiation
Once the weights are downloaded, you can instantiate the models as follows:
* GPT4All model

```python
from pygpt4all import GPT4All

model = GPT4All('path/to/ggml-gpt4all-l13b-snoozy.bin')
```

* GPT4All-J model

```python
from pygpt4all import GPT4All_J

model = GPT4All_J('path/to/ggml-gpt4all-j-v1.3-groovy.bin')
```
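Since each backend expects its own weight format, a small helper can pick the matching class from the file name. This is a sketch, not part of the pygpt4all API; it assumes the `gpt4all-j` naming convention seen in the files above:

```python
def model_class_name(weights_path: str) -> str:
    """Guess which loader class matches a GGML weights file by its name."""
    return "GPT4All_J" if "gpt4all-j" in weights_path.lower() else "GPT4All"

print(model_class_name("path/to/ggml-gpt4all-j-v1.3-groovy.bin"))  # GPT4All_J
print(model_class_name("path/to/ggml-gpt4all-l13b-snoozy.bin"))    # GPT4All
```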


### Simple generation
The `generate` function is used to generate new tokens from the `prompt` given as input:

```python
for token in model.generate("Tell me a joke?\n"):
    print(token, end='', flush=True)
```
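Because `generate` yields tokens one at a time, collecting the full response is just string accumulation. The stub generator below stands in for `model.generate`, since no model is instantiated here:

```python
from typing import Iterable

def collect(tokens: Iterable[str]) -> str:
    """Join a stream of generated tokens into one response string."""
    return "".join(tokens)

# Stub standing in for model.generate("Tell me a joke?\n")
fake_stream = iter(["Why", " did", " the", " chicken", "..."])
print(collect(fake_stream))  # Why did the chicken...
```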

### Interactive Dialogue
You can set up an interactive dialogue by simply keeping the `model` variable alive:

```python
while True:
    try:
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI: ", end='')
        for token in model.generate(prompt):
            print(token, end='', flush=True)
        print()
    except KeyboardInterrupt:
        break
```
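The loop above can be factored into a reusable function that takes any token-yielding callable, which also makes it testable with a stub in place of `model.generate` (the `chat_once` name is illustrative, not part of the library):

```python
def chat_once(generate, prompt: str) -> str:
    """Run one dialogue turn: stream tokens to stdout, return the full reply."""
    reply = []
    print("AI: ", end='')
    for token in generate(prompt):
        print(token, end='', flush=True)
        reply.append(token)
    print()
    return "".join(reply)

# Stub generator in place of model.generate
def echo_generate(prompt):
    yield "You said: "
    yield prompt

answer = chat_once(echo_generate, "hello")
print(answer)  # You said: hello
```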

# API reference
You can check the [API reference documentation](https://nomic-ai.github.io/pygpt4all/) for more details.


# License
This project is licensed under the MIT [License](./LICENSE).


            
