# [GPT4All-J](https://github.com/marella/gpt4all-j) [![PyPI](https://img.shields.io/pypi/v/gpt4all-j)](https://pypi.org/project/gpt4all-j/) [![tests](https://github.com/marella/gpt4all-j/actions/workflows/tests.yml/badge.svg)](https://github.com/marella/gpt4all-j/actions/workflows/tests.yml)
Python bindings for the [C++ port][gptj.cpp] of the GPT4All-J model.
> Please migrate to the [`ctransformers`](https://github.com/marella/ctransformers) library, which supports more models and has more features.
## Installation
```sh
pip install gpt4all-j
```
Download the model from [here](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin).
## Usage
```py
from gpt4allj import Model
model = Model('/path/to/ggml-gpt4all-j.bin')
print(model.generate('AI is going to'))
```
[Run in Google Colab](https://colab.research.google.com/drive/1bd38-i1Qlx6_MvJyCTJOy7t8eHSNnqAx)
If you get an `illegal instruction` error, try using `instructions='avx'` or `instructions='basic'`:
```py
model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')
```
If it is running slowly, try building the C++ library from source. [Learn more](https://github.com/marella/gpt4all-j#c-library)
### Parameters
```py
model.generate(prompt,
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True,
               callback=None)
```
#### `reset`
If `True`, the context is reset before generating. To keep the previous context, use `reset=False`:
```py
model.generate('Write code to sort numbers in Python.')
model.generate('Rewrite the code in JavaScript.', reset=False)
```
#### `callback`
If a callback function is passed, it is called once for each generated token. To stop generation, return `False` from the callback:
```py
def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
```
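A common use of the callback is to cap the number of generated tokens. The stopping logic is plain Python and can be sketched independently of the model; the `fake_stream` helper below is hypothetical and only simulates how `generate` drives the callback:

```python
def make_stop_after(n):
    """Return a callback that collects tokens and stops generation after n of them."""
    tokens = []

    def callback(token):
        tokens.append(token)
        if len(tokens) >= n:
            return False  # returning False stops generation
    return callback, tokens


def fake_stream(callback, stream):
    """Simulate a model feeding tokens to the callback one at a time."""
    for token in stream:
        if callback(token) is False:
            break


callback, tokens = make_stop_after(3)
fake_stream(callback, ['AI', ' is', ' going', ' to', ' change'])
# tokens is now ['AI', ' is', ' going']
```

With a real model, you would pass the same `callback` to `model.generate(prompt, callback=callback)`.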
## LangChain
[LangChain](https://python.langchain.com/) is a framework for developing applications powered by language models. A LangChain LLM object for the GPT4All-J model can be created using:
```py
from gpt4allj.langchain import GPT4AllJ
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin')
print(llm('AI is going to'))
```
If you get an `illegal instruction` error, try using `instructions='avx'` or `instructions='basic'`:
```py
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin', instructions='avx')
```
It can be used with other LangChain modules:
```py
from langchain import PromptTemplate, LLMChain
template = """Question: {question}

Answer:"""
prompt = PromptTemplate(template=template, input_variables=['question'])
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run('What is AI?'))
```
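For templates like the one above, `PromptTemplate` behaves much like plain string substitution; a minimal sketch of the same formatting without LangChain installed:

```python
# The same template as above, filled in with str.format instead of PromptTemplate.
template = """Question: {question}

Answer:"""

prompt = template.format(question='What is AI?')
# prompt == 'Question: What is AI?\n\nAnswer:'
```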
### Parameters
```py
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True)
```
## C++ Library
To build the C++ library from source, please see [gptj.cpp][gptj.cpp]. Once you have built the shared libraries, you can use them as:
```py
from gpt4allj import Model, load_library
lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')
model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)
```
## License
[MIT](https://github.com/marella/gpt4all-j/blob/main/LICENSE)
[gptj.cpp]: https://github.com/marella/gptj.cpp