| Field | Value |
| --- | --- |
| Name | local-llm-cli |
| Version | 0.1.3 |
| Summary | Converse with GPT4 LLM locally |
| author | Harsh Avinash |
| upload_time | 2023-07-30 05:36:43 |
| home_page | |
| maintainer | |
| docs_url | None |
| requires_python | |
| license | |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# local_llm_cli
`local_llm_cli` is a Python package that lets you converse with a GPT4All large language model (LLM) locally. This can be useful for testing, development, and debugging.
Currently, the library supports interacting with the GPT4All model; support for other models and additional functionality is planned for future updates.
## Installation
To install `local_llm_cli`, you can use `pip`:
```bash
pip install local_llm_cli
```
You'll also need to ensure that you have the necessary model files available locally.
## Usage
The `converse` subpackage provides a function that loads a GPT4All LLM and lets you converse with it.
Here's a simple usage example:
```python
from local_llm_cli.converse.chat import load_and_interact
# define the model path
model_path = 'path/to/your/model'
# call the function to start conversing with the LLM
load_and_interact(model_path)
```
In this example, the `model_path` should be the path to the GPT4All model files on your local system.
The `load_and_interact` function also accepts optional arguments to set the model's context size (`model_n_ctx`) and batch size (`model_n_batch`). If these arguments are not provided, they default to 1024 and 8, respectively.
Here's an example with custom context and batch size:
```python
load_and_interact(model_path, model_n_ctx=2048, model_n_batch=16)
```
You can stop the conversation at any time by typing `exit`.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.
This package was crafted with ❤️ by Harsh Avinash in approximately 22 minutes. Enjoy conversing with your local LLM!
## Raw data

```json
{
"_id": null,
"home_page": "",
"name": "local-llm-cli",
"maintainer": "",
"docs_url": null,
"requires_python": "",
"maintainer_email": "",
"keywords": "",
"author": "Harsh Avinash",
"author_email": "harsh.avinash.official@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/1b/27/99cd31ee3b5b6e9cd2bff0003fb44050e4a1f4cc754daf47f7393ded44b5/local_llm_cli-0.1.3.tar.gz",
"platform": null,
"description": "# local_llm_cli\r\n\r\n`local_llm_cli` is a Python package that allows you to converse with a GPT4All Language Model (LLM) locally. This can be useful for testing, developing, and debugging.\r\n\r\nCurrently, this library supports interacting with the GPT4All model. However, support for other models and additional functionalities are planned for future updates.\r\n\r\n## Installation\r\n\r\nTo install `local_llm_cli`, you can use `pip`:\r\n\r\n```bash\r\npip install local_llm_cli\r\n```\r\n\r\nYou'll also need to ensure that you have the necessary model files available locally.\r\n\r\n## Usage\r\n\r\nThe `converse` sublibrary provides a function to load a GPT4All LLM and converse with it.\r\n\r\nHere's a simple usage example:\r\n\r\n```python\r\nfrom local_llm_cli.converse.chat import load_and_interact\r\n\r\n# define the model path\r\nmodel_path = 'path/to/your/model'\r\n\r\n# call the function to start conversing with the LLM\r\nload_and_interact(model_path)\r\n```\r\n\r\nIn this example, the `model_path` should be the path to the GPT4All model files on your local system.\r\n\r\nThe `load_and_interact` function also accepts optional arguments to specify the model context (`model_n_ctx`) and batch size (`model_n_batch`). If these arguments are not provided, they default to 1024 and 8, respectively.\r\n\r\nHere's an example with custom context and batch size:\r\n\r\n```python\r\nload_and_interact(model_path, model_n_ctx=2048, model_n_batch=16)\r\n```\r\n\r\nYou can stop the conversation at any time by typing `exit`.\r\n\r\n## License\r\n\r\nThis project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.\r\n\r\n\r\n\r\nThis package was crafted with \u2764\ufe0f by Harsh Avinash in approximately 22 minutes. Enjoy conversing with your local LLM!\r\n",
"bugtrack_url": null,
"license": "",
"summary": "Converse with GPT4 LLM locally",
"version": "0.1.3",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "28d3a667476f5272cd6314d77a7ba8c54ca66103f4af9b56d74ca97addfa27b1",
"md5": "1245f03c6ec91fcb5605c4950923e7ee",
"sha256": "b9568b4d79057cf34681dc5237b06d7f8fe15cfcce9ecef40b6afb6aaadbea16"
},
"downloads": -1,
"filename": "local_llm_cli-0.1.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "1245f03c6ec91fcb5605c4950923e7ee",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 2968,
"upload_time": "2023-07-30T05:36:41",
"upload_time_iso_8601": "2023-07-30T05:36:41.617928Z",
"url": "https://files.pythonhosted.org/packages/28/d3/a667476f5272cd6314d77a7ba8c54ca66103f4af9b56d74ca97addfa27b1/local_llm_cli-0.1.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "1b2799cd31ee3b5b6e9cd2bff0003fb44050e4a1f4cc754daf47f7393ded44b5",
"md5": "5b64ee6b58115fcd277a03abf6d57227",
"sha256": "faf75c1c0a5a78459f36abdb0701f1680771c12c1867f2beb65b88a70931c2f4"
},
"downloads": -1,
"filename": "local_llm_cli-0.1.3.tar.gz",
"has_sig": false,
"md5_digest": "5b64ee6b58115fcd277a03abf6d57227",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 2529,
"upload_time": "2023-07-30T05:36:43",
"upload_time_iso_8601": "2023-07-30T05:36:43.248811Z",
"url": "https://files.pythonhosted.org/packages/1b/27/99cd31ee3b5b6e9cd2bff0003fb44050e4a1f4cc754daf47f7393ded44b5/local_llm_cli-0.1.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-07-30 05:36:43",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "local-llm-cli"
}
```