| Field | Value |
| --- | --- |
| Name | pxgpt |
| Version | 0.0.1 |
| Summary | Your personal, powerful and private GPT. |
| Author | pwwang |
| Requires Python | >=3.9,<4.0 |
| License | Apache-2.0 |
| Upload time | 2023-07-21 02:16:58 |
| Requirements | No requirements were recorded. |
<p align="center">
<img height="48" src="./logo.png">
</p>
<p align="center">Your personal, powerful and private GPT</p>
<hr />
## Features
- Ingest your own documents and talk to them.
- Store your data locally on your device.
- Choose from a variety of models, including OpenAI.
- Support conversation history and memory.
- Switch between profiles with different settings.
- Support both web interface and command line interface.
- Support Llama v2!
## Installation
```shell
# With all supported models
$ pip install -U pxgpt[all]
# With support for GPT4all only
$ pip install -U pxgpt[gpt4all]
# With support for llama-cpp only
$ pip install -U pxgpt[llama-cpp]
# With support for openai only
$ pip install -U pxgpt[openai]
```
Then copy the configuration from `.pxgpt.config-example.yml` to `.pxgpt.config.yml` and modify it to your needs.
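A minimal sketch of that step, assuming `.pxgpt.config-example.yml` sits in your working directory (e.g., from a repository checkout):

```shell
# Copy the example config and edit it to your needs
$ cp .pxgpt.config-example.yml .pxgpt.config.yml
$ $EDITOR .pxgpt.config.yml
```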
## Usage
### Chat from CLI
```shell
$ pxgpt chat
Welcome to chat via the pxGPT CLI v0.0.0rc0!
Hit 'Ctrl+c' or 'Ctrl+d' to exit.
[2023-07-20 20:03:22,716] INFO Switched profile to llamacpp-chat
[2023-07-20 20:03:22,719] INFO Creating LLM (LlamaCpp)
llama.cpp: loading model from models/llamacpp/llama-2-7b.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: mem required = 5185.72 MB (+ 1026.00 MB per state)
llama_new_context_with_model: kv self size = 256.00 MB
[2023-07-20 20:03:24,552] INFO Entering chat mode
Loaded conversation: 2023-07-20_20-03-17
Use the up/down arrow keys to navigate history.
Use /help to see the list of commands.
>>> Hello Llama 2, if you could choose a superpower, what would it be and why?
I will answer this question in two parts. The first part is the super power that I will choose. The second part is why I chose this super power.
In the first part, the super power that I will choose is the ability to fly. This is because flying has many benefits. It allows me to travel quickly and easily, without having to deal with traffic or waiting at airports. Additionally, it saves time since I don't have to wait for a bus or train ride. Finally, flying gives me an opportunity to see new places and explore new cultures.
In the second part of my answer, I will explain why I chose this super power. The reason is because flying allows me to travel quickly and easily, without having to deal with traffic or waiting at airports. Additionally, it saves time since I don't have to wait for a bus or train ride. Finally, flying gives me an opportunity to see new places and explore new cultures.
>>> /help
Commands:
- /help: List the commands
- /new: Start a new conversation
- /switch: Switch to a conversation
- /list: List all conversations
- /path: Show the path of the current conversation file
- /delete: Delete a conversation
- /rename: Rename a conversation
- /ingest: Ingest documents from the source directory
- /docs: List ingested and uningested documents in the source directory
- /exit: Exit the CLI
>>>
```
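The transcript above ran under a `llamacpp-chat` profile. Assuming `chat` accepts the same `--profile` flag that `ingest` does (an assumption based on the `ingest` example below, not documented behavior), you could select the profile explicitly:

```shell
# Assumed flag: mirrors `pxgpt ingest --profile`; verify with `pxgpt chat --help`
$ pxgpt chat --profile llamacpp-chat
```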
### Chat from the web interface
```shell
$ pxgpt serve
# Open http://localhost:7758 in your browser
```
![Web-interface](web-interface.png)
### Configuration
The configuration files are loaded from the following paths:
- `~/.config/pxgpt/config.yml`
- `~/.pxgpt.config.yml`
- `./.pxgpt.config.yml`
#### Profiles
Note that you need to define profiles in the configuration file. For example:
```yaml
openai: # The profile
model:
type: ChatOpenAI
```
The configuration items are inherited from the `default` profile. For example:
```yaml
default:
credentials:
openai_api_key: sk-xxxxxxxxxxx
openai:
model:
type: ChatOpenAI
```
Then, when you use the `openai` profile, the configuration expands to:
```yaml
openai:
credentials:
openai_api_key: sk-xxxxxxxxxxx
model:
type: ChatOpenAI
```
Higher-level configurations override lower-level ones, and profiles inherit across files: if you define the `default` profile in `~/.config/pxgpt/config.yml` and the `openai` profile in `./.pxgpt.config.yml`, the `openai` profile still inherits from the `default` profile.
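A sketch of that two-file layout, with a placeholder key value:

```yaml
# ~/.config/pxgpt/config.yml (loaded first)
default:
  credentials:
    openai_api_key: sk-xxxxxxxxxxx
```

```yaml
# ./.pxgpt.config.yml (loaded later; its `openai` profile still inherits `default`)
openai:
  model:
    type: ChatOpenAI
```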
#### Configuration items

The following items can be set per profile; a combined example follows the list.
- `log_level`: The log level for the logger in your terminal
- `history_directory`: The directory to store the conversation history
- `history_into_memory`: Whether to load the conversation history into memory
- You can turn this off if you are using small models
- `credentials`: The credentials for the models.
For example, for OpenAI, you need to provide the `openai_api_key`.
- `model`: The type of the model and the arguments for it.
  - `type`: The type of the model; supported types are `GPT4All`, `LlamaCpp`, `ChatOpenAI` and `OpenAI`.
  - `<other>`: Additional arguments for the model, passed through to the corresponding `langchain` LLM.
    - For `GPT4All`, you can pass the arguments listed [here][1].
    - For `LlamaCpp`, you can pass the arguments listed [here][2].
    - For `ChatOpenAI`, you can pass the arguments listed [here][3].
    - For `OpenAI`, you can pass the arguments listed [here][4].
- `qmodel`: The type of and arguments for the model used to condense questions.
  - `type`: Same as `model.type`, with an additional `Echo` model, which is useful for models that don't condense questions well.
  - `<other>`: Same as `model.<other>`.
- `ingest`: The arguments for ingestion.
  - `source_directory`: The directory to ingest documents from.
    - If not provided, pxgpt enters chat mode directly.
  - `persist_directory`: The directory in which to save the vectorstore database.
    - If not provided, defaults to `<source_directory>/.pxgpt-<model>-db`.
  - `target_source_chunks`: The number of chunks to retrieve for each query.
  - `n_workers`: The number of workers to use for ingestion.
  - `chunk_size` and `chunk_overlap`: The chunk size and overlap for ingestion.
- `embeddings`: The arguments for the embeddings.
  - For `GPT4All`, you can pass the arguments listed [here][5].
  - For `LlamaCpp`, you can pass the arguments listed [here][6].
  - For `OpenAI` or `ChatOpenAI`, you can pass the arguments listed [here][7].
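The combined example referenced above: a single hypothetical profile pulling these items together. Every value is a placeholder, and the extra model/embeddings arguments (`temperature`, `chunk_size`) are merely examples of langchain pass-through options, not required settings:

```yaml
default:
  log_level: INFO
  history_directory: ~/.pxgpt/history
  history_into_memory: true

openai-docs:
  credentials:
    openai_api_key: sk-xxxxxxxxxxx
  model:
    type: ChatOpenAI
    temperature: 0          # passed through to langchain's ChatOpenAI
  qmodel:
    type: Echo              # skip question condensing for this profile
  ingest:
    source_directory: ./docs
    # persist_directory defaults to <source_directory>/.pxgpt-<model>-db
    target_source_chunks: 4
    n_workers: 4
    chunk_size: 500
    chunk_overlap: 50
  embeddings:
    chunk_size: 1000        # passed through to langchain's OpenAIEmbeddings
```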
### Ingest documents
```shell
$ pxgpt ingest # default profile
$ pxgpt ingest --profile openai-docs
# Ingests documents from `ingest.source_directory` of the `openai-docs` profile
```
## Credits
`pxgpt` is inspired by [privateGPT][8], adding OpenAI API support, conversation history and memory, and a web interface.
## TODO
- [ ] Support ingestion management (upload/download/delete/ingest documents) from the web interface
- [ ] Support profile management (add/remove/modify) from the web interface
- [ ] Use markdown to format the response on the web interface
- [ ] Read credentials from environment variables
- [ ] Build a docker image
- [ ] Support more models
## Q & A
[QA.md](QA.md)
[1]: https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html#langchain.llms.gpt4all.GPT4All
[2]: https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html#langchain.llms.llamacpp.LlamaCpp
[3]: https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html#langchain.chat_models.openai.ChatOpenAI
[4]: https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html#langchain.llms.openai.OpenAI
[5]: https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.gpt4all.GPT4AllEmbeddings.html#langchain.embeddings.gpt4all.GPT4AllEmbeddings
[6]: https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.llamacpp.LlamaCppEmbeddings.html#langchain.embeddings.llamacpp.LlamaCppEmbeddings
[7]: https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html#langchain.embeddings.openai.OpenAIEmbeddings
[8]: https://github.com/imartinez/privateGPT
## Raw data

```json
{
    "_id": null,
    "home_page": "",
    "name": "pxgpt",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.9,<4.0",
    "maintainer_email": "",
    "keywords": "",
    "author": "pwwang",
    "author_email": "pwwang@pwwang.com",
    "download_url": "https://files.pythonhosted.org/packages/63/af/d04a5af864a378855c7fadd02c926a8ed43444ca8e8271853cef418e5015/pxgpt-0.0.1.tar.gz",
    "platform": null,
"description": "<p align=\"center\">\n <img height=\"48\" src=\"./logo.png\">\n</p>\n<p align=\"center\">Your personal, powerful and private GPT</p>\n<hr />\n\n## Features\n\n- Ingest of your own documents and talk to them.\n- Store your data locally on your device.\n- Choose from a variety of models, including OpenAI.\n- Support conversation history and memory.\n- Switch between profiles with different settings.\n- Support both web interface and command line interface.\n- Support Llama v2!\n\n## Installation\n\n```shell\n# With all supported models\n$ pip install -U pxgpt[all]\n\n# With support for GPT4all only\n$ pip install -U pxgpt[gpt4all]\n\n# With support for llama-cpp only\n$ pip install -U pxgpt[llama-cpp]\n\n# With support for openai only\n$ pip install -U pxgpt[openai]\n```\n\nThen copy the configuration from `.pxgpt.config-example.yml` to `.pxgpt.config.yml` and modify it to your needs.\n\n## Usage\n\n### Chat from CLI\n\n```shell\n$ 23:45:06 \u276f pxgpt chat\n\nWelcome to chat via the pxGPT CLI v0.0.0rc0!\nHit 'Ctrl+c' or 'Ctrl+d' to exit.\n[2023-07-20 20:03:22,716] INFO Switched profile to llamacpp-chat\n[2023-07-20 20:03:22,719] INFO Creating LLM (LlamaCpp)\nllama.cpp: loading model from models/llamacpp/llama-2-7b.ggmlv3.q4_0.bin\nllama_model_load_internal: format = ggjt v3 (latest)\nllama_model_load_internal: n_vocab = 32000\nllama_model_load_internal: n_ctx = 512\nllama_model_load_internal: n_embd = 4096\nllama_model_load_internal: n_mult = 256\nllama_model_load_internal: n_head = 32\nllama_model_load_internal: n_layer = 32\nllama_model_load_internal: n_rot = 128\nllama_model_load_internal: freq_base = 10000.0\nllama_model_load_internal: freq_scale = 1\nllama_model_load_internal: ftype = 2 (mostly Q4_0)\nllama_model_load_internal: n_ff = 11008\nllama_model_load_internal: model size = 7B\nllama_model_load_internal: ggml ctx size = 0.08 MB\nllama_model_load_internal: mem required = 5185.72 MB (+ 1026.00 MB per state)\nllama_new_context_with_model: kv self size = 256.00 MB\n[2023-07-20 20:03:24,552] INFO Entering chat mode\nLoaded conversation: 2023-07-20_20-03-17\nUse the up/down arrow keys to navigate history.\nUse /help to see the list of commands.\n\n>>> Hello Llama 2, if you could choose a superpower, what would it be and why?\n I will answer this question in two parts. The first part is the super power that I will choose. The second part is why I chose this super power.\nIn the first part, the super power that I will choose is the ability to fly. This is because flying has many benefits. It allows me to travel quickly and easily, without having to deal with traffic or waiting at airports. Additionally, it saves time since I don't have to wait for a bus or train ride. Finally, flying gives me an opportunity to see new places and explore new cultures.\nIn the second part of my answer, I will explain why I chose this super power. The reason is because flying allows me to travel quickly and easily, without having to deal with traffic or waiting at airports. Additionally, it saves time since I don't have to wait for a bus or train ride. 
Finally, flying gives me an opportunity to see new places and explore new cultures.\n\n>>> /help\nCommands:\n - /help: List the commands\n - /new: Start a new conversation\n - /switch: Switch to a conversation\n - /list: List all conversations\n - /path: Show the path of the current conversation file\n - /delete: Delete a conversation\n - /rename: Rename a conversation\n - /ingest: Ingest documents from the source directory\n - /docs: List ingested and uningested documents in the source directory\n - /exit: Exit the CLI\n\n>>>\n```\n\n### Chat from the web interface\n\n```shell\n$ pxgpt serve\n# Open http://localhost:7758 in your browser\n```\n\n![Web-interface](web-interface.png)\n\n### Configuration\npx\nThe configuration files are loaded from the following paths:\n\n- `~/.config/pxgpt/config.yml`\n- `~/.pxgpt.config.yml`\n- `./.pxgpt.config.yml`\n\n#### Profiles\n\nNote that you need to define profiles in the configuration file. For example:\n\n```yaml\nopenai: # The profile\n model:\n type: ChatOpenAI\n```\n\nThe configuration items are inherited from the `default` profile. For example:\n\n```yaml\ndefault:\n credentials:\n openai_api_key: sk-xxxxxxxxxxx\n\nopenai:\n model:\n type: ChatOpenAI\n```\n\nThen when you use `openai` profile, the configurations are expanded as:\n\n```yaml\nopenai:\n credentials:\n openai_api_key: sk-xxxxxxxxxxx\n model:\n type: ChatOpenAI\n```\n\nHigher-level configurations override lower-level configurations. For example:\n\nIf you define the `default` profile in `~/.config/pxgpt/config.yml` and the `openai` profile in `./.pxgpt.config.yml`, then the `openai` profile will inherit the `default` profile, as well.\n\n#### Configuration items\n\n- `log_level`: The log level for the logger in your teminal\n- `history_directory`: The directory to store the conversation history\n- `history_into_memory`: Whether to load the conversation history into memory\n - You can turn this off if you are using small models\n- `credentials`: The credentials for the models.\n For example, for OpenAI, you need to provide the `openai_api_key`.\n- `model`: Type of the model and arguments for it.\n - `type`: The type of the model, supported models are: `GPT4All`, `LlamaCpp`, `ChatOpenAI` and `OpenAI`\n - `<other>`: The arguments for the model. 
Passed to `langchain` llms.\n - For `GPT4All`, you can pass the arguments listed in [here][1].\n - For `LlamaCpp`, you can pass the arguments listed in [here][2].\n - For `ChatOpenAI`, you can pass the arguments listed in [here][3].\n - For `OpenAI`, you can pass the arguments listed in [here][4].\n- `qmodel`: The arguments for model used to condense questions\n - `type`: Same as `model.type`, with and `Echo` model added, which is useful for models that don't do question condensing very well.\n - `<other>`: Same as `model.<other>`.\n- `ingest`: The arguments for the ingestion.\n - `source_directory`: The directory to ingest documents from.\n - If not provided, we will enter the chat mode.\n - `persist_directory`: The directory to save the vectorstore database.\n - If not provided, will use `<source_directory>/.pxgpt-<model>-db`.\n - `target_source_chunks`: The number of chunks to return against the query.\n - `n_workers`: The number of workers to use for ingestion.\n - `chunk_size` and `chunk_overlap`: The chunk size and overlap for the ingestion.\n- `embeddings`: The arguments for the embeddings.\n - For `GPT4All`, you can pass the arguments listed in [here][5].\n - For `LlamaCpp`, you can pass the arguments listed in [here][6].\n - For `OpenAI` or `ChatOpenAI`, you can pass the arguments listed in [here][7].\n\n### Ingest documents\n\n```shell\n$ pxgpt ingest # default profile\n$ pxgpt ingest --profile openai-docs\n# Will ingest documents under `ingest.source_directory` under `openai-docs` profile\n```\n\n## Credits\n\n`pxgpt` is Inspired by [privateGPT][8], with the addition of openai API support, history and memory support, and a web interface.\n\n## TODO\n\n- [ ] Support ingestion management (upload/download/delete/ingest documents) from the web interface\n- [ ] Support profile management (add/remove/modify) from the web interface\n- [ ] Use markdown to format the response on the web interface\n- [ ] Read credentials from environment variables\n- [ ] Build a docker image\n- [ ] Support more models\n\n## Q & A\n\n[QA.md](QA.md)\n\n[1]: https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html#langchain.llms.gpt4all.GPT4All\n[2]: https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html#langchain.llms.llamacpp.LlamaCpp\n[3]: https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html#langchain.chat_models.openai.ChatOpenAI\n[4]: https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html#langchain.llms.openai.OpenAI\n[5]: https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.gpt4all.GPT4AllEmbeddings.html#langchain.embeddings.gpt4all.GPT4AllEmbeddings\n[6]: https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.llamacpp.LlamaCppEmbeddings.html#langchain.embeddings.llamacpp.LlamaCppEmbeddings\n[7]: https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html#langchain.embeddings.openai.OpenAIEmbeddings\n[8]: https://github.com/imartinez/privateGPT\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "Your personal, powerful and private GPT.",
"version": "0.0.1",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "2ec607a948b082f3df1038918698b790fbffd83ad608ccfeb23245cc0c1b8bb3",
"md5": "bab0c0d7d5cdee60175e83f769ea5270",
"sha256": "6e3a40bd8c755eca064b7fb14cbeae19e66c1cf944f0dd146004d4dacec8a094"
},
"downloads": -1,
"filename": "pxgpt-0.0.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "bab0c0d7d5cdee60175e83f769ea5270",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9,<4.0",
"size": 180350,
"upload_time": "2023-07-21T02:16:56",
"upload_time_iso_8601": "2023-07-21T02:16:56.221288Z",
"url": "https://files.pythonhosted.org/packages/2e/c6/07a948b082f3df1038918698b790fbffd83ad608ccfeb23245cc0c1b8bb3/pxgpt-0.0.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "63afd04a5af864a378855c7fadd02c926a8ed43444ca8e8271853cef418e5015",
"md5": "f1cde192dfa05c6b35878d9c60378613",
"sha256": "a2716446a7d215a488ecaf24f2bf66808dcaf9d8fe94f0e9830ed411b12227c0"
},
"downloads": -1,
"filename": "pxgpt-0.0.1.tar.gz",
"has_sig": false,
"md5_digest": "f1cde192dfa05c6b35878d9c60378613",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9,<4.0",
"size": 177766,
"upload_time": "2023-07-21T02:16:58",
"upload_time_iso_8601": "2023-07-21T02:16:58.818233Z",
"url": "https://files.pythonhosted.org/packages/63/af/d04a5af864a378855c7fadd02c926a8ed43444ca8e8271853cef418e5015/pxgpt-0.0.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-07-21 02:16:58",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "pxgpt"
}