<p align="center"><a href="https://pypi.org/project/contextqa"><img src="https://contextqa-assets.s3.amazonaws.com/logo.png" width="200px" alt="ContextQA logo" /></a></p>
<p align="center"><a href="https://pypi.org/project/contextqa"><img src="https://contextqa-assets.s3.amazonaws.com/title.png" width="200px" alt="ContextQA title" /></a></p>
<p align="center" style="font-size: 20px"><i>Chat with your data by leveraging the power of LLMs and vector databases</i></p>
<p align="center">
<a href="https://pypi.org/project/contextqa" target="_blank">
<img alt="contextqa latest version" src="https://img.shields.io/pypi/v/contextqa?label=Latest%20release&color=%230cc109">
</a>
<a href="https://pypi.org/project/contextqa" target="_blank">
<img alt="Supported Python versions" src="https://img.shields.io/pypi/pyversions/contextqa?logo=python&logoColor=white&color=0cc109">
</a>
<img alt="node version" src="https://img.shields.io/badge/nodejs-v18.17.1-green?logo=nodedotjs">
<img alt="vue version" src="https://img.shields.io/badge/Vue.js-%5Ev3.2.13-green?logo=vuedotjs">
</p>
---
ContextQA is a modern utility that provides a ready-to-use LLM-powered application. It is built on top of giants such as [FastAPI](https://fastapi.tiangolo.com/), [LangChain](https://www.langchain.com/), and [Hugging Face](https://huggingface.co/).
Key features include:
- Regular chat supporting knowledge expansion via internet access
- Conversational QA with relevant sources
- Streaming responses
- Ingestion of data sources used in QA sessions
- Data sources management
- LLM settings: configure parameters such as provider, model, temperature, etc. Currently, the supported providers are **[OpenAI](https://openai.com/)** and **Google**
- Vector DB settings: adjust parameters such as engine, chunk size, chunk overlap, etc. Currently, the supported engines are **[ChromaDB](https://www.trychroma.com/)** and **[Pinecone](https://www.pinecone.io/)**
- Other settings: choose embedded or external LLM memory (**[Redis](https://redis.io/)**), the media directory, database credentials, etc.
## Installation
```bash
pip install contextqa
```
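If you prefer to keep ContextQA's dependencies isolated from your system Python, you can install it inside a virtual environment. A minimal sketch using the standard `venv` module (the environment name `.venv` is just a common convention):

```shell
# Create an isolated environment (ContextQA requires Python >= 3.11)
python3 -m venv .venv

# Activate it (on Windows use: .venv\Scripts\activate)
source .venv/bin/activate

# Install ContextQA into the environment
pip install contextqa
```

After activation, the `contextqa` CLI described below is available on your PATH for as long as the environment is active.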
## Usage
Upon installation, ContextQA provides a CLI tool:
```bash
contextqa init
```
Check the available parameters by running:
```bash
contextqa init --help
```
## Example
### Run it
```bash
$ contextqa init
2024-08-28 01:00:39,586 - INFO - Using SQLite
2024-08-28 01:00:47,850 - INFO - Use pytorch device_name: cpu
2024-08-28 01:00:47,850 - INFO - Load pretrained SentenceTransformer: sentence-transformers/all-mpnet-base-v2
INFO: Started server process [20658]
INFO: Waiting for application startup.
2024-08-28 01:00:47,850 - INFO - Running initial migrations...
2024-08-28 01:00:47,853 - INFO - Context impl SQLiteImpl.
2024-08-28 01:00:47,855 - INFO - Will assume non-transactional DDL.
2024-08-28 01:00:47,860 - INFO - Running upgrade -> 0bb7d192c063, Initial migration
2024-08-28 01:00:47,862 - INFO - Running upgrade 0bb7d192c063 -> b7d862d599fe, Support for store types and related indexes
2024-08-28 01:00:47,864 - INFO - Running upgrade b7d862d599fe -> 3058bf204a05, unique index name
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)
```
### Check it
Open your browser at http://localhost:8080. You will see the initialization stepper, which guides you through the initial configuration:
<img alt="init config" src="https://contextqa-assets.s3.amazonaws.com/init.png" width="1000px">
Or, if the initial configuration has already been set, the main ContextQA view:
<img alt="main view" src="https://contextqa-assets.s3.amazonaws.com/main.png" width="1000px">
## Guideline
For detailed usage instructions, please refer to the [usage guidelines](https://zaldivards.github.io/introducing-contextqa/).
## Contributing
We welcome contributions to **ContextQA**! To get started, please refer to our [CONTRIBUTING.md](CONTRIBUTING.md) file for guidelines on how to contribute.
Your feedback and contributions help us improve and enhance the project. Thank you for your interest in contributing!