| Field | Value |
| --- | --- |
| Name | llama-index-packs-longrag |
| Version | 0.3.0 |
| Summary | llama-index packs longrag integration |
| Upload time | 2024-08-22 17:36:42 |
| Author | Your Name |
| Requires Python | <4.0,>=3.8.1 |
| License | MIT |
# LlamaIndex Packs Integration: LongRAG
This LlamaPack implements LongRAG based on [this paper](https://arxiv.org/pdf/2406.15319).
LongRAG retrieves long units of text at a time: each retrieval unit is roughly 6k tokens, consisting of an entire document or a group of documents. This contrasts with the short retrieval units (~100-word passages) of traditional RAG. LongRAG is advantageous because strong results can be achieved using only the top 4-8 retrieval units, and because long retrieval units preserve the semantic integrity of the source documents, long-context LLMs can better understand their context.
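The grouping idea can be sketched in a few lines. Note this is a hypothetical illustration of packing short documents into ~6k-token retrieval units, not the pack's actual implementation; whitespace splitting stands in for a real tokenizer, and `group_into_units` is an assumed helper name:

```python
# Greedily pack documents into long retrieval units of at most ~6k "tokens"
# (whitespace-separated words used here as a simple stand-in for a tokenizer).
MAX_UNIT_TOKENS = 6000


def group_into_units(docs, max_tokens=MAX_UNIT_TOKENS):
    """Group documents into retrieval units no longer than max_tokens."""
    units, current, current_len = [], [], 0
    for doc in docs:
        n = len(doc.split())
        # Start a new unit when adding this document would exceed the budget.
        if current and current_len + n > max_tokens:
            units.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(doc)
        current_len += n
    if current:
        units.append("\n\n".join(current))
    return units


docs = ["alpha " * 4000, "beta " * 4000, "gamma " * 1000]
units = group_into_units([d.strip() for d in docs])
print(len(units))  # 2 units: [alpha(4000)] and [beta(4000) + gamma(1000)]
```

Because each unit keeps whole documents together, a retrieved unit carries its full surrounding context to the long-context LLM.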
## Installation
```bash
# installation
pip install llama-index-packs-longrag
# source code
llamaindex-cli download-llamapack LongRAGPack --download-dir ./longrag_pack
```
## Code Usage
```py
from llama_index.packs.longrag import LongRAGPack
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings
Settings.llm = OpenAI("gpt-4o")
pack = LongRAGPack(data_dir="./data")
query_str = "How can Pittsburgh become a startup hub, and what are the two types of moderates?"
res = pack.run(query_str)
print(str(res))
```