llama-index-packs-query


Name: llama-index-packs-query
Version: 0.1.3
Summary: llama-index packs query integration
Upload time: 2024-02-22 01:30:38
Maintainer: jerryjliu
Author: Your Name
Requires Python: >=3.8.1,<4.0
License: MIT
Keywords: fusion, pipeline, query, rag
Requirements: No requirements were recorded.
# RAG Fusion Pipeline Llama Pack

This LlamaPack creates the RAG Fusion Query Pipeline, which runs multiple retrievers in parallel (with varying chunk sizes) and aggregates the results with reciprocal rank fusion.
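
To make the fusion step concrete, here is a minimal sketch of reciprocal rank fusion (not the pack's actual implementation; `k = 60` is the constant commonly used in the RRF literature):

```python
from collections import defaultdict


def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked highly by several retrievers
    rise to the top.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# two retrievers (e.g. different chunk sizes) return different orderings
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"]])
print(fused)  # ['a', 'b', 'c', 'd'] ('a'/'b' and 'c'/'d' tie; ties keep insertion order)
```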

You can run it out of the box, but we also encourage you to inspect the code to see how our `QueryPipeline` syntax works. More details on query pipelines can be found here: https://docs.llamaindex.ai/en/stable/module_guides/querying/pipeline/root.html.

Check out our [notebook guide](https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/query/rag_fusion_pipeline/rag_fusion_pipeline.ipynb) as well.

## CLI Usage

You can download LlamaPacks directly using `llamaindex-cli`, which is installed with the `llama-index` Python package:

```bash
llamaindex-cli download-llamapack RAGFusionPipelinePack --download-dir ./rag_fusion_pipeline_pack
```

You can then inspect the files at `./rag_fusion_pipeline_pack` and use them as a template for your own project!

## Code Usage

You can download the pack to a `./rag_fusion_pipeline_pack` directory:

```python
from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
RAGFusionPipelinePack = download_llama_pack(
    "RAGFusionPipelinePack", "./rag_fusion_pipeline_pack"
)
```

From here, you can use the pack directly, or inspect and modify its source in `./rag_fusion_pipeline_pack`.

Then, you can set up the pack like so:

```python
from llama_index.llms.openai import OpenAI

# create the pack ("docs" is the list of llama-index Documents to index)
pack = RAGFusionPipelinePack(docs, llm=OpenAI(model="gpt-3.5-turbo"))
```
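
If you have not loaded documents yet, one simple way to produce `docs` is `SimpleDirectoryReader` (assuming your files live in a local `./data` directory):

```python
from llama_index.core import SimpleDirectoryReader

# read every file under ./data into a list of Document objects
docs = SimpleDirectoryReader("./data").load_data()
```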

The `run()` function is a light wrapper around `query_pipeline.run(*args, **kwargs)`.

```python
response = pack.run(input="What did the author do during his time in YC?")
```
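
Assuming the pipeline returns a standard llama-index response object, you can print it and inspect the retrieved source nodes:

```python
print(str(response))

# each source node carries the retrieved text and its fused relevance score
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.text[:80])
```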

You can also use the pack's modules individually:

```python
# get query pipeline directly
pack.query_pipeline

# get retrievers for each chunk size
pack.retrievers

# get query engines for each chunk size
pack.query_engines
```
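
As a rough sketch of driving the retrievers directly (the exact container types depend on the pack's source, so check the downloaded files; here we assume `pack.retrievers` maps chunk size to retriever):

```python
query = "What did the author do during his time in YC?"

# compare what each chunk size surfaces for the same query
for chunk_size, retriever in pack.retrievers.items():
    nodes = retriever.retrieve(query)
    print(chunk_size, len(nodes))
```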

Raw data

{
    "_id": null,
    "home_page": "",
    "name": "llama-index-packs-query",
    "maintainer": "jerryjliu",
    "docs_url": null,
    "requires_python": ">=3.8.1,<4.0",
    "maintainer_email": "",
    "keywords": "fusion,pipeline,query,rag",
    "author": "Your Name",
    "author_email": "you@example.com",
    "download_url": "https://files.pythonhosted.org/packages/47/49/465e81ca0fda8dd9f6cd1a7056fd8637bd63a24de2ea8c7c95a0ef06aa18/llama_index_packs_query-0.1.3.tar.gz",
    "platform": null,
    "description": "# RAG Fusion Pipeline Llama Pack\n\nThis LlamaPack creates the RAG Fusion Query Pipeline, which runs multiple retrievers in parallel (with varying chunk sizes), and aggregates the results in the end with reciprocal rank fusion.\n\nYou can run it out of the box, but we also encourage you to inspect the code to take a look at how our `QueryPipeline` syntax works. More details on query pipelines can be found here: https://docs.llamaindex.ai/en/stable/module_guides/querying/pipeline/root.html.\n\nCheck out our [notebook guide](https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/query/rag_fusion_pipeline/rag_fusion_pipeline.ipynb) as well.\n\n## CLI Usage\n\nYou can download llamapacks directly using `llamaindex-cli`, which comes installed with the `llama-index` python package:\n\n```bash\nllamaindex-cli download-llamapack RAGFusionPipelinePack --download-dir ./rag_fusion_pipeline_pack\n```\n\nYou can then inspect the files at `./rag_fusion_pipeline_pack` and use them as a template for your own project!\n\n## Code Usage\n\nYou can download the pack to a `./rag_fusion_pipeline_pack` directory:\n\n```python\nfrom llama_index.core.llama_pack import download_llama_pack\n\n# download and install dependencies\nRAGFusionPipelinePack = download_llama_pack(\n    \"RAGFusionPipelinePack\", \"./rag_fusion_pipeline_pack\"\n)\n```\n\nFrom here, you can use the pack, or inspect and modify the pack in `./rag_fusion_pipeline_pack`.\n\nThen, you can set up the pack like so:\n\n```python\n# create the pack\npack = RAGFusionPipelinePack(docs, llm=OpenAI(model=\"gpt-3.5-turbo\"))\n```\n\nThe `run()` function is a light wrapper around `query_pipeline.run(*args, **kwargs)`.\n\n```python\nresponse = pack.run(input=\"What did the author do during his time in YC?\")\n```\n\nYou can also use modules individually.\n\n```python\n# get query pipeline directly\npack.query_pipeline\n\n# get retrievers for each chunk size\npack.retrievers\n\n# get query engines for each chunk size\npack.query_engines\n```\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "llama-index packs query integration",
    "version": "0.1.3",
    "project_urls": null,
    "split_keywords": [
        "fusion",
        "pipeline",
        "query",
        "rag"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "56ed9efe9b2871533c5f3c528edc4acc1c94b51f556be4df4fb03032b198db73",
                "md5": "92a404d87b7ac3d8e0de021069e7eac7",
                "sha256": "5c96a69ed293bff2118b2fcd548cfaea1b239fd666d3ddc4b259dbb9d847d420"
            },
            "downloads": -1,
            "filename": "llama_index_packs_query-0.1.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "92a404d87b7ac3d8e0de021069e7eac7",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8.1,<4.0",
            "size": 4050,
            "upload_time": "2024-02-22T01:30:37",
            "upload_time_iso_8601": "2024-02-22T01:30:37.116897Z",
            "url": "https://files.pythonhosted.org/packages/56/ed/9efe9b2871533c5f3c528edc4acc1c94b51f556be4df4fb03032b198db73/llama_index_packs_query-0.1.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4749465e81ca0fda8dd9f6cd1a7056fd8637bd63a24de2ea8c7c95a0ef06aa18",
                "md5": "1ae483fe07b77065ca7d7e1e26e360df",
                "sha256": "f1a88fcc41c3b9d9f8fcdde8a36be1c5e1398ae72f1c63f55660ec93decde575"
            },
            "downloads": -1,
            "filename": "llama_index_packs_query-0.1.3.tar.gz",
            "has_sig": false,
            "md5_digest": "1ae483fe07b77065ca7d7e1e26e360df",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8.1,<4.0",
            "size": 3687,
            "upload_time": "2024-02-22T01:30:38",
            "upload_time_iso_8601": "2024-02-22T01:30:38.273300Z",
            "url": "https://files.pythonhosted.org/packages/47/49/465e81ca0fda8dd9f6cd1a7056fd8637bd63a24de2ea8c7c95a0ef06aa18/llama_index_packs_query-0.1.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-02-22 01:30:38",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "llama-index-packs-query"
}