llama-index-packs-query

Name: llama-index-packs-query
Version: 0.3.0
Summary: llama-index packs query integration
Home page: None
Upload time: 2024-11-18 01:31:23
Maintainer: jerryjliu
Author: Your Name
Requires Python: <4.0,>=3.9
License: MIT
Keywords: fusion, pipeline, query, rag
Requirements: No requirements were recorded.
# RAG Fusion Pipeline Llama Pack

This LlamaPack creates the RAG Fusion Query Pipeline, which runs multiple retrievers in parallel (each with a different chunk size) and then aggregates the results with reciprocal rank fusion.
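For context, reciprocal rank fusion scores each document by summing 1/(k + rank) over every ranked list it appears in, so documents that several retrievers rank highly float to the top. Here is a minimal sketch of the scoring rule (the constant k=60 comes from the original RRF paper; the function name is illustrative, not part of this pack):

```python
from collections import defaultdict


def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse ranked lists of doc IDs: each doc scores sum(1 / (k + rank))."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Two retrievers (e.g. different chunk sizes) disagree on ordering:
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "d"]]))
# -> ['b', 'c', 'a', 'd']: "b" and "c" appear in both lists, so they win
```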

You can run it out of the box, but we also encourage you to inspect the code to see how our `QueryPipeline` syntax works. More details on query pipelines can be found here: https://docs.llamaindex.ai/en/stable/module_guides/querying/pipeline/root.html.
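As a rough illustration of that syntax (a minimal two-stage chain, not this pack's actual pipeline), a `QueryPipeline` strings modules together and forwards keyword arguments into the first module's template variables:

```python
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

# a simple chain: prompt template -> LLM
prompt = PromptTemplate("Answer in one sentence: {question}")
pipeline = QueryPipeline(chain=[prompt, OpenAI(model="gpt-3.5-turbo")])

output = pipeline.run(question="What is reciprocal rank fusion?")
```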

Check out our [notebook guide](https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/query/rag_fusion_pipeline/rag_fusion_pipeline.ipynb) as well.

## CLI Usage

You can download LlamaPacks directly using `llamaindex-cli`, which is installed alongside the `llama-index` Python package:

```bash
llamaindex-cli download-llamapack RAGFusionPipelinePack --download-dir ./rag_fusion_pipeline_pack
```

You can then inspect the files at `./rag_fusion_pipeline_pack` and use them as a template for your own project!

## Code Usage

You can download the pack to a `./rag_fusion_pipeline_pack` directory:

```python
from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
RAGFusionPipelinePack = download_llama_pack(
    "RAGFusionPipelinePack", "./rag_fusion_pipeline_pack"
)
```

From here, you can use the pack, or inspect and modify the pack in `./rag_fusion_pipeline_pack`.
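The pack constructor expects a list of documents. One common way to produce them (assuming your source files live under a `./data` directory) is `SimpleDirectoryReader`:

```python
from llama_index.core import SimpleDirectoryReader

# load every file under ./data into llama-index Document objects
docs = SimpleDirectoryReader("./data").load_data()
```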

Then, you can set up the pack like so:

```python
from llama_index.llms.openai import OpenAI

# create the pack (docs is the list of Document objects loaded above)
pack = RAGFusionPipelinePack(docs, llm=OpenAI(model="gpt-3.5-turbo"))
```

The `run()` function is a light wrapper around `query_pipeline.run(*args, **kwargs)`.

```python
response = pack.run(input="What did the author do during his time in YC?")
```

You can also use the pack's modules individually.

```python
# get query pipeline directly
pack.query_pipeline

# get retrievers for each chunk size
pack.retrievers

# get query engines for each chunk size
pack.query_engines
```
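For example, you can run one of the retrievers on its own. The sketch below assumes `pack.retrievers` is a mapping from chunk size to retriever; inspect the downloaded source in `./rag_fusion_pipeline_pack` to confirm its exact shape:

```python
# grab a single retriever (mapping assumption: chunk size -> retriever)
retriever = list(pack.retrievers.values())[0]

nodes = retriever.retrieve("What did the author do during his time in YC?")
for node_with_score in nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:80])
```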

            
