llama-index-packs-raptor

Name: llama-index-packs-raptor
Version: 0.3.0 (PyPI)
Summary: llama-index packs raptor integration
Uploaded: 2024-11-18 01:31:31
Author: Logan Markewich
Requires-Python: <4.0,>=3.9
License: MIT
Keywords: cluster, raptor, retrieval
# Raptor Retriever LlamaPack

This LlamaPack shows how to use RAPTOR with llama-index, leveraging the RAPTOR pack.

RAPTOR works by recursively clustering and summarizing clusters in layers for retrieval.

There are two retrieval modes:

- `tree_traversal` -- traverse the tree of clusters, performing top-k retrieval at each level of the tree.
- `collapsed` -- treat the entire tree as one flat pool of nodes and perform a single top-k retrieval.

See [the paper](https://arxiv.org/abs/2401.18059) for full algorithm details.
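To make the layered build and the two retrieval modes concrete, here is a minimal, self-contained toy sketch in plain Python. This is not the pack's implementation: `cluster` and `summarize` are hypothetical stand-ins for the pack's embedding-based clustering and LLM summarization, and `match` stands in for similarity-based top-k.

```python
from typing import Callable, List


def cluster(nodes: List[str], size: int = 2) -> List[List[str]]:
    # Stand-in for embedding-based clustering: group adjacent nodes.
    return [nodes[i : i + size] for i in range(0, len(nodes), size)]


def summarize(group: List[str]) -> str:
    # Stand-in for an LLM summary of one cluster.
    return "summary(" + " + ".join(group) + ")"


def build_tree(leaves: List[str]) -> List[List[str]]:
    """Recursively cluster and summarize until a single root node remains."""
    layers = [leaves]
    while len(layers[-1]) > 1:
        layers.append([summarize(group) for group in cluster(layers[-1])])
    return layers  # layers[0] = leaf chunks, layers[-1] = root summary


def retrieve_collapsed(layers, match: Callable[[str], bool], k: int = 2) -> List[str]:
    # collapsed mode: all layers form one big pool of nodes; single top-k.
    pool = [node for layer in layers for node in layer]
    return [node for node in pool if match(node)][:k]


def retrieve_tree_traversal(layers, match: Callable[[str], bool], k: int = 1) -> List[str]:
    # tree_traversal mode: top-k at each level, walking from the root down.
    hits = []
    for layer in reversed(layers):  # root layer first
        hits.extend([node for node in layer if match(node)][:k])
    return hits


layers = build_tree(["a", "b", "c", "d"])
```

In the real pack, clustering is done over embeddings, summaries come from the LLM, and top-k is computed by embedding similarity rather than a boolean match.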

## CLI Usage

You can download llamapacks directly using `llamaindex-cli`, which is installed with the `llama-index` Python package:

```bash
llamaindex-cli download-llamapack RaptorPack --download-dir ./raptor_pack
```

You can then inspect/modify the files at `./raptor_pack` and use them as a template for your own project.

## Code Usage

Alternatively, you can install the package:

`pip install llama-index-packs-raptor`

Then, you can import and initialize the pack! This will perform clustering and summarization over your data.

```python
from llama_index.packs.raptor import RaptorPack

pack = RaptorPack(documents, llm=llm, embed_model=embed_model)
```

The `run()` function is a light wrapper around `retriever.retrieve()`.

```python
nodes = pack.run(
    "query",
    mode="collapsed",  # or tree_traversal
)
```

You can also use modules individually.

```python
# get the retriever
retriever = pack.retriever
```

## Persistence

The `RaptorPack` comes with the `RaptorRetriever`, which offers ways of saving/reloading!

If you are using a remote vector database, just pass it in:

```python
# Pack usage
pack = RaptorPack(..., vector_store=vector_store)

# RaptorRetriever usage
retriever = RaptorRetriever(..., vector_store=vector_store)
```

Then, to re-connect, just pass in the vector store again, along with an empty list of documents:

```python
# Pack usage
pack = RaptorPack([], ..., vector_store=vector_store)

# RaptorRetriever usage
retriever = RaptorRetriever([], ..., vector_store=vector_store)
```

Check out [the notebook](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-raptor/examples/raptor.ipynb) for complete details.

## Configure Summary Module

Using the `SummaryModule`, you can configure how the Raptor Pack generates summaries and how many workers it applies to them:

- `llm` -- the LLM used to generate summaries.
- `summary_prompt` -- the prompt sent to your LLM to summarize your docs.
- `num_workers` -- the number of workers (async semaphores) allowed to process summaries simultaneously. Be aware that raising this may run into OpenAI or other LLM providers' API rate limits.
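To illustrate what a worker limit does, here is a small self-contained asyncio sketch (hypothetical helper names, not the pack's code) in which a semaphore caps how many summary calls are in flight at once:

```python
import asyncio


async def summarize_one(doc: str, sem: asyncio.Semaphore, in_flight: list, peak: list) -> str:
    # Acquire the semaphore before making the (mocked) LLM call.
    async with sem:
        in_flight[0] += 1
        peak[0] = max(peak[0], in_flight[0])
        await asyncio.sleep(0)  # stand-in for the LLM request
        in_flight[0] -= 1
        return f"summary of {doc}"


async def summarize_all(docs, num_workers: int = 16):
    # num_workers bounds how many summaries run concurrently,
    # analogous to SummaryModule's num_workers setting.
    sem = asyncio.Semaphore(num_workers)
    in_flight, peak = [0], [0]
    results = await asyncio.gather(
        *(summarize_one(d, sem, in_flight, peak) for d in docs)
    )
    return results, peak[0]


summaries, peak = asyncio.run(
    summarize_all([f"doc{i}" for i in range(8)], num_workers=2)
)
```

With `num_workers=2`, no more than two mocked calls are ever in flight at once, no matter how many documents are queued.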

```python
from llama_index.packs.raptor.base import SummaryModule
from llama_index.packs.raptor import RaptorPack

summary_prompt = "As a professional summarizer, create a concise and comprehensive summary of the provided text, be it an article, post, conversation, or passage with as much detail as possible."

# With a SummaryModule, you can configure the summary prompt
# and the number of workers generating summaries.
summary_module = SummaryModule(
    llm=llm, summary_prompt=summary_prompt, num_workers=16
)

pack = RaptorPack(
    documents, llm=llm, embed_model=embed_model, summary_module=summary_module
)
```

            
