llama-index-packs-raptor

Name: llama-index-packs-raptor
Version: 0.1.3
Summary: llama-index packs raptor integration
Upload time: 2024-03-20 16:40:09
Home page: None
Maintainer: None
Docs URL: None
Author: Logan Markewich
Requires Python: <4.0,>=3.9
License: MIT
Keywords: cluster, raptor, retrieval

# Raptor Retriever LlamaPack

This LlamaPack provides an implementation of RAPTOR for retrieval with llama-index.

RAPTOR works by recursively clustering and summarizing clusters in layers for retrieval.

There are two retrieval modes:

- tree_traversal -- traverse the tree of clusters, performing top-k retrieval at each level of the tree.
- collapsed -- treat the entire tree as one flat pool of nodes and perform simple top-k retrieval over it.

See [the paper](https://arxiv.org/abs/2401.18059) for full algorithm details.

## CLI Usage

You can download llamapacks directly using `llamaindex-cli`, which is installed with the `llama-index` Python package:

```bash
llamaindex-cli download-llamapack RaptorPack --download-dir ./raptor_pack
```

You can then inspect/modify the files at `./raptor_pack` and use them as a template for your own project.
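
If you prefer to stay in Python, the same download can also be done programmatically with `download_llama_pack` from `llama-index-core`. This is a minimal sketch of the equivalent programmatic route; the CLI above is the primary one:

```python
from llama_index.core.llama_pack import download_llama_pack

# Downloads the pack source into ./raptor_pack and returns the pack class
RaptorPack = download_llama_pack("RaptorPack", "./raptor_pack")
```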

## Code Usage

Alternatively, you can install the package:

`pip install llama-index-packs-raptor`

Then, you can import and initialize the pack! This will perform clustering and summarization over your data.

```python
from llama_index.packs.raptor import RaptorPack

pack = RaptorPack(documents, llm=llm, embed_model=embed_model)
```
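
The pack itself does not define `documents`, `llm`, or `embed_model`. As one possible setup (an assumption here, using OpenAI models and a local folder of files), they might be created like this:

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# Load source documents from a local folder (hypothetical path)
documents = SimpleDirectoryReader("./data").load_data()

# Any llama-index-compatible LLM and embedding model should work here
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
embed_model = OpenAIEmbedding(model="text-embedding-3-small")
```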

The `run()` function is a light wrapper around `retriever.retrieve()`.

```python
nodes = pack.run(
    "query",
    mode="collapsed",  # or tree_traversal
)
```
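
The results are standard llama-index `NodeWithScore` objects, so you can inspect them as usual. A small sketch:

```python
for node_with_score in nodes:
    # Each result carries a similarity score and the underlying node content
    print(node_with_score.score, node_with_score.node.get_content()[:200])
```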

You can also use modules individually.

```python
# get the retriever
retriever = pack.retriever
```
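
For example, the retriever can be plugged into a standard llama-index query engine for end-to-end question answering (a sketch, reusing the `llm` from above):

```python
from llama_index.core.query_engine import RetrieverQueryEngine

# Wrap the RAPTOR retriever in a query engine that synthesizes an answer
query_engine = RetrieverQueryEngine.from_args(retriever, llm=llm)

response = query_engine.query("query")
print(response)
```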

## Persistence

The `RaptorPack` comes with the `RaptorRetriever`, which offers ways of saving and reloading!

If you are using a remote vector database, just pass it in:

```python
# Pack usage
pack = RaptorPack(..., vector_store=vector_store)

# RaptorRetriever usage
retriever = RaptorRetriever(..., vector_store=vector_store)
```

Then, to re-connect, just pass in the vector store again along with an empty list of documents:

```python
# Pack usage
pack = RaptorPack([], ..., vector_store=vector_store)

# RaptorRetriever usage
retriever = RaptorRetriever([], ..., vector_store=vector_store)
```
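
As one concrete possibility (an assumption, not a requirement of the pack), a persistent Chroma collection via `llama-index-vector-stores-chroma` could back the retriever:

```python
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore

# Persist embeddings on disk so the RAPTOR tree can be reloaded later
client = chromadb.PersistentClient(path="./raptor_db")
collection = client.get_or_create_collection("raptor")
vector_store = ChromaVectorStore(chroma_collection=collection)

# Build once with documents...
pack = RaptorPack(
    documents, llm=llm, embed_model=embed_model, vector_store=vector_store
)

# ...then re-connect later with an empty document list
pack = RaptorPack([], llm=llm, embed_model=embed_model, vector_store=vector_store)
```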

Check out [the notebook](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-raptor/examples/raptor.ipynb) for complete details!

## Configure Summary Module

Using the `SummaryModule`, you can configure how the Raptor Pack generates summaries and how many workers it uses to produce them.

You can configure:

- the LLM used for summarization.
- `summary_prompt`, which changes the prompt sent to your LLM when summarizing your docs.
- `num_workers`, which controls the number of async workers (semaphores) allowed to process summaries simultaneously. Be aware that higher values may run into rate limits with OpenAI or other LLM providers.

```python
from llama_index.packs.raptor.base import SummaryModule
from llama_index.packs.raptor import RaptorPack

summary_prompt = "As a professional summarizer, create a concise and comprehensive summary of the provided text, be it an article, post, conversation, or passage with as much detail as possible."

# With a SummaryModule, you can configure the summary prompt and the number of workers generating summaries.
summary_module = SummaryModule(
    llm=llm, summary_prompt=summary_prompt, num_workers=16
)

pack = RaptorPack(
    documents, llm=llm, embed_model=embed_model, summary_module=summary_module
)
```

            
