llama-index-indices-managed-vectara

- Name: llama-index-indices-managed-vectara
- Version: 0.3.1 (PyPI)
- Summary: llama-index managed vectara integration
- Upload time: 2024-12-07 05:57:51
- Author: David Oplatka
- Requires Python: <4.0,>=3.9
- License: MIT
# LlamaIndex Managed Integration: Vectara

The Vectara Index provides a simple interface to Vectara's end-to-end RAG pipeline,
including data ingestion, document retrieval, result reranking, summary generation, and hallucination evaluation.

## Setup

First, make sure you have the latest LlamaIndex version installed.

Next, install the Vectara Index:

```shell
pip install -U llama-index-indices-managed-vectara
```

Finally, set up your Vectara corpus. If you don't have a Vectara account, you can [sign up](https://vectara.com/integrations/llamaindex) and follow our [Quick Start](https://docs.vectara.com/docs/quickstart) guide to create a corpus and an API key (make sure it has both indexing and query permissions).
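
Before initializing the index, it can help to confirm that the credentials used in the snippets below are actually set. This small helper is illustrative, not part of the package; the variable names match the ones exported in the next section:

```python
import os

REQUIRED_VARS = ["VECTARA_API_KEY", "VECTARA_CORPUS_ID", "VECTARA_CUSTOMER_ID"]


def missing_vectara_vars(environ=os.environ):
    """Return the names of required Vectara settings that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]


# Example with a partial configuration (a dict standing in for os.environ):
partial = {"VECTARA_API_KEY": "zut-...", "VECTARA_CORPUS_ID": "3"}
print(missing_vectara_vars(partial))  # -> ['VECTARA_CUSTOMER_ID']
```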

## Usage

First let's initialize the index with some sample documents.

```python
import os

os.environ["VECTARA_API_KEY"] = "<YOUR_VECTARA_API_KEY>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_VECTARA_CORPUS_ID>"
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_VECTARA_CUSTOMER_ID>"

from llama_index.indices.managed.vectara import VectaraIndex
from llama_index.core.schema import Document

docs = [
    Document(
        text="""
        This is test text for Vectara integration with LlamaIndex.
        Users should love their experience with this integration.
        """,
    ),
    Document(
        text="""
        The Vectara index integration with LlamaIndex implements Vectara's RAG pipeline.
        It can be used both as a retriever and query engine.
        """,
    ),
]

index = VectaraIndex.from_documents(docs)
```

You can now use this index to retrieve documents.

```python
# Retrieves the top search result
retriever = index.as_retriever(similarity_top_k=1)

results = retriever.retrieve("How will users feel about this new tool?")
print(results[0])
```
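
Printing a raw result dumps the whole object. If you only want the relevance score and a text preview per hit, a small formatting helper can be sketched as below. It assumes each result exposes a numeric `score` and a `text` attribute, as llama_index retrieval results generally do; the `SimpleNamespace` objects are stand-ins so it can be tried without a live corpus:

```python
from types import SimpleNamespace


def summarize_hits(results, max_chars=80):
    """Render one '(rank) score + text preview' line per retrieval result."""
    lines = []
    for rank, hit in enumerate(results, start=1):
        # Collapse internal whitespace, then truncate to a short preview.
        preview = " ".join(hit.text.split())[:max_chars]
        lines.append(f"{rank}. score={hit.score:.3f}  {preview}")
    return "\n".join(lines)


# Stand-in results, in place of retriever.retrieve(...) output:
fake = [SimpleNamespace(score=0.87, text="Users should love their experience...")]
print(summarize_hits(fake))
```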

You can also use it as a query engine to get a generated summary from the retrieved results.

```python
query_engine = index.as_query_engine()

results = query_engine.query(
    "Which company has partnered with Vectara to implement their RAG pipeline as an index?"
)
print(f"Generated summary: {results.response}\n")
print("Top sources:")
for node in results.source_nodes[:2]:
    print(node)
```
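
The loop above prints raw source nodes. To present the summary together with a numbered source list, one possible sketch (again using stand-in objects with a `text` attribute in place of real source nodes) is:

```python
from types import SimpleNamespace


def render_answer(response_text, source_nodes, max_sources=2):
    """Format a generated summary followed by a numbered list of its sources."""
    parts = [response_text, "", "Top sources:"]
    for i, node in enumerate(source_nodes[:max_sources], start=1):
        parts.append(f"[{i}] {' '.join(node.text.split())}")
    return "\n".join(parts)


demo = render_answer(
    "Vectara's RAG pipeline is available through LlamaIndex.",
    [SimpleNamespace(text="The Vectara index integration with LlamaIndex ...")],
)
print(demo)
```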

If you want to see the full features and capabilities of `VectaraIndex`, check out this Jupyter [notebook](https://github.com/vectara/example-notebooks/blob/main/notebooks/using-vectara-with-llamaindex.ipynb).
