llama-index-retrievers-bedrock


Name: llama-index-retrievers-bedrock
Version: 0.2.0
Home page: None
Summary: llama-index retrievers bedrock integration
Upload time: 2024-08-22 07:16:01
Maintainer: None
Docs URL: None
Author: Your Name
Requires Python: <4.0,>=3.8.1
License: MIT
Keywords: None
Requirements: No requirements were recorded.
# LlamaIndex Retrievers Integration: Bedrock

## Knowledge Bases

> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering that lets you quickly build RAG applications by using your private data to customize foundation model (FM) responses.

> Implementing `RAG` requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the user’s query. This can be time-consuming and inefficient.

> With `Knowledge Bases for Amazon Bedrock`, simply point to the location of your data in `Amazon S3`, and `Knowledge Bases for Amazon Bedrock` takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you.

> A knowledge base can be configured through the [AWS Console](https://aws.amazon.com/console/) or by using the [AWS SDKs](https://aws.amazon.com/developer/tools/).
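
For orientation, the retriever in this package is, in effect, a thin wrapper around the Knowledge Bases `Retrieve` API of the `bedrock-agent-runtime` service. A rough sketch of the underlying SDK call (the region and knowledge base ID below are placeholders) looks like this:

```
import boto3

# Placeholder region and knowledge base ID; substitute your own values.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="<knowledge-base-id>",
    retrievalQuery={"text": "How big is the Milky Way compared to the entire universe?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 4}},
)

# Each result carries the matched text, its source location, and a relevance score.
for result in response["retrievalResults"]:
    print(result["score"], result["content"]["text"])
```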

## Installation

```
pip install llama-index-retrievers-bedrock
```
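
The retriever uses boto3 under the hood, so AWS credentials and a region must be discoverable when it is constructed, for example from a shared credentials file, an attached IAM role, or environment variables. A minimal, illustrative setup via environment variables (placeholder values):

```
import os

# Placeholder credentials; in practice prefer an AWS profile or an IAM role.
os.environ["AWS_ACCESS_KEY_ID"] = "<access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret-access-key>"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
```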

## Usage

```
from llama_index.retrievers.bedrock import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<knowledge-base-id>",
    retrieval_config={
        "vectorSearchConfiguration": {
            "numberOfResults": 4,
            "overrideSearchType": "HYBRID",
            "filter": {"equals": {"key": "tag", "value": "space"}},
        }
    },
)

query = "How big is the Milky Way compared to the entire universe?"
retrieved_results = retriever.retrieve(query)

# Prints the first retrieved result
print(retrieved_results[0].get_content())
```
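
The retrieved results are standard LlamaIndex `NodeWithScore` objects, so the retriever plugs directly into the usual RAG plumbing. Below is a minimal sketch that assumes the separate `llama-index-llms-bedrock` package and an example model ID; both are illustrative choices, not requirements of this package:

```
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.llms.bedrock import Bedrock  # assumes llama-index-llms-bedrock is installed

# Inspect the raw retrieval results: each node carries a relevance score.
for node in retrieved_results:
    print(node.score, node.get_content()[:80])

# Example model ID; use any Bedrock text model enabled in your account.
llm = Bedrock(model="anthropic.claude-v2")

# Wire the Knowledge Bases retriever into a query engine for end-to-end RAG.
query_engine = RetrieverQueryEngine.from_args(retriever, llm=llm)
print(query_engine.query(query))
```

Any LlamaIndex-compatible LLM works as the `llm` argument here; the Bedrock LLM is simply a convenient choice when the rest of the stack already runs on AWS.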

## Notebook

Explore the retriever further in the example notebook at:
https://docs.llamaindex.ai/en/latest/examples/retrievers/bedrock_retriever/

            
