vector-database-loader

**Version:** 0.2.1 (PyPI)
**Summary:** A generalized tool for curating and loading content into vector databases
**Homepage:** https://medium.com/@dan.jam.kuhn/data-pipelines-for-rag-a-python-utility-for-populating-vector-databases-3f6c164756e9
**Repository:** https://github.com/danjamk/vector-database-loader
**Author:** danjamk
**License:** MIT
**Requires Python:** >=3.11, <3.13
**Keywords:** vector, database, loader, milvus, langchain, pinecone
**Uploaded:** 2025-02-11 15:54:24
# vector-database-loader
Loading content into a vector database is relatively easy, especially with frameworks like [LangChain](https://www.langchain.com/).
However, curating that content and loading it into the database can be more complex. If you are building
a RAG application or something similar, the quality and relevance of the content is critical. This project is meant to help with that process.

A use case for this type of project is discussed in more depth in the blog post [A Cost-Effective AI Chatbot Architecture with AWS Bedrock, Lambda, and Pinecone](https://medium.com/@dan.jam.kuhn/a-cost-effective-ai-chatbot-architecture-with-aws-bedrock-lambda-and-pinecone-40935b9ec361).

## Features
- **Vector Database Support** - The framework is built to support multiple vector databases, but currently implements support for [Pinecone](https://www.pinecone.io/).
More vector databases will be added; in the meantime, you can fork the project and handle your own needs by extending the base class.
- **Embedding Support** - You can use any embedding model supported by [LangChain](https://python.langchain.com/docs/integrations/text_embedding/), which includes OpenAI, AWS Bedrock, HuggingFace, Cohere, and many more.
- **Content Curation** - The framework comes configured for some common content types and sources, but again is meant to be extended as needed.
  - Sources include websites, local folders, and Google Drive
  - Types include PDF, Word, web content, and Google Docs
- **Text Splitter** - This framework uses the [RecursiveCharacterTextSplitter](https://python.langchain.com/v0.1/docs/modules/data_connection/document_transformers/recursive_text_splitter/) from LangChain.
This is a powerful tool that splits text into chunks of a specified size while preserving the context of the text, which is especially useful for long documents like web pages or PDFs.
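To give a feel for what recursive character splitting does, here is a simplified, self-contained sketch of the idea: try the coarsest separator first (paragraph breaks), and fall back to finer ones only when a piece is still larger than the chunk size. This is an illustration only, not LangChain's actual implementation (which also handles chunk overlap and merging):

```python
# Simplified illustration of recursive character splitting.
# Not LangChain's implementation -- just the core idea.
def recursive_split(text, chunk_size=512, separators=("\n\n", "\n", " ", "")):
    if len(text) <= chunk_size:
        return [text]
    # Pick the coarsest separator that actually appears in the text
    sep = next(s for s in separators if s in text or s == "")
    pieces = list(text) if sep == "" else text.split(sep)
    chunks, current = [], ""
    for piece in pieces:
        candidate = piece if not current else current + sep + piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single piece may still be too big: recurse with finer separators
            if len(piece) > chunk_size:
                chunks.extend(recursive_split(piece, chunk_size, separators))
                current = ""
            else:
                current = piece
    if current:
        chunks.append(current)
    return chunks

doc = ("Paragraph one about SpaceX.\n\n" + "Paragraph two. " * 20).strip()
chunks = recursive_split(doc, chunk_size=100)
```

The benefit over naive fixed-size slicing is that chunk boundaries tend to land on paragraph, line, or word boundaries, so each chunk stays semantically coherent for embedding.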

## Example
**Install the package with pip:**

Note: the dependencies are kind of beefy!

```shell
pip install vector-database-loader
```

**Configuration details:**

Add your Pinecone and OpenAI API keys to your `.env` file; see `sample.env` for an example.
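A minimal `.env` might look like the following. The variable names below are the standard ones read by the OpenAI and Pinecone clients; check `sample.env` for the exact names this package expects:

```
OPENAI_API_KEY=sk-your-openai-key
PINECONE_API_KEY=your-pinecone-key
```

`load_dotenv(find_dotenv())` in the example below loads these into the environment before the clients are constructed.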


**Code**

```python
import time

from dotenv import load_dotenv, find_dotenv
from langchain_openai import OpenAIEmbeddings

from vector_database_loader.pinecone_vector_db import PineconeVectorLoader, PineconeVectorQuery

# Define your content sources and add them to the array
web_page_content_source = {
    "name": "SpaceX",
    "type": "Website",
    "items": ["https://en.wikipedia.org/wiki/SpaceX"],
    "chunk_size": 512
}
content_sources = [web_page_content_source]

# Load into your vector database.  Be sure to add your Pinecone and OpenAI API keys to your .env file
load_dotenv(find_dotenv())
embedding_client = OpenAIEmbeddings()
index_name = "my-vectordb-index"
vector_db_loader = PineconeVectorLoader(index_name=index_name,
                                        embedding_client=embedding_client)
vector_db_loader.load_sources(content_sources, delete_index=True)

# Query your vector database
print("Waiting 30 seconds before running the query, to make sure the data is available")
time.sleep(30)  # This is needed because there is a latency in the data being available
vector_db_query = PineconeVectorQuery(index_name=index_name,
                                      embedding_client=embedding_client)
query = "What is SpaceX's most recent rocket model being tested?"
documents = vector_db_query.query(query)
print(f"Query: {query} returned {len(documents)} results")
for doc in documents:
    print(f"   {doc.metadata['title']}")
    print(f"   {doc.page_content}")
```
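Multiple sources can be combined in one `content_sources` list. Note that only the `"Website"` type string is confirmed by the example above; the `"Local Folder"` type below is a hypothetical placeholder for whatever string the package actually expects for local-folder sources:

```python
# Hypothetical example: combining several content sources in one load call.
# Only "Website" is confirmed by the README's example; "Local Folder" is a
# placeholder type string -- check the package docs for the real value.
website_source = {
    "name": "SpaceX",
    "type": "Website",
    "items": ["https://en.wikipedia.org/wiki/SpaceX"],
    "chunk_size": 512,
}
local_source = {
    "name": "Internal docs",
    "type": "Local Folder",  # placeholder type string
    "items": ["./docs"],
    "chunk_size": 512,
}
content_sources = [website_source, local_source]
```

The whole list would then be passed to `load_sources()` exactly as in the example above.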


            
