# llama-index-vector-stores-zeusdb

- **Version**: 0.1.4
- **Summary**: LlamaIndex integration for ZeusDB vector database. Enterprise-grade RAG with high-performance vector search.
- **Uploaded**: 2025-10-30 03:13:20
- **Requires Python**: <4.0,>=3.10
- **License**: MIT
- **Keywords**: embeddings, hnsw, llama-index, rag, semantic-search, vector-database, zeusdb
- **Repository**: https://github.com/zeusdb/llama-index-vector-stores-zeusdb
# LlamaIndex ZeusDB Integration

ZeusDB vector database integration for LlamaIndex. Connect LlamaIndex's RAG framework to a high-performance, enterprise-grade vector database.

## Features

- **Production Ready**: Built for enterprise-scale RAG applications
- **Persistence**: Complete save/load functionality with cross-platform compatibility
- **Advanced Filtering**: Comprehensive metadata filtering with complex operators
- **MMR Support**: Maximal Marginal Relevance for diverse, non-redundant results
- **Quantization**: Product Quantization (PQ) for memory-efficient vector storage
- **Async Support**: Async wrappers for non-blocking operations (`aadd`, `aquery`, `adelete_nodes`)

## Installation

```bash
pip install llama-index-vector-stores-zeusdb
```

## Quick Start

```python
from llama_index.core import VectorStoreIndex, Document, StorageContext
from llama_index.vector_stores.zeusdb import ZeusDBVectorStore
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

# Set up embedding model and LLM
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
Settings.llm = OpenAI(model="gpt-5")

# Create ZeusDB vector store
vector_store = ZeusDBVectorStore(
    dim=1536,  # OpenAI embedding dimension
    distance="cosine",
    index_type="hnsw"
)

# Create storage context
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Create documents
documents = [
    Document(text="ZeusDB is a high-performance vector database."),
    Document(text="LlamaIndex provides RAG capabilities."),
    Document(text="Vector search enables semantic similarity.")
]

# Create index and store documents
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context
)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("What is ZeusDB?")
print(response)
```

## Advanced Features

### Persistence

Save and load indexes with complete state preservation:

```python
# Save index to disk
vector_store.save_index("my_index.zdb")

# Load index from disk
loaded_store = ZeusDBVectorStore.load_index("my_index.zdb")
```

### MMR Search

Balance relevance and diversity for comprehensive results:

```python
from llama_index.core import Settings
from llama_index.core.vector_stores.types import VectorStoreQuery

# Query with MMR for diverse results
query_embedding = Settings.embed_model.get_text_embedding("your query")
results = vector_store.query(
    VectorStoreQuery(query_embedding=query_embedding, similarity_top_k=5),
    mmr=True,
    fetch_k=20,
    mmr_lambda=0.7  # 0.0=max diversity, 1.0=pure relevance
)

# Note: MMR automatically enables return_vector=True for diversity calculation
# Results contain ids and similarities (nodes=None)
```
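To make the `mmr_lambda` trade-off concrete, here is a minimal, self-contained sketch of the greedy MMR selection loop in plain Python (toy vectors, cosine similarity). The function and variable names are illustrative only, not part of the ZeusDB API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def mmr_select(query, candidates, k, lam):
    """Greedily pick k candidate indices, trading off relevance to the
    query (weight lam) against redundancy with already-picked results
    (weight 1 - lam)."""
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a low `lam`, a near-duplicate of the top hit is skipped in favor of a more diverse result; with a high `lam`, relevance wins and the near-duplicate is kept.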

### Quantization

Reduce memory usage with Product Quantization:

```python
vector_store = ZeusDBVectorStore(
    dim=1536,
    distance="cosine",
    quantization_config={
        'type': 'pq',
        'subvectors': 8,
        'bits': 8,
        'training_size': 1000,
        'storage_mode': 'quantized_only'
    }
)
```
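A back-of-envelope sketch of the memory savings implied by the settings above, assuming float32 originals and ignoring codebook overhead (this arithmetic is illustrative, not a measured figure from ZeusDB):

```python
def pq_bytes_per_vector(subvectors: int, bits: int) -> int:
    """Each subvector is replaced by a code of `bits` bits."""
    return subvectors * bits // 8

dim = 1536
raw_bytes = dim * 4                        # float32 original: 6144 bytes/vector
pq_bytes = pq_bytes_per_vector(8, 8)       # PQ code: 8 bytes/vector
ratio = raw_bytes // pq_bytes
print(raw_bytes, pq_bytes, ratio)          # → 6144 8 768
```

With `storage_mode='quantized_only'`, only the codes are retained, so the per-vector footprint drops by roughly this ratio.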

### Async Operations

Non-blocking operations for web servers and concurrent workflows:

```python
import asyncio

async def main():
    # Async add
    node_ids = await vector_store.aadd(nodes)

    # Async query
    results = await vector_store.aquery(query_obj)

    # Async delete
    await vector_store.adelete_nodes(node_ids=["id1", "id2"])

asyncio.run(main())
```

### Metadata Filtering

Filter results by metadata:

```python
from llama_index.core.vector_stores.types import (
    MetadataFilters,
    FilterOperator,
    FilterCondition,
    VectorStoreQuery,
)

# Create metadata filter
filters = MetadataFilters.from_dicts([
    {"key": "category", "value": "tech", "operator": FilterOperator.EQ},
    {"key": "year", "value": 2024, "operator": FilterOperator.GTE}
], condition=FilterCondition.AND)

# Query with filters
results = vector_store.query(
    VectorStoreQuery(
        query_embedding=query_embedding,
        similarity_top_k=5,
        filters=filters
    )
)
```

**Supported operators**: EQ, NE, GT, GTE, LT, LTE, IN, NIN, ANY, ALL, CONTAINS, TEXT_MATCH, TEXT_MATCH_INSENSITIVE
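As a rough guide to what a few of these operators mean, here is a plain-Python sketch of how they might evaluate against a node's metadata dict. This is an illustration of the operator semantics only; ZeusDB's actual evaluation logic may differ:

```python
def matches(metadata: dict, key: str, value, op: str) -> bool:
    """Illustrative semantics for a handful of filter operators."""
    actual = metadata.get(key)
    if op == "EQ":
        return actual == value
    if op == "GTE":
        return actual is not None and actual >= value
    if op == "IN":
        # metadata value is one of the given options
        return actual in value
    if op == "CONTAINS":
        # metadata value (list or string) contains the given item
        return isinstance(actual, (list, str)) and value in actual
    raise ValueError(f"unhandled operator: {op}")

meta = {"category": "tech", "year": 2024, "tags": ["rag", "vectors"]}
```

For example, `matches(meta, "category", ["tech", "science"], "IN")` and `matches(meta, "tags", "rag", "CONTAINS")` would both hold for the `meta` dict above.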

## Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `dim` | Vector dimension | Required |
| `distance` | Distance metric (`cosine`, `l2`, `l1`) | `cosine` |
| `index_type` | Index type (`hnsw`) | `hnsw` |
| `m` | HNSW connectivity parameter | 16 |
| `ef_construction` | HNSW build-time search depth | 200 |
| `expected_size` | Expected number of vectors | 10000 |
| `quantization_config` | PQ quantization settings | None |

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

            
