llama-index-graph-stores-arcadedb


Name: llama-index-graph-stores-arcadedb
Version: 0.3.1
Summary: LlamaIndex Graph Stores integration for ArcadeDB - Multi-Model Database with Graph, Document, Key-Value, Vector, and Time-Series support
Upload time: 2025-10-27 02:45:25
Maintainer: Steve Reiner
Author: Adams Rosales, ExtReMLapin, Steve Reiner
Requires Python: <4.0,>=3.10
Keywords: arcadedb, database, graph-store, graphrag, knowledge-graph, llama-index, multi-model, property-graph, rag, vector-search
Requirements: arcadedb-python, llama-index-core, llama-index-embeddings-openai, llama-index-llms-openai
Homepage / Repository: https://github.com/stevereiner/arcadedb-llama-index
Documentation: https://docs.llamaindex.ai/en/stable/module_guides/storing/graph_stores/
Bug Tracker: https://github.com/stevereiner/arcadedb-llama-index/issues
# LlamaIndex Graph_Stores Integration: ArcadeDB

ArcadeDB is a Multi-Model DBMS that supports Graph, Document, Key-Value, Vector, and Time-Series models in a single engine. It's designed to be fast, scalable, and easy to use, making it an excellent choice for GraphRAG applications.

This integration provides both basic graph store and property graph store implementations for ArcadeDB, enabling LlamaIndex to work with ArcadeDB as a graph database backend with full vector search capabilities.

## Features

- **Multi-Model Support**: Graph, Document, Key-Value, Vector, and Time-Series in one database
- **High Performance**: Native SQL with graph traversal capabilities
- **Vector Search**: Built-in vector similarity search
- **Schema Flexibility**: Dynamic schema creation and management
- **Production Ready**: ACID transactions, clustering, and enterprise features

## Installation

```shell
pip install llama-index-graph-stores-arcadedb
```

## Usage

### Property Graph Store (Recommended)

The property graph store is the recommended approach for most GraphRAG applications:

```python
from llama_index.graph_stores.arcadedb import ArcadeDBPropertyGraphStore
from llama_index.core import PropertyGraphIndex

# For OpenAI embeddings (ada-002)
graph_store = ArcadeDBPropertyGraphStore(
    host="localhost",
    port=2480,
    username="root",
    password="playwithdata",
    database="knowledge_graph",
    embedding_dimension=1536  # OpenAI text-embedding-ada-002
)

# For Ollama embeddings (all-MiniLM-L6-v2 - common in flexible-graphrag)
graph_store = ArcadeDBPropertyGraphStore(
    host="localhost",
    port=2480,
    username="root",
    password="playwithdata",
    database="knowledge_graph",
    embedding_dimension=384   # Ollama all-MiniLM-L6-v2
)

# Or omit embedding_dimension to disable vector operations
graph_store = ArcadeDBPropertyGraphStore(
    host="localhost",
    port=2480,
    username="root",
    password="playwithdata",
    database="knowledge_graph"
    # No embedding_dimension = no vector search
)

# Create a property graph index ("documents" is a list of LlamaIndex
# Document objects, e.g. loaded with SimpleDirectoryReader)
index = PropertyGraphIndex.from_documents(
    documents,
    property_graph_store=graph_store,
    show_progress=True
)

# Query the graph
query_engine = index.as_query_engine()
response = query_engine.query("What are the main topics discussed?")
```

### Basic Graph Store

For simpler use cases, you can use the basic graph store:

```python
from llama_index.graph_stores.arcadedb import ArcadeDBGraphStore
from llama_index.core import KnowledgeGraphIndex, StorageContext

# Initialize the graph store
graph_store = ArcadeDBGraphStore(
    host="localhost",
    port=2480,
    username="root",
    password="playwithdata",
    database="knowledge_graph"
)

# Create a knowledge graph index
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=StorageContext.from_defaults(graph_store=graph_store)
)
```

## Configuration

### Connection Parameters

- `host`: ArcadeDB server hostname (default: "localhost")
- `port`: ArcadeDB server port (default: 2480)
- `username`: Database username (default: "root")
- `password`: Database password
- `database`: Database name
- `embedding_dimension`: Vector dimension for embeddings (optional)
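
These map directly onto the store constructor. As a minimal sketch, the credentials can be kept out of source code by reading them from the environment (the `ARCADEDB_*` variable names below are just a convention for this example, not something the package defines):

```python
import os

from llama_index.graph_stores.arcadedb import ArcadeDBPropertyGraphStore

# The ARCADEDB_* variable names are illustrative, not defined by the package.
graph_store = ArcadeDBPropertyGraphStore(
    host=os.getenv("ARCADEDB_HOST", "localhost"),
    port=int(os.getenv("ARCADEDB_PORT", "2480")),
    username=os.getenv("ARCADEDB_USER", "root"),
    password=os.environ["ARCADEDB_PASSWORD"],  # required, no default
    database=os.getenv("ARCADEDB_DATABASE", "knowledge_graph"),
    embedding_dimension=1536,  # optional; omit to run graph-only
)
```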

### Query Engine

The property graph store uses **native ArcadeDB SQL** for optimal performance and reliability. ArcadeDB's SQL engine provides excellent graph traversal capabilities with MATCH patterns and is the recommended approach for production use.
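
For ad-hoc inspection you can also issue ArcadeDB SQL yourself. The sketch below is illustrative only: it assumes the store implements LlamaIndex's generic `PropertyGraphStore.structured_query()` hook, and `Entity` is a placeholder vertex type, so substitute a type that actually exists in your schema.

```python
# Illustrative raw query via the generic structured_query() hook from
# llama-index-core; "Entity" is a placeholder vertex type name.
rows = graph_store.structured_query(
    "MATCH {type: Entity, as: e} RETURN e.name AS name LIMIT 10"
)
print(rows)
```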

### Embedding Dimensions

Choose the correct `embedding_dimension` based on your embedding model:

| **Model** | **Dimension** | **Example** | **Usage** |
|-----------|---------------|-------------|-----------|
| OpenAI text-embedding-ada-002 | 1536 | `embedding_dimension=1536` | Production OpenAI |
| Ollama all-MiniLM-L6-v2 | 384 | `embedding_dimension=384` | **flexible-graphrag default** |
| Ollama nomic-embed-text | 768 | `embedding_dimension=768` | Alternative Ollama |
| Ollama mxbai-embed-large | 1024 | `embedding_dimension=1024` | High-quality Ollama |
| No vector search | None | Omit parameter entirely | Graph-only mode |
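
Whichever model you pick, `embedding_dimension` must match the embedding model configured in LlamaIndex. For example, pairing the store with OpenAI's ada-002 model from the table above (using the `llama-index-embeddings-openai` package this integration already depends on):

```python
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.graph_stores.arcadedb import ArcadeDBPropertyGraphStore

# ada-002 produces 1536-dimensional vectors, so the store must be created
# with a matching embedding_dimension.
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")

graph_store = ArcadeDBPropertyGraphStore(
    host="localhost",
    port=2480,
    username="root",
    password="playwithdata",
    database="knowledge_graph",
    embedding_dimension=1536,
)
```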

## Requirements

- ArcadeDB server (version 23.10+)
- Python 3.10+
- LlamaIndex core

## Getting Started

1. Start ArcadeDB server:
```bash
docker run -d --name arcadedb -p 2480:2480 -p 2424:2424 \
  -e JAVA_OPTS="-Darcadedb.server.rootPassword=playwithdata" \
  arcadedata/arcadedb:latest
```

2. Install the package:
```bash
pip install llama-index-graph-stores-arcadedb
```

3. Run your GraphRAG application!
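
Putting steps 1-3 together, a minimal end-to-end sketch might look like the following (assuming the Docker container from step 1 is running and `OPENAI_API_KEY` is set, since LlamaIndex's default LLM and embedding model are OpenAI's):

```python
from llama_index.core import Document, PropertyGraphIndex
from llama_index.graph_stores.arcadedb import ArcadeDBPropertyGraphStore

graph_store = ArcadeDBPropertyGraphStore(
    host="localhost",
    port=2480,
    username="root",
    password="playwithdata",
    database="knowledge_graph",
    embedding_dimension=1536,  # matches the default OpenAI embeddings
)

# A couple of toy documents instead of a full ingestion pipeline.
documents = [
    Document(text="ArcadeDB is a multi-model database with graph and vector support."),
    Document(text="LlamaIndex extracts entities and relations to build knowledge graphs."),
]

index = PropertyGraphIndex.from_documents(
    documents,
    property_graph_store=graph_store,
    show_progress=True,
)

print(index.as_query_engine().query("What does ArcadeDB support?"))
```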

## Examples

Check out the [examples directory](examples/) for complete working examples including:
- Basic usage with document ingestion
- Advanced GraphRAG workflows
- Vector similarity search
- Migration from other graph databases

## License

Apache License 2.0

            
