# Graphiti MCP Server
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents
operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti
continuously integrates user interactions, structured and unstructured enterprise data, and external information into a
coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical
queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI
applications.
This is an experimental Model Context Protocol (MCP) server implementation for Graphiti. The MCP server exposes
Graphiti's key functionality through the MCP protocol, allowing AI assistants to interact with Graphiti's knowledge
graph capabilities.
## Features
The Graphiti MCP server provides comprehensive knowledge graph capabilities:
- **Episode Management**: Add, retrieve, and delete episodes (text, messages, or JSON data)
- **Entity Management**: Search and manage entity nodes and relationships in the knowledge graph
- **Search Capabilities**: Search for facts (edges) and node summaries using semantic and hybrid search
- **Group Management**: Organize and manage groups of related data with group_id filtering
- **Graph Maintenance**: Clear the graph and rebuild indices
- **Graph Database Support**: Multiple backend options including FalkorDB (default) and Neo4j
- **Multiple LLM Providers**: Support for OpenAI, Anthropic, Gemini, Groq, and Azure OpenAI
- **Multiple Embedding Providers**: Support for OpenAI, Voyage, Sentence Transformers, and Gemini embeddings
- **Rich Entity Types**: Built-in entity types including Preferences, Requirements, Procedures, Locations, Events, Organizations, Documents, and more for structured knowledge extraction
- **HTTP Transport**: Default HTTP transport with MCP endpoint at `/mcp/` for broad client compatibility
- **Queue-based Processing**: Asynchronous episode processing with configurable concurrency limits
## Quick Start
### Clone the Graphiti GitHub repo
```bash
git clone https://github.com/getzep/graphiti.git
```
or
```bash
gh repo clone getzep/graphiti
```
### For Claude Desktop and other `stdio` only clients
1. Note the full path to this directory.
```bash
cd graphiti && pwd
```
2. Install the [Graphiti prerequisites](#prerequisites).
3. Configure Claude, Cursor, or other MCP client to use [Graphiti with a `stdio` transport](#integrating-with-mcp-clients). See the client documentation on where to find their MCP configuration files.
### For Cursor and other HTTP-enabled clients
1. Change directory to the `mcp_server` directory
```bash
cd graphiti/mcp_server
```
2. Start the combined FalkorDB + MCP server using Docker Compose (recommended)
```bash
docker compose up
```
This starts both FalkorDB and the MCP server in a single container.
**Alternative**: Run with separate containers using Neo4j:
```bash
docker compose -f docker/docker-compose-neo4j.yml up
```
3. Point your MCP client to `http://localhost:8000/mcp/`
## Installation
### Prerequisites
1. Docker and Docker Compose (for the default FalkorDB setup)
2. OpenAI API key for LLM operations (or API keys for other supported LLM providers)
3. (Optional) Python 3.10+ if running the MCP server standalone with an external FalkorDB instance
### Setup
1. Clone the repository and navigate to the mcp_server directory
2. Use `uv` to create a virtual environment and install dependencies:
```bash
# Install uv if you don't have it already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create a virtual environment and install dependencies in one step
uv sync
# Optional: Install additional LLM providers (anthropic, gemini, groq, voyage, sentence-transformers)
uv sync --extra providers
```
## Configuration
The server can be configured using a `config.yaml` file, environment variables, or command-line arguments (in order of precedence).
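For example, command-line flags take precedence over both environment variables and `config.yaml` values, so a one-off run with a different LLM provider might look like the following (a sketch using the flags documented under [Available Command-Line Arguments](#available-command-line-arguments); replace the placeholder model name with one your provider supports):
```bash
# config.yaml may set provider: "openai"; these flags override it for this run only
uv run graphiti_mcp_server.py --config config.yaml --llm-provider anthropic --model <anthropic_model>
```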
### Default Configuration
The MCP server comes with sensible defaults:
- **Transport**: HTTP (MCP endpoint at `/mcp/`); stdio is also available for Claude Desktop and other stdio-only clients, and sse/http for web clients like LibreChat
- **Database**: FalkorDB (combined in single container with MCP server)
- **LLM**: OpenAI with model gpt-5-mini
- **Embedder**: OpenAI text-embedding-3-small
### Database Configuration
#### FalkorDB (Default)
FalkorDB is a Redis-based graph database that comes bundled with the MCP server in a single Docker container. This is the default and recommended setup.
```yaml
database:
  provider: "falkordb"  # Default
  providers:
    falkordb:
      uri: "redis://localhost:6379"
      password: ""            # Optional
      database: "default_db"  # Optional
```
#### Neo4j
For production use or when you need a full-featured graph database, Neo4j is recommended:
```yaml
database:
  provider: "neo4j"
  providers:
    neo4j:
      uri: "bolt://localhost:7687"
      username: "neo4j"
      password: "your_password"
      database: "neo4j"  # Optional, defaults to "neo4j"
```
### Configuration File (config.yaml)
The server supports multiple LLM providers (OpenAI, Anthropic, Gemini, Groq) and embedders. Edit `config.yaml` to configure:
```yaml
server:
  transport: "stdio"  # Options: stdio, sse, http (the server default is http)
  # For Claude Desktop and other stdio-only clients: use "stdio"
  # For web clients (LibreChat, etc.): use "sse" or "http" (http falls back to sse)

llm:
  provider: "openai"  # or "anthropic", "gemini", "groq", "azure_openai"
  model: "gpt-4.1"    # Default model

database:
  provider: "falkordb"  # Default. Options: "falkordb", "neo4j"
```
### Using Ollama for Local LLM
To use Ollama with the MCP server, configure it as an OpenAI-compatible endpoint:
```yaml
llm:
  provider: "openai"
  model: "gpt-oss:120b"  # or your preferred Ollama model
  api_base: "http://localhost:11434/v1"
  api_key: "ollama"  # dummy key required

embedder:
  provider: "sentence_transformers"  # recommended for local setup
  model: "all-MiniLM-L6-v2"
```
Make sure Ollama is running locally with: `ollama serve`
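If the model referenced above is not yet available locally, pull it before starting the server (the tag below simply matches the example config and is only illustrative):
```bash
# Download the model referenced in config.yaml before running `ollama serve`
ollama pull gpt-oss:120b
```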
### Entity Types
Graphiti MCP Server includes built-in entity types for structured knowledge extraction. These entity types are always enabled and configured via the `entity_types` section in your `config.yaml`:
**Available Entity Types:**
- **Preference**: User preferences, choices, opinions, or selections (prioritized for user-specific information)
- **Requirement**: Specific needs, features, or functionality that must be fulfilled
- **Procedure**: Standard operating procedures and sequential instructions
- **Location**: Physical or virtual places where activities occur
- **Event**: Time-bound activities, occurrences, or experiences
- **Organization**: Companies, institutions, groups, or formal entities
- **Document**: Information content in various forms (books, articles, reports, videos, etc.)
- **Topic**: Subject of conversation, interest, or knowledge domain (used as a fallback)
- **Object**: Physical items, tools, devices, or possessions (used as a fallback)
These entity types are defined in `config.yaml` and can be customized by modifying the descriptions:
```yaml
graphiti:
  entity_types:
    - name: "Preference"
      description: "User preferences, choices, opinions, or selections"
    - name: "Requirement"
      description: "Specific needs, features, or functionality"
    # ... additional entity types
```
The MCP server automatically uses these entity types during episode ingestion to extract and structure information from conversations and documents.
### Environment Variables
The `config.yaml` file supports environment variable expansion using `${VAR_NAME}` or `${VAR_NAME:default}` syntax. Key variables:
- `NEO4J_URI`: URI for the Neo4j database (default: `bolt://localhost:7687`)
- `NEO4J_USER`: Neo4j username (default: `neo4j`)
- `NEO4J_PASSWORD`: Neo4j password (default: `demodemo`)
- `OPENAI_API_KEY`: OpenAI API key (required for OpenAI LLM/embedder)
- `ANTHROPIC_API_KEY`: Anthropic API key (for Claude models)
- `GOOGLE_API_KEY`: Google API key (for Gemini models)
- `GROQ_API_KEY`: Groq API key (for Groq models)
- `AZURE_OPENAI_API_KEY`: Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint URL
- `AZURE_OPENAI_DEPLOYMENT`: Azure OpenAI deployment name
- `AZURE_OPENAI_EMBEDDINGS_ENDPOINT`: Optional Azure OpenAI embeddings endpoint URL
- `AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT`: Optional Azure OpenAI embeddings deployment name
- `AZURE_OPENAI_API_VERSION`: Optional Azure OpenAI API version
- `USE_AZURE_AD`: Optional; set to use Azure Managed Identities for authentication
- `SEMAPHORE_LIMIT`: Episode processing concurrency. See [Concurrency and LLM Provider 429 Rate Limit Errors](#concurrency-and-llm-provider-429-rate-limit-errors)
You can set these variables in a `.env` file in the project directory.
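As an example of the `${VAR_NAME:default}` syntax, a Neo4j provider block can reference these variables instead of hard-coding credentials (a sketch; adjust the keys to match your `config.yaml`):
```yaml
database:
  provider: "neo4j"
  providers:
    neo4j:
      uri: "${NEO4J_URI:bolt://localhost:7687}"  # falls back to the default if unset
      username: "${NEO4J_USER:neo4j}"
      password: "${NEO4J_PASSWORD}"              # no default; set it in the environment or .env
```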
### Tool Compatibility
The MCP server provides backward compatibility for tool names and parameters:
- **Tool names**: `search_nodes` is also available as `search_memory_nodes`
- **Parameters**: Tools accept both singular (`group_id`) and plural (`group_ids`) parameter names
- **Backward compatibility**: Existing clients that use the `group_id` and `last_n` parameters will continue to work
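As an illustration, both of the following calls are accepted. The `query` parameter name is illustrative; treat this as a sketch of the compatibility rules above rather than the exact tool signatures:
```
# Current tool name with the plural parameter
search_nodes(query="coffee preferences", group_ids=["main"])

# Legacy alias with the singular parameter, still accepted
search_memory_nodes(query="coffee preferences", group_id="main")
```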
## Running the Server
### Default Setup (FalkorDB Combined Container)
To run the Graphiti MCP server with the default FalkorDB setup:
```bash
docker compose up
```
This starts a single container with:
- HTTP transport on `http://localhost:8000/mcp/`
- FalkorDB graph database on `localhost:6379`
- FalkorDB web UI on `http://localhost:3000`
- OpenAI LLM with gpt-5-mini model
### Running with Neo4j
#### Option 1: Using Docker Compose
The easiest way to run with Neo4j is using the provided Docker Compose configuration:
```bash
# This starts both Neo4j and the MCP server
docker compose -f docker/docker-compose.neo4j.yaml up
```
#### Option 2: Direct Execution with Existing Neo4j
If you have Neo4j already running:
```bash
# Set environment variables
export NEO4J_URI="bolt://localhost:7687"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your_password"
# Run with Neo4j
uv run graphiti_mcp_server.py --database-provider neo4j
```
Or use the Neo4j configuration file:
```bash
uv run graphiti_mcp_server.py --config config/config-docker-neo4j.yaml
```
### Running with FalkorDB
#### Option 1: Using Docker Compose
```bash
# This starts both FalkorDB (Redis-based) and the MCP server
docker compose -f docker/docker-compose.falkordb.yaml up
```
#### Option 2: Direct Execution with Existing FalkorDB
```bash
# Set environment variables
export FALKORDB_URI="redis://localhost:6379"
export FALKORDB_PASSWORD="" # If password protected
# Run with FalkorDB
uv run graphiti_mcp_server.py --database-provider falkordb
```
Or use the FalkorDB configuration file:
```bash
uv run graphiti_mcp_server.py --config config/config-docker-falkordb.yaml
```
### Available Command-Line Arguments
- `--config`: Path to YAML configuration file (default: config.yaml)
- `--llm-provider`: LLM provider to use (openai, anthropic, gemini, groq, azure_openai)
- `--embedder-provider`: Embedder provider to use (openai, azure_openai, gemini, voyage)
- `--database-provider`: Database provider to use (falkordb, neo4j) - default: falkordb
- `--model`: Model name to use with the LLM client
- `--temperature`: Temperature setting for the LLM (0.0-2.0)
- `--transport`: Choose the transport method (http or stdio, default: http)
- `--group-id`: Set a namespace for the graph (optional). If not provided, defaults to "main"
- `--destroy-graph`: If set, destroys all Graphiti graphs on startup
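Several of these flags can be combined in a single invocation, for example (placeholders shown; choose values that match your setup):
```bash
# Run over HTTP against a local FalkorDB, scoping graph data to a project-specific namespace
uv run graphiti_mcp_server.py \
  --transport http \
  --database-provider falkordb \
  --group-id <your_project_id> \
  --model gpt-4.1-mini
```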
### Concurrency and LLM Provider 429 Rate Limit Errors
Graphiti's ingestion pipelines are designed for high concurrency, controlled by the `SEMAPHORE_LIMIT` environment variable. This setting determines how many episodes can be processed simultaneously. Since each episode involves multiple LLM calls (entity extraction, deduplication, summarization), the actual number of concurrent LLM requests will be several times higher.
**Default:** `SEMAPHORE_LIMIT=10` (suitable for OpenAI Tier 3, mid-tier Anthropic)
#### Tuning Guidelines by LLM Provider
**OpenAI:**
- Tier 1 (free): 3 RPM → `SEMAPHORE_LIMIT=1-2`
- Tier 2: 60 RPM → `SEMAPHORE_LIMIT=5-8`
- Tier 3: 500 RPM → `SEMAPHORE_LIMIT=10-15`
- Tier 4: 5,000 RPM → `SEMAPHORE_LIMIT=20-50`
**Anthropic:**
- Default tier: 50 RPM → `SEMAPHORE_LIMIT=5-8`
- High tier: 1,000 RPM → `SEMAPHORE_LIMIT=15-30`
**Azure OpenAI:**
- Consult your quota in Azure Portal and adjust accordingly
- Start conservative and increase gradually
**Ollama (local):**
- Hardware dependent → `SEMAPHORE_LIMIT=1-5`
- Monitor CPU/GPU usage and adjust
#### Symptoms
- **Too high**: 429 rate limit errors, increased API costs from parallel processing
- **Too low**: Slow episode throughput, underutilized API quota
#### Monitoring
- Watch logs for `429` rate limit errors
- Monitor episode processing times in server logs
- Check your LLM provider's dashboard for actual request rates
- Track token usage and costs
Set this in your `.env` file:
```bash
SEMAPHORE_LIMIT=10 # Adjust based on your LLM provider tier
```
### Docker Deployment
The Graphiti MCP server can be deployed using Docker with your choice of database backend. The Dockerfile uses `uv` for package management, ensuring consistent dependency installation.
**Pre-built Docker Images:**
- **Official**: `zepai/knowledge-graph-mcp` - Official Graphiti MCP server image
- **Custom with Enhanced Tools**: `lvarming/graphiti-mcp` - Community fork with additional MCP tools for advanced knowledge management
- Includes `get_entities_by_type` for browsing entities by classification
- Includes `compare_facts_over_time` for tracking knowledge evolution
- Automated builds from [Varming73/graphiti](https://github.com/Varming73/graphiti)
- Uses official graphiti-core from PyPI with custom MCP server enhancements
#### Environment Configuration
Before running Docker Compose, configure your API keys using a `.env` file (recommended):
1. **Create a .env file in the mcp_server directory**:
```bash
cd graphiti/mcp_server
cp .env.example .env
```
2. **Edit the .env file** to set your API keys:
```bash
# Required - at least one LLM provider API key
OPENAI_API_KEY=your_openai_api_key_here
# Optional - other LLM providers
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_API_KEY=your_google_key
GROQ_API_KEY=your_groq_key
# Optional - embedder providers
VOYAGE_API_KEY=your_voyage_key
```
**Important**: The `.env` file must be in the `mcp_server/` directory (the parent of the `docker/` subdirectory).
#### Running with Docker Compose
**All commands must be run from the `mcp_server` directory** to ensure the `.env` file is loaded correctly:
```bash
cd graphiti/mcp_server
```
##### Option 1: FalkorDB Combined Container (Default)
Single container with both FalkorDB and MCP server - simplest option:
```bash
docker compose up
```
##### Option 2: Neo4j Database
Separate containers with Neo4j and MCP server:
```bash
docker compose -f docker/docker-compose-neo4j.yml up
```
Default Neo4j credentials:
- Username: `neo4j`
- Password: `demodemo`
- Bolt URI: `bolt://neo4j:7687`
- Browser UI: `http://localhost:7474`
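To avoid running with the default credentials, you can set the corresponding variables in your `mcp_server/.env` file before starting the containers (a sketch; the variable names are those listed under [Environment Variables](#environment-variables)):
```bash
# .env overrides picked up by docker compose
NEO4J_USER=neo4j
NEO4J_PASSWORD=a_stronger_password
```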
##### Option 3: FalkorDB with Separate Containers
Alternative setup with separate FalkorDB and MCP server containers:
```bash
docker compose -f docker/docker-compose-falkordb.yml up
```
FalkorDB configuration:
- Redis port: `6379`
- Web UI: `http://localhost:3000`
- Connection: `redis://falkordb:6379`
#### Accessing the MCP Server
Once running, the MCP server is available at:
- **HTTP endpoint**: `http://localhost:8000/mcp/`
- **Health check**: `http://localhost:8000/health`
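A quick way to confirm the server is running is to hit the health check endpoint (assuming the default port mapping shown above):
```bash
# Returns a success response once the container is healthy
curl http://localhost:8000/health
```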
#### Running Docker Compose from a Different Directory
If you run Docker Compose from the `docker/` subdirectory instead of `mcp_server/`, you'll need to modify the `.env` file path in the compose file:
```yaml
# Change this line in the docker-compose file:
env_file:
  - path: ../.env  # When running from mcp_server/

# To this:
env_file:
  - path: .env     # When running from mcp_server/docker/
```
However, **running from the `mcp_server/` directory is recommended** to avoid confusion.
## Integrating with MCP Clients
### VS Code / GitHub Copilot
VS Code with GitHub Copilot Chat extension supports MCP servers. Add to your VS Code settings (`.vscode/mcp.json` or global settings):
```json
{
  "mcpServers": {
    "graphiti": {
      "uri": "http://localhost:8000/mcp/",
      "transport": {
        "type": "http"
      }
    }
  }
}
```
### Other MCP Clients
To use the Graphiti MCP server with other MCP-compatible clients, configure it to connect to the server:
> [!IMPORTANT]
> You will need the Python package manager, `uv` installed. Please refer to the [`uv` install instructions](https://docs.astral.sh/uv/getting-started/installation/).
>
> Ensure that you set the full path to the `uv` binary and your Graphiti project folder.
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "transport": "stdio",
      "command": "/Users/<user>/.local/bin/uv",
      "args": [
        "run",
        "--isolated",
        "--directory",
        "/Users/<user>/dev/zep/graphiti/mcp_server",
        "--project",
        ".",
        "graphiti_mcp_server.py",
        "--transport",
        "stdio"
      ],
      "env": {
        "NEO4J_URI": "bolt://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "password",
        "OPENAI_API_KEY": "sk-XXXXXXXX",
        "MODEL_NAME": "gpt-4.1-mini"
      }
    }
  }
}
```
For HTTP transport (default), you can use this configuration:
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "transport": "http",
      "url": "http://localhost:8000/mcp/"
    }
  }
}
```
## Available Tools
The Graphiti MCP server exposes the following tools:
### Core Tools
- `add_episode`: Add an episode to the knowledge graph (supports text, JSON, and message formats)
- `search_nodes`: Search the knowledge graph for relevant node summaries
- `search_facts`: Search the knowledge graph for relevant facts (edges between entities)
- `delete_entity_edge`: Delete an entity edge from the knowledge graph
- `delete_episode`: Delete an episode from the knowledge graph
- `get_entity_edge`: Get an entity edge by its UUID
- `get_episodes`: Get the most recent episodes for a specific group
- `clear_graph`: Clear all data from the knowledge graph and rebuild indices
- `get_status`: Get the status of the Graphiti MCP server and its database connection
### Enhanced Knowledge Management Tools
> **Note**: These tools are available in the custom Docker image `lvarming/graphiti-mcp` or when using the [community fork](https://github.com/Varming73/graphiti).
- **`get_entities_by_type`**: Retrieve entities by their type classification
- Essential for personal knowledge management (PKM) workflows
- Browse entities by type (e.g., Pattern, Insight, Preference, Procedure)
- Filter by group IDs and search query
- Example: `get_entities_by_type(entity_types=["Preference", "Requirement"])`
- **`compare_facts_over_time`**: Track knowledge evolution between time periods
- Compare facts valid at different points in time
- Returns facts added, facts invalidated, and facts that remained valid
- Useful for understanding how your knowledge base evolved
- Example: `compare_facts_over_time(query="productivity", start_time="2024-01-01", end_time="2024-03-01")`
## Working with JSON Data
The Graphiti MCP server can process structured JSON data through the `add_episode` tool with `source="json"`. This
allows you to automatically extract entities and relationships from structured data:
```
add_episode(
    name="Customer Profile",
    episode_body="{\"company\": {\"name\": \"Acme Technologies\"}, \"products\": [{\"id\": \"P001\", \"name\": \"CloudSync\"}, {\"id\": \"P002\", \"name\": \"DataMiner\"}]}",
    source="json",
    source_description="CRM data"
)
```
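When building `episode_body` programmatically, it is usually easier to construct a dict and serialize it rather than hand-escaping the JSON string. A minimal sketch (the nested structure mirrors the example above; only the serialization step is shown):
```python
import json

profile = {
    "company": {"name": "Acme Technologies"},
    "products": [
        {"id": "P001", "name": "CloudSync"},
        {"id": "P002", "name": "DataMiner"},
    ],
}

# json.dumps produces the escaped JSON string passed as episode_body
episode_body = json.dumps(profile)
```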
## Integrating with the Cursor IDE
To integrate the Graphiti MCP Server with the Cursor IDE, follow these steps:
1. Run the Graphiti MCP server using the default HTTP transport:
```bash
uv run graphiti_mcp_server.py --group-id <your_group_id>
```
Hint: specify a `group_id` to namespace graph data. If you do not specify a `group_id`, the server will use "main" as the group_id.
or
```bash
docker compose up
```
2. Configure Cursor to connect to the Graphiti MCP server.
```json
{
  "mcpServers": {
    "graphiti-memory": {
      "url": "http://localhost:8000/mcp/"
    }
  }
}
```
3. Add the Graphiti rules to Cursor's User Rules. See [cursor_rules.md](cursor_rules.md) for details.
4. Kick off an agent session in Cursor.
The integration enables AI assistants in Cursor to maintain persistent memory through Graphiti's knowledge graph
capabilities.
## Integrating with Claude Desktop (Docker MCP Server)
The Graphiti MCP Server uses HTTP transport (at endpoint `/mcp/`). Claude Desktop does not natively support HTTP transport, so you'll need to use a gateway like `mcp-remote`.
1. **Run the Graphiti MCP server**:
```bash
docker compose up
# Or run directly with uv:
uv run graphiti_mcp_server.py
```
2. **(Optional) Install `mcp-remote` globally**:
If you prefer to have `mcp-remote` installed globally, or if you encounter issues with `npx` fetching the package, you can install it globally. Otherwise, `npx` (used in the next step) will handle it for you.
```bash
npm install -g mcp-remote
```
3. **Configure Claude Desktop**:
Open your Claude Desktop configuration file (usually `claude_desktop_config.json`) and add or modify the `mcpServers` section as follows:
```json
{
  "mcpServers": {
    "graphiti-memory": {
      // You can choose a different name if you prefer
      "command": "npx", // Or the full path to mcp-remote if npx is not in your PATH
      "args": [
        "mcp-remote",
        "http://localhost:8000/mcp/" // The Graphiti server's HTTP endpoint
      ]
    }
  }
}
```
If you already have an `mcpServers` entry, add `graphiti-memory` (or your chosen name) as a new key within it.
4. **Restart Claude Desktop** for the changes to take effect.
## Requirements
- Python 3.10 or higher
- OpenAI API key (for LLM operations and embeddings) or other LLM provider API keys
- MCP-compatible client
- Docker and Docker Compose (for the default FalkorDB combined container)
- (Optional) Neo4j database (version 5.26 or later) if not using the default FalkorDB setup
## Telemetry
The Graphiti MCP server uses the Graphiti core library, which includes anonymous telemetry collection. When you initialize the Graphiti MCP server, anonymous usage statistics are collected to help improve the framework.
### What's Collected
- Anonymous identifier and system information (OS, Python version)
- Graphiti version and configuration choices (LLM provider, database backend, embedder type)
- **No personal data, API keys, or actual graph content is ever collected**
### How to Disable
To disable telemetry in the MCP server, set the environment variable:
```bash
export GRAPHITI_TELEMETRY_ENABLED=false
```
Or add it to your `.env` file:
```
GRAPHITI_TELEMETRY_ENABLED=false
```
For complete details about what's collected and why, see the [Telemetry section in the main Graphiti README](../README.md#telemetry).
## License
This project is licensed under the same license as the parent Graphiti project.