# qdrant-llamaindex-mcp-server: LlamaIndex-Compatible Qdrant MCP Server
> The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is an open protocol that enables
> seamless integration between LLM applications and external data sources and tools. Whether you're building an
> AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to
> connect LLMs with the context they need.
This repository is a fork of [qdrant/mcp-server-qdrant](https://github.com/qdrant/mcp-server-qdrant) specifically designed to work with documents stored by [LlamaIndex](https://llamaindex.ai/) in [Qdrant](https://qdrant.tech/) vector databases.
## ⚠️ Important Differences from Official Server
This fork has **breaking changes** compared to the official `qdrant/mcp-server-qdrant`:
- **🔧 Many More Tools**: Provides 10+ tools vs. the official server's basic find/store tools
- **🎯 Dynamic Collection Selection**: Collection names are specified at runtime by MCP clients, not hardcoded in configuration
- **🤖 Dynamic Embedding Model Detection**: Automatically detects and loads the correct embedding model for each collection
- **📚 LlamaIndex Compatibility**: Adapts to different content field names and metadata structures used by LlamaIndex
- **🔒 Enhanced Security**: Built-in embedding model whitelist to prevent accidental loading of large models
**These changes make configurations incompatible with the official server.** You cannot simply swap this server for the official one without updating your configuration and workflow.
## Overview
A comprehensive Model Context Protocol server for working with documents stored by LlamaIndex in Qdrant vector databases. Unlike the original server, which provides basic functionality with a fixed document structure, this version offers extensive tooling and automatically adapts to the different payload formats used by LlamaIndex.
## Key Features
- **LlamaIndex Compatibility**: Automatically detects and adapts to different content field names (`text`, `document`, `_node_content`, etc.)
- **Dynamic Embedding Model Detection**: Automatically detects and uses the correct embedding model for each collection based on its vector configuration
- **Embedding Model Whitelist**: Built-in safety mechanism to prevent accidentally loading large models
- **Flexible Metadata Handling**: Works with both flat and nested metadata structures
- **Read-Only Access**: Designed specifically for querying existing LlamaIndex-indexed data
- **Smart Content Detection**: Automatically identifies the most likely content field when standard names aren't found
## Tools
### Read-Only Tools (Available with `QDRANT_READ_ONLY=true`)
1. **`qdrant-find`** - Search and retrieve documents stored by LlamaIndex in Qdrant
- `query` (string): Semantic search query
- `collection_name` (string): Name of the collection to search
- Returns: Relevant documents with content and metadata
2. **`qdrant-get-point`** - Get a specific point by its ID
- `point_id` (string): The ID of the point to retrieve
- `collection_name` (string): The collection to get the point from
- Returns: Point information with content and metadata
3. **`qdrant-get-collections`** - Get a list of all collections
- Returns: Array of collection names in the Qdrant server
4. **`qdrant-get-collection-details`** - Get detailed information about a collection
- `collection_name` (string): The name of the collection
- Returns: Collection configuration, statistics, and status
5. **`qdrant-get-collection-count`** - Get the number of points in a collection
- `collection_name` (string): The name of the collection
- Returns: Number of points in the collection
6. **`qdrant-peek-collection`** - Preview sample points from a collection
- `collection_name` (string): The name of the collection
- `limit` (int, optional): Maximum number of points to return (default: 10)
- Returns: Sample points from the collection
7. **`qdrant-get-documents`** - Retrieve multiple documents by their IDs
- `point_ids` (array of strings): List of point IDs to retrieve
- `collection_name` (string): The collection to get documents from
- Returns: Array of found documents
8. **`qdrant-search-by-vector`** - Search using a raw vector instead of text query
- `vector` (array of floats): The query vector to search with
- `collection_name` (string): The collection to search in
- `limit` (int, optional): Maximum number of results to return (default: 10)
- Returns: Relevant documents based on vector similarity
9. **`qdrant-list-document-ids`** - List document IDs with pagination
- `collection_name` (string): The collection to list IDs from
- `limit` (int, optional): Maximum number of IDs to return (default: 100)
- `offset` (int, optional): Number of IDs to skip for pagination (default: 0)
- Returns: Array of document IDs
10. **`qdrant-scroll-points`** - Paginated retrieval of points using scroll
- `collection_name` (string): The collection to scroll through
- `limit` (int, optional): Maximum number of points to return (default: 10)
- `offset` (int, optional): Offset for pagination
- Returns: Points with pagination info
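The `limit`/`offset` parameters on tools such as `qdrant-list-document-ids` support a simple client-side paging loop. The sketch below stands in for an MCP client: `call_tool` is a hypothetical stub that slices a local list instead of calling the server, but the loop shape is what a real client would use.

```python
# Illustrative pagination loop for a tool like qdrant-list-document-ids.
# `call_tool` is a stand-in for a real MCP tool call.
ALL_IDS = [f"doc-{i}" for i in range(7)]

def call_tool(collection_name: str, limit: int = 100, offset: int = 0) -> list[str]:
    # A real client would send {"collection_name", "limit", "offset"} to the
    # MCP server; this stub just slices a local ID list.
    return ALL_IDS[offset:offset + limit]

def fetch_all_ids(collection: str, page_size: int = 3) -> list[str]:
    ids, offset = [], 0
    while True:
        page = call_tool(collection, limit=page_size, offset=offset)
        ids.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            return ids
        offset += page_size
```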
### Write Tools (Available when `QDRANT_READ_ONLY=false`)
When read-only mode is disabled, additional tools become available for modifying data:
- **`qdrant-store`** - Store new documents in Qdrant
- **`qdrant-delete-point`** - Delete a specific point by ID
- **`qdrant-update-point-payload`** - Update point metadata
- **`qdrant-create-collection`** - Create new collections
- **`qdrant-delete-collection`** - Delete entire collections
- **`qdrant-add-documents`** - Batch add multiple documents
- **`qdrant-delete-documents`** - Batch delete multiple documents
## Environment Variables
The server is configured using environment variables:
| Name | Description | Default Value |
|--------------------------|---------------------------------------------------------------------|-------------------------------------------------------------------|
| `QDRANT_URL` | URL of the Qdrant server | None |
| `QDRANT_API_KEY` | API key for the Qdrant server | None |
| `COLLECTION_NAME` | **Deprecated**: Collection names are now specified dynamically by MCP clients at runtime | None |
| `QDRANT_READ_ONLY` | Enable read-only mode (disables write tools for safety) | `false` |
| `QDRANT_LOCAL_PATH` | Path to the local Qdrant database (alternative to `QDRANT_URL`) | None |
| `EMBEDDING_PROVIDER` | Embedding provider to use (currently only "fastembed" is supported) | `fastembed` |
| `EMBEDDING_MODEL` | Name of the embedding model to use | `sentence-transformers/all-MiniLM-L6-v2` |
| `EMBEDDING_ALLOWED_MODELS` | JSON array of allowed embedding models for dynamic loading | `["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5", "snowflake/snowflake-arctic-embed-xs", "jinaai/jina-embeddings-v2-small-en"]` |
| `TOOL_FIND_DESCRIPTION` | Custom description for the find tool | See default in [`settings.py`](src/mcp_server_qdrant/settings.py) |
Note: You cannot provide both `QDRANT_URL` and `QDRANT_LOCAL_PATH` at the same time.
> [!IMPORTANT]
> Command-line arguments are no longer supported! Please use environment variables for all configuration.
### FastMCP Environment Variables
Since `mcp-server-qdrant` is based on FastMCP, it also supports all the FastMCP environment variables. The most
important ones are listed below:
| Environment Variable | Description | Default Value |
|---------------------------------------|-----------------------------------------------------------|---------------|
| `FASTMCP_DEBUG` | Enable debug mode | `false` |
| `FASTMCP_LOG_LEVEL` | Set logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) | `INFO` |
| `FASTMCP_HOST` | Host address to bind the server to | `127.0.0.1` |
| `FASTMCP_PORT` | Port to run the server on | `8000` |
| `FASTMCP_WARN_ON_DUPLICATE_RESOURCES` | Show warnings for duplicate resources | `true` |
| `FASTMCP_WARN_ON_DUPLICATE_TOOLS` | Show warnings for duplicate tools | `true` |
| `FASTMCP_WARN_ON_DUPLICATE_PROMPTS` | Show warnings for duplicate prompts | `true` |
| `FASTMCP_DEPENDENCIES` | List of dependencies to install in the server environment | `[]` |
## Dynamic Embedding Model Detection
This server automatically detects which embedding model was used for each collection and uses the appropriate model for queries. This is especially useful when you have multiple collections created with different embedding models.
### How It Works
1. **Collection Creation**: When LlamaIndex creates a collection, the full model name is stored as the vector name (e.g., `"BAAI/bge-small-en-v1.5"`)
2. **Query Time**: When searching a collection, the server:
- Inspects the collection's vector configuration
- Extracts the model name from the vector name
- Loads the appropriate embedding model (with caching for performance)
- Uses that model to embed the query
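The model-name extraction step above can be sketched as follows. The function name and the `/`-in-name heuristic are assumptions for illustration; the real server inspects the actual Qdrant collection info objects.

```python
# Sketch of recovering the embedding model from a collection's vector config.
# LlamaIndex stores the model name as the named-vector key, e.g.
# {"BAAI/bge-small-en-v1.5": {"size": 384, "distance": "Cosine"}}.
def detect_collection_model(vectors_config: dict, default_model: str) -> str:
    """Pick the embedding model from a named-vector config, else the default."""
    for vector_name in vectors_config:
        if "/" in vector_name:  # heuristic: model names are namespaced
            return vector_name
    return default_model
```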
### Embedding Model Whitelist
For security and resource management, the server includes a built-in whitelist of allowed embedding models. By default, only small, efficient models are permitted:
- `sentence-transformers/all-MiniLM-L6-v2` (384 dims, ~90MB)
- `BAAI/bge-small-en-v1.5` (384 dims, ~67MB)
- `snowflake/snowflake-arctic-embed-xs` (384 dims, ~90MB)
- `jinaai/jina-embeddings-v2-small-en` (512 dims, ~120MB)
### Customizing the Whitelist
#### Using Environment Variables
```bash
# Allow only specific models
export EMBEDDING_ALLOWED_MODELS='["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]'
# Allow all models (removes safety protection)
export EMBEDDING_ALLOWED_MODELS='null'
```
#### In Claude Desktop Config
```json
{
"mcpServers": {
"qdrant": {
"command": "uvx",
"args": ["mcp-server-qdrant"],
"env": {
"QDRANT_URL": "http://localhost:6333",
"COLLECTION_NAME": "your-collection",
"EMBEDDING_ALLOWED_MODELS": "[\"sentence-transformers/all-MiniLM-L6-v2\", \"BAAI/bge-small-en-v1.5\"]"
}
}
}
}
```
### Behavior with Blocked Models
When the server encounters a collection using a model not in the whitelist:
- ⚠️ Logs a warning message
- 🔄 Falls back to the default configured model (`EMBEDDING_MODEL`)
- ✅ Continues operating normally
This ensures your system remains stable while preventing accidental downloads of large models.
## Installation
### Using uvx
#### From GitHub Repository
```shell
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="your-llamaindex-collection" \
EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
uvx --from git+https://github.com/azhang/qdrant-llamaindex-mcp-server.git qdrant-llamaindex-mcp-server
```
#### From Local Directory
```shell
# Clone and run locally
git clone https://github.com/azhang/qdrant-llamaindex-mcp-server.git
cd qdrant-llamaindex-mcp-server
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="your-llamaindex-collection" \
EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
uvx --from . qdrant-llamaindex-mcp-server
```
#### Transport Protocols
The server supports different transport protocols that can be specified using the `--transport` flag:
```shell
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="your-llamaindex-collection" \
uvx --from git+https://github.com/azhang/qdrant-llamaindex-mcp-server.git qdrant-llamaindex-mcp-server --transport sse
```
Supported transport protocols:
- `stdio` (default): standard input/output transport, intended for local MCP clients
- `sse`: Server-Sent Events transport, suitable for remote clients
- `streamable-http`: streamable HTTP transport, also suitable for remote clients and more recent than SSE
When the SSE transport is used, the server listens on the configured port and waits for incoming connections. The default
port is 8000, but it can be changed with the `FASTMCP_PORT` environment variable.
```shell
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="my-collection" \
FASTMCP_PORT=1234 \
uvx mcp-server-qdrant --transport sse
```
### Using Docker
A Dockerfile is available for building and running the MCP server:
```bash
# Build the container
docker build -t mcp-server-qdrant .
# Run the container
docker run -p 8000:8000 \
-e FASTMCP_HOST="0.0.0.0" \
-e QDRANT_URL="http://your-qdrant-server:6333" \
-e QDRANT_API_KEY="your-api-key" \
-e COLLECTION_NAME="your-collection" \
mcp-server-qdrant
```
> [!TIP]
> Please note that we set `FASTMCP_HOST="0.0.0.0"` to make the server listen on all network interfaces. This is
> necessary when running the server in a Docker container.
### Installing via Smithery
To install Qdrant MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/protocol/mcp-server-qdrant):
```bash
npx @smithery/cli install mcp-server-qdrant --client claude
```
### Manual configuration of Claude Desktop
To use this server with the Claude Desktop app, add the following configuration to the "mcpServers" section of your
`claude_desktop_config.json`:
```json
{
"qdrant": {
"command": "uvx",
"args": ["mcp-server-qdrant"],
"env": {
"QDRANT_URL": "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
"QDRANT_API_KEY": "your_api_key",
"QDRANT_READ_ONLY": "true",
"EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
}
}
}
```
For local Qdrant mode:
```json
{
"qdrant": {
"command": "uvx",
"args": ["mcp-server-qdrant"],
"env": {
"QDRANT_LOCAL_PATH": "/path/to/qdrant/database",
"QDRANT_READ_ONLY": "true",
"EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
}
}
}
```
> [!NOTE]
> **Collection Names**: Collection names are now specified dynamically when using the tools (e.g., when calling `qdrant-find`, you specify which collection to search). This provides more flexibility than the previous approach of hardcoding a single collection name.
By default, the server uses the `sentence-transformers/all-MiniLM-L6-v2` embedding model to encode queries and documents.
For the time being, only [FastEmbed](https://qdrant.github.io/fastembed/) models are supported.
## Support for other tools
This MCP server can be used with any MCP-compatible client. For example, you can use it with
[Cursor](https://docs.cursor.com/context/model-context-protocol) and [VS Code](https://code.visualstudio.com/docs), which provide built-in support for the Model Context
Protocol.
### Using with Cursor/Windsurf
You can configure this MCP server to work as a code search tool for Cursor or Windsurf by customizing the tool
descriptions:
```bash
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="code-snippets" \
TOOL_STORE_DESCRIPTION="Store reusable code snippets for later retrieval. \
The 'information' parameter should contain a natural language description of what the code does, \
while the actual code should be included in the 'metadata' parameter as a 'code' property. \
The value of 'metadata' is a Python dictionary with strings as keys. \
Use this whenever you generate some code snippet." \
TOOL_FIND_DESCRIPTION="Search for relevant code snippets based on natural language descriptions. \
The 'query' parameter should describe what you're looking for, \
and the tool will return the most relevant code snippets. \
Use this when you need to find existing code snippets for reuse or reference." \
uvx mcp-server-qdrant --transport sse # Enable SSE transport
```
In Cursor/Windsurf, you can then configure the MCP server in your settings by pointing to this running server using
SSE transport protocol. The description on how to add an MCP server to Cursor can be found in the [Cursor
documentation](https://docs.cursor.com/context/model-context-protocol#adding-an-mcp-server-to-cursor). If you are
running Cursor/Windsurf locally, you can use the following URL:
```
http://localhost:8000/sse
```
> [!TIP]
> We suggest SSE transport as a preferred way to connect Cursor/Windsurf to the MCP server, as it can support remote
> connections. That makes it easy to share the server with your team or use it in a cloud environment.
This configuration transforms the Qdrant MCP server into a specialized code search tool that can:
1. Store code snippets, documentation, and implementation details
2. Retrieve relevant code examples based on semantic search
3. Help developers find specific implementations or usage patterns
You can populate the database by storing natural language descriptions of code snippets (in the `information` parameter)
along with the actual code (in the `metadata.code` property), and then search for them using natural language queries
that describe what you're looking for.
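The store-then-find workflow above amounts to building arguments like the following. `build_store_args` is a hypothetical helper for illustration, and the exact parameter set of `qdrant-store` (including `collection_name`, which this fork selects at call time) should be checked against the tool's schema.

```python
# Illustrative shape of a qdrant-store call for the code-search setup.
# Field names follow the tool descriptions: 'information' carries the
# natural-language description (which gets embedded), while the snippet
# itself lives in metadata.code as plain payload.
def build_store_args(description: str, code: str, language: str) -> dict:
    return {
        "collection_name": "code-snippets",  # assumed: this fork is collection-dynamic
        "information": description,
        "metadata": {"code": code, "language": language},
    }

args = build_store_args(
    "Read a JSON file and return it as a dict",
    "def load_json(path):\n    import json\n    with open(path) as f:\n        return json.load(f)",
    "python",
)
```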
> [!NOTE]
> The tool descriptions provided above are examples and may need to be customized for your specific use case. Consider
> adjusting the descriptions to better match your team's workflow and the specific types of code snippets you want to
> store and retrieve.
**If you have successfully installed `mcp-server-qdrant` but still can't get it to work with Cursor, consider creating
[Cursor rules](https://docs.cursor.com/context/rules-for-ai) so the MCP tools are always used when the agent produces a
new code snippet.** You can restrict the rules to certain file types, to avoid using the MCP server for documentation or
other types of content.
### Using with Claude Code
You can enhance Claude Code's capabilities by connecting it to this MCP server, enabling semantic search over your
existing codebase.
#### Setting up mcp-server-qdrant
1. Add the MCP server to Claude Code:
```shell
# Add mcp-server-qdrant configured for code search
claude mcp add code-search \
-e QDRANT_URL="http://localhost:6333" \
-e COLLECTION_NAME="code-repository" \
-e EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
-e TOOL_STORE_DESCRIPTION="Store code snippets with descriptions. The 'information' parameter should contain a natural language description of what the code does, while the actual code should be included in the 'metadata' parameter as a 'code' property." \
-e TOOL_FIND_DESCRIPTION="Search for relevant code snippets using natural language. The 'query' parameter should describe the functionality you're looking for." \
-- uvx mcp-server-qdrant
```
2. Verify the server was added:
```shell
claude mcp list
```
#### Using Semantic Code Search in Claude Code
Tool descriptions, specified in `TOOL_STORE_DESCRIPTION` and `TOOL_FIND_DESCRIPTION`, guide Claude Code on how to use
the MCP server. The ones provided above are examples and may need to be customized for your specific use case. However,
Claude Code should already be able to:
1. Use the `qdrant-store` tool to store code snippets with descriptions.
2. Use the `qdrant-find` tool to search for relevant code snippets using natural language.
### Run MCP server in Development Mode
The MCP server can be run in development mode using the `mcp dev` command. This will start the server and open the MCP
inspector in your browser.
```shell
COLLECTION_NAME=mcp-dev fastmcp dev src/mcp_server_qdrant/server.py
```
### Using with VS Code
For one-click installation, use one of the install links below:

[Install with uvx in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D) | [Install with uvx in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D&quality=insiders)

[Install with Docker in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-p%22%2C%228000%3A8000%22%2C%22-i%22%2C%22--rm%22%2C%22-e%22%2C%22QDRANT_URL%22%2C%22-e%22%2C%22QDRANT_API_KEY%22%2C%22-e%22%2C%22COLLECTION_NAME%22%2C%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D) | [Install with Docker in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-p%22%2C%228000%3A8000%22%2C%22-i%22%2C%22--rm%22%2C%22-e%22%2C%22QDRANT_URL%22%2C%22-e%22%2C%22QDRANT_API_KEY%22%2C%22-e%22%2C%22COLLECTION_NAME%22%2C%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D&quality=insiders)
#### Manual Installation
Add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.
```json
{
"mcp": {
"inputs": [
{
"type": "promptString",
"id": "qdrantUrl",
"description": "Qdrant URL"
},
{
"type": "promptString",
"id": "qdrantApiKey",
"description": "Qdrant API Key",
"password": true
},
{
"type": "promptString",
"id": "collectionName",
"description": "Collection Name"
}
],
"servers": {
"qdrant": {
"command": "uvx",
"args": ["mcp-server-qdrant"],
"env": {
"QDRANT_URL": "${input:qdrantUrl}",
"QDRANT_API_KEY": "${input:qdrantApiKey}",
"COLLECTION_NAME": "${input:collectionName}"
}
}
}
}
}
```
Or if you prefer using Docker, add this configuration instead:
```json
{
"mcp": {
"inputs": [
{
"type": "promptString",
"id": "qdrantUrl",
"description": "Qdrant URL"
},
{
"type": "promptString",
"id": "qdrantApiKey",
"description": "Qdrant API Key",
"password": true
},
{
"type": "promptString",
"id": "collectionName",
"description": "Collection Name"
}
],
"servers": {
"qdrant": {
"command": "docker",
"args": [
"run",
"-p", "8000:8000",
"-i",
"--rm",
"-e", "QDRANT_URL",
"-e", "QDRANT_API_KEY",
"-e", "COLLECTION_NAME",
"mcp-server-qdrant"
],
"env": {
"QDRANT_URL": "${input:qdrantUrl}",
"QDRANT_API_KEY": "${input:qdrantApiKey}",
"COLLECTION_NAME": "${input:collectionName}"
}
}
}
}
}
```
Alternatively, you can create a `.vscode/mcp.json` file in your workspace with the following content:
```json
{
"inputs": [
{
"type": "promptString",
"id": "qdrantUrl",
"description": "Qdrant URL"
},
{
"type": "promptString",
"id": "qdrantApiKey",
"description": "Qdrant API Key",
"password": true
},
{
"type": "promptString",
"id": "collectionName",
"description": "Collection Name"
}
],
"servers": {
"qdrant": {
"command": "uvx",
"args": ["mcp-server-qdrant"],
"env": {
"QDRANT_URL": "${input:qdrantUrl}",
"QDRANT_API_KEY": "${input:qdrantApiKey}",
"COLLECTION_NAME": "${input:collectionName}"
}
}
}
}
```
For workspace configuration with Docker, use this in `.vscode/mcp.json`:
```json
{
"inputs": [
{
"type": "promptString",
"id": "qdrantUrl",
"description": "Qdrant URL"
},
{
"type": "promptString",
"id": "qdrantApiKey",
"description": "Qdrant API Key",
"password": true
},
{
"type": "promptString",
"id": "collectionName",
"description": "Collection Name"
}
],
"servers": {
"qdrant": {
"command": "docker",
"args": [
"run",
"-p", "8000:8000",
"-i",
"--rm",
"-e", "QDRANT_URL",
"-e", "QDRANT_API_KEY",
"-e", "COLLECTION_NAME",
"mcp-server-qdrant"
],
"env": {
"QDRANT_URL": "${input:qdrantUrl}",
"QDRANT_API_KEY": "${input:qdrantApiKey}",
"COLLECTION_NAME": "${input:collectionName}"
}
}
}
}
```
## Contributing
If you have suggestions for how mcp-server-qdrant could be improved, or want to report a bug, open an issue!
We'd love any and all contributions.
### Testing `mcp-server-qdrant` locally
The [MCP inspector](https://github.com/modelcontextprotocol/inspector) is a developer tool for testing and debugging MCP
servers. It runs both a client UI (default port 5173) and an MCP proxy server (default port 3000). Open the client UI in
your browser to use the inspector.
```shell
QDRANT_URL=":memory:" COLLECTION_NAME="test" \
fastmcp dev src/mcp_server_qdrant/server.py
```
Once started, open your browser to http://localhost:5173 to access the inspector interface.
## License
This MCP server is licensed under the Apache License 2.0. This means you are free to use, modify, and distribute the
software, subject to the terms and conditions of the Apache License 2.0. For more details, please see the LICENSE file
in the project repository.
"description": "# qdrant-llamaindex-mcp-server: LlamaIndex-Compatible Qdrant MCP Server\n\n> The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is an open protocol that enables\n> seamless integration between LLM applications and external data sources and tools. Whether you're building an\n> AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to\n> connect LLMs with the context they need.\n\nThis repository is a fork of [qdrant/mcp-server-qdrant](https://github.com/qdrant/mcp-server-qdrant) specifically designed to work with documents stored by [LlamaIndex](https://llamaindex.ai/) in [Qdrant](https://qdrant.tech/) vector databases.\n\n## \u26a0\ufe0f Important Differences from Official Server\n\nThis fork has **breaking changes** compared to the official `qdrant/mcp-server-qdrant`:\n\n- **\ud83d\udd27 Many More Tools**: Provides 10+ tools vs. the official server's basic find/store tools\n- **\ud83c\udfaf Dynamic Collection Selection**: Collection names are specified at runtime by MCP clients, not hardcoded in configuration \n- **\ud83e\udd16 Dynamic Embedding Model Detection**: Automatically detects and loads the correct embedding model for each collection\n- **\ud83d\udcda LlamaIndex Compatibility**: Adapts to different content field names and metadata structures used by LlamaIndex\n- **\ud83d\udd12 Enhanced Security**: Built-in embedding model whitelist to prevent accidental loading of large models\n\n**These changes make configurations incompatible with the official server.** You cannot simply swap this server for the official one without updating your configuration and workflow.\n\n## Overview\n\nA comprehensive Model Context Protocol server for working with documents stored by LlamaIndex in Qdrant vector databases. 
Unlike the original server which provides basic functionality with a fixed document structure, this version offers extensive tooling and automatically adapts to different payload formats used by LlamaIndex.\n\n## Key Features\n\n- **LlamaIndex Compatibility**: Automatically detects and adapts to different content field names (`text`, `document`, `_node_content`, etc.)\n- **Dynamic Embedding Model Detection**: Automatically detects and uses the correct embedding model for each collection based on its vector configuration\n- **Embedding Model Whitelist**: Built-in safety mechanism to prevent accidentally loading large models\n- **Flexible Metadata Handling**: Works with both flat and nested metadata structures\n- **Read-Only Access**: Designed specifically for querying existing LlamaIndex-indexed data\n- **Smart Content Detection**: Automatically identifies the most likely content field when standard names aren't found\n\n## Tools\n\n### Read-Only Tools (Available with `QDRANT_READ_ONLY=true`)\n\n1. **`qdrant-find`** - Search and retrieve documents stored by LlamaIndex in Qdrant\n - `query` (string): Semantic search query\n - `collection_name` (string): Name of the collection to search\n - Returns: Relevant documents with content and metadata\n\n2. **`qdrant-get-point`** - Get a specific point by its ID\n - `point_id` (string): The ID of the point to retrieve\n - `collection_name` (string): The collection to get the point from\n - Returns: Point information with content and metadata\n\n3. **`qdrant-get-collections`** - Get a list of all collections\n - Returns: Array of collection names in the Qdrant server\n\n4. **`qdrant-get-collection-details`** - Get detailed information about a collection\n - `collection_name` (string): The name of the collection\n - Returns: Collection configuration, statistics, and status\n\n5. 
**`qdrant-get-collection-count`** - Get the number of points in a collection
   - `collection_name` (string): The name of the collection
   - Returns: Number of points in the collection

6. **`qdrant-peek-collection`** - Preview sample points from a collection
   - `collection_name` (string): The name of the collection
   - `limit` (int, optional): Maximum number of points to return (default: 10)
   - Returns: Sample points from the collection

7. **`qdrant-get-documents`** - Retrieve multiple documents by their IDs
   - `point_ids` (array of strings): List of point IDs to retrieve
   - `collection_name` (string): The collection to get documents from
   - Returns: Array of found documents

8. **`qdrant-search-by-vector`** - Search using a raw vector instead of a text query
   - `vector` (array of floats): The query vector to search with
   - `collection_name` (string): The collection to search in
   - `limit` (int, optional): Maximum number of results to return (default: 10)
   - Returns: Relevant documents based on vector similarity

9. **`qdrant-list-document-ids`** - List document IDs with pagination
   - `collection_name` (string): The collection to list IDs from
   - `limit` (int, optional): Maximum number of IDs to return (default: 100)
   - `offset` (int, optional): Number of IDs to skip for pagination (default: 0)
   - Returns: Array of document IDs

10. **`qdrant-scroll-points`** - Paginated retrieval of points using scroll
    - `collection_name` (string): The collection to scroll through
    - `limit` (int, optional): Maximum number of points to return (default: 10)
    - `offset` (int, optional): Offset for pagination
    - Returns: Points with pagination info

### Write Tools (Available when `QDRANT_READ_ONLY=false`)

When read-only mode is disabled, additional tools become available for modifying data:

- **`qdrant-store`** - Store new documents in Qdrant
- **`qdrant-delete-point`** - Delete a specific point by ID
- **`qdrant-update-point-payload`** - Update point metadata
- **`qdrant-create-collection`** - Create new collections
- **`qdrant-delete-collection`** - Delete entire collections
- **`qdrant-add-documents`** - Batch add multiple documents
- **`qdrant-delete-documents`** - Batch delete multiple documents

## Environment Variables

The server is configured using environment variables:

| Name | Description | Default Value |
|----------------------------|---------------------------------------------------------------------|-------------------------------------------------------------------|
| `QDRANT_URL` | URL of the Qdrant server | None |
| `QDRANT_API_KEY` | API key for the Qdrant server | None |
| `COLLECTION_NAME` | **Deprecated**: Collection names are now specified dynamically by MCP clients at runtime | None |
| `QDRANT_READ_ONLY` | Enable read-only mode (disables write tools for safety) | `false` |
| `QDRANT_LOCAL_PATH` | Path to the local Qdrant database (alternative to `QDRANT_URL`) | None |
| `EMBEDDING_PROVIDER` | Embedding provider to use (currently only "fastembed" is supported) | `fastembed` |
| `EMBEDDING_MODEL` | Name of the embedding model to use | `sentence-transformers/all-MiniLM-L6-v2` |
| `EMBEDDING_ALLOWED_MODELS` | JSON array of allowed embedding models for dynamic loading | `["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5", "snowflake/snowflake-arctic-embed-xs", "jinaai/jina-embeddings-v2-small-en"]` |
| `TOOL_FIND_DESCRIPTION` | Custom description for the find tool | See default in [`settings.py`](src/mcp_server_qdrant/settings.py) |

Note: You cannot provide both `QDRANT_URL` and `QDRANT_LOCAL_PATH` at the same time.

> [!IMPORTANT]
> Command-line arguments are no longer supported! Please use environment variables for all configuration.

### FastMCP Environment Variables

Since `mcp-server-qdrant` is based on FastMCP, it also supports all the FastMCP environment variables. The most
important ones are listed below:

| Environment Variable | Description | Default Value |
|---------------------------------------|-----------------------------------------------------------|---------------|
| `FASTMCP_DEBUG` | Enable debug mode | `false` |
| `FASTMCP_LOG_LEVEL` | Set logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) | `INFO` |
| `FASTMCP_HOST` | Host address to bind the server to | `127.0.0.1` |
| `FASTMCP_PORT` | Port to run the server on | `8000` |
| `FASTMCP_WARN_ON_DUPLICATE_RESOURCES` | Show warnings for duplicate resources | `true` |
| `FASTMCP_WARN_ON_DUPLICATE_TOOLS` | Show warnings for duplicate tools | `true` |
| `FASTMCP_WARN_ON_DUPLICATE_PROMPTS` | Show warnings for duplicate prompts | `true` |
| `FASTMCP_DEPENDENCIES` | List of dependencies to install in the server environment | `[]` |

## Dynamic Embedding Model Detection

This server automatically detects which embedding model was used for each collection and uses the appropriate model for queries. This is especially useful when you have multiple collections created with different embedding models.

### How It Works

1. **Collection Creation**: When LlamaIndex creates a collection, the full model name is stored as the vector name (e.g., `"BAAI/bge-small-en-v1.5"`)
2. 
**Query Time**: When searching a collection, the server:
   - Inspects the collection's vector configuration
   - Extracts the model name from the vector name
   - Loads the appropriate embedding model (with caching for performance)
   - Uses that model to embed the query

### Embedding Model Whitelist

For security and resource management, the server includes a built-in whitelist of allowed embedding models. By default, only small, efficient models are permitted:

- `sentence-transformers/all-MiniLM-L6-v2` (384 dims, ~90MB)
- `BAAI/bge-small-en-v1.5` (384 dims, ~67MB)
- `snowflake/snowflake-arctic-embed-xs` (384 dims, ~90MB)
- `jinaai/jina-embeddings-v2-small-en` (512 dims, ~120MB)

### Customizing the Whitelist

#### Using Environment Variables

```bash
# Allow only specific models
export EMBEDDING_ALLOWED_MODELS='["sentence-transformers/all-MiniLM-L6-v2", "BAAI/bge-small-en-v1.5"]'

# Allow all models (removes safety protection)
export EMBEDDING_ALLOWED_MODELS='null'
```

#### In Claude Desktop Config

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "your-collection",
        "EMBEDDING_ALLOWED_MODELS": "[\"sentence-transformers/all-MiniLM-L6-v2\", \"BAAI/bge-small-en-v1.5\"]"
      }
    }
  }
}
```

### Behavior with Blocked Models

When the server encounters a collection using a model not in the whitelist, it:

- ⚠️ Logs a warning message
- 🔄 Falls back to the default configured model (`EMBEDDING_MODEL`)
- ✅ Continues operating normally

This keeps your system stable while preventing accidental downloads of large models.

## Installation

### Using uvx

#### From GitHub Repository

```shell
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="your-llamaindex-collection" \
EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
uvx --from git+https://github.com/azhang/qdrant-llamaindex-mcp-server.git qdrant-llamaindex-mcp-server
```

#### From Local Directory

```shell
# Clone and run locally
git clone https://github.com/azhang/qdrant-llamaindex-mcp-server.git
cd qdrant-llamaindex-mcp-server

QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="your-llamaindex-collection" \
EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
uvx --from . qdrant-llamaindex-mcp-server
```

#### Transport Protocols

The server supports different transport protocols that can be specified using the `--transport` flag:

```shell
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="your-llamaindex-collection" \
uvx --from git+https://github.com/azhang/qdrant-llamaindex-mcp-server.git qdrant-llamaindex-mcp-server --transport sse
```

Supported transport protocols:

- `stdio` (default): Standard input/output transport, can only be used by local MCP clients
- `sse`: Server-Sent Events transport, well suited for remote clients
- `streamable-http`: Streamable HTTP transport, also suited for remote clients and more recent than SSE

When SSE transport is used, the server will listen on the specified port and wait for incoming connections.
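For remote transports, clients connect over HTTP rather than spawning the process themselves. As a sketch, the URL a client connects to can be derived from the host, port, and transport. This is a hypothetical helper, and the endpoint paths (`/sse` for SSE, `/mcp` for streamable HTTP) are FastMCP's usual defaults, which you should verify against your FastMCP version:

```python
# Hypothetical helper: derive the URL a remote MCP client should connect to.
# The "/sse" and "/mcp" paths are assumed FastMCP defaults, not guaranteed.

def client_endpoint(transport: str, host: str = "127.0.0.1", port: int = 8000) -> str:
    """Return the connection URL for a given remote transport."""
    paths = {"sse": "/sse", "streamable-http": "/mcp"}
    if transport == "stdio":
        raise ValueError("stdio has no network endpoint; the client spawns the server process")
    return f"http://{host}:{port}{paths[transport]}"

print(client_endpoint("sse", "localhost", 8000))  # http://localhost:8000/sse
```

The host and port here correspond to the `FASTMCP_HOST` and `FASTMCP_PORT` environment variables.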
The default port is 8000, but it can be changed using the `FASTMCP_PORT` environment variable.

```shell
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="my-collection" \
FASTMCP_PORT=1234 \
uvx mcp-server-qdrant --transport sse
```

### Using Docker

A Dockerfile is available for building and running the MCP server:

```bash
# Build the container
docker build -t mcp-server-qdrant .

# Run the container
docker run -p 8000:8000 \
  -e FASTMCP_HOST="0.0.0.0" \
  -e QDRANT_URL="http://your-qdrant-server:6333" \
  -e QDRANT_API_KEY="your-api-key" \
  -e COLLECTION_NAME="your-collection" \
  mcp-server-qdrant
```

> [!TIP]
> Please note that we set `FASTMCP_HOST="0.0.0.0"` to make the server listen on all network interfaces. This is
> necessary when running the server in a Docker container.

### Installing via Smithery

To install Qdrant MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/protocol/mcp-server-qdrant):

```bash
npx @smithery/cli install mcp-server-qdrant --client claude
```

### Manual configuration of Claude Desktop

To use this server with the Claude Desktop app, add the following configuration to the "mcpServers" section of your
`claude_desktop_config.json`:

```json
{
  "qdrant": {
    "command": "uvx",
    "args": ["mcp-server-qdrant"],
    "env": {
      "QDRANT_URL": "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
      "QDRANT_API_KEY": "your_api_key",
      "QDRANT_READ_ONLY": "true",
      "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
    }
  }
}
```

For local Qdrant mode:

```json
{
  "qdrant": {
    "command": "uvx",
    "args": ["mcp-server-qdrant"],
    "env": {
      "QDRANT_LOCAL_PATH": "/path/to/qdrant/database",
      "QDRANT_READ_ONLY": "true",
      "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
    }
  }
}
```

> [!NOTE]
> **Collection Names**: Collection names are now specified dynamically when using the tools (e.g., when calling `qdrant-find`, you specify which collection to search). This provides more flexibility than the previous approach of hardcoding a single collection name.

By default, the server will use the `sentence-transformers/all-MiniLM-L6-v2` embedding model to encode memories.
For the time being, only [FastEmbed](https://qdrant.github.io/fastembed/) models are supported.

## Support for other tools

This MCP server can be used with any MCP-compatible client. For example, you can use it with
[Cursor](https://docs.cursor.com/context/model-context-protocol) and [VS Code](https://code.visualstudio.com/docs), both of which provide built-in support for the Model Context Protocol.

### Using with Cursor/Windsurf

You can configure this MCP server to work as a code search tool for Cursor or Windsurf by customizing the tool
descriptions:

```bash
QDRANT_URL="http://localhost:6333" \
COLLECTION_NAME="code-snippets" \
TOOL_STORE_DESCRIPTION="Store reusable code snippets for later retrieval. \
The 'information' parameter should contain a natural language description of what the code does, \
while the actual code should be included in the 'metadata' parameter as a 'code' property. \
The value of 'metadata' is a Python dictionary with strings as keys. \
Use this whenever you generate some code snippet." \
TOOL_FIND_DESCRIPTION="Search for relevant code snippets based on natural language descriptions. \
The 'query' parameter should describe what you're looking for, \
and the tool will return the most relevant code snippets. \
Use this when you need to find existing code snippets for reuse or reference." \
uvx mcp-server-qdrant --transport sse # Enable SSE transport
```

In Cursor/Windsurf, you can then configure the MCP server in your settings by pointing to this running server using the SSE transport protocol.
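The division of labor between the `information` and `metadata` parameters described in the tool descriptions above can be illustrated with a small sketch. The helper below is hypothetical, and the field names follow the tool descriptions rather than a formal schema:

```python
# Sketch of the payload shape for a qdrant-store call in the code-search setup.
# Field names ('information', 'metadata', 'code') are taken from the tool
# descriptions above; this is illustrative, not a documented wire format.

def build_store_payload(description: str, code: str) -> dict:
    """Pair a natural-language description with the code snippet it describes."""
    return {
        "information": description,   # what gets embedded and searched semantically
        "metadata": {"code": code},   # the snippet itself, stored as payload
    }

payload = build_store_payload(
    "Read a file and return its lines without trailing newlines",
    "def read_lines(path):\n    with open(path) as f:\n        return f.read().splitlines()",
)
print(sorted(payload))  # ['information', 'metadata']
```

Keeping the description in `information` (not the code itself) is what makes semantic search work well: queries are matched against natural-language descriptions, and the code rides along in the payload.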
Instructions for adding an MCP server to Cursor can be found in the [Cursor
documentation](https://docs.cursor.com/context/model-context-protocol#adding-an-mcp-server-to-cursor). If you are
running Cursor/Windsurf locally, you can use the following URL:

```
http://localhost:8000/sse
```

> [!TIP]
> We suggest SSE transport as the preferred way to connect Cursor/Windsurf to the MCP server, as it supports remote
> connections. That makes it easy to share the server with your team or use it in a cloud environment.

This configuration transforms the Qdrant MCP server into a specialized code search tool that can:

1. Store code snippets, documentation, and implementation details
2. Retrieve relevant code examples based on semantic search
3. Help developers find specific implementations or usage patterns

You can populate the database by storing natural language descriptions of code snippets (in the `information` parameter)
along with the actual code (in the `metadata.code` property), and then search for them using natural language queries
that describe what you're looking for.

> [!NOTE]
> The tool descriptions provided above are examples and may need to be customized for your specific use case. Consider
> adjusting the descriptions to better match your team's workflow and the specific types of code snippets you want to
> store and retrieve.

**If you have successfully installed `mcp-server-qdrant` but still can't get it to work with Cursor, consider
creating [Cursor rules](https://docs.cursor.com/context/rules-for-ai) so the MCP tools are always used when
the agent produces a new code snippet.** You can restrict the rules to only work for certain file types, to avoid using
the MCP server for documentation or other types of content.

### Using with Claude Code

You can enhance Claude Code's capabilities by connecting it to this MCP server, enabling semantic search over your
existing codebase.

#### Setting up mcp-server-qdrant

1. Add the MCP server to Claude Code:

   ```shell
   # Add mcp-server-qdrant configured for code search
   claude mcp add code-search \
     -e QDRANT_URL="http://localhost:6333" \
     -e COLLECTION_NAME="code-repository" \
     -e EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
     -e TOOL_STORE_DESCRIPTION="Store code snippets with descriptions. The 'information' parameter should contain a natural language description of what the code does, while the actual code should be included in the 'metadata' parameter as a 'code' property." \
     -e TOOL_FIND_DESCRIPTION="Search for relevant code snippets using natural language. The 'query' parameter should describe the functionality you're looking for." \
     -- uvx mcp-server-qdrant
   ```

2. Verify the server was added:

   ```shell
   claude mcp list
   ```

#### Using Semantic Code Search in Claude Code

Tool descriptions, specified in `TOOL_STORE_DESCRIPTION` and `TOOL_FIND_DESCRIPTION`, guide Claude Code on how to use
the MCP server. The ones provided above are examples and may need to be customized for your specific use case. However,
Claude Code should already be able to:

1. 
Use the `qdrant-store` tool to store code snippets with descriptions.
2. Use the `qdrant-find` tool to search for relevant code snippets using natural language.

### Run MCP server in Development Mode

The MCP server can be run in development mode using the `mcp dev` command. This will start the server and open the MCP
inspector in your browser.

```shell
COLLECTION_NAME=mcp-dev fastmcp dev src/mcp_server_qdrant/server.py
```

### Using with VS Code

For one-click installation, click one of the install buttons below:

[Install with uvx in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D) [Install with uvx in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D&quality=insiders)

[Install with Docker in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-p%22%2C%228000%3A8000%22%2C%22-i%22%2C%22--rm%22%2C%22-e%22%2C%22QDRANT_URL%22%2C%22-e%22%2C%22QDRANT_API_KEY%22%2C%22-e%22%2C%22COLLECTION_NAME%22%2C%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D) [Install with Docker in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=qdrant&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22-p%22%2C%228000%3A8000%22%2C%22-i%22%2C%22--rm%22%2C%22-e%22%2C%22QDRANT_URL%22%2C%22-e%22%2C%22QDRANT_API_KEY%22%2C%22-e%22%2C%22COLLECTION_NAME%22%2C%22mcp-server-qdrant%22%5D%2C%22env%22%3A%7B%22QDRANT_URL%22%3A%22%24%7Binput%3AqdrantUrl%7D%22%2C%22QDRANT_API_KEY%22%3A%22%24%7Binput%3AqdrantApiKey%7D%22%2C%22COLLECTION_NAME%22%3A%22%24%7Binput%3AcollectionName%7D%22%7D%7D&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantUrl%22%2C%22description%22%3A%22Qdrant+URL%22%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22qdrantApiKey%22%2C%22description%22%3A%22Qdrant+API+Key%22%2C%22password%22%3Atrue%7D%2C%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22collectionName%22%2C%22description%22%3A%22Collection+Name%22%7D%5D&quality=insiders)

#### Manual Installation

Add the following JSON block to your User Settings (JSON) file in VS Code.
You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.

```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "qdrantUrl",
        "description": "Qdrant URL"
      },
      {
        "type": "promptString",
        "id": "qdrantApiKey",
        "description": "Qdrant API Key",
        "password": true
      },
      {
        "type": "promptString",
        "id": "collectionName",
        "description": "Collection Name"
      }
    ],
    "servers": {
      "qdrant": {
        "command": "uvx",
        "args": ["mcp-server-qdrant"],
        "env": {
          "QDRANT_URL": "${input:qdrantUrl}",
          "QDRANT_API_KEY": "${input:qdrantApiKey}",
          "COLLECTION_NAME": "${input:collectionName}"
        }
      }
    }
  }
}
```

Or if you prefer using Docker, add this configuration instead:

```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "qdrantUrl",
        "description": "Qdrant URL"
      },
      {
        "type": "promptString",
        "id": "qdrantApiKey",
        "description": "Qdrant API Key",
        "password": true
      },
      {
        "type": "promptString",
        "id": "collectionName",
        "description": "Collection Name"
      }
    ],
    "servers": {
      "qdrant": {
        "command": "docker",
        "args": [
          "run",
          "-p", "8000:8000",
          "-i",
          "--rm",
          "-e", "QDRANT_URL",
          "-e", "QDRANT_API_KEY",
          "-e", "COLLECTION_NAME",
          "mcp-server-qdrant"
        ],
        "env": {
          "QDRANT_URL": "${input:qdrantUrl}",
          "QDRANT_API_KEY": "${input:qdrantApiKey}",
          "COLLECTION_NAME": "${input:collectionName}"
        }
      }
    }
  }
}
```

Alternatively, you can create a `.vscode/mcp.json` file in your workspace with the following content:

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "qdrantUrl",
      "description": "Qdrant URL"
    },
    {
      "type": "promptString",
      "id": "qdrantApiKey",
      "description": "Qdrant API Key",
      "password": true
    },
    {
      "type": "promptString",
      "id": "collectionName",
      "description": "Collection Name"
    }
  ],
  "servers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "${input:qdrantUrl}",
        "QDRANT_API_KEY": "${input:qdrantApiKey}",
        "COLLECTION_NAME": "${input:collectionName}"
      }
    }
  }
}
```

For workspace configuration with Docker, use this in `.vscode/mcp.json`:

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "qdrantUrl",
      "description": "Qdrant URL"
    },
    {
      "type": "promptString",
      "id": "qdrantApiKey",
      "description": "Qdrant API Key",
      "password": true
    },
    {
      "type": "promptString",
      "id": "collectionName",
      "description": "Collection Name"
    }
  ],
  "servers": {
    "qdrant": {
      "command": "docker",
      "args": [
        "run",
        "-p", "8000:8000",
        "-i",
        "--rm",
        "-e", "QDRANT_URL",
        "-e", "QDRANT_API_KEY",
        "-e", "COLLECTION_NAME",
        "mcp-server-qdrant"
      ],
      "env": {
        "QDRANT_URL": "${input:qdrantUrl}",
        "QDRANT_API_KEY": "${input:qdrantApiKey}",
        "COLLECTION_NAME": "${input:collectionName}"
      }
    }
  }
}
```

## Contributing

If you have suggestions for how mcp-server-qdrant could be improved, or want to report a bug, open an issue!
We'd love any and all contributions.

### Testing `mcp-server-qdrant` locally

The [MCP inspector](https://github.com/modelcontextprotocol/inspector) is a developer tool for testing and debugging MCP
servers. It runs both a client UI (default port 5173) and an MCP proxy server (default port 3000). Open the client UI in
your browser to use the inspector.

```shell
QDRANT_URL=":memory:" COLLECTION_NAME="test" \
fastmcp dev src/mcp_server_qdrant/server.py
```

Once started, open your browser to http://localhost:5173 to access the inspector interface.

## License

This MCP server is licensed under the Apache License 2.0. This means you are free to use, modify, and distribute the
software, subject to the terms and conditions of the Apache License 2.0. For more details, please see the LICENSE file
in the project repository.
"bugtrack_url": null,
"license": null,
"summary": "MCP server for reading LlamaIndex documents stored in Qdrant vector database",
"version": "0.1.0",
"project_urls": {
"Homepage": "https://github.com/azhang/qdrant-llamaindex-mcp-server",
"Issues": "https://github.com/azhang/qdrant-llamaindex-mcp-server/issues",
"Repository": "https://github.com/azhang/qdrant-llamaindex-mcp-server"
},
"split_keywords": [
"embeddings",
" llamaindex",
" mcp",
" qdrant",
" vector-database"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "93146b0de7c27bccf6e22a14dc2eef6a0558a273f5f9fccec2605252e38b1b30",
"md5": "b4c15b50821f377d7f1e6321f16d8b68",
"sha256": "004f551a1df28be6fa849166787435fe349879dd64c2caaa7e5a0d4aaa8aff21"
},
"downloads": -1,
"filename": "qdrant_llamaindex_mcp_server-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "b4c15b50821f377d7f1e6321f16d8b68",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 31745,
"upload_time": "2025-08-07T16:14:09",
"upload_time_iso_8601": "2025-08-07T16:14:09.263356Z",
"url": "https://files.pythonhosted.org/packages/93/14/6b0de7c27bccf6e22a14dc2eef6a0558a273f5f9fccec2605252e38b1b30/qdrant_llamaindex_mcp_server-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "89ae3095408d25be9adedb4a2716caf93f12e7b1d55cfb2561cb7c6dd02723c9",
"md5": "20dfadf14886607f8004adba3c725db2",
"sha256": "be3d1c0fa9abac0d96d9f26443c311683fba99678ecadcb8e399d9c2b1243fe0"
},
"downloads": -1,
"filename": "qdrant_llamaindex_mcp_server-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "20dfadf14886607f8004adba3c725db2",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 133235,
"upload_time": "2025-08-07T16:14:11",
"upload_time_iso_8601": "2025-08-07T16:14:11.343224Z",
"url": "https://files.pythonhosted.org/packages/89/ae/3095408d25be9adedb4a2716caf93f12e7b1d55cfb2561cb7c6dd02723c9/qdrant_llamaindex_mcp_server-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-07 16:14:11",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "azhang",
"github_project": "qdrant-llamaindex-mcp-server",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "qdrant-llamaindex-mcp-server"
}