| Name | deepset-mcp |
| --- | --- |
| Version | 0.0.3 |
| Summary | Collection of MCP tools and Agents to work with the deepset AI platform. Create, debug, or learn about pipelines on the platform. Usable from the CLI, Cursor, Claude Code, or other MCP clients. |
| Upload time | 2025-07-08 09:53:53 |
| Author | None |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | >=3.11 |
| License | None |
| Keywords | agents, haystack, llm, mcp, deepset, pipelines |

# MCP Server for the deepset AI platform
The deepset MCP server exposes tools that MCP clients like Claude or Cursor can use to interact with the deepset AI platform.
Agents can use these tools to:
- develop and iterate on Pipelines or Indexes
- debug Pipelines and Indexes
- search the deepset AI platform documentation
## Contents
- [1. Installation](#installation)
  - [1.1. Claude Desktop](#claude-desktop-app)
  - [1.2. Other MCP Clients](#other-mcp-clients)
  - [1.3. Advanced Configuration](#advanced-configuration)
- [2. Prompts](#prompts)
- [3. Use Cases](#use-cases)
  - [3.1. Creating Pipelines](#creating-pipelines)
  - [3.2. Debugging Pipelines](#debugging-pipelines)
- [4. CLI](#cli)

## Installation
### Claude Desktop App
**Prerequisites:**
- [Claude Desktop App](https://claude.ai/download) needs to be installed
- You need to be on the Claude Pro, Team, Max, or Enterprise plan
- You need an installation of [Docker](https://docs.docker.com/desktop/) ([Go here](#using-uv-instead-of-docker) if you want to use `uv` instead of Docker)
- You need an [API key](https://docs.cloud.deepset.ai/docs/generate-api-key) for the deepset platform

**Steps:**
1. Go to: `/Users/your_user/Library/Application Support/Claude` (Mac)
2. Either open or create `claude_desktop_config.json`
3. Add the following JSON as your config (or merge it into your existing config if you already use other MCP servers)
```json
{
  "mcpServers": {
    "deepset": {
      "command": "/usr/local/bin/docker",
      "args": [
        "run",
        "-i",
        "-e",
        "DEEPSET_WORKSPACE",
        "-e",
        "DEEPSET_API_KEY",
        "deepset/deepset-mcp-server:main"
      ],
      "env": {
        "DEEPSET_WORKSPACE": "<WORKSPACE>",
        "DEEPSET_API_KEY": "<DEEPSET_API_KEY>"
      }
    }
  }
}
```
4. Quit and start the Claude Desktop App
5. The deepset server should appear in the "Search and Tools" menu (this takes a few seconds as the Docker image needs to be downloaded and started)
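
If the server does not show up, a quick way to narrow down the problem is to run the same Docker command manually from a terminal. This is only a sketch of an optional sanity check; it mirrors the `args` and environment variables from the config above:

```shell
# Optional manual check (not required by the setup above): confirm Docker can
# pull and start the server image before Claude Desktop tries to.
export DEEPSET_WORKSPACE="<WORKSPACE>"
export DEEPSET_API_KEY="<DEEPSET_API_KEY>"

# The server speaks MCP over stdio, so it will wait silently for a client;
# press Ctrl+C to stop it once it starts without errors.
docker run -i -e DEEPSET_WORKSPACE -e DEEPSET_API_KEY deepset/deepset-mcp-server:main
```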

#### Using uv instead of Docker
Running the server with uv gives you faster startup times and consumes slightly fewer resources on your system.
1. [Install uv](https://docs.astral.sh/uv/guides/install-python/) if you don't have it yet
2. Put the following into your `claude_desktop_config.json`
```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": [
        "deepset-mcp"
      ],
      "env": {
        "DEEPSET_WORKSPACE": "<WORKSPACE>",
        "DEEPSET_API_KEY": "<DEEPSET_API_KEY>"
      }
    }
  }
}
```
This will load the [deepset-mcp package from PyPI](https://pypi.org/project/deepset-mcp/) and install it into a temporary virtual environment.
3. Quit and start the Claude Desktop App
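
To verify the uv setup outside of Claude Desktop, you can run roughly the same command in a terminal. This is a manual sketch that mirrors the config above:

```shell
# Rough equivalent of what Claude Desktop runs: uvx resolves deepset-mcp from
# PyPI into a temporary environment and starts the stdio server.
export DEEPSET_WORKSPACE="<WORKSPACE>"
export DEEPSET_API_KEY="<DEEPSET_API_KEY>"

# Stop it with Ctrl+C once it launches cleanly.
uvx deepset-mcp
```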
### Other MCP Clients
`deepset-mcp` can be used with other MCP clients.
These pages describe where to add the `deepset-mcp` configuration for:
- [Cursor](https://docs.cursor.com/context/mcp#using-mcp-json)
- [Claude Code](https://docs.anthropic.com/en/docs/claude-code/mcp#configure-mcp-servers)
- [Gemini CLI](https://cloud.google.com/gemini/docs/codeassist/use-agentic-chat-pair-programmer#configure-mcp-servers)

In general, you configure an MCP client with one of the following commands, depending on your installation:
`uvx deepset-mcp --workspace your_workspace --api-key your_api_key`
If you installed the deepset-mcp package globally and added it to your `PATH`, you can just run:
`deepset-mcp --workspace your_workspace --api-key your_api_key`
The server runs locally using `stdio` to communicate with the client.
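
As one concrete example, Claude Code can register the server from a terminal. The sketch below assumes the `claude mcp add` command described in the Claude Code documentation linked above; verify the exact flags against that page:

```shell
# Hedged example for Claude Code: register the stdio server under the name
# "deepset", passing the required environment variables with -e and the launch
# command after --. Flags may differ between Claude Code versions.
claude mcp add deepset \
  -e DEEPSET_API_KEY="<DEEPSET_API_KEY>" \
  -e DEEPSET_WORKSPACE="<WORKSPACE>" \
  -- uvx deepset-mcp
```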
### Advanced Configuration
#### Tool Selection
You can customize which tools the MCP server should expose.
Use the `--tools` option in your config to explicitly specify which tools should be exposed.
You can list available tools with: `deepset-mcp --list-tools`.
To only expose the `list_pipelines` and `get_pipeline` tools you would use the following command:
`deepset-mcp --tools list_pipelines get_pipeline`
For smooth operations, you should always expose the `get_from_object_store` and `get_slice_from_object_store` tools.
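
Putting that together, a full invocation that restricts the server to the two pipeline tools plus the recommended object-store tools might look like this (a sketch; substitute your own workspace and API key):

```shell
# Expose only list_pipelines and get_pipeline, plus the object-store tools
# recommended above for smooth operation.
uvx deepset-mcp \
  --workspace your_workspace \
  --api-key your_api_key \
  --tools list_pipelines get_pipeline get_from_object_store get_slice_from_object_store
```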
#### Allowing access to multiple workspaces
The basic configuration uses a hardcoded workspace which you pass in via the `DEEPSET_WORKSPACE` environment variable.
If you want to allow an agent to access resources from multiple workspaces, you can use `--workspace-mode explicit`
in your config.
For example:
```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": [
        "deepset-mcp",
        "--workspace-mode",
        "explicit"
      ],
      "env": {
        "DEEPSET_API_KEY": "<DEEPSET_API_KEY>"
      }
    }
  }
}
```
An agent using the MCP server now has access to all workspaces that the API key has access to. When interacting with most
resources, you will need to tell the agent what workspace it should use to perform an action. Instead of prompting it
with "list my pipelines", you would now have to prompt it with "list my pipelines in the staging workspace".
## Prompts
All tools exposed through the MCP server have minimal prompts. Any Agent interacting with these tools benefits from an additional system prompt.
View the **recommended prompt** [here](src/deepset_mcp/prompts/deepset_debugging_agent.md).
This prompt is also exposed as the `deepset_recommended_prompt` on the MCP server.
In Claude Desktop, click `add from deepset` to add the prompt to your context.
A better way to add system prompts in Claude Desktop is through "Projects".
You can customize the system prompt to your specific needs.
## Use Cases
The primary way to use the deepset MCP server is through an LLM that interacts with the deepset MCP tools in an agentic way.
### Creating Pipelines
Tell the LLM about the type of pipeline you want to build. Creating new pipelines will work best if you use terminology
that is similar to what is used on the deepset AI platform or in Haystack.
Your prompts should be precise and specific.
Examples:
- "Build a RAG pipeline with hybrid retrieval that uses claude-sonnet-4 from Anthropic as the LLM."
- "Build an Agent that can iteratively search the web (deep research). Use SerperDev for web search and GPT-4o as the LLM."

You can also instruct the LLM to deploy pipelines, and it can issue search requests against pipelines to test them.
**Best Practices**
- Be specific in your requests.
- Point the LLM to examples: if there is already a similar pipeline in your workspace, ask it to look at that first; if you have a template in mind, ask it to look at the template.
- Instruct the LLM to iterate with you locally before creating the pipeline: have it validate the drafts, and only let it create the pipeline once it is up to your standards.
### Debugging Pipelines
The `deepset-mcp` tools allow LLMs to debug pipelines on the deepset AI platform.
Primary tools used for debugging are:

- `get_logs`
- `validate_pipeline`
- `search_pipeline`
- `search_pipeline_templates`
- `search_component_definition`

You can ask the LLM to check the logs of a specific pipeline in case it is already deployed but has errors.
The LLM will find errors in the logs and devise strategies to fix them.
If your pipeline is not deployed yet, the LLM can autonomously validate it and fix validation errors.
## CLI
You can use the MCP server as a Haystack Agent through a command-line interface.
Install with `uv tool install "deepset-mcp[cli]"`.
Start the interactive CLI with:
`deepset agent chat`
You can set environment variables before starting the Agent via:
```shell
export DEEPSET_API_KEY=your_key
export DEEPSET_WORKSPACE=your_workspace
```
You can also provide an `.env` file using the `--env-file` option:
`deepset agent chat --env-file your/env/.file`
The agent will load environment variables from the file on startup.
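
For example, a minimal `.env` file could hold the same two variables used in the export commands above; the sketch below assumes the file sits in your working directory, so adjust the `--env-file` path as needed:

```shell
# Example .env contents (one VAR=value per line):
#   DEEPSET_API_KEY=your_key
#   DEEPSET_WORKSPACE=your_workspace
# Then start the agent and point it at the file:
deepset agent chat --env-file .env
```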