<!-- Banner / Title -->
<div align="center">
<img src="docs/images/icon.png" width="120" alt="DeepMCPAgent Logo"/>
<h1>🤖 DeepMCPAgent</h1>
<p><strong>Model-agnostic LangChain/LangGraph agents powered entirely by <a href="https://modelcontextprotocol.io/">MCP</a> tools over HTTP/SSE.</strong></p>
<!-- Badges (adjust links after you publish) -->
<p>
<a href="#"><img alt="Python" src="https://img.shields.io/badge/Python-3.10%2B-blue.svg"></a>
<a href="#"><img alt="License" src="https://img.shields.io/badge/license-Apache2.0-green.svg"></a>
<a href="#"><img alt="Status" src="https://img.shields.io/badge/status-beta-orange.svg"></a>
</p>
<p>
<em>Discover MCP tools dynamically. Bring your own LangChain model. Build production-ready agents—fast.</em>
</p>
</div>
<hr/>
## ✨ Why DeepMCPAgent?
- 🔌 **Zero manual tool wiring** — tools are discovered dynamically from MCP servers (HTTP/SSE)
- 🌐 **External APIs welcome** — connect to remote MCP servers (with headers/auth)
- 🧠 **Model-agnostic** — pass any LangChain chat model instance (OpenAI, Anthropic, Ollama, Groq, local, …)
- ⚡ **DeepAgents (optional)** — if installed, you get a deep agent loop; otherwise a robust LangGraph ReAct fallback
- 🛠️ **Typed tool args** — JSON-Schema → Pydantic → LangChain `BaseTool` (typed, validated calls)
- 🧪 **Quality bar** — mypy (strict), ruff, pytest, GitHub Actions, docs
> **MCP first.** Agents shouldn't hardcode tools — they should **discover** and **call** them. DeepMCPAgent builds that bridge.
---
## 🚀 Quickstart
### 1) Install
```bash
# create and activate a virtual env
python3 -m venv .venv
source .venv/bin/activate
# install (editable) + dev extras (optional) + deep agents (optional, but recommended)
pip install -e ".[dev,deep]"
```
### 2) Start a sample MCP server (HTTP)
```bash
python examples/servers/math_server.py
```
This serves an MCP endpoint at: **[http://127.0.0.1:8000/mcp](http://127.0.0.1:8000/mcp)**
### 3) Run the example agent (with fancy console output)
```bash
python examples/use_agent.py
```
**What you'll see:** the discovered tools, each tool call with its arguments and result, and the final answer.

---
## 🧑‍💻 Bring-Your-Own Model (BYOM)
DeepMCPAgent lets you pass **any LangChain chat model instance** (or a provider id string if you prefer `init_chat_model`):
```python
import asyncio
from deepmcpagent import HTTPServerSpec, build_deep_agent

# choose your model:
# from langchain_openai import ChatOpenAI
# model = ChatOpenAI(model="gpt-4.1")

# from langchain_anthropic import ChatAnthropic
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")

# from langchain_community.chat_models import ChatOllama
# model = ChatOllama(model="llama3.1")

# or pass a provider-id string, resolved via init_chat_model():
model = "openai:gpt-4.1"

async def main():
    servers = {
        "math": HTTPServerSpec(
            url="http://127.0.0.1:8000/mcp",
            transport="http",  # or "sse"
            # headers={"Authorization": "Bearer <token>"},
        ),
    }

    graph, _ = await build_deep_agent(
        servers=servers,
        model=model,
        instructions="Use MCP tools precisely.",
    )

    out = await graph.ainvoke({"messages": [{"role": "user", "content": "add 21 and 21 with tools"}]})
    print(out)

asyncio.run(main())
```
> Tip: If you pass a **string** like `"openai:gpt-4.1"`, we'll call LangChain's `init_chat_model()` for you (and it will read env vars like `OPENAI_API_KEY`). Passing a **model instance** gives you full control.
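To make the string form concrete, here is a minimal sketch of how a provider-id such as `openai:gpt-4.1` can be split into a provider and a model name before something like `init_chat_model(model=..., model_provider=...)` is called. The `split_provider_id` helper is illustrative only, not the package's actual code.

```python
def split_provider_id(model_id: str) -> tuple[str, str]:
    """Split 'provider:model' on the first colon (illustrative helper)."""
    provider, _, model = model_id.partition(":")
    if not model:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return provider, model

print(split_provider_id("openai:gpt-4.1"))  # ('openai', 'gpt-4.1')
print(split_provider_id("anthropic:claude-3-5-sonnet-latest"))
```

Splitting on the *first* colon keeps model names that themselves contain colons intact.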
---
## 🧰 Example MCP Server (HTTP)
`examples/servers/math_server.py`:
```python
from fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

if __name__ == "__main__":
    mcp.run(
        transport="http",
        host="127.0.0.1",
        port=8000,
        path="/mcp",
    )
```
> **Important:** The FastMCP HTTP endpoint should be accessible (default `/mcp`).
> Your client spec must point to the full URL, e.g. `http://127.0.0.1:8000/mcp`.
---
## 🖥️ CLI (no Python required)
```bash
# list tools from one or more HTTP servers
deepmcpagent list-tools \
--http name=math url=http://127.0.0.1:8000/mcp transport=http \
--model-id "openai:gpt-4.1"
# interactive agent chat (HTTP/SSE servers only)
deepmcpagent run \
--http name=math url=http://127.0.0.1:8000/mcp transport=http \
--model-id "openai:gpt-4.1"
```
> The CLI accepts **repeated** `--http` blocks; add `header.X=Y` pairs for auth:
>
> ```
> --http name=ext url=https://api.example.com/mcp transport=http header.Authorization="Bearer TOKEN"
> ```
---
## 🧩 Architecture (at a glance)
```
┌────────────────┐    list_tools / call_tool    ┌───────────────────────────┐
│ LangChain/LLM  │ ───────────────────────────▶ │ FastMCP Client (HTTP/SSE) │
│  (your model)  │                              └─────────────┬─────────────┘
└───────┬────────┘    tools (LC BaseTool)                     │
        │                                                     │
        ▼                                                     ▼
  LangGraph Agent                     One or many MCP servers (remote APIs)
  (or DeepAgents)                     e.g., math, github, search, ...
```
- `HTTPServerSpec(...)` → **FastMCP client** (single client, multiple servers)
- **Tool discovery** → JSON-Schema → Pydantic → LangChain `BaseTool`
- **Agent loop** → DeepAgents (if installed) or LangGraph ReAct fallback
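The discovery bullet above can be made concrete with a stdlib-only sketch of the JSON-Schema → typed-arguments step: map each property's schema type to a Python type, check required keys, and coerce values. The real package builds Pydantic models for this; `validate_args` and `_JSON_TO_PY` are illustrative names.

```python
# Minimal JSON-Schema type mapping; the real loader emits Pydantic models.
_JSON_TO_PY = {"integer": int, "number": float, "string": str, "boolean": bool}

def validate_args(schema: dict, args: dict) -> dict:
    """Check required keys and coerce values per the schema's property types."""
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in args:
            raise ValueError(f"missing required argument: {name}")
    out = {}
    for name, value in args.items():
        py_type = _JSON_TO_PY.get(props.get(name, {}).get("type"), object)
        out[name] = py_type(value)
    return out

# Schema as an MCP server would advertise it for the `add` tool:
add_schema = {
    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    "required": ["a", "b"],
}
print(validate_args(add_schema, {"a": "21", "b": 21}))  # {'a': 21, 'b': 21}
```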
---
## Full Architecture & Agent Flow
### 1) High-level Architecture (modules & data flow)
```mermaid
flowchart LR
%% Groupings
subgraph User["👤 User / App"]
Q["Prompt / Task"]
CLI["CLI (Typer)"]
PY["Python API"]
end
subgraph Agent["🤖 Agent Runtime"]
DIR["build_deep_agent()"]
PROMPT["prompt.py\n(DEFAULT_SYSTEM_PROMPT)"]
subgraph AGRT["Agent Graph"]
DA["DeepAgents loop\n(if installed)"]
REACT["LangGraph ReAct\n(fallback)"]
end
LLM["LangChain Model\n(instance or init_chat_model(provider-id))"]
TOOLS["LangChain Tools\n(BaseTool[])"]
end
subgraph MCP["🧰 Tooling Layer (MCP)"]
LOADER["MCPToolLoader\n(JSON-Schema ➜ Pydantic ➜ BaseTool)"]
TOOLWRAP["_FastMCPTool\n(async _arun → client.call_tool)"]
end
subgraph FMCP["🌐 FastMCP Client"]
CFG["servers_to_mcp_config()\n(mcpServers dict)"]
MULTI["FastMCPMulti\n(fastmcp.Client)"]
end
subgraph SRV["🛠 MCP Servers (HTTP/SSE)"]
S1["Server A\n(e.g., math)"]
S2["Server B\n(e.g., search)"]
S3["Server C\n(e.g., github)"]
end
%% Edges
Q -->|query| CLI
Q -->|query| PY
CLI --> DIR
PY --> DIR
DIR --> PROMPT
DIR --> LLM
DIR --> LOADER
DIR --> AGRT
LOADER --> MULTI
CFG --> MULTI
MULTI -->|list_tools| SRV
LOADER --> TOOLS
TOOLS --> AGRT
AGRT <-->|messages| LLM
AGRT -->|tool calls| TOOLWRAP
TOOLWRAP --> MULTI
MULTI -->|call_tool| SRV
SRV -->|tool result| MULTI --> TOOLWRAP --> AGRT -->|final answer| CLI
AGRT -->|final answer| PY
```
---
### 2) Runtime Sequence (end-to-end tool call)
```mermaid
sequenceDiagram
autonumber
participant U as User
participant CLI as CLI/Python
participant Builder as build_deep_agent()
participant Loader as MCPToolLoader
participant Graph as Agent Graph (DeepAgents or ReAct)
participant LLM as LangChain Model
participant Tool as _FastMCPTool
participant FMCP as FastMCP Client
participant S as MCP Server (HTTP/SSE)
U->>CLI: Enter prompt
CLI->>Builder: build_deep_agent(servers, model, instructions?)
Builder->>Loader: get_all_tools()
Loader->>FMCP: list_tools()
FMCP->>S: HTTP(S)/SSE list_tools
S-->>FMCP: tools + JSON-Schema
FMCP-->>Loader: tool specs
Loader-->>Builder: BaseTool[]
Builder-->>CLI: (Graph, Loader)
U->>Graph: ainvoke({messages:[user prompt]})
Graph->>LLM: Reason over system + messages + tool descriptions
LLM-->>Graph: Tool call (e.g., add(a=3,b=5))
Graph->>Tool: _arun(a=3,b=5)
Tool->>FMCP: call_tool("add", {a:3,b:5})
FMCP->>S: POST /mcp tools.call("add", {...})
S-->>FMCP: result { data: 8 }
FMCP-->>Tool: result
Tool-->>Graph: ToolMessage(content=8)
Graph->>LLM: Continue with observations
LLM-->>Graph: Final response "(3 + 5) * 7 = 56"
Graph-->>CLI: messages (incl. final LLM answer)
```
---
### 3) Agent Control Loop (planning & acting)
```mermaid
stateDiagram-v2
[*] --> AcquireTools
AcquireTools: Discover MCP tools via FastMCP\n(JSON-Schema ➜ Pydantic ➜ BaseTool)
AcquireTools --> Plan
Plan: LLM plans next step\n(uses system prompt + tool descriptions)
Plan --> CallTool: if tool needed
Plan --> Respond: if direct answer sufficient
CallTool: _FastMCPTool._arun\n→ client.call_tool(name, args)
CallTool --> Observe: receive tool result
Observe: Parse result payload (data/text/content)
Observe --> Decide
Decide: More tools needed?
Decide --> Plan: yes
Decide --> Respond: no
Respond: LLM crafts final message
Respond --> [*]
```
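The control loop above can be rendered as a plain Python loop. The `plan` function stands in for the LLM's decision step and the tools are local callables; real agents delegate both to DeepAgents or LangGraph ReAct.

```python
def run_agent(plan, tools, task, max_steps=5):
    """Toy plan -> call -> observe -> decide loop (illustrative only)."""
    observations = []
    for _ in range(max_steps):
        step = plan(task, observations)          # LLM plans the next step
        if step["action"] == "respond":
            return step["content"]               # direct answer sufficient
        result = tools[step["tool"]](**step["args"])  # call the tool
        observations.append(result)              # observe the tool result
    return "step limit reached"

# Fake planner: call `add` once, then respond with the observed result.
def plan(task, observations):
    if not observations:
        return {"action": "call", "tool": "add", "args": {"a": 21, "b": 21}}
    return {"action": "respond", "content": f"The sum is {observations[-1]}."}

print(run_agent(plan, {"add": lambda a, b: a + b}, "add 21 and 21"))
# The sum is 42.
```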
---
### 4) Code Structure (types & relationships)
```mermaid
classDiagram
class StdioServerSpec {
+command: str
+args: List[str]
+env: Dict[str,str]
+cwd: Optional[str]
+keep_alive: bool
}
class HTTPServerSpec {
+url: str
+transport: Literal["http","streamable-http","sse"]
+headers: Dict[str,str]
+auth: Optional[str]
}
class FastMCPMulti {
-_client: fastmcp.Client
+client(): Client
}
class MCPToolLoader {
-_multi: FastMCPMulti
+get_all_tools(): List[BaseTool]
+list_tool_info(): List[ToolInfo]
}
class _FastMCPTool {
+name: str
+description: str
+args_schema: Type[BaseModel]
-_tool_name: str
-_client: Any
+_arun(**kwargs) async
}
class ToolInfo {
+server_guess: str
+name: str
+description: str
+input_schema: Dict[str,Any]
}
class build_deep_agent {
+servers: Mapping[str,ServerSpec]
+model: ModelLike
+instructions?: str
+returns: (graph, loader)
}
ServerSpec <|-- StdioServerSpec
ServerSpec <|-- HTTPServerSpec
FastMCPMulti o--> ServerSpec : uses servers_to_mcp_config()
MCPToolLoader o--> FastMCPMulti
MCPToolLoader --> _FastMCPTool : creates
_FastMCPTool ..> BaseTool
build_deep_agent --> MCPToolLoader : discovery
build_deep_agent --> _FastMCPTool : tools for agent
```
---
### 5) Deployment / Integration View (clusters & boundaries)
```mermaid
flowchart TD
subgraph App["Your App / Service"]
UI["CLI / API / Notebook"]
Code["deepmcpagent (Python pkg)\n- config.py\n- clients.py\n- tools.py\n- agent.py\n- prompt.py"]
UI --> Code
end
subgraph Cloud["LLM Provider(s)"]
P1["OpenAI / Anthropic / Groq / Ollama..."]
end
subgraph Net["Network"]
direction LR
FMCP["FastMCP Client\n(HTTP/SSE)"]
FMCP ---|mcpServers| Code
end
subgraph Servers["MCP Servers"]
direction LR
A["Service A (HTTP)\n/path: /mcp"]
B["Service B (SSE)\n/path: /mcp"]
C["Service C (HTTP)\n/path: /mcp"]
end
Code -->|init_chat_model or model instance| P1
Code --> FMCP
FMCP --> A
FMCP --> B
FMCP --> C
```
---
### 6) Error Handling & Observability (tool errors & retries)
```mermaid
flowchart TD
Start([Tool Call]) --> Try{"client.call_tool(name,args)"}
Try -- ok --> Parse["Extract data/text/content/result"]
Parse --> Return[Return ToolMessage to Agent]
Try -- raises --> Err["Tool/Transport Error"]
Err --> Wrap["ToolMessage(status=error, content=trace)"]
Wrap --> Agent["Agent observes error\nand may retry / alternate tool"]
```
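The try/parse/wrap path in the flowchart can be sketched as a small wrapper: a successful call returns its payload, and a raised error is wrapped into an error-status message the agent can observe and react to. The field names here are illustrative, not the package's exact types.

```python
def safe_tool_call(fn, **args) -> dict:
    """Wrap a tool call; errors become observable messages, not crashes."""
    try:
        result = fn(**args)
        return {"status": "success", "content": result}
    except Exception as exc:  # tool or transport error
        return {"status": "error", "content": f"{type(exc).__name__}: {exc}"}

def divide(a: float, b: float) -> float:
    return a / b

print(safe_tool_call(divide, a=8, b=2))            # {'status': 'success', 'content': 4.0}
print(safe_tool_call(divide, a=8, b=0)["status"])  # error
```

Returning the error as data (rather than raising) is what lets the agent retry or pick an alternate tool.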
---
> These diagrams reflect the current implementation:
>
> - **Model is required** (string provider-id or LangChain model instance).
> - **MCP tools only**, discovered at runtime via **FastMCP** (HTTP/SSE).
> - Agent loop prefers **DeepAgents** if installed; otherwise **LangGraph ReAct**.
> - Tools are typed via **JSON-Schema ➜ Pydantic ➜ LangChain BaseTool**.
> - Fancy console output shows **discovered tools**, **calls**, **results**, and **final answer**.
---
## 🧪 Development
```bash
# install dev tooling
pip install -e ".[dev]"
# lint & type-check
ruff check .
mypy
# run tests
pytest -q
```
---
## 🛡️ Security & Privacy
- **Your keys, your model** — we don't enforce a provider; pass any LangChain model.
- Use **HTTP headers** in `HTTPServerSpec` to deliver bearer/OAuth tokens to servers.
- Report vulns privately: see `SECURITY.md`.
---
## 🧯 Troubleshooting
- **PEP 668: externally managed environment (macOS + Homebrew)**
Use a virtualenv:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
- **404 Not Found when connecting**
Ensure your server uses a path (e.g., `/mcp`) and your client URL includes it.
- **Tool calls failing / attribute errors**
Ensure you're on the latest version; our tool wrapper uses `PrivateAttr` for client state.
- **High token counts**
That's normal with tool-calling models. Use smaller models for dev.
---
## 📄 License
Apache-2.0 — see `LICENSE`.
---
## 🙏 Acknowledgments
- The **MCP** community for a clean protocol.
- **LangChain** and **LangGraph** for powerful agent runtimes.
- **FastMCP** for solid client & server implementations.