Name | tmcp-runner |
Version | 0.4.0 |
home_page | None |
Summary | Python library for embedding MCP (Model Context Protocol) capabilities into applications, like Claude Desktop and Cursor do internally. Sponsored by TowardsAGI. |
upload_time | 2025-10-07 11:13:38 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.10 |
license | None |
keywords | mcp, model-context-protocol, anthropic, ai, llm, tools, agents |
requirements | No requirements were recorded. |
# 🧩 tmcp_runner
A Python library for embedding **Model Context Protocol (MCP)** capabilities into your applications, just like Claude Desktop and Cursor do internally.
## Overview
`tmcp_runner` enables your Python applications to:
- 🔌 Connect to any MCP-compliant server using the official Anthropic MCP SDK
- 🔍 Discover available tools, resources, and prompts from MCP servers
- ⚡ Execute MCP tools programmatically with full async support
- 🌐 Work with multiple MCP servers simultaneously
- 🎯 Use the same configuration format as Claude Desktop
## Installation
```bash
pip install tmcp-runner
```
Or install from source:
```bash
git clone https://github.com/QuantumicsAI/tmcp_runner.git
cd tmcp_runner
pip install -e .
```
## Quick Start
### 1. Create MCP Configuration
Create an `mcp_config.json` file with your MCP servers (same format as Claude Desktop):
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
      "transport": "stdio"
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost:5432/dbname"
      },
      "transport": "stdio"
    }
  }
}
```
### 2. Import and Use in Your Application
```python
import asyncio
from tmcp_runner import TMCPRunner

async def main():
    # Initialize the MCP runner
    runner = TMCPRunner("mcp_config.json")

    # Connect to all configured MCP servers
    await runner.connect_all()

    # Discover available tools
    discovery = await runner.discover_all()
    print(f"Connected to {len(discovery)} MCP servers")

    # Execute a tool
    result = await runner.execute_tool(
        server_name="filesystem",
        tool_name="read_file",
        arguments={"path": "/path/to/file.txt"}
    )

    # Process results
    for content in result:
        if hasattr(content, 'text'):
            print(content.text)

    # Cleanup when done
    await runner.disconnect_all()

asyncio.run(main())
```
## ✅ Tested Agent Integration Examples
We provide **3 complete, tested examples** showing how to integrate `tmcp_runner` with different AI agent frameworks:
### 1. Simple Agent ([test_simple_agent.py](tests/test_simple_agent.py))
Basic pattern for custom AI agents. **4/4 tests passed** ✅
- Tool discovery and management
- Query handling
- Direct tool execution
- Perfect for prototyping and learning
### 2. LangChain Agent ([test_langchain_agent.py](tests/test_langchain_agent.py))
Integration with the LangChain framework. **6/6 tests passed** ✅
- MCP tools wrapped as LangChain-compatible tools
- AgentExecutor integration
- Tool management for LangChain
- Perfect for LangChain projects (see the wrapper sketch below)
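The packaged test exercises the full AgentExecutor flow; as a starting point, here is a minimal, untested sketch of the wrapping step only. It assumes langchain-core's `Tool` class and the discovery structure used in the agent example later in this README; `build_langchain_tools` is an illustrative helper, not part of `tmcp_runner`.

```python
import json
from langchain_core.tools import Tool  # assumes langchain-core is installed
from tmcp_runner import TMCPRunner

async def build_langchain_tools(runner: TMCPRunner) -> list[Tool]:
    """Wrap each discovered MCP tool as an async LangChain Tool taking a JSON-string argument."""
    discovery = await runner.discover_all()
    lc_tools = []
    for server_name, info in discovery.items():
        for tool in info.get("tools", []):
            async def _call(arguments_json: str, _server=server_name, _name=tool["name"]) -> str:
                # Parse the JSON arguments and delegate execution to the MCP server
                result = await runner.execute_tool(_server, _name, json.loads(arguments_json))
                return "\n".join(c.text for c in result if hasattr(c, "text"))

            lc_tools.append(Tool(
                name=f"{server_name}_{tool['name']}",
                description=tool.get("description", ""),
                func=None,        # synchronous path unused; async agents call the coroutine
                coroutine=_call,
            ))
    return lc_tools
```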
### 3. Pydantic Agent ([test_pydantic_agent.py](tests/test_pydantic_agent.py))
Type-safe agent with Pydantic validation. **7/7 tests passed** ✅
- Type-safe tool definitions
- Validated execution results
- Runtime type checking
- Perfect for production systems (see the validation sketch below)
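As a flavour of the pattern, here is a minimal sketch of validating tool arguments with Pydantic before they reach the server. The `ReadFileArgs` model and `read_file_typed` helper are illustrative, not the packaged test; only `execute_tool` comes from `tmcp_runner`.

```python
from pydantic import BaseModel, Field
from tmcp_runner import TMCPRunner

class ReadFileArgs(BaseModel):
    """Validated arguments for the filesystem server's read_file tool."""
    path: str = Field(min_length=1)

async def read_file_typed(runner: TMCPRunner, args: ReadFileArgs) -> str:
    # Pydantic has already validated the arguments by the time we get here
    result = await runner.execute_tool("filesystem", "read_file", args.model_dump())
    return "".join(c.text for c in result if hasattr(c, "text"))

# ReadFileArgs(path="") raises a ValidationError, so bad input never reaches the server
```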
**Run the tests:**
```bash
python3 tests/test_simple_agent.py
python3 tests/test_langchain_agent.py
python3 tests/test_pydantic_agent.py
```
**See [tests/README_AGENT_TESTS.md](tests/README_AGENT_TESTS.md) for complete documentation.**
## Core API
### TMCPRunner
Main class for managing multiple MCP server connections in your application.
```python
from tmcp_runner import TMCPRunner
runner = TMCPRunner("path/to/config.json")
```
#### Key Methods
```python
# Connection Management
await runner.connect_all()                   # Connect to all servers
await runner.connect_server("server-name")   # Connect to specific server
await runner.disconnect_all()                # Clean up all connections

# Discovery
servers = runner.list_servers()              # List configured servers
discovery = await runner.discover_all()      # Discover all tools & resources

# Tool Execution
result = await runner.execute_tool(
    server_name="filesystem",
    tool_name="read_file",
    arguments={"path": "/file.txt"}
)

# Resource Reading
content = await runner.read_resource(
    server_name="github",
    uri="repo://owner/repo/README.md"
)
```
### MCPClient
For direct interaction with a single MCP server:
```python
from tmcp_runner import MCPClient

# Configure a server
config = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    "transport": "stdio"
}

# Use with context manager (auto-cleanup)
async with MCPClient("filesystem", config) as client:
    # List available tools
    tools = await client.list_tools()

    # Execute a tool
    result = await client.call_tool("read_file", {"path": "/tmp/test.txt"})

    # List resources
    resources = await client.list_resources()
```
## Integration Examples
### Example 1: Adding MCP to a Web Application
```python
from fastapi import FastAPI
from tmcp_runner import TMCPRunner

app = FastAPI()
runner = TMCPRunner("mcp_config.json")

@app.on_event("startup")
async def startup():
    """Initialize MCP connections when app starts"""
    await runner.connect_all()
    print(f"✅ MCP servers connected: {runner.list_servers()}")

@app.on_event("shutdown")
async def shutdown():
    """Cleanup MCP connections when app shuts down"""
    await runner.disconnect_all()

@app.post("/execute-tool")
async def execute_tool(server: str, tool: str, arguments: dict):
    """Expose MCP tool execution via API"""
    result = await runner.execute_tool(server, tool, arguments)
    return {"result": [{"text": c.text} for c in result if hasattr(c, 'text')]}
```
### Example 2: AI Agent with MCP Tools
```python
from tmcp_runner import TMCPRunner
import asyncio

class AIAgent:
    def __init__(self, mcp_config_path: str):
        self.mcp = TMCPRunner(mcp_config_path)
        self.available_tools = {}

    async def initialize(self):
        """Setup MCP connections and discover tools"""
        await self.mcp.connect_all()

        # Discover all available tools
        discovery = await self.mcp.discover_all()
        for server_name, info in discovery.items():
            for tool in info.get('tools', []):
                tool_id = f"{server_name}.{tool['name']}"
                self.available_tools[tool_id] = {
                    'server': server_name,
                    'name': tool['name'],
                    'description': tool['description']
                }

    async def use_tool(self, tool_id: str, arguments: dict):
        """Execute an MCP tool"""
        tool = self.available_tools[tool_id]
        result = await self.mcp.execute_tool(
            tool['server'],
            tool['name'],
            arguments
        )
        return result

    async def cleanup(self):
        """Cleanup MCP connections"""
        await self.mcp.disconnect_all()

# Usage
async def main():
    agent = AIAgent("mcp_config.json")
    await agent.initialize()

    # Agent can now use any configured MCP tool
    result = await agent.use_tool(
        "filesystem.read_file",
        {"path": "/data/context.txt"}
    )

    await agent.cleanup()

asyncio.run(main())
```
### Example 3: Background Task Processing
```python
from tmcp_runner import TMCPRunner
import asyncio
from typing import List, Dict

class MCPTaskProcessor:
    def __init__(self, config_path: str):
        self.runner = TMCPRunner(config_path)
        self.task_queue = asyncio.Queue()

    async def start(self):
        """Start the processor"""
        await self.runner.connect_all()
        asyncio.create_task(self._process_tasks())

    async def _process_tasks(self):
        """Process tasks from queue"""
        while True:
            task = await self.task_queue.get()
            try:
                result = await self.runner.execute_tool(
                    task['server'],
                    task['tool'],
                    task['arguments']
                )
                task['callback'](result)
            except Exception as e:
                task['error_callback'](e)
            finally:
                self.task_queue.task_done()

    async def submit_task(self, server: str, tool: str, arguments: dict, callback, error_callback):
        """Submit a task for processing"""
        await self.task_queue.put({
            'server': server,
            'tool': tool,
            'arguments': arguments,
            'callback': callback,
            'error_callback': error_callback
        })

    async def stop(self):
        """Stop the processor"""
        await self.task_queue.join()
        await self.runner.disconnect_all()
```
## Configuration
### Stdio Transport (Local Servers)
For MCP servers that run as local processes:
```json
{
  "server-name": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-name", "arg1"],
    "env": {
      "API_KEY": "your-key"
    },
    "transport": "stdio"
  }
}
```
### SSE Transport (Remote Servers)
For remote MCP servers using Server-Sent Events:
```json
{
  "remote-server": {
    "url": "https://your-mcp-server.com/sse",
    "transport": "sse"
  }
}
```
## Supported MCP Servers
Works with any MCP-compliant server, including:
### Official Anthropic Servers
- **filesystem** - File system operations
- **github** - GitHub API integration
- **postgres** - PostgreSQL database access
- **sqlite** - SQLite database access
- **puppeteer** - Web automation
- **gdrive** - Google Drive integration
- **slack** - Slack integration
- And many more...
Browse available servers at [mcpservers.org](https://mcpservers.org)
## Architecture
```
┌─────────────────────────┐
│   Your Application      │
│  (Web App, AI Agent,    │
│   Background Service)   │
└───────────┬─────────────┘
            │
            │ imports
            │
    ┌───────▼────────┐
    │   TMCPRunner   │
    │   (Library)    │
    └───────┬────────┘
            │
            │ manages
            │
    ┌───────▼────────┐
    │   MCPClient    │
    │  (per server)  │
    └───────┬────────┘
            │
            │ uses
            │
    ┌───────▼────────┐
    │  Official MCP  │
    │  Python SDK    │
    └───────┬────────┘
            │
    ┌───────▼────────┐
    │  MCP Servers   │
    │ (Tools, Data)  │
    └────────────────┘
```
## How It Works (Like Claude Desktop)
1. **Configuration Loading**: Reads MCP server configs from JSON file
2. **Server Connection**: Spawns stdio processes or connects to SSE endpoints
3. **Discovery**: Queries each server for available tools, resources, and prompts
4. **Tool Execution**: Executes tools via JSON-RPC and returns typed results
5. **Resource Reading**: Fetches resource contents by URI
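To make that lifecycle concrete, the sketch below connects, prints what each server exposes, and shuts down. It assumes the discovery structure used in the agent example above (a dict per server whose `tools` entries carry `name` and `description`); treat that shape as an assumption if your version differs.

```python
import asyncio
from tmcp_runner import TMCPRunner

async def inspect_servers():
    runner = TMCPRunner("mcp_config.json")   # 1. load configuration
    await runner.connect_all()               # 2. spawn/connect servers
    discovery = await runner.discover_all()  # 3. discover capabilities
    for server_name, info in discovery.items():
        print(f"{server_name}:")
        for tool in info.get("tools", []):
            print(f"  - {tool['name']}: {tool.get('description', '')}")
    await runner.disconnect_all()            # clean shutdown

asyncio.run(inspect_servers())
```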
## Advanced Usage
### Concurrent Tool Execution
```python
async def execute_multiple_tools():
    runner = TMCPRunner("mcp_config.json")
    await runner.connect_all()

    # Execute multiple tools concurrently
    results = await asyncio.gather(
        runner.execute_tool("server1", "tool1", {"arg": "val1"}),
        runner.execute_tool("server2", "tool2", {"arg": "val2"}),
        runner.execute_tool("server3", "tool3", {"arg": "val3"}),
    )

    await runner.disconnect_all()
    return results
```
### Dynamic Server Management
```python
async def dynamic_servers():
    runner = TMCPRunner("mcp_config.json")

    # Connect only to specific servers
    await runner.connect_server("filesystem")
    await runner.connect_server("postgres")

    # Later, connect to more servers
    await runner.connect_server("github")

    # Disconnect specific server
    await runner.disconnect_server("filesystem")

    await runner.disconnect_all()
```
### Error Handling
```python
async def robust_execution():
    runner = TMCPRunner("mcp_config.json")

    try:
        await runner.connect_all()
    except Exception as e:
        print(f"Connection failed: {e}")
        return

    try:
        result = await runner.execute_tool(
            "filesystem",
            "read_file",
            {"path": "/nonexistent.txt"}
        )
    except RuntimeError as e:
        print(f"Tool execution failed: {e}")
    finally:
        await runner.disconnect_all()
```
## Testing
```bash
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run with coverage
pytest --cov=tmcp_runner
```
## Requirements
- Python 3.10+
- Dependencies (auto-installed):
- `mcp>=1.0.0` - Official Anthropic MCP SDK
- `httpx>=0.27.0` - HTTP client
- `httpx-sse>=0.4.0` - SSE support
- `pydantic>=2.0.0` - Data validation
## Use Cases
Perfect for:
- 🤖 **AI Agents** - Give your AI agents access to tools and data sources
- 🌐 **Web Applications** - Add MCP capabilities to FastAPI, Flask, Django apps
- ⚙️ **Background Services** - Process MCP tasks asynchronously
- 🔄 **Data Pipelines** - Integrate MCP servers into ETL workflows
- 🧪 **Testing Frameworks** - Test MCP server implementations
- 📱 **Desktop Applications** - Build desktop apps with MCP integration
## Security
- ✅ Process isolation for stdio servers
- ✅ Environment variable control
- ✅ No shell injection vulnerabilities
- ✅ Input validation
- ✅ Configurable permissions per server
**Best Practices**:
- Use restricted directories for filesystem access
- Use read-only tokens when possible
- Store secrets in environment variables (see the sketch after this list)
- Validate tool inputs before execution
- Review MCP server code before use
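For example, one way to keep secrets out of a checked-in config is to generate `mcp_config.json` at startup from environment variables, combined with a restricted filesystem root. This is an illustrative sketch under those assumptions; the `write_config` helper and the paths shown are not part of `tmcp_runner`.

```python
import json
import os

def write_config(path: str = "mcp_config.json") -> str:
    """Build the MCP config at runtime so secrets never live in the repository."""
    config = {
        "mcpServers": {
            "postgres": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-postgres"],
                "env": {
                    # Secret pulled from the environment at write time
                    "POSTGRES_CONNECTION_STRING": os.environ["POSTGRES_CONNECTION_STRING"],
                },
                "transport": "stdio",
            },
            "filesystem": {
                "command": "npx",
                # Restrict the filesystem server to a single directory
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "/srv/app/data"],
                "transport": "stdio",
            },
        }
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return path
```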
## Documentation
- **README.md** (this file) - Library API and integration guide
- **ARCHITECTURE.md** - Technical architecture details
- **example_usage.py** - Comprehensive code examples
- **demo.py** - Interactive demonstration
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new features
4. Submit a pull request
## License
MIT License - see LICENSE file for details
## Resources
- [MCP Documentation](https://docs.anthropic.com/mcp)
- [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)
- [MCP Server Registry](https://mcpservers.org)
## Support
- 🐛 Issues: [GitHub Issues](https://github.com/QuantumicsAI/tmcp_runner/issues)
- 💬 Discussions: [GitHub Discussions](https://github.com/QuantumicsAI/tmcp_runner/discussions)
- 📧 Email: support@quantumics.ai
---
Made with ❤️ by [Quantumics AI](https://quantumics.ai)
Raw data
{
"_id": null,
"home_page": null,
"name": "tmcp-runner",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": "Mansoor Pasha I <support@towardsagi.ai>",
"keywords": "mcp, model-context-protocol, anthropic, ai, llm, tools, agents",
"author": null,
"author_email": "TowardsMCP Developers <support@towardsagi.ai>",
"download_url": "https://files.pythonhosted.org/packages/ad/8e/913705af071b0fc3dec87d9327cfcab03f6c1202be9c3a8d978869bcc0cd/tmcp_runner-0.4.0.tar.gz",
"platform": null,
"description": "# \ud83e\udde9 tmcp_runner\n\nA Python library for embedding **Model Context Protocol (MCP)** capabilities into your applications, just like Claude Desktop and Cursor do internally.\n\n## Overview\n\n`tmcp_runner` enables your Python applications to:\n- \ud83d\udd0c Connect to any MCP-compliant server using the official Anthropic MCP SDK\n- \ud83d\udd0d Discover available tools, resources, and prompts from MCP servers\n- \u26a1 Execute MCP tools programmatically with full async support\n- \ud83c\udf10 Work with multiple MCP servers simultaneously\n- \ud83c\udfaf Use the same configuration format as Claude Desktop\n\n## Installation\n\n```bash\npip install tmcp-runner\n```\n\nOr install from source:\n\n```bash\ngit clone https://github.com/QuantumicsAI/tmcp_runner.git\ncd tmcp_runner\npip install -e .\n```\n\n## Quick Start\n\n### 1. Create MCP Configuration\n\nCreate an `mcp_config.json` file with your MCP servers (same format as Claude Desktop):\n\n```json\n{\n \"mcpServers\": {\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", \"/path/to/directory\"],\n \"transport\": \"stdio\"\n },\n \"postgres\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-postgres\"],\n \"env\": {\n \"POSTGRES_CONNECTION_STRING\": \"postgresql://user:pass@localhost:5432/dbname\"\n },\n \"transport\": \"stdio\"\n }\n }\n}\n```\n\n### 2. Import and Use in Your Application\n\n```python\nimport asyncio\nfrom tmcp_runner import TMCPRunner\n\nasync def main():\n # Initialize the MCP runner\n runner = TMCPRunner(\"mcp_config.json\")\n \n # Connect to all configured MCP servers\n await runner.connect_all()\n \n # Discover available tools\n discovery = await runner.discover_all()\n print(f\"Connected to {len(discovery)} MCP servers\")\n \n # Execute a tool\n result = await runner.execute_tool(\n server_name=\"filesystem\",\n tool_name=\"read_file\",\n arguments={\"path\": \"/path/to/file.txt\"}\n )\n \n # Process results\n for content in result:\n if hasattr(content, 'text'):\n print(content.text)\n \n # Cleanup when done\n await runner.disconnect_all()\n\nasyncio.run(main())\n```\n\n## \u2705 Tested Agent Integration Examples\n\nWe provide **3 complete, tested examples** showing how to integrate `tmcp_runner` with different AI agent frameworks:\n\n### 1. Simple Agent ([test_simple_agent.py](tests/test_simple_agent.py))\nBasic pattern for custom AI agents. **4/4 tests passed** \u2705\n- Tool discovery and management\n- Query handling\n- Direct tool execution\n- Perfect for prototyping and learning\n\n### 2. LangChain Agent ([test_langchain_agent.py](tests/test_langchain_agent.py))\nIntegration with LangChain framework. **6/6 tests passed** \u2705\n- MCP tools wrapped as LangChain-compatible tools\n- AgentExecutor integration\n- Tool management for LangChain\n- Perfect for LangChain projects\n\n### 3. Pydantic Agent ([test_pydantic_agent.py](tests/test_pydantic_agent.py))\nType-safe agent with Pydantic validation. 
**7/7 tests passed** \u2705\n- Type-safe tool definitions\n- Validated execution results\n- Runtime type checking\n- Perfect for production systems\n\n**Run the tests:**\n```bash\npython3 tests/test_simple_agent.py\npython3 tests/test_langchain_agent.py\npython3 tests/test_pydantic_agent.py\n```\n\n**See [tests/README_AGENT_TESTS.md](tests/README_AGENT_TESTS.md) for complete documentation.**\n\n## Core API\n\n### TMCPRunner\n\nMain class for managing multiple MCP server connections in your application.\n\n```python\nfrom tmcp_runner import TMCPRunner\n\nrunner = TMCPRunner(\"path/to/config.json\")\n```\n\n#### Key Methods\n\n```python\n# Connection Management\nawait runner.connect_all() # Connect to all servers\nawait runner.connect_server(\"server-name\") # Connect to specific server\nawait runner.disconnect_all() # Clean up all connections\n\n# Discovery\nservers = runner.list_servers() # List configured servers\ndiscovery = await runner.discover_all() # Discover all tools & resources\n\n# Tool Execution\nresult = await runner.execute_tool(\n server_name=\"filesystem\",\n tool_name=\"read_file\",\n arguments={\"path\": \"/file.txt\"}\n)\n\n# Resource Reading\ncontent = await runner.read_resource(\n server_name=\"github\",\n uri=\"repo://owner/repo/README.md\"\n)\n```\n\n### MCPClient\n\nFor direct interaction with a single MCP server:\n\n```python\nfrom tmcp_runner import MCPClient\n\n# Configure a server\nconfig = {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", \"/tmp\"],\n \"transport\": \"stdio\"\n}\n\n# Use with context manager (auto-cleanup)\nasync with MCPClient(\"filesystem\", config) as client:\n # List available tools\n tools = await client.list_tools()\n \n # Execute a tool\n result = await client.call_tool(\"read_file\", {\"path\": \"/tmp/test.txt\"})\n \n # List resources\n resources = await client.list_resources()\n```\n\n## Integration Examples\n\n### Example 1: Adding MCP to a Web Application\n\n```python\nfrom fastapi import FastAPI\nfrom tmcp_runner import TMCPRunner\n\napp = FastAPI()\nrunner = TMCPRunner(\"mcp_config.json\")\n\n@app.on_event(\"startup\")\nasync def startup():\n \"\"\"Initialize MCP connections when app starts\"\"\"\n await runner.connect_all()\n print(f\"\u2705 MCP servers connected: {runner.list_servers()}\")\n\n@app.on_event(\"shutdown\")\nasync def shutdown():\n \"\"\"Cleanup MCP connections when app shuts down\"\"\"\n await runner.disconnect_all()\n\n@app.post(\"/execute-tool\")\nasync def execute_tool(server: str, tool: str, arguments: dict):\n \"\"\"Expose MCP tool execution via API\"\"\"\n result = await runner.execute_tool(server, tool, arguments)\n return {\"result\": [{\"text\": c.text} for c in result if hasattr(c, 'text')]}\n```\n\n### Example 2: AI Agent with MCP Tools\n\n```python\nfrom tmcp_runner import TMCPRunner\nimport asyncio\n\nclass AIAgent:\n def __init__(self, mcp_config_path: str):\n self.mcp = TMCPRunner(mcp_config_path)\n self.available_tools = {}\n \n async def initialize(self):\n \"\"\"Setup MCP connections and discover tools\"\"\"\n await self.mcp.connect_all()\n \n # Discover all available tools\n discovery = await self.mcp.discover_all()\n for server_name, info in discovery.items():\n for tool in info.get('tools', []):\n tool_id = f\"{server_name}.{tool['name']}\"\n self.available_tools[tool_id] = {\n 'server': server_name,\n 'name': tool['name'],\n 'description': tool['description']\n }\n \n async def use_tool(self, tool_id: str, arguments: dict):\n \"\"\"Execute an MCP 
tool\"\"\"\n tool = self.available_tools[tool_id]\n result = await self.mcp.execute_tool(\n tool['server'],\n tool['name'],\n arguments\n )\n return result\n \n async def cleanup(self):\n \"\"\"Cleanup MCP connections\"\"\"\n await self.mcp.disconnect_all()\n\n# Usage\nasync def main():\n agent = AIAgent(\"mcp_config.json\")\n await agent.initialize()\n \n # Agent can now use any configured MCP tool\n result = await agent.use_tool(\n \"filesystem.read_file\",\n {\"path\": \"/data/context.txt\"}\n )\n \n await agent.cleanup()\n\nasyncio.run(main())\n```\n\n### Example 3: Background Task Processing\n\n```python\nfrom tmcp_runner import TMCPRunner\nimport asyncio\nfrom typing import List, Dict\n\nclass MCPTaskProcessor:\n def __init__(self, config_path: str):\n self.runner = TMCPRunner(config_path)\n self.task_queue = asyncio.Queue()\n \n async def start(self):\n \"\"\"Start the processor\"\"\"\n await self.runner.connect_all()\n asyncio.create_task(self._process_tasks())\n \n async def _process_tasks(self):\n \"\"\"Process tasks from queue\"\"\"\n while True:\n task = await self.task_queue.get()\n try:\n result = await self.runner.execute_tool(\n task['server'],\n task['tool'],\n task['arguments']\n )\n task['callback'](result)\n except Exception as e:\n task['error_callback'](e)\n finally:\n self.task_queue.task_done()\n \n async def submit_task(self, server: str, tool: str, arguments: dict, callback, error_callback):\n \"\"\"Submit a task for processing\"\"\"\n await self.task_queue.put({\n 'server': server,\n 'tool': tool,\n 'arguments': arguments,\n 'callback': callback,\n 'error_callback': error_callback\n })\n \n async def stop(self):\n \"\"\"Stop the processor\"\"\"\n await self.task_queue.join()\n await self.runner.disconnect_all()\n```\n\n## Configuration\n\n### Stdio Transport (Local Servers)\n\nFor MCP servers that run as local processes:\n\n```json\n{\n \"server-name\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-name\", \"arg1\"],\n \"env\": {\n \"API_KEY\": \"your-key\"\n },\n \"transport\": \"stdio\"\n }\n}\n```\n\n### SSE Transport (Remote Servers)\n\nFor remote MCP servers using Server-Sent Events:\n\n```json\n{\n \"remote-server\": {\n \"url\": \"https://your-mcp-server.com/sse\",\n \"transport\": \"sse\"\n }\n}\n```\n\n## Supported MCP Servers\n\nWorks with any MCP-compliant server, including:\n\n### Official Anthropic Servers\n- **filesystem** - File system operations\n- **github** - GitHub API integration\n- **postgres** - PostgreSQL database access\n- **sqlite** - SQLite database access\n- **puppeteer** - Web automation\n- **gdrive** - Google Drive integration\n- **slack** - Slack integration\n- And many more...\n\nBrowse available servers at [mcpservers.org](https://mcpservers.org)\n\n## Architecture\n\n```\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 Your Application \u2502\n\u2502 (Web App, AI Agent, \u2502\n\u2502 Background Service) \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 imports\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 TMCPRunner \u2502\n \u2502 (Library) \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n 
\u2502\n \u2502 manages\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 MCPClient \u2502\n \u2502 (per server) \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 uses\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 Official MCP \u2502\n \u2502 Python SDK \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25bc\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 MCP Servers \u2502\n \u2502 (Tools, Data) \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\n## How It Works (Like Claude Desktop)\n\n1. **Configuration Loading**: Reads MCP server configs from JSON file\n2. **Server Connection**: Spawns stdio processes or connects to SSE endpoints\n3. **Discovery**: Queries each server for available tools, resources, and prompts\n4. **Tool Execution**: Executes tools via JSON-RPC and returns typed results\n5. **Resource Reading**: Fetches resource contents by URI\n\n## Advanced Usage\n\n### Concurrent Tool Execution\n\n```python\nasync def execute_multiple_tools():\n runner = TMCPRunner(\"mcp_config.json\")\n await runner.connect_all()\n \n # Execute multiple tools concurrently\n results = await asyncio.gather(\n runner.execute_tool(\"server1\", \"tool1\", {\"arg\": \"val1\"}),\n runner.execute_tool(\"server2\", \"tool2\", {\"arg\": \"val2\"}),\n runner.execute_tool(\"server3\", \"tool3\", {\"arg\": \"val3\"}),\n )\n \n await runner.disconnect_all()\n return results\n```\n\n### Dynamic Server Management\n\n```python\nasync def dynamic_servers():\n runner = TMCPRunner(\"mcp_config.json\")\n \n # Connect only to specific servers\n await runner.connect_server(\"filesystem\")\n await runner.connect_server(\"postgres\")\n \n # Later, connect to more servers\n await runner.connect_server(\"github\")\n \n # Disconnect specific server\n await runner.disconnect_server(\"filesystem\")\n \n await runner.disconnect_all()\n```\n\n### Error Handling\n\n```python\nasync def robust_execution():\n runner = TMCPRunner(\"mcp_config.json\")\n \n try:\n await runner.connect_all()\n except Exception as e:\n print(f\"Connection failed: {e}\")\n return\n \n try:\n result = await runner.execute_tool(\n \"filesystem\",\n \"read_file\",\n {\"path\": \"/nonexistent.txt\"}\n )\n except RuntimeError as e:\n print(f\"Tool execution failed: {e}\")\n finally:\n await runner.disconnect_all()\n```\n\n## Testing\n\n```bash\n# Install with dev dependencies\npip install -e \".[dev]\"\n\n# Run tests\npytest\n\n# Run with coverage\npytest --cov=tmcp_runner\n```\n\n## Requirements\n\n- Python 3.10+\n- Dependencies (auto-installed):\n - `mcp>=1.0.0` - Official Anthropic MCP SDK\n - `httpx>=0.27.0` - HTTP client\n - `httpx-sse>=0.4.0` - SSE support\n - `pydantic>=2.0.0` - Data validation\n\n## Use Cases\n\nPerfect for:\n- \ud83e\udd16 **AI Agents** - Give your AI agents access to tools and data sources\n- \ud83c\udf10 **Web Applications** - Add MCP capabilities to FastAPI, Flask, Django apps\n- \u2699\ufe0f **Background Services** - Process MCP tasks asynchronously\n- \ud83d\udd04 **Data Pipelines** - Integrate MCP servers into ETL workflows\n- \ud83e\uddea **Testing Frameworks** - Test 
MCP server implementations\n- \ud83d\udcf1 **Desktop Applications** - Build desktop apps with MCP integration\n\n## Security\n\n- \u2705 Process isolation for stdio servers\n- \u2705 Environment variable control\n- \u2705 No shell injection vulnerabilities\n- \u2705 Input validation\n- \u2705 Configurable permissions per server\n\n**Best Practices**:\n- Use restricted directories for filesystem access\n- Use read-only tokens when possible\n- Store secrets in environment variables\n- Validate tool inputs before execution\n- Review MCP server code before use\n\n## Documentation\n\n- **README.md** (this file) - Library API and integration guide\n- **ARCHITECTURE.md** - Technical architecture details\n- **example_usage.py** - Comprehensive code examples\n- **demo.py** - Interactive demonstration\n\n## Contributing\n\nContributions welcome! Please:\n1. Fork the repository\n2. Create a feature branch\n3. Add tests for new features\n4. Submit a pull request\n\n## License\n\nMIT License - see LICENSE file for details\n\n## Resources\n\n- [MCP Documentation](https://docs.anthropic.com/mcp)\n- [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)\n- [MCP Server Registry](https://mcpservers.org)\n\n## Support\n\n- \ud83d\udc1b Issues: [GitHub Issues](https://github.com/QuantumicsAI/tmcp_runner/issues)\n- \ud83d\udcac Discussions: [GitHub Discussions](https://github.com/QuantumicsAI/tmcp_runner/discussions)\n- \ud83d\udce7 Email: support@quantumics.ai\n\n---\n\nMade with \u2764\ufe0f by [Quantumics AI](https://quantumics.ai)\n",
"bugtrack_url": null,
"license": null,
"summary": "Python library for embedding MCP (Model Context Protocol) capabilities into applications, like Claude Desktop and Cursor do internally. Sponsored by TowardsAGI.",
"version": "0.4.0",
"project_urls": {
"Documentation": "https://github.com/QuantumicsAI/tmcp_runner#readme",
"Homepage": "https://towardsmcp.com",
"Repository": "https://github.com/QuantumicsAI/tmcp_runner",
"Sponsor": "https://towardsagi.ai"
},
"split_keywords": [
"mcp",
" model-context-protocol",
" anthropic",
" ai",
" llm",
" tools",
" agents"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "2992acb056e511eb1dc723c8307e8d616d53625ac88dbd7bfe9f13ff88cef30d",
"md5": "081adc5969d3c62bef14a8eceb9a49cb",
"sha256": "25faa87f8999beade4b51193a44f9573f01c90ab4edbbd6f6fdc3d4881f63c8c"
},
"downloads": -1,
"filename": "tmcp_runner-0.4.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "081adc5969d3c62bef14a8eceb9a49cb",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 11533,
"upload_time": "2025-10-07T11:13:36",
"upload_time_iso_8601": "2025-10-07T11:13:36.956485Z",
"url": "https://files.pythonhosted.org/packages/29/92/acb056e511eb1dc723c8307e8d616d53625ac88dbd7bfe9f13ff88cef30d/tmcp_runner-0.4.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "ad8e913705af071b0fc3dec87d9327cfcab03f6c1202be9c3a8d978869bcc0cd",
"md5": "3dad33548b91537f024d473914f29508",
"sha256": "1bd9c02474334c9ed7acac83e64667555a5408157b2ff1decd7500092849b58f"
},
"downloads": -1,
"filename": "tmcp_runner-0.4.0.tar.gz",
"has_sig": false,
"md5_digest": "3dad33548b91537f024d473914f29508",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 21850,
"upload_time": "2025-10-07T11:13:38",
"upload_time_iso_8601": "2025-10-07T11:13:38.730568Z",
"url": "https://files.pythonhosted.org/packages/ad/8e/913705af071b0fc3dec87d9327cfcab03f6c1202be9c3a8d978869bcc0cd/tmcp_runner-0.4.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-07 11:13:38",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "QuantumicsAI",
"github_project": "tmcp_runner#readme",
"github_not_found": true,
"lcname": "tmcp-runner"
}