| Field | Value |
| --- | --- |
| Name | mcpomni-connect |
| Version | 0.1.19 |
| Summary | Universal MCP Client with multi-transport support and LLM-powered tool routing |
| Upload time | 2025-08-05 14:36:17 |
| Requires Python | >=3.10 |
| License | MIT |
| Keywords | agent, ai, automation, git, llm, mcp |
# MCPOmni Connect - Complete AI Platform: OmniAgent + Universal MCP Client
[Downloads](https://pepy.tech/projects/mcpomni-connect)
[Python 3.10+](https://www.python.org/downloads/)
[License: MIT](LICENSE)
[CI](https://github.com/Abiorh001/mcp_omni_connect/actions)
[PyPI](https://badge.fury.io/py/mcpomni-connect)
[Commits](https://github.com/Abiorh001/mcp_omni_connect/commits/main)
[Issues](https://github.com/Abiorh001/mcp_omni_connect/issues)
[Pull Requests](https://github.com/Abiorh001/mcp_omni_connect/pulls)
**MCPOmni Connect** is the complete AI platform that evolved from a world-class MCP client into a revolutionary ecosystem. It now includes **OmniAgent** - the ultimate AI agent builder born from MCPOmni Connect's powerful foundation. Build production-ready AI agents, use the advanced MCP CLI, or combine both for maximum power.
## **Complete AI Platform - Two Powerful Systems:**
### 1. **OmniAgent System** *(Revolutionary AI Agent Builder)*
Born from MCPOmni Connect's foundation - create intelligent, autonomous agents with:
- **Local Tools System** - Register your Python functions as AI tools
- **Self-Flying Background Agents** - Autonomous task execution
- **Multi-Tier Memory** - Vector databases, Redis, PostgreSQL, MySQL, SQLite
- **Real-Time Events** - Live monitoring and streaming
- **MCP + Local Tool Orchestration** - Seamlessly combine both tool types
### 2. **Universal MCP Client** *(World-Class CLI)*
Advanced command-line interface for connecting to any Model Context Protocol server with:
- **Multi-Protocol Support** - stdio, SSE, HTTP, Docker, NPX transports
- **Authentication** - OAuth 2.0, Bearer tokens, custom headers
- **Advanced Memory** - Redis, database, and vector storage with intelligent retrieval
- **Event Streaming** - Real-time monitoring and debugging
- **Agentic Modes** - ReAct, Orchestrator, and Interactive chat modes
**Perfect for:** Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.
## NEW: OmniAgent - Build Your Own AI Agents!
**Introducing OmniAgent** - a revolutionary AI agent system that brings plug-and-play intelligence to your applications!
### OmniAgent Revolutionary Capabilities:
- **Multi-tier memory management** with vector search and semantic retrieval
- **XML-based reasoning** with strict tool formatting for reliable execution
- **Advanced tool orchestration** - seamlessly combine MCP server tools and local tools
- **Self-flying background agents** with autonomous task execution
- **Real-time event streaming** for monitoring and debugging
- **Production-ready infrastructure** with error handling and retry logic
- **Plug-and-play intelligence** - no complex setup required!
### **LOCAL TOOLS SYSTEM** *(MAJOR FEATURE!)*
- **Easy Tool Registration**: `@tool_registry.register_tool("tool_name")`
- **Custom Tool Creation**: Register your own Python functions as AI tools
- **Runtime Tool Management**: Add/remove tools dynamically
- **Type-Safe Interface**: Automatic parameter validation and documentation
- **Rich Examples**: Study `run_omni_agent.py` for 12+ example tool registration patterns
> **New user?** Start with the [Configuration Guide](#configuration-guide) to understand the difference between config files, transport types, and OAuth behavior. Then check out the [Testing](#testing) section to get started quickly.
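The type-safe interface above means the registry can derive a parameter schema straight from a function's signature. A standalone sketch of the idea using `inspect` (illustrative only; `build_schema` is a hypothetical helper, and MCPOmni Connect's actual `ToolRegistry` internals may differ):

```python
import inspect
from typing import Callable

def build_schema(fn: Callable) -> dict:
    """Derive a simple parameter schema from a function's type hints.
    Hypothetical helper for illustration, not the library's real code."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        params[name] = {
            # Fall back to "any" for unannotated parameters
            "type": getattr(p.annotation, "__name__", "any"),
            # A parameter with no default is required
            "required": p.default is inspect.Parameter.empty,
        }
    return {"name": fn.__name__, "doc": inspect.getdoc(fn), "parameters": params}

def area(length: float, width: float = 1.0) -> str:
    """Calculate the area of a rectangle."""
    return f"{length * width}"

schema = build_schema(area)
# schema["parameters"]["length"] -> {"type": "float", "required": True}
```

With a schema like this, the agent can validate arguments before calling the tool and surface the docstring to the LLM as tool documentation.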
## Key Features
### Intelligent Agent System
- **ReAct Agent Mode**
- Autonomous task execution with reasoning and action cycles
- Independent decision-making without human intervention
- Advanced problem-solving through iterative reasoning
- Self-guided tool selection and execution
- Complex task decomposition and handling
- **Orchestrator Agent Mode**
- Strategic multi-step task planning and execution
- Intelligent coordination across multiple MCP servers
- Dynamic agent delegation and communication
- Parallel task execution when possible
- Sophisticated workflow management with real-time progress monitoring
- **Interactive Chat Mode**
- Human-in-the-loop task execution with approval workflows
- Step-by-step guidance and explanations
- Educational mode for understanding AI decision processes
### Universal Connectivity
- **Multi-Protocol Support**
- Native support for stdio transport
- Server-Sent Events (SSE) for real-time communication
- Streamable HTTP for efficient data streaming
- Docker container integration
- NPX package execution
- Extensible transport layer for future protocols
- **Authentication Support**
- OAuth 2.0 authentication flow
- Bearer token authentication
- Custom header support
- Secure credential management
- **Agentic Operation Modes**
- Seamless switching between chat, autonomous, and orchestrator modes
- Context-aware mode selection based on task complexity
- Persistent state management across mode transitions
### AI-Powered Intelligence
- **Unified LLM Integration with LiteLLM**
- Single unified interface for all AI providers
- Support for 100+ models across providers including:
- OpenAI (GPT-4, GPT-3.5, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
- Google (Gemini Pro, Gemini Flash, etc.)
- Groq (Llama, Mixtral, Gemma, etc.)
- DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
- Azure OpenAI
- OpenRouter (access to 200+ models)
- Ollama (local models)
- Simplified configuration and reduced complexity
- Dynamic system prompts based on available capabilities
- Intelligent context management
- Automatic tool selection and chaining
- Universal model support through custom ReAct Agent
- Handles models without native function calling
- Dynamic function execution based on user requests
- Intelligent tool orchestration
### Security & Privacy
- **Explicit User Control**
- All tool executions require explicit user approval in chat mode
- Clear explanation of tool actions before execution
- Transparent disclosure of data access and usage
- **Data Protection**
- Strict data access controls
- Server-specific data isolation
- No unauthorized data exposure
- **Privacy-First Approach**
- Minimal data collection
- User data remains on specified servers
- No cross-server data sharing without consent
- **Secure Communication**
- Encrypted transport protocols
- Secure API key management
- Environment variable protection
### Advanced Memory Management *(UPDATED!)*
- **Multi-Backend Memory Storage**
- **In-Memory**: Fast development storage
- **Redis**: Persistent memory with real-time access
- **Database**: PostgreSQL, MySQL, SQLite support
- **File Storage**: Save/load conversation history
- Runtime switching: `/memory_store:redis`, `/memory_store:database:postgresql://user:pass@host/db`
- **Multi-Tier Memory Strategy**
- **Short-term Memory**: Sliding window or token budget strategies
- **Long-term Memory**: Vector database storage for semantic retrieval
- **Episodic Memory**: Context-aware conversation history
- Runtime configuration: `/memory_mode:sliding_window:5`, `/memory_mode:token_budget:3000`
- **Vector Database Integration** *(NEW!)*
- **Qdrant**: Production-grade vector search (set `QDRANT_HOST` and `QDRANT_PORT`)
- **ChromaDB**: Local fallback vector storage (automatic installation)
- **Semantic Search**: Intelligent context retrieval across conversations
- **Enable**: Set `ENABLE_VECTOR_DB=true` for long-term and episodic memory
- **Real-Time Event Streaming** *(NEW!)*
- **In-Memory Events**: Fast development event processing
- **Redis Streams**: Persistent event storage and streaming
- Runtime switching: `/event_store:redis_stream`, `/event_store:in_memory`
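The two short-term memory strategies above can be sketched in a few lines. This is a standalone illustration of the behavior behind `/memory_mode:sliding_window:n` and `/memory_mode:token_budget:n`; a real implementation would use an actual tokenizer, whereas word counts stand in for tokens here:

```python
from collections import deque

def sliding_window(messages: list[str], n: int) -> list[str]:
    """Keep only the last n messages."""
    return list(deque(messages, maxlen=n))

def token_budget(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the approximate token count
    fits the budget (word count approximates tokens here)."""
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > budget:
        kept.pop(0)  # evict oldest first
    return kept

msgs = ["hello there", "how are you today", "fine thanks", "tell me a story"]
recent = sliding_window(msgs, 2)   # keeps the last 2 messages
trimmed = token_budget(msgs, 8)    # evicts oldest until <= 8 words remain
```

Both strategies trade recall for cost: sliding window bounds message count, token budget bounds context size directly.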
### Prompt Management
- **Advanced Prompt Handling**
- Dynamic prompt discovery across servers
- Flexible argument parsing (JSON and key-value formats)
- Cross-server prompt coordination
- Intelligent prompt validation
- Context-aware prompt execution
- Real-time prompt responses
- Support for complex nested arguments
- Automatic type conversion and validation
- **Client-Side Sampling Support**
- Dynamic sampling configuration from client
- Flexible LLM response generation
- Customizable sampling parameters
- Real-time sampling adjustments
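Flexible argument parsing means a prompt can receive its arguments either as JSON or as key-value pairs. A hypothetical parser showing how both formats can normalize to the same dict (illustrative sketch, not the client's actual code):

```python
import json

def parse_prompt_args(raw: str) -> dict:
    """Accept either a JSON object ('{"city": "Lagos"}') or
    comma-separated key=value pairs ('city=Lagos, units=metric')."""
    raw = raw.strip()
    if raw.startswith("{"):
        return json.loads(raw)  # JSON format
    args = {}
    for pair in filter(None, (p.strip() for p in raw.split(","))):
        key, _, value = pair.partition("=")  # split on first '=' only
        args[key.strip()] = value.strip()
    return args

parse_prompt_args('{"city": "Lagos"}')          # JSON form
parse_prompt_args("city=Lagos, units=metric")   # key-value form
```

Either way the prompt handler sees one normalized dict, which is what makes cross-server prompt coordination practical.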
### Tool Orchestration
- **Dynamic Tool Discovery & Management**
- Automatic tool capability detection
- Cross-server tool coordination
- Intelligent tool selection based on context
- Real-time tool availability updates
### Resource Management
- **Universal Resource Access**
- Cross-server resource discovery
- Unified resource addressing
- Automatic resource type detection
- Smart content summarization
### Server Management
- **Advanced Server Handling**
- Multiple simultaneous server connections
- Automatic server health monitoring
- Graceful connection management
- Dynamic capability updates
- Flexible authentication methods
- Runtime server configuration updates
## Architecture
### Core Components
```
MCPOmni Connect Platform
├── OmniAgent System (Revolutionary Agent Builder)
│   ├── Local Tools Registry
│   ├── Background Agent Manager
│   ├── Custom Agent Creation
│   └── Agent Orchestration Engine
├── Universal MCP Client (World-Class CLI)
│   ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│   ├── Multi-Server Orchestration
│   ├── Authentication & Security
│   └── Connection Lifecycle Management
├── Shared Memory System (Both Systems)
│   ├── Multi-Backend Storage (Redis, DB, In-Memory)
│   ├── Vector Database Integration (Qdrant, ChromaDB)
│   ├── Memory Strategies (Sliding Window, Token Budget)
│   └── Session Management
├── Event System (Both Systems)
│   ├── In-Memory Event Processing
│   ├── Redis Streams for Persistence
│   └── Real-Time Event Monitoring
├── Tool Management (Both Systems)
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   ├── Local Python Tool Registration
│   └── Tool Execution Engine
└── AI Integration (Both Systems)
    ├── LiteLLM (100+ Models)
    ├── Context Management
    ├── ReAct Agent Processing
    └── Response Generation
```
## Getting Started
### Prerequisites
- Python 3.10+
- LLM API key (OpenAI, Anthropic, etc.)
- UV package manager (recommended)
- Redis server (optional, for persistent memory & events)
- Database (optional, PostgreSQL/MySQL/SQLite for persistent memory)
- Qdrant or ChromaDB (optional, for vector search & long-term memory)
### Install using package manager
#### With uv (recommended)
```bash
uv add mcpomni-connect
```
#### Using pip
```bash
pip install mcpomni-connect
```
### Configuration
```bash
# Set up environment variables
echo "LLM_API_KEY=your_api_key_here" > .env
# Optional: Configure Redis connection
echo "REDIS_URL=redis://localhost:6379/0" >> .env
# Optional: Configure database connection
echo "DATABASE_URL=sqlite:///mcpomni_memory.db" >> .env
# Configure your servers in servers_config.json
```
## Configuration Guide
### Configuration Files Overview
MCPOmni Connect uses **two separate configuration files** for different purposes:
#### 1. `.env` File - Environment Variables
Contains sensitive information like API keys and optional settings:
```bash
# Required: Your LLM provider API key
LLM_API_KEY=your_api_key_here
# Optional: Memory Storage Configuration
DATABASE_URL=sqlite:///mcpomni_memory.db
REDIS_URL=redis://localhost:6379/0
```
#### 2. `servers_config.json` - Server & Agent Configuration
Contains application settings, LLM configuration, and MCP server connections:
```json
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 1000,
"total_tokens_limit": 100000
},
"LLM": {
"provider": "openai",
"model": "gpt-4o-mini",
"temperature": 0.7,
"max_tokens": 5000,
"top_p": 0.7
},
"mcpServers": {
"your-server-name": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-package"]
}
}
}
```
### Transport Types & Authentication
MCPOmni Connect supports multiple ways to connect to MCP servers:
#### 1. **stdio** - Direct Process Communication
**Use when**: Connecting to local MCP servers that run as separate processes
```json
{
"server-name": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-package"]
}
}
```
- **No authentication needed**
- **No OAuth server started**
- Most common for local development
#### 2. **sse** - Server-Sent Events
**Use when**: Connecting to HTTP-based MCP servers using Server-Sent Events
```json
{
"server-name": {
"transport_type": "sse",
"url": "http://your-server.com:4010/sse",
"headers": {
"Authorization": "Bearer your-token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
```
- **Uses Bearer token or custom headers**
- **No OAuth server started**
#### 3. **streamable_http** - HTTP with Optional OAuth
**Use when**: Connecting to HTTP-based MCP servers with or without OAuth
**Without OAuth (Bearer Token):**
```json
{
"server-name": {
"transport_type": "streamable_http",
"url": "http://your-server.com:4010/mcp",
"headers": {
"Authorization": "Bearer your-token"
},
"timeout": 60
}
}
```
- **Uses Bearer token or custom headers**
- **No OAuth server started**
**With OAuth:**
```json
{
"server-name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server.com:4010/mcp"
}
}
```
- **OAuth callback server automatically starts on `http://localhost:3000`**
- **This is hardcoded and cannot be changed**
- **Required for OAuth flow to work properly**
### OAuth Server Behavior
**Important**: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server.
#### What You'll See:
```
Started callback server on http://localhost:3000
```
#### Key Points:
- **This is normal behavior** - not an error
- **The address `http://localhost:3000` is hardcoded** and cannot be changed
- **The server only starts when** you have `"auth": {"method": "oauth"}` in your config
- **The server stops** when the application shuts down
- **Only used for OAuth token handling** - no other purpose
#### When OAuth is NOT Used:
- Remove the entire `"auth"` section from your server configuration
- Use `"headers"` with `"Authorization": "Bearer token"` instead
- No OAuth server will start
### Troubleshooting Common Issues
#### "Failed to connect to server: Session terminated"
**Possible Causes & Solutions:**
1. **Wrong Transport Type**
```
Problem: Your server expects 'stdio' but you configured 'streamable_http'
Solution: Check your server's documentation for the correct transport type
```
2. **OAuth Configuration Mismatch**
```
Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
Solution: Remove the "auth" section entirely and use headers instead:
"headers": {
"Authorization": "Bearer your-token"
}
```
3. **Server Not Running**
```
Problem: The MCP server at the specified URL is not running
Solution: Start your MCP server first, then connect with MCPOmni Connect
```
4. **Wrong URL or Port**
```
Problem: URL in config doesn't match where your server is running
Solution: Verify the server's actual address and port
```
#### "Started callback server on http://localhost:3000" - Is This Normal?
**Yes, this is completely normal** when:
- You have `"auth": {"method": "oauth"}` in any server configuration
- The OAuth server handles authentication tokens automatically
- You cannot and should not try to change this address
**If you don't want the OAuth server:**
- Remove `"auth": {"method": "oauth"}` from all server configurations
- Use alternative authentication methods like Bearer tokens
### Configuration Examples by Use Case
#### Local Development (stdio)
```json
{
"mcpServers": {
"local-tools": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-tools"]
}
}
}
```
#### Remote Server with Token
```json
{
"mcpServers": {
"remote-api": {
"transport_type": "streamable_http",
"url": "http://api.example.com:8080/mcp",
"headers": {
"Authorization": "Bearer abc123token"
}
}
}
}
```
#### Remote Server with OAuth
```json
{
"mcpServers": {
"oauth-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://oauth-server.com:8080/mcp"
}
}
}
```
### Start CLI
Start the CLI (make sure your API key is exported or defined in a `.env` file):
```bash
mcpomni_connect
```
## Testing
### Running Tests
```bash
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
```
### Test Structure
```
tests/
└── unit/ # Unit tests for individual components
```
### Development Quick Start
1. **Installation**
```bash
# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv sync
```
2. **Configuration**
```bash
# Set up environment variables
echo "LLM_API_KEY=your_api_key_here" > .env
# Configure your servers in servers_config.json
```
3. **Start Client**
```bash
uv run run.py
```
Or:
```bash
python run.py
```
## Examples
### Basic CLI Example
You can run the basic CLI example to interact with MCPOmni Connect directly from the terminal.
**Using [uv](https://github.com/astral-sh/uv) (recommended):**
```bash
uv run examples/basic.py
```
**Or using Python directly:**
```bash
python examples/basic.py
```
---
### OmniAgent - Create Your Own AI Agents
Build intelligent agents that combine MCP tools with local tools for powerful automation.
#### Basic OmniAgent Creation
```python
from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create local tools registry
tool_registry = ToolRegistry()
# Register your custom tools directly with the agent
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate the area of a rectangle."""
area = length * width
return f"Area of rectangle ({length} x {width}): {area} square units"
@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
"""Analyze text and return word count and character count."""
words = len(text.split())
chars = len(text)
return f"Analysis: {words} words, {chars} characters"
# Initialize memory store
memory_store = MemoryRouter(memory_store_type="redis") # or "postgresql", "sqlite", "mysql"
event_router = EventRouter(event_store_type="in_memory")
# Create OmniAgent with LOCAL TOOLS + MCP TOOLS
agent = OmniAgent(
name="my_agent",
system_instruction="You are a helpful assistant with access to custom tools and file operations.",
model_config={
"provider": "openai",
"model": "gpt-4o",
"max_context_length": 50000,
},
# Your custom local tools
local_tools=tool_registry,
# MCP server tools
mcp_tools=[
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"],
}
],
memory_store=memory_store,
event_router=event_router
)
# Now the agent can use BOTH your custom tools AND MCP tools!
result = await agent.run("Calculate the area of a 10x5 rectangle, then analyze this text: 'Hello world'")
print(f"Response: {result['response']}")
print(f"Session ID: {result['session_id']}")
```
#### Self-Flying Background Agents *(NEW!)*
Create autonomous agents that run in the background and execute tasks automatically:
```python
from mcpomni_connect.omni_agent.background_agent.background_agent_manager import BackgroundAgentManager
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
# Initialize components
memory_store = MemoryRouter(memory_store_type="in_memory")
event_router = EventRouter(event_store_type="in_memory")
# Create background agent manager
manager = BackgroundAgentManager(
memory_store=memory_store,
event_router=event_router
)
# Create a self-flying background agent
agent_config = {
"agent_id": "system_monitor",
"system_instruction": "You are a system monitoring agent that checks system health.",
"model_config": {
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7,
},
"local_tools": tool_registry, # Your tool registry
"agent_config": {
"max_steps": 10,
"tool_call_timeout": 30,
},
"interval": 60, # Run every 60 seconds
"max_retries": 3,
"retry_delay": 30,
"task_config": {
"query": "Check system status and report any critical issues.",
"description": "System health monitoring task"
}
}
# Create and start the background agent
result = manager.create_agent(agent_config)
manager.start() # Start all background agents
# Monitor events in real-time
async for event in manager.get_agent("system_monitor").stream_events(result["session_id"]):
print(f"Background Agent Event: {event.type} - {event.payload}")
# Runtime task updates
manager.update_task_config("system_monitor", {
"query": "Perform emergency system check and report critical issues immediately.",
"description": "Emergency system check task",
"priority": "high"
})
```
#### Session Management
Maintain conversation continuity across multiple interactions:
```python
# Use session ID for conversation continuity
session_id = "user_123_conversation"
result1 = await agent.run("Hello! My name is Alice.", session_id)
result2 = await agent.run("What did I tell you my name was?", session_id)
# Get conversation history
history = await agent.get_session_history(session_id)
# Stream events in real-time
async for event in agent.stream_events(session_id):
print(f"Event: {event.type} - {event.payload}")
```
#### Learn from Examples
Study these comprehensive examples to see OmniAgent in action:
- **`examples/omni_agent_example.py`** - **COMPLETE DEMO** showing all OmniAgent features
- **`examples/background_agent_example.py`** - Self-flying background agents
- **`run_omni_agent.py`** - Advanced EXAMPLE patterns (study only, not for end-user use)
- **`examples/basic.py`** - Simple agent setup patterns
- **`examples/web_server.py`** - FastAPI web interface
- **`examples/vector_db_examples.py`** - Advanced vector memory
- **Provider Examples**: `anthropic.py`, `groq.py`, `azure.py`, `ollama.py`
**Pro Tip**: Run `python examples/omni_agent_example.py` to see the full capabilities in action!
### **Getting Started - Choose Your Path**
#### **Path 1: Build Custom AI Agents (OmniAgent)**
```bash
# Study the examples to learn patterns:
python examples/basic.py # Simple setup
python examples/omni_agent_example.py # Complete demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py # Web interface
# Then build your own using the patterns!
```
#### **Path 2: Advanced MCP Client (CLI)**
```bash
# World-class MCP client with advanced features
python run.py
# OR: mcpomni-connect --config servers_config.json
# Features: Connect to MCP servers, agentic modes, advanced memory
```
#### **Path 3: Study Tool Patterns (Learning)**
```bash
# Comprehensive testing interface - Study 12+ EXAMPLE tools
python run_omni_agent.py --mode cli
# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools
```
**Pro Tip:** Most developers use **both paths** - the MCP CLI for daily workflow and OmniAgent for building custom solutions!
---
## Local Tools System - Create Custom AI Tools!
One of OmniAgent's most powerful features is the ability to **register your own Python functions as AI tools**. The agent can then intelligently use these tools to complete tasks.
### Quick Tool Registration Example
```python
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create tool registry
tool_registry = ToolRegistry()
# Register your custom tools with simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate the area of a rectangle."""
area = length * width
return f"Area of rectangle ({length} x {width}): {area} square units"
@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
"""Analyze text and return word count and character count."""
words = len(text.split())
chars = len(text)
return f"Analysis: {words} words, {chars} characters"
@tool_registry.register_tool("system_status")
def get_system_status() -> str:
"""Get current system status information."""
import platform
import time
return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"
# Use tools with OmniAgent
agent = OmniAgent(
name="my_agent",
local_tools=tool_registry, # Your custom tools!
# ... other config
)
# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")
```
### Tool Registration Patterns (Create Your Own!)
**No built-in tools** - You create exactly what you need! Study these EXAMPLE patterns from `run_omni_agent.py`:
**Mathematical Tools Examples:**
```python
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
area = length * width
return f"Area: {area} square units"
@tool_registry.register_tool("analyze_numbers")
def analyze_numbers(numbers: str) -> str:
num_list = [float(x.strip()) for x in numbers.split(",")]
return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
```
**System Tools Examples:**
```python
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
import platform
return f"OS: {platform.system()}, Python: {platform.python_version()}"
```
**File Tools Examples:**
```python
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
import os
files = os.listdir(path)
return f"Found {len(files)} items in {path}"
```
### Tool Registration Patterns
**1. Simple Function Tools:**
```python
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
"""Get weather information for a city."""
# Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"
```
**2. Complex Analysis Tools:**
```python
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze JSON data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
    except json.JSONDecodeError:
        return "Invalid data format"
    if analysis_type == "summary":
        return f"Data contains {len(data_obj)} items"
    elif analysis_type == "detailed":
        # Complex analysis logic
        return "Detailed analysis results..."
    return f"Unknown analysis type: {analysis_type}"
```
**3. File Processing Tools:**
```python
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, "r") as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, "r") as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        return f"Unknown operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"
```
---
## Configuration Guide *(UPDATED!)*
### Environment Variables
Create a `.env` file with your configuration:
```bash
# ===============================================
# Required: AI Model API Key
# ===============================================
LLM_API_KEY=your_api_key_here
# ===============================================
# Memory Storage Configuration (NEW!)
# ===============================================
# Database backend (PostgreSQL, MySQL, SQLite)
DATABASE_URL=sqlite:///mcpomni_memory.db
# DATABASE_URL=postgresql://user:password@localhost:5432/mcpomni
# DATABASE_URL=mysql://user:password@localhost:3306/mcpomni
# Redis for memory and event storage (single URL)
REDIS_URL=redis://localhost:6379/0
# REDIS_URL=redis://:password@localhost:6379/0 # With password
# ===============================================
# Vector Database Configuration (NEW!)
# ===============================================
# Enable vector databases for long-term & episodic memory
ENABLE_VECTOR_DB=true
# Qdrant (Production-grade vector search)
QDRANT_HOST=localhost
QDRANT_PORT=6333
# ChromaDB uses local storage automatically if Qdrant not available
```
### Vector Database Setup *(NEW!)*
**For Long-term & Episodic Memory:**
1. **Enable Vector Databases:**
```bash
ENABLE_VECTOR_DB=true
```
2. **Option A: Use Qdrant (Recommended for Production):**
```bash
# Install and run Qdrant
docker run -p 6333:6333 qdrant/qdrant
# Set environment variables
QDRANT_HOST=localhost
QDRANT_PORT=6333
```
3. **Option B: Use ChromaDB (Automatic Local Fallback):**
```bash
# Install ChromaDB (usually auto-installed)
pip install chromadb
# No additional configuration needed - uses local .chroma_db directory
```
### Updated CLI Commands *(NEW!)*
**Memory Store Management:**
```bash
# Switch between memory backends
/memory_store:in_memory # Fast in-memory storage (default)
/memory_store:redis # Redis persistent storage
/memory_store:database # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db # PostgreSQL
/memory_store:database:mysql://user:pass@host/db # MySQL
# Memory strategy configuration
/memory_mode:sliding_window:10 # Keep last 10 messages
/memory_mode:token_budget:5000 # Keep under 5000 tokens
```
**Event Store Management:**
```bash
# Switch between event backends
/event_store:in_memory # Fast in-memory events (default)
/event_store:redis_stream # Redis Streams for persistence
```
**Enhanced Commands:**
```bash
# Memory operations
/history # Show conversation history
/clear_history # Clear conversation history
/save_history <file> # Save history to file
/load_history <file> # Load history from file
# Server management
/add_servers:<config.json> # Add servers from config
/remove_server:<server_name> # Remove specific server
/refresh # Refresh server capabilities
# Debugging and monitoring
/debug # Toggle debug mode
/api_stats # Show API usage statistics
```
---
### MCPOmni Connect CLI - World-Class MCP Client
The MCPOmni Connect CLI is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes:
```bash
# Launch the advanced MCP CLI
python run.py
# OR: mcpomni-connect --config servers_config.json
# Core MCP client commands:
/tools # List all available tools
/prompts # List all available prompts
/resources # List all available resources
/prompt:<name> # Execute a specific prompt
/resource:<uri> # Read a specific resource
/subscribe:<uri> # Subscribe to resource updates
/query <your_question> # Ask questions using tools
# Advanced platform features:
/memory_store:redis # Switch to Redis memory
/event_store:redis_stream # Switch to Redis events
/add_servers:<config.json> # Add MCP servers dynamically
/remove_server:<name> # Remove MCP server
/mode:auto # Switch to autonomous agentic mode
/mode:orchestrator # Switch to multi-server orchestration
```
## Developer Integration
MCPOmni Connect is not just a CLI tool; it's also a powerful Python library. **OmniAgent consolidates everything** - you no longer need to manually manage MCP clients, configurations, and agents separately!
### Build Apps with OmniAgent *(Recommended)*
**OmniAgent automatically includes MCP client functionality** - just specify your MCP servers and you're ready to go:
```python
from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry
# Create tool registry for custom tools
tool_registry = ToolRegistry()
@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
"""Analyze data and return insights."""
return f"Analysis complete: {len(data)} characters processed"
# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
name="my_app_agent",
system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
model_config={
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7
},
# Your custom local tools
local_tools=tool_registry,
# MCP servers - automatically connected!
mcp_tools=[
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
},
{
"name": "github",
"transport_type": "streamable_http",
"url": "http://localhost:8080/mcp",
"headers": {"Authorization": "Bearer your-token"}
}
],
memory_store=MemoryRouter(memory_store_type="redis"),
event_router=EventRouter(event_store_type="in_memory")
)
# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
```
### Legacy Manual Approach *(Not Recommended)*
The older workflow of wiring up the MCP client, configuration, and agents by hand still works, but OmniAgent supersedes it; the examples below assume the OmniAgent approach.
### FastAPI Integration with OmniAgent
OmniAgent makes building APIs incredibly simple. See [`examples/web_server.py`](examples/web_server.py) for a complete FastAPI example:
```python
from fastapi import FastAPI
from mcpomni_connect.omni_agent import OmniAgent
app = FastAPI()
agent = OmniAgent(...) # Your agent setup from above
@app.post("/chat")
async def chat(message: str, session_id: str | None = None):
result = await agent.run(message, session_id)
return {"response": result['response'], "session_id": result['session_id']}
@app.get("/tools")
async def get_tools():
# Returns both MCP tools AND your custom tools automatically
return agent.get_available_tools()
```
**Key Benefits:**
- **One OmniAgent = MCP + Custom Tools + Memory + Events**
- **Automatic tool discovery** from all connected MCP servers
- **Built-in session management** and conversation history
- **Real-time event streaming** for monitoring
- **Easy integration** with any Python web framework
---
### Server Configuration Examples
#### Basic OpenAI Configuration
```json
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 1000,
"total_tokens_limit": 100000
},
"LLM": {
"provider": "openai",
"model": "gpt-4",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 30000,
"top_p": 0
},
"mcpServers": {
"ev_assistant": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"sse-server": {
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
},
"streamable_http-server": {
"transport_type": "streamable_http",
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
}
```
#### Anthropic Claude Configuration
```json
{
"LLM": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-20241022",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
```
#### Groq Configuration
```json
{
"LLM": {
"provider": "groq",
"model": "llama-3.1-8b-instant",
"temperature": 0.5,
"max_tokens": 2000,
"max_context_length": 8000,
"top_p": 0.9
}
}
```
#### Azure OpenAI Configuration
```json
{
"LLM": {
"provider": "azureopenai",
"model": "gpt-4",
"temperature": 0.7,
"max_tokens": 2000,
"max_context_length": 100000,
"top_p": 0.95,
"azure_endpoint": "https://your-resource.openai.azure.com",
"azure_api_version": "2024-02-01",
"azure_deployment": "your-deployment-name"
}
}
```
#### Ollama Local Model Configuration
```json
{
"LLM": {
"provider": "ollama",
"model": "llama3.1:8b",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 100000,
"top_p": 0.7,
"ollama_host": "http://localhost:11434"
}
}
```
#### OpenRouter Configuration
```json
{
"LLM": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
```
### Authentication Methods
MCPOmni Connect supports multiple authentication methods for secure server connections:
#### OAuth 2.0 Authentication
```json
{
"server_name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server/mcp"
}
}
```
#### Bearer Token Authentication
```json
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"Authorization": "Bearer your-token-here"
},
"url": "http://your-server/mcp"
}
}
```
#### Custom Headers
```json
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"X-Custom-Header": "value",
"Authorization": "Custom-Auth-Scheme token"
},
"url": "http://your-server/mcp"
}
}
```
## Dynamic Server Configuration
MCPOmni Connect supports dynamic server configuration through commands:
#### Add New Servers
```bash
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
```
The configuration file can include multiple servers with different authentication methods:
```json
{
"new-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"another-server": {
"transport_type": "sse",
"headers": {
"Authorization": "Bearer token"
},
"url": "http://localhost:3000/sse"
}
}
```
#### Remove Servers
```bash
# Remove a server by its name
/remove_server:server_name
```
## Usage
### Interactive Commands
- `/tools` - List all available tools across servers
- `/prompts` - View available prompts
- `/prompt:<name>/<args>` - Execute a prompt with arguments
- `/resources` - List available resources
- `/resource:<uri>` - Access and analyze a resource
- `/debug` - Toggle debug mode
- `/refresh` - Update server capabilities
- `/memory` - Toggle Redis memory persistence (on/off)
- `/mode:auto` - Switch to autonomous agentic mode
- `/mode:chat` - Switch back to interactive chat mode
- `/add_servers:<config.json>` - Add one or more servers from a configuration file
- `/remove_server:<server_name>` - Remove a server by its name
### Memory and Chat History
```bash
# Toggle Redis memory persistence on
/memory
# Output:
Memory persistence is now ENABLED using Redis

# Toggle it off again
/memory
# Output:
Memory persistence is now DISABLED
```
### Operation Modes
```bash
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
```
### Mode Differences
- **Chat Mode (Default)**
- Requires explicit approval for tool execution
- Interactive conversation style
- Step-by-step task execution
- Detailed explanations of actions
- **Autonomous Mode**
- Independent task execution
- Self-guided decision making
- Automatic tool selection and chaining
- Progress updates and final results
- Complex task decomposition
- Error handling and recovery
- **Orchestrator Mode**
- Advanced planning for complex multi-step tasks
- Strategic delegation across multiple MCP servers
- Intelligent agent coordination and communication
- Parallel task execution when possible
- Dynamic resource allocation
- Sophisticated workflow management
- Real-time progress monitoring across agents
- Adaptive task prioritization
### Prompt Management
```bash
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompts can take multiple arguments; names and types depend on the server's prompt definition
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
```
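As the examples show, prompt arguments arrive either as `/`-separated `key=value` pairs or as a JSON object. Roughly how such a command string might be split apart (an illustrative sketch, not the client's actual parser, and it does not handle the mixed form shown in the last example):

```python
import json

def parse_prompt_command(command: str) -> tuple[str, dict]:
    """Parse '/prompt:name/key=value/...' or '/prompt:name/{json}'
    into a prompt name and an argument dict (illustrative only)."""
    body = command.removeprefix("/prompt:")
    name, _, rest = body.partition("/")
    if rest.startswith("{"):                    # JSON format for complex arguments
        return name, json.loads(rest)
    args = {}
    for pair in filter(None, rest.split("/")):  # key=value format
        key, _, value = pair.partition("=")
        args[key] = value
    return name, args

print(parse_prompt_command("/prompt:weather/location=tokyo"))
print(parse_prompt_command('/prompt:analyze-data/{"dataset": "sales_2024"}'))
```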
### Advanced Prompt Features
- **Argument Validation**: Automatic type checking and validation
- **Default Values**: Smart handling of optional arguments
- **Context Awareness**: Prompts can access previous conversation context
- **Cross-Server Execution**: Seamless execution across multiple MCP servers
- **Error Handling**: Graceful handling of invalid arguments with helpful messages
- **Dynamic Help**: Detailed usage information for each prompt
### AI-Powered Interactions
The client intelligently:
- Chains multiple tools together
- Provides context-aware responses
- Automatically selects appropriate tools
- Handles errors gracefully
- Maintains conversation context
### Model Support with LiteLLM
- **Unified Model Access**
- Single interface for 100+ models across all major providers
- Automatic provider detection and routing
- Consistent API regardless of underlying provider
- Native function calling for compatible models
- ReAct Agent fallback for models without function calling
- **Supported Providers**
- **OpenAI**: GPT-4, GPT-3.5, and all model variants
- **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- **Google**: Gemini Pro, Gemini Flash, PaLM models
- **Groq**: Ultra-fast inference for Llama, Mixtral, Gemma
- **DeepSeek**: DeepSeek-V3, DeepSeek-Coder, and specialized models
- **Azure OpenAI**: Enterprise-grade OpenAI models
- **OpenRouter**: Access to 200+ models from various providers
- **Ollama**: Local model execution with privacy
- **Advanced Features**
- Automatic model capability detection
- Dynamic tool execution based on model features
- Intelligent fallback mechanisms
- Provider-specific optimizations
### Token & Usage Management
MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.
#### View API Usage Stats
Use the `/api_stats` command to see your current usage:
```bash
/api_stats
```
This will display:
- **Total tokens used**
- **Total requests made**
- **Total response tokens**
#### Set Usage Limits
You can set limits to automatically stop execution when thresholds are reached:
- **Total Request Limit:**
Set the maximum number of requests allowed in a session.
- **Total Token Usage Limit:**
Set the maximum number of tokens that can be used.
- **Tool Call Timeout:**
Set the maximum time (in seconds) a tool call can take before being terminated.
- **Max Steps:**
Set the maximum number of steps the agent can take before stopping.
You can configure these in your `servers_config.json` under the `AgentConfig` section:
```json
"AgentConfig": {
"tool_call_timeout": 30, // Tool call timeout in seconds
"max_steps": 15, // Max number of steps before termination
"request_limit": 1000, // Max number of requests allowed
"total_tokens_limit": 100000 // Max number of tokens allowed
}
```
- When any of these limits are reached, the agent will automatically stop running and notify you.
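The effect of these limits can be pictured as a simple counter that mirrors the `AgentConfig` fields. This is a hypothetical illustration, not the agent's internal accounting:

```python
class UsageLimits:
    """Track requests and tokens against AgentConfig-style limits."""

    def __init__(self, request_limit: int, total_tokens_limit: int):
        self.request_limit = request_limit
        self.total_tokens_limit = total_tokens_limit
        self.requests = 0
        self.tokens = 0

    def record(self, tokens_used: int) -> bool:
        """Record one request; return False once any limit is exceeded."""
        self.requests += 1
        self.tokens += tokens_used
        return (self.requests <= self.request_limit
                and self.tokens <= self.total_tokens_limit)

# Mirrors the sample AgentConfig above.
limits = UsageLimits(request_limit=1000, total_tokens_limit=100000)
if not limits.record(tokens_used=1200):
    print("Limit reached - stopping agent")
```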
#### Example Commands
```bash
# Check your current API usage and limits
/api_stats
# Change limits by editing the AgentConfig section of servers_config.json
# (runtime CLI commands for this are planned)
```
## Advanced Features
### Tool Orchestration
```python
# Example of automatic tool chaining, when the needed tools are available on the connected servers
# User: "Find charging stations near Silicon Valley and check their current status"
# The client automatically:
#   1. Uses the Google Maps API to locate Silicon Valley
#   2. Searches for charging stations in the area
#   3. Checks station status through the EV network API
#   4. Formats and presents the results
```
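Conceptually, chaining like this is a loop in which each tool's output feeds the next call. A toy sketch with stand-in tools (the tool names and plan structure here are invented for illustration, not part of the library):

```python
# Toy tool registry standing in for tools discovered on MCP servers.
TOOLS = {
    "locate": lambda query: f"coordinates for {query}",
    "search_stations": lambda area: f"3 stations near {area}",
    "check_status": lambda stations: f"status report: {stations} all online",
}

def run_chain(steps: list[tuple[str, str]], initial: str) -> str:
    """Execute tools in sequence, threading each result into the next call."""
    result = initial
    for tool_name, _reason in steps:
        result = TOOLS[tool_name](result)
    return result

plan = [("locate", "find the area"),
        ("search_stations", "find chargers"),
        ("check_status", "verify availability")]
print(run_chain(plan, "Silicon Valley"))
```

In the real client, the LLM decides each next step from the previous tool's output rather than following a fixed plan.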
### Resource Analysis
```python
# Automatic resource processing
# User: "Analyze the contents of /path/to/document.pdf"
# The client automatically:
#   1. Identifies the resource type
#   2. Extracts the content
#   3. Processes it through the LLM
#   4. Provides an intelligent summary
```
### Demo

## Troubleshooting
> **For comprehensive configuration help**, see the [Configuration Guide](#%EF%B8%8F-configuration-guide) section above, which covers:
>
> - Config file differences (`.env` vs `servers_config.json`)
> - Transport type selection and authentication
> - OAuth server behavior explanation
> - Common connection issues and solutions
### Common Issues and Solutions
1. **Connection Issues**
```bash
Error: Could not connect to MCP server
```
- Check if the server is running
- Verify server configuration in `servers_config.json`
- Ensure network connectivity
- Check server logs for errors
- **See [Transport Types & Authentication](#-transport-types--authentication) for detailed setup**
2. **API Key Issues**
```bash
Error: Invalid API key
```
- Verify API key is correctly set in `.env`
- Check if API key has required permissions
- Ensure API key is for correct environment (production/development)
- **See [Configuration Files Overview](#configuration-files-overview) for correct setup**
3. **Redis Connection**
```bash
Error: Could not connect to Redis
```
- Verify Redis server is running
- Check Redis connection settings in `.env`
- Ensure Redis password is correct (if configured)
4. **Tool Execution Failures**
```bash
Error: Tool execution failed
```
- Check tool availability on connected servers
- Verify tool permissions
- Review tool arguments for correctness
### Debug Mode
Enable debug mode for detailed logging:
```bash
/debug
```
For additional support, please:
1. Check the [Issues](https://github.com/Abiorh001/mcp_omni_connect/issues) page
2. Review closed issues for similar problems
3. Open a new issue with detailed information if needed
## Contributing
We welcome contributions! See our [Contributing Guide](CONTRIBUTING.md) for details.
## Documentation
Complete documentation is available at: **[MCPOmni Connect Docs](https://abiorh001.github.io/mcp_omni_connect)**
To build documentation locally:
```bash
./docs.sh serve # Start development server at http://127.0.0.1:8080
./docs.sh build # Build static documentation
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contact & Support
- **Author**: Abiola Adeshina
- **Email**: abiolaadedayo1993@gmail.com
- **GitHub Issues**: [Report a bug](https://github.com/Abiorh001/mcp_omni_connect/issues)
---
<p align="center">Built with ❤️ by the MCPOmni Connect Team</p>
\"max_context_length\": 50000,\n },\n # Your custom local tools\n local_tools=tool_registry,\n # MCP server tools \n mcp_tools=[\n {\n \"name\": \"filesystem\",\n \"transport_type\": \"stdio\",\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", \"/home\"],\n }\n ],\n memory_store=memory_store,\n event_router=event_router\n)\n\n# Now the agent can use BOTH your custom tools AND MCP tools!\nresult = await agent.run(\"Calculate the area of a 10x5 rectangle, then analyze this text: 'Hello world'\")\nprint(f\"Response: {result['response']}\")\nprint(f\"Session ID: {result['session_id']}\")\n```\n\n#### \ud83d\ude81 Self-Flying Background Agents *(NEW!)*\n\nCreate autonomous agents that run in the background and execute tasks automatically:\n\n```python\nfrom mcpomni_connect.omni_agent.background_agent.background_agent_manager import BackgroundAgentManager\nfrom mcpomni_connect.memory_store.memory_router import MemoryRouter\nfrom mcpomni_connect.events.event_router import EventRouter\n\n# Initialize components\nmemory_store = MemoryRouter(memory_store_type=\"in_memory\")\nevent_router = EventRouter(event_store_type=\"in_memory\")\n\n# Create background agent manager\nmanager = BackgroundAgentManager(\n memory_store=memory_store,\n event_router=event_router\n)\n\n# Create a self-flying background agent\nagent_config = {\n \"agent_id\": \"system_monitor\",\n \"system_instruction\": \"You are a system monitoring agent that checks system health.\",\n \"model_config\": {\n \"provider\": \"openai\",\n \"model\": \"gpt-4o\",\n \"temperature\": 0.7,\n },\n \"local_tools\": tool_registry, # Your tool registry\n \"agent_config\": {\n \"max_steps\": 10,\n \"tool_call_timeout\": 30,\n },\n \"interval\": 60, # Run every 60 seconds\n \"max_retries\": 3,\n \"retry_delay\": 30,\n \"task_config\": {\n \"query\": \"Check system status and report any critical issues.\",\n \"description\": \"System health monitoring task\"\n }\n}\n\n# Create and start 
the background agent\nresult = manager.create_agent(agent_config)\nmanager.start() # Start all background agents\n\n# Monitor events in real-time\nasync for event in manager.get_agent(\"system_monitor\").stream_events(result[\"session_id\"]):\n print(f\"Background Agent Event: {event.type} - {event.payload}\")\n\n# Runtime task updates\nmanager.update_task_config(\"system_monitor\", {\n \"query\": \"Perform emergency system check and report critical issues immediately.\",\n \"description\": \"Emergency system check task\",\n \"priority\": \"high\"\n})\n```\n\n#### \ud83d\udcdd Session Management\n\nMaintain conversation continuity across multiple interactions:\n\n```python\n# Use session ID for conversation continuity\nsession_id = \"user_123_conversation\"\nresult1 = await agent.run(\"Hello! My name is Alice.\", session_id)\nresult2 = await agent.run(\"What did I tell you my name was?\", session_id)\n\n# Get conversation history\nhistory = await agent.get_session_history(session_id)\n\n# Stream events in real-time\nasync for event in agent.stream_events(session_id):\n print(f\"Event: {event.type} - {event.payload}\")\n```\n\n#### \ud83d\udcda Learn from Examples\n\nStudy these comprehensive examples to see OmniAgent in action:\n\n- **`examples/omni_agent_example.py`** - \u2b50 **COMPLETE DEMO** showing all OmniAgent features\n- **`examples/background_agent_example.py`** - Self-flying background agents \n- **`run_omni_agent.py`** - Advanced EXAMPLE patterns (study only, not for end-user use)\n- **`examples/basic.py`** - Simple agent setup patterns\n- **`examples/web_server.py`** - FastAPI web interface\n- **`examples/vector_db_examples.py`** - Advanced vector memory\n- **Provider Examples**: `anthropic.py`, `groq.py`, `azure.py`, `ollama.py`\n\n\ud83d\udca1 **Pro Tip**: Run `python examples/omni_agent_example.py` to see the full capabilities in action!\n\n### \ud83c\udfaf **Getting Started - Choose Your Path**\n\n#### **Path 1: \ud83e\udd16 Build Custom AI Agents 
(OmniAgent)**\n```bash\n# Study the examples to learn patterns:\npython examples/basic.py # Simple setup\npython examples/omni_agent_example.py # Complete demo\npython examples/background_agent_example.py # Self-flying agents\npython examples/web_server.py # Web interface\n\n# Then build your own using the patterns!\n```\n\n#### **Path 2: \ud83d\udd0c Advanced MCP Client (CLI)**\n```bash\n# World-class MCP client with advanced features\npython run.py\n# OR: mcpomni-connect --config servers_config.json\n\n# Features: Connect to MCP servers, agentic modes, advanced memory\n```\n\n#### **Path 3: \ud83e\uddea Study Tool Patterns (Learning)**\n```bash\n# Comprehensive testing interface - Study 12+ EXAMPLE tools\npython run_omni_agent.py --mode cli\n\n# Study this file to see tool registration patterns and CLI features\n# Contains many examples of how to create custom tools\n```\n\n**\ud83d\udca1 Pro Tip:** Most developers use **both paths** - the MCP CLI for daily workflow and OmniAgent for building custom solutions!\n\n---\n\n## \ud83d\udd25 Local Tools System - Create Custom AI Tools!\n\nOne of OmniAgent's most powerful features is the ability to **register your own Python functions as AI tools**. 
The agent can then intelligently use these tools to complete tasks.\n\n### \ud83c\udfaf Quick Tool Registration Example\n\n```python\nfrom mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry\n\n# Create tool registry\ntool_registry = ToolRegistry()\n\n# Register your custom tools with simple decorator\n@tool_registry.register_tool(\"calculate_area\")\ndef calculate_area(length: float, width: float) -> str:\n \"\"\"Calculate the area of a rectangle.\"\"\"\n area = length * width\n return f\"Area of rectangle ({length} x {width}): {area} square units\"\n\n@tool_registry.register_tool(\"analyze_text\")\ndef analyze_text(text: str) -> str:\n \"\"\"Analyze text and return word count and character count.\"\"\"\n words = len(text.split())\n chars = len(text)\n return f\"Analysis: {words} words, {chars} characters\"\n\n@tool_registry.register_tool(\"system_status\")\ndef get_system_status() -> str:\n \"\"\"Get current system status information.\"\"\"\n import platform\n import time\n return f\"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}\"\n\n# Use tools with OmniAgent\nagent = OmniAgent(\n name=\"my_agent\",\n local_tools=tool_registry, # Your custom tools!\n # ... other config\n)\n\n# Now the AI can use your tools!\nresult = await agent.run(\"Calculate the area of a 10x5 rectangle and tell me the current system time\")\n```\n\n### \ud83d\udcd6 Tool Registration Patterns (Create Your Own!)\n\n**No built-in tools** - You create exactly what you need! 
Study these EXAMPLE patterns from `run_omni_agent.py`:\n\n**Mathematical Tools Examples:**\n```python\n@tool_registry.register_tool(\"calculate_area\")\ndef calculate_area(length: float, width: float) -> str:\n area = length * width\n return f\"Area: {area} square units\"\n\n@tool_registry.register_tool(\"analyze_numbers\")\ndef analyze_numbers(numbers: str) -> str:\n num_list = [float(x.strip()) for x in numbers.split(\",\")]\n return f\"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}\"\n```\n\n**System Tools Examples:**\n```python\n@tool_registry.register_tool(\"system_info\")\ndef get_system_info() -> str:\n import platform\n return f\"OS: {platform.system()}, Python: {platform.python_version()}\"\n```\n\n**File Tools Examples:**\n```python\n@tool_registry.register_tool(\"list_files\")\ndef list_directory(path: str = \".\") -> str:\n import os\n files = os.listdir(path)\n return f\"Found {len(files)} items in {path}\"\n```\n\n### \ud83c\udfa8 Common Tool Patterns\n\n**1. Simple Function Tools:**\n```python\n@tool_registry.register_tool(\"weather_check\")\ndef check_weather(city: str) -> str:\n \"\"\"Get weather information for a city.\"\"\"\n # Your weather API logic here\n return f\"Weather in {city}: Sunny, 25\u00b0C\"\n```\n\n**2. Complex Analysis Tools:**\n```python\n@tool_registry.register_tool(\"data_analysis\")\ndef analyze_data(data: str, analysis_type: str = \"summary\") -> str:\n \"\"\"Analyze data with different analysis types.\"\"\"\n import json\n try:\n data_obj = json.loads(data)\n except (json.JSONDecodeError, TypeError):\n return \"Invalid data format\"\n if analysis_type == \"summary\":\n return f\"Data contains {len(data_obj)} items\"\n elif analysis_type == \"detailed\":\n # Complex analysis logic\n return \"Detailed analysis results...\"\n return f\"Unknown analysis type: {analysis_type}\"\n```\n\n**3. 
File Processing Tools:**\n```python\n@tool_registry.register_tool(\"process_file\")\ndef process_file(file_path: str, operation: str) -> str:\n \"\"\"Process files with different operations.\"\"\"\n try:\n if operation == \"read\":\n with open(file_path, 'r') as f:\n content = f.read()\n return f\"File content (first 100 chars): {content[:100]}...\"\n elif operation == \"count_lines\":\n with open(file_path, 'r') as f:\n lines = len(f.readlines())\n return f\"File has {lines} lines\"\n else:\n return f\"Unknown operation: {operation}\"\n except Exception as e:\n return f\"Error processing file: {e}\"\n```\n\n---\n\n## \u2699\ufe0f Configuration Guide *(UPDATED!)*\n\n### Environment Variables\n\nCreate a `.env` file with your configuration:\n\n```bash\n# ===============================================\n# Required: AI Model API Key\n# ===============================================\nLLM_API_KEY=your_api_key_here\n\n# ===============================================\n# Memory Storage Configuration (NEW!)\n# ===============================================\n# Database backend (PostgreSQL, MySQL, SQLite)\nDATABASE_URL=sqlite:///mcpomni_memory.db\n# DATABASE_URL=postgresql://user:password@localhost:5432/mcpomni\n# DATABASE_URL=mysql://user:password@localhost:3306/mcpomni\n\n# Redis for memory and event storage (single URL)\nREDIS_URL=redis://localhost:6379/0\n# REDIS_URL=redis://:password@localhost:6379/0 # With password\n\n# ===============================================\n# Vector Database Configuration (NEW!)\n# ===============================================\n# Enable vector databases for long-term & episodic memory\nENABLE_VECTOR_DB=true\n\n# Qdrant (Production-grade vector search)\nQDRANT_HOST=localhost\nQDRANT_PORT=6333\n\n# ChromaDB uses local storage automatically if Qdrant not available\n```\n\n### \ud83e\udde0 Vector Database Setup *(NEW!)*\n\n**For Long-term & Episodic Memory:**\n\n1. **Enable Vector Databases:**\n ```bash\n ENABLE_VECTOR_DB=true\n ```\n\n2. 
**Option A: Use Qdrant (Recommended for Production):**\n ```bash\n # Install and run Qdrant\n docker run -p 6333:6333 qdrant/qdrant\n \n # Set environment variables\n QDRANT_HOST=localhost\n QDRANT_PORT=6333\n ```\n\n3. **Option B: Use ChromaDB (Automatic Local Fallback):**\n ```bash\n # Install ChromaDB (usually auto-installed)\n pip install chromadb\n \n # No additional configuration needed - uses local .chroma_db directory\n ```\n\n### \ud83d\udda5\ufe0f Updated CLI Commands *(NEW!)*\n\n**Memory Store Management:**\n```bash\n# Switch between memory backends\n/memory_store:in_memory # Fast in-memory storage (default)\n/memory_store:redis # Redis persistent storage \n/memory_store:database # SQLite database storage\n/memory_store:database:postgresql://user:pass@host/db # PostgreSQL\n/memory_store:database:mysql://user:pass@host/db # MySQL\n\n# Memory strategy configuration\n/memory_mode:sliding_window:10 # Keep last 10 messages\n/memory_mode:token_budget:5000 # Keep under 5000 tokens\n```\n\n**Event Store Management:**\n```bash\n# Switch between event backends\n/event_store:in_memory # Fast in-memory events (default)\n/event_store:redis_stream # Redis Streams for persistence\n```\n\n**Enhanced Commands:**\n```bash\n# Memory operations\n/history # Show conversation history\n/clear_history # Clear conversation history\n/save_history <file> # Save history to file\n/load_history <file> # Load history from file\n\n# Server management\n/add_servers:<config.json> # Add servers from config\n/remove_server:<server_name> # Remove specific server\n/refresh # Refresh server capabilities\n\n# Debugging and monitoring\n/debug # Toggle debug mode\n/api_stats # Show API usage statistics\n```\n\n---\n\n### \ud83d\ude80 MCPOmni Connect CLI - World-Class MCP Client\n\nThe MCPOmni Connect CLI is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes:\n\n```bash\n# Launch the advanced MCP 
CLI\npython run.py\n# OR: mcpomni-connect --config servers_config.json\n\n# Core MCP client commands:\n/tools # List all available tools\n/prompts # List all available prompts \n/resources # List all available resources\n/prompt:<name> # Execute a specific prompt\n/resource:<uri> # Read a specific resource\n/subscribe:<uri> # Subscribe to resource updates\n/query <your_question> # Ask questions using tools\n\n# Advanced platform features:\n/memory_store:redis # Switch to Redis memory\n/event_store:redis_stream # Switch to Redis events\n/add_servers:<config.json> # Add MCP servers dynamically\n/remove_server:<name> # Remove MCP server\n/mode:auto # Switch to autonomous agentic mode\n/mode:orchestrator # Switch to multi-server orchestration\n ```\n\n## \ud83d\udee0\ufe0f Developer Integration\n\nMCPOmni Connect is not just a CLI tool\u2014it's also a powerful Python library. **OmniAgent consolidates everything** - you no longer need to manually manage MCP clients, configurations, and agents separately!\n\n### Build Apps with OmniAgent *(Recommended)*\n\n**OmniAgent automatically includes MCP client functionality** - just specify your MCP servers and you're ready to go:\n\n```python\nfrom mcpomni_connect.omni_agent import OmniAgent\nfrom mcpomni_connect.memory_store.memory_router import MemoryRouter\nfrom mcpomni_connect.events.event_router import EventRouter\nfrom mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry\n\n# Create tool registry for custom tools\ntool_registry = ToolRegistry()\n\n@tool_registry.register_tool(\"analyze_data\")\ndef analyze_data(data: str) -> str:\n \"\"\"Analyze data and return insights.\"\"\"\n return f\"Analysis complete: {len(data)} characters processed\"\n\n# OmniAgent automatically handles MCP connections + your tools\nagent = OmniAgent(\n name=\"my_app_agent\",\n system_instruction=\"You are a helpful assistant with access to MCP servers and custom tools.\",\n model_config={\n \"provider\": \"openai\", \n \"model\": 
\"gpt-4o\",\n \"temperature\": 0.7\n },\n # Your custom local tools\n local_tools=tool_registry,\n # MCP servers - automatically connected!\n mcp_tools=[\n {\n \"name\": \"filesystem\",\n \"transport_type\": \"stdio\", \n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", \"/home\"]\n },\n {\n \"name\": \"github\",\n \"transport_type\": \"streamable_http\",\n \"url\": \"http://localhost:8080/mcp\",\n \"headers\": {\"Authorization\": \"Bearer your-token\"}\n }\n ],\n memory_store=MemoryRouter(memory_store_type=\"redis\"),\n event_router=EventRouter(event_store_type=\"in_memory\")\n)\n\n# Use in your app - gets both MCP tools AND your custom tools!\nresult = await agent.run(\"List files in the current directory and analyze the filenames\")\n```\n\n### Legacy Manual Approach *(Not Recommended)*\n\nIf you need the old manual approach for some reason:\n\n### FastAPI Integration with OmniAgent\n\nOmniAgent makes building APIs incredibly simple. See [`examples/web_server.py`](examples/web_server.py) for a complete FastAPI example:\n\n```python\nfrom fastapi import FastAPI\nfrom mcpomni_connect.omni_agent import OmniAgent\n\napp = FastAPI()\nagent = OmniAgent(...) 
# Your agent setup from above\n\n@app.post(\"/chat\")\nasync def chat(message: str, session_id: str = None):\n result = await agent.run(message, session_id)\n return {\"response\": result['response'], \"session_id\": result['session_id']}\n\n@app.get(\"/tools\") \nasync def get_tools():\n # Returns both MCP tools AND your custom tools automatically\n return agent.get_available_tools()\n```\n\n**Key Benefits:**\n\n- **One OmniAgent = MCP + Custom Tools + Memory + Events**\n- **Automatic tool discovery** from all connected MCP servers\n- **Built-in session management** and conversation history\n- **Real-time event streaming** for monitoring\n- **Easy integration** with any Python web framework\n\n---\n\n### Server Configuration Examples\n\n#### Basic OpenAI Configuration\n\n```json\n{\n \"AgentConfig\": {\n \"tool_call_timeout\": 30,\n \"max_steps\": 15,\n \"request_limit\": 1000,\n \"total_tokens_limit\": 100000\n },\n \"LLM\": {\n \"provider\": \"openai\",\n \"model\": \"gpt-4\",\n \"temperature\": 0.5,\n \"max_tokens\": 5000,\n \"max_context_length\": 30000,\n \"top_p\": 0\n },\n \"mcpServers\": {\n \"ev_assistant\": {\n \"transport_type\": \"streamable_http\",\n \"auth\": {\n \"method\": \"oauth\"\n },\n \"url\": \"http://localhost:8000/mcp\"\n },\n \"sse-server\": {\n \"transport_type\": \"sse\",\n \"url\": \"http://localhost:3000/sse\",\n \"headers\": {\n \"Authorization\": \"Bearer token\"\n },\n \"timeout\": 60,\n \"sse_read_timeout\": 120\n },\n \"streamable_http-server\": {\n \"transport_type\": \"streamable_http\",\n \"url\": \"http://localhost:3000/mcp\",\n \"headers\": {\n \"Authorization\": \"Bearer token\"\n },\n \"timeout\": 60,\n \"sse_read_timeout\": 120\n }\n }\n}\n```\n\n#### Anthropic Claude Configuration\n\n```json\n{\n \"LLM\": {\n \"provider\": \"anthropic\",\n \"model\": \"claude-3-5-sonnet-20241022\",\n \"temperature\": 0.7,\n \"max_tokens\": 4000,\n \"max_context_length\": 200000,\n \"top_p\": 0.95\n }\n}\n```\n\n#### Groq 
Configuration\n\n```json\n{\n \"LLM\": {\n \"provider\": \"groq\",\n \"model\": \"llama-3.1-8b-instant\",\n \"temperature\": 0.5,\n \"max_tokens\": 2000,\n \"max_context_length\": 8000,\n \"top_p\": 0.9\n }\n}\n```\n\n#### Azure OpenAI Configuration\n\n```json\n{\n \"LLM\": {\n \"provider\": \"azureopenai\",\n \"model\": \"gpt-4\",\n \"temperature\": 0.7,\n \"max_tokens\": 2000,\n \"max_context_length\": 100000,\n \"top_p\": 0.95,\n \"azure_endpoint\": \"https://your-resource.openai.azure.com\",\n \"azure_api_version\": \"2024-02-01\",\n \"azure_deployment\": \"your-deployment-name\"\n }\n}\n```\n\n#### Ollama Local Model Configuration\n\n```json\n{\n \"LLM\": {\n \"provider\": \"ollama\",\n \"model\": \"llama3.1:8b\",\n \"temperature\": 0.5,\n \"max_tokens\": 5000,\n \"max_context_length\": 100000,\n \"top_p\": 0.7,\n \"ollama_host\": \"http://localhost:11434\"\n }\n}\n```\n\n#### OpenRouter Configuration\n\n```json\n{\n \"LLM\": {\n \"provider\": \"openrouter\",\n \"model\": \"anthropic/claude-3.5-sonnet\",\n \"temperature\": 0.7,\n \"max_tokens\": 4000,\n \"max_context_length\": 200000,\n \"top_p\": 0.95\n }\n}\n```\n\n### \ud83d\udd10 Authentication Methods\n\nMCPOmni Connect supports multiple authentication methods for secure server connections:\n\n#### OAuth 2.0 Authentication\n\n```json\n{\n \"server_name\": {\n \"transport_type\": \"streamable_http\",\n \"auth\": {\n \"method\": \"oauth\"\n },\n \"url\": \"http://your-server/mcp\"\n }\n}\n```\n\n#### Bearer Token Authentication\n\n```json\n{\n \"server_name\": {\n \"transport_type\": \"streamable_http\",\n \"headers\": {\n \"Authorization\": \"Bearer your-token-here\"\n },\n \"url\": \"http://your-server/mcp\"\n }\n}\n```\n\n#### Custom Headers\n\n```json\n{\n \"server_name\": {\n \"transport_type\": \"streamable_http\",\n \"headers\": {\n \"X-Custom-Header\": \"value\",\n \"Authorization\": \"Custom-Auth-Scheme token\"\n },\n \"url\": \"http://your-server/mcp\"\n }\n}\n```\n\n## \ud83d\udd04 Dynamic Server 
Configuration\n\nMCPOmni Connect supports dynamic server configuration through commands:\n\n#### Add New Servers\n\n```bash\n# Add one or more servers from a configuration file\n/add_servers:path/to/config.json\n```\n\nThe configuration file can include multiple servers with different authentication methods:\n\n```json\n{\n \"new-server\": {\n \"transport_type\": \"streamable_http\",\n \"auth\": {\n \"method\": \"oauth\"\n },\n \"url\": \"http://localhost:8000/mcp\"\n },\n \"another-server\": {\n \"transport_type\": \"sse\",\n \"headers\": {\n \"Authorization\": \"Bearer token\"\n },\n \"url\": \"http://localhost:3000/sse\"\n }\n}\n```\n\n#### Remove Servers\n\n```bash\n# Remove a server by its name\n/remove_server:server_name\n```\n\n## \ud83c\udfaf Usage\n\n### Interactive Commands\n\n- `/tools` - List all available tools across servers\n- `/prompts` - View available prompts\n- `/prompt:<name>/<args>` - Execute a prompt with arguments\n- `/resources` - List available resources\n- `/resource:<uri>` - Access and analyze a resource\n- `/debug` - Toggle debug mode\n- `/refresh` - Update server capabilities\n- `/memory` - Toggle Redis memory persistence (on/off)\n- `/mode:auto` - Switch to autonomous agentic mode\n- `/mode:chat` - Switch back to interactive chat mode\n- `/add_servers:<config.json>` - Add one or more servers from a configuration file\n- `/remove_server:<server_name>` - Remove a server by its name\n\n### Memory and Chat History\n\n```bash\n# Enable Redis memory persistence\n/memory\n\n# Check memory status\nMemory persistence is now ENABLED using Redis\n\n# Disable memory persistence\n/memory\n\n# Check memory status\nMemory persistence is now DISABLED\n```\n\n### Operation Modes\n\n```bash\n# Switch to autonomous mode\n/mode:auto\n\n# System confirms mode change\nNow operating in AUTONOMOUS mode. I will execute tasks independently.\n\n# Switch back to chat mode\n/mode:chat\n\n# System confirms mode change\nNow operating in CHAT mode. 
I will ask for approval before executing tasks.\n```\n\n### Mode Differences\n\n- **Chat Mode (Default)**\n\n - Requires explicit approval for tool execution\n - Interactive conversation style\n - Step-by-step task execution\n - Detailed explanations of actions\n\n- **Autonomous Mode**\n\n - Independent task execution\n - Self-guided decision making\n - Automatic tool selection and chaining\n - Progress updates and final results\n - Complex task decomposition\n - Error handling and recovery\n\n- **Orchestrator Mode**\n - Advanced planning for complex multi-step tasks\n - Strategic delegation across multiple MCP servers\n - Intelligent agent coordination and communication\n - Parallel task execution when possible\n - Dynamic resource allocation\n - Sophisticated workflow management\n - Real-time progress monitoring across agents\n - Adaptive task prioritization\n\n### Prompt Management\n\n```bash\n# List all available prompts\n/prompts\n\n# Basic prompt usage\n/prompt:weather/location=tokyo\n\n# Prompt with multiple arguments depends on the server prompt arguments requirements\n/prompt:travel-planner/from=london/to=paris/date=2024-03-25\n\n# JSON format for complex arguments\n/prompt:analyze-data/{\n \"dataset\": \"sales_2024\",\n \"metrics\": [\"revenue\", \"growth\"],\n \"filters\": {\n \"region\": \"europe\",\n \"period\": \"q1\"\n }\n}\n\n# Nested argument structures\n/prompt:market-research/target=smartphones/criteria={\n \"price_range\": {\"min\": 500, \"max\": 1000},\n \"features\": [\"5G\", \"wireless-charging\"],\n \"markets\": [\"US\", \"EU\", \"Asia\"]\n}\n```\n\n### Advanced Prompt Features\n\n- **Argument Validation**: Automatic type checking and validation\n- **Default Values**: Smart handling of optional arguments\n- **Context Awareness**: Prompts can access previous conversation context\n- **Cross-Server Execution**: Seamless execution across multiple MCP servers\n- **Error Handling**: Graceful handling of invalid arguments with helpful messages\n- 
**Dynamic Help**: Detailed usage information for each prompt\n\n### AI-Powered Interactions\n\nThe client intelligently:\n\n- Chains multiple tools together\n- Provides context-aware responses\n- Automatically selects appropriate tools\n- Handles errors gracefully\n- Maintains conversation context\n\n### Model Support with LiteLLM\n\n- **Unified Model Access**\n - Single interface for 100+ models across all major providers\n - Automatic provider detection and routing\n - Consistent API regardless of underlying provider\n - Native function calling for compatible models\n - ReAct Agent fallback for models without function calling\n- **Supported Providers**\n - **OpenAI**: GPT-4, GPT-3.5, and all model variants\n - **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus\n - **Google**: Gemini Pro, Gemini Flash, PaLM models\n - **Groq**: Ultra-fast inference for Llama, Mixtral, Gemma\n - **DeepSeek**: DeepSeek-V3, DeepSeek-Coder, and specialized models\n - **Azure OpenAI**: Enterprise-grade OpenAI models\n - **OpenRouter**: Access to 200+ models from various providers\n - **Ollama**: Local model execution with privacy\n- **Advanced Features**\n - Automatic model capability detection\n - Dynamic tool execution based on model features\n - Intelligent fallback mechanisms\n - Provider-specific optimizations\n\n### Token & Usage Management\n\nMCPOmni Connect provides advanced controls and visibility over your API usage and resource limits.\n\n#### View API Usage Stats\n\nUse the `/api_stats` command to see your current usage:\n\n```bash\n/api_stats\n```\n\nThis will display:\n\n- **Total tokens used**\n- **Total requests made**\n- **Total response tokens**\n\n#### Set Usage Limits\n\nYou can set limits to automatically stop execution when thresholds are reached:\n\n- **Total Request Limit:**\n Set the maximum number of requests allowed in a session.\n- **Total Token Usage Limit:**\n Set the maximum number of tokens that can be used.\n- 
**Tool Call Timeout:**\n Set the maximum time (in seconds) a tool call can take before being terminated.\n- **Max Steps:**\n Set the maximum number of steps the agent can take before stopping.\n\nYou can configure these in your `servers_config.json` under the `AgentConfig` section:\n\n```json\n\"AgentConfig\": {\n \"tool_call_timeout\": 30, // Tool call timeout in seconds\n \"max_steps\": 15, // Max number of steps before termination\n \"request_limit\": 1000, // Max number of requests allowed\n \"total_tokens_limit\": 100000 // Max number of tokens allowed\n}\n```\n\n- When any of these limits are reached, the agent will automatically stop running and notify you.\n\n#### Example Commands\n\n```bash\n# Check your current API usage and limits\n/api_stats\n\n# Set a new request limit (example)\n# (This can be done by editing servers_config.json or via future CLI commands)\n```\n\n## \ud83d\udd27 Advanced Features\n\n### Tool Orchestration\n\n```python\n# Example of automatic tool chaining if the tool is available in the servers connected\nUser: \"Find charging stations near Silicon Valley and check their current status\"\n\n# Client automatically:\n1. Uses Google Maps API to locate Silicon Valley\n2. Searches for charging stations in the area\n3. Checks station status through EV network API\n4. Formats and presents results\n```\n\n### Resource Analysis\n\n```python\n# Automatic resource processing\nUser: \"Analyze the contents of /path/to/document.pdf\"\n\n# Client automatically:\n1. Identifies resource type\n2. Extracts content\n3. Processes through LLM\n4. 
Provides intelligent summary\n```\n\n### Demo\n\n\n\n## \ud83d\udd0d Troubleshooting\n\n> \ud83d\udcd6 **For comprehensive configuration help**, see the [\u2699\ufe0f Configuration Guide](#%EF%B8%8F-configuration-guide) section above, which covers:\n>\n> - Config file differences (`.env` vs `servers_config.json`)\n> - Transport type selection and authentication\n> - OAuth server behavior explanation\n> - Common connection issues and solutions\n\n### Common Issues and Solutions\n\n1. **Connection Issues**\n\n ```bash\n Error: Could not connect to MCP server\n ```\n\n - Check if the server is running\n - Verify server configuration in `servers_config.json`\n - Ensure network connectivity\n - Check server logs for errors\n - **See [Transport Types & Authentication](#-transport-types--authentication) for detailed setup**\n\n2. **API Key Issues**\n\n ```bash\n Error: Invalid API key\n ```\n\n - Verify API key is correctly set in `.env`\n - Check if API key has required permissions\n - Ensure API key is for correct environment (production/development)\n - **See [Configuration Files Overview](#configuration-files-overview) for correct setup**\n\n3. **Redis Connection**\n\n ```bash\n Error: Could not connect to Redis\n ```\n\n - Verify Redis server is running\n - Check Redis connection settings in `.env`\n - Ensure Redis password is correct (if configured)\n\n4. **Tool Execution Failures**\n ```bash\n Error: Tool execution failed\n ```\n - Check tool availability on connected servers\n - Verify tool permissions\n - Review tool arguments for correctness\n\n### Debug Mode\n\nEnable debug mode for detailed logging:\n\n```bash\n/debug\n```\n\nFor additional support, please:\n\n1. Check the [Issues](https://github.com/Abiorh001/mcp_omni_connect/issues) page\n2. Review closed issues for similar problems\n3. Open a new issue with detailed information if needed\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions! 
## 🤝 Contributing

We welcome contributions! See our [Contributing Guide](CONTRIBUTING.md) for details.

## 📖 Documentation

Complete documentation is available at: **[MCPOmni Connect Docs](https://abiorh001.github.io/mcp_omni_connect)**

To build the documentation locally:

```bash
./docs.sh serve   # Start a development server at http://127.0.0.1:8080
./docs.sh build   # Build the static documentation
```

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 📬 Contact & Support

- **Author**: Abiola Adeshina
- **Email**: abiolaadedayo1993@gmail.com
- **GitHub Issues**: [Report a bug](https://github.com/Abiorh001/mcp_omni_connect/issues)

---

<p align="center">Built with ❤️ by the MCPOmni Connect Team</p>
"bugtrack_url": null,
"license": "MIT",
"summary": "Universal MCP Client with multi-transport support and LLM-powered tool routing",
"version": "0.1.19",
"project_urls": {
"Issues": "https://github.com/Abiorh001/mcp_omni_connect/issues",
"Repository": "https://github.com/Abiorh001/mcp_omni_connect"
},
"split_keywords": [
"agent",
" ai",
" automation",
" git",
" llm",
" mcp"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "f3611a047ac8dcbac245bb563ed100ffd7fac76a16a3d6d47f9759222ff56a45",
"md5": "27eac60510cbe5b10a9a5fa9cc1c3d88",
"sha256": "c38db5cf1ed7cb9c61551790ee8c68c0da78833dafc4bc40ec2972a496b6e003"
},
"downloads": -1,
"filename": "mcpomni_connect-0.1.19-py3-none-any.whl",
"has_sig": false,
"md5_digest": "27eac60510cbe5b10a9a5fa9cc1c3d88",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 143950,
"upload_time": "2025-08-05T14:36:15",
"upload_time_iso_8601": "2025-08-05T14:36:15.933785Z",
"url": "https://files.pythonhosted.org/packages/f3/61/1a047ac8dcbac245bb563ed100ffd7fac76a16a3d6d47f9759222ff56a45/mcpomni_connect-0.1.19-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4a405038d7c5f4956b27daf596981356815b6c12857656dda18392ebc85a4b3b",
"md5": "d14ab2dd79da28f75bfe5c356ec68381",
"sha256": "8054127817741d4740961d550e8b251a50512fe788fafd5cd34de602767c7592"
},
"downloads": -1,
"filename": "mcpomni_connect-0.1.19.tar.gz",
"has_sig": false,
"md5_digest": "d14ab2dd79da28f75bfe5c356ec68381",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 128437,
"upload_time": "2025-08-05T14:36:17",
"upload_time_iso_8601": "2025-08-05T14:36:17.877761Z",
"url": "https://files.pythonhosted.org/packages/4a/40/5038d7c5f4956b27daf596981356815b6c12857656dda18392ebc85a4b3b/mcpomni_connect-0.1.19.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-05 14:36:17",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Abiorh001",
"github_project": "mcp_omni_connect",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "mcpomni-connect"
}