| Name | omnicoreagent |
| Version | 0.2.0 |
| Summary | OmniCoreAgent AI Framework - Universal MCP Client with multi-transport support and LLM-powered tool routing |
| Upload time | 2025-09-01 01:41:23 |
| Requires Python | >=3.10 |
| License | MIT |
| Keywords | agent, ai, automation, framework, git, llm, mcp |
# OmniCoreAgent - Complete AI Development Platform
> **ℹ️ Project Renaming Notice:**
> This project was previously known as **`mcp_omni-connect`**.
> It has been renamed to **`omnicoreagent`** to reflect its evolution into a complete AI development platform, combining both a world-class MCP client and a powerful AI agent builder framework.
> **⚠️ Breaking Change:**
> The package name has changed from **`mcp_omni-connect`** to **`omnicoreagent`**.
> Please uninstall the old package and install the new one:
>
> ```bash
> pip uninstall mcp_omni-connect
> pip install omnicoreagent
> ```
>
> All imports and CLI commands now use `omnicoreagent`.
> Update your code and scripts accordingly.
<p align="center">
<img src="Gemini_Generated_Image_pfgm65pfgm65pfgmcopy.png" alt="OmniCoreAgent Logo" width="250"/>
</p>
**OmniCoreAgent** is the complete AI development platform that combines two powerful systems into one revolutionary ecosystem. Build production-ready AI agents with **OmniAgent**, use the advanced MCP client with **MCPOmni Connect**, or combine both for maximum power.
## Table of Contents
### **Getting Started**
- [Quick Start (2 minutes)](#quick-start-2-minutes)
- [What is OmniCoreAgent?](#what-is-omnicoreagent)
- [What Can You Build? (Examples)](#what-can-you-build-see-real-examples)
- [Choose Your Path](#choose-your-path)
### **OmniAgent System**
- [OmniAgent Features](#omniagent---revolutionary-ai-agent-builder)
- [Local Tools System](#local-tools-system---create-custom-ai-tools)
- [Building Custom Agents](#building-custom-agents)
- [OmniAgent Examples](#omniagent-examples)
### **MCPOmni Connect System**
- [MCP Client Features](#mcpomni-connect---world-class-mcp-client)
- [Transport Types & Authentication](#transport-types--authentication)
- [CLI Commands](#cli-commands)
- [MCP Usage Examples](#mcp-usage-examples)
### **Core Information**
- [Platform Features](#platform-features)
- [Architecture](#architecture)
### **Setup & Configuration**
- [Configuration Guide](#configuration-guide)
- [Vector Database Setup](#vector-database--smart-memory-setup-complete-guide)
- [Tracing & Observability](#opik-tracing--observability-setup-latest-feature)
### **Development & Integration**
- [Developer Integration](#developer-integration)
- [Testing](#testing)
### **Reference & Support**
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)
- [Documentation](#documentation)
---
## Quick Start (2 minutes)
**New to OmniCoreAgent?** Get started in 2 minutes:
### Step 1: Install
```bash
# Install with uv (recommended)
uv add omnicoreagent
# Or with pip
pip install omnicoreagent
```
### Step 2: Set API Key
```bash
# Create .env file with your LLM API key
echo "LLM_API_KEY=your_openai_api_key_here" > .env
```
### Step 3: Run Examples
```bash
# Try OmniAgent with custom tools
python examples/run_omni_agent.py
# Try MCPOmni Connect (MCP client)
python examples/run.py
```
### What Can You Build?
- **Custom AI Agents**: Register your Python functions as AI tools with OmniAgent
- **MCP Integration**: Connect to any Model Context Protocol server with MCPOmni Connect
- **Smart Memory**: Vector databases for long-term AI memory
- **Background Agents**: Self-flying autonomous task execution
- **Production Monitoring**: Opik tracing for performance optimization
**Next**: Check out the [Examples](#what-can-you-build-see-real-examples) or jump to the [Configuration Guide](#configuration-guide)
---
## **What is OmniCoreAgent?**
OmniCoreAgent is a comprehensive AI development platform consisting of two integrated systems:
### 1. **OmniAgent** *(Revolutionary AI Agent Builder)*
Create intelligent, autonomous agents with custom capabilities:
- **Local Tools System** - Register your Python functions as AI tools
- **Self-Flying Background Agents** - Autonomous task execution
- **Multi-Tier Memory** - Vector databases, Redis, PostgreSQL, MySQL, SQLite
- **Real-Time Events** - Live monitoring and streaming
- **MCP + Local Tool Orchestration** - Seamlessly combine both tool types
### 2. **MCPOmni Connect** *(World-Class MCP Client)*
Advanced command-line interface for connecting to any Model Context Protocol server with:
- **Multi-Protocol Support** - stdio, SSE, HTTP, Docker, NPX transports
- **Authentication** - OAuth 2.0, Bearer tokens, custom headers
- **Advanced Memory** - Redis, Database, Vector storage with intelligent retrieval
- **Event Streaming** - Real-time monitoring and debugging
- **Agentic Modes** - ReAct, Orchestrator, and Interactive chat modes
**Perfect for:** Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.
---
## **What Can You Build? (See Real Examples)**
### **OmniAgent System** *(Build Custom AI Agents)*
```bash
# Complete OmniAgent demo - All features showcase
python examples/omni_agent_example.py
# Advanced OmniAgent patterns - Study 12+ tool examples
python examples/run_omni_agent.py
# Self-flying background agents - Autonomous task execution
python examples/background_agent_example.py
# Web server with UI - Interactive interface for OmniAgent
python examples/web_server.py
# Open http://localhost:8000 for web interface
# enhanced_web_server.py - Advanced web patterns
python examples/enhanced_web_server.py
# FastAPI implementation - Clean API endpoints
python examples/fast_api_impl.py
```
### **MCPOmni Connect System** *(Connect to MCP Servers)*
```bash
# Basic MCP client usage - Simple connection patterns
python examples/basic_mcp.py
# Advanced MCP CLI - Full-featured client interface
python examples/run.py
```
### **LLM Provider Configuration** *(Multiple Providers)*
All LLM provider examples consolidated in:
```bash
# See examples/llm_usage-config.json for:
# - Anthropic Claude models
# - Groq ultra-fast inference
# - Azure OpenAI enterprise
# - Ollama local models
# - OpenRouter 200+ models
# - And more providers...
```
---
## **Choose Your Path**
### When to Use What?
| **Use Case** | **Choose** | **Best For** |
|-------------|------------|--------------|
| Build custom AI apps | **OmniAgent** | Web apps, automation, custom workflows |
| Connect to MCP servers | **MCPOmni Connect** | Daily workflow, server management, debugging |
| Learn & experiment | **Examples** | Understanding patterns, proof of concepts |
| Production deployment | **Both** | Full-featured AI applications |
### **Path 1: Build Custom AI Agents (OmniAgent)**
Perfect for: Custom applications, automation, web apps
```bash
# Study the examples to learn patterns:
python examples/basic.py # Simple introduction
python examples/run_omni_agent.py # Complete OmniAgent demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py # Web interface
python examples/enhanced_web_server.py # Advanced patterns
# Then build your own using the patterns!
```
### **Path 2: Advanced MCP Client (MCPOmni Connect)**
Perfect for: Daily workflow, server management, debugging
```bash
# Basic MCP client - Simple connection patterns
python examples/basic_mcp.py
# World-class MCP client with advanced features
python examples/run.py
# Features: Connect to MCP servers, agentic modes, advanced memory
```
### **Path 3: Study Tool Patterns (Learning)**
Perfect for: Learning, understanding patterns, experimentation
```bash
# Comprehensive testing interface - Study 12+ EXAMPLE tools
python examples/run_omni_agent.py
# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools
```
**Pro Tip:** Most developers use **both paths** - MCPOmni Connect for daily workflow and OmniAgent for building custom solutions!
---
# OmniAgent - Revolutionary AI Agent Builder
**Introducing OmniAgent** - A revolutionary AI agent system that brings plug-and-play intelligence to your applications!
## OmniAgent Revolutionary Capabilities
- **Multi-tier memory management** with vector search and semantic retrieval
- **XML-based reasoning** with strict tool formatting for reliable execution
- **Advanced tool orchestration** - Seamlessly combine MCP server tools + local tools
- **Self-flying background agents** with autonomous task execution
- **Real-time event streaming** for monitoring and debugging
- **Production-ready infrastructure** with error handling and retry logic
- **Plug-and-play intelligence** - No complex setup required!
## **LOCAL TOOLS SYSTEM** - Create Custom AI Tools!
One of OmniAgent's most powerful features is the ability to **register your own Python functions as AI tools**. The agent can then intelligently use these tools to complete tasks.
### Quick Tool Registration Example
```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry
tool_registry = ToolRegistry()

# Register your custom tools with a simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    """Calculate the area of a rectangle."""
    area = length * width
    return f"Area of rectangle ({length} x {width}): {area} square units"

@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
    """Analyze text and return word count and character count."""
    words = len(text.split())
    chars = len(text)
    return f"Analysis: {words} words, {chars} characters"

@tool_registry.register_tool("system_status")
def get_system_status() -> str:
    """Get current system status information."""
    import platform
    import time
    return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"

# Use tools with OmniAgent
agent = OmniAgent(
    name="my_agent",
    local_tools=tool_registry,  # Your custom tools!
    # ... other config
)

# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")
```
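Note that `agent.run(...)` is awaited, so in a standalone script it has to be driven by an event loop. A minimal, generic pattern with plain `asyncio` (the stand-in coroutine marks where the agent call would go):

```python
import asyncio

async def main() -> str:
    # In a real script this would be: return await agent.run("...")
    await asyncio.sleep(0)  # stand-in for the awaited agent call
    return "done"

result = asyncio.run(main())
print(result)
```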
### Tool Registration Patterns (Create Your Own!)
**No built-in tools** - You create exactly what you need! Study these EXAMPLE patterns from `run_omni_agent.py`:
**Mathematical Tools Examples:**
```python
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    area = length * width
    return f"Area: {area} square units"

@tool_registry.register_tool("analyze_numbers")
def analyze_numbers(numbers: str) -> str:
    num_list = [float(x.strip()) for x in numbers.split(",")]
    return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
```
**System Tools Examples:**
```python
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
    import platform
    return f"OS: {platform.system()}, Python: {platform.python_version()}"
```
**File Tools Examples:**
```python
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
    import os
    files = os.listdir(path)
    return f"Found {len(files)} items in {path}"
```
### More Tool Registration Patterns
**1. Simple Function Tools:**
```python
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
    """Get weather information for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"
```
**2. Complex Analysis Tools:**
```python
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
    except json.JSONDecodeError:
        return "Invalid data format"
    if analysis_type == "summary":
        return f"Data contains {len(data_obj)} items"
    elif analysis_type == "detailed":
        # Complex analysis logic
        return "Detailed analysis results..."
    return f"Unknown analysis type: {analysis_type}"
```
**3. File Processing Tools:**
```python
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, "r") as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, "r") as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        return f"Unknown operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"
```
## Building Custom Agents
### Basic Agent Setup
```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
    model_config={
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7
    },
    agent_config={
        "tool_call_timeout": 30,
        "max_steps": 10,
        "request_limit": 0,        # 0 = unlimited (production mode), set > 0 to enable limits
        "total_tokens_limit": 0,   # 0 = unlimited (production mode), set > 0 to enable limits
        "memory_results_limit": 5,          # Number of memory results to retrieve (1-100, default: 5)
        "memory_similarity_threshold": 0.5  # Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
    },
    # Your custom local tools
    local_tools=tool_registry,
    # MCP servers - automatically connected!
    mcp_tools=[
        {
            "name": "filesystem",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
        },
        {
            "name": "github",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer your-token"}
        }
    ],
    embedding_config={
        "provider": "openai",
        "model": "text-embedding-3-small",
        "dimensions": 1536,
        "encoding_format": "float",
    },
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
```
## OmniAgent Examples
### Basic Agent Usage
```bash
# Complete OmniAgent demo with custom tools
python examples/omni_agent_example.py
# Advanced patterns with 12+ tool examples
python examples/run_omni_agent.py
```
### Background Agents
```bash
# Self-flying autonomous agents
python examples/background_agent_example.py
```
### Web Applications
```bash
# FastAPI integration
python examples/fast_api_impl.py
# Full web interface
python examples/web_server.py
# Open http://localhost:8000
```
---
# MCPOmni Connect - World-Class MCP Client
The MCPOmni Connect system is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes.
## MCPOmni Connect Key Features
### Intelligent Agent System
- **ReAct Agent Mode**
  - Autonomous task execution with reasoning and action cycles
  - Independent decision-making without human intervention
  - Advanced problem-solving through iterative reasoning
  - Self-guided tool selection and execution
  - Complex task decomposition and handling
- **Orchestrator Agent Mode**
  - Strategic multi-step task planning and execution
  - Intelligent coordination across multiple MCP servers
  - Dynamic agent delegation and communication
  - Parallel task execution when possible
  - Sophisticated workflow management with real-time progress monitoring
- **Interactive Chat Mode**
  - Human-in-the-loop task execution with approval workflows
  - Step-by-step guidance and explanations
  - Educational mode for understanding AI decision processes
### Universal Connectivity
- **Multi-Protocol Support**
  - Native support for stdio transport
  - Server-Sent Events (SSE) for real-time communication
  - Streamable HTTP for efficient data streaming
  - Docker container integration
  - NPX package execution
  - Extensible transport layer for future protocols
- **Authentication Support**
  - OAuth 2.0 authentication flow
  - Bearer token authentication
  - Custom header support
  - Secure credential management
- **Agentic Operation Modes**
  - Seamless switching between chat, autonomous, and orchestrator modes
  - Context-aware mode selection based on task complexity
  - Persistent state management across mode transitions
## Transport Types & Authentication
MCPOmni Connect supports multiple ways to connect to MCP servers:
### 1. **stdio** - Direct Process Communication
**Use when**: Connecting to local MCP servers that run as separate processes
```json
{
  "server-name": {
    "transport_type": "stdio",
    "command": "uvx",
    "args": ["mcp-server-package"]
  }
}
```
- **No authentication needed**
- **No OAuth server started**
- Most common for local development
### 2. **sse** - Server-Sent Events
**Use when**: Connecting to HTTP-based MCP servers using Server-Sent Events
```json
{
  "server-name": {
    "transport_type": "sse",
    "url": "http://your-server.com:4010/sse",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60,
    "sse_read_timeout": 120
  }
}
```
- **Uses Bearer token or custom headers**
- **No OAuth server started**
### 3. **streamable_http** - HTTP with Optional OAuth
**Use when**: Connecting to HTTP-based MCP servers with or without OAuth
**Without OAuth (Bearer Token):**
```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "url": "http://your-server.com:4010/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60
  }
}
```
- **Uses Bearer token or custom headers**
- **No OAuth server started**
**With OAuth:**
```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server.com:4010/mcp"
  }
}
```
- **OAuth callback server automatically starts on `http://localhost:3000`**
- **This is hardcoded and cannot be changed**
- **Required for OAuth flow to work properly**
### OAuth Server Behavior
**Important**: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server.
#### What You'll See:
```
Started callback server on http://localhost:3000
```
#### Key Points:
- **This is normal behavior** - not an error
- **The address `http://localhost:3000` is hardcoded** and cannot be changed
- **The server only starts when** you have `"auth": {"method": "oauth"}` in your config
- **The server stops** when the application shuts down
- **Only used for OAuth token handling** - no other purpose
#### When OAuth is NOT Used:
- Remove the entire `"auth"` section from your server configuration
- Use `"headers"` with `"Authorization": "Bearer token"` instead
- No OAuth server will start
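Conceptually, all the callback server does is receive the provider's redirect on `localhost:3000` and read the authorization `code` query parameter out of it. A toy sketch of that parsing step (illustrative only, not MCPOmni Connect's actual implementation):

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(callback_url: str):
    """Pull the OAuth `code` query parameter out of a redirect URL."""
    query = parse_qs(urlparse(callback_url).query)
    return query.get("code", [None])[0]

print(extract_auth_code("http://localhost:3000/callback?code=abc123&state=xyz"))  # abc123
```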
## CLI Commands
### Memory Store Management:
```bash
# Switch between memory backends
/memory_store:in_memory # Fast in-memory storage (default)
/memory_store:redis # Redis persistent storage
/memory_store:database # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db # PostgreSQL
/memory_store:database:mysql://user:pass@host/db # MySQL
/memory_store:mongodb          # MongoDB persistent storage
/memory_store:mongodb:your_mongodb_connection_string    # MongoDB with custom URI
# Memory strategy configuration
/memory_mode:sliding_window:10 # Keep last 10 messages
/memory_mode:token_budget:5000 # Keep under 5000 tokens
```
### Event Store Management:
```bash
# Switch between event backends
/event_store:in_memory # Fast in-memory events (default)
/event_store:redis_stream # Redis Streams for persistence
```
### Core MCP Operations:
```bash
/tools # List all available tools
/prompts # List all available prompts
/resources # List all available resources
/prompt:<name> # Execute a specific prompt
/resource:<uri> # Read a specific resource
/subscribe:<uri> # Subscribe to resource updates
/query <your_question> # Ask questions using tools
```
### Enhanced Commands:
```bash
# Memory operations
/history # Show conversation history
/clear_history # Clear conversation history
/save_history <file> # Save history to file
/load_history <file> # Load history from file
# Server management
/add_servers:<config.json> # Add servers from config
/remove_server:<server_name> # Remove specific server
/refresh # Refresh server capabilities
# Agentic modes
/mode:auto # Switch to autonomous agentic mode
/mode:orchestrator # Switch to multi-server orchestration
/mode:chat # Switch to interactive chat mode
# Debugging and monitoring
/debug # Toggle debug mode
/api_stats # Show API usage statistics
```
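All of these commands share a simple colon-separated shape (`/name:arg1:arg2`, with `/query` taking free text after a space). If you script around the CLI, splitting them is straightforward; `parse_command` below is a hypothetical helper for illustration, not part of the package:

```python
def parse_command(line: str):
    """Split '/memory_mode:sliding_window:10' into ('memory_mode', ['sliding_window', '10'])."""
    head, _, rest = line.lstrip("/").partition(" ")
    name, *args = head.split(":")
    if rest:
        args.append(rest)  # free-text tail, e.g. the question after /query
    return name, args

print(parse_command("/memory_mode:sliding_window:10"))  # ('memory_mode', ['sliding_window', '10'])
print(parse_command("/query what changed today"))       # ('query', ['what changed today'])
```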
## MCP Usage Examples
### Basic MCP Client
```bash
# Launch the basic MCP client
python examples/basic_mcp.py
```
### Advanced MCP CLI
```bash
# Launch the advanced MCP CLI
python examples/advanced_mcp.py
# Core MCP client commands:
/tools # List all available tools
/prompts # List all available prompts
/resources # List all available resources
/prompt:<name> # Execute a specific prompt
/resource:<uri> # Read a specific resource
/subscribe:<uri> # Subscribe to resource updates
/query <your_question> # Ask questions using tools
# Advanced platform features:
/memory_store:redis # Switch to Redis memory
/event_store:redis_stream # Switch to Redis events
/add_servers:<config.json> # Add MCP servers dynamically
/remove_server:<name> # Remove MCP server
/mode:auto # Switch to autonomous agentic mode
/mode:orchestrator # Switch to multi-server orchestration
```
---
## Platform Features
> **Want to start building right away?** Jump to [Quick Start](#quick-start-2-minutes) | [Examples](#what-can-you-build-see-real-examples) | [Configuration](#configuration-guide)
### AI-Powered Intelligence
- **Unified LLM Integration with LiteLLM**
  - Single unified interface for all AI providers
  - Support for 100+ models across providers, including:
    - OpenAI (GPT-4, GPT-3.5, etc.)
    - Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
    - Google (Gemini Pro, Gemini Flash, etc.)
    - Groq (Llama, Mixtral, Gemma, etc.)
    - DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
    - Azure OpenAI
    - OpenRouter (access to 200+ models)
    - Ollama (local models)
  - Simplified configuration and reduced complexity
  - Dynamic system prompts based on available capabilities
  - Intelligent context management
  - Automatic tool selection and chaining
- **Universal model support through custom ReAct Agent**
  - Handles models without native function calling
  - Dynamic function execution based on user requests
  - Intelligent tool orchestration
### Security & Privacy
- **Explicit User Control**
  - All tool executions require explicit user approval in chat mode
  - Clear explanation of tool actions before execution
  - Transparent disclosure of data access and usage
- **Data Protection**
  - Strict data access controls
  - Server-specific data isolation
  - No unauthorized data exposure
- **Privacy-First Approach**
  - Minimal data collection
  - User data remains on specified servers
  - No cross-server data sharing without consent
- **Secure Communication**
  - Encrypted transport protocols
  - Secure API key management
  - Environment variable protection
### Advanced Memory Management
- **Multi-Backend Memory Storage**
  - **In-Memory**: Fast development storage
  - **Redis**: Persistent memory with real-time access
  - **Database**: PostgreSQL, MySQL, SQLite support
  - **MongoDB**: NoSQL document storage
  - **File Storage**: Save/load conversation history
  - Runtime switching: `/memory_store:redis`, `/memory_store:database:postgresql://user:pass@host/db`
- **Multi-Tier Memory Strategy**
  - **Short-term Memory**: Sliding window or token budget strategies
  - **Long-term Memory**: Vector database storage for semantic retrieval
  - **Episodic Memory**: Context-aware conversation history
  - Runtime configuration: `/memory_mode:sliding_window:5`, `/memory_mode:token_budget:3000`
- **Vector Database Integration**
  - **Multiple Provider Support**: MongoDB Atlas, ChromaDB (remote/cloud), and Qdrant (remote)
  - **Smart Fallback**: Automatic failover to local storage if remote fails
  - **Semantic Search**: Intelligent context retrieval across conversations
  - **Long-term & Episodic Memory**: Enable with `ENABLE_VECTOR_DB=true`
- **Real-Time Event Streaming**
  - **In-Memory Events**: Fast development event processing
  - **Redis Streams**: Persistent event storage and streaming
  - Runtime switching: `/event_store:redis_stream`, `/event_store:in_memory`
- **Advanced Tracing & Observability**
  - **Opik Integration**: Production-grade tracing and monitoring
  - **Real-time Performance Tracking**: Monitor LLM calls, tool executions, and agent performance
  - **Detailed Call Traces**: See exactly where time is spent in your AI workflows
  - **System Observability**: Understand bottlenecks and optimize performance
  - **Open Source**: Built on Opik, the open-source observability platform
  - **Easy Setup**: Just add your Opik credentials to start monitoring
  - **Zero Code Changes**: Automatic tracing with `@track` decorators
  - **Performance Insights**: Identify slow operations and optimization opportunities
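To make the two short-term strategies concrete, here is toy trimming logic under the same names (illustrative only; the framework's real implementation and token counting will differ):

```python
def sliding_window(messages: list, max_messages: int) -> list:
    """Keep only the most recent `max_messages` entries."""
    return messages[-max_messages:]

def token_budget(messages: list, max_tokens: int) -> list:
    """Drop oldest messages until a rough token estimate fits the budget."""
    estimate = lambda msg: max(1, len(msg) // 4)  # crude ~4 chars/token heuristic
    kept, total = [], 0
    for msg in reversed(messages):    # walk newest -> oldest
        if total + estimate(msg) > max_tokens:
            break
        kept.append(msg)
        total += estimate(msg)
    return list(reversed(kept))       # restore chronological order

history = [f"message {i}" for i in range(20)]
print(len(sliding_window(history, 10)))  # 10
```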
### Prompt Management
- **Advanced Prompt Handling**
  - Dynamic prompt discovery across servers
  - Flexible argument parsing (JSON and key-value formats)
  - Cross-server prompt coordination
  - Intelligent prompt validation
  - Context-aware prompt execution
  - Real-time prompt responses
  - Support for complex nested arguments
  - Automatic type conversion and validation
- **Client-Side Sampling Support**
  - Dynamic sampling configuration from client
  - Flexible LLM response generation
  - Customizable sampling parameters
  - Real-time sampling adjustments
### Tool Orchestration
- **Dynamic Tool Discovery & Management**
  - Automatic tool capability detection
  - Cross-server tool coordination
  - Intelligent tool selection based on context
  - Real-time tool availability updates
### Resource Management
- **Universal Resource Access**
  - Cross-server resource discovery
  - Unified resource addressing
  - Automatic resource type detection
  - Smart content summarization
### Server Management
- **Advanced Server Handling**
  - Multiple simultaneous server connections
  - Automatic server health monitoring
  - Graceful connection management
  - Dynamic capability updates
  - Flexible authentication methods
  - Runtime server configuration updates
## Architecture
> **Prefer hands-on learning?** Skip to [Examples](#what-can-you-build-see-real-examples) or [Configuration](#configuration-guide)
### Core Components
```
OmniCoreAgent Platform
├── OmniAgent System (Revolutionary Agent Builder)
│   ├── Local Tools Registry
│   ├── Background Agent Manager
│   ├── Custom Agent Creation
│   └── Agent Orchestration Engine
├── MCPOmni Connect System (World-Class MCP Client)
│   ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│   ├── Multi-Server Orchestration
│   ├── Authentication & Security
│   └── Connection Lifecycle Management
├── Shared Memory System (Both Systems)
│   ├── Multi-Backend Storage (Redis, DB, In-Memory)
│   ├── Vector Database Integration (ChromaDB, Qdrant)
│   ├── Memory Strategies (Sliding Window, Token Budget)
│   └── Session Management
├── Event System (Both Systems)
│   ├── In-Memory Event Processing
│   ├── Redis Streams for Persistence
│   └── Real-Time Event Monitoring
├── Tool Management (Both Systems)
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   ├── Local Python Tool Registration
│   └── Tool Execution Engine
└── AI Integration (Both Systems)
    ├── LiteLLM (100+ Models)
    ├── Context Management
    ├── ReAct Agent Processing
    └── Response Generation
```
---
## Installation
### **Minimal Setup (Just Python + API Key)**
**Required:**
- Python 3.10+
- LLM API key (OpenAI, Anthropic, Groq, etc.)
**Optional (for advanced features):**
- Redis (persistent memory)
- Vector DB (supports Qdrant, ChromaDB, MongoDB Atlas)
- Database (PostgreSQL/MySQL/SQLite)
- Opik account (for tracing/observability)
### **Install the Package**
```bash
# Option 1: UV (recommended - faster)
uv add omnicoreagent
# Option 2: Pip (standard)
pip install omnicoreagent
```
### **Quick Configuration**
**Minimal setup** (get started immediately):
```bash
# Just set your API key - that's it!
echo "LLM_API_KEY=your_api_key_here" > .env
```
**Advanced setup** (optional features):
> **Need more options?** See the complete [Configuration Guide](#configuration-guide) below for all environment variables, vector database setup, memory configuration, and advanced features.
---
## Configuration Guide
> **Quick Setup**: Only `LLM_API_KEY` is needed to get started! | **Detailed Setup**: [Vector DB](#vector-database--smart-memory-setup-complete-guide) | [Tracing](#opik-tracing--observability-setup-latest-feature)
### Environment Variables
Create a `.env` file with your configuration. **Only the LLM API key is required** - everything else is optional for advanced features.
#### **REQUIRED (Start Here)**
```bash
# ===============================================
# REQUIRED: AI Model API Key (Choose one provider)
# ===============================================
LLM_API_KEY=your_openai_api_key_here
# OR for other providers:
# LLM_API_KEY=your_anthropic_api_key_here
# LLM_API_KEY=your_groq_api_key_here
# LLM_API_KEY=your_azure_openai_api_key_here
# See examples/llm_usage-config.json for all provider configs
```
#### **OPTIONAL: Advanced Features**
```bash
# ===============================================
# Embeddings (OPTIONAL) - NEW!
# ===============================================
# For generating text embeddings (vector representations)
# Choose one provider - same key works for all embedding models
EMBEDDING_API_KEY=your_embedding_api_key_here
# OR for other providers:
# EMBEDDING_API_KEY=your_cohere_api_key_here
# EMBEDDING_API_KEY=your_huggingface_api_key_here
# EMBEDDING_API_KEY=your_mistral_api_key_here
# See docs/EMBEDDING_README.md for all provider configs
# ===============================================
# Tracing & Observability (OPTIONAL) - NEW!
# ===============================================
# For advanced monitoring and performance optimization
# Sign up: https://www.comet.com/signup?from=llm
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name
# ===============================================
# Vector Database (OPTIONAL) - Smart Memory
# ===============================================
ENABLE_VECTOR_DB=true # Default: false
# Choose ONE provider (required if ENABLE_VECTOR_DB=true):
# Option 1: Qdrant Remote
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
# Option 2: ChromaDB Remote
# OMNI_MEMORY_PROVIDER=chroma-remote
# CHROMA_HOST=localhost
# CHROMA_PORT=8000
# Option 3: ChromaDB Cloud
# OMNI_MEMORY_PROVIDER=chroma-cloud
# CHROMA_TENANT=your_tenant
# CHROMA_DATABASE=your_database
# CHROMA_API_KEY=your_api_key
# Option 4: MongoDB Atlas
# OMNI_MEMORY_PROVIDER=mongodb-remote
# MONGODB_URI="your_mongodb_connection_string"
# MONGODB_DB_NAME="db name"
# ===============================================
# Persistent Memory Storage (OPTIONAL)
# ===============================================
# These have sensible defaults - only set if you need custom configuration
# Redis - for memory_store_type="redis" (defaults to: redis://localhost:6379/0)
# REDIS_URL=redis://your-remote-redis:6379/0
# REDIS_URL=redis://:password@localhost:6379/0 # With password
# Database - for memory_store_type="database" (defaults to: sqlite:///omnicoreagent_memory.db)
# DATABASE_URL=postgresql://user:password@localhost:5432/omnicoreagent
# DATABASE_URL=mysql://user:password@localhost:3306/omnicoreagent
# Mongodb - for memory_store_type="mongodb" (defaults to: mongodb://localhost:27017/omnicoreagent)
# MONGODB_URI="your_mongodb_connection_string"
# MONGODB_DB_NAME="db name"
```
> **💡 Quick Start**: Just set `LLM_API_KEY` and you're ready to go! Add other variables only when you need advanced features.
### **Server Configuration (`servers_config.json`)**
For MCP server connections and agent settings:
#### Basic OpenAI Configuration
```json
{
"AgentConfig": {
"tool_call_timeout": 30,
"max_steps": 15,
"request_limit": 0, // 0 = unlimited (production mode), set > 0 to enable limits
"total_tokens_limit": 0, // 0 = unlimited (production mode), set > 0 to enable limits
"memory_results_limit": 5, // Number of memory results to retrieve (1-100, default: 5)
"memory_similarity_threshold": 0.5 // Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
},
"LLM": {
"provider": "openai",
"model": "gpt-4",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 30000,
"top_p": 0
},
"Embedding": {
"provider": "openai",
"model": "text-embedding-3-small",
"dimensions": 1536,
"encoding_format": "float"
},
"mcpServers": {
"ev_assistant": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"sse-server": {
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
},
"streamable_http-server": {
"transport_type": "streamable_http",
"url": "http://localhost:3000/mcp",
"headers": {
"Authorization": "Bearer token"
},
"timeout": 60,
"sse_read_timeout": 120
}
}
}
```
#### Multiple Provider Examples
**Anthropic Claude Configuration**
```json
{
"LLM": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-20241022",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
```
**Groq Configuration**
```json
{
"LLM": {
"provider": "groq",
"model": "llama-3.1-8b-instant",
"temperature": 0.5,
"max_tokens": 2000,
"max_context_length": 8000,
"top_p": 0.9
}
}
```
**Azure OpenAI Configuration**
```json
{
"LLM": {
"provider": "azureopenai",
"model": "gpt-4",
"temperature": 0.7,
"max_tokens": 2000,
"max_context_length": 100000,
"top_p": 0.95,
"azure_endpoint": "https://your-resource.openai.azure.com",
"azure_api_version": "2024-02-01",
"azure_deployment": "your-deployment-name"
}
}
```
**Ollama Local Model Configuration**
```json
{
"LLM": {
"provider": "ollama",
"model": "llama3.1:8b",
"temperature": 0.5,
"max_tokens": 5000,
"max_context_length": 100000,
"top_p": 0.7,
"ollama_host": "http://localhost:11434"
}
}
```
**OpenRouter Configuration**
```json
{
"LLM": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet",
"temperature": 0.7,
"max_tokens": 4000,
"max_context_length": 200000,
"top_p": 0.95
}
}
```
### 🔐 Authentication Methods
OmniCoreAgent supports multiple authentication methods for secure server connections:
#### OAuth 2.0 Authentication
```json
{
"server_name": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://your-server/mcp"
}
}
```
#### Bearer Token Authentication
```json
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"Authorization": "Bearer your-token-here"
},
"url": "http://your-server/mcp"
}
}
```
#### Custom Headers
```json
{
"server_name": {
"transport_type": "streamable_http",
"headers": {
"X-Custom-Header": "value",
"Authorization": "Custom-Auth-Scheme token"
},
"url": "http://your-server/mcp"
}
}
```
## 🔄 Dynamic Server Configuration
OmniCoreAgent supports dynamic server configuration through commands:
#### Add New Servers
```bash
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
```
The configuration file can include multiple servers with different authentication methods:
```json
{
"new-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
},
"another-server": {
"transport_type": "sse",
"headers": {
"Authorization": "Bearer token"
},
"url": "http://localhost:3000/sse"
}
}
```
#### Remove Servers
```bash
# Remove a server by its name
/remove_server:server_name
```
---
## 🧠 Vector Database & Smart Memory Setup (Complete Guide)
OmniCoreAgent provides advanced memory capabilities through vector databases for intelligent, semantic search and long-term memory.
#### **⚡ Quick Start (Choose Your Provider)**
```bash
# Enable vector memory - you MUST choose a provider
ENABLE_VECTOR_DB=true
# Option 1: Qdrant (recommended)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
# Option 2: ChromaDB Remote
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000
# Option 3: ChromaDB Cloud
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key
# Option 4: MongoDB Atlas
OMNI_MEMORY_PROVIDER=mongodb-remote
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"
# Disable vector memory (default)
ENABLE_VECTOR_DB=false
```
#### **🧠 Vector Database Providers**
**1. Qdrant Remote**
```bash
# Install and run Qdrant
docker run -p 6333:6333 qdrant/qdrant
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
```
**2. MongoDB Atlas**
```bash
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=mongodb-remote
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"
```
**3. ChromaDB Remote**
```bash
# Install and run ChromaDB server
docker run -p 8000:8000 chromadb/chroma
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000
```
**4. ChromaDB Cloud**
```bash
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key
```
#### **✨ What You Get**
- **Long-term Memory**: Persistent storage across sessions
- **Episodic Memory**: Context-aware conversation history
- **Semantic Search**: Find relevant information by meaning, not exact text
- **Multi-session Context**: Remember information across different conversations
- **Automatic Summarization**: Intelligent memory compression for efficiency
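Semantic search ranks stored memories by vector similarity rather than exact text match. The underlying idea can be sketched with toy vectors (this is an illustration of the concept, not the library's internal implementation; real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (made-up values for illustration)
query = [0.9, 0.1, 0.0]
memories = {
    "user prefers dark mode": [0.88, 0.15, 0.05],
    "meeting at 3pm tuesday": [0.05, 0.20, 0.95],
}

# Rank stored memories by semantic closeness to the query
ranked = sorted(memories, key=lambda m: cosine_similarity(query, memories[m]), reverse=True)
print(ranked[0])  # the semantically closest memory
```

This is why a query like "what theme does the user like?" can retrieve "user prefers dark mode" even though the two strings share no keywords.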
---
## 📊 Opik Tracing & Observability Setup (Latest Feature)
**Monitor and optimize your AI agents with production-grade observability:**
#### **🚀 Quick Setup**
1. **Sign up for Opik** (Free & Open Source):
- Visit: **[https://www.comet.com/signup?from=llm](https://www.comet.com/signup?from=llm)**
- Create your account and get your API key and workspace name
2. **Add to your `.env` file** (see [Environment Variables](#environment-variables) above):
```bash
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name
```
#### **✨ What You Get Automatically**
Once configured, OmniCoreAgent automatically tracks:
- **🔥 LLM Call Performance**: Execution time, token usage, response quality
- **🛠️ Tool Execution Traces**: Which tools were used and how long they took
- **🧠 Memory Operations**: Vector DB queries, memory retrieval performance
- **🤖 Agent Workflow**: Complete trace of multi-step agent reasoning
- **📊 System Bottlenecks**: Identify exactly where time is spent
#### **📈 Benefits**
- **Performance Optimization**: See which LLM calls or tools are slow
- **Cost Monitoring**: Track token usage and API costs
- **Debugging**: Understand agent decision-making processes
- **Production Monitoring**: Real-time observability for deployed agents
- **Zero Code Changes**: Works automatically with existing agents
#### **📋 Example: What You'll See**
```
Agent Execution Trace:
├── agent_execution: 4.6s
│   ├── tools_registry_retrieval: 0.02s ✅
│   ├── memory_retrieval_step: 0.08s ✅
│   ├── llm_call: 4.5s ⚠️ (bottleneck identified!)
│   ├── response_parsing: 0.01s ✅
│   └── action_execution: 0.03s ✅
```
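Flagging a bottleneck in a trace like this boils down to finding the span with the largest share of the total duration. A hypothetical sketch of that idea (span names and times mirror the example trace; this is not Opik's API):

```python
# Made-up span durations in seconds, taken from the example trace above
spans = {
    "tools_registry_retrieval": 0.02,
    "memory_retrieval_step": 0.08,
    "llm_call": 4.5,
    "response_parsing": 0.01,
    "action_execution": 0.03,
}

total = sum(spans.values())
bottleneck = max(spans, key=spans.get)  # span with the largest duration
print(f"{bottleneck}: {spans[bottleneck]:.1f}s "
      f"({spans[bottleneck] / total:.0%} of {total:.2f}s total)")
```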
**💡 Pro Tip**: Opik is completely optional. If you don't set the credentials, OmniCoreAgent works normally without tracing.
---
## 🧑‍💻 Developer Integration
OmniCoreAgent is not just a CLI tool; it's also a powerful Python library. Both systems can be used programmatically in your applications.
### Using OmniAgent in Applications
```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry
# Create tool registry for custom tools
tool_registry = ToolRegistry()
@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
"""Analyze data and return insights."""
return f"Analysis complete: {len(data)} characters processed"
# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
name="my_app_agent",
system_instruction="You are a helpful assistant.",
model_config={
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7
},
local_tools=tool_registry, # Your custom tools!
memory_store=MemoryRouter(memory_store_type="redis"),
event_router=EventRouter(event_store_type="in_memory")
)
# Use in your app (await from inside an async function)
result = await agent.run("Analyze some sample data")
```
### FastAPI Integration with OmniAgent
OmniAgent makes building APIs incredibly simple. See [`examples/web_server.py`](examples/web_server.py) for a complete FastAPI example:
```python
from fastapi import FastAPI
from omnicoreagent.omni_agent import OmniAgent
app = FastAPI()
agent = OmniAgent(...) # Your agent setup from above
@app.post("/chat")
async def chat(message: str, session_id: str | None = None):
result = await agent.run(message, session_id)
return {"response": result['response'], "session_id": result['session_id']}
@app.get("/tools")
async def get_tools():
# Returns both MCP tools AND your custom tools automatically
return agent.get_available_tools()
```
### Using MCPOmni Connect Programmatically
```python
from omnicoreagent.mcp_client import MCPClient
# Create MCP client
client = MCPClient(config_file="servers_config.json")
# Connect to servers
await client.connect_all()
# Use tools
tools = await client.list_tools()
result = await client.call_tool("tool_name", {"arg": "value"})
```
**Key Benefits:**
- **One OmniAgent = MCP + Custom Tools + Memory + Events**
- **Automatic tool discovery** from all connected MCP servers
- **Built-in session management** and conversation history
- **Real-time event streaming** for monitoring
- **Easy integration** with any Python web framework
---
## 🎯 Usage Patterns
### Interactive Commands
- `/tools` - List all available tools across servers
- `/prompts` - View available prompts
- `/prompt:<name>/<args>` - Execute a prompt with arguments
- `/resources` - List available resources
- `/resource:<uri>` - Access and analyze a resource
- `/debug` - Toggle debug mode
- `/refresh` - Update server capabilities
- `/memory` - Toggle Redis memory persistence (on/off)
- `/mode:auto` - Switch to autonomous agentic mode
- `/mode:chat` - Switch back to interactive chat mode
- `/add_servers:<config.json>` - Add one or more servers from a configuration file
- `/remove_server:<server_name>` - Remove a server by its name
### Memory and Chat History
```bash
# Enable Redis memory persistence
/memory
# Check memory status
Memory persistence is now ENABLED using Redis
# Disable memory persistence
/memory
# Check memory status
Memory persistence is now DISABLED
```
### Operation Modes
```bash
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
```
### Mode Differences
- **Chat Mode (Default)**
- Requires explicit approval for tool execution
- Interactive conversation style
- Step-by-step task execution
- Detailed explanations of actions
- **Autonomous Mode**
- Independent task execution
- Self-guided decision making
- Automatic tool selection and chaining
- Progress updates and final results
- Complex task decomposition
- Error handling and recovery
- **Orchestrator Mode**
- Advanced planning for complex multi-step tasks
- Strategic delegation across multiple MCP servers
- Intelligent agent coordination and communication
- Parallel task execution when possible
- Dynamic resource allocation
- Sophisticated workflow management
- Real-time progress monitoring across agents
- Adaptive task prioritization
### Prompt Management
```bash
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompt with multiple arguments (argument names depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
```
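The `/prompt:` syntax above is a slash-separated command: a prompt name followed by `key=value` pairs, where values may be plain strings or JSON literals. A rough sketch of that grammar for the simple (non-nested) form is shown below. This is an illustrative parser, not the client's actual implementation, and it does not handle JSON values that themselves contain `/`:

```python
import json

def parse_prompt_command(command: str) -> tuple[str, dict]:
    """Parse '/prompt:name/key=value/...' into (name, arguments)."""
    body = command.removeprefix("/prompt:")
    name, _, rest = body.partition("/")
    args = {}
    for part in rest.split("/"):
        if not part:
            continue
        key, _, value = part.partition("=")
        try:
            args[key] = json.loads(value)  # JSON literal values, e.g. 500 or true
        except json.JSONDecodeError:
            args[key] = value              # plain string values, e.g. tokyo
    return name, args

print(parse_prompt_command("/prompt:travel-planner/from=london/to=paris/date=2024-03-25"))
```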
### Advanced Prompt Features
- **Argument Validation**: Automatic type checking and validation
- **Default Values**: Smart handling of optional arguments
- **Context Awareness**: Prompts can access previous conversation context
- **Cross-Server Execution**: Seamless execution across multiple MCP servers
- **Error Handling**: Graceful handling of invalid arguments with helpful messages
- **Dynamic Help**: Detailed usage information for each prompt
### AI-Powered Interactions
The client intelligently:
- Chains multiple tools together
- Provides context-aware responses
- Automatically selects appropriate tools
- Handles errors gracefully
- Maintains conversation context
### Model Support with LiteLLM
- **Unified Model Access**
- Single interface for 100+ models across all major providers
- Automatic provider detection and routing
- Consistent API regardless of underlying provider
- Native function calling for compatible models
- ReAct Agent fallback for models without function calling
- **Supported Providers**
- **OpenAI**: GPT-4, GPT-3.5, and all model variants
- **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- **Google**: Gemini Pro, Gemini Flash, PaLM models
- **Groq**: Ultra-fast inference for Llama, Mixtral, Gemma
- **DeepSeek**: DeepSeek-V3, DeepSeek-Coder, and specialized models
- **Azure OpenAI**: Enterprise-grade OpenAI models
- **OpenRouter**: Access to 200+ models from various providers
- **Ollama**: Local model execution with privacy
- **Advanced Features**
- Automatic model capability detection
- Dynamic tool execution based on model features
- Intelligent fallback mechanisms
- Provider-specific optimizations
### Token & Usage Management
OmniCoreAgent provides advanced controls and visibility over your API usage and resource limits.
#### View API Usage Stats
Use the `/api_stats` command to see your current usage:
```bash
/api_stats
```
This will display:
- **Total tokens used**
- **Total requests made**
- **Total response tokens**
- **Number of requests**
#### Set Usage Limits
You can set limits to automatically stop execution when thresholds are reached:
- **Total Request Limit:** Set the maximum number of requests allowed in a session.
- **Total Token Usage Limit:** Set the maximum number of tokens that can be used.
- **Tool Call Timeout:** Set the maximum time (in seconds) a tool call can take before being terminated.
- **Max Steps:** Set the maximum number of steps the agent can take before stopping.
You can configure these in your `servers_config.json` under the `AgentConfig` section:
```json
"AgentConfig": {
"tool_call_timeout": 30, // Tool call timeout in seconds
"max_steps": 15, // Max number of steps before termination
"request_limit": 0, // 0 = unlimited (production mode), set > 0 to enable limits
"total_tokens_limit": 0, // 0 = unlimited (production mode), set > 0 to enable limits
"memory_results_limit": 5, // Number of memory results to retrieve (1-100, default: 5)
"memory_similarity_threshold": 0.5 // Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
}
```
- When any of these limits are reached, the agent will automatically stop running and notify you.
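The two memory settings in `AgentConfig` act as a filter-then-truncate step: results scoring below `memory_similarity_threshold` are dropped, and at most `memory_results_limit` of the rest are kept. An illustrative sketch with made-up scores (not the library's internal code):

```python
def select_memories(scored_results, similarity_threshold=0.5, results_limit=5):
    """Keep results at or above the threshold, best-first, capped at the limit."""
    kept = [(score, text) for score, text in scored_results
            if score >= similarity_threshold]
    kept.sort(reverse=True)  # highest similarity first
    return kept[:results_limit]

# Hypothetical (score, memory) pairs from a vector search
results = [(0.91, "user is vegetarian"), (0.42, "likes jazz"), (0.77, "deadline friday")]
print(select_memories(results))
# [(0.91, 'user is vegetarian'), (0.77, 'deadline friday')]
```

Raising the threshold trades recall for precision; raising the limit gives the agent more context per query at the cost of a larger prompt.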
#### Example Commands
```bash
# Check your current API usage and limits
/api_stats
# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
```
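The limit semantics above (`0` = unlimited, `> 0` = hard cap) amount to a simple guard that the agent checks as it runs. A hypothetical illustration of the stopping rule, not the framework's internals:

```python
def limits_reached(requests_made: int, tokens_used: int,
                   request_limit: int = 0, total_tokens_limit: int = 0) -> bool:
    """Return True once any configured (non-zero) limit has been hit."""
    if request_limit > 0 and requests_made >= request_limit:
        return True
    if total_tokens_limit > 0 and tokens_used >= total_tokens_limit:
        return True
    return False

print(limits_reached(10, 5_000))                    # False: both limits 0 = unlimited
print(limits_reached(10, 5_000, request_limit=10))  # True: request cap reached
```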
## 🔧 Advanced Features
### Tool Orchestration
```python
# Example of automatic tool chaining (when the required tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"
# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
```
### Resource Analysis
```python
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"
# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
```
### 🛠️ Troubleshooting Common Issues
#### "Failed to connect to server: Session terminated"
**Possible Causes & Solutions:**
1. **Wrong Transport Type**
```
Problem: Your server expects 'stdio' but you configured 'streamable_http'
Solution: Check your server's documentation for the correct transport type
```
2. **OAuth Configuration Mismatch**
```
Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
Solution: Remove the "auth" section entirely and use headers instead:
"headers": {
"Authorization": "Bearer your-token"
}
```
3. **Server Not Running**
```
Problem: The MCP server at the specified URL is not running
Solution: Start your MCP server first, then connect with OmniCoreAgent
```
4. **Wrong URL or Port**
```
Problem: URL in config doesn't match where your server is running
Solution: Verify the server's actual address and port
```
#### "Started callback server on http://localhost:3000" - Is This Normal?
**Yes, this is completely normal** when:
- You have `"auth": {"method": "oauth"}` in any server configuration
- The OAuth server handles authentication tokens automatically
- You cannot and should not try to change this address
**If you don't want the OAuth server:**
- Remove `"auth": {"method": "oauth"}` from all server configurations
- Use alternative authentication methods like Bearer tokens
### 📋 Configuration Examples by Use Case
#### Local Development (stdio)
```json
{
"mcpServers": {
"local-tools": {
"transport_type": "stdio",
"command": "uvx",
"args": ["mcp-server-tools"]
}
}
}
```
#### Remote Server with Token
```json
{
"mcpServers": {
"remote-api": {
"transport_type": "streamable_http",
"url": "http://api.example.com:8080/mcp",
"headers": {
"Authorization": "Bearer abc123token"
}
}
}
}
```
#### Remote Server with OAuth
```json
{
"mcpServers": {
"oauth-server": {
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://oauth-server.com:8080/mcp"
}
}
}
```
---
## 🧪 Testing
### Running Tests
```bash
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
```
### Test Structure
```
tests/
├── unit/           # Unit tests for individual components
├── omni_agent/     # OmniAgent system tests
├── mcp_client/     # MCPOmni Connect system tests
└── integration/    # Integration tests for both systems
```
### Development Quick Start
1. **Installation**
```bash
# Clone the repository
git clone https://github.com/Abiorh001/omnicoreagent.git
cd omnicoreagent
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv sync
```
2. **Configuration**
```bash
# Set up environment variables
echo "LLM_API_KEY=your_api_key_here" > .env
# Configure your servers in servers_config.json
```
3. **Start Systems**
```bash
# Try OmniAgent
uv run examples/omni_agent_example.py
# Or try MCPOmni Connect
uv run examples/mcp_client_example.py
```
Or:
```bash
python examples/omni_agent_example.py
python examples/mcp_client_example.py
```
---
## 🔍 Troubleshooting
> **🚨 Most Common Issues**: Check [Quick Fixes](#-quick-fixes-common-issues) below first!
>
> **📖 For comprehensive setup help**: See [⚙️ Configuration Guide](#️-configuration-guide) | [🧠 Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide)
### 🚨 **Quick Fixes (Common Issues)**
| **Error** | **Quick Fix** |
|-----------|---------------|
| `Error: Invalid API key` | Check your `.env` file: `LLM_API_KEY=your_actual_key` |
| `ModuleNotFoundError: omnicoreagent` | Run: `uv add omnicoreagent` or `pip install omnicoreagent` |
| `Connection refused` | Ensure MCP server is running before connecting |
| `ChromaDB not available` | Install: `pip install chromadb` - [See Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide) |
| `Redis connection failed` | Install Redis or use in-memory mode (default) |
| `Tool execution failed` | Check tool permissions and arguments |
### Detailed Issues and Solutions
1. **Connection Issues**
```bash
Error: Could not connect to MCP server
```
- Check if the server is running
- Verify server configuration in `servers_config.json`
- Ensure network connectivity
- Check server logs for errors
- **See [Transport Types & Authentication](#-transport-types--authentication) for detailed setup**
2. **API Key Issues**
```bash
Error: Invalid API key
```
- Verify API key is correctly set in `.env`
- Check if API key has required permissions
- Ensure API key is for correct environment (production/development)
- **See [Configuration Guide](#️-configuration-guide) for correct setup**
3. **Redis Connection**
```bash
Error: Could not connect to Redis
```
- Verify Redis server is running
- Check Redis connection settings in `.env`
- Ensure Redis password is correct (if configured)
4. **Tool Execution Failures**
```bash
Error: Tool execution failed
```
- Check tool availability on connected servers
- Verify tool permissions
- Review tool arguments for correctness
5. **Vector Database Issues**
```bash
Error: Vector database connection failed
```
- Ensure chosen provider (Qdrant, ChromaDB, MongoDB) is running
- Check connection settings in `.env`
- Verify API keys for cloud providers
- **See [Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide) for detailed configuration**
6. **Import Errors**
```bash
ImportError: cannot import name 'OmniAgent'
```
- Check package installation: `pip show omnicoreagent`
- Verify Python version compatibility (3.10+)
- Try reinstalling: `pip uninstall omnicoreagent && pip install omnicoreagent`
### Debug Mode
Enable debug mode for detailed logging:
```bash
# In MCPOmni Connect
/debug
# In OmniAgent
agent = OmniAgent(..., debug=True)
```
### **Getting Help**
1. **First**: Check the [Quick Fixes](#-quick-fixes-common-issues) above
2. **Examples**: Study working examples in the `examples/` directory
3. **Issues**: Search [GitHub Issues](https://github.com/Abiorh001/omnicoreagent/issues) for similar problems
4. **New Issue**: [Create a new issue](https://github.com/Abiorh001/omnicoreagent/issues/new) with detailed information
---
## 🤝 Contributing
We welcome contributions to OmniCoreAgent! Here's how you can help:
### Development Setup
```bash
# Fork and clone the repository
git clone https://github.com/yourusername/omnicoreagent.git
cd omnicoreagent
# Set up development environment
uv venv
source .venv/bin/activate
uv sync --dev
# Install pre-commit hooks
pre-commit install
```
### Contribution Areas
- **OmniAgent System**: Custom agents, local tools, background processing
- **MCPOmni Connect**: MCP client features, transport protocols, authentication
- **Shared Infrastructure**: Memory systems, vector databases, event handling
- **Documentation**: Examples, tutorials, API documentation
- **Testing**: Unit tests, integration tests, performance tests
### Pull Request Process
1. Create a feature branch from `main`
2. Make your changes with tests
3. Run the test suite: `pytest tests/ -v`
4. Update documentation as needed
5. Submit a pull request with a clear description
### Code Standards
- Python 3.10+ compatibility
- Type hints for all public APIs
- Comprehensive docstrings
- Unit tests for new functionality
- Follow existing code style
---
## 📖 Documentation
Complete documentation is available at: **[OmniCoreAgent Docs](https://abiorh001.github.io/omnicoreagent)**
### Documentation Structure
- **Getting Started**: Quick setup and first steps
- **OmniAgent Guide**: Custom agent development
- **MCPOmni Connect Guide**: MCP client usage
- **API Reference**: Complete code documentation
- **Examples**: Working code examples
- **Advanced Topics**: Vector databases, tracing, production deployment
### Build Documentation Locally
```bash
# Install documentation dependencies
pip install mkdocs mkdocs-material
# Serve documentation locally
mkdocs serve
# Open http://127.0.0.1:8000
# Build static documentation
mkdocs build
```
### Contributing to Documentation
Documentation improvements are always welcome:
- Fix typos or unclear explanations
- Add new examples or use cases
- Improve existing tutorials
- Translate to other languages
---
## Demo

---
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 📬 Contact & Support
- **Author**: Abiola Adeshina
- **Email**: abiolaadedayo1993@gmail.com
- **GitHub**: [https://github.com/Abiorh001/omnicoreagent](https://github.com/Abiorh001/omnicoreagent)
- **Issues**: [Report a bug or request a feature](https://github.com/Abiorh001/omnicoreagent/issues)
- **Discussions**: [Join the community](https://github.com/Abiorh001/omnicoreagent/discussions)
### Support Channels
- **GitHub Issues**: Bug reports and feature requests
- **GitHub Discussions**: General questions and community support
- **Email**: Direct contact for partnership or enterprise inquiries
---
<p align="center">
<strong>Built with ❤️ by the OmniCoreAgent Team</strong><br>
<em>Empowering developers to build the next generation of AI applications</em>
</p>
Raw data
{
"_id": null,
"home_page": null,
"name": "omnicoreagent",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "agent, ai, automation, framework, git, llm, mcp",
"author": null,
"author_email": "Abiola Adeshina <abiolaadedayo1993@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/4b/97/d354d1a78c03395cc5d4d7951e19b4b22e76972a1c8553dcb8ecb1fec586/omnicoreagent-0.2.0.tar.gz",
"platform": null,
"description": "# \ud83d\ude80 OmniCoreAgent - Complete AI Development Platform\n\n> **\u2139\ufe0f Project Renaming Notice:** \n> This project was previously known as **`mcp_omni-connect`**. \n> It has been renamed to **`omnicoreagent`** to reflect its evolution into a complete AI development platform\u2014combining both a world-class MCP client and a powerful AI agent builder framework.\n\n> **\u26a0\ufe0f Breaking Change:** \n> The package name has changed from **`mcp_omni-connect`** to **`omnicoreagent`**. \n> Please uninstall the old package and install the new one:\n>\n> ```bash\n> pip uninstall mcp_omni-connect\n> pip install omnicoreagent\n> ```\n>\n> All imports and CLI commands now use `omnicoreagent`. \n> Update your code and scripts accordingly.\n\n[](https://...)\n...\n\n[](https://pepy.tech/projects/omnicoreagent)\n[](https://www.python.org/downloads/)\n[](LICENSE)\n[](https://github.com/Abiorh001/omnicoreagent/actions)\n[](https://badge.fury.io/py/omnicoreagent)\n[](https://github.com/Abiorh001/omnicoreagent/commits/main)\n[](https://github.com/Abiorh001/omnicoreagent/issues)\n[](https://github.com/Abiorh001/omnicoreagent/pulls)\n\n<p align=\"center\">\n <img src=\"Gemini_Generated_Image_pfgm65pfgm65pfgmcopy.png\" alt=\"OmniCoreAgent Logo\" width=\"250\"/>\n</p>\n\n**OmniCoreAgent** is the complete AI development platform that combines two powerful systems into one revolutionary ecosystem. Build production-ready AI agents with **OmniAgent**, use the advanced MCP client with **MCPOmni Connect**, or combine both for maximum power.\n\n## \ud83d\udccb Table of Contents\n\n### \ud83d\ude80 **Getting Started**\n- [\ud83d\ude80 Quick Start (2 minutes)](#-quick-start-2-minutes)\n- [\ud83c\udf1f What is OmniCoreAgent?](#-what-is-omnicoreagent)\n- [\ud83d\udca1 What Can You Build? 
(Examples)](#-what-can-you-build-see-real-examples)\n- [\ud83c\udfaf Choose Your Path](#-choose-your-path)\n\n### \ud83e\udd16 **OmniAgent System**\n- [\u2728 OmniAgent Features](#-omniagent---revolutionary-ai-agent-builder)\n- [\ud83d\udd25 Local Tools System](#-local-tools-system---create-custom-ai-tools)\n- [\ud83d\udee0\ufe0f Building Custom Agents](#-building-custom-agents)\n- [\ud83d\udcda OmniAgent Examples](#-omniagent-examples)\n\n### \ud83d\udd0c **MCPOmni Connect System**\n- [\u2728 MCP Client Features](#-mcpomni-connect---world-class-mcp-client)\n- [\ud83d\udea6 Transport Types & Authentication](#-transport-types--authentication)\n- [\ud83d\udda5\ufe0f CLI Commands](#\ufe0f-cli-commands)\n- [\ud83d\udcda MCP Usage Examples](#-mcp-usage-examples)\n\n### \ud83d\udcd6 **Core Information**\n- [\u2728 Platform Features](#-platform-features)\n- [\ud83c\udfd7\ufe0f Architecture](#\ufe0f-architecture)\n\n### \u2699\ufe0f **Setup & Configuration**\n- [\u2699\ufe0f Configuration Guide](#\ufe0f-configuration-guide)\n- [\ud83e\udde0 Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide)\n- [\ud83d\udcca Tracing & Observability](#-opik-tracing--observability-setup-latest-feature)\n\n### \ud83d\udee0\ufe0f **Development & Integration**\n- [\ud83e\uddd1\u200d\ud83d\udcbb Developer Integration](#-developer-integration)\n- [\ud83e\uddea Testing](#-testing)\n\n### \ud83d\udcda **Reference & Support**\n- [\ud83d\udd0d Troubleshooting](#-troubleshooting)\n- [\ud83e\udd1d Contributing](#-contributing)\n- [\ud83d\udcd6 Documentation](#-documentation)\n\n---\n\n**New to OmniCoreAgent?** Get started in 2 minutes:\n\n### Step 1: Install\n```bash\n# Install with uv (recommended)\nuv add omnicoreagent\n\n# Or with pip\npip install omnicoreagent\n```\n\n### Step 2: Set API Key\n```bash\n# Create .env file with your LLM API key\necho \"LLM_API_KEY=your_openai_api_key_here\" > .env\n```\n\n### Step 3: Run Examples\n```bash\n# Try OmniAgent with custom 
tools\npython examples/run_omni_agent.py\n\n# Try MCPOmni Connect (MCP client)\npython examples/run.py\n\n```\n\n### What Can You Build?\n- **Custom AI Agents**: Register your Python functions as AI tools with OmniAgent\n- **MCP Integration**: Connect to any Model Context Protocol server with MCPOmni Connect\n- **Smart Memory**: Vector databases for long-term AI memory\n- **Background Agents**: Self-flying autonomous task execution\n- **Production Monitoring**: Opik tracing for performance optimization\n\n\u27a1\ufe0f **Next**: Check out [Examples](#-what-can-you-build-see-real-examples) or jump to [Configuration Guide](#\ufe0f-configuration-guide)\n\n---\n\n## \ud83c\udf1f **What is OmniCoreAgent?**\n\nOmniCoreAgent is a comprehensive AI development platform consisting of two integrated systems:\n\n### 1. \ud83e\udd16 **OmniAgent** *(Revolutionary AI Agent Builder)*\nCreate intelligent, autonomous agents with custom capabilities:\n- **\ud83d\udee0\ufe0f Local Tools System** - Register your Python functions as AI tools\n- **\ud83d\ude81 Self-Flying Background Agents** - Autonomous task execution\n- **\ud83e\udde0 Multi-Tier Memory** - Vector databases, Redis, PostgreSQL, MySQL, SQLite\n- **\ud83d\udce1 Real-Time Events** - Live monitoring and streaming\n- **\ud83d\udd27 MCP + Local Tool Orchestration** - Seamlessly combine both tool types\n\n### 2. 
### 2. 🔌 **MCPOmni Connect** *(World-Class MCP Client)*
Advanced command-line interface for connecting to any Model Context Protocol server with:
- **🌐 Multi-Protocol Support** - stdio, SSE, HTTP, Docker, NPX transports
- **🔐 Authentication** - OAuth 2.0, Bearer tokens, custom headers
- **🧠 Advanced Memory** - Redis, Database, Vector storage with intelligent retrieval
- **📡 Event Streaming** - Real-time monitoring and debugging
- **🤖 Agentic Modes** - ReAct, Orchestrator, and Interactive chat modes

**🎯 Perfect for:** Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.

---

## 💡 **What Can You Build? (See Real Examples)**

### 🤖 **OmniAgent System** *(Build Custom AI Agents)*
```bash
# Complete OmniAgent demo - All features showcase
python examples/omni_agent_example.py

# Advanced OmniAgent patterns - Study 12+ tool examples
python examples/run_omni_agent.py

# Self-flying background agents - Autonomous task execution
python examples/background_agent_example.py

# Web server with UI - Interactive interface for OmniAgent
python examples/web_server.py
# Open http://localhost:8000 for web interface

# enhanced_web_server.py - Advanced web patterns
python examples/enhanced_web_server.py

# FastAPI implementation - Clean API endpoints
python examples/fast_api_impl.py
```

### 🔌 **MCPOmni Connect System** *(Connect to MCP Servers)*
```bash
# Basic MCP client usage - Simple connection patterns
python examples/basic_mcp.py

# Advanced MCP CLI - Full-featured client interface
python examples/run.py
```

### 🔧 **LLM Provider Configuration** *(Multiple Providers)*
All LLM provider examples are consolidated in one file:
```bash
# See examples/llm_usage-config.json for:
# - Anthropic Claude models
# - Groq ultra-fast inference
# - Azure OpenAI enterprise
# - Ollama local models
# - OpenRouter 200+ models
# - And more providers...
```

---

## 🎯 **Choose Your Path**

### When to Use What?

| **Use Case** | **Choose** | **Best For** |
|-------------|------------|--------------|
| Build custom AI apps | **OmniAgent** | Web apps, automation, custom workflows |
| Connect to MCP servers | **MCPOmni Connect** | Daily workflow, server management, debugging |
| Learn & experiment | **Examples** | Understanding patterns, proof of concepts |
| Production deployment | **Both** | Full-featured AI applications |

### **Path 1: 🤖 Build Custom AI Agents (OmniAgent)**
Perfect for: Custom applications, automation, web apps
```bash
# Study the examples to learn patterns:
python examples/basic.py                    # Simple introduction
python examples/run_omni_agent.py           # Complete OmniAgent demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py               # Web interface
python examples/enhanced_web_server.py      # Advanced patterns

# Then build your own using the patterns!
```

### **Path 2: 🔌 Advanced MCP Client (MCPOmni Connect)**
Perfect for: Daily workflow, server management, debugging
```bash
# Basic MCP client - Simple connection patterns
python examples/basic_mcp.py

# World-class MCP client with advanced features
python examples/run.py

# Features: Connect to MCP servers, agentic modes, advanced memory
```

### **Path 3: 🧪 Study Tool Patterns (Learning)**
Perfect for: Learning, understanding patterns, experimentation
```bash
# Comprehensive testing interface - Study 12+ EXAMPLE tools
python examples/run_omni_agent.py

# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools
```

**💡 Pro Tip:** Most developers use **both paths** - MCPOmni Connect for daily workflow and OmniAgent for building custom solutions!

---
# 🤖 OmniAgent - Revolutionary AI Agent Builder

**🌟 Introducing OmniAgent** - A revolutionary AI agent system that brings plug-and-play intelligence to your applications!

## ✅ OmniAgent Revolutionary Capabilities:
- **🧠 Multi-tier memory management** with vector search and semantic retrieval
- **🛠️ XML-based reasoning** with strict tool formatting for reliable execution
- **🔧 Advanced tool orchestration** - Seamlessly combine MCP server tools + local tools
- **🚁 Self-flying background agents** with autonomous task execution
- **📡 Real-time event streaming** for monitoring and debugging
- **🏗️ Production-ready infrastructure** with error handling and retry logic
- **⚡ Plug-and-play intelligence** - No complex setup required!

## 🔥 **LOCAL TOOLS SYSTEM** - Create Custom AI Tools!

One of OmniAgent's most powerful features is the ability to **register your own Python functions as AI tools**. The agent can then intelligently use these tools to complete tasks.
### 🎯 Quick Tool Registration Example

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry
tool_registry = ToolRegistry()

# Register your custom tools with a simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    """Calculate the area of a rectangle."""
    area = length * width
    return f"Area of rectangle ({length} x {width}): {area} square units"

@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
    """Analyze text and return word count and character count."""
    words = len(text.split())
    chars = len(text)
    return f"Analysis: {words} words, {chars} characters"

@tool_registry.register_tool("system_status")
def get_system_status() -> str:
    """Get current system status information."""
    import platform
    import time
    return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"

# Use tools with OmniAgent
agent = OmniAgent(
    name="my_agent",
    local_tools=tool_registry,  # Your custom tools!
    # ... other config
)

# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")
```

### 📖 Tool Registration Patterns (Create Your Own!)

**No built-in tools** - You create exactly what you need!
Study these EXAMPLE patterns from `run_omni_agent.py`:

**Mathematical Tools Examples:**
```python
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    area = length * width
    return f"Area: {area} square units"

@tool_registry.register_tool("analyze_numbers")
def analyze_numbers(numbers: str) -> str:
    num_list = [float(x.strip()) for x in numbers.split(",")]
    return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
```

**System Tools Examples:**
```python
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
    import platform
    return f"OS: {platform.system()}, Python: {platform.python_version()}"
```

**File Tools Examples:**
```python
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
    import os
    files = os.listdir(path)
    return f"Found {len(files)} items in {path}"
```

### 🎨 Tool Registration Patterns

**1. Simple Function Tools:**
```python
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
    """Get weather information for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"
```

**2. Complex Analysis Tools:**
```python
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
    except json.JSONDecodeError:
        return "Invalid data format"
    if analysis_type == "summary":
        return f"Data contains {len(data_obj)} items"
    elif analysis_type == "detailed":
        # Complex analysis logic
        return "Detailed analysis results..."
    return f"Unknown analysis type: {analysis_type}"
```
**3. File Processing Tools:**
```python
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, 'r') as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, 'r') as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        return f"Unsupported operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"
```

## 🛠️ Building Custom Agents

### Basic Agent Setup

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
    model_config={
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7
    },
    agent_config={
        "tool_call_timeout": 30,
        "max_steps": 10,
        "request_limit": 0,       # 0 = unlimited (production mode), set > 0 to enable limits
        "total_tokens_limit": 0,  # 0 = unlimited (production mode), set > 0 to enable limits
        "memory_results_limit": 5,          # Number of memory results to retrieve (1-100, default: 5)
        "memory_similarity_threshold": 0.5  # Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
    },
    # Your custom local tools
    local_tools=tool_registry,
    # MCP servers - automatically connected!
    mcp_tools=[
        {
            "name": "filesystem",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
        },
        {
            "name": "github",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer your-token"}
        }
    ],
    embedding_config={
        "provider": "openai",
        "model": "text-embedding-3-small",
        "dimensions": 1536,
        "encoding_format": "float",
    },
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
```

## 📚 OmniAgent Examples

### Basic Agent Usage
```bash
# Complete OmniAgent demo with custom tools
python examples/omni_agent_example.py

# Advanced patterns with 12+ tool examples
python examples/run_omni_agent.py
```

### Background Agents
```bash
# Self-flying autonomous agents
python examples/background_agent_example.py
```

### Web Applications
```bash
# FastAPI integration
python examples/fast_api_impl.py

# Full web interface
python examples/web_server.py
# Open http://localhost:8000
```
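The `memory_results_limit` and `memory_similarity_threshold` settings in `agent_config` above control semantic memory retrieval: candidate memories are ranked by vector similarity to the query, weak matches below the threshold are dropped, and the result set is capped at the limit. A toy illustration of that filtering logic (the real retrieval runs against a vector database; this function is only a sketch):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_memories(query_vec, memories, similarity_threshold=0.5, results_limit=5):
    """Rank stored memories by similarity, drop weak matches, cap the count."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in memories]
    scored = [item for item in scored if item[0] >= similarity_threshold]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [text for _, text in scored[:results_limit]]

# Tiny 2-dimensional embeddings keep the example readable
memories = [
    ("user prefers metric units", [0.9, 0.1]),
    ("user asked about rectangles", [0.8, 0.6]),
    ("unrelated note about weather", [-0.7, 0.7]),
]
print(retrieve_memories([1.0, 0.2], memories))
# ['user prefers metric units', 'user asked about rectangles']
```

Raising `memory_similarity_threshold` toward 1.0 makes retrieval stricter (fewer, more relevant memories); lowering it pulls in looser matches.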
---

# 🔌 MCPOmni Connect - World-Class MCP Client

The MCPOmni Connect system is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes.

## ✨ MCPOmni Connect Key Features

### 🤖 Intelligent Agent System

- **ReAct Agent Mode**
  - Autonomous task execution with reasoning and action cycles
  - Independent decision-making without human intervention
  - Advanced problem-solving through iterative reasoning
  - Self-guided tool selection and execution
  - Complex task decomposition and handling
- **Orchestrator Agent Mode**
  - Strategic multi-step task planning and execution
  - Intelligent coordination across multiple MCP servers
  - Dynamic agent delegation and communication
  - Parallel task execution when possible
  - Sophisticated workflow management with real-time progress monitoring
- **Interactive Chat Mode**
  - Human-in-the-loop task execution with approval workflows
  - Step-by-step guidance and explanations
  - Educational mode for understanding AI decision processes

### 🔌 Universal Connectivity

- **Multi-Protocol Support**
  - Native support for stdio transport
  - Server-Sent Events (SSE) for real-time communication
  - Streamable HTTP for efficient data streaming
  - Docker container integration
  - NPX package execution
  - Extensible transport layer for future protocols
- **Authentication Support**
  - OAuth 2.0 authentication flow
  - Bearer token authentication
  - Custom header support
  - Secure credential management
- **Agentic Operation Modes**
  - Seamless switching between chat, autonomous, and orchestrator modes
  - Context-aware mode selection based on task complexity
  - Persistent state management across mode transitions

## 🚦 Transport Types & Authentication

MCPOmni Connect supports multiple ways to connect to MCP servers:

### 1. **stdio** - Direct Process Communication

**Use when**: Connecting to local MCP servers that run as separate processes

```json
{
  "server-name": {
    "transport_type": "stdio",
    "command": "uvx",
    "args": ["mcp-server-package"]
  }
}
```

- **No authentication needed**
- **No OAuth server started**
- Most common for local development
### 2. **sse** - Server-Sent Events

**Use when**: Connecting to HTTP-based MCP servers using Server-Sent Events

```json
{
  "server-name": {
    "transport_type": "sse",
    "url": "http://your-server.com:4010/sse",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60,
    "sse_read_timeout": 120
  }
}
```

- **Uses Bearer token or custom headers**
- **No OAuth server started**

### 3. **streamable_http** - HTTP with Optional OAuth

**Use when**: Connecting to HTTP-based MCP servers with or without OAuth

**Without OAuth (Bearer Token):**

```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "url": "http://your-server.com:4010/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60
  }
}
```

- **Uses Bearer token or custom headers**
- **No OAuth server started**

**With OAuth:**

```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server.com:4010/mcp"
  }
}
```

- **OAuth callback server automatically starts on `http://localhost:3000`**
- **This is hardcoded and cannot be changed**
- **Required for OAuth flow to work properly**

### 🔐 OAuth Server Behavior

**Important**: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server.

#### What You'll See:

```
🖥️ Started callback server on http://localhost:3000
```

#### Key Points:

- **This is normal behavior** - not an error
- **The address `http://localhost:3000` is hardcoded** and cannot be changed
- **The server only starts when** you have `"auth": {"method": "oauth"}` in your config
- **The server stops** when the application shuts down
- **Only used for OAuth token handling** - no other purpose

#### When OAuth is NOT Used:

- Remove the entire `"auth"` section from your server configuration
- Use `"headers"` with `"Authorization": "Bearer token"` instead
- No OAuth server will start
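What the callback server does is standard OAuth plumbing: listen on a local port, receive the redirect from the authorization server, and capture the `code` query parameter so it can be exchanged for a token. A minimal stand-alone sketch of that pattern (illustrative only — MCPOmni Connect's actual server is internal and always binds port 3000):

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = {}

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse the authorization code out of the redirect URL
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        captured["code"] = params.get("code", [""])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You may close this window.")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 picks a free port for this demo; the real client always uses 3000
server = HTTPServer(("127.0.0.1", 0), CallbackHandler)
threading.Thread(target=server.handle_request, daemon=True).start()

# Simulate the OAuth provider redirecting the browser back to us
port = server.server_address[1]
urllib.request.urlopen(f"http://127.0.0.1:{port}/callback?code=abc123").read()
print(captured["code"])  # abc123
server.server_close()
```

This is also why the redirect URI registered with the OAuth provider must match the hardcoded `http://localhost:3000` address exactly.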
## 🖥️ CLI Commands

### Memory Store Management:
```bash
# Switch between memory backends
/memory_store:in_memory                                # Fast in-memory storage (default)
/memory_store:redis                                    # Redis persistent storage
/memory_store:database                                 # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db  # PostgreSQL
/memory_store:database:mysql://user:pass@host/db       # MySQL
/memory_store:mongodb                                  # MongoDB persistent storage
/memory_store:mongodb:your_mongodb_connection_string   # MongoDB with custom URI

# Memory strategy configuration
/memory_mode:sliding_window:10   # Keep last 10 messages
/memory_mode:token_budget:5000   # Keep under 5000 tokens
```

### Event Store Management:
```bash
# Switch between event backends
/event_store:in_memory      # Fast in-memory events (default)
/event_store:redis_stream   # Redis Streams for persistence
```

### Core MCP Operations:
```bash
/tools                  # List all available tools
/prompts                # List all available prompts
/resources              # List all available resources
/prompt:<name>          # Execute a specific prompt
/resource:<uri>         # Read a specific resource
/subscribe:<uri>        # Subscribe to resource updates
/query <your_question>  # Ask questions using tools
```

### Enhanced Commands:
```bash
# Memory operations
/history                     # Show conversation history
/clear_history               # Clear conversation history
/save_history <file>         # Save history to file
/load_history <file>         # Load history from file

# Server management
/add_servers:<config.json>   # Add servers from config
/remove_server:<server_name> # Remove specific server
/refresh                     # Refresh server capabilities

# Agentic modes
/mode:auto                   # Switch to autonomous agentic mode
/mode:orchestrator           # Switch to multi-server orchestration
/mode:chat                   # Switch to interactive chat mode

# Debugging and monitoring
/debug                       # Toggle debug mode
/api_stats                   # Show API usage statistics
```
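The two `/memory_mode` strategies above are simple trimming policies: sliding window keeps only the last N messages, while token budget drops the oldest messages until the history fits. A rough sketch of both (a naive word count stands in for real tokenization, and the actual implementation may differ):

```python
def sliding_window(messages: list[str], max_messages: int) -> list[str]:
    """Keep only the most recent max_messages entries."""
    return messages[-max_messages:]

def token_budget(messages: list[str], max_tokens: int) -> list[str]:
    """Drop oldest messages until the estimated token total fits the budget."""
    def estimate_tokens(text: str) -> int:
        return len(text.split())  # naive stand-in for a real tokenizer

    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # evict the oldest message first
    return kept

history = ["hello there", "calculate the area of a rectangle", "the area is 50", "thanks"]
print(sliding_window(history, 2))  # ['the area is 50', 'thanks']
print(token_budget(history, 5))    # ['the area is 50', 'thanks']
```

Sliding window gives predictable message counts; token budget adapts to message length, which matters when individual messages vary a lot in size.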
## 📚 MCP Usage Examples

### Basic MCP Client
```bash
# Launch the basic MCP client
python examples/basic_mcp.py
```

### Advanced MCP CLI
```bash
# Launch the advanced MCP CLI
python examples/advanced_mcp.py

# Core MCP client commands:
/tools                  # List all available tools
/prompts                # List all available prompts
/resources              # List all available resources
/prompt:<name>          # Execute a specific prompt
/resource:<uri>         # Read a specific resource
/subscribe:<uri>        # Subscribe to resource updates
/query <your_question>  # Ask questions using tools

# Advanced platform features:
/memory_store:redis          # Switch to Redis memory
/event_store:redis_stream    # Switch to Redis events
/add_servers:<config.json>   # Add MCP servers dynamically
/remove_server:<name>        # Remove MCP server
/mode:auto                   # Switch to autonomous agentic mode
/mode:orchestrator           # Switch to multi-server orchestration
```

---

## ✨ Platform Features

> **🚀 Want to start building right away?** Jump to [Quick Start](#-quick-start-2-minutes) | [Examples](#-what-can-you-build-see-real-examples) | [Configuration](#️-configuration-guide)

### 🧠 AI-Powered Intelligence

- **Unified LLM Integration with LiteLLM**
  - Single unified interface for all AI providers
  - Support for 100+ models across providers, including:
    - OpenAI (GPT-4, GPT-3.5, etc.)
    - Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
    - Google (Gemini Pro, Gemini Flash, etc.)
    - Groq (Llama, Mixtral, Gemma, etc.)
    - DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
    - Azure OpenAI
    - OpenRouter (access to 200+ models)
    - Ollama (local models)
  - Simplified configuration and reduced complexity
  - Dynamic system prompts based on available capabilities
  - Intelligent context management
  - Automatic tool selection and chaining
  - Universal model support through custom ReAct Agent
  - Handles models without native function calling
  - Dynamic function execution based on user requests
  - Intelligent tool orchestration

### 🔒 Security & Privacy

- **Explicit User Control**
  - All tool executions require explicit user approval in chat mode
  - Clear explanation of tool actions before execution
  - Transparent disclosure of data access and usage
- **Data Protection**
  - Strict data access controls
  - Server-specific data isolation
  - No unauthorized data exposure
- **Privacy-First Approach**
  - Minimal data collection
  - User data remains on specified servers
  - No cross-server data sharing without consent
- **Secure Communication**
  - Encrypted transport protocols
  - Secure API key management
  - Environment variable protection

### 💾 Advanced Memory Management

- **Multi-Backend Memory Storage**
  - **In-Memory**: Fast development storage
  - **Redis**: Persistent memory with real-time access
  - **Database**: PostgreSQL, MySQL, SQLite support
  - **MongoDB**: NoSQL document storage
  - **File Storage**: Save/load conversation history
  - Runtime switching: `/memory_store:redis`, `/memory_store:database:postgresql://user:pass@host/db`
- **Multi-Tier Memory Strategy**
  - **Short-term Memory**: Sliding window or token budget strategies
  - **Long-term Memory**: Vector database storage for semantic retrieval
  - **Episodic Memory**: Context-aware conversation history
  - Runtime configuration: `/memory_mode:sliding_window:5`, `/memory_mode:token_budget:3000`
- **Vector Database Integration**
  - **Multiple Provider Support**: MongoDB Atlas, ChromaDB (remote/cloud), and Qdrant (remote)
  - **Smart Fallback**: Automatic failover to local storage if remote fails
  - **Semantic Search**: Intelligent context retrieval across conversations
  - **Long-term & Episodic Memory**: Enable with `ENABLE_VECTOR_DB=true`
- **Real-Time Event Streaming**
  - **In-Memory Events**: Fast development event processing
  - **Redis Streams**: Persistent event storage and streaming
  - Runtime switching: `/event_store:redis_stream`, `/event_store:in_memory`
- **Advanced Tracing & Observability**
  - **Opik Integration**: Production-grade tracing and monitoring
  - **Real-time Performance Tracking**: Monitor LLM calls, tool executions, and agent performance
  - **Detailed Call Traces**: See exactly where time is spent in your AI workflows
  - **System Observability**: Understand bottlenecks and optimize performance
  - **Open Source**: Built on Opik, the open-source observability platform
  - **Easy Setup**: Just add your Opik credentials to start monitoring
  - **Zero Code Changes**: Automatic tracing with `@track` decorators
  - **Performance Insights**: Identify slow operations and optimization opportunities

### 💬 Prompt Management

- **Advanced Prompt Handling**
  - Dynamic prompt discovery across servers
  - Flexible argument parsing (JSON and key-value formats)
  - Cross-server prompt coordination
  - Intelligent prompt validation
  - Context-aware prompt execution
  - Real-time prompt responses
  - Support for complex nested arguments
  - Automatic type conversion and validation
- **Client-Side Sampling Support**
  - Dynamic sampling configuration from client
  - Flexible LLM response generation
  - Customizable sampling parameters
  - Real-time sampling adjustments

### 🛠️ Tool Orchestration

- **Dynamic Tool Discovery & Management**
  - Automatic tool capability detection
  - Cross-server tool coordination
  - Intelligent tool selection based on context
  - Real-time tool availability updates

### 📦 Resource Management

- **Universal Resource Access**
  - Cross-server resource discovery
  - Unified resource addressing
  - Automatic resource type detection
  - Smart content summarization

### 🔄 Server Management

- **Advanced Server Handling**
  - Multiple simultaneous server connections
  - Automatic server health monitoring
  - Graceful connection management
  - Dynamic capability updates
  - Flexible authentication methods
  - Runtime server configuration updates

## 🏗️ Architecture

> **📚 Prefer hands-on learning?** Skip to [Examples](#-what-can-you-build-see-real-examples) or [Configuration](#️-configuration-guide)

### Core Components

```
OmniCoreAgent Platform
├── 🤖 OmniAgent System (Revolutionary Agent Builder)
│   ├── Local Tools Registry
│   ├── Background Agent Manager
│   ├── Custom Agent Creation
│   └── Agent Orchestration Engine
├── 🔌 MCPOmni Connect System (World-Class MCP Client)
│   ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│   ├── Multi-Server Orchestration
│   ├── Authentication & Security
│   └── Connection Lifecycle Management
├── 🧠 Shared Memory System (Both Systems)
│   ├── Multi-Backend Storage (Redis, DB, In-Memory)
│   ├── Vector Database Integration (ChromaDB, Qdrant)
│   ├── Memory Strategies (Sliding Window, Token Budget)
│   └── Session Management
├── 📡 Event System (Both Systems)
│   ├── In-Memory Event Processing
│   ├── Redis Streams for Persistence
│   └── Real-Time Event Monitoring
├── 🛠️ Tool Management (Both Systems)
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   ├── Local Python Tool Registration
│   └── Tool Execution Engine
└── 🤖 AI Integration (Both Systems)
    ├── LiteLLM (100+ Models)
    ├── Context Management
    ├── ReAct Agent Processing
    └── Response Generation
```
---

## 📦 Installation

### ✅ **Minimal Setup (Just Python + API Key)**

**Required:**
- Python 3.10+
- LLM API key (OpenAI, Anthropic, Groq, etc.)

**Optional (for advanced features):**
- Redis (persistent memory)
- Vector DB (Qdrant, ChromaDB, or MongoDB Atlas)
- Database (PostgreSQL/MySQL/SQLite)
- Opik account (for tracing/observability)

### 📦 **Installation**

```bash
# Option 1: UV (recommended - faster)
uv add omnicoreagent

# Option 2: Pip (standard)
pip install omnicoreagent
```

### ⚡ **Quick Configuration**

**Minimal setup** (get started immediately):
```bash
# Just set your API key - that's it!
echo "LLM_API_KEY=your_api_key_here" > .env
```

**Advanced setup** (optional features):
> **📖 Need more options?** See the complete [Configuration Guide](#️-configuration-guide) below for all environment variables, vector database setup, memory configuration, and advanced features.

---

## ⚙️ Configuration Guide

> **⚡ Quick Setup**: Only need `LLM_API_KEY` to get started! | **🔍 Detailed Setup**: [Vector DB](#-vector-database--smart-memory-setup-complete-guide) | [Tracing](#-opik-tracing--observability-setup-latest-feature)

### Environment Variables

Create a `.env` file with your configuration. **Only the LLM API key is required** - everything else is optional for advanced features.
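A `.env` file is just `KEY=value` lines; frameworks typically read it into the process environment at startup. The parser below is a stand-in illustration of that behavior (omnicoreagent's actual loader is not shown here and may differ):

```python
import os
import tempfile

def load_env(path: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
            # Don't clobber values already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())
    return loaded

# Write a throwaway .env file and load it back
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# required\nLLM_API_KEY=sk-demo\n\nENABLE_VECTOR_DB=true\n")
    env_path = f.name

env = load_env(env_path)
print(env["LLM_API_KEY"])       # sk-demo
print(env["ENABLE_VECTOR_DB"])  # true
```

The `setdefault` call mirrors the common convention that a real environment variable takes precedence over the `.env` file.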
#### **🔥 REQUIRED (Start Here)**
```bash
# ===============================================
# REQUIRED: AI Model API Key (Choose one provider)
# ===============================================
LLM_API_KEY=your_openai_api_key_here
# OR for other providers:
# LLM_API_KEY=your_anthropic_api_key_here
# LLM_API_KEY=your_groq_api_key_here
# LLM_API_KEY=your_azure_openai_api_key_here
# See examples/llm_usage-config.json for all provider configs
```

#### **⚡ OPTIONAL: Advanced Features**
```bash
# ===============================================
# Embeddings (OPTIONAL) - NEW!
# ===============================================
# For generating text embeddings (vector representations)
# Choose one provider - same key works for all embedding models
EMBEDDING_API_KEY=your_embedding_api_key_here
# OR for other providers:
# EMBEDDING_API_KEY=your_cohere_api_key_here
# EMBEDDING_API_KEY=your_huggingface_api_key_here
# EMBEDDING_API_KEY=your_mistral_api_key_here
# See docs/EMBEDDING_README.md for all provider configs

# ===============================================
# Tracing & Observability (OPTIONAL) - NEW!
# ===============================================
# For advanced monitoring and performance optimization
# 🔗 Sign up: https://www.comet.com/signup?from=llm
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name

# ===============================================
# Vector Database (OPTIONAL) - Smart Memory
# ===============================================

ENABLE_VECTOR_DB=true  # Default: false

# Choose ONE provider (required if ENABLE_VECTOR_DB=true):

# Option 1: Qdrant Remote
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Option 2: ChromaDB Remote
# OMNI_MEMORY_PROVIDER=chroma-remote
# CHROMA_HOST=localhost
# CHROMA_PORT=8000

# Option 3: ChromaDB Cloud
Cloud\n# OMNI_MEMORY_PROVIDER=chroma-cloud\n# CHROMA_TENANT=your_tenant\n# CHROMA_DATABASE=your_database\n# CHROMA_API_KEY=your_api_key\n\n# Option 4: MongoDB Atlas\n# OMNI_MEMORY_PROVIDER=mongodb-remote\n# MONGODB_URI=\"your_mongodb_connection_string\"\n# MONGODB_DB_NAME=\"db name\"\n\n# ===============================================\n# Persistent Memory Storage (OPTIONAL)\n# ===============================================\n# These have sensible defaults - only set if you need custom configuration\n\n# Redis - for memory_store_type=\"redis\" (defaults to: redis://localhost:6379/0)\n# REDIS_URL=redis://your-remote-redis:6379/0\n# REDIS_URL=redis://:password@localhost:6379/0 # With password\n\n# Database - for memory_store_type=\"database\" (defaults to: sqlite:///omnicoreagent_memory.db)\n# DATABASE_URL=postgresql://user:password@localhost:5432/omnicoreagent\n# DATABASE_URL=mysql://user:password@localhost:3306/omnicoreagent\n\n# Mongodb - for memory_store_type=\"mongodb\" (defaults to: mongodb://localhost:27017/omnicoreagent)\n# MONGODB_URI=\"your_mongodb_connection_string\"\n# MONGODB_DB_NAME=\"db name\"\n```\n\n> **\ud83d\udca1 Quick Start**: Just set `LLM_API_KEY` and you're ready to go! 
> Add other variables only when you need advanced features.

### **Server Configuration (`servers_config.json`)**

For MCP server connections and agent settings:

#### Basic OpenAI Configuration

In `AgentConfig`, `request_limit` and `total_tokens_limit` of `0` mean unlimited (production mode); set a value greater than `0` to enable limits. `memory_results_limit` (1-100, default 5) and `memory_similarity_threshold` (0.0-1.0, default 0.5) control memory retrieval.

```json
{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 0,
    "total_tokens_limit": 0,
    "memory_results_limit": 5,
    "memory_similarity_threshold": 0.5
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 30000,
    "top_p": 0
  },
  "Embedding": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "dimensions": 1536,
    "encoding_format": "float"
  },
  "mcpServers": {
    "ev_assistant": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://localhost:8000/mcp"
    },
    "sse-server": {
      "transport_type": "sse",
      "url": "http://localhost:3000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    },
    "streamable_http-server": {
      "transport_type": "streamable_http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    }
  }
}
```

#### Multiple Provider Examples

**Anthropic Claude Configuration**
```json
{
  "LLM": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20241022",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```

**Groq Configuration**
```json
{
  "LLM": {
    "provider": "groq",
    "model": "llama-3.1-8b-instant",
\"llama-3.1-8b-instant\",\n \"temperature\": 0.5,\n \"max_tokens\": 2000,\n \"max_context_length\": 8000,\n \"top_p\": 0.9\n }\n}\n```\n\n**Azure OpenAI Configuration**\n```json\n{\n \"LLM\": {\n \"provider\": \"azureopenai\",\n \"model\": \"gpt-4\",\n \"temperature\": 0.7,\n \"max_tokens\": 2000,\n \"max_context_length\": 100000,\n \"top_p\": 0.95,\n \"azure_endpoint\": \"https://your-resource.openai.azure.com\",\n \"azure_api_version\": \"2024-02-01\",\n \"azure_deployment\": \"your-deployment-name\"\n }\n}\n```\n\n**Ollama Local Model Configuration**\n```json\n{\n \"LLM\": {\n \"provider\": \"ollama\",\n \"model\": \"llama3.1:8b\",\n \"temperature\": 0.5,\n \"max_tokens\": 5000,\n \"max_context_length\": 100000,\n \"top_p\": 0.7,\n \"ollama_host\": \"http://localhost:11434\"\n }\n}\n```\n\n**OpenRouter Configuration**\n```json\n{\n \"LLM\": {\n \"provider\": \"openrouter\",\n \"model\": \"anthropic/claude-3.5-sonnet\",\n \"temperature\": 0.7,\n \"max_tokens\": 4000,\n \"max_context_length\": 200000,\n \"top_p\": 0.95\n }\n}\n```\n\n### \ud83d\udd10 Authentication Methods\n\nOmniCoreAgent supports multiple authentication methods for secure server connections:\n\n#### OAuth 2.0 Authentication\n```json\n{\n \"server_name\": {\n \"transport_type\": \"streamable_http\",\n \"auth\": {\n \"method\": \"oauth\"\n },\n \"url\": \"http://your-server/mcp\"\n }\n}\n```\n\n#### Bearer Token Authentication\n```json\n{\n \"server_name\": {\n \"transport_type\": \"streamable_http\",\n \"headers\": {\n \"Authorization\": \"Bearer your-token-here\"\n },\n \"url\": \"http://your-server/mcp\"\n }\n}\n```\n\n#### Custom Headers\n```json\n{\n \"server_name\": {\n \"transport_type\": \"streamable_http\",\n \"headers\": {\n \"X-Custom-Header\": \"value\",\n \"Authorization\": \"Custom-Auth-Scheme token\"\n },\n \"url\": \"http://your-server/mcp\"\n }\n}\n```\n\n## \ud83d\udd04 Dynamic Server Configuration\n\nOmniCoreAgent supports dynamic server configuration through commands:\n\n#### 
Add New Servers\n```bash\n# Add one or more servers from a configuration file\n/add_servers:path/to/config.json\n```\n\nThe configuration file can include multiple servers with different authentication methods:\n\n```json\n{\n \"new-server\": {\n \"transport_type\": \"streamable_http\",\n \"auth\": {\n \"method\": \"oauth\"\n },\n \"url\": \"http://localhost:8000/mcp\"\n },\n \"another-server\": {\n \"transport_type\": \"sse\",\n \"headers\": {\n \"Authorization\": \"Bearer token\"\n },\n \"url\": \"http://localhost:3000/sse\"\n }\n}\n```\n\n#### Remove Servers\n```bash\n# Remove a server by its name\n/remove_server:server_name\n```\n\n---\n\n## \ud83e\udde0 Vector Database & Smart Memory Setup (Complete Guide)\n\nOmniCoreAgent provides advanced memory capabilities through vector databases for intelligent, semantic search and long-term memory.\n\n#### **\u26a1 Quick Start (Choose Your Provider)**\n```bash\n# Enable vector memory - you MUST choose a provider\nENABLE_VECTOR_DB=true\n\n# Option 1: Qdrant (recommended)\nOMNI_MEMORY_PROVIDER=qdrant-remote\nQDRANT_HOST=localhost\nQDRANT_PORT=6333\n\n# Option 2: ChromaDB Remote\nOMNI_MEMORY_PROVIDER=chroma-remote\nCHROMA_HOST=localhost\nCHROMA_PORT=8000\n\n# Option 3: ChromaDB Cloud\nOMNI_MEMORY_PROVIDER=chroma-cloud\nCHROMA_TENANT=your_tenant\nCHROMA_DATABASE=your_database\nCHROMA_API_KEY=your_api_key\n\n# Option 4: MongoDB Atlas\nOMNI_MEMORY_PROVIDER=mongodb-remote\nMONGODB_URI=\"your_mongodb_connection_string\"\nMONGODB_DB_NAME=\"db name\"\n\n# Disable vector memory (default)\nENABLE_VECTOR_DB=false\n```\n\n#### **\ud83d\udd27 Vector Database Providers**\n\n**1. Qdrant Remote**\n```bash\n# Install and run Qdrant\ndocker run -p 6333:6333 qdrant/qdrant\n\n# Configure\nENABLE_VECTOR_DB=true\nOMNI_MEMORY_PROVIDER=qdrant-remote\nQDRANT_HOST=localhost\nQDRANT_PORT=6333\n```\n\n**2. 
MongoDB Atlas**\n```bash\n# Configure\nENABLE_VECTOR_DB=true\nOMNI_MEMORY_PROVIDER=mongodb-remote\nMONGODB_URI=\"your_mongodb_connection_string\"\nMONGODB_DB_NAME=\"db name\"\n```\n\n**3. ChromaDB Remote**\n```bash\n# Install and run ChromaDB server\ndocker run -p 8000:8000 chromadb/chroma\n\n# Configure\nENABLE_VECTOR_DB=true\nOMNI_MEMORY_PROVIDER=chroma-remote\nCHROMA_HOST=localhost\nCHROMA_PORT=8000\n```\n\n**4. ChromaDB Cloud**\n```bash\nENABLE_VECTOR_DB=true\nOMNI_MEMORY_PROVIDER=chroma-cloud\nCHROMA_TENANT=your_tenant\nCHROMA_DATABASE=your_database\nCHROMA_API_KEY=your_api_key\n```\n\n#### **\u2728 What You Get**\n- **Long-term Memory**: Persistent storage across sessions\n- **Episodic Memory**: Context-aware conversation history\n- **Semantic Search**: Find relevant information by meaning, not exact text\n- **Multi-session Context**: Remember information across different conversations\n- **Automatic Summarization**: Intelligent memory compression for efficiency\n\n---\n\n## \ud83d\udcca Opik Tracing & Observability Setup (Latest Feature)\n\n**Monitor and optimize your AI agents with production-grade observability:**\n\n#### **\ud83d\ude80 Quick Setup**\n\n1. **Sign up for Opik** (Free & Open Source):\n - Visit: **[https://www.comet.com/signup?from=llm](https://www.comet.com/signup?from=llm)**\n - Create your account and get your API key and workspace name\n\n2. 
**Add to your `.env` file** (see [Environment Variables](#environment-variables) above):\n ```bash\n OPIK_API_KEY=your_opik_api_key_here\n OPIK_WORKSPACE=your_opik_workspace_name\n ```\n\n#### **\u2728 What You Get Automatically**\n\nOnce configured, OmniCoreAgent automatically tracks:\n\n- **\ud83d\udd25 LLM Call Performance**: Execution time, token usage, response quality\n- **\ud83d\udee0\ufe0f Tool Execution Traces**: Which tools were used and how long they took\n- **\ud83e\udde0 Memory Operations**: Vector DB queries, memory retrieval performance\n- **\ud83e\udd16 Agent Workflow**: Complete trace of multi-step agent reasoning\n- **\ud83d\udcca System Bottlenecks**: Identify exactly where time is spent\n\n#### **\ud83d\udcc8 Benefits**\n\n- **Performance Optimization**: See which LLM calls or tools are slow\n- **Cost Monitoring**: Track token usage and API costs\n- **Debugging**: Understand agent decision-making processes\n- **Production Monitoring**: Real-time observability for deployed agents\n- **Zero Code Changes**: Works automatically with existing agents\n\n#### **\ud83d\udd0d Example: What You'll See**\n\n```\nAgent Execution Trace:\n\u251c\u2500\u2500 agent_execution: 4.6s\n\u2502 \u251c\u2500\u2500 tools_registry_retrieval: 0.02s \u2705\n\u2502 \u251c\u2500\u2500 memory_retrieval_step: 0.08s \u2705\n\u2502 \u251c\u2500\u2500 llm_call: 4.5s \u26a0\ufe0f (bottleneck identified!)\n\u2502 \u251c\u2500\u2500 response_parsing: 0.01s \u2705\n\u2502 \u2514\u2500\u2500 action_execution: 0.03s \u2705\n```\n\n**\ud83d\udca1 Pro Tip**: Opik is completely optional. If you don't set the credentials, OmniCoreAgent works normally without tracing.\n\n---\n\n## \ud83e\uddd1\u200d\ud83d\udcbb Developer Integration\n\nOmniCoreAgent is not just a CLI tool\u2014it's also a powerful Python library. 
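
One configuration detail from `AgentConfig` worth making concrete is how `memory_results_limit` (1-100, default 5) and `memory_similarity_threshold` (0.0-1.0, default 0.5) interact: retrieved memories are scored against the query, anything below the threshold is discarded, and at most `memory_results_limit` results are kept. A minimal, self-contained sketch of that threshold-plus-top-k pattern (illustrative only; this is not OmniCoreAgent's internal code, and the helper names and toy vectors are invented):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_memories(query_vec, memories, similarity_threshold=0.5, results_limit=5):
    """Score memories against the query, drop those below the threshold,
    and return at most `results_limit` memory texts, best match first."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in memories]
    kept = [(score, text) for score, text in scored if score >= similarity_threshold]
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in kept[:results_limit]]

# Toy 3-dimensional "embeddings" purely for illustration.
memories = [
    ("user prefers metric units", [0.9, 0.1, 0.0]),
    ("unrelated note",            [0.0, 0.1, 0.9]),
    ("user lives in Berlin",      [0.8, 0.2, 0.1]),
]
print(retrieve_memories([1.0, 0.0, 0.0], memories,
                        similarity_threshold=0.5, results_limit=5))
```

Raising the threshold trades recall for precision; raising the results limit does the opposite.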
Both systems can be used programmatically in your applications.\n\n### Using OmniAgent in Applications\n\n```python\nfrom omnicoreagent.omni_agent import OmniAgent\nfrom omnicoreagent.core.memory_store.memory_router import MemoryRouter\nfrom omnicoreagent.core.events.event_router import EventRouter\nfrom omnicoreagent.core.tools.local_tools_registry import ToolRegistry\n\n# Create tool registry for custom tools\ntool_registry = ToolRegistry()\n\n@tool_registry.register_tool(\"analyze_data\")\ndef analyze_data(data: str) -> str:\n \"\"\"Analyze data and return insights.\"\"\"\n return f\"Analysis complete: {len(data)} characters processed\"\n\n# OmniAgent automatically handles MCP connections + your tools\nagent = OmniAgent(\n name=\"my_app_agent\",\n system_instruction=\"You are a helpful assistant.\",\n model_config={\n \"provider\": \"openai\", \n \"model\": \"gpt-4o\",\n \"temperature\": 0.7\n },\n local_tools=tool_registry, # Your custom tools!\n memory_store=MemoryRouter(memory_store_type=\"redis\"),\n event_router=EventRouter(event_store_type=\"in_memory\")\n)\n\n# Use in your app\nresult = await agent.run(\"Analyze some sample data\")\n```\n\n### FastAPI Integration with OmniAgent\n\nOmniAgent makes building APIs incredibly simple. See [`examples/web_server.py`](examples/web_server.py) for a complete FastAPI example:\n\n```python\nfrom fastapi import FastAPI\nfrom omnicoreagent.omni_agent import OmniAgent\n\napp = FastAPI()\nagent = OmniAgent(...) 
# Your agent setup from above\n\n@app.post(\"/chat\")\nasync def chat(message: str, session_id: str = None):\n result = await agent.run(message, session_id)\n return {\"response\": result['response'], \"session_id\": result['session_id']}\n\n@app.get(\"/tools\") \nasync def get_tools():\n # Returns both MCP tools AND your custom tools automatically\n return agent.get_available_tools()\n```\n\n### Using MCPOmni Connect Programmatically\n\n```python\nfrom omnicoreagent.mcp_client import MCPClient\n\n# Create MCP client\nclient = MCPClient(config_file=\"servers_config.json\")\n\n# Connect to servers\nawait client.connect_all()\n\n# Use tools\ntools = await client.list_tools()\nresult = await client.call_tool(\"tool_name\", {\"arg\": \"value\"})\n```\n\n**Key Benefits:**\n\n- **One OmniAgent = MCP + Custom Tools + Memory + Events**\n- **Automatic tool discovery** from all connected MCP servers\n- **Built-in session management** and conversation history\n- **Real-time event streaming** for monitoring\n- **Easy integration** with any Python web framework\n\n---\n\n## \ud83c\udfaf Usage Patterns\n\n### Interactive Commands\n\n- `/tools` - List all available tools across servers\n- `/prompts` - View available prompts\n- `/prompt:<name>/<args>` - Execute a prompt with arguments\n- `/resources` - List available resources\n- `/resource:<uri>` - Access and analyze a resource\n- `/debug` - Toggle debug mode\n- `/refresh` - Update server capabilities\n- `/memory` - Toggle Redis memory persistence (on/off)\n- `/mode:auto` - Switch to autonomous agentic mode\n- `/mode:chat` - Switch back to interactive chat mode\n- `/add_servers:<config.json>` - Add one or more servers from a configuration file\n- `/remove_server:<server_name>` - Remove a server by its name\n\n### Memory and Chat History\n\n```bash\n# Enable Redis memory persistence\n/memory\n\n# Check memory status\nMemory persistence is now ENABLED using Redis\n\n# Disable memory persistence\n/memory\n\n# Check memory status\nMemory 
persistence is now DISABLED\n```\n\n### Operation Modes\n\n```bash\n# Switch to autonomous mode\n/mode:auto\n\n# System confirms mode change\nNow operating in AUTONOMOUS mode. I will execute tasks independently.\n\n# Switch back to chat mode\n/mode:chat\n\n# System confirms mode change\nNow operating in CHAT mode. I will ask for approval before executing tasks.\n```\n\n### Mode Differences\n\n- **Chat Mode (Default)**\n - Requires explicit approval for tool execution\n - Interactive conversation style\n - Step-by-step task execution\n - Detailed explanations of actions\n\n- **Autonomous Mode**\n - Independent task execution\n - Self-guided decision making\n - Automatic tool selection and chaining\n - Progress updates and final results\n - Complex task decomposition\n - Error handling and recovery\n\n- **Orchestrator Mode**\n - Advanced planning for complex multi-step tasks\n - Strategic delegation across multiple MCP servers\n - Intelligent agent coordination and communication\n - Parallel task execution when possible\n - Dynamic resource allocation\n - Sophisticated workflow management\n - Real-time progress monitoring across agents\n - Adaptive task prioritization\n\n### Prompt Management\n\n```bash\n# List all available prompts\n/prompts\n\n# Basic prompt usage\n/prompt:weather/location=tokyo\n\n# Prompt with multiple arguments depends on the server prompt arguments requirements\n/prompt:travel-planner/from=london/to=paris/date=2024-03-25\n\n# JSON format for complex arguments\n/prompt:analyze-data/{\n \"dataset\": \"sales_2024\",\n \"metrics\": [\"revenue\", \"growth\"],\n \"filters\": {\n \"region\": \"europe\",\n \"period\": \"q1\"\n }\n}\n\n# Nested argument structures\n/prompt:market-research/target=smartphones/criteria={\n \"price_range\": {\"min\": 500, \"max\": 1000},\n \"features\": [\"5G\", \"wireless-charging\"],\n \"markets\": [\"US\", \"EU\", \"Asia\"]\n}\n```\n\n### Advanced Prompt Features\n\n- **Argument Validation**: Automatic type checking and 
validation\n- **Default Values**: Smart handling of optional arguments\n- **Context Awareness**: Prompts can access previous conversation context\n- **Cross-Server Execution**: Seamless execution across multiple MCP servers\n- **Error Handling**: Graceful handling of invalid arguments with helpful messages\n- **Dynamic Help**: Detailed usage information for each prompt\n\n### AI-Powered Interactions\n\nThe client intelligently:\n\n- Chains multiple tools together\n- Provides context-aware responses\n- Automatically selects appropriate tools\n- Handles errors gracefully\n- Maintains conversation context\n\n### Model Support with LiteLLM\n\n- **Unified Model Access**\n - Single interface for 100+ models across all major providers\n - Automatic provider detection and routing\n - Consistent API regardless of underlying provider\n - Native function calling for compatible models\n - ReAct Agent fallback for models without function calling\n- **Supported Providers**\n - **OpenAI**: GPT-4, GPT-3.5, and all model variants\n - **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus\n - **Google**: Gemini Pro, Gemini Flash, PaLM models\n - **Groq**: Ultra-fast inference for Llama, Mixtral, Gemma\n - **DeepSeek**: DeepSeek-V3, DeepSeek-Coder, and specialized models\n - **Azure OpenAI**: Enterprise-grade OpenAI models\n - **OpenRouter**: Access to 200+ models from various providers\n - **Ollama**: Local model execution with privacy\n- **Advanced Features**\n - Automatic model capability detection\n - Dynamic tool execution based on model features\n - Intelligent fallback mechanisms\n - Provider-specific optimizations\n\n### Token & Usage Management\n\nOmniCoreAgent provides advanced controls and visibility over your API usage and resource limits.\n\n#### View API Usage Stats\n\nUse the `/api_stats` command to see your current usage:\n\n```bash\n/api_stats\n```\n\nThis will display:\n\n- **Total tokens used**\n- **Total requests made**\n- **Total response tokens**\n- 
**Total request tokens**\n\n#### Set Usage Limits\n\nYou can set limits to automatically stop execution when thresholds are reached:\n\n- **Total Request Limit:** Set the maximum number of requests allowed in a session.\n- **Total Token Usage Limit:** Set the maximum number of tokens that can be used.\n- **Tool Call Timeout:** Set the maximum time (in seconds) a tool call can take before being terminated.\n- **Max Steps:** Set the maximum number of steps the agent can take before stopping.\n\nYou can configure these in your `servers_config.json` under the `AgentConfig` section:\n\n```json\n\"AgentConfig\": {\n \"tool_call_timeout\": 30, // Tool call timeout in seconds\n \"max_steps\": 15, // Max number of steps before termination\n \"request_limit\": 0, // 0 = unlimited (production mode), set > 0 to enable limits\n \"total_tokens_limit\": 0, // 0 = unlimited (production mode), set > 0 to enable limits\n \"memory_results_limit\": 5, // Number of memory results to retrieve (1-100, default: 5)\n \"memory_similarity_threshold\": 0.5 // Similarity threshold for memory filtering (0.0-1.0, default: 0.5)\n}\n```\n\n- When any of these limits is reached, the agent will automatically stop running and notify you.\n\n#### Example Commands\n\n```bash\n# Check your current API usage and limits\n/api_stats\n\n# Set a new request limit (example)\n# (This can be done by editing servers_config.json or via future CLI commands)\n```\n\n## \ud83d\udd27 Advanced Features\n\n### Tool Orchestration\n\n```python\n# Example of automatic tool chaining when the required tools are available on the connected servers\nUser: \"Find charging stations near Silicon Valley and check their current status\"\n\n# Client automatically:\n1. Uses Google Maps API to locate Silicon Valley\n2. Searches for charging stations in the area\n3. Checks station status through EV network API\n4. 
Formats and presents results\n```\n\n### Resource Analysis\n\n```python\n# Automatic resource processing\nUser: \"Analyze the contents of /path/to/document.pdf\"\n\n# Client automatically:\n1. Identifies resource type\n2. Extracts content\n3. Processes through LLM\n4. Provides intelligent summary\n```\n\n### \ud83d\udee0\ufe0f Troubleshooting Common Issues\n\n#### \"Failed to connect to server: Session terminated\"\n\n**Possible Causes & Solutions:**\n\n1. **Wrong Transport Type**\n ```\n Problem: Your server expects 'stdio' but you configured 'streamable_http'\n Solution: Check your server's documentation for the correct transport type\n ```\n\n2. **OAuth Configuration Mismatch**\n ```\n Problem: Your server doesn't support OAuth but you have \"auth\": {\"method\": \"oauth\"}\n Solution: Remove the \"auth\" section entirely and use headers instead:\n\n \"headers\": {\n \"Authorization\": \"Bearer your-token\"\n }\n ```\n\n3. **Server Not Running**\n ```\n Problem: The MCP server at the specified URL is not running\n Solution: Start your MCP server first, then connect with OmniCoreAgent\n ```\n\n4. 
**Wrong URL or Port**\n ```\n Problem: URL in config doesn't match where your server is running\n Solution: Verify the server's actual address and port\n ```\n\n#### \"Started callback server on http://localhost:3000\" - Is This Normal?\n\n**Yes, this is completely normal** when:\n\n- You have `\"auth\": {\"method\": \"oauth\"}` in any server configuration\n- The OAuth server handles authentication tokens automatically\n- You cannot and should not try to change this address\n\n**If you don't want the OAuth server:**\n\n- Remove `\"auth\": {\"method\": \"oauth\"}` from all server configurations\n- Use alternative authentication methods like Bearer tokens\n\n### \ud83d\udccb Configuration Examples by Use Case\n\n#### Local Development (stdio)\n\n```json\n{\n \"mcpServers\": {\n \"local-tools\": {\n \"transport_type\": \"stdio\",\n \"command\": \"uvx\",\n \"args\": [\"mcp-server-tools\"]\n }\n }\n}\n```\n\n#### Remote Server with Token\n\n```json\n{\n \"mcpServers\": {\n \"remote-api\": {\n \"transport_type\": \"streamable_http\",\n \"url\": \"http://api.example.com:8080/mcp\",\n \"headers\": {\n \"Authorization\": \"Bearer abc123token\"\n }\n }\n }\n}\n```\n\n#### Remote Server with OAuth\n\n```json\n{\n \"mcpServers\": {\n \"oauth-server\": {\n \"transport_type\": \"streamable_http\",\n \"auth\": {\n \"method\": \"oauth\"\n },\n \"url\": \"http://oauth-server.com:8080/mcp\"\n }\n }\n}\n```\n\n---\n\n## \ud83e\uddea Testing\n\n### Running Tests\n\n```bash\n# Run all tests with verbose output\npytest tests/ -v\n\n# Run specific test file\npytest tests/test_specific_file.py -v\n\n# Run tests with coverage report\npytest tests/ --cov=src --cov-report=term-missing\n```\n\n### Test Structure\n\n```\ntests/\n\u251c\u2500\u2500 unit/ # Unit tests for individual components\n\u251c\u2500\u2500 omni_agent/ # OmniAgent system tests\n\u251c\u2500\u2500 mcp_client/ # MCPOmni Connect system tests\n\u2514\u2500\u2500 integration/ # Integration tests for both systems\n```\n\n### 
Development Quick Start\n\n1. **Installation**\n\n ```bash\n # Clone the repository\n git clone https://github.com/Abiorh001/omnicoreagent.git\n cd omnicoreagent\n\n # Create and activate virtual environment\n uv venv\n source .venv/bin/activate\n\n # Install dependencies\n uv sync\n ```\n\n2. **Configuration**\n\n ```bash\n # Set up environment variables\n echo \"LLM_API_KEY=your_api_key_here\" > .env\n\n # Configure your servers in servers_config.json\n ```\n\n3. **Start Systems**\n\n ```bash\n # Try OmniAgent\n uv run examples/omni_agent_example.py\n\n # Or try MCPOmni Connect\n uv run examples/mcp_client_example.py\n ```\n\n Or:\n\n ```bash\n python examples/omni_agent_example.py\n python examples/mcp_client_example.py\n ```\n\n---\n\n## \ud83d\udd0d Troubleshooting\n\n> **\ud83d\udea8 Most Common Issues**: Check [Quick Fixes](#-quick-fixes-common-issues) below first!\n> \n> **\ud83d\udcd6 For comprehensive setup help**: See [\u2699\ufe0f Configuration Guide](#\ufe0f-configuration-guide) | [\ud83e\udde0 Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide)\n\n### \ud83d\udea8 **Quick Fixes (Common Issues)**\n\n| **Error** | **Quick Fix** |\n|-----------|---------------|\n| `Error: Invalid API key` | Check your `.env` file: `LLM_API_KEY=your_actual_key` |\n| `ModuleNotFoundError: omnicoreagent` | Run: `uv add omnicoreagent` or `pip install omnicoreagent` |\n| `Connection refused` | Ensure MCP server is running before connecting |\n| `ChromaDB not available` | Install: `pip install chromadb` - [See Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide) |\n| `Redis connection failed` | Install Redis or use in-memory mode (default) |\n| `Tool execution failed` | Check tool permissions and arguments |\n\n### Detailed Issues and Solutions\n\n1. 
**Connection Issues**\n\n ```bash\n Error: Could not connect to MCP server\n ```\n\n - Check if the server is running\n - Verify server configuration in `servers_config.json`\n - Ensure network connectivity\n - Check server logs for errors\n - **See [Transport Types & Authentication](#-transport-types--authentication) for detailed setup**\n\n2. **API Key Issues**\n\n ```bash\n Error: Invalid API key\n ```\n\n - Verify API key is correctly set in `.env`\n - Check if API key has required permissions\n - Ensure API key is for correct environment (production/development)\n - **See [Configuration Guide](#\ufe0f-configuration-guide) for correct setup**\n\n3. **Redis Connection**\n\n ```bash\n Error: Could not connect to Redis\n ```\n\n - Verify Redis server is running\n - Check Redis connection settings in `.env`\n - Ensure Redis password is correct (if configured)\n\n4. **Tool Execution Failures**\n ```bash\n Error: Tool execution failed\n ```\n - Check tool availability on connected servers\n - Verify tool permissions\n - Review tool arguments for correctness\n\n5. **Vector Database Issues**\n\n ```bash\n Error: Vector database connection failed\n ```\n\n - Ensure chosen provider (Qdrant, ChromaDB, MongoDB) is running\n - Check connection settings in `.env`\n - Verify API keys for cloud providers\n - **See [Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide) for detailed configuration**\n\n6. **Import Errors**\n\n ```bash\n ImportError: cannot import name 'OmniAgent'\n ```\n\n - Check package installation: `pip show omnicoreagent`\n - Verify Python version compatibility (3.10+)\n - Try reinstalling: `pip uninstall omnicoreagent && pip install omnicoreagent`\n\n### Debug Mode\n\nEnable debug mode for detailed logging:\n\n```bash\n# In MCPOmni Connect\n/debug\n\n# In OmniAgent\nagent = OmniAgent(..., debug=True)\n```\n\n### **Getting Help**\n\n1. **First**: Check the [Quick Fixes](#-quick-fixes-common-issues) above\n2. 
**Examples**: Study working examples in the `examples/` directory\n3. **Issues**: Search [GitHub Issues](https://github.com/Abiorh001/omnicoreagent/issues) for similar problems\n4. **New Issue**: [Create a new issue](https://github.com/Abiorh001/omnicoreagent/issues/new) with detailed information\n\n---\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions to OmniCoreAgent! Here's how you can help:\n\n### Development Setup\n\n```bash\n# Fork and clone the repository\ngit clone https://github.com/yourusername/omnicoreagent.git\ncd omnicoreagent\n\n# Set up development environment\nuv venv\nsource .venv/bin/activate\nuv sync --dev\n\n# Install pre-commit hooks\npre-commit install\n```\n\n### Contribution Areas\n\n- **OmniAgent System**: Custom agents, local tools, background processing\n- **MCPOmni Connect**: MCP client features, transport protocols, authentication\n- **Shared Infrastructure**: Memory systems, vector databases, event handling\n- **Documentation**: Examples, tutorials, API documentation\n- **Testing**: Unit tests, integration tests, performance tests\n\n### Pull Request Process\n\n1. Create a feature branch from `main`\n2. Make your changes with tests\n3. Run the test suite: `pytest tests/ -v`\n4. Update documentation as needed\n5. 
Submit a pull request with a clear description\n\n### Code Standards\n\n- Python 3.10+ compatibility\n- Type hints for all public APIs\n- Comprehensive docstrings\n- Unit tests for new functionality\n- Follow existing code style\n\n---\n\n## \ud83d\udcd6 Documentation\n\nComplete documentation is available at: **[OmniCoreAgent Docs](https://abiorh001.github.io/omnicoreagent)**\n\n### Documentation Structure\n\n- **Getting Started**: Quick setup and first steps\n- **OmniAgent Guide**: Custom agent development\n- **MCPOmni Connect Guide**: MCP client usage\n- **API Reference**: Complete code documentation\n- **Examples**: Working code examples\n- **Advanced Topics**: Vector databases, tracing, production deployment\n\n### Build Documentation Locally\n\n```bash\n# Install documentation dependencies\npip install mkdocs mkdocs-material\n\n# Serve documentation locally\nmkdocs serve\n# Open http://127.0.0.1:8000\n\n# Build static documentation\nmkdocs build\n```\n\n### Contributing to Documentation\n\nDocumentation improvements are always welcome:\n\n- Fix typos or unclear explanations\n- Add new examples or use cases\n- Improve existing tutorials\n- Translate to other languages\n\n---\n\n## Demo\n\n\n\n---\n\n## \ud83d\udcc4 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## \ud83d\udcec Contact & Support\n\n- **Author**: Abiola Adeshina\n- **Email**: abiolaadedayo1993@gmail.com\n- **GitHub**: [https://github.com/Abiorh001/omnicoreagent](https://github.com/Abiorh001/omnicoreagent)\n- **Issues**: [Report a bug or request a feature](https://github.com/Abiorh001/omnicoreagent/issues)\n- **Discussions**: [Join the community](https://github.com/Abiorh001/omnicoreagent/discussions)\n\n### Support Channels\n\n- **GitHub Issues**: Bug reports and feature requests\n- **GitHub Discussions**: General questions and community support\n- **Email**: Direct contact for partnership or enterprise inquiries\n\n---\n\n<p 
align=\"center\">\n <strong>Built with \u2764\ufe0f by the OmniCoreAgent Team</strong><br>\n <em>Empowering developers to build the next generation of AI applications</em>\n</p>",
"bugtrack_url": null,
"license": "MIT",
"summary": "OmniCoreAgent AI Framework - Universal MCP Client with multi-transport support and LLM-powered tool routing",
"version": "0.2.0",
"project_urls": {
"Issues": "https://github.com/Abiorh001/mcp_omni_connect/issues",
"Repository": "https://github.com/Abiorh001/mcp_omni_connect"
},
"split_keywords": [
"agent",
" ai",
" automation",
" framework",
" git",
" llm",
" mcp"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "93f3a657a32d5d1c463dca3b10881cc2b38d48b69372c1b60b6ac424ac65656b",
"md5": "40fbc4d88c45d22fc3171c146c6b40ca",
"sha256": "00169f753ea7cb188c04a80446b87840ab0716c091c59611029ac2b98a968ebd"
},
"downloads": -1,
"filename": "omnicoreagent-0.2.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "40fbc4d88c45d22fc3171c146c6b40ca",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 172318,
"upload_time": "2025-09-01T01:41:21",
"upload_time_iso_8601": "2025-09-01T01:41:21.758834Z",
"url": "https://files.pythonhosted.org/packages/93/f3/a657a32d5d1c463dca3b10881cc2b38d48b69372c1b60b6ac424ac65656b/omnicoreagent-0.2.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4b97d354d1a78c03395cc5d4d7951e19b4b22e76972a1c8553dcb8ecb1fec586",
"md5": "d04bbe79be352c7ae6b78d4cf4b8ac1e",
"sha256": "7ad9755de6aa68efb3c6a2f075cfd0f31825d647ac3c4cd988009e3a48ac4842"
},
"downloads": -1,
"filename": "omnicoreagent-0.2.0.tar.gz",
"has_sig": false,
"md5_digest": "d04bbe79be352c7ae6b78d4cf4b8ac1e",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 155434,
"upload_time": "2025-09-01T01:41:23",
"upload_time_iso_8601": "2025-09-01T01:41:23.763543Z",
"url": "https://files.pythonhosted.org/packages/4b/97/d354d1a78c03395cc5d4d7951e19b4b22e76972a1c8553dcb8ecb1fec586/omnicoreagent-0.2.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-01 01:41:23",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Abiorh001",
"github_project": "mcp_omni_connect",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "omnicoreagent"
}