# πŸš€ OmniCoreAgent - Complete AI Development Platform

> **ℹ️ Project Renaming Notice:**  
> This project was previously known as **`mcp_omni-connect`**.  
> It has been renamed to **`omnicoreagent`** to reflect its evolution into a complete AI development platformβ€”combining both a world-class MCP client and a powerful AI agent builder framework.

> **⚠️ Breaking Change:**  
> The package name has changed from **`mcp_omni-connect`** to **`omnicoreagent`**.  
> Please uninstall the old package and install the new one:
>
> ```bash
> pip uninstall mcp_omni-connect
> pip install omnicoreagent
> ```
>
> All imports and CLI commands now use `omnicoreagent`.  
> Update your code and scripts accordingly.

[![PyPI Downloads](https://static.pepy.tech/badge/omnicoreagent)](https://pepy.tech/projects/omnicoreagent)
[![Python Version](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
[![Tests](https://img.shields.io/badge/tests-passing-brightgreen.svg)](https://github.com/Abiorh001/omnicoreagent/actions)
[![PyPI version](https://badge.fury.io/py/omnicoreagent.svg)](https://badge.fury.io/py/omnicoreagent)
[![Last Commit](https://img.shields.io/github/last-commit/Abiorh001/omnicoreagent)](https://github.com/Abiorh001/omnicoreagent/commits/main)
[![Open Issues](https://img.shields.io/github/issues/Abiorh001/omnicoreagent)](https://github.com/Abiorh001/omnicoreagent/issues)
[![Pull Requests](https://img.shields.io/github/issues-pr/Abiorh001/omnicoreagent)](https://github.com/Abiorh001/omnicoreagent/pulls)

<p align="center">
  <img src="assets/IMG_5292.jpeg" alt="OmniCoreAgent Logo" width="250"/>
</p>

**OmniCoreAgent** is the complete AI development platform that combines two powerful systems into one revolutionary ecosystem. Build production-ready AI agents with **OmniAgent**, use the advanced MCP client with **MCPOmni Connect**, or combine both for maximum power.

## πŸ“‹ Table of Contents

### πŸš€ **Getting Started**
- [πŸš€ Quick Start (2 minutes)](#-quick-start-2-minutes)
- [🌟 What is OmniCoreAgent?](#-what-is-omnicoreagent)
- [πŸ’‘ What Can You Build? (Examples)](#-what-can-you-build-see-real-examples)
- [🎯 Choose Your Path](#-choose-your-path)
- [🧠 Semantic Tool Knowledge Base](#-semantic-tool-knowledge-base)
- [πŸ—‚οΈ Memory Tool Backend](#-memory-tool-backend)

### πŸ€– **OmniAgent System**

- [✨ OmniAgent Features](#-omniagent---revolutionary-ai-agent-builder)
- [πŸ”₯ Local Tools System](#-local-tools-system---create-custom-ai-tools)
- [🧩 OmniAgent Workflow System](#-omniagent-workflow-system--multi-agent-orchestration)
- [🚁 Background Agent System](#-background-agent-system---autonomous-task-automation)
- [πŸ› οΈ Building Custom Agents](#-building-custom-agents)
- [πŸ“š OmniAgent Examples](#-omniagent-examples)

### πŸ”Œ **MCPOmni Connect System**
- [✨ MCP Client Features](#-mcpomni-connect---world-class-mcp-client)
- [🚦 Transport Types & Authentication](#-transport-types--authentication)
- [πŸ–₯️ CLI Commands](#️-cli-commands)
- [πŸ“š MCP Usage Examples](#-mcp-usage-examples)

### πŸ“– **Core Information**
- [✨ Platform Features](#-platform-features)
- [πŸ—οΈ Architecture](#️-architecture)

### βš™οΈ **Setup & Configuration**
- [βš™οΈ Configuration Guide](#️-configuration-guide)
- [🧠 Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide)
- [πŸ“Š Tracing & Observability](#-opik-tracing--observability-setup-latest-feature)

### πŸ› οΈ **Development & Integration**
- [πŸ§‘β€πŸ’» Developer Integration](#-developer-integration)
- [πŸ§ͺ Testing](#-testing)

### πŸ“š **Reference & Support**
- [πŸ” Troubleshooting](#-troubleshooting)
- [🀝 Contributing](#-contributing)
- [πŸ“– Documentation](#-documentation)

---

## πŸš€ **Quick Start (2 minutes)**

**New to OmniCoreAgent?** Get started in 2 minutes:

### Step 1: Install
```bash
# Install with uv (recommended)
uv add omnicoreagent

# Or with pip
pip install omnicoreagent
```

### Step 2: Set API Key
```bash
# Create .env file with your LLM API key
echo "LLM_API_KEY=your_openai_api_key_here" > .env
```

### Step 3: Run Examples
```bash
# Try OmniAgent with custom tools
python examples/omni_agent_example.py

# Try MCPOmni Connect (MCP client)
python examples/run_mcp.py

# Try the integrated platform
python examples/run_omni_agent.py
```
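
Prefer starting in code? Here's a minimal sketch based on the full setup shown in [Building Custom Agents](#-building-custom-agents); it assumes `OmniAgent` falls back to default memory and event backends when none are supplied:

```python
import asyncio
from omnicoreagent.omni_agent import OmniAgent

async def main():
    # Minimal agent: no custom tools or MCP servers yet
    agent = OmniAgent(
        name="quickstart_agent",
        system_instruction="You are a helpful assistant.",
        model_config={"provider": "openai", "model": "gpt-4o-mini", "temperature": 0.7},
    )
    result = await agent.run("Say hello in one sentence.")
    print(result)

asyncio.run(main())
```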

### What Can You Build?
- **Custom AI Agents**: Register your Python functions as AI tools with OmniAgent
- **MCP Integration**: Connect to any Model Context Protocol server with MCPOmni Connect
- **Smart Memory**: Vector databases for long-term AI memory
- **Background Agents**: Self-flying autonomous task execution
- **Production Monitoring**: Opik tracing for performance optimization

➑️ **Next**: Check out [Examples](#-what-can-you-build-see-real-examples) or jump to [Configuration Guide](#️-configuration-guide)

---

## 🌟 **What is OmniCoreAgent?**

OmniCoreAgent is a comprehensive AI development platform consisting of two integrated systems:

### 1. πŸ€– **OmniAgent** *(Revolutionary AI Agent Builder)*
Create intelligent, autonomous agents with custom capabilities:
- **πŸ› οΈ Local Tools System** - Register your Python functions as AI tools
- **🚁 Self-Flying Background Agents** - Autonomous task execution
- **🧠 Multi-Tier Memory** - Vector databases, Redis, PostgreSQL, MySQL, SQLite
- **πŸ“‘ Real-Time Events** - Live monitoring and streaming
- **πŸ”§ MCP + Local Tool Orchestration** - Seamlessly combine both tool types

### 2. πŸ”Œ **MCPOmni Connect** *(World-Class MCP Client)*
Advanced command-line interface for connecting to any Model Context Protocol server with:
- **🌐 Multi-Protocol Support** - stdio, SSE, HTTP, Docker, NPX transports
- **πŸ” Authentication** - OAuth 2.0, Bearer tokens, custom headers
- **🧠 Advanced Memory** - Redis, Database, Vector storage with intelligent retrieval
- **πŸ“‘ Event Streaming** - Real-time monitoring and debugging
- **πŸ€– Agentic Modes** - ReAct, Orchestrator, and Interactive chat modes

**🎯 Perfect for:** Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.

---

## πŸ’‘ **What Can You Build? (See Real Examples)**

### πŸ€– **OmniAgent System** *(Build Custom AI Agents)*
```bash
# Complete OmniAgent demo - All features showcase
python examples/omni_agent_example.py

# Advanced OmniAgent patterns - Study 12+ tool examples
python examples/run_omni_agent.py

# Self-flying background agents - Autonomous task execution with Background Agent Manager
python examples/background_agent_example.py

# Web server with UI - Interactive interface for OmniAgent
python examples/web_server.py
# Open http://localhost:8000 for web interface

# FastAPI implementation - Clean API endpoints
python examples/fast_api_impl.py

# Enhanced web server - Production-ready with advanced features
python examples/enhanced_web_server.py
```

### πŸ”Œ **MCPOmni Connect System** *(Connect to MCP Servers)*
```bash
# Basic MCP client usage
python examples/run_mcp.py
```

### πŸ”§ **LLM Provider Configuration** *(Multiple Providers)*
All LLM provider examples are consolidated in one config file:
```bash
# See examples/llm_usage-config.json for:
# - Anthropic Claude models
# - Groq ultra-fast inference  
# - Azure OpenAI enterprise
# - Ollama local models
# - OpenRouter 200+ models
# - And more providers...
```
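
Whichever provider you choose, the `model_config` shape stays the same since everything routes through LiteLLM. The provider strings and model names below are illustrative assumptions; substitute the ones from `examples/llm_usage-config.json` that match your setup:

```python
# Same config shape for every provider; swap provider/model as needed
anthropic_config = {"provider": "anthropic", "model": "claude-3-5-sonnet", "temperature": 0.7}
groq_config = {"provider": "groq", "model": "llama-3.1-70b-versatile", "temperature": 0.7}
ollama_config = {"provider": "ollama", "model": "llama3.1", "temperature": 0.7}  # local model
```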

---

## 🎯 **Choose Your Path**

### When to Use What?

| **Use Case** | **Choose** | **Best For** |
|-------------|------------|--------------|
| Build custom AI apps | **OmniAgent** | Web apps, automation, custom workflows |
| Connect to MCP servers | **MCPOmni Connect** | Daily workflow, server management, debugging |
| Learn & experiment | **Examples** | Understanding patterns, proof of concepts |
| Production deployment | **Both** | Full-featured AI applications |

### **Path 1: πŸ€– Build Custom AI Agents (OmniAgent)**
Perfect for: Custom applications, automation, web apps
```bash
# Study the examples to learn patterns:
python examples/basic.py                    # Simple introduction
python examples/omni_agent_example.py       # Complete OmniAgent demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py              # Web interface
python examples/fast_api_impl.py           # FastAPI integration
python examples/enhanced_web_server.py    # Production-ready web server

# Then build your own using the patterns!
```

### **Path 2: πŸ”Œ Advanced MCP Client (MCPOmni Connect)**
Perfect for: Daily workflow, server management, debugging
```bash
# Basic MCP client
python examples/run_mcp.py

# Features: Connect to MCP servers, agentic modes, advanced memory
```

### **Path 3: πŸ§ͺ Study Tool Patterns (Learning)**
Perfect for: Learning, understanding patterns, experimentation
```bash
# Comprehensive testing interface - Study 12+ EXAMPLE tools
python examples/run_omni_agent.py 

# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools
```

**πŸ’‘ Pro Tip:** Most developers use **both paths** - MCPOmni Connect for daily workflow and OmniAgent for building custom solutions!

---

## 🧠 **Semantic Tool Knowledge Base**

### Why You Need It

As your AI agents grow and connect to more MCP servers, finding the right tool quickly becomes challenging. Relying on static lists or manual selection is slow, inflexible, and can overload your agent's context window, making it harder for the agent to choose the best tool for each task.

The **Semantic Tool Knowledge Base** solves this by automatically embedding all available tools into a vector database. This enables your agent to use semantic search: it can instantly and intelligently retrieve the most relevant tools based on the meaning of your query, not just keywords. As your tool ecosystem expands, the agent always finds the best match, with no manual updates or registry management required.

### Usefulness

- **Scalable Tool Discovery:** Connect unlimited MCP servers and tools; the agent finds what it needs, when it needs it.
- **Context-Aware Retrieval:** The agent uses semantic similarity to select tools that best match the user’s intent, not just keywords.
- **Unified Access:** All tools are accessible via a single `tools_retriever` interface, simplifying agent logic.
- **Fallback Reliability:** If semantic search fails, the agent falls back to fast keyword (BM25) search for robust results.
- **No Manual Registry:** Tools are automatically indexed and updatedβ€”no need to maintain a static list.

---

### How to Enable

Add these options to your agent config:

```json
"agent_config": {
    "enable_tools_knowledge_base": true,      // Enable semantic tool KB, default: false
    "tools_results_limit": 10,                // Max tools to retrieve per query
    "tools_similarity_threshold": 0.1,        // Similarity threshold for semantic search
    ...
}
```

When enabled, all MCP server tools are embedded into your chosen vector DB (Qdrant, ChromaDB, MongoDB, etc.) alongside the standard database, and the agent uses `tools_retriever` to fetch the relevant tools at runtime.

---

### Example Usage

```python
agent = OmniAgent(
    ...,
    agent_config={
        "enable_tools_knowledge_base": True,
        "tools_results_limit": 10,
        "tools_similarity_threshold": 0.1,
        # other config...
    },
    ...
)
```

---

### Benefits Recap

- **Instant access to thousands of tools**
- **Context-aware, semantic selection**
- **No manual registry management**
- **Reliable fallback search**
- **Scales with your infrastructure**

---

## πŸ—‚οΈ **Memory Tool Backend**

The memory tool backend gives agents a persistent, writable working-memory layer on disk (under `/memories`). It is designed for multi-step or resumable workflows where the agent needs durable state outside the transient LLM context.

### Why It Matters

- Agents often need an external writable workspace for long-running tasks, progress tracking, or resumable operations.
- Storing working memory externally prevents constantly bloating the prompt and preserves important intermediate state across restarts or multiple runs.
- This is a lightweight, agent-facing working layer β€” not a replacement for structured DBs or vector semantic memory.

### How to Enable

- Enable via agent config:

```python
agent_config = {
    "memory_tool_backend": "local",  # enable persistent memory (writes to ./memories)
}
```

- Disable by omitting the key or setting it to None:

```python
agent_config = {
    "memory_tool_backend": None,  # disable persistent memory
}
```

### Behavior & Capabilities

- When enabled, the agent gets access to `memory_*` tools for managing persistent files under `/memories`:
  - `memory_view`, `memory_create_update`, `memory_insert`
  - `memory_str_replace`, `memory_delete`, `memory_rename`, `memory_clear_all`
- Operations use a structured XML observation format so the LLM can perform reliable memory actions and parse results programmatically.
- System prompt extensions include privacy, concurrency, and size constraints to help enforce safe usage.

### Files & Storage

- The local backend stores files under the repository root (`./memories`) by default.
- Current release: local backend only. Future releases will add S3, database, and other filesystem backends.

### Example Usage (Agent-Facing)

```python
# enable persistent memory in agent config
agent = OmniAgent(
    ...,
    agent_config={
        "memory_tool_backend": "local",
        # other agent config...
    },
    ...
)

# Agent can now call memory_* tools to create and update working memory
# (these are invoked by the agent's tool-calling logic; see examples/ for patterns)
```

### Results & Tradeoffs

- Agents can maintain durable working memory outside the token context, enabling long-running workflows, planning persistence, and resumable tasks.
- This memory layer is intended as a writable working area for active tasks (progress, in-progress artifacts, state), not a substitute for structured transactional storage or semantic vector memory.
- Privacy, concurrency, and size constraints are enforced via the system prompt and runtime checks; review these policies before production deployment.

### Roadmap

- Add S3, DB, and other filesystem backends.
- Add optional encryption, access controls, and configurable retention policies.

### Practical Note

- Use the memory tool backend when your workflows require persistent, writable agent state between steps or runs. Continue using vector DBs or SQL/NoSQL stores for semantic or structured storage needs.

---

**Note:** Choose your vector DB provider via environment variables. See the [Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide) guide.

---

# πŸ€– OmniAgent - Revolutionary AI Agent Builder

**🌟 Introducing OmniAgent** - A revolutionary AI agent system that brings plug-and-play intelligence to your applications!

## βœ… OmniAgent Revolutionary Capabilities:
- **🧠 Multi-tier memory management** with vector search and semantic retrieval
- **πŸ› οΈ XML-based reasoning** with strict tool formatting for reliable execution  
- **πŸ”§ Advanced tool orchestration** - Seamlessly combine MCP server tools + local tools
- **🚁 Self-flying background agents** with autonomous task execution
- **πŸ“‘ Real-time event streaming** for monitoring and debugging
- **πŸ—οΈ Production-ready infrastructure** with error handling and retry logic
- **⚑ Plug-and-play intelligence** - No complex setup required!

## πŸ”₯ **LOCAL TOOLS SYSTEM** - Create Custom AI Tools!

One of OmniAgent's most powerful features is the ability to **register your own Python functions as AI tools**. The agent can then intelligently use these tools to complete tasks.

### 🎯 Quick Tool Registration Example

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry
tool_registry = ToolRegistry()

# Register your custom tools with simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    """Calculate the area of a rectangle."""
    area = length * width
    return f"Area of rectangle ({length} x {width}): {area} square units"

@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
    """Analyze text and return word count and character count."""
    words = len(text.split())
    chars = len(text)
    return f"Analysis: {words} words, {chars} characters"

@tool_registry.register_tool("system_status")
def get_system_status() -> str:
    """Get current system status information."""
    import platform
    import time
    return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"

# Use tools with OmniAgent
agent = OmniAgent(
    name="my_agent",
    local_tools=tool_registry,  # Your custom tools!
    # ... other config
)

# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")
```

### πŸ“– Tool Registration Patterns (Create Your Own!)

**No built-in tools** - You create exactly what you need! Study these EXAMPLE patterns from `run_omni_agent.py`:

**Mathematical Tools Examples:**
```python
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    area = length * width
    return f"Area: {area} square units"

@tool_registry.register_tool("analyze_numbers") 
def analyze_numbers(numbers: str) -> str:
    num_list = [float(x.strip()) for x in numbers.split(",")]
    return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
```

**System Tools Examples:**
```python
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
    import platform
    return f"OS: {platform.system()}, Python: {platform.python_version()}"
```

**File Tools Examples:**
```python
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
    import os
    files = os.listdir(path)
    return f"Found {len(files)} items in {path}"
```

### 🎨 More Tool Registration Patterns

**1. Simple Function Tools:**
```python
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
    """Get weather information for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25Β°C"
```

**2. Complex Analysis Tools:**
```python
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
        if analysis_type == "summary":
            return f"Data contains {len(data_obj)} items"
        elif analysis_type == "detailed":
            # Complex analysis logic
            return "Detailed analysis results..."
    except:
        return "Invalid data format"
```

**3. File Processing Tools:**
```python
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, 'r') as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, 'r') as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
    except Exception as e:
        return f"Error processing file: {e}"
```

## πŸ› οΈ Building Custom Agents

### Basic Agent Setup

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
    model_config={
        "provider": "openai", 
        "model": "gpt-4o",
        "temperature": 0.7
    },
    agent_config={
        "tool_call_timeout": 30,
        "max_steps": 10,
        "request_limit": 0,          # 0 = unlimited (production mode), set > 0 to enable limits
        "total_tokens_limit": 0,     # 0 = unlimited (production mode), set > 0 to enable limits
        "memory_results_limit": 5,   # Number of memory results to retrieve (1-100, default: 5)
        "memory_similarity_threshold": 0.5  # Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
    },
    # Your custom local tools
    local_tools=tool_registry,
    # MCP servers - automatically connected!
    mcp_tools=[
        {
            "name": "filesystem",
            "transport_type": "stdio", 
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
        },
        {
            "name": "github",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer your-token"}
        }
    ],
    embedding_config={
        "provider": "openai",
        "model": "text-embedding-3-small",
        "dimensions": 1536,
        "encoding_format": "float",
    },
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
```
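
To expose this agent over HTTP, a thin FastAPI wrapper is enough. The sketch below is hypothetical (see `examples/fast_api_impl.py` for the official pattern) and reuses the `agent` instance defined above:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    query: str

@app.post("/chat")
async def chat(req: ChatRequest):
    # Delegate to the OmniAgent instance created above;
    # it uses both MCP tools and local tools transparently
    result = await agent.run(req.query)
    return {"response": result}
```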

## 🧩 **OmniAgent Workflow System** – Multi-Agent Orchestration

OmniCoreAgent now includes a powerful **workflow system** for orchestrating multiple agents in your application.  
You can choose from three workflow agents, each designed for different orchestration patterns:

- **SequentialAgent** – Chain agents step-by-step, passing output from one to the next.
- **ParallelAgent** – Run multiple agents concurrently, each with its own task.
- **RouterAgent** – Use an intelligent router agent to select the best sub-agent for a given task.

All three workflow agents are available in the `omni_agent/workflow/` directory, and usage examples are provided in the `examples/` folder.

---

### πŸ€– **SequentialAgent** – Step-by-Step Agent Chaining

**Purpose:**  
Run a list of agents in sequence, passing the output of each agent as the input to the next.  
This is ideal for multi-stage processing pipelines, where each agent performs a specific transformation or analysis.

**How it works:**

- You provide a list of `OmniAgent` instances.
- The first agent receives the initial query (or uses its system instruction if no query is provided).
- Each agent’s output is passed as the input to the next agent.
- The same session ID is used for all agents, ensuring shared context and memory.

**Example Usage:**

```python
from omnicoreagent.omni_agent.workflow.sequential_agent import SequentialAgent

# Create your agents (see examples/ for full setup)
agents = [agent1, agent2, agent3]

seq_agent = SequentialAgent(sub_agents=agents)
await seq_agent.initialize()
result = await seq_agent.run(initial_task="Analyze this data and summarize results")
print(result)
```

**Typical Use Cases:**

- Data preprocessing β†’ analysis β†’ reporting
- Multi-step document processing
- Chained reasoning tasks

---

### ⚑ **ParallelAgent** – Concurrent Agent Execution

**Purpose:**  
Run multiple agents at the same time, each with its own task or system instruction.  
This is perfect for scenarios where you want to gather results from several agents independently and quickly.

**How it works:**

- You provide a list of `OmniAgent` instances.
- Optionally, you can specify a dictionary of tasks for each agent (`agent_name: task`). If no task is provided, the agent uses its system instruction.
- All agents are run concurrently, sharing the same session ID for context.
- Results are returned as a dictionary mapping agent names to their outputs.

**Example Usage:**

```python
from omnicoreagent.omni_agent.workflow.parallel_agent import ParallelAgent

agents = [agent1, agent2, agent3]
tasks = {
    "agent1": "Summarize this article",
    "agent2": "Extract keywords",
    "agent3": None  # Uses system instruction
}

par_agent = ParallelAgent(sub_agents=agents)
await par_agent.initialize()
results = await par_agent.run(agent_tasks=tasks)
print(results)
```

**Typical Use Cases:**

- Running multiple analyses on the same data
- Gathering different perspectives or answers in parallel
- Batch processing with independent agents

---

### 🧠 **RouterAgent** – Intelligent Task Routing

**Purpose:**  
Automatically select the most suitable agent for a given task using LLM-powered reasoning and XML-based decision making.  
The RouterAgent analyzes the user’s query and agent capabilities, then routes the task to the best-fit agent.

**How it works:**

- You provide a list of `OmniAgent` instances and configuration for the router.
- The RouterAgent builds a registry of agent capabilities (using system instructions and available tools).
- When a task is received, the RouterAgent uses its internal LLM to select the best agent and forwards the task.
- The selected agent executes the task and returns the result.

**Example Usage:**

```python
from omnicoreagent.omni_agent.workflow.router_agent import RouterAgent

agents = [agent1, agent2, agent3]
router = RouterAgent(
    sub_agents=agents,
    model_config={...},
    agent_config={...},
    memory_router=...,
    event_router=...,
    debug=True
)
await router.initialize()
result = await router.run(task="Find and summarize recent news about AI")
print(result)
```

**Typical Use Cases:**

- Dynamic agent selection based on user query
- Multi-domain assistants (e.g., code, data, research)
- Intelligent orchestration in complex workflows

---

### πŸ“š **Workflow Agent Examples**

See the `examples/` directory for ready-to-run demos of each workflow agent:

- `examples/sequential_agent.py`
- `examples/parallel_agent.py`
- `examples/router_agent.py`

Each example shows how to set up agents, configure workflows, and process results.

---

### πŸ› οΈ **How to Choose?**

| Workflow Agent   | Best For                                      |
|------------------|-----------------------------------------------|
| SequentialAgent  | Multi-stage pipelines, step-by-step tasks     |
| ParallelAgent    | Fast batch processing, independent analyses   |
| RouterAgent      | Smart routing, dynamic agent selection        |

You can combine these workflow agents for advanced orchestration patterns in your AI applications.
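
For example, a two-stage pipeline might fan analyses out with a `ParallelAgent` and then chain drafting and review with a `SequentialAgent`. This is a sketch using the APIs shown above; the agent instances (`research_agent`, `stats_agent`, `writer_agent`, `reviewer_agent`) are placeholders for your own `OmniAgent` setups:

```python
from omnicoreagent.omni_agent.workflow.parallel_agent import ParallelAgent
from omnicoreagent.omni_agent.workflow.sequential_agent import SequentialAgent

# Stage 1: run independent analyses concurrently
par_stage = ParallelAgent(sub_agents=[research_agent, stats_agent])
await par_stage.initialize()
stage1_results = await par_stage.run(agent_tasks={
    "research_agent": "Collect recent findings on the topic",
    "stats_agent": "Summarize the key statistics in the dataset",
})

# Stage 2: chain drafting and review over the merged output
seq_stage = SequentialAgent(sub_agents=[writer_agent, reviewer_agent])
await seq_stage.initialize()
report = await seq_stage.run(initial_task=f"Draft and review a report from: {stage1_results}")
print(report)
```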

---

**Ready to build?**  
Explore the examples, study the API, and start orchestrating powerful multi-agent workflows with OmniCoreAgent!

## 🚁 Background Agent System - Autonomous Task Automation

The Background Agent System is one of OmniAgent's most powerful features, providing fully autonomous task execution with intelligent lifecycle management. Background agents run independently, executing scheduled tasks without human intervention.

### ✨ Background Agent Features

- **πŸ”„ Autonomous Execution** - Agents run independently in the background
- **⏰ Flexible Scheduling** - Time-based, interval-based, and cron-style scheduling
- **🧠 Full OmniAgent Capabilities** - Access to all local tools and MCP servers
- **πŸ“Š Lifecycle Management** - Create, update, pause, resume, and delete agents
- **πŸ”§ Background Agent Manager** - Central control system for all background agents
- **πŸ“‘ Real-Time Monitoring** - Track agent status and execution results
- **πŸ› οΈ Task Management** - Update tasks, schedules, and configurations dynamically

### πŸ”§ Background Agent Manager

The Background Agent Manager handles the complete lifecycle of background agents:

#### **Core Capabilities:**
- **Create New Agents** - Deploy autonomous agents with custom tasks
- **Update Agent Tasks** - Modify agent instructions and capabilities dynamically
- **Schedule Management** - Update timing, intervals, and execution schedules
- **Agent Control** - Start, stop, pause, and resume agents
- **Health Monitoring** - Track agent status and performance
- **Resource Management** - Manage agent memory and computational resources

#### **Scheduler Support:**
- **APScheduler** *(Current)* - Advanced Python task scheduling
  - Cron-style scheduling
  - Interval-based execution
  - Date-based scheduling
  - Timezone support
- **Future Roadmap**:
  - **RabbitMQ** - Message queue-based task distribution
  - **Redis Pub/Sub** - Event-driven agent communication
  - **Celery** - Distributed task execution
  - **Kubernetes Jobs** - Container-based agent deployment

### 🎯 Background Agent Usage Examples

#### **1. Basic Background Agent Creation**

```python
from omnicoreagent import (
    OmniAgent,
    MemoryRouter,
    EventRouter,
    BackgroundAgentService,
    ToolRegistry,
    logger,
)

# Initialize the background agent service
memory_router = MemoryRouter(memory_store_type="redis")
event_router = EventRouter(event_store_type="redis_stream")
bg_service = BackgroundAgentService(memory_router, event_router)

# Start the background agent manager
bg_service.start_manager()

# Create tool registry for the background agent
tool_registry = ToolRegistry()

@tool_registry.register_tool("monitor_system")
def monitor_system() -> str:
    """Monitor system resources and status."""
    import psutil
    cpu = psutil.cpu_percent()
    memory = psutil.virtual_memory().percent
    return f"System Status - CPU: {cpu}%, Memory: {memory}%"

# Configure the background agent
agent_config = {
    "agent_id": "system_monitor",
    "system_instruction": "You are a system monitoring agent. Check system resources and send alerts when thresholds are exceeded.",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3
    },
    "agent_config": {
        "max_steps": 10,
        "tool_call_timeout": 60
    },
    "interval": 300,  # 5 minutes in seconds
    "task_config": {
        "query": "Monitor system resources and send alerts if CPU > 80% or Memory > 90%",
        "schedule": "every 5 minutes",
        "interval": 300,
        "max_retries": 2,
        "retry_delay": 30
    },
    "local_tools": tool_registry
}

# Create and deploy background agent
result = await bg_service.create(agent_config)
print(f"Background agent '{agent_config['agent_id']}' created successfully!")
print(f"Details: {result}")
```
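
Once created, drive the agent with the lifecycle methods documented below:

```python
# Start the agent's schedule and confirm it is running
bg_service.start_agent("system_monitor")
status = bg_service.get_agent_status("system_monitor")
print(f"Running: {status.get('is_running')}, Scheduled: {status.get('scheduled')}")
```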

#### **2. Web Application Integration**

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

# Request models for API
class BackgroundAgentRequest(BaseModel):
    agent_id: str
    query: str | None = None
    schedule: str | None = None

class TaskUpdateRequest(BaseModel):
    agent_id: str
    query: str

# FastAPI integration
app = FastAPI()

# Initialize background service
@app.on_event("startup")
async def startup():
    memory_router = MemoryRouter(memory_store_type="redis")
    event_router = EventRouter(event_store_type="redis_stream")
    app.state.bg_service = BackgroundAgentService(memory_router, event_router)
    app.state.bg_service.start_manager()

@app.on_event("shutdown")
async def shutdown():
    app.state.bg_service.shutdown_manager()

# API endpoints (same as shown in REST API section above)
@app.post("/api/background/create")
async def create_background_agent(payload: BackgroundAgentRequest):
    # Parse schedule to interval seconds
    def parse_schedule(schedule_str: str) -> int:
        import re
        if not schedule_str:
            return 3600  # Default 1 hour
        
        # Try parsing as raw number
        try:
            return max(1, int(schedule_str))
        except ValueError:
            pass
        
        # Parse text patterns
        text = schedule_str.lower().strip()
        
        # Match patterns like "5 minutes", "every 30 seconds", etc.
        patterns = [
            (r"(\d+)(?:\s*)(second|sec|s)s?", 1),
            (r"(\d+)(?:\s*)(minute|min|m)s?", 60),
            (r"(\d+)(?:\s*)(hour|hr|h)s?", 3600),
            (r"every\s+(\d+)\s+(second|sec|s)s?", 1),
            (r"every\s+(\d+)\s+(minute|min|m)s?", 60),
            (r"every\s+(\d+)\s+(hour|hr|h)s?", 3600)
        ]
        
        for pattern, multiplier in patterns:
            match = re.search(pattern, text)
            if match:
                value = int(match.group(1))
                return max(1, value * multiplier)
        
        return 3600  # Default fallback

    interval_seconds = parse_schedule(payload.schedule)
    
    agent_config = {
        "agent_id": payload.agent_id,
        "system_instruction": f"You are a background agent that performs: {payload.query}",
        "model_config": {
            "provider": "openai", 
            "model": "gpt-4o-mini", 
            "temperature": 0.3
        },
        "agent_config": {
            "max_steps": 10, 
            "tool_call_timeout": 60
        },
        "interval": interval_seconds,
        "task_config": {
            "query": payload.query,
            "schedule": payload.schedule or "immediate",
            "interval": interval_seconds,
            "max_retries": 2,
            "retry_delay": 30
        },
        "local_tools": build_tool_registry()  # Your custom tools
    }
    
    details = await app.state.bg_service.create(agent_config)
    app.state.bg_service.start_manager()
    
    return {
        "status": "success",
        "agent_id": payload.agent_id,
        "message": "Background agent created",
        "details": details
    }
```

#### **3. Agent Lifecycle Management**

```python
# List all background agents
agent_ids = bg_service.list()
print(f"Active agents: {len(agent_ids)}")
print(f"Agent IDs: {agent_ids}")

# Get detailed agent information
for agent_id in agent_ids:
    status = bg_service.get_agent_status(agent_id)
    print(f"""
Agent: {agent_id}
β”œβ”€β”€ Running: {status.get('is_running', False)}
β”œβ”€β”€ Scheduled: {status.get('scheduled', False)}
β”œβ”€β”€ Query: {status.get('task_config', {}).get('query', 'N/A')}
β”œβ”€β”€ Schedule: {status.get('task_config', {}).get('schedule', 'N/A')}
β”œβ”€β”€ Interval: {status.get('task_config', {}).get('interval', 'N/A')}s
└── Session ID: {bg_service.manager.get_agent_session_id(agent_id)}
""")

# Update agent task
success = bg_service.update_task_config(
    agent_id="system_monitor",
    task_config={
        "query": "Monitor system resources and also check disk space. Alert if disk usage > 85%",
        "max_retries": 3,
        "retry_delay": 60
    }
)
print(f"Task update success: {success}")

# Agent control operations
bg_service.pause_agent("system_monitor")   # Pause scheduling
print("Agent paused")

bg_service.resume_agent("system_monitor")  # Resume scheduling
print("Agent resumed")

bg_service.stop_agent("system_monitor")    # Stop execution
print("Agent stopped")

bg_service.start_agent("system_monitor")   # Start execution
print("Agent started")

# Remove agent task permanently
success = bg_service.remove_task("system_monitor")
print(f"Task removal success: {success}")

# Get manager status
manager_status = bg_service.get_manager_status()
print(f"Manager status: {manager_status}")

# Connect MCP servers for agent (if configured)
await bg_service.connect_mcp("system_monitor")
print("MCP servers connected")

# Shutdown entire manager
bg_service.shutdown_manager()
print("Background agent manager shutdown")
```

#### **4. Background Agent with MCP Integration**

```python
# Background agent with both local tools and MCP servers,
# using the same config structure documented below
agent_config = {
    "agent_id": "web_scraper",
    "system_instruction": "You are a web scraping agent. Scrape news sites, analyze sentiment, and store results in the database.",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3
    },
    "agent_config": {
        "max_steps": 15,
        "tool_call_timeout": 60
    },
    "interval": 3600,  # Run hourly
    "task_config": {
        "query": "Scrape news websites, analyze sentiment, and store results",
        "schedule": "every 1 hour",
        "interval": 3600,
        "max_retries": 2,
        "retry_delay": 30
    },
    "local_tools": tool_registry,  # Your custom tools
    "mcp_tools": [  # MCP server connections
        {
            "name": "web_scraper",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@mcp/server-web-scraper"]
        },
        {
            "name": "database",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer db-token"}
        }
    ]
}

result = await bg_service.create(agent_config)
await bg_service.connect_mcp("web_scraper")  # Connect the configured MCP servers
```

### πŸ› οΈ Background Agent Manager API

The BackgroundAgentService provides a comprehensive API for managing background agents:

#### **Agent Creation & Configuration**
```python
# Create new background agent
result = await bg_service.create(agent_config)  # agent_config: dict, structure below

# Agent configuration structure
agent_config = {
    "agent_id": "unique_agent_id",
    "system_instruction": "Agent role and behavior description",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3
    },
    "agent_config": {
        "max_steps": 10,
        "tool_call_timeout": 60
    },
    "interval": 300,  # Execution interval in seconds
    "task_config": {
        "query": "Main task description",
        "schedule": "human-readable schedule (e.g., 'every 5 minutes')",
        "interval": 300,
        "max_retries": 2,
        "retry_delay": 30
    },
    "local_tools": tool_registry,  # Optional custom tools
    "mcp_tools": mcp_server_configs  # Optional MCP server connections
}
```

#### **Agent Lifecycle Management**
```python
# Start the background agent manager
bg_service.start_manager()

# Agent control operations
bg_service.start_agent(agent_id: str)      # Start agent execution
bg_service.stop_agent(agent_id: str)       # Stop agent execution
bg_service.pause_agent(agent_id: str)      # Pause agent scheduling
bg_service.resume_agent(agent_id: str)     # Resume agent scheduling

# Shutdown manager (stops all agents)
bg_service.shutdown_manager()
```

#### **Agent Monitoring & Status**
```python
# List all agents
agent_ids = bg_service.list()  # Returns list of agent IDs

# Get specific agent status
status = bg_service.get_agent_status(agent_id: str)
# Returns: {
#     "is_running": bool,
#     "scheduled": bool,
#     "task_config": dict,
#     "session_id": str,
#     # ... other status info
# }

# Get manager status
manager_status = bg_service.get_manager_status()
```

#### **Task Management**
```python
# Update agent task configuration
success = bg_service.update_task_config(
    agent_id: str, 
    task_config: dict
)

# Remove agent task completely
success = bg_service.remove_task(agent_id: str)
```

#### **MCP Server Management**
```python
# Connect MCP servers for specific agent
await bg_service.connect_mcp(agent_id: str)
```

### 🌐 REST API Endpoints

The Background Agent system can be integrated into web applications with these REST endpoints:

#### **Agent Management Endpoints**
```bash
# Create new background agent
POST /api/background/create
{
    "agent_id": "system_monitor",
    "query": "Monitor system resources and alert on high usage",
    "schedule": "every 5 minutes"
}

# List all background agents
GET /api/background/list
# Returns: {
#   "status": "success",
#   "agents": [
#     {
#       "agent_id": "system_monitor",
#       "query": "Monitor system resources...",
#       "is_running": true,
#       "scheduled": true,
#       "schedule": "every 5 minutes",
#       "interval": 300,
#       "session_id": "session_123"
#     }
#   ]
# }
```

#### **Agent Control Endpoints**
```bash
# Start agent
POST /api/background/start
{"agent_id": "system_monitor"}

# Stop agent
POST /api/background/stop
{"agent_id": "system_monitor"}

# Pause agent
POST /api/background/pause
{"agent_id": "system_monitor"}

# Resume agent
POST /api/background/resume
{"agent_id": "system_monitor"}
```

#### **Task Management Endpoints**
```bash
# Update agent task
POST /api/task/update
{
    "agent_id": "system_monitor",
    "query": "Updated task description"
}

# Remove agent task
DELETE /api/task/remove/{agent_id}
```

#### **Status & Monitoring Endpoints**
```bash
# Get manager status
GET /api/background/status

# Get specific agent status
GET /api/background/status/{agent_id}

# Connect MCP servers for agent
POST /api/background/mcp/connect
{"agent_id": "system_monitor"}
```

#### **Event Streaming Endpoints**
```bash
# Get events for session
GET /api/events?session_id=session_123

# Stream real-time events
GET /api/events/stream/{session_id}
# Returns Server-Sent Events stream
```
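
For a quick smoke test of these endpoints from Python, assuming the FastAPI integration above is served at `http://localhost:8000` (as in the web examples):

```python
import requests

BASE = "http://localhost:8000"

# Create a background agent via the REST API
resp = requests.post(f"{BASE}/api/background/create", json={
    "agent_id": "system_monitor",
    "query": "Monitor system resources and alert on high usage",
    "schedule": "every 5 minutes",
})
print(resp.json())

# Check manager and per-agent status
print(requests.get(f"{BASE}/api/background/status").json())
print(requests.get(f"{BASE}/api/background/status/system_monitor").json())
```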

### πŸ“… Scheduling Configuration

The Background Agent system currently supports interval-based scheduling with intelligent parsing:

#### **Interval-Based Scheduling (Current Implementation)**
```python
# Schedule configuration in agent_config
agent_config = {
    "interval": 300,  # Execution interval in seconds
    "task_config": {
        "schedule": "every 5 minutes",  # Human-readable description
        "interval": 300,               # Same value in seconds
        "max_retries": 2,
        "retry_delay": 30
    }
}

# Flexible schedule input formats supported:
"300"                    # 300 seconds
"5 minutes"             # 5 minutes β†’ 300 seconds
"2 hours"               # 2 hours β†’ 7200 seconds
"30 seconds"            # 30 seconds
"every 30 seconds"      # Every 30 seconds
"every 10 minutes"      # Every 10 minutes β†’ 600 seconds
"every 2 hours"         # Every 2 hours β†’ 7200 seconds

# All automatically converted to interval seconds
# Minimum interval: 1 second
```

#### **Schedule Parsing Logic**
The system intelligently parses various schedule formats (a quick check follows this list):
- **Raw numbers**: `"300"` β†’ 300 seconds
- **Unit expressions**: `"5 minutes"` β†’ 300 seconds
- **Every patterns**: `"every 10 minutes"` β†’ 600 seconds
- **Supported units**: seconds (s/sec), minutes (m/min), hours (h/hr)
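
For instance, lifting the `parse_schedule` helper out of the web integration example above and running it over a few formats:

```python
# Quick check of the schedule parser from the FastAPI example above
for schedule in ["300", "5 minutes", "every 10 minutes", "2 hours"]:
    print(schedule, "->", parse_schedule(schedule), "seconds")
# 300 -> 300 seconds
# 5 minutes -> 300 seconds
# every 10 minutes -> 600 seconds
# 2 hours -> 7200 seconds
```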

#### **Future Scheduling Features (Planned)**
```python
# Coming with future scheduler backends:
schedule = {
    "type": "cron",
    "cron": "0 9 * * 1-5",    # Weekdays at 9 AM
    "timezone": "UTC"
}

schedule = {
    "type": "date",
    "run_date": "2024-03-15 14:30:00",
    "timezone": "UTC"
}
```

### πŸ”„ Background Agent States

Background agents can be in different states managed by the Background Agent Manager:

- **`CREATED`** - Agent created but not yet started
- **`RUNNING`** - Agent is active and executing according to schedule
- **`PAUSED`** - Agent is temporarily stopped but retains configuration
- **`STOPPED`** - Agent execution stopped but agent still exists
- **`ERROR`** - Agent encountered an error during execution
- **`DELETED`** - Agent permanently removed

### πŸ“Š Monitoring & Observability

#### **Real-Time Status Monitoring**
```python
# Get comprehensive agent status via the underlying manager
manager = bg_service.manager  # the BackgroundAgentManager behind the service
status = await manager.get_agent_status("system_monitor")

print(f"""
Agent Status Report:
β”œβ”€β”€ ID: {status['agent_id']}
β”œβ”€β”€ Name: {status['name']}
β”œβ”€β”€ State: {status['state']}
β”œβ”€β”€ Last Run: {status['last_run']}
β”œβ”€β”€ Next Run: {status['next_run']}
β”œβ”€β”€ Success Rate: {status['success_rate']}%
β”œβ”€β”€ Total Executions: {status['total_runs']}
β”œβ”€β”€ Failed Executions: {status['failed_runs']}
└── Average Duration: {status['avg_duration']}s
""")
```

#### **Execution History**
```python
# Get detailed execution history
history = await manager.get_execution_history("system_monitor", limit=5)

for execution in history:
    print(f"""
Execution {execution['execution_id']}:
β”œβ”€β”€ Start Time: {execution['start_time']}
β”œβ”€β”€ Duration: {execution['duration']}s
β”œβ”€β”€ Status: {execution['status']}
β”œβ”€β”€ Result: {execution['result'][:100]}...
└── Tools Used: {execution['tools_used']}
""")
```

### πŸš€ Future Scheduler Support

The Background Agent Manager is designed to support multiple scheduling backends:

#### **Current Support**
- **APScheduler** - Full-featured Python task scheduling
  - In-memory scheduler
  - Persistent job storage
  - Multiple trigger types
  - Timezone support

#### **Planned Future Support**
- **RabbitMQ** - Message queue-based task distribution
  - Distributed agent execution
  - Load balancing across workers
  - Reliable message delivery
  - Dead letter queues for failed tasks

- **Redis Pub/Sub** - Event-driven agent communication
  - Real-time event processing
  - Agent-to-agent communication
  - Scalable event distribution
  - Pattern-based subscriptions

- **Celery** - Distributed task queue
  - Horizontal scaling
  - Result backends
  - Task routing and priority
  - Monitoring and management tools

- **Kubernetes Jobs** - Container-based agent deployment
  - Cloud-native scaling
  - Resource management
  - Job persistence and recovery
  - Integration with CI/CD pipelines

### πŸ“‹ Background Agent Configuration

#### **Complete Configuration Example**
```python
# Comprehensive background agent setup
background_agent = await manager.create_agent(
    agent_id="comprehensive_agent",
    name="Comprehensive Background Agent",
    task="Monitor APIs, process data, and generate reports",
    
    # Scheduling configuration
    schedule={
        "type": "cron",
        "cron": "0 */6 * * *",  # Every 6 hours
        "timezone": "UTC"
    },
    
    # AI model configuration
    model_config={
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3,
        "max_tokens": 2000
    },
    
    # Agent behavior configuration
    agent_config={
        "tool_call_timeout": 60,
        "max_steps": 20,
        "request_limit": 100,
        "total_tokens_limit": 10000,
        "memory_results_limit": 10,
        "memory_similarity_threshold": 0.7
    },
    
    # Custom tools
    local_tools=tool_registry,
    
    # MCP server connections
    mcp_tools=[
        {
            "name": "api_monitor",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer api-token"}
        }
    ],
    
    # Agent personality
    system_instruction="You are an autonomous monitoring agent. Execute tasks efficiently and report any issues.",
    
    # Memory and events
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="redis_stream")
)
```

### πŸ”„ Error Handling & Recovery

Background agents include robust error handling:

```python
# Automatic retry configuration
agent_config = {
    "max_retries": 3,           # Retry failed executions
    "retry_delay": 60,          # Wait 60 seconds between retries
    "failure_threshold": 5,     # Pause agent after 5 consecutive failures
    "recovery_mode": "auto"     # Auto-resume after successful execution
}

# Error monitoring
try:
    result = await agent.execute_task()
except BackgroundAgentException as e:
    # Handle agent-specific errors
    await manager.handle_agent_error(agent_id, e)
```

### πŸ“‘ Event Integration

Background agents integrate with the event system for real-time monitoring:

```python
# Subscribe to background agent events
event_router = EventRouter(event_store_type="redis_stream")

# Listen for agent events
async for event in event_router.subscribe("background_agent.*"):
    if event.type == "agent_started":
        print(f"Agent {event.data['agent_id']} started execution")
    elif event.type == "agent_completed":
        print(f"Agent {event.data['agent_id']} completed task")
    elif event.type == "agent_failed":
        print(f"Agent {event.data['agent_id']} failed: {event.data['error']}")
```

## πŸ“š OmniAgent Examples

### Basic Agent Usage
```bash
# Complete OmniAgent demo with custom tools
python examples/omni_agent_example.py

# Advanced patterns with 12+ tool examples
python examples/run_omni_agent.py
```

### Background Agents
```bash
# Self-flying autonomous agents
python examples/background_agent_example.py
```

### Web Applications
```bash
# FastAPI integration
python examples/fast_api_impl.py

# Full web interface
python examples/web_server.py
# Open http://localhost:8000
```

---

# πŸ”Œ MCPOmni Connect - World-Class MCP Client

The MCPOmni Connect system is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes.

## ✨ MCPOmni Connect Key Features

### πŸ€– Intelligent Agent System

- **ReAct Agent Mode**
  - Autonomous task execution with reasoning and action cycles
  - Independent decision-making without human intervention
  - Advanced problem-solving through iterative reasoning
  - Self-guided tool selection and execution
  - Complex task decomposition and handling
- **Orchestrator Agent Mode**
  - Strategic multi-step task planning and execution
  - Intelligent coordination across multiple MCP servers
  - Dynamic agent delegation and communication
  - Parallel task execution when possible
  - Sophisticated workflow management with real-time progress monitoring
- **Interactive Chat Mode**
  - Human-in-the-loop task execution with approval workflows
  - Step-by-step guidance and explanations
  - Educational mode for understanding AI decision processes

### πŸ”Œ Universal Connectivity

- **Multi-Protocol Support**
  - Native support for stdio transport
  - Server-Sent Events (SSE) for real-time communication
  - Streamable HTTP for efficient data streaming
  - Docker container integration
  - NPX package execution
  - Extensible transport layer for future protocols
- **Authentication Support**
  - OAuth 2.0 authentication flow
  - Bearer token authentication
  - Custom header support
  - Secure credential management
- **Agentic Operation Modes**
  - Seamless switching between chat, autonomous, and orchestrator modes
  - Context-aware mode selection based on task complexity
  - Persistent state management across mode transitions

## 🚦 Transport Types & Authentication

MCPOmni Connect supports multiple ways to connect to MCP servers:

### 1. **stdio** - Direct Process Communication

**Use when**: Connecting to local MCP servers that run as separate processes

```json
{
  "server-name": {
    "transport_type": "stdio",
    "command": "uvx",
    "args": ["mcp-server-package"]
  }
}
```

- **No authentication needed**
- **No OAuth server started**
- Most common for local development

### 2. **sse** - Server-Sent Events

**Use when**: Connecting to HTTP-based MCP servers using Server-Sent Events

```json
{
  "server-name": {
    "transport_type": "sse",
    "url": "http://your-server.com:4010/sse",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60,
    "sse_read_timeout": 120
  }
}
```

- **Uses Bearer token or custom headers**
- **No OAuth server started**

### 3. **streamable_http** - HTTP with Optional OAuth

**Use when**: Connecting to HTTP-based MCP servers with or without OAuth

**Without OAuth (Bearer Token):**

```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "url": "http://your-server.com:4010/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60
  }
}
```

- **Uses Bearer token or custom headers**
- **No OAuth server started**

**With OAuth:**

```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server.com:4010/mcp"
  }
}
```

- **OAuth callback server automatically starts on `http://localhost:3000`**
- **This is hardcoded and cannot be changed**
- **Required for OAuth flow to work properly**

### πŸ” OAuth Server Behavior

**Important**: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server.

#### What You'll See:

```
πŸ–₯️  Started callback server on http://localhost:3000
```

#### Key Points:

- **This is normal behavior** - not an error
- **The address `http://localhost:3000` is hardcoded** and cannot be changed
- **The server only starts when** you have `"auth": {"method": "oauth"}` in your config
- **The server stops** when the application shuts down
- **Only used for OAuth token handling** - no other purpose

#### When OAuth is NOT Used:

- Remove the entire `"auth"` section from your server configuration
- Use `"headers"` with `"Authorization": "Bearer token"` instead
- No OAuth server will start

## πŸ–₯️ CLI Commands

### Memory Store Management:
```bash
# Switch between memory backends
/memory_store:in_memory                    # Fast in-memory storage (default)
/memory_store:redis                        # Redis persistent storage  
/memory_store:database                     # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db  # PostgreSQL
/memory_store:database:mysql://user:pass@host/db       # MySQL
/memory_store:mongodb                      # MongoDB persistent storage
/memory_store:mongodb:your_mongodb_connection_string   # MongoDB with custom URI

# Memory strategy configuration
/memory_mode:sliding_window:10             # Keep last 10 messages
/memory_mode:token_budget:5000             # Keep under 5000 tokens
```

### Event Store Management:
```bash
# Switch between event backends
/event_store:in_memory                     # Fast in-memory events (default)
/event_store:redis_stream                  # Redis Streams for persistence
```

### Core MCP Operations:
```bash
/tools                                    # List all available tools
/prompts                                  # List all available prompts  
/resources                               # List all available resources
/prompt:<name>                           # Execute a specific prompt
/resource:<uri>                          # Read a specific resource
/subscribe:<uri>                         # Subscribe to resource updates
/query <your_question>                   # Ask questions using tools
```

### Enhanced Commands:
```bash
# Memory operations
/history                                   # Show conversation history
/clear_history                            # Clear conversation history
/save_history <file>                      # Save history to file
/load_history <file>                      # Load history from file

# Server management
/add_servers:<config.json>                # Add servers from config
/remove_server:<server_name>              # Remove specific server
/refresh                                  # Refresh server capabilities

# Agentic modes
/mode:auto                              # Switch to autonomous agentic mode
/mode:orchestrator                      # Switch to multi-server orchestration
/mode:chat                              # Switch to interactive chat mode

# Debugging and monitoring
/debug                                    # Toggle debug mode
/api_stats                               # Show API usage statistics
```

## πŸ“š MCP Usage Examples

### Basic MCP Client
```bash
# Launch the basic MCP client
python examples/basic_mcp.py
```

### Advanced MCP CLI
```bash
# Launch the advanced MCP CLI
python examples/run_mcp.py

# Core MCP client commands:
/tools                                    # List all available tools
/prompts                                  # List all available prompts  
/resources                               # List all available resources
/prompt:<name>                           # Execute a specific prompt
/resource:<uri>                          # Read a specific resource
/subscribe:<uri>                         # Subscribe to resource updates
/query <your_question>                   # Ask questions using tools

# Advanced platform features:
/memory_store:redis                      # Switch to Redis memory
/event_store:redis_stream               # Switch to Redis events
/add_servers:<config.json>              # Add MCP servers dynamically
/remove_server:<name>                   # Remove MCP server
/mode:auto                              # Switch to autonomous agentic mode
/mode:orchestrator                      # Switch to multi-server orchestration
```

---

## ✨ Platform Features

> **πŸš€ Want to start building right away?** Jump to [Quick Start](#-quick-start-2-minutes) | [Examples](#-what-can-you-build-see-real-examples) | [Configuration](#️-configuration-guide)

### 🧠 AI-Powered Intelligence

- **Unified LLM Integration with LiteLLM**
  - Single unified interface for all AI providers
  - Support for 100+ models across providers including:
    - OpenAI (GPT-4, GPT-3.5, etc.)
    - Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
    - Google (Gemini Pro, Gemini Flash, etc.)
    - Groq (Llama, Mixtral, Gemma, etc.)
    - DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
    - Azure OpenAI
    - OpenRouter (access to 200+ models)
    - Ollama (local models)
  - Simplified configuration and reduced complexity
  - Dynamic system prompts based on available capabilities
  - Intelligent context management
  - Automatic tool selection and chaining
  - Universal model support through custom ReAct Agent
    - Handles models without native function calling
    - Dynamic function execution based on user requests
    - Intelligent tool orchestration

### πŸ”’ Security & Privacy

- **Explicit User Control**
  - All tool executions require explicit user approval in chat mode
  - Clear explanation of tool actions before execution
  - Transparent disclosure of data access and usage
- **Data Protection**
  - Strict data access controls
  - Server-specific data isolation
  - No unauthorized data exposure
- **Privacy-First Approach**
  - Minimal data collection
  - User data remains on specified servers
  - No cross-server data sharing without consent
- **Secure Communication**
  - Encrypted transport protocols
  - Secure API key management
  - Environment variable protection

### πŸ’Ύ Advanced Memory Management

- **Multi-Backend Memory Storage**
  - **In-Memory**: Fast development storage
  - **Redis**: Persistent memory with real-time access
  - **Database**: PostgreSQL, MySQL, SQLite support 
  - **MongoDB**: NoSQL document storage
  - **File Storage**: Save/load conversation history
  - Runtime switching: `/memory_store:redis`, `/memory_store:database:postgresql://user:pass@host/db`
- **Multi-Tier Memory Strategy**
  - **Short-term Memory**: Sliding window or token budget strategies
  - **Long-term Memory**: Vector database storage for semantic retrieval
  - **Episodic Memory**: Context-aware conversation history
  - Runtime configuration: `/memory_mode:sliding_window:5`, `/memory_mode:token_budget:3000`
- **Vector Database Integration**
  - **Multiple Provider Support**: MongoDB Atlas, ChromaDB (remote/cloud), and Qdrant (remote)
  - **Smart Fallback**: Automatic failover to local storage if remote fails
  - **Semantic Search**: Intelligent context retrieval across conversations  
  - **Long-term & Episodic Memory**: Enable with `ENABLE_VECTOR_DB=true`
  
- **Real-Time Event Streaming**
  - **In-Memory Events**: Fast development event processing
  - **Redis Streams**: Persistent event storage and streaming
  - Runtime switching: `/event_store:redis_stream`, `/event_store:in_memory`
- **Advanced Tracing & Observability**
  - **Opik Integration**: Production-grade tracing and monitoring
    - **Real-time Performance Tracking**: Monitor LLM calls, tool executions, and agent performance
    - **Detailed Call Traces**: See exactly where time is spent in your AI workflows
    - **System Observability**: Understand bottlenecks and optimize performance
    - **Open Source**: Built on Opik, the open-source observability platform
  - **Easy Setup**: Just add your Opik credentials to start monitoring
  - **Zero Code Changes**: Automatic tracing with `@track` decorators (see the sketch below)
  - **Performance Insights**: Identify slow operations and optimization opportunities
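
Built-in LLM and tool calls are traced automatically; if you also want spans for your own helper functions, a minimal sketch using Opik's `track` decorator (assuming the `opik` package is installed) looks like this:

```python
from opik import track

@track
def summarize(text: str) -> str:
    """Hypothetical helper; the decorator records each call as a span in Opik."""
    return text[:100]
```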

### πŸ’¬ Prompt Management

- **Advanced Prompt Handling**
  - Dynamic prompt discovery across servers
  - Flexible argument parsing (JSON and key-value formats)
  - Cross-server prompt coordination
  - Intelligent prompt validation
  - Context-aware prompt execution
  - Real-time prompt responses
  - Support for complex nested arguments
  - Automatic type conversion and validation
- **Client-Side Sampling Support**
  - Dynamic sampling configuration from client
  - Flexible LLM response generation
  - Customizable sampling parameters
  - Real-time sampling adjustments

### πŸ› οΈ Tool Orchestration

- **Dynamic Tool Discovery & Management**
  - Automatic tool capability detection
  - Cross-server tool coordination
  - Intelligent tool selection based on context
  - Real-time tool availability updates

### πŸ“¦ Resource Management

- **Universal Resource Access**
  - Cross-server resource discovery
  - Unified resource addressing
  - Automatic resource type detection
  - Smart content summarization

### πŸ”„ Server Management

- **Advanced Server Handling**
  - Multiple simultaneous server connections
  - Automatic server health monitoring
  - Graceful connection management
  - Dynamic capability updates
  - Flexible authentication methods
  - Runtime server configuration updates

## πŸ—οΈ Architecture

> **πŸ“š Prefer hands-on learning?** Skip to [Examples](#-what-can-you-build-see-real-examples) or [Configuration](#️-configuration-guide)

### Core Components

```
OmniCoreAgent Platform
β”œβ”€β”€ πŸ€– OmniAgent System (Revolutionary Agent Builder)
β”‚   β”œβ”€β”€ Local Tools Registry
β”‚   β”œβ”€β”€ Background Agent Manager (Lifecycle Management)
β”‚   β”‚   β”œβ”€β”€ Agent Creation & Deployment
β”‚   β”‚   β”œβ”€β”€ Task & Schedule Updates
β”‚   β”‚   β”œβ”€β”€ Agent Control (Start/Stop/Pause/Resume)
β”‚   β”‚   β”œβ”€β”€ Health Monitoring & Status Tracking
β”‚   β”‚   └── Scheduler Integration (APScheduler + Future: RabbitMQ, Redis Pub/Sub)
β”‚   β”œβ”€β”€ Custom Agent Creation
β”‚   └── Agent Orchestration Engine
β”œβ”€β”€ πŸ”Œ MCPOmni Connect System (World-Class MCP Client)
β”‚   β”œβ”€β”€ Transport Layer (stdio, SSE, HTTP, Docker, NPX)
β”‚   β”œβ”€β”€ Multi-Server Orchestration
β”‚   β”œβ”€β”€ Authentication & Security
β”‚   └── Connection Lifecycle Management
β”œβ”€β”€ 🧠 Shared Memory System (Both Systems)
β”‚   β”œβ”€β”€ Multi-Backend Storage (Redis, DB, In-Memory)
β”‚   β”œβ”€β”€ Vector Database Integration (ChromaDB, Qdrant, MongoDB)
β”‚   β”œβ”€β”€ Memory Strategies (Sliding Window, Token Budget)
β”‚   └── Session Management
β”œβ”€β”€ πŸ“‘ Event System (Both Systems)
β”‚   β”œβ”€β”€ In-Memory Event Processing
β”‚   β”œβ”€β”€ Redis Streams for Persistence
β”‚   β”œβ”€β”€ Real-Time Event Monitoring
β”‚   └── Background Agent Event Broadcasting
β”œβ”€β”€ πŸ› οΈ Tool Management (Both Systems)
β”‚   β”œβ”€β”€ Dynamic Tool Discovery
β”‚   β”œβ”€β”€ Cross-Server Tool Routing
β”‚   β”œβ”€β”€ Local Python Tool Registration
β”‚   └── Tool Execution Engine
└── πŸ€– AI Integration (Both Systems)
    β”œβ”€β”€ LiteLLM (100+ Models)
    β”œβ”€β”€ Context Management
    β”œβ”€β”€ ReAct Agent Processing
    └── Response Generation
```

---

## πŸ“¦ Installation

### βœ… **Minimal Setup (Just Python + API Key)**

**Required:**
- Python 3.10+
- LLM API key (OpenAI, Anthropic, Groq, etc.)

**Optional (for advanced features):**
- Redis (persistent memory)
- Vector DB (Qdrant, ChromaDB, or MongoDB Atlas)
- Database (PostgreSQL/MySQL/SQLite)
- Opik account (for tracing/observability)

### πŸ“¦ **Installation**

```bash
# Option 1: UV (recommended - faster)
uv add omnicoreagent

# Option 2: Pip (standard)
pip install omnicoreagent
```

### ⚑ **Quick Configuration**

**Minimal setup** (get started immediately):
```bash
# Just set your API key - that's it!
echo "LLM_API_KEY=your_api_key_here" > .env
```

**Advanced setup** (optional features):
> **πŸ“– Need more options?** See the complete [Configuration Guide](#️-configuration-guide) below for all environment variables, vector database setup, memory configuration, and advanced features.

---

## βš™οΈ Configuration Guide

> **⚑ Quick Setup**: Only need `LLM_API_KEY` to get started! | **πŸ” Detailed Setup**: [Vector DB](#-vector-database--smart-memory-setup-complete-guide) | [Tracing](#-opik-tracing--observability-setup-latest-feature)

### Environment Variables

Create a `.env` file with your configuration. **Only the LLM API key is required** - everything else is optional for advanced features.

#### **πŸ”₯ REQUIRED (Start Here)**
```bash
# ===============================================
# REQUIRED: AI Model API Key (Choose one provider)
# ===============================================
LLM_API_KEY=your_openai_api_key_here
# OR for other providers:
# LLM_API_KEY=your_anthropic_api_key_here
# LLM_API_KEY=your_groq_api_key_here
# LLM_API_KEY=your_azure_openai_api_key_here
# See examples/llm_usage-config.json for all provider configs
```

#### **⚑ OPTIONAL: Advanced Features**
```bash
# ===============================================
# Embeddings (OPTIONAL) - NEW!
# ===============================================
# For generating text embeddings (vector representations)
# Choose one provider - same key works for all embedding models
EMBEDDING_API_KEY=your_embedding_api_key_here
# OR for other providers:
# EMBEDDING_API_KEY=your_cohere_api_key_here
# EMBEDDING_API_KEY=your_huggingface_api_key_here
# EMBEDDING_API_KEY=your_mistral_api_key_here
# See docs/EMBEDDING_README.md for all provider configs

# ===============================================
# Tracing & Observability (OPTIONAL) - NEW!
# ===============================================
# For advanced monitoring and performance optimization
# πŸ”— Sign up: https://www.comet.com/signup?from=llm
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name

# ===============================================
# Vector Database (OPTIONAL) - Smart Memory
# ===============================================
# ⚠️ Warning: 30-60s startup time for sentence transformer
# ⚠️ IMPORTANT: You MUST choose a provider - no local fallback
ENABLE_VECTOR_DB=true # Default: false

# Choose ONE provider (required if ENABLE_VECTOR_DB=true):

# Option 1: Qdrant Remote (RECOMMENDED)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Option 2: ChromaDB Remote
# OMNI_MEMORY_PROVIDER=chroma-remote
# CHROMA_HOST=localhost
# CHROMA_PORT=8000

# Option 3: ChromaDB Cloud
# OMNI_MEMORY_PROVIDER=chroma-cloud
# CHROMA_TENANT=your_tenant
# CHROMA_DATABASE=your_database
# CHROMA_API_KEY=your_api_key

# Option 4: MongoDB Atlas
# OMNI_MEMORY_PROVIDER=mongodb-remote
# MONGODB_URI="your_mongodb_connection_string"
# MONGODB_DB_NAME="db name"

# ===============================================
# Persistent Memory Storage (OPTIONAL)
# ===============================================
# These have sensible defaults - only set if you need custom configuration

# Redis - for memory_store_type="redis" (defaults to: redis://localhost:6379/0)
# REDIS_URL=redis://your-remote-redis:6379/0
# REDIS_URL=redis://:password@localhost:6379/0  # With password


# Database - for memory_store_type="database"
# DATABASE_URL=sqlite:///omnicoreagent_memory.db
# DATABASE_URL=postgresql://user:password@localhost:5432/omnicoreagent
# DATABASE_URL=mysql://user:password@localhost:3306/omnicoreagent

# MongoDB - for memory_store_type="mongodb" (defaults to: mongodb://localhost:27017/omnicoreagent)
# MONGODB_URI="your_mongodb_connection_string"
# MONGODB_DB_NAME="db name"
```

> **πŸ’‘ Quick Start**: Just set `LLM_API_KEY` and you're ready to go! Add other variables only when you need advanced features.

### **Server Configuration (`servers_config.json`)**

For MCP server connections and agent settings:

#### Basic OpenAI Configuration

```json
{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 0,          // 0 = unlimited (production mode), set > 0 to enable limits
    "total_tokens_limit": 0,     // 0 = unlimited (production mode), set > 0 to enable limits
    "memory_results_limit": 5,   // Number of memory results to retrieve (1-100, default: 5)
    "memory_similarity_threshold": 0.5  // Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 30000,
    "top_p": 0
  },
  "Embedding": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "dimensions": 1536,
    "encoding_format": "float"
  },
  "mcpServers": {
    "ev_assistant": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://localhost:8000/mcp"
    },
    "sse-server": {
      "transport_type": "sse",
      "url": "http://localhost:3000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    },
    "streamable_http-server": {
      "transport_type": "streamable_http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    }
  }
}
```
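
> **Note:** The `//` comments in these JSON examples are for documentation only. Strict JSON does not allow comments, so remove them from your actual `servers_config.json`.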

#### Multiple Provider Examples

**Anthropic Claude Configuration**
```json
{
  "LLM": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20241022",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```

**Groq Configuration**
```json
{
  "LLM": {
    "provider": "groq",
    "model": "llama-3.1-8b-instant",
    "temperature": 0.5,
    "max_tokens": 2000,
    "max_context_length": 8000,
    "top_p": 0.9
  }
}
```

**Azure OpenAI Configuration**
```json
{
  "LLM": {
    "provider": "azureopenai",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 2000,
    "max_context_length": 100000,
    "top_p": 0.95,
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01",
    "azure_deployment": "your-deployment-name"
  }
}
```

**Ollama Local Model Configuration**
```json
{
  "LLM": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 100000,
    "top_p": 0.7,
    "ollama_host": "http://localhost:11434"
  }
}
```

**OpenRouter Configuration**
```json
{
  "LLM": {
    "provider": "openrouter",
    "model": "anthropic/claude-3.5-sonnet",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```

### πŸ” Authentication Methods

OmniCoreAgent supports multiple authentication methods for secure server connections:

#### OAuth 2.0 Authentication
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server/mcp"
  }
}
```

#### Bearer Token Authentication
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "Authorization": "Bearer your-token-here"
    },
    "url": "http://your-server/mcp"
  }
}
```

#### Custom Headers
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "X-Custom-Header": "value",
      "Authorization": "Custom-Auth-Scheme token"
    },
    "url": "http://your-server/mcp"
  }
}
```

## πŸ”„ Dynamic Server Configuration

OmniCoreAgent supports dynamic server configuration through commands:

#### Add New Servers
```bash
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
```

The configuration file can include multiple servers with different authentication methods:

```json
{
  "new-server": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
  },
  "another-server": {
    "transport_type": "sse",
    "headers": {
      "Authorization": "Bearer token"
    },
    "url": "http://localhost:3000/sse"
  }
}
```

#### Remove Servers
```bash
# Remove a server by its name
/remove_server:server_name
```

---

## 🧠 Vector Database & Smart Memory Setup (Complete Guide)

OmniCoreAgent provides advanced memory capabilities through vector databases for intelligent, semantic search and long-term memory.

#### **⚑ Quick Start (Choose Your Provider)**
```bash
# Enable vector memory - you MUST choose a provider
ENABLE_VECTOR_DB=true

# Option 1: Qdrant (recommended)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Option 2: ChromaDB Remote
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000

# Option 3: ChromaDB Cloud
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key

# Option 4: MongoDB Atlas
OMNI_MEMORY_PROVIDER=mongodb-remote
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"

# Disable vector memory (default)
ENABLE_VECTOR_DB=false
```

#### **πŸ”§ Vector Database Providers**

**1. Qdrant Remote**
```bash
# Install and run Qdrant
docker run -p 6333:6333 qdrant/qdrant

# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
```

**2. MongoDB Atlas**
```bash
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=mongodb-remote
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"
```

**3. ChromaDB Remote**
```bash
# Install and run ChromaDB server
docker run -p 8000:8000 chromadb/chroma

# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000
```

**4. ChromaDB Cloud**
```bash
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key
```

#### **✨ What You Get**
- **Long-term Memory**: Persistent storage across sessions
- **Episodic Memory**: Context-aware conversation history
- **Semantic Search**: Find relevant information by meaning, not exact text
- **Multi-session Context**: Remember information across different conversations
- **Automatic Summarization**: Intelligent memory compression for efficiency

---

## πŸ“Š Opik Tracing & Observability Setup (Latest Feature)

**Monitor and optimize your AI agents with production-grade observability:**

#### **πŸš€ Quick Setup**

1. **Sign up for Opik** (Free & Open Source):
   - Visit: **[https://www.comet.com/signup?from=llm](https://www.comet.com/signup?from=llm)**
   - Create your account and get your API key and workspace name

2. **Add to your `.env` file** (see [Environment Variables](#environment-variables) above):
   ```bash
   OPIK_API_KEY=your_opik_api_key_here
   OPIK_WORKSPACE=your_opik_workspace_name
   ```

#### **✨ What You Get Automatically**

Once configured, OmniCoreAgent automatically tracks:

- **πŸ”₯ LLM Call Performance**: Execution time, token usage, response quality
- **πŸ› οΈ Tool Execution Traces**: Which tools were used and how long they took
- **🧠 Memory Operations**: Vector DB queries, memory retrieval performance
- **πŸ€– Agent Workflow**: Complete trace of multi-step agent reasoning
- **πŸ“Š System Bottlenecks**: Identify exactly where time is spent

#### **πŸ“ˆ Benefits**

- **Performance Optimization**: See which LLM calls or tools are slow
- **Cost Monitoring**: Track token usage and API costs
- **Debugging**: Understand agent decision-making processes
- **Production Monitoring**: Real-time observability for deployed agents
- **Zero Code Changes**: Works automatically with existing agents

#### **πŸ” Example: What You'll See**

```
Agent Execution Trace:
β”œβ”€β”€ agent_execution: 4.6s
β”‚   β”œβ”€β”€ tools_registry_retrieval: 0.02s βœ…
β”‚   β”œβ”€β”€ memory_retrieval_step: 0.08s βœ…
β”‚   β”œβ”€β”€ llm_call: 4.5s ⚠️ (bottleneck identified!)
β”‚   β”œβ”€β”€ response_parsing: 0.01s βœ…
β”‚   └── action_execution: 0.03s βœ…
```

**πŸ’‘ Pro Tip**: Opik is completely optional. If you don't set the credentials, OmniCoreAgent works normally without tracing.

---

## πŸ§‘β€πŸ’» Developer Integration

OmniCoreAgent is not just a CLI toolβ€”it's also a powerful Python library. Both systems can be used programmatically in your applications.

### Using OmniAgent in Applications

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant.",
    model_config={
        "provider": "openai", 
        "model": "gpt-4o",
        "temperature": 0.7
    },
    local_tools=tool_registry,  # Your custom tools!
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app (from an async context; see the runner sketch below)
result = await agent.run("Analyze some sample data")
```
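
Since `agent.run()` is a coroutine, call it from an async context. A minimal runner for the snippet above might look like this (assuming the result dict includes a `response` key, as in the FastAPI example below):

```python
import asyncio

async def main():
    # Run the agent and print the text of its reply
    result = await agent.run("Analyze some sample data")
    print(result["response"])

asyncio.run(main())
```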

### FastAPI Integration with OmniAgent

OmniAgent makes building APIs incredibly simple. See [`examples/web_server.py`](examples/web_server.py) for a complete FastAPI example:

```python
from fastapi import FastAPI
from omnicoreagent.omni_agent import OmniAgent

app = FastAPI()
agent = OmniAgent(...)  # Your agent setup from above

@app.post("/chat")
async def chat(message: str, session_id: str | None = None):
    result = await agent.run(message, session_id)
    return {"response": result['response'], "session_id": result['session_id']}

@app.get("/tools") 
async def get_tools():
    # Returns both MCP tools AND your custom tools automatically
    return agent.get_available_tools()
```
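
To try it out, serve the app with uvicorn and hit the endpoints with curl (assuming the code above is saved as `app.py` and port 8000 is free):

```bash
# Start the API server
uvicorn app:app --port 8000

# Send a chat message (FastAPI maps simple parameters to the query string)
curl -X POST "http://localhost:8000/chat?message=Hello"

# List available tools (MCP + custom)
curl http://localhost:8000/tools
```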

### Using MCPOmni Connect Programmatically

```python
import asyncio

from omnicoreagent.mcp_client import MCPClient

async def main():
    # Create the MCP client from a server configuration file
    client = MCPClient(config_file="servers_config.json")

    # Connect to all configured servers
    await client.connect_all()

    # Discover and call tools across connected servers
    tools = await client.list_tools()
    result = await client.call_tool("tool_name", {"arg": "value"})

asyncio.run(main())
```

**Key Benefits:**

- **One OmniAgent = MCP + Custom Tools + Memory + Events**
- **Automatic tool discovery** from all connected MCP servers
- **Built-in session management** and conversation history
- **Real-time event streaming** for monitoring
- **Easy integration** with any Python web framework

---

## 🎯 Usage Patterns

### Interactive Commands

- `/tools` - List all available tools across servers
- `/prompts` - View available prompts
- `/prompt:<name>/<args>` - Execute a prompt with arguments
- `/resources` - List available resources
- `/resource:<uri>` - Access and analyze a resource
- `/debug` - Toggle debug mode
- `/refresh` - Update server capabilities
- `/memory` - Toggle Redis memory persistence (on/off)
- `/mode:auto` - Switch to autonomous agentic mode
- `/mode:chat` - Switch back to interactive chat mode
- `/add_servers:<config.json>` - Add one or more servers from a configuration file
- `/remove_server:<server_name>` - Remove a server by its name

### Memory and Chat History

```bash
# Enable Redis memory persistence
/memory

# Check memory status
Memory persistence is now ENABLED using Redis

# Disable memory persistence
/memory

# Check memory status
Memory persistence is now DISABLED
```

### Operation Modes

```bash
# Switch to autonomous mode
/mode:auto

# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.

# Switch back to chat mode
/mode:chat

# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
```

### Mode Differences

- **Chat Mode (Default)**
  - Requires explicit approval for tool execution
  - Interactive conversation style
  - Step-by-step task execution
  - Detailed explanations of actions

- **Autonomous Mode**
  - Independent task execution
  - Self-guided decision making
  - Automatic tool selection and chaining
  - Progress updates and final results
  - Complex task decomposition
  - Error handling and recovery

- **Orchestrator Mode**
  - Advanced planning for complex multi-step tasks
  - Strategic delegation across multiple MCP servers
  - Intelligent agent coordination and communication
  - Parallel task execution when possible
  - Dynamic resource allocation
  - Sophisticated workflow management
  - Real-time progress monitoring across agents
  - Adaptive task prioritization

### Prompt Management

```bash
# List all available prompts
/prompts

# Basic prompt usage
/prompt:weather/location=tokyo

# Prompt with multiple arguments (argument names depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25

# JSON format for complex arguments
/prompt:analyze-data/{
    "dataset": "sales_2024",
    "metrics": ["revenue", "growth"],
    "filters": {
        "region": "europe",
        "period": "q1"
    }
}

# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
    "price_range": {"min": 500, "max": 1000},
    "features": ["5G", "wireless-charging"],
    "markets": ["US", "EU", "Asia"]
}
```

### Advanced Prompt Features

- **Argument Validation**: Automatic type checking and validation
- **Default Values**: Smart handling of optional arguments
- **Context Awareness**: Prompts can access previous conversation context
- **Cross-Server Execution**: Seamless execution across multiple MCP servers
- **Error Handling**: Graceful handling of invalid arguments with helpful messages
- **Dynamic Help**: Detailed usage information for each prompt

### AI-Powered Interactions

The client intelligently:

- Chains multiple tools together
- Provides context-aware responses
- Automatically selects appropriate tools
- Handles errors gracefully
- Maintains conversation context

### Model Support with LiteLLM

- **Unified Model Access**
  - Single interface for 100+ models across all major providers
  - Automatic provider detection and routing
  - Consistent API regardless of underlying provider
  - Native function calling for compatible models
  - ReAct Agent fallback for models without function calling
- **Supported Providers**
  - **OpenAI**: GPT-4, GPT-3.5, and all model variants
  - **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
  - **Google**: Gemini Pro, Gemini Flash, PaLM models
  - **Groq**: Ultra-fast inference for Llama, Mixtral, Gemma
  - **DeepSeek**: DeepSeek-V3, DeepSeek-Coder, and specialized models
  - **Azure OpenAI**: Enterprise-grade OpenAI models
  - **OpenRouter**: Access to 200+ models from various providers
  - **Ollama**: Local model execution with privacy
- **Advanced Features**
  - Automatic model capability detection
  - Dynamic tool execution based on model features
  - Intelligent fallback mechanisms
  - Provider-specific optimizations

### Token & Usage Management

OmniCoreAgent provides advanced controls and visibility over your API usage and resource limits.

#### View API Usage Stats

Use the `/api_stats` command to see your current usage:

```bash
/api_stats
```

This will display:

- **Total requests made**
- **Total tokens used**
- **Total response tokens**

#### Set Usage Limits

You can set limits to automatically stop execution when thresholds are reached:

- **Total Request Limit:** Set the maximum number of requests allowed in a session.
- **Total Token Usage Limit:** Set the maximum number of tokens that can be used.
- **Tool Call Timeout:** Set the maximum time (in seconds) a tool call can take before being terminated.
- **Max Steps:** Set the maximum number of steps the agent can take before stopping.

You can configure these in your `servers_config.json` under the `AgentConfig` section:

```json
"AgentConfig": {
    "agent_name": "OmniAgent",              // Unique agent identifier
    "tool_call_timeout": 30,                // Tool call timeout in seconds
    "max_steps": 15,                        // Max number of reasoning/tool steps before termination

    // --- Limits ---
    "request_limit": 0,                     // 0 = unlimited (production mode), set > 0 to enable limits
    "total_tokens_limit": 0,                // 0 = unlimited (production mode), set > 0 for hard cap on tokens

    // --- Memory Retrieval Config ---
    "memory_config": {
        "mode": "sliding_window",           // Options: sliding_window, episodic, vector
        "value": 100                        // Window size or parameter value depending on mode
    },
    "memory_results_limit": 5,              // Number of memory results to retrieve (1–100, default: 5)
    "memory_similarity_threshold": 0.5,     // Similarity threshold for memory filtering (0.0–1.0, default: 0.5)

    // --- Tool Retrieval Config ---
    "enable_tools_knowledge_base": false,   // Enable semantic tool retrieval (default: false)
    "tools_results_limit": 10,              // Max number of tools to retrieve (default: 10)
    "tools_similarity_threshold": 0.1,      // Similarity threshold for tool retrieval (0.0–1.0, default: 0.1)

    // --- Memory Tool Backend ---
    "memory_tool_backend": "None"           // Backend for memory tool. Options: "None" (default), "local", "s3", or "db"
}
```

- When any of these limits are reached, the agent will automatically stop running and notify you.

#### Example Commands

```bash
# Check your current API usage and limits
/api_stats

# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
```

## πŸ”§ Advanced Features

### Tool Orchestration

```
# Example of automatic tool chaining (when the relevant tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"

# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
```

### Resource Analysis

```
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"

# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
```

### πŸ› οΈ Troubleshooting Common Issues

#### "Failed to connect to server: Session terminated"

**Possible Causes & Solutions:**

1. **Wrong Transport Type**
   ```
   Problem: Your server expects 'stdio' but you configured 'streamable_http'
   Solution: Check your server's documentation for the correct transport type
   ```

2. **OAuth Configuration Mismatch**
   ```
   Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
   Solution: Remove the "auth" section entirely and use headers instead:

   "headers": {
       "Authorization": "Bearer your-token"
   }
   ```

3. **Server Not Running**
   ```
   Problem: The MCP server at the specified URL is not running
   Solution: Start your MCP server first, then connect with OmniCoreAgent
   ```

4. **Wrong URL or Port**
   ```
   Problem: URL in config doesn't match where your server is running
   Solution: Verify the server's actual address and port
   ```

#### "Started callback server on http://localhost:3000" - Is This Normal?

**Yes, this is completely normal** when:

- You have `"auth": {"method": "oauth"}` in any server configuration
- The OAuth server handles authentication tokens automatically
- You cannot and should not try to change this address

**If you don't want the OAuth server:**

- Remove `"auth": {"method": "oauth"}` from all server configurations
- Use alternative authentication methods like Bearer tokens

### πŸ“‹ Configuration Examples by Use Case

#### Local Development (stdio)

```json
{
  "mcpServers": {
    "local-tools": {
      "transport_type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-tools"]
    }
  }
}
```

#### Remote Server with Token

```json
{
  "mcpServers": {
    "remote-api": {
      "transport_type": "streamable_http",
      "url": "http://api.example.com:8080/mcp",
      "headers": {
        "Authorization": "Bearer abc123token"
      }
    }
  }
}
```

#### Remote Server with OAuth

```json
{
  "mcpServers": {
    "oauth-server": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://oauth-server.com:8080/mcp"
    }
  }
}
```

---

## πŸ§ͺ Testing

### Running Tests

```bash
# Run all tests with verbose output
pytest tests/ -v

# Run specific test file
pytest tests/test_specific_file.py -v

# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
```

### Test Structure

```
tests/
β”œβ”€β”€ unit/           # Unit tests for individual components
β”œβ”€β”€ omni_agent/     # OmniAgent system tests
β”œβ”€β”€ mcp_client/     # MCPOmni Connect system tests
└── integration/    # Integration tests for both systems
```

### Development Quick Start

1. **Installation**

   ```bash
   # Clone the repository
   git clone https://github.com/Abiorh001/omnicoreagent.git
   cd omnicoreagent

   # Create and activate virtual environment
   uv venv
   source .venv/bin/activate

   # Install dependencies
   uv sync
   ```

2. **Configuration**

   ```bash
   # Set up environment variables
   echo "LLM_API_KEY=your_api_key_here" > .env

   # Configure your servers in servers_config.json
   ```

3. **Start Systems**

   ```bash
   # Try OmniAgent
   uv run examples/omni_agent_example.py

   # Or try MCPOmni Connect
   uv run examples/mcp_client_example.py
   ```

   Or:

   ```bash
   python examples/omni_agent_example.py
   python examples/mcp_client_example.py
   ```

---

## πŸ” Troubleshooting

> **🚨 Most Common Issues**: Check [Quick Fixes](#-quick-fixes-common-issues) below first!
> 
> **πŸ“– For comprehensive setup help**: See [βš™οΈ Configuration Guide](#️-configuration-guide) | [🧠 Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide)

### 🚨 **Quick Fixes (Common Issues)**

| **Error** | **Quick Fix** |
|-----------|---------------|
| `Error: Invalid API key` | Check your `.env` file: `LLM_API_KEY=your_actual_key` |
| `ModuleNotFoundError: omnicoreagent` | Run: `uv add omnicoreagent` or `pip install omnicoreagent` |
| `Connection refused` | Ensure MCP server is running before connecting |
| `ChromaDB not available` | Install: `pip install chromadb` - [See Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide) |
| `Redis connection failed` | Install Redis or use in-memory mode (default) |
| `Tool execution failed` | Check tool permissions and arguments |

### Detailed Issues and Solutions

1. **Connection Issues**

   ```bash
   Error: Could not connect to MCP server
   ```

   - Check if the server is running
   - Verify server configuration in `servers_config.json`
   - Ensure network connectivity
   - Check server logs for errors
   - **See [Transport Types & Authentication](#-transport-types--authentication) for detailed setup**

2. **API Key Issues**

   ```bash
   Error: Invalid API key
   ```

   - Verify API key is correctly set in `.env`
   - Check if API key has required permissions
   - Ensure API key is for correct environment (production/development)
   - **See [Configuration Guide](#️-configuration-guide) for correct setup**

3. **Redis Connection**

   ```bash
   Error: Could not connect to Redis
   ```

   - Verify Redis server is running
   - Check Redis connection settings in `.env`
   - Ensure Redis password is correct (if configured); see below for a quick way to start Redis with Docker
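
   If Redis isn't installed locally, one quick option is to start it with Docker:

   ```bash
   docker run -d -p 6379:6379 redis:latest
   ```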

4. **Tool Execution Failures**
   ```bash
   Error: Tool execution failed
   ```
   - Check tool availability on connected servers
   - Verify tool permissions
   - Review tool arguments for correctness

5. **Vector Database Issues**

   ```bash
   Error: Vector database connection failed
   ```

   - Ensure chosen provider (Qdrant, ChromaDB, MongoDB) is running
   - Check connection settings in `.env`
   - Verify API keys for cloud providers
   - **See [Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide) for detailed configuration**

6. **Import Errors**

   ```bash
   ImportError: cannot import name 'OmniAgent'
   ```

   - Check package installation: `pip show omnicoreagent`
   - Verify Python version compatibility (3.10+)
   - Try reinstalling: `pip uninstall omnicoreagent && pip install omnicoreagent`

### Debug Mode

Enable debug mode for detailed logging:

```bash
# In MCPOmni Connect
/debug

# In OmniAgent
agent = OmniAgent(..., debug=True)
```

### **Getting Help**

1. **First**: Check the [Quick Fixes](#-quick-fixes-common-issues) above
2. **Examples**: Study working examples in the `examples/` directory
3. **Issues**: Search [GitHub Issues](https://github.com/Abiorh001/omnicoreagent/issues) for similar problems
4. **New Issue**: [Create a new issue](https://github.com/Abiorh001/omnicoreagent/issues/new) with detailed information

---

## 🀝 Contributing

We welcome contributions to OmniCoreAgent! Here's how you can help:

### Development Setup

```bash
# Fork and clone the repository
git clone https://github.com/Abiorh001/omnicoreagent.git
cd omnicoreagent

# Set up development environment
uv venv
source .venv/bin/activate
uv sync --dev

# Install pre-commit hooks
pre-commit install
```

### Contribution Areas

- **OmniAgent System**: Custom agents, local tools, background processing
- **MCPOmni Connect**: MCP client features, transport protocols, authentication
- **Shared Infrastructure**: Memory systems, vector databases, event handling
- **Documentation**: Examples, tutorials, API documentation
- **Testing**: Unit tests, integration tests, performance tests

### Pull Request Process

1. Create a feature branch from `main`
2. Make your changes with tests
3. Run the test suite: `pytest tests/ -v`
4. Update documentation as needed
5. Submit a pull request with a clear description

### Code Standards

- Python 3.10+ compatibility
- Type hints for all public APIs
- Comprehensive docstrings
- Unit tests for new functionality
- Follow existing code style

---

## πŸ“– Documentation

Complete documentation is available at: **[OmniCoreAgent Docs](https://abiorh001.github.io/omnicoreagent)**

### Documentation Structure

- **Getting Started**: Quick setup and first steps
- **OmniAgent Guide**: Custom agent development
- **MCPOmni Connect Guide**: MCP client usage
- **API Reference**: Complete code documentation
- **Examples**: Working code examples
- **Advanced Topics**: Vector databases, tracing, production deployment

### Build Documentation Locally

```bash
# Install documentation dependencies
pip install mkdocs mkdocs-material

# Serve documentation locally
mkdocs serve
# Open http://127.0.0.1:8000

# Build static documentation
mkdocs build
```

### Contributing to Documentation

Documentation improvements are always welcome:

- Fix typos or unclear explanations
- Add new examples or use cases
- Improve existing tutorials
- Translate to other languages

---

## Demo

![omnicoreagent-demo-MadewithClipchamp-ezgif com-optimize](https://github.com/user-attachments/assets/9c4eb3df-d0d5-464c-8815-8f7415a47fce)

---

## πŸ“„ License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## πŸ“¬ Contact & Support

- **Author**: Abiola Adeshina
- **Email**: abiolaadedayo1993@gmail.com
- **GitHub**: [https://github.com/Abiorh001/omnicoreagent](https://github.com/Abiorh001/omnicoreagent)
- **Issues**: [Report a bug or request a feature](https://github.com/Abiorh001/omnicoreagent/issues)
- **Discussions**: [Join the community](https://github.com/Abiorh001/omnicoreagent/discussions)

### Support Channels

- **GitHub Issues**: Bug reports and feature requests
- **GitHub Discussions**: General questions and community support
- **Email**: Direct contact for partnership or enterprise inquiries

---

<p align="center">
  <strong>Built with ❀️ by the OmniCoreAgent Team</strong><br>
  <em>Empowering developers to build the next generation of AI applications</em>
</p>
            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "omnicoreagent",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "agent, ai, automation, framework, git, llm, mcp",
    "author": null,
    "author_email": "Abiola Adeshina <abiolaadedayo1993@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/59/84/55083e48b25b6807dee9b0b07884a55deaa21e1999cf53ed2f75a0dd9570/omnicoreagent-0.2.9.tar.gz",
    "platform": null,
    "description": "# \ud83d\ude80 OmniCoreAgent - Complete AI Development Platform\n\n> **\u2139\ufe0f Project Renaming Notice:**  \n> This project was previously known as **`mcp_omni-connect`**.  \n> It has been renamed to **`omnicoreagent`** to reflect its evolution into a complete AI development platform\u2014combining both a world-class MCP client and a powerful AI agent builder framework.\n\n> **\u26a0\ufe0f Breaking Change:**  \n> The package name has changed from **`mcp_omni-connect`** to **`omnicoreagent`**.  \n> Please uninstall the old package and install the new one:\n>\n> ```bash\n> pip uninstall mcp_omni-connect\n> pip install omnicoreagent\n> ```\n>\n> All imports and CLI commands now use `omnicoreagent`.  \n> Update your code and scripts accordingly.\n\n[![PyPI Downloads](https://static.pepy.tech/badge/omnicoreagent)](https://pepy.tech/projects/omnicoreagent)\n[![Python Version](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)\n[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)\n[![Tests](https://img.shields.io/badge/tests-passing-brightgreen.svg)](https://github.com/Abiorh001/omnicoreagent/actions)\n[![PyPI version](https://badge.fury.io/py/omnicoreagent.svg)](https://badge.fury.io/py/omnicoreagent)\n[![Last Commit](https://img.shields.io/github/last-commit/Abiorh001/omnicoreagent)](https://github.com/Abiorh001/omnicoreagent/commits/main)\n[![Open Issues](https://img.shields.io/github/issues/Abiorh001/omnicoreagent)](https://github.com/Abiorh001/omnicoreagent/issues)\n[![Pull Requests](https://img.shields.io/github/issues-pr/Abiorh001/omnicoreagent)](https://github.com/Abiorh001/omnicoreagent/pulls)\n\n<p align=\"center\">\n  <img src=\"assets/IMG_5292.jpeg\" alt=\"OmniCoreAgent Logo\" width=\"250\"/>\n</p>\n\n**OmniCoreAgent** is the complete AI development platform that combines two powerful systems into one revolutionary ecosystem. Build production-ready AI agents with **OmniAgent**, use the advanced MCP client with **MCPOmni Connect**, or combine both for maximum power.\n\n## \ud83d\udccb Table of Contents\n\n### \ud83d\ude80 **Getting Started**\n- [\ud83d\ude80 Quick Start (2 minutes)](#-quick-start-2-minutes)\n- [\ud83c\udf1f What is OmniCoreAgent?](#-what-is-omnicoreagent)\n- [\ud83d\udca1 What Can You Build? 
(Examples)](#-what-can-you-build-see-real-examples)\n- [\ud83c\udfaf Choose Your Path](#-choose-your-path)\n- [\ud83e\udde0 Semantic Tool Knowledge Base](#-semantic-tool-knowledge-base)\n- [\ud83d\uddc2\ufe0f Memory Tool Backend](#-memory-tool-backend)\n\n### \ud83e\udd16 **OmniAgent System**\n\n- [\u2728 OmniAgent Features](#-omniagent---revolutionary-ai-agent-builder)\n- [\ud83d\udd25 Local Tools System](#-local-tools-system---create-custom-ai-tools)\n- [\ud83e\udde9 OmniAgent Workflow System](#-omniagent-workflow-system--multi-agent-orchestration)\n- [\ud83d\ude81 Background Agent System](#-background-agent-system---autonomous-task-automation)\n- [\ud83d\udee0\ufe0f Building Custom Agents](#-building-custom-agents)\n- [\ud83d\udcda OmniAgent Examples](#-omniagent-examples)\n\n### \ud83d\udd0c **MCPOmni Connect System**\n- [\u2728 MCP Client Features](#-mcpomni-connect---world-class-mcp-client)\n- [\ud83d\udea6 Transport Types & Authentication](#-transport-types--authentication)\n- [\ud83d\udda5\ufe0f CLI Commands](#\ufe0f-cli-commands)\n- [\ud83d\udcda MCP Usage Examples](#-mcp-usage-examples)\n\n### \ud83d\udcd6 **Core Information**\n- [\u2728 Platform Features](#-platform-features)\n- [\ud83c\udfd7\ufe0f Architecture](#\ufe0f-architecture)\n\n### \u2699\ufe0f **Setup & Configuration**\n- [\u2699\ufe0f Configuration Guide](#\ufe0f-configuration-guide)\n- [\ud83e\udde0 Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide)\n- [\ud83d\udcca Tracing & Observability](#-opik-tracing--observability-setup-latest-feature)\n\n### \ud83d\udee0\ufe0f **Development & Integration**\n- [\ud83e\uddd1\u200d\ud83d\udcbb Developer Integration](#-developer-integration)\n- [\ud83e\uddea Testing](#-testing)\n\n### \ud83d\udcda **Reference & Support**\n- [\ud83d\udd0d Troubleshooting](#-troubleshooting)\n- [\ud83e\udd1d Contributing](#-contributing)\n- [\ud83d\udcd6 Documentation](#-documentation)\n\n---\n\n**New to OmniCoreAgent?** Get started in 2 minutes:\n\n### Step 1: Install\n```bash\n# Install with uv (recommended)\nuv add omnicoreagent\n\n# Or with pip\npip install omnicoreagent\n```\n\n### Step 2: Set API Key\n```bash\n# Create .env file with your LLM API key\necho \"LLM_API_KEY=your_openai_api_key_here\" > .env\n```\n\n### Step 3: Run Examples\n```bash\n# Try OmniAgent with custom tools\npython examples/omni_agent_example.py\n\n# Try MCPOmni Connect (MCP client)\npython examples/run_mcp.py\n\n# Try the integrated platform\npython examples/run_omni_agent.py\n```\n\n### What Can You Build?\n- **Custom AI Agents**: Register your Python functions as AI tools with OmniAgent\n- **MCP Integration**: Connect to any Model Context Protocol server with MCPOmni Connect\n- **Smart Memory**: Vector databases for long-term AI memory\n- **Background Agents**: Self-flying autonomous task execution\n- **Production Monitoring**: Opik tracing for performance optimization\n\n\u27a1\ufe0f **Next**: Check out [Examples](#-what-can-you-build-see-real-examples) or jump to [Configuration Guide](#\ufe0f-configuration-guide)\n\n---\n\n## \ud83c\udf1f **What is OmniCoreAgent?**\n\nOmniCoreAgent is a comprehensive AI development platform consisting of two integrated systems:\n\n### 1. 
\ud83e\udd16 **OmniAgent** *(Revolutionary AI Agent Builder)*\nCreate intelligent, autonomous agents with custom capabilities:\n- **\ud83d\udee0\ufe0f Local Tools System** - Register your Python functions as AI tools\n- **\ud83d\ude81 Self-Flying Background Agents** - Autonomous task execution\n- **\ud83e\udde0 Multi-Tier Memory** - Vector databases, Redis, PostgreSQL, MySQL, SQLite\n- **\ud83d\udce1 Real-Time Events** - Live monitoring and streaming\n- **\ud83d\udd27 MCP + Local Tool Orchestration** - Seamlessly combine both tool types\n\n### 2. \ud83d\udd0c **MCPOmni Connect** *(World-Class MCP Client)*\nAdvanced command-line interface for connecting to any Model Context Protocol server with:\n- **\ud83c\udf10 Multi-Protocol Support** - stdio, SSE, HTTP, Docker, NPX transports\n- **\ud83d\udd10 Authentication** - OAuth 2.0, Bearer tokens, custom headers\n- **\ud83e\udde0 Advanced Memory** - Redis, Database, Vector storage with intelligent retrieval\n- **\ud83d\udce1 Event Streaming** - Real-time monitoring and debugging\n- **\ud83e\udd16 Agentic Modes** - ReAct, Orchestrator, and Interactive chat modes\n\n**\ud83c\udfaf Perfect for:** Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.\n\n---\n\n## \ud83d\udca1 **What Can You Build? (See Real Examples)**\n\n### \ud83e\udd16 **OmniAgent System** *(Build Custom AI Agents)*\n```bash\n# Complete OmniAgent demo - All features showcase\npython examples/omni_agent_example.py\n\n# Advanced OmniAgent patterns - Study 12+ tool examples\npython examples/run_omni_agent.py\n\n# Self-flying background agents - Autonomous task execution with Background Agent Manager\npython examples/background_agent_example.py\n\n\n# Web server with UI - Interactive interface for OmniAgent\npython examples/web_server.py\n# Open http://localhost:8000 for web interface\n\n# FastAPI implementation - Clean API endpoints\npython examples/fast_api_impl.py\n\n# Enhanced web server - Production-ready with advanced features\npython examples/enhanced_web_server.py\n```\n\n### \ud83d\udd0c **MCPOmni Connect System** *(Connect to MCP Servers)*\n```bash\n# Basic MCP client usage\npython examples/run_mcp.py\n\n```\n\n### \ud83d\udd27 **LLM Provider Configuration** *(Multiple Providers)*\nAll LLM provider examples consolidated in:\n```bash\n# See examples/llm_usage-config.json for:\n# - Anthropic Claude models\n# - Groq ultra-fast inference  \n# - Azure OpenAI enterprise\n# - Ollama local models\n# - OpenRouter 200+ models\n# - And more providers...\n```\n\n---\n\n## \ud83c\udfaf **Choose Your Path**\n\n### When to Use What?\n\n| **Use Case** | **Choose** | **Best For** |\n|-------------|------------|--------------|\n| Build custom AI apps | **OmniAgent** | Web apps, automation, custom workflows |\n| Connect to MCP servers | **MCPOmni Connect** | Daily workflow, server management, debugging |\n| Learn & experiment | **Examples** | Understanding patterns, proof of concepts |\n| Production deployment | **Both** | Full-featured AI applications |\n\n### **Path 1: \ud83e\udd16 Build Custom AI Agents (OmniAgent)**\nPerfect for: Custom applications, automation, web apps\n```bash\n# Study the examples to learn patterns:\npython examples/basic.py                    # Simple introduction\npython examples/omni_agent_example.py       # Complete OmniAgent demo\npython examples/background_agent_example.py # Self-flying agents\npython examples/web_server.py              # Web interface\npython examples/fast_api_impl.py           # FastAPI 
integration\npython examples/enhanced_web_server.py    # Production-ready web server\n\n# Then build your own using the patterns!\n```\n\n### **Path 2: \ud83d\udd0c Advanced MCP Client (MCPOmni Connect)**\nPerfect for: Daily workflow, server management, debugging\n```bash\n# Basic MCP client\npython examples/run_mcp.py\n\n\n# Features: Connect to MCP servers, agentic modes, advanced memory\n```\n\n### **Path 3: \ud83e\uddea Study Tool Patterns (Learning)**\nPerfect for: Learning, understanding patterns, experimentation\n```bash\n# Comprehensive testing interface - Study 12+ EXAMPLE tools\npython examples/run_omni_agent.py \n\n# Study this file to see tool registration patterns and CLI features\n# Contains many examples of how to create custom tools\n```\n\n**\ud83d\udca1 Pro Tip:** Most developers use **both paths** - MCPOmni Connect for daily workflow and OmniAgent for building custom solutions!\n\n---\n\n## \ud83e\udde0 **Semantic Tool Knowledge Base**\n\n### Why You Need It\n\nAs your AI agents grow and connect to more MCP servers, finding the right tool quickly becomes challenging. Relying on static lists or manual selection is slow, inflexible, and can overload your agent\u2019s context window\u2014making it harder for the agent to choose the best tool for each task.\n\nThe **Semantic Tool Knowledge Base** solves this by automatically embedding all available tools into a vector database. This enables your agent to use semantic search: it can instantly and intelligently retrieve the most relevant tools based on the meaning of your query, not just keywords. As your tool ecosystem expands, the agent always finds the best match\u2014no manual updates or registry management\n\n### Usefulness\n\n- **Scalable Tool Discovery:** Connect unlimited MCP servers and tools; the agent finds what it needs, when it needs it.\n- **Context-Aware Retrieval:** The agent uses semantic similarity to select tools that best match the user\u2019s intent, not just keywords.\n- **Unified Access:** All tools are accessible via a single `tools_retriever` interface, simplifying agent logic.\n- **Fallback Reliability:** If semantic search fails, the agent falls back to fast keyword (BM25) search for robust results.\n- **No Manual Registry:** Tools are automatically indexed and updated\u2014no need to maintain a static list.\n\n---\n\n### How to Enable\n\nAdd these options to your agent config:\n\n```json\n\"agent_config\": {\n    \"enable_tools_knowledge_base\": true,      // Enable semantic tool KB, default: false\n    \"tools_results_limit\": 10,                // Max tools to retrieve per query\n    \"tools_similarity_threshold\": 0.1,        // Similarity threshold for semantic search\n    ...\n}\n```\n\nWhen enabled, all MCP server tools are embedded into your chosen vector DB (Qdrant, ChromaDB, MongoDB, etc.) and standard DB. 
The agent uses `tools_retriever` to fetch tools at runtime.\n\n---\n\n### Example Usage\n\n```python\nagent = OmniAgent(\n    ...,\n    agent_config={\n        \"enable_tools_knowledge_base\": True,\n        \"tools_results_limit\": 10,\n        \"tools_similarity_threshold\": 0.1,\n        # other config...\n    },\n    ...\n)\n```\n\n---\n\n### Benefits Recap\n\n- **Instant access to thousands of tools**\n- **Context-aware, semantic selection**\n- **No manual registry management**\n- **Reliable fallback search**\n- **Scales with your infrastructure**\n\n---\n\n## \ud83d\uddc2\ufe0f **Memory Tool Backend**\n\nIntroduces a persistent \"memory tool\" backend so agents can store a writable working memory layer on disk (under /memories). This is designed for multi-step or resumable workflows where the agent needs durable state outside the transient LLM context.\n\nWhy this matters\n\n- Agents often need an external writable workspace for long-running tasks, progress tracking, or resumable operations.\n- Storing working memory externally prevents constantly bloating the prompt and preserves important intermediate state across restarts or multiple runs.\n- This is a lightweight, agent-facing working layer \u2014 not a replacement for structured DBs or vector semantic memory.\n\nHow to enable\n\n- Enable via agent config:\n\n```python\nagent_config = {\n    \"memory_tool_backend\": \"local\",  # enable persistent memory (writes to ./memories)\n}\n```\n\n- Disable by omitting the key or setting it to None:\n\n```python\nagent_config = {\n    \"memory_tool_backend\": None,  # disable persistent memory\n}\n```\n\nBehavior & capabilities\n\n- When enabled the agent gets access to memory_* tools for managing persistent files under /memories:\n  - memory_view, memory_create_update, memory_insert\n  - memory_str_replace, memory_delete, memory_rename, memory_clear_all\n- Operations use a structured XML observation format so the LLM can perform reliable memory actions and parse results programmatically.\n- System prompt extensions include privacy, concurrency, and size constraints to help enforce safe usage.\n\nFiles & storage\n\n- Local backend stores files under the repository (./memories) by default.\n- Current release: local backend only. Future releases will add S3, database, and other filesystem backends.\n\nExample usage (agent-facing)\n\n```python\n# enable persistent memory in agent config\nagent = OmniAgent(\n    ...,\n    agent_config={\n        \"memory_tool_backend\": \"local\",\n        # other agent config...\n    },\n    ...\n)\n\n# Agent can now call memory_* tools to create and update working memory\n# (these are invoked by the agent's tool-calling logic; see examples/ for patterns)\n```\n\nResult / tradeoffs\n\n- Agents can maintain durable working memory outside the token context enabling long-running workflows, planning persistence, and resumable tasks.\n- This memory layer is intended as a writable working area for active tasks (progress, in-progress artifacts, state), not a substitute for structured transactional storage or semantic vector memory.\n- Privacy, concurrency, and size constraints are enforced via system prompt and runtime checks; review policies for production deployment.\n\nRoadmap\n\n- Add S3, DB, and other filesystem backends.\n- Add optional encryption, access controls, and configurable retention policies.\n\nPractical note\n\n- Use the memory tool backend when your workflows require persistent, writable agent state between steps or runs. 
Continue using vector DBs or SQL/NoSQL stores for semantic or structured storage needs.\n\n---\n\n**Note:** Choose your vector DB provider via environment variables. See [Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide)\n---\n\n# \ud83e\udd16 OmniAgent - Revolutionary AI Agent Builder\n\n**\ud83c\udf1f Introducing OmniAgent** - A revolutionary AI agent system that brings plug-and-play intelligence to your applications!\n\n## \u2705 OmniAgent Revolutionary Capabilities:\n- **\ud83e\udde0 Multi-tier memory management** with vector search and semantic retrieval\n- **\ud83d\udee0\ufe0f XML-based reasoning** with strict tool formatting for reliable execution  \n- **\ud83d\udd27 Advanced tool orchestration** - Seamlessly combine MCP server tools + local tools\n- **\ud83d\ude81 Self-flying background agents** with autonomous task execution\n- **\ud83d\udce1 Real-time event streaming** for monitoring and debugging\n- **\ud83c\udfd7\ufe0f Production-ready infrastructure** with error handling and retry logic\n- **\u26a1 Plug-and-play intelligence** - No complex setup required!\n\n## \ud83d\udd25 **LOCAL TOOLS SYSTEM** - Create Custom AI Tools!\n\nOne of OmniAgent's most powerful features is the ability to **register your own Python functions as AI tools**. The agent can then intelligently use these tools to complete tasks.\n\n### \ud83c\udfaf Quick Tool Registration Example\n\n```python\nfrom omnicoreagent.omni_agent import OmniAgent\nfrom omnicoreagent.core.tools.local_tools_registry import ToolRegistry\n\n# Create tool registry\ntool_registry = ToolRegistry()\n\n# Register your custom tools with simple decorator\n@tool_registry.register_tool(\"calculate_area\")\ndef calculate_area(length: float, width: float) -> str:\n    \"\"\"Calculate the area of a rectangle.\"\"\"\n    area = length * width\n    return f\"Area of rectangle ({length} x {width}): {area} square units\"\n\n@tool_registry.register_tool(\"analyze_text\")\ndef analyze_text(text: str) -> str:\n    \"\"\"Analyze text and return word count and character count.\"\"\"\n    words = len(text.split())\n    chars = len(text)\n    return f\"Analysis: {words} words, {chars} characters\"\n\n@tool_registry.register_tool(\"system_status\")\ndef get_system_status() -> str:\n    \"\"\"Get current system status information.\"\"\"\n    import platform\n    import time\n    return f\"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}\"\n\n# Use tools with OmniAgent\nagent = OmniAgent(\n    name=\"my_agent\",\n    local_tools=tool_registry,  # Your custom tools!\n    # ... other config\n)\n\n# Now the AI can use your tools!\nresult = await agent.run(\"Calculate the area of a 10x5 rectangle and tell me the current system time\")\n```\n\n### \ud83d\udcd6 Tool Registration Patterns (Create Your Own!)\n\n**No built-in tools** - You create exactly what you need! 
**Mathematical Tools Examples:**
```python
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    area = length * width
    return f"Area: {area} square units"

@tool_registry.register_tool("analyze_numbers")
def analyze_numbers(numbers: str) -> str:
    num_list = [float(x.strip()) for x in numbers.split(",")]
    return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"
```

**System Tools Examples:**
```python
@tool_registry.register_tool("system_info")
def get_system_info() -> str:
    import platform
    return f"OS: {platform.system()}, Python: {platform.python_version()}"
```

**File Tools Examples:**
```python
@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
    import os
    files = os.listdir(path)
    return f"Found {len(files)} items in {path}"
```

### 🎨 Tool Registration Patterns

**1. Simple Function Tools:**
```python
@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
    """Get weather information for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"
```

**2. Complex Analysis Tools:**
```python
@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
        if analysis_type == "summary":
            return f"Data contains {len(data_obj)} items"
        elif analysis_type == "detailed":
            # Complex analysis logic
            return "Detailed analysis results..."
        return f"Unknown analysis type: {analysis_type}"
    except json.JSONDecodeError:
        return "Invalid data format"
```

**3. File Processing Tools:**
```python
@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, 'r') as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, 'r') as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        return f"Unknown operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"
```
## 🛠️ Building Custom Agents

### Basic Agent Setup

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
    model_config={
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7
    },
    agent_config={
        "tool_call_timeout": 30,
        "max_steps": 10,
        "request_limit": 0,          # 0 = unlimited (production mode), set > 0 to enable limits
        "total_tokens_limit": 0,     # 0 = unlimited (production mode), set > 0 to enable limits
        "memory_results_limit": 5,   # Number of memory results to retrieve (1-100, default: 5)
        "memory_similarity_threshold": 0.5  # Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
    },
    # Your custom local tools
    local_tools=tool_registry,
    # MCP servers - automatically connected!
    mcp_tools=[
        {
            "name": "filesystem",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
        },
        {
            "name": "github",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer your-token"}
        }
    ],
    embedding_config={
        "provider": "openai",
        "model": "text-embedding-3-small",
        "dimensions": 1536,
        "encoding_format": "float",
    },
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")
```

## 🧩 **OmniAgent Workflow System** – Multi-Agent Orchestration

OmniCoreAgent now includes a powerful **workflow system** for orchestrating multiple agents in your application.
You can choose from three workflow agents, each designed for a different orchestration pattern:

- **SequentialAgent** – Chain agents step-by-step, passing output from one to the next.
- **ParallelAgent** – Run multiple agents concurrently, each with its own task.
- **RouterAgent** – Use an intelligent router agent to select the best sub-agent for a given task.

All three workflow agents are available in the `omni_agent/workflow/` directory, and usage examples are provided in the `examples/` folder.

---

### 🤖 **SequentialAgent** – Step-by-Step Agent Chaining

**Purpose:**
Run a list of agents in sequence, passing the output of each agent as the input to the next.
This is ideal for multi-stage processing pipelines, where each agent performs a specific transformation or analysis.

**How it works:**

- You provide a list of `OmniAgent` instances.
- The first agent receives the initial query (or uses its system instruction if no query is provided).
- Each agent's output is passed as the input to the next agent.
- The same session ID is used for all agents, ensuring shared context and memory.

**Example Usage:**

```python
from omnicoreagent.omni_agent.workflow.sequential_agent import SequentialAgent

# Create your agents (see examples/ for full setup)
agents = [agent1, agent2, agent3]

seq_agent = SequentialAgent(sub_agents=agents)
await seq_agent.initialize()
result = await seq_agent.run(initial_task="Analyze this data and summarize results")
print(result)
```

**Typical Use Cases:**

- Data preprocessing → analysis → reporting
- Multi-step document processing
- Chained reasoning tasks

---

### ⚡ **ParallelAgent** – Concurrent Agent Execution

**Purpose:**
Run multiple agents at the same time, each with its own task or system instruction.
This is perfect for scenarios where you want to gather results from several agents independently and quickly.

**How it works:**

- You provide a list of `OmniAgent` instances.
- Optionally, you can specify a dictionary of tasks for each agent (`agent_name: task`). If no task is provided, the agent uses its system instruction.
- All agents are run concurrently, sharing the same session ID for context.
- Results are returned as a dictionary mapping agent names to their outputs.

**Example Usage:**

```python
from omnicoreagent.omni_agent.workflow.parallel_agent import ParallelAgent

agents = [agent1, agent2, agent3]
tasks = {
    "agent1": "Summarize this article",
    "agent2": "Extract keywords",
    "agent3": None  # Uses system instruction
}

par_agent = ParallelAgent(sub_agents=agents)
await par_agent.initialize()
results = await par_agent.run(agent_tasks=tasks)
print(results)
```

**Typical Use Cases:**

- Running multiple analyses on the same data
- Gathering different perspectives or answers in parallel
- Batch processing with independent agents

---

### 🧠 **RouterAgent** – Intelligent Task Routing

**Purpose:**
Automatically select the most suitable agent for a given task using LLM-powered reasoning and XML-based decision making.
The RouterAgent analyzes the user's query and agent capabilities, then routes the task to the best-fit agent.

**How it works:**

- You provide a list of `OmniAgent` instances and configuration for the router.
- The RouterAgent builds a registry of agent capabilities (using system instructions and available tools).
- When a task is received, the RouterAgent uses its internal LLM to select the best agent and forwards the task.
- The selected agent executes the task and returns the result.

**Example Usage:**

```python
from omnicoreagent.omni_agent.workflow.router_agent import RouterAgent

agents = [agent1, agent2, agent3]
router = RouterAgent(
    sub_agents=agents,
    model_config={...},
    agent_config={...},
    memory_router=...,
    event_router=...,
    debug=True
)
await router.initialize()
result = await router.run(task="Find and summarize recent news about AI")
print(result)
```

**Typical Use Cases:**

- Dynamic agent selection based on user query
- Multi-domain assistants (e.g., code, data, research)
- Intelligent orchestration in complex workflows

---

### 📚 **Workflow Agent Examples**

See the `examples/` directory for ready-to-run demos of each workflow agent:

- `examples/sequential_agent.py`
- `examples/parallel_agent.py`
- `examples/router_agent.py`

Each example shows how to set up agents, configure workflows, and process results.

---

### 🛠️ **How to Choose?**

| Workflow Agent   | Best For                                      |
|------------------|-----------------------------------------------|
| SequentialAgent  | Multi-stage pipelines, step-by-step tasks     |
| ParallelAgent    | Fast batch processing, independent analyses   |
| RouterAgent      | Smart routing, dynamic agent selection        |

You can combine these workflow agents for advanced orchestration patterns in your AI applications, as sketched below.
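For example, here is a minimal sketch of a fan-out/fan-in combination: a `ParallelAgent` stage gathers independent results, which are then merged and refined by a `SequentialAgent` pipeline. The agent instances (`research_agent`, `writer_agent`, etc.) and task strings are placeholders; only the classes and their `initialize()`/`run()` methods come from the sections above:

```python
from omnicoreagent.omni_agent.workflow.parallel_agent import ParallelAgent
from omnicoreagent.omni_agent.workflow.sequential_agent import SequentialAgent

# Stage 1: gather independent analyses concurrently (fan-out)
par_agent = ParallelAgent(sub_agents=[research_agent, stats_agent])
await par_agent.initialize()
findings = await par_agent.run(agent_tasks={
    "research_agent": "Collect recent articles about AI agents",
    "stats_agent": "Summarize usage statistics from our logs",
})

# Stage 2: merge the parallel outputs and refine them step-by-step (fan-in)
merged = "\n\n".join(f"{name}:\n{output}" for name, output in findings.items())
seq_agent = SequentialAgent(sub_agents=[writer_agent, reviewer_agent])
await seq_agent.initialize()
report = await seq_agent.run(initial_task=f"Write a report from these findings:\n{merged}")
print(report)
```

Because each workflow agent follows the same initialize-then-run pattern, the composition stays plain Python; no extra orchestration machinery is required.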
---

**Ready to build?**
Explore the examples, study the API, and start orchestrating powerful multi-agent workflows with OmniCoreAgent!

## 🚁 Background Agent System - Autonomous Task Automation

The Background Agent System is one of OmniAgent's most powerful features, providing fully autonomous task execution with intelligent lifecycle management. Background agents run independently, executing scheduled tasks without human intervention.

### ✨ Background Agent Features

- **🔄 Autonomous Execution** - Agents run independently in the background
- **⏰ Flexible Scheduling** - Time-based, interval-based, and cron-style scheduling
- **🧠 Full OmniAgent Capabilities** - Access to all local tools and MCP servers
- **📊 Lifecycle Management** - Create, update, pause, resume, and delete agents
- **🔧 Background Agent Manager** - Central control system for all background agents
- **📡 Real-Time Monitoring** - Track agent status and execution results
- **🛠️ Task Management** - Update tasks, schedules, and configurations dynamically

### 🔧 Background Agent Manager

The Background Agent Manager handles the complete lifecycle of background agents:

#### **Core Capabilities:**
- **Create New Agents** - Deploy autonomous agents with custom tasks
- **Update Agent Tasks** - Modify agent instructions and capabilities dynamically
- **Schedule Management** - Update timing, intervals, and execution schedules
- **Agent Control** - Start, stop, pause, and resume agents
- **Health Monitoring** - Track agent status and performance
- **Resource Management** - Manage agent memory and computational resources

#### **Scheduler Support:**
- **APScheduler** *(Current)* - Advanced Python task scheduling
  - Cron-style scheduling
  - Interval-based execution
  - Date-based scheduling
  - Timezone support
- **Future Roadmap**:
  - **RabbitMQ** - Message queue-based task distribution
  - **Redis Pub/Sub** - Event-driven agent communication
  - **Celery** - Distributed task execution
  - **Kubernetes Jobs** - Container-based agent deployment

### 🎯 Background Agent Usage Examples

#### **1. Basic Background Agent Creation**

```python
from omnicoreagent import (
    OmniAgent,
    MemoryRouter,
    EventRouter,
    BackgroundAgentService,
    ToolRegistry,
    logger,
)

# Initialize the background agent service
memory_router = MemoryRouter(memory_store_type="redis")
event_router = EventRouter(event_store_type="redis_stream")
bg_service = BackgroundAgentService(memory_router, event_router)

# Start the background agent manager
bg_service.start_manager()

# Create tool registry for the background agent
tool_registry = ToolRegistry()

@tool_registry.register_tool("monitor_system")
def monitor_system() -> str:
    """Monitor system resources and status."""
    import psutil
    cpu = psutil.cpu_percent()
    memory = psutil.virtual_memory().percent
    return f"System Status - CPU: {cpu}%, Memory: {memory}%"

# Configure the background agent
agent_config = {
    "agent_id": "system_monitor",
    "system_instruction": "You are a system monitoring agent. Check system resources and send alerts when thresholds are exceeded.",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3
    },
    "agent_config": {
        "max_steps": 10,
        "tool_call_timeout": 60
    },
    "interval": 300,  # 5 minutes in seconds
    "task_config": {
        "query": "Monitor system resources and send alerts if CPU > 80% or Memory > 90%",
        "schedule": "every 5 minutes",
        "interval": 300,
        "max_retries": 2,
        "retry_delay": 30
    },
    "local_tools": tool_registry
}

# Create and deploy background agent
result = await bg_service.create(agent_config)
print(f"Background agent '{agent_config['agent_id']}' created successfully!")
print(f"Details: {result}")
```

#### **2. Web Application Integration**

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

# Request models for API
class BackgroundAgentRequest(BaseModel):
    agent_id: str
    query: str = None
    schedule: str = None

class TaskUpdateRequest(BaseModel):
    agent_id: str
    query: str

# FastAPI integration
app = FastAPI()

# Initialize background service
@app.on_event("startup")
async def startup():
    memory_router = MemoryRouter(memory_store_type="redis")
    event_router = EventRouter(event_store_type="redis_stream")
    app.state.bg_service = BackgroundAgentService(memory_router, event_router)
    app.state.bg_service.start_manager()

@app.on_event("shutdown")
async def shutdown():
    app.state.bg_service.shutdown_manager()

# API endpoints (same as shown in the REST API section below)
@app.post("/api/background/create")
async def create_background_agent(payload: BackgroundAgentRequest):
    # Parse schedule to interval seconds
    def parse_schedule(schedule_str: str) -> int:
        import re
        if not schedule_str:
            return 3600  # Default 1 hour

        # Try parsing as raw number
        try:
            return max(1, int(schedule_str))
        except ValueError:
            pass

        # Parse text patterns
        text = schedule_str.lower().strip()

        # Match patterns like "5 minutes", "every 30 seconds", etc.
        patterns = [
            (r"(\d+)(?:\s*)(second|sec|s)s?", 1),
            (r"(\d+)(?:\s*)(minute|min|m)s?", 60),
            (r"(\d+)(?:\s*)(hour|hr|h)s?", 3600),
            (r"every\s+(\d+)\s+(second|sec|s)s?", 1),
            (r"every\s+(\d+)\s+(minute|min|m)s?", 60),
            (r"every\s+(\d+)\s+(hour|hr|h)s?", 3600)
        ]

        for pattern, multiplier in patterns:
            match = re.search(pattern, text)
            if match:
                value = int(match.group(1))
                return max(1, value * multiplier)

        return 3600  # Default fallback

    interval_seconds = parse_schedule(payload.schedule)

    agent_config = {
        "agent_id": payload.agent_id,
        "system_instruction": f"You are a background agent that performs: {payload.query}",
        "model_config": {
            "provider": "openai",
            "model": "gpt-4o-mini",
            "temperature": 0.3
        },
        "agent_config": {
            "max_steps": 10,
            "tool_call_timeout": 60
        },
        "interval": interval_seconds,
        "task_config": {
            "query": payload.query,
            "schedule": payload.schedule or "immediate",
            "interval": interval_seconds,
            "max_retries": 2,
            "retry_delay": 30
        },
        "local_tools": build_tool_registry()  # Your custom tools
    }

    details = await app.state.bg_service.create(agent_config)
    app.state.bg_service.start_manager()

    return {
        "status": "success",
        "agent_id": payload.agent_id,
        "message": "Background agent created",
        "details": details
    }
```
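A status route can expose the same lifecycle information; this minimal sketch reuses the `bg_service.list()` and `bg_service.get_agent_status()` methods documented in the Manager API section below:

```python
# Minimal status endpoint sketch (assumes bg_service.list() / get_agent_status()
# as documented in the "Background Agent Manager API" section below).
@app.get("/api/background/status/{agent_id}")
async def background_agent_status(agent_id: str):
    if agent_id not in app.state.bg_service.list():
        raise HTTPException(status_code=404, detail=f"Unknown agent: {agent_id}")
    return {
        "status": "success",
        "agent_id": agent_id,
        "details": app.state.bg_service.get_agent_status(agent_id),
    }
```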
#### **3. Agent Lifecycle Management**

```python
# List all background agents
agent_ids = bg_service.list()
print(f"Active agents: {len(agent_ids)}")
print(f"Agent IDs: {agent_ids}")

# Get detailed agent information
for agent_id in agent_ids:
    status = bg_service.get_agent_status(agent_id)
    print(f"""
Agent: {agent_id}
├── Running: {status.get('is_running', False)}
├── Scheduled: {status.get('scheduled', False)}
├── Query: {status.get('task_config', {}).get('query', 'N/A')}
├── Schedule: {status.get('task_config', {}).get('schedule', 'N/A')}
├── Interval: {status.get('task_config', {}).get('interval', 'N/A')}s
└── Session ID: {bg_service.manager.get_agent_session_id(agent_id)}
""")

# Update agent task
success = bg_service.update_task_config(
    agent_id="system_monitor",
    task_config={
        "query": "Monitor system resources and also check disk space. Alert if disk usage > 85%",
        "max_retries": 3,
        "retry_delay": 60
    }
)
print(f"Task update success: {success}")

# Agent control operations
bg_service.pause_agent("system_monitor")   # Pause scheduling
print("Agent paused")

bg_service.resume_agent("system_monitor")  # Resume scheduling
print("Agent resumed")

bg_service.stop_agent("system_monitor")    # Stop execution
print("Agent stopped")

bg_service.start_agent("system_monitor")   # Start execution
print("Agent started")

# Remove agent task permanently
success = bg_service.remove_task("system_monitor")
print(f"Task removal success: {success}")

# Get manager status
manager_status = bg_service.get_manager_status()
print(f"Manager status: {manager_status}")

# Connect MCP servers for agent (if configured)
await bg_service.connect_mcp("system_monitor")
print("MCP servers connected")

# Shutdown entire manager
bg_service.shutdown_manager()
print("Background agent manager shutdown")
```
#### **4. Background Agent with MCP Integration**

```python
# Background agent with both local tools and MCP servers
agent_config = {
    "agent_id": "web_scraper",
    "system_instruction": "You are a web scraping agent. Scrape news sites, analyze sentiment, and store results in the database.",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3
    },
    "interval": 3600,  # hourly
    "task_config": {
        "query": "Scrape news websites hourly, analyze sentiment, and store results",
        "schedule": "every 1 hour",
        "interval": 3600,
        "max_retries": 2,
        "retry_delay": 30
    },
    "local_tools": tool_registry,  # Your custom tools
    "mcp_tools": [  # MCP server connections
        {
            "name": "web_scraper",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@mcp/server-web-scraper"]
        },
        {
            "name": "database",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer db-token"}
        }
    ]
}

web_scraper_agent = await bg_service.create(agent_config)
await bg_service.connect_mcp("web_scraper")  # connect the configured MCP servers
```

### 🛠️ Background Agent Manager API

The BackgroundAgentService provides a comprehensive API for managing background agents:

#### **Agent Creation & Configuration**
```python
# Create new background agent
result = await bg_service.create(agent_config: dict)

# Agent configuration structure
agent_config = {
    "agent_id": "unique_agent_id",
    "system_instruction": "Agent role and behavior description",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3
    },
    "agent_config": {
        "max_steps": 10,
        "tool_call_timeout": 60
    },
    "interval": 300,  # Execution interval in seconds
    "task_config": {
        "query": "Main task description",
        "schedule": "human-readable schedule (e.g., 'every 5 minutes')",
        "interval": 300,
        "max_retries": 2,
        "retry_delay": 30
    },
    "local_tools": tool_registry,  # Optional custom tools
    "mcp_tools": mcp_server_configs  # Optional MCP server connections
}
```

#### **Agent Lifecycle Management**
```python
# Start the background agent manager
bg_service.start_manager()

# Agent control operations
bg_service.start_agent(agent_id: str)      # Start agent execution
bg_service.stop_agent(agent_id: str)       # Stop agent execution
bg_service.pause_agent(agent_id: str)      # Pause agent scheduling
bg_service.resume_agent(agent_id: str)     # Resume agent scheduling

# Shutdown manager (stops all agents)
bg_service.shutdown_manager()
```

#### **Agent Monitoring & Status**
```python
# List all agents
agent_ids = bg_service.list()  # Returns list of agent IDs

# Get specific agent status
status = bg_service.get_agent_status(agent_id: str)
# Returns: {
#     "is_running": bool,
#     "scheduled": bool,
#     "task_config": dict,
#     "session_id": str,
#     # ... other status info
# }

# Get manager status
manager_status = bg_service.get_manager_status()
```

#### **Task Management**
```python
# Update agent task configuration
success = bg_service.update_task_config(
    agent_id: str,
    task_config: dict
)

# Remove agent task completely
success = bg_service.remove_task(agent_id: str)
```

#### **MCP Server Management**
```python
# Connect MCP servers for specific agent
await bg_service.connect_mcp(agent_id: str)
```

### 🌐 REST API Endpoints

The Background Agent system can be integrated into web applications with these REST endpoints:

#### **Agent Management Endpoints**
```bash
# Create new background agent
POST /api/background/create
{
    "agent_id": "system_monitor",
    "query": "Monitor system resources and alert on high usage",
    "schedule": "every 5 minutes"
}

# List all background agents
GET /api/background/list
# Returns: {
#   "status": "success",
#   "agents": [
#     {
#       "agent_id": "system_monitor",
#       "query": "Monitor system resources...",
#       "is_running": true,
#       "scheduled": true,
#       "schedule": "every 5 minutes",
#       "interval": 300,
#       "session_id": "session_123"
#     }
#   ]
# }
```

#### **Agent Control Endpoints**
```bash
# Start agent
POST /api/background/start
{"agent_id": "system_monitor"}

# Stop agent
POST /api/background/stop
{"agent_id": "system_monitor"}

# Pause agent
POST /api/background/pause
{"agent_id": "system_monitor"}

# Resume agent
POST /api/background/resume
{"agent_id": "system_monitor"}
```

#### **Task Management Endpoints**
```bash
# Update agent task
POST /api/task/update
{
    "agent_id": "system_monitor",
    "query": "Updated task description"
}

# Remove agent task
DELETE /api/task/remove/{agent_id}
```

#### **Status & Monitoring Endpoints**
```bash
# Get manager status
GET /api/background/status

# Get specific agent status
GET /api/background/status/{agent_id}

# Connect MCP servers for agent
POST /api/background/mcp/connect
{"agent_id": "system_monitor"}
```

#### **Event Streaming Endpoints**
```bash
# Get events for session
GET /api/events?session_id=session_123

# Stream real-time events
GET /api/events/stream/{session_id}
# Returns Server-Sent Events stream
```
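As a quick smoke test, these endpoints can be exercised with any HTTP client. The sketch below assumes a FastAPI app like the Web Application Integration example above, served on `http://localhost:8000`:

```python
# Minimal client sketch (assumes the FastAPI app above runs on localhost:8000).
import requests

BASE = "http://localhost:8000"

# Create a background agent
resp = requests.post(f"{BASE}/api/background/create", json={
    "agent_id": "system_monitor",
    "query": "Monitor system resources and alert on high usage",
    "schedule": "every 5 minutes",
})
print(resp.json())

# Check its status
print(requests.get(f"{BASE}/api/background/status/system_monitor").json())
```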
### 📅 Scheduling Configuration

The Background Agent system currently supports interval-based scheduling with intelligent parsing:

#### **Interval-Based Scheduling (Current Implementation)**
```python
# Schedule configuration in agent_config
agent_config = {
    "interval": 300,  # Execution interval in seconds
    "task_config": {
        "schedule": "every 5 minutes",  # Human-readable description
        "interval": 300,               # Same value in seconds
        "max_retries": 2,
        "retry_delay": 30
    }
}

# Flexible schedule input formats supported:
"300"                    # 300 seconds
"5 minutes"             # 5 minutes → 300 seconds
"2 hours"               # 2 hours → 7200 seconds
"30 seconds"            # 30 seconds
"every 30 seconds"      # Every 30 seconds
"every 10 minutes"      # Every 10 minutes → 600 seconds
"every 2 hours"         # Every 2 hours → 7200 seconds

# All automatically converted to interval seconds
# Minimum interval: 1 second
```

#### **Schedule Parsing Logic**
The system intelligently parses various schedule formats:
- **Raw numbers**: `"300"` → 300 seconds
- **Unit expressions**: `"5 minutes"` → 300 seconds
- **Every patterns**: `"every 10 minutes"` → 600 seconds
- **Supported units**: seconds (s/sec), minutes (m/min), hours (h/hr)

#### **Future Scheduling Features (Planned)**
```python
# Coming with future scheduler backends:
schedule = {
    "type": "cron",
    "cron": "0 9 * * 1-5",    # Weekdays at 9 AM
    "timezone": "UTC"
}

schedule = {
    "type": "date",
    "run_date": "2024-03-15 14:30:00",
    "timezone": "UTC"
}
```

### 🔄 Background Agent States

Background agents can be in different states, managed by the Background Agent Manager:

- **`CREATED`** - Agent created but not yet started
- **`RUNNING`** - Agent is active and executing according to schedule
- **`PAUSED`** - Agent is temporarily stopped but retains configuration
- **`STOPPED`** - Agent execution stopped but agent still exists
- **`ERROR`** - Agent encountered an error during execution
- **`DELETED`** - Agent permanently removed

### 📊 Monitoring & Observability

#### **Real-Time Status Monitoring**
```python
# Get comprehensive agent status
status = await manager.get_agent_status("system_monitor")

print(f"""
Agent Status Report:
├── ID: {status['agent_id']}
├── Name: {status['name']}
├── State: {status['state']}
├── Last Run: {status['last_run']}
├── Next Run: {status['next_run']}
├── Success Rate: {status['success_rate']}%
├── Total Executions: {status['total_runs']}
├── Failed Executions: {status['failed_runs']}
└── Average Duration: {status['avg_duration']}s
""")
```

#### **Execution History**
```python
# Get detailed execution history
history = await manager.get_execution_history("system_monitor", limit=5)

for execution in history:
    print(f"""
Execution {execution['execution_id']}:
├── Start Time: {execution['start_time']}
├── Duration: {execution['duration']}s
├── Status: {execution['status']}
├── Result: {execution['result'][:100]}...
└── Tools Used: {execution['tools_used']}
""")
```

### 🚀 Future Scheduler Support

The Background Agent Manager is designed to support multiple scheduling backends:

#### **Current Support**
- **APScheduler** - Full-featured Python task scheduling
  - In-memory scheduler
  - Persistent job storage
  - Multiple trigger types
  - Timezone support

#### **Planned Future Support**
- **RabbitMQ** - Message queue-based task distribution
  - Distributed agent execution
  - Load balancing across workers
  - Reliable message delivery
  - Dead letter queues for failed tasks

- **Redis Pub/Sub** - Event-driven agent communication
  - Real-time event processing
  - Agent-to-agent communication
  - Scalable event distribution
  - Pattern-based subscriptions

- **Celery** - Distributed task queue
  - Horizontal scaling
  - Result backends
  - Task routing and priority
  - Monitoring and management tools

- **Kubernetes Jobs** - Container-based agent deployment
  - Cloud-native scaling
  - Resource management
  - Job persistence and recovery
  - Integration with CI/CD pipelines

### 📋 Background Agent Configuration

#### **Complete Configuration Example**
```python
# Comprehensive background agent setup
background_agent = await manager.create_agent(
    agent_id="comprehensive_agent",
    name="Comprehensive Background Agent",
    task="Monitor APIs, process data, and generate reports",

    # Scheduling configuration
    schedule={
        "type": "cron",
        "cron": "0 */6 * * *",  # Every 6 hours
        "timezone": "UTC"
    },

    # AI model configuration
    model_config={
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.3,
        "max_tokens": 2000
    },

    # Agent behavior configuration
    agent_config={
        "tool_call_timeout": 60,
        "max_steps": 20,
        "request_limit": 100,
        "total_tokens_limit": 10000,
        "memory_results_limit": 10,
        "memory_similarity_threshold": 0.7
    },

    # Custom tools
    local_tools=tool_registry,

    # MCP server connections
    mcp_tools=[
        {
            "name": "api_monitor",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer api-token"}
        }
    ],

    # Agent personality
    system_instruction="You are an autonomous monitoring agent. Execute tasks efficiently and report any issues.",
    # Memory and events
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="redis_stream")
)
```

### 🔄 Error Handling & Recovery

Background agents include robust error handling:

```python
# Automatic retry configuration
agent_config = {
    "max_retries": 3,           # Retry failed executions
    "retry_delay": 60,          # Wait 60 seconds between retries
    "failure_threshold": 5,     # Pause agent after 5 consecutive failures
    "recovery_mode": "auto"     # Auto-resume after successful execution
}

# Error monitoring
try:
    result = await agent.execute_task()
except BackgroundAgentException as e:
    # Handle agent-specific errors
    await manager.handle_agent_error(agent_id, e)
```

### 📡 Event Integration

Background agents integrate with the event system for real-time monitoring:

```python
# Subscribe to background agent events
event_router = EventRouter(event_store_type="redis_stream")

# Listen for agent events
async for event in event_router.subscribe("background_agent.*"):
    if event.type == "agent_started":
        print(f"Agent {event.data['agent_id']} started execution")
    elif event.type == "agent_completed":
        print(f"Agent {event.data['agent_id']} completed task")
    elif event.type == "agent_failed":
        print(f"Agent {event.data['agent_id']} failed: {event.data['error']}")
```

## 📚 OmniAgent Examples

### Basic Agent Usage
```bash
# Complete OmniAgent demo with custom tools
python examples/omni_agent_example.py

# Advanced patterns with 12+ tool examples
python examples/run_omni_agent.py
```

### Background Agents
```bash
# Self-flying autonomous agents
python examples/background_agent_example.py
```

### Web Applications
```bash
# FastAPI integration
python examples/fast_api_impl.py

# Full web interface
python examples/web_server.py
# Open http://localhost:8000
```

---

# 🔌 MCPOmni Connect - World-Class MCP Client

The MCPOmni Connect system is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes.

## ✨ MCPOmni Connect Key Features

### 🤖 Intelligent Agent System

- **ReAct Agent Mode**
  - Autonomous task execution with reasoning and action cycles
  - Independent decision-making without human intervention
  - Advanced problem-solving through iterative reasoning
  - Self-guided tool selection and execution
  - Complex task decomposition and handling
- **Orchestrator Agent Mode**
  - Strategic multi-step task planning and execution
  - Intelligent coordination across multiple MCP servers
  - Dynamic agent delegation and communication
  - Parallel task execution when possible
  - Sophisticated workflow management with real-time progress monitoring
- **Interactive Chat Mode**
  - Human-in-the-loop task execution with approval workflows
  - Step-by-step guidance and explanations
  - Educational mode for understanding AI decision processes

### 🔌 Universal Connectivity

- **Multi-Protocol Support**
  - Native support for stdio transport
  - Server-Sent Events (SSE) for real-time communication
  - Streamable HTTP for efficient data streaming
  - Docker container integration
  - NPX package execution
  - Extensible transport layer for future protocols
- **Authentication Support**
  - OAuth 2.0 authentication flow
  - Bearer token authentication
  - Custom header support
  - Secure credential management
- **Agentic Operation Modes**
  - Seamless switching between chat, autonomous, and orchestrator modes
  - Context-aware mode selection based on task complexity
  - Persistent state management across mode transitions

## 🚦 Transport Types & Authentication

MCPOmni Connect supports multiple ways to connect to MCP servers:

### 1. **stdio** - Direct Process Communication

**Use when**: Connecting to local MCP servers that run as separate processes

```json
{
  "server-name": {
    "transport_type": "stdio",
    "command": "uvx",
    "args": ["mcp-server-package"]
  }
}
```

- **No authentication needed**
- **No OAuth server started**
- Most common for local development

### 2. **sse** - Server-Sent Events

**Use when**: Connecting to HTTP-based MCP servers using Server-Sent Events

```json
{
  "server-name": {
    "transport_type": "sse",
    "url": "http://your-server.com:4010/sse",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60,
    "sse_read_timeout": 120
  }
}
```

- **Uses Bearer token or custom headers**
- **No OAuth server started**

### 3. **streamable_http** - HTTP with Optional OAuth

**Use when**: Connecting to HTTP-based MCP servers with or without OAuth

**Without OAuth (Bearer Token):**

```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "url": "http://your-server.com:4010/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60
  }
}
```

- **Uses Bearer token or custom headers**
- **No OAuth server started**

**With OAuth:**

```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server.com:4010/mcp"
  }
}
```

- **OAuth callback server automatically starts on `http://localhost:3000`**
- **This is hardcoded and cannot be changed**
- **Required for the OAuth flow to work properly**

### 🔐 OAuth Server Behavior

**Important**: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server.

#### What You'll See:

```
🖥️  Started callback server on http://localhost:3000
```

#### Key Points:

- **This is normal behavior** - not an error
- **The address `http://localhost:3000` is hardcoded** and cannot be changed
- **The server only starts when** you have `"auth": {"method": "oauth"}` in your config
- **The server stops** when the application shuts down
- **Only used for OAuth token handling** - no other purpose

#### When OAuth is NOT Used:

- Remove the entire `"auth"` section from your server configuration
- Use `"headers"` with `"Authorization": "Bearer token"` instead
- No OAuth server will start

## 🖥️ CLI Commands

### Memory Store Management:
```bash
# Switch between memory backends
/memory_store:in_memory                    # Fast in-memory storage (default)
/memory_store:redis                        # Redis persistent storage
/memory_store:database                     # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db  # PostgreSQL
/memory_store:database:mysql://user:pass@host/db       # MySQL
/memory_store:mongodb                      # MongoDB persistent storage
/memory_store:mongodb:your_mongodb_connection_string   # MongoDB with custom URI

# Memory strategy configuration
/memory_mode:sliding_window:10             # Keep last 10 messages
/memory_mode:token_budget:5000             # Keep under 5000 tokens
```

### Event Store Management:
```bash
# Switch between event backends
/event_store:in_memory                     # Fast in-memory events (default)
/event_store:redis_stream                  # Redis Streams for persistence
```

### Core MCP Operations:
```bash
/tools                                    # List all available tools
/prompts                                  # List all available prompts
/resources                               # List all available resources
/prompt:<name>                           # Execute a specific prompt
/resource:<uri>                          # Read a specific resource
/subscribe:<uri>                         # Subscribe to resource updates
/query <your_question>                   # Ask questions using tools
```

### Enhanced Commands:
```bash
# Memory operations
/history                                   # Show conversation history
/clear_history                            # Clear conversation history
/save_history <file>                      # Save history to file
/load_history <file>                      # Load history from file

# Server management
/add_servers:<config.json>                # Add servers from config
/remove_server:<server_name>              # Remove specific server
/refresh                                  # Refresh server capabilities

# Agentic modes
/mode:auto                              # Switch to autonomous agentic mode
/mode:orchestrator                      # Switch to multi-server orchestration
/mode:chat                              # Switch to interactive chat mode

# Debugging and monitoring
/debug                                    # Toggle debug mode
/api_stats                               # Show API usage statistics
```

## 📚 MCP Usage Examples

### Basic MCP Client
```bash
# Launch the basic MCP client
python examples/basic_mcp.py
```

### Advanced MCP CLI
```bash
# Launch the advanced MCP CLI
python examples/run_mcp.py

# Core MCP client commands:
/tools                                    # List all available tools
/prompts                                  # List all available prompts
/resources                               # List all available resources
/prompt:<name>                           # Execute a specific prompt
/resource:<uri>                          # Read a specific resource
/subscribe:<uri>                         # Subscribe to resource updates
/query <your_question>                   # Ask questions using tools

# Advanced platform features:
/memory_store:redis                      # Switch to Redis memory
/event_store:redis_stream               # Switch to Redis events
/add_servers:<config.json>              # Add MCP servers dynamically
/remove_server:<name>                   # Remove MCP server
/mode:auto                              # Switch to autonomous agentic mode
/mode:orchestrator                      # Switch to multi-server orchestration
```

---

## ✨ Platform Features

> **🚀 Want to start building right away?** Jump to [Quick Start](#-quick-start-2-minutes) | [Examples](#-what-can-you-build-see-real-examples) | [Configuration](#️-configuration-guide)

### 🧠 AI-Powered Intelligence

- **Unified LLM Integration with LiteLLM**
  - Single unified interface for all AI providers
  - Support for 100+ models across providers, including:
    - OpenAI (GPT-4, GPT-3.5, etc.)
    - Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
    - Google (Gemini Pro, Gemini Flash, etc.)
    - Groq (Llama, Mixtral, Gemma, etc.)
    - DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
    - Azure OpenAI
    - OpenRouter (access to 200+ models)
    - Ollama (local models)
  - Simplified configuration and reduced complexity
  - Dynamic system prompts based on available capabilities
  - Intelligent context management
  - Automatic tool selection and chaining
  - Universal model support through custom ReAct Agent
    - Handles models without native function calling
    - Dynamic function execution based on user requests
    - Intelligent tool orchestration

### 🔒 Security & Privacy

- **Explicit User Control**
  - All tool executions require explicit user approval in chat mode
  - Clear explanation of tool actions before execution
  - Transparent disclosure of data access and usage
- **Data Protection**
  - Strict data access controls
  - Server-specific data isolation
  - No unauthorized data exposure
- **Privacy-First Approach**
  - Minimal data collection
  - User data remains on specified servers
  - No cross-server data sharing without consent
- **Secure Communication**
  - Encrypted transport protocols
  - Secure API key management
  - Environment variable protection

### 💾 Advanced Memory Management

- **Multi-Backend Memory Storage**
  - **In-Memory**: Fast development storage
  - **Redis**: Persistent memory with real-time access
  - **Database**: PostgreSQL, MySQL, SQLite support
  - **MongoDB**: NoSQL document storage
  - **File Storage**: Save/load conversation history
  - Runtime switching: `/memory_store:redis`, `/memory_store:database:postgresql://user:pass@host/db`
- **Multi-Tier Memory Strategy**
  - **Short-term Memory**: Sliding window or token budget strategies
  - **Long-term Memory**: Vector database storage for semantic retrieval
  - **Episodic Memory**: Context-aware conversation history
  - Runtime configuration: `/memory_mode:sliding_window:5`, `/memory_mode:token_budget:3000`
- **Vector Database Integration**
  - **Multiple Provider Support**: MongoDB Atlas, ChromaDB (remote/cloud), and Qdrant (remote)
  - **Smart Fallback**: Automatic failover to local storage if the remote store fails
  - **Semantic Search**: Intelligent context retrieval across conversations
  - **Long-term & Episodic Memory**: Enable with `ENABLE_VECTOR_DB=true`
- **Real-Time Event Streaming**
  - **In-Memory Events**: Fast development event processing
  - **Redis Streams**: Persistent event storage and streaming
  - Runtime switching: `/event_store:redis_stream`, `/event_store:in_memory`
- **Advanced Tracing & Observability**
  - **Opik Integration**: Production-grade tracing and monitoring
    - **Real-time Performance Tracking**: Monitor LLM calls, tool executions, and agent performance
    - **Detailed Call Traces**: See exactly where time is spent in your AI workflows
    - **System Observability**: Understand bottlenecks and optimize performance
    - **Open Source**: Built on Opik, the open-source observability platform
  - **Easy Setup**: Just add your Opik credentials to start monitoring
  - **Zero Code Changes**: Automatic tracing with `@track` decorators
  - **Performance Insights**: Identify slow operations and optimization opportunities
### 💬 Prompt Management

- **Advanced Prompt Handling**
  - Dynamic prompt discovery across servers
  - Flexible argument parsing (JSON and key-value formats)
  - Cross-server prompt coordination
  - Intelligent prompt validation
  - Context-aware prompt execution
  - Real-time prompt responses
  - Support for complex nested arguments
  - Automatic type conversion and validation
- **Client-Side Sampling Support**
  - Dynamic sampling configuration from client
  - Flexible LLM response generation
  - Customizable sampling parameters
  - Real-time sampling adjustments

### 🛠️ Tool Orchestration

- **Dynamic Tool Discovery & Management**
  - Automatic tool capability detection
  - Cross-server tool coordination
  - Intelligent tool selection based on context
  - Real-time tool availability updates

### 📦 Resource Management

- **Universal Resource Access**
  - Cross-server resource discovery
  - Unified resource addressing
  - Automatic resource type detection
  - Smart content summarization

### 🔄 Server Management

- **Advanced Server Handling**
  - Multiple simultaneous server connections
  - Automatic server health monitoring
  - Graceful connection management
  - Dynamic capability updates
  - Flexible authentication methods
  - Runtime server configuration updates

## 🏗️ Architecture

> **📚 Prefer hands-on learning?** Skip to [Examples](#-what-can-you-build-see-real-examples) or [Configuration](#️-configuration-guide)

### Core Components

```
OmniCoreAgent Platform
├── 🤖 OmniAgent System (Revolutionary Agent Builder)
│   ├── Local Tools Registry
│   ├── Background Agent Manager (Lifecycle Management)
│   │   ├── Agent Creation & Deployment
│   │   ├── Task & Schedule Updates
│   │   ├── Agent Control (Start/Stop/Pause/Resume)
│   │   ├── Health Monitoring & Status Tracking
│   │   └── Scheduler Integration (APScheduler + Future: RabbitMQ, Redis Pub/Sub)
│   ├── Custom Agent Creation
│   └── Agent Orchestration Engine
├── 🔌 MCPOmni Connect System (World-Class MCP Client)
│   ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│   ├── Multi-Server Orchestration
│   ├── Authentication & Security
│   └── Connection Lifecycle Management
├── 🧠 Shared Memory System (Both Systems)
│   ├── Multi-Backend Storage (Redis, DB, In-Memory)
│   ├── Vector Database Integration (ChromaDB, Qdrant, MongoDB)
│   ├── Memory Strategies (Sliding Window, Token Budget)
│   └── Session Management
├── 📡 Event System (Both Systems)
│   ├── In-Memory Event Processing
│   ├── Redis Streams for Persistence
│   ├── Real-Time Event Monitoring
│   └── Background Agent Event Broadcasting
├── 🛠️ Tool Management (Both Systems)
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   ├── Local Python Tool Registration
│   └── Tool Execution Engine
└── 🤖 AI Integration (Both Systems)
    ├── LiteLLM (100+ Models)
    ├── Context Management
    ├── ReAct Agent Processing
    └── Response Generation
```

---

## 📦 Installation

### ✅ **Minimal Setup (Just Python + API Key)**

**Required:**
- Python 3.10+
- LLM API key (OpenAI, Anthropic, Groq, etc.)

**Optional (for advanced features):**
- Redis (persistent memory)
- Vector DB (Qdrant, ChromaDB, or MongoDB Atlas)
- Database (PostgreSQL/MySQL/SQLite)
- Opik account (for tracing/observability)

### 📦 **Installation**

```bash
# Option 1: UV (recommended - faster)
uv add omnicoreagent

# Option 2: Pip (standard)
pip install omnicoreagent
```

### ⚡ **Quick Configuration**

**Minimal setup** (get started immediately):
```bash
# Just set your API key - that's it!
echo "LLM_API_KEY=your_api_key_here" > .env
```

**Advanced setup** (optional features):
> **📖 Need more options?** See the complete [Configuration Guide](#️-configuration-guide) below for all environment variables, vector database setup, memory configuration, and advanced features.

---

## ⚙️ Configuration Guide

> **⚡ Quick Setup**: Only `LLM_API_KEY` is needed to get started! | **🔍 Detailed Setup**: [Vector DB](#-vector-database--smart-memory-setup-complete-guide) | [Tracing](#-opik-tracing--observability-setup-latest-feature)

### Environment Variables

Create a `.env` file with your configuration. **Only the LLM API key is required** - everything else is optional for advanced features.

#### **🔥 REQUIRED (Start Here)**
```bash
# ===============================================
# REQUIRED: AI Model API Key (Choose one provider)
# ===============================================
LLM_API_KEY=your_openai_api_key_here
# OR for other providers:
# LLM_API_KEY=your_anthropic_api_key_here
# LLM_API_KEY=your_groq_api_key_here
# LLM_API_KEY=your_azure_openai_api_key_here
# See examples/llm_usage-config.json for all provider configs
```

#### **⚡ OPTIONAL: Advanced Features**
```bash
# ===============================================
# Embeddings (OPTIONAL) - NEW!
# ===============================================
# For generating text embeddings (vector representations)
# Choose one provider - the same key works for all embedding models
EMBEDDING_API_KEY=your_embedding_api_key_here
# OR for other providers:
# EMBEDDING_API_KEY=your_cohere_api_key_here
# EMBEDDING_API_KEY=your_huggingface_api_key_here
# EMBEDDING_API_KEY=your_mistral_api_key_here
# See docs/EMBEDDING_README.md for all provider configs

# ===============================================
# Tracing & Observability (OPTIONAL) - NEW!
# ===============================================
# For advanced monitoring and performance optimization
# 🔗 Sign up: https://www.comet.com/signup?from=llm
OPIK_API_KEY=your_opik_api_key_here
OPIK_WORKSPACE=your_opik_workspace_name

# ===============================================
# Vector Database (OPTIONAL) - Smart Memory
# ===============================================
# ⚠️ Warning: 30-60s startup time for the sentence transformer
# ⚠️ IMPORTANT: You MUST choose a provider - no local fallback
ENABLE_VECTOR_DB=true  # Default: false

# Choose ONE provider (required if ENABLE_VECTOR_DB=true):

# Option 1: Qdrant Remote (RECOMMENDED)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Option 2: ChromaDB Remote
# OMNI_MEMORY_PROVIDER=chroma-remote
# CHROMA_HOST=localhost
# CHROMA_PORT=8000

# Option 3: ChromaDB Cloud
# OMNI_MEMORY_PROVIDER=chroma-cloud
# CHROMA_TENANT=your_tenant
# CHROMA_DATABASE=your_database
# CHROMA_API_KEY=your_api_key

# Option 4: MongoDB Atlas
# OMNI_MEMORY_PROVIDER=mongodb-remote
# MONGODB_URI="your_mongodb_connection_string"
# MONGODB_DB_NAME="db name"

# ===============================================
# Persistent Memory Storage (OPTIONAL)
# ===============================================
# These have sensible defaults - only set if you need custom configuration

# Redis - for memory_store_type="redis" (defaults to: redis://localhost:6379/0)
# REDIS_URL=redis://your-remote-redis:6379/0
# REDIS_URL=redis://:password@localhost:6379/0  # With password

# Database - for memory_store_type="database"
# DATABASE_URL=sqlite:///omnicoreagent_memory.db
# DATABASE_URL=postgresql://user:password@localhost:5432/omnicoreagent
# DATABASE_URL=mysql://user:password@localhost:3306/omnicoreagent

# MongoDB - for memory_store_type="mongodb" (defaults to: mongodb://localhost:27017/omnicoreagent)
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"
```

> **💡 Quick Start**: Just set `LLM_API_KEY` and you're ready to go! Add other variables only when you need advanced features.

### **Server Configuration (`servers_config.json`)**

For MCP server connections and agent settings:

#### Basic OpenAI Configuration

```json
{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 0,          // 0 = unlimited (production mode), set > 0 to enable limits
    "total_tokens_limit": 0,     // 0 = unlimited (production mode), set > 0 to enable limits
    "memory_results_limit": 5,   // Number of memory results to retrieve (1-100, default: 5)
    "memory_similarity_threshold": 0.5  // Similarity threshold for memory filtering (0.0-1.0, default: 0.5)
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 30000,
    "top_p": 0
  },
  "Embedding": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "dimensions": 1536,
    "encoding_format": "float"
  },
  "mcpServers": {
    "ev_assistant": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://localhost:8000/mcp"
    },
    "sse-server": {
      "transport_type": "sse",
      "url": "http://localhost:3000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    },
    "streamable_http-server": {
      "transport_type": "streamable_http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    }
  }
}
```

#### Multiple Provider Examples

**Anthropic Claude Configuration**
```json
{
  "LLM": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20241022",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```

**Groq Configuration**
```json
{
  "LLM": {
    "provider": "groq",
    "model": "llama-3.1-8b-instant",
    "temperature": 0.5,
    "max_tokens": 2000,
    "max_context_length": 8000,
    "top_p": 0.9
  }
}
```

**Azure OpenAI Configuration**
```json
{
  "LLM": {
    "provider": "azureopenai",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 2000,
    "max_context_length": 100000,
    "top_p": 0.95,
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01",
    "azure_deployment": "your-deployment-name"
  }
}
```

**Ollama Local Model Configuration**
```json
{
  "LLM": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 100000,
    "top_p": 0.7,
    "ollama_host": "http://localhost:11434"
  }
}
```

**OpenRouter Configuration**
```json
{
  "LLM": {
    "provider": "openrouter",
    "model": "anthropic/claude-3.5-sonnet",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```

### 🔐 Authentication Methods

OmniCoreAgent supports multiple authentication methods for secure server connections:

#### OAuth 2.0 Authentication
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server/mcp"
  }
}
```

#### Bearer Token Authentication
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "Authorization": "Bearer your-token-here"
    },
    "url": "http://your-server/mcp"
  }
}
```

#### Custom Headers
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "X-Custom-Header": "value",
      "Authorization": "Custom-Auth-Scheme token"
    },
    "url": "http://your-server/mcp"
  }
}
```

## 🔄 Dynamic Server Configuration

OmniCoreAgent supports dynamic server configuration through commands:

#### Add New Servers
```bash
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
```

The configuration file can include multiple servers with different authentication methods:

```json
{
  "new-server": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
  },
  "another-server": {
    "transport_type": "sse",
    "headers": {
      "Authorization": "Bearer token"
    },
    "url": "http://localhost:3000/sse"
  }
}
```

#### Remove Servers
```bash
# Remove a server by its name
/remove_server:server_name
```

---

## 🧠 Vector Database & Smart Memory Setup (Complete Guide)

OmniCoreAgent provides advanced memory capabilities through vector databases for intelligent, semantic search and long-term memory.

#### **⚡ Quick Start (Choose Your Provider)**
```bash
# Enable vector memory - you MUST choose a provider
ENABLE_VECTOR_DB=true

# Option 1: Qdrant (recommended)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Option 2: ChromaDB Remote
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000

# Option 3: ChromaDB Cloud
### πŸ” Authentication Methods

OmniCoreAgent supports multiple authentication methods for secure server connections:

#### OAuth 2.0 Authentication
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server/mcp"
  }
}
```

#### Bearer Token Authentication
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "Authorization": "Bearer your-token-here"
    },
    "url": "http://your-server/mcp"
  }
}
```

#### Custom Headers
```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "X-Custom-Header": "value",
      "Authorization": "Custom-Auth-Scheme token"
    },
    "url": "http://your-server/mcp"
  }
}
```

## πŸ”„ Dynamic Server Configuration

OmniCoreAgent supports dynamic server configuration through commands:

#### Add New Servers
```bash
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
```

The configuration file can include multiple servers with different authentication methods:

```json
{
  "new-server": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
  },
  "another-server": {
    "transport_type": "sse",
    "headers": {
      "Authorization": "Bearer token"
    },
    "url": "http://localhost:3000/sse"
  }
}
```

#### Remove Servers
```bash
# Remove a server by its name
/remove_server:server_name
```

---

## 🧠 Vector Database & Smart Memory Setup (Complete Guide)

OmniCoreAgent provides advanced memory capabilities through vector databases for intelligent, semantic search and long-term memory.

#### **⚑ Quick Start (Choose Your Provider)**
```bash
# Enable vector memory - you MUST choose a provider
ENABLE_VECTOR_DB=true

# Option 1: Qdrant (recommended)
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Option 2: ChromaDB Remote
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000

# Option 3: ChromaDB Cloud
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key

# Option 4: MongoDB Atlas
OMNI_MEMORY_PROVIDER=mongodb-remote
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"

# Disable vector memory (default)
ENABLE_VECTOR_DB=false
```

#### **πŸ”§ Vector Database Providers**

**1. Qdrant Remote**
```bash
# Install and run Qdrant
docker run -p 6333:6333 qdrant/qdrant

# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=qdrant-remote
QDRANT_HOST=localhost
QDRANT_PORT=6333
```

**2. MongoDB Atlas**
```bash
# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=mongodb-remote
MONGODB_URI="your_mongodb_connection_string"
MONGODB_DB_NAME="db name"
```

**3. ChromaDB Remote**
```bash
# Install and run ChromaDB server
docker run -p 8000:8000 chromadb/chroma

# Configure
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-remote
CHROMA_HOST=localhost
CHROMA_PORT=8000
```

**4. ChromaDB Cloud**
```bash
ENABLE_VECTOR_DB=true
OMNI_MEMORY_PROVIDER=chroma-cloud
CHROMA_TENANT=your_tenant
CHROMA_DATABASE=your_database
CHROMA_API_KEY=your_api_key
```

#### **✨ What You Get**
- **Long-term Memory**: Persistent storage across sessions
- **Episodic Memory**: Context-aware conversation history
- **Semantic Search**: Find relevant information by meaning, not exact text
- **Multi-session Context**: Remember information across different conversations
- **Automatic Summarization**: Intelligent memory compression for efficiency
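For programmatic use, vector memory is driven entirely by these environment variables; no extra code path is required. A minimal sketch, assuming Qdrant is running locally and using the `MemoryRouter` shown in the Developer Integration section below (in practice you would set these values in `.env` rather than in code):

```python
# Enable vector memory via environment variables before building the agent's
# memory store. Assumes Qdrant is reachable on localhost:6333.
import os

from omnicoreagent.core.memory_store.memory_router import MemoryRouter

os.environ["ENABLE_VECTOR_DB"] = "true"
os.environ["OMNI_MEMORY_PROVIDER"] = "qdrant-remote"
os.environ["QDRANT_HOST"] = "localhost"
os.environ["QDRANT_PORT"] = "6333"

# Persistent conversation store (Redis here), as in the examples below.
memory_store = MemoryRouter(memory_store_type="redis")
```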
---

## πŸ“Š Opik Tracing & Observability Setup (Latest Feature)

**Monitor and optimize your AI agents with production-grade observability:**

#### **πŸš€ Quick Setup**

1. **Sign up for Opik** (Free & Open Source):
   - Visit: **[https://www.comet.com/signup?from=llm](https://www.comet.com/signup?from=llm)**
   - Create your account and get your API key and workspace name

2. **Add to your `.env` file** (see [Environment Variables](#environment-variables) above):
   ```bash
   OPIK_API_KEY=your_opik_api_key_here
   OPIK_WORKSPACE=your_opik_workspace_name
   ```

#### **✨ What You Get Automatically**

Once configured, OmniCoreAgent automatically tracks:

- **πŸ”₯ LLM Call Performance**: Execution time, token usage, response quality
- **πŸ› οΈ Tool Execution Traces**: Which tools were used and how long they took
- **🧠 Memory Operations**: Vector DB queries, memory retrieval performance
- **πŸ€– Agent Workflow**: Complete trace of multi-step agent reasoning
- **πŸ“Š System Bottlenecks**: Identify exactly where time is spent

#### **πŸ“ˆ Benefits**

- **Performance Optimization**: See which LLM calls or tools are slow
- **Cost Monitoring**: Track token usage and API costs
- **Debugging**: Understand agent decision-making processes
- **Production Monitoring**: Real-time observability for deployed agents
- **Zero Code Changes**: Works automatically with existing agents

#### **πŸ” Example: What You'll See**

```
Agent Execution Trace:
β”œβ”€β”€ agent_execution: 4.6s
β”‚   β”œβ”€β”€ tools_registry_retrieval: 0.02s βœ…
β”‚   β”œβ”€β”€ memory_retrieval_step: 0.08s βœ…
β”‚   β”œβ”€β”€ llm_call: 4.5s ⚠️ (bottleneck identified!)
β”‚   β”œβ”€β”€ response_parsing: 0.01s βœ…
β”‚   └── action_execution: 0.03s βœ…
```

**πŸ’‘ Pro Tip**: Opik is completely optional. If you don't set the credentials, OmniCoreAgent works normally without tracing.

---

## πŸ§‘β€πŸ’» Developer Integration

OmniCoreAgent is not just a CLI tool; it's also a powerful Python library. Both systems can be used programmatically in your applications.

### Using OmniAgent in Applications

```python
from omnicoreagent.omni_agent import OmniAgent
from omnicoreagent.core.memory_store.memory_router import MemoryRouter
from omnicoreagent.core.events.event_router import EventRouter
from omnicoreagent.core.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant.",
    model_config={
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7
    },
    local_tools=tool_registry,  # Your custom tools!
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app
result = await agent.run("Analyze some sample data")
```
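`agent.run()` returns the response together with a `session_id` (used in the FastAPI example below), which you can feed back in to keep conversational context. A sketch continuing from the agent above; the return shape is assumed from that example:

```python
# Multi-turn usage: reuse the returned session_id so the agent keeps context.
# Assumes agent.run returns a dict with "response" and "session_id" keys,
# as used in the FastAPI example below.
import asyncio

async def main():
    first = await agent.run("Analyze some sample data")
    session_id = first["session_id"]

    # Second turn in the same session - the agent sees the earlier exchange.
    followup = await agent.run("Summarize what you just did", session_id)
    print(followup["response"])

asyncio.run(main())
```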
### FastAPI Integration with OmniAgent

OmniAgent makes building APIs incredibly simple. See [`examples/web_server.py`](examples/web_server.py) for a complete FastAPI example:

```python
from fastapi import FastAPI
from omnicoreagent.omni_agent import OmniAgent

app = FastAPI()
agent = OmniAgent(...)  # Your agent setup from above

@app.post("/chat")
async def chat(message: str, session_id: str | None = None):
    result = await agent.run(message, session_id)
    return {"response": result["response"], "session_id": result["session_id"]}

@app.get("/tools")
async def get_tools():
    # Returns both MCP tools AND your custom tools automatically
    return agent.get_available_tools()
```
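To serve this app, use any ASGI server. A sketch with uvicorn, assuming the code above is saved as `web_server.py`:

```python
# Run the FastAPI app with uvicorn. The import string form ("module:attribute")
# is required when reload=True.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("web_server:app", host="127.0.0.1", port=8000, reload=True)
```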
### Using MCPOmni Connect Programmatically

```python
from omnicoreagent.mcp_client import MCPClient

# Create MCP client
client = MCPClient(config_file="servers_config.json")

# Connect to servers
await client.connect_all()

# Use tools
tools = await client.list_tools()
result = await client.call_tool("tool_name", {"arg": "value"})
```
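Since these calls are coroutines, they need an event loop. A runnable sketch wrapping the snippet above with `asyncio`, assuming the methods shown are the full client lifecycle (no explicit shutdown call is documented):

```python
# Runnable entry point for the MCPClient snippet above.
import asyncio

from omnicoreagent.mcp_client import MCPClient

async def main():
    client = MCPClient(config_file="servers_config.json")
    await client.connect_all()

    # Discover what the connected servers expose.
    tools = await client.list_tools()
    print(f"Discovered {len(tools)} tools across connected servers")

asyncio.run(main())
```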
**Key Benefits:**

- **One OmniAgent = MCP + Custom Tools + Memory + Events**
- **Automatic tool discovery** from all connected MCP servers
- **Built-in session management** and conversation history
- **Real-time event streaming** for monitoring
- **Easy integration** with any Python web framework

---

## 🎯 Usage Patterns

### Interactive Commands

- `/tools` - List all available tools across servers
- `/prompts` - View available prompts
- `/prompt:<name>/<args>` - Execute a prompt with arguments
- `/resources` - List available resources
- `/resource:<uri>` - Access and analyze a resource
- `/debug` - Toggle debug mode
- `/refresh` - Update server capabilities
- `/memory` - Toggle Redis memory persistence (on/off)
- `/mode:auto` - Switch to autonomous agentic mode
- `/mode:chat` - Switch back to interactive chat mode
- `/add_servers:<config.json>` - Add one or more servers from a configuration file
- `/remove_server:<server_name>` - Remove a server by its name

### Memory and Chat History

```bash
# Enable Redis memory persistence
/memory

# Check memory status
Memory persistence is now ENABLED using Redis

# Disable memory persistence
/memory

# Check memory status
Memory persistence is now DISABLED
```

### Operation Modes

```bash
# Switch to autonomous mode
/mode:auto

# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.

# Switch back to chat mode
/mode:chat

# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
```

### Mode Differences

- **Chat Mode (Default)**
  - Requires explicit approval for tool execution
  - Interactive conversation style
  - Step-by-step task execution
  - Detailed explanations of actions

- **Autonomous Mode**
  - Independent task execution
  - Self-guided decision making
  - Automatic tool selection and chaining
  - Progress updates and final results
  - Complex task decomposition
  - Error handling and recovery

- **Orchestrator Mode**
  - Advanced planning for complex multi-step tasks
  - Strategic delegation across multiple MCP servers
  - Intelligent agent coordination and communication
  - Parallel task execution when possible
  - Dynamic resource allocation
  - Sophisticated workflow management
  - Real-time progress monitoring across agents
  - Adaptive task prioritization

### Prompt Management

```bash
# List all available prompts
/prompts

# Basic prompt usage
/prompt:weather/location=tokyo

# Prompt with multiple arguments (argument names depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25

# JSON format for complex arguments
/prompt:analyze-data/{
    "dataset": "sales_2024",
    "metrics": ["revenue", "growth"],
    "filters": {
        "region": "europe",
        "period": "q1"
    }
}

# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
    "price_range": {"min": 500, "max": 1000},
    "features": ["5G", "wireless-charging"],
    "markets": ["US", "EU", "Asia"]
}
```

### Advanced Prompt Features

- **Argument Validation**: Automatic type checking and validation
- **Default Values**: Smart handling of optional arguments
- **Context Awareness**: Prompts can access previous conversation context
- **Cross-Server Execution**: Seamless execution across multiple MCP servers
- **Error Handling**: Graceful handling of invalid arguments with helpful messages
- **Dynamic Help**: Detailed usage information for each prompt

### AI-Powered Interactions

The client intelligently:

- Chains multiple tools together
- Provides context-aware responses
- Automatically selects appropriate tools
- Handles errors gracefully
- Maintains conversation context

### Model Support with LiteLLM

- **Unified Model Access**
  - Single interface for 100+ models across all major providers
  - Automatic provider detection and routing
  - Consistent API regardless of underlying provider
  - Native function calling for compatible models
  - ReAct Agent fallback for models without function calling
- **Supported Providers**
  - **OpenAI**: GPT-4, GPT-3.5, and all model variants
  - **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
  - **Google**: Gemini Pro, Gemini Flash, PaLM models
  - **Groq**: Ultra-fast inference for Llama, Mixtral, Gemma
  - **DeepSeek**: DeepSeek-V3, DeepSeek-Coder, and specialized models
  - **Azure OpenAI**: Enterprise-grade OpenAI models
  - **OpenRouter**: Access to 200+ models from various providers
  - **Ollama**: Local model execution with privacy
- **Advanced Features**
  - Automatic model capability detection
  - Dynamic tool execution based on model features
  - Intelligent fallback mechanisms
  - Provider-specific optimizations

### Token & Usage Management

OmniCoreAgent provides advanced controls and visibility over your API usage and resource limits.

#### View API Usage Stats

Use the `/api_stats` command to see your current usage:

```bash
/api_stats
```

This will display:

- **Total requests made**
- **Total tokens used**
- **Total response tokens**

#### Set Usage Limits

You can set limits to automatically stop execution when thresholds are reached:

- **Total Request Limit:** Set the maximum number of requests allowed in a session.
- **Total Token Usage Limit:** Set the maximum number of tokens that can be used.
- **Tool Call Timeout:** Set the maximum time (in seconds) a tool call can take before being terminated.
- **Max Steps:** Set the maximum number of steps the agent can take before stopping.

You can configure these in your `servers_config.json` under the `AgentConfig` section:

```json
"AgentConfig": {
    "agent_name": "OmniAgent",              // Unique agent identifier
    "tool_call_timeout": 30,                // Tool call timeout in seconds
    "max_steps": 15,                        // Max number of reasoning/tool steps before termination

    // --- Limits ---
    "request_limit": 0,                     // 0 = unlimited (production mode), set > 0 to enable limits
    "total_tokens_limit": 0,                // 0 = unlimited (production mode), set > 0 for hard cap on tokens

    // --- Memory Retrieval Config ---
    "memory_config": {
        "mode": "sliding_window",           // Options: sliding_window, episodic, vector
        "value": 100                        // Window size or parameter value depending on mode
    },
    "memory_results_limit": 5,              // Number of memory results to retrieve (1–100, default: 5)
    "memory_similarity_threshold": 0.5,     // Similarity threshold for memory filtering (0.0–1.0, default: 0.5)

    // --- Tool Retrieval Config ---
    "enable_tools_knowledge_base": false,   // Enable semantic tool retrieval (default: false)
    "tools_results_limit": 10,              // Max number of tools to retrieve (default: 10)
    "tools_similarity_threshold": 0.1,      // Similarity threshold for tool retrieval (0.0–1.0, default: 0.1)

    // --- Memory Tool Backend ---
    "memory_tool_backend": "None"           // Backend for memory tool. Options: "None" (default), "local", "s3", or "db"
}
```

(As with the earlier example, the `//` comments are for documentation only; remove them so the file parses as valid JSON.)

When any of these limits is reached, the agent automatically stops running and notifies you.

#### Example Commands

```bash
# Check your current API usage and limits
/api_stats

# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)
```
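Because the limits live in plain JSON, you can inspect them without starting the agent. A hypothetical helper, assuming the `//` comments above have been removed so the file parses as valid JSON:

```python
# Print the limits configured in servers_config.json.
import json

with open("servers_config.json") as f:
    config = json.load(f)

agent_cfg = config.get("AgentConfig", {})
for key in ("request_limit", "total_tokens_limit", "tool_call_timeout", "max_steps"):
    value = agent_cfg.get(key, "not set")
    note = " (unlimited)" if value == 0 and key.endswith("limit") else ""
    print(f"{key}: {value}{note}")
```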
## πŸ”§ Advanced Features

### Tool Orchestration

```python
# Example of automatic tool chaining, when the needed tools are available on connected servers
User: "Find charging stations near Silicon Valley and check their current status"

# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
```

### Resource Analysis

```python
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"

# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
```

### πŸ› οΈ Troubleshooting Common Issues

#### "Failed to connect to server: Session terminated"

**Possible Causes & Solutions:**

1. **Wrong Transport Type**
   ```
   Problem: Your server expects 'stdio' but you configured 'streamable_http'
   Solution: Check your server's documentation for the correct transport type
   ```

2. **OAuth Configuration Mismatch**
   ```
   Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
   Solution: Remove the "auth" section entirely and use headers instead:

   "headers": {
       "Authorization": "Bearer your-token"
   }
   ```

3. **Server Not Running**
   ```
   Problem: The MCP server at the specified URL is not running
   Solution: Start your MCP server first, then connect with OmniCoreAgent
   ```

4. **Wrong URL or Port**
   ```
   Problem: URL in config doesn't match where your server is running
   Solution: Verify the server's actual address and port
   ```

#### "Started callback server on http://localhost:3000" - Is This Normal?

**Yes, this is completely normal** when:

- You have `"auth": {"method": "oauth"}` in any server configuration
- The OAuth server handles authentication tokens automatically
- You cannot and should not try to change this address

**If you don't want the OAuth server:**

- Remove `"auth": {"method": "oauth"}` from all server configurations
- Use alternative authentication methods like Bearer tokens

### πŸ“‹ Configuration Examples by Use Case

#### Local Development (stdio)

```json
{
  "mcpServers": {
    "local-tools": {
      "transport_type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-tools"]
    }
  }
}
```

#### Remote Server with Token

```json
{
  "mcpServers": {
    "remote-api": {
      "transport_type": "streamable_http",
      "url": "http://api.example.com:8080/mcp",
      "headers": {
        "Authorization": "Bearer abc123token"
      }
    }
  }
}
```

#### Remote Server with OAuth

```json
{
  "mcpServers": {
    "oauth-server": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://oauth-server.com:8080/mcp"
    }
  }
}
```
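A quick pre-flight check can catch the two most common misconfigurations described above before you connect. A hypothetical sketch, not part of the library:

```python
# Flag servers with a missing transport_type or a conflicting auth setup
# (OAuth block combined with an Authorization header).
import json

with open("servers_config.json") as f:
    servers = json.load(f).get("mcpServers", {})

for name, cfg in servers.items():
    if "transport_type" not in cfg:
        print(f"[{name}] missing transport_type (stdio, sse, or streamable_http)")
    uses_oauth = cfg.get("auth", {}).get("method") == "oauth"
    has_bearer = "Authorization" in cfg.get("headers", {})
    if uses_oauth and has_bearer:
        print(f"[{name}] uses both OAuth and an Authorization header - pick one")
```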
---

## πŸ§ͺ Testing

### Running Tests

```bash
# Run all tests with verbose output
pytest tests/ -v

# Run specific test file
pytest tests/test_specific_file.py -v

# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
```

### Test Structure

```
tests/
β”œβ”€β”€ unit/           # Unit tests for individual components
β”œβ”€β”€ omni_agent/     # OmniAgent system tests
β”œβ”€β”€ mcp_client/     # MCPOmni Connect system tests
└── integration/    # Integration tests for both systems
```

### Development Quick Start

1. **Installation**

   ```bash
   # Clone the repository
   git clone https://github.com/Abiorh001/omnicoreagent.git
   cd omnicoreagent

   # Create and activate virtual environment
   uv venv
   source .venv/bin/activate

   # Install dependencies
   uv sync
   ```

2. **Configuration**

   ```bash
   # Set up environment variables
   echo "LLM_API_KEY=your_api_key_here" > .env

   # Configure your servers in servers_config.json
   ```

3. **Start Systems**

   ```bash
   # Try OmniAgent
   uv run examples/omni_agent_example.py

   # Or try MCPOmni Connect
   uv run examples/mcp_client_example.py
   ```

   Or:

   ```bash
   python examples/omni_agent_example.py
   python examples/mcp_client_example.py
   ```

---

## πŸ” Troubleshooting

> **🚨 Most Common Issues**: Check [Quick Fixes](#-quick-fixes-common-issues) below first!
>
> **πŸ“– For comprehensive setup help**: See [βš™οΈ Configuration Guide](#️-configuration-guide) | [🧠 Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide)

### 🚨 **Quick Fixes (Common Issues)**

| **Error** | **Quick Fix** |
|-----------|---------------|
| `Error: Invalid API key` | Check your `.env` file: `LLM_API_KEY=your_actual_key` |
| `ModuleNotFoundError: omnicoreagent` | Run: `uv add omnicoreagent` or `pip install omnicoreagent` |
| `Connection refused` | Ensure MCP server is running before connecting |
| `ChromaDB not available` | Install: `pip install chromadb` - [See Vector DB Setup](#-vector-database--smart-memory-setup-complete-guide) |
| `Redis connection failed` | Install Redis or use in-memory mode (default) |
| `Tool execution failed` | Check tool permissions and arguments |

### Detailed Issues and Solutions

1. **Connection Issues**

   ```bash
   Error: Could not connect to MCP server
   ```

   - Check if the server is running
   - Verify server configuration in `servers_config.json`
   - Ensure network connectivity
   - Check server logs for errors
   - **See [Transport Types & Authentication](#-transport-types--authentication) for detailed setup**

2. **API Key Issues**

   ```bash
   Error: Invalid API key
   ```

   - Verify API key is correctly set in `.env`
   - Check if API key has required permissions
   - Ensure API key is for correct environment (production/development)
   - **See [Configuration Guide](#️-configuration-guide) for correct setup**

3. **Redis Connection**

   ```bash
   Error: Could not connect to Redis
   ```

   - Verify Redis server is running
   - Check Redis connection settings in `.env`
   - Ensure Redis password is correct (if configured)

4. **Tool Execution Failures**

   ```bash
   Error: Tool execution failed
   ```

   - Check tool availability on connected servers
   - Verify tool permissions
   - Review tool arguments for correctness

5. **Vector Database Issues**

   ```bash
   Error: Vector database connection failed
   ```

   - Ensure chosen provider (Qdrant, ChromaDB, MongoDB) is running
   - Check connection settings in `.env`
   - Verify API keys for cloud providers
   - **See [Vector Database Setup](#-vector-database--smart-memory-setup-complete-guide) for detailed configuration**

6. **Import Errors**

   ```bash
   ImportError: cannot import name 'OmniAgent'
   ```

   - Check package installation: `pip show omnicoreagent`
   - Verify Python version compatibility (3.10+)
   - Try reinstalling: `pip uninstall omnicoreagent && pip install omnicoreagent`

### Debug Mode

Enable debug mode for detailed logging:

```bash
# In MCPOmni Connect
/debug

# In OmniAgent
agent = OmniAgent(..., debug=True)
```

### **Getting Help**

1. **First**: Check the [Quick Fixes](#-quick-fixes-common-issues) above
2. **Examples**: Study working examples in the `examples/` directory
3. **Issues**: Search [GitHub Issues](https://github.com/Abiorh001/omnicoreagent/issues) for similar problems
4. **New Issue**: [Create a new issue](https://github.com/Abiorh001/omnicoreagent/issues/new) with detailed information

---
## 🀝 Contributing

We welcome contributions to OmniCoreAgent! Here's how you can help:

### Development Setup

```bash
# Fork and clone the repository
git clone https://github.com/abiorh001/omnicoreagent.git
cd omnicoreagent

# Set up development environment
uv venv
source .venv/bin/activate
uv sync --dev

# Install pre-commit hooks
pre-commit install
```

### Contribution Areas

- **OmniAgent System**: Custom agents, local tools, background processing
- **MCPOmni Connect**: MCP client features, transport protocols, authentication
- **Shared Infrastructure**: Memory systems, vector databases, event handling
- **Documentation**: Examples, tutorials, API documentation
- **Testing**: Unit tests, integration tests, performance tests

### Pull Request Process

1. Create a feature branch from `main`
2. Make your changes with tests
3. Run the test suite: `pytest tests/ -v`
4. Update documentation as needed
5. Submit a pull request with a clear description

### Code Standards

- Python 3.10+ compatibility
- Type hints for all public APIs
- Comprehensive docstrings
- Unit tests for new functionality
- Follow existing code style

---

## πŸ“– Documentation

Complete documentation is available at: **[OmniCoreAgent Docs](https://abiorh001.github.io/omnicoreagent)**

### Documentation Structure

- **Getting Started**: Quick setup and first steps
- **OmniAgent Guide**: Custom agent development
- **MCPOmni Connect Guide**: MCP client usage
- **API Reference**: Complete code documentation
- **Examples**: Working code examples
- **Advanced Topics**: Vector databases, tracing, production deployment

### Build Documentation Locally

```bash
# Install documentation dependencies
pip install mkdocs mkdocs-material

# Serve documentation locally
mkdocs serve
# Open http://127.0.0.1:8000

# Build static documentation
mkdocs build
```

### Contributing to Documentation

Documentation improvements are always welcome:

- Fix typos or unclear explanations
- Add new examples or use cases
- Improve existing tutorials
- Translate to other languages

---

## Demo

![omnicoreagent-demo-MadewithClipchamp-ezgif com-optimize](https://github.com/user-attachments/assets/9c4eb3df-d0d5-464c-8815-8f7415a47fce)

---

## πŸ“„ License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## πŸ“¬ Contact & Support

- **Author**: Abiola Adeshina
- **Email**: abiolaadedayo1993@gmail.com
- **GitHub**: [https://github.com/Abiorh001/omnicoreagent](https://github.com/Abiorh001/omnicoreagent)
- **Issues**: [Report a bug or request a feature](https://github.com/Abiorh001/omnicoreagent/issues)
- **Discussions**: [Join the community](https://github.com/Abiorh001/omnicoreagent/discussions)

### Support Channels

- **GitHub Issues**: Bug reports and feature requests
- **GitHub Discussions**: General questions and community support
- **Email**: Direct contact for partnership or enterprise inquiries

---

<p align="center">
  <strong>Built with ❀️ by the OmniCoreAgent Team</strong><br>
  <em>Empowering developers to build the next generation of AI applications</em>
</p>
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "OmniCoreAgent AI Framework - Universal MCP Client with multi-transport support and LLM-powered tool routing",
    "version": "0.2.9",
    "project_urls": {
        "Issues": "https://github.com/Abiorh001/mcp_omni_connect/issues",
        "Repository": "https://github.com/Abiorh001/mcp_omni_connect"
    },
    "split_keywords": [
        "agent",
        " ai",
        " automation",
        " framework",
        " git",
        " llm",
        " mcp"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "fd5af43064a9ba88baa0f934da23ef84b49ecb8d0d9744af0d5ba205db6b1e5f",
                "md5": "62b3eaf531eaa57d44d35823ccdf3b8f",
                "sha256": "9491126675ce69eb63a8d3afc2a580ef9d7f3ccaa7158f5223a614a0039c0f12"
            },
            "downloads": -1,
            "filename": "omnicoreagent-0.2.9-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "62b3eaf531eaa57d44d35823ccdf3b8f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 213941,
            "upload_time": "2025-10-27T10:28:53",
            "upload_time_iso_8601": "2025-10-27T10:28:53.353419Z",
            "url": "https://files.pythonhosted.org/packages/fd/5a/f43064a9ba88baa0f934da23ef84b49ecb8d0d9744af0d5ba205db6b1e5f/omnicoreagent-0.2.9-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "598455083e48b25b6807dee9b0b07884a55deaa21e1999cf53ed2f75a0dd9570",
                "md5": "0940482edf7ce83a731b66bd2ac592ec",
                "sha256": "8d64b92fdbd15bbfcc3a87a41bda29aa6c2ae72b28c2e4a0851c70ca2ca50cc5"
            },
            "downloads": -1,
            "filename": "omnicoreagent-0.2.9.tar.gz",
            "has_sig": false,
            "md5_digest": "0940482edf7ce83a731b66bd2ac592ec",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 199883,
            "upload_time": "2025-10-27T10:28:55",
            "upload_time_iso_8601": "2025-10-27T10:28:55.343983Z",
            "url": "https://files.pythonhosted.org/packages/59/84/55083e48b25b6807dee9b0b07884a55deaa21e1999cf53ed2f75a0dd9570/omnicoreagent-0.2.9.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-27 10:28:55",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Abiorh001",
    "github_project": "mcp_omni_connect",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "omnicoreagent"
}
        
Elapsed time: 4.13977s