agentic-orchestra


Name: agentic-orchestra
Version: 0.1.0
Summary: The easiest way to create and orchestrate multi-agent fleets
Upload time: 2025-08-20 21:13:24
Author: Agent Orchestra Contributors
Requires Python: >=3.11
License: MIT
Keywords: mcp, ai, agents, telemetry, policy, observability, orchestration
# Agent Orchestra: Production-Ready Multi-Agent Orchestration Platform

**Agent Orchestra** is a production-grade, open-source framework for building sophisticated multi-agent workflows with enterprise-level features. Built on top of the Model Context Protocol (MCP), it provides advanced orchestration, rate limiting, agent pooling, and comprehensive observability for real-world AI applications.

## 🚀 **Production-Ready Features**

Agent Orchestra has been battle-tested and includes all the polish improvements needed for real-world deployment:

- **🏊 Profile-Based Agent Pooling** - Intelligent agent reuse with race-safe creation and no duplicate initialization
- **⚡ Global Rate Limiting** - Per-model RPM/RPD limits with 429-aware retries and jittered exponential backoff
- **🔀 Multi-Server Routing** - Single MCP client with dynamic server-name routing per workflow node
- **🛡️ Security & Safety** - Path validation, directory traversal prevention, and secure parameter handling
- **🎯 Advanced Orchestration** - DAG workflows with concurrent `foreach`, intelligent `reduce`, and conditional `gate` nodes
- **📊 Comprehensive Telemetry** - Event-driven architecture with structured logging and performance metrics
- **🧹 Clean Async Management** - Proper resource lifecycle with graceful startup/shutdown

## 🏗️ **Architecture Overview**

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Orchestrator  │───▶│   MCPExecutor    │───▶│   AgentPool     │
└─────────────────┘    └──────────────────┘    └─────────────────┘
         │                       │                       │
         │                       ▼                       ▼
         │              ┌──────────────────┐    ┌─────────────────┐
         │              │   CallBroker     │    │ SidecarMCPAgent │
         │              │  (Rate Limiting) │    │ (with Telemetry)│
         │              └──────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       │                       ▼
┌─────────────────┐              │              ┌─────────────────┐
│   GraphSpec     │              │              │ SidecarMCPClient│
│   (Workflow)    │              │              │ (Multi-Server)  │
└─────────────────┘              │              └─────────────────┘
                                 │                       │
                                 ▼                       ▼
                        ┌──────────────────┐    ┌─────────────────┐
                        │   Broker Stats   │    │   MCP Servers   │
                        │   (Monitoring)   │    │  (fs, web, etc) │
                        └──────────────────┘    └─────────────────┘
```

## 🎯 **Key Components**

### **Orchestrator**
The central workflow engine that executes DAG-based workflows with support for:
- **Task Nodes** - Single agent operations
- **Foreach Nodes** - Concurrent batch processing with configurable concurrency
- **Reduce Nodes** - Intelligent aggregation of multiple results
- **Gate Nodes** - Conditional workflow control (see the sketch below)
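
The `gate` type is the one node kind not exercised by the larger workflow example later in this README, so here is a minimal sketch of a conditional gate sitting between two tasks. Only `GraphSpec`, `NodeSpec`, `id`, `type`, `inputs`, and the `from`/`instruction` keys are taken from this document's examples; the `predicate` input key is an assumption for illustration, not a documented schema.

```python
# Minimal sketch of a gate node between two tasks ("predicate" is hypothetical).
from agent_orchestra.orchestrator.types import GraphSpec, NodeSpec

gated_workflow = GraphSpec(
    nodes=[
        NodeSpec(
            id="draft_summary",
            type="task",
            inputs={"instruction": "Summarize the quarterly numbers"},
        ),
        NodeSpec(
            id="quality_gate",
            type="gate",
            inputs={
                "from": "draft_summary",
                # Hypothetical condition: only continue when a draft exists.
                "predicate": "len(output) > 0",
            },
        ),
        NodeSpec(
            id="publish_summary",
            type="task",
            inputs={"from": "quality_gate", "instruction": "Publish the summary"},
        ),
    ],
    edges=[("draft_summary", "quality_gate"), ("quality_gate", "publish_summary")],
)
```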

### **AgentPool (Profile-Based)**
Production-grade agent management with:
- **Profile Keys** - `(server_name, model_key, policy_id)` for precise agent categorization
- **Race-Safe Creation** - Double-checked locking prevents duplicate agent initialization (see the pattern sketch below)
- **Agent Reuse** - Automatic sharing of agents across workflow nodes with same profile
- **Resource Limits** - Configurable max agents per run with automatic cleanup
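
The race-safe creation above follows the classic double-checked locking pattern. The sketch below is a generic illustration of that pattern with a hypothetical `ProfileCache` class, not AgentPool's actual implementation.

```python
# Double-checked locking sketch: check the cache without the lock (fast path),
# then re-check under the lock before building, so concurrent callers for the
# same profile key never create two agents.
import asyncio
from typing import Any, Awaitable, Callable

class ProfileCache:
    def __init__(self, factory: Callable[[tuple], Awaitable[Any]]) -> None:
        self._factory = factory
        self._agents: dict[tuple, Any] = {}
        self._lock = asyncio.Lock()

    async def get(self, profile_key: tuple) -> Any:
        agent = self._agents.get(profile_key)      # first check, lock-free
        if agent is not None:
            return agent
        async with self._lock:
            agent = self._agents.get(profile_key)  # second check, under the lock
            if agent is None:
                agent = await self._factory(profile_key)
                self._agents[profile_key] = agent
            return agent
```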

### **CallBroker (Rate Limiting)**
Global rate limiting system with:
- **Per-Model Limits** - Separate RPM, RPD, and concurrency limits per model
- **429-Aware Retries** - Automatic retry with jittered exponential backoff (see the backoff sketch below)
- **Sliding Window Counters** - Precise rate tracking with time-based windows
- **Request Queuing** - Fair scheduling across multiple agents
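
The retry behavior can be pictured with the generic pattern below; the `RateLimitError` placeholder, delay schedule, and helper name are illustrative assumptions rather than CallBroker internals.

```python
# 429-aware retry with jittered exponential backoff (full jitter): on each
# rate-limit error, sleep a random duration up to base * 2**attempt, capped.
import asyncio
import random
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")

class RateLimitError(Exception):
    """Stand-in for a provider 429 response."""

async def call_with_backoff(
    call: Callable[[], Awaitable[T]],
    max_attempts: int = 5,
    base_delay: float = 1.0,
    max_delay: float = 30.0,
) -> T:
    for attempt in range(max_attempts):
        try:
            return await call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            delay = min(max_delay, base_delay * (2 ** attempt))
            await asyncio.sleep(random.uniform(0.0, delay))
    raise RuntimeError("unreachable")
```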

### **MCPExecutor (Multi-Server)**
Enhanced executor with:
- **Server-Name Routing** - Dynamic routing to different MCP servers per node
- **Parameter Filtering** - Clean parameter handling for backward compatibility
- **Output Capture** - Enhanced result processing with fallback to text
- **Streaming Support** - Real-time chunk processing with telemetry

### **SidecarMCPAgent (Enhanced)**
Drop-in replacement for `mcp-use` MCPAgent with:
- **Telemetry Integration** - Comprehensive event emission and performance tracking
- **Parameter Safety** - Secure handling of server_name and other routing parameters
- **Enhanced Error Handling** - Better error reporting and recovery
- **Full API Compatibility** - 100% compatible with existing `mcp-use` code

## 📦 **Installation**

### Prerequisites
- Python 3.11+
- Node.js 18+ (for MCP servers)
- OpenAI API key (or other LLM provider)

### Install Agent Orchestra
```bash
pip install agent-orchestra
```

### Install MCP Servers
```bash
# Filesystem server
npm install -g @modelcontextprotocol/server-filesystem

# Web browser server  
npm install -g @modelcontextprotocol/server-playwright

# Or use npx to run without global install
npx @modelcontextprotocol/server-filesystem --help
```

## 🚀 **Quick Start**

### Simple Agent Usage (Drop-in Replacement)
```python
import asyncio
from agent_orchestra import SidecarMCPClient, SidecarMCPAgent
from langchain_openai import ChatOpenAI

async def simple_example():
    # Configure MCP client
    config = {
        "mcpServers": {
            "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", 
                        "--stdio", "--root", "/tmp"]
            }
        }
    }
    
    client = SidecarMCPClient.from_dict(config)
    llm = ChatOpenAI(model="gpt-4o-mini")
    agent = SidecarMCPAgent(llm=llm, client=client)
    
    result = await agent.run("List the files in the current directory")
    print(result)
    
    await client.close_all_sessions()

asyncio.run(simple_example())
```

### Production Workflow with All Features
```python
import asyncio
from agent_orchestra import SidecarMCPClient, SidecarMCPAgent
from agent_orchestra.orchestrator.core import Orchestrator
from agent_orchestra.orchestrator.types import GraphSpec, NodeSpec, RunSpec
from agent_orchestra.orchestrator.executors_mcp import MCPExecutor
from agent_orchestra.orchestrator.broker_config import create_development_broker
from agent_orchestra.orchestrator.agent_pool import AgentPool, create_default_agent_factory
from langchain_openai import ChatOpenAI

async def production_workflow():
    # Multi-server MCP configuration
    config = {
        "mcpServers": {
            "fs_business": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", 
                        "--stdio", "--root", "/business/data"]
            },
            "fs_reports": {
                "command": "npx", 
                "args": ["-y", "@modelcontextprotocol/server-filesystem",
                        "--stdio", "--root", "/reports/output"]
            },
            "web": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-playwright", "--stdio"]
            }
        }
    }
    
    # Create production components
    client = SidecarMCPClient.from_dict(config)
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)
    
    # Profile-based agent pool
    agent_factory = create_default_agent_factory(client, llm)
    agent_pool = AgentPool(agent_factory, max_agents_per_run=10)
    
    # Global rate limiting
    broker = create_development_broker()
    
    # Production-ready executor
    executor = MCPExecutor(
        agent=None,  # No template agent needed
        default_server="fs_business",
        broker=broker,
        agent_pool=agent_pool,
        model_key="openai:gpt-4o-mini"
    )
    
    orchestrator = Orchestrator(executor)
    
    # Define multi-server workflow
    workflow = GraphSpec(
        nodes=[
            # Concurrent data processing
            NodeSpec(
                id="read_sales_data",
                type="foreach",
                server_name="fs_business",  # Route to business filesystem
                inputs={
                    "items": ["sales.json", "marketing.json", "operations.json"],
                    "instruction": "Read and summarize each business file"
                },
                concurrency=3
            ),
            
            # Cross-department analysis  
            NodeSpec(
                id="analyze_trends",
                type="reduce",
                inputs={
                    "from_ids": ["read_sales_data"],
                    "instruction": "Analyze trends across all departments"
                }
            ),
            
            # Web research for market context
            NodeSpec(
                id="market_research",
                type="task",
                server_name="web",  # Route to web browser
                inputs={
                    "from": "analyze_trends",
                    "instruction": "Research current market trends for context"
                }
            ),
            
            # Save final report
            NodeSpec(
                id="save_report",
                type="task", 
                server_name="fs_reports",  # Route to reports filesystem
                inputs={
                    "from": "market_research",
                    "instruction": "Create executive summary and save as report.pdf"
                }
            )
        ],
        edges=[
            ("read_sales_data", "analyze_trends"),
            ("analyze_trends", "market_research"),
            ("market_research", "save_report")
        ]
    )
    
    run_spec = RunSpec(
        run_id="business_analysis_001",
        goal="Multi-department business analysis with market research"
    )
    
    # Execute with full observability
    print("🚀 Starting production workflow...")
    async for event in orchestrator.run_streaming(workflow, run_spec):
        if event.type == "NODE_START":
            print(f"🔄 Starting {event.node_id}")
        elif event.type == "NODE_COMPLETE":
            print(f"✅ Completed {event.node_id}")
        elif event.type == "AGENT_CHUNK":
            print(f"   🧠 Agent progress: {event.data.get('content', '')[:50]}...")
        elif event.type == "RUN_COMPLETE":
            print("🎉 Workflow completed successfully!")
    
    # Get production metrics
    broker_stats = await broker.get_stats()
    pool_stats = await agent_pool.get_pool_stats()
    
    print("\n📊 Production Metrics:")
    print(f"   🏊 Agent profiles created: {len(pool_stats['profiles'])}")
    for profile_key, profile_info in pool_stats['profiles'].items():
        server = profile_info['server_name'] or 'default'
        usage = profile_info['usage_count']
        print(f"      {server} server: {usage} uses")
    
    for model, stats in broker_stats.items():
        if stats['rpm_used'] > 0:
            print(f"   📈 {model}: {stats['rpm_used']}/{stats['rpm_limit']} RPM used")
    
    # Clean shutdown
    await broker.shutdown()
    await agent_pool.shutdown()
    await client.close_all_sessions()

# Entry point
if __name__ == "__main__":
    asyncio.run(production_workflow())
```

## 📊 **Examples**

The `examples/` directory contains production-ready demonstrations:

### **Production Examples**
- **`polished_part4_demo.py`** - Complete production workflow with all features
- **`polished_simple_demo.py`** - Simple demo without complex MCP setup  
- **`polished_verification_demo.py`** - Verification of all polish improvements
- **`part4_complete_demo.py`** - CallBroker + AgentPool integration

### **Core Feature Examples**
- **`basic_orchestration.py`** - Simple DAG workflow
- **`foreach_example.py`** - Concurrent batch processing
- **`reduce_example.py`** - Data aggregation patterns
- **`gate_example.py`** - Conditional workflow control

### **Integration Examples**
- **`multi_server_example.py`** - Multiple MCP servers in one workflow
- **`rate_limiting_example.py`** - CallBroker rate limiting demonstration
- **`agent_pooling_example.py`** - AgentPool management patterns

## 🧪 **Testing**

Agent Orchestra includes comprehensive test coverage:

```bash
# Run all tests
python -m pytest tests/ -v

# Run specific test categories
python -m pytest tests/test_polish_improvements.py -v  # Production features
python -m pytest tests/test_orchestration.py -v       # Core orchestration
python -m pytest tests/test_agent_pool.py -v          # Agent management
```

**Test Coverage:**
- **Polish Improvements** - All 10 production-ready improvements
- **Race Conditions** - Concurrent agent creation safety (see the test sketch below)
- **Path Validation** - Security and directory traversal prevention
- **Rate Limiting** - Global rate limiting across multiple agents
- **Multi-Server** - Server routing and profile management
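
As an illustration of the race-condition coverage above, a test along these lines (a sketch that assumes `pytest-asyncio` and uses the `AgentPool`/`AgentSpec` API shown in the Configuration section below) fires many concurrent `pool.get()` calls for one profile and expects a single factory invocation:

```python
# Sketch of a concurrent-creation test: 20 simultaneous gets for the same
# profile should result in exactly one factory call and one shared agent.
import asyncio
import pytest
from agent_orchestra.orchestrator.agent_pool import AgentPool, AgentSpec

@pytest.mark.asyncio
async def test_concurrent_get_creates_single_agent():
    created = 0

    async def counting_factory(spec: AgentSpec):
        nonlocal created
        created += 1
        await asyncio.sleep(0.01)  # widen the race window
        return object()            # dummy agent stand-in for the test

    pool = AgentPool(counting_factory, max_agents_per_run=5)
    spec = AgentSpec(
        server_name="fs_business",
        model_key="openai:gpt-4o-mini",
        policy_id="standard",
    )

    agents = await asyncio.gather(*(pool.get(spec, "run_test") for _ in range(20)))

    assert created == 1
    assert all(a is agents[0] for a in agents)
```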

## 🔧 **Configuration**

### **CallBroker Configuration**
```python
from agent_orchestra.orchestrator.call_broker import CallBroker, ModelLimits

# Custom rate limits
limits = {
    "openai:gpt-4": ModelLimits(rpm=60, rpd=1000, max_concurrency=10),
    "openai:gpt-4o-mini": ModelLimits(rpm=200, rpd=5000, max_concurrency=20),
    "anthropic:claude-3": ModelLimits(rpm=50, rpd=800, max_concurrency=5)
}

broker = CallBroker(limits)
```

### **AgentPool Configuration**  
```python
from agent_orchestra.orchestrator.agent_pool import AgentPool, AgentSpec

# Profile-based agent management
async def custom_factory(spec: AgentSpec):
    # Custom agent creation logic based on spec
    return SidecarMCPAgent(...)

pool = AgentPool(custom_factory, max_agents_per_run=15)

# Get agent for specific profile
spec = AgentSpec(
    server_name="fs_business",
    model_key="openai:gpt-4",
    policy_id="standard"
)
agent = await pool.get(spec, "run_123")
```

### **Multi-Server MCP Configuration**
```python
from agent_orchestra.orchestrator.fs_utils import create_multi_server_config

configs = {
    "fs_sales": {"root": "/data/sales"},
    "fs_reports": {"root": "/data/reports"},
    "playwright": {"type": "playwright"},
    "custom_server": {
        "command": "python",
        "args": ["-m", "my_custom_server", "--stdio"]
    }
}

mcp_config = create_multi_server_config(configs)
```

## 🛡️ **Security Features**

### **Path Validation**
```python
from pathlib import Path

from agent_orchestra.orchestrator.fs_utils import fs_args

# Safe path handling with directory traversal prevention
root = Path("/safe/root")
try:
    safe_args = fs_args(root, "../../etc/passwd")  # Raises ValueError
except ValueError as e:
    print(f"Security violation prevented: {e}")
```
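
For intuition, the check `fs_args` performs can be approximated by resolving the candidate path against the root and requiring it to stay inside it. The helper below is a sketch of that idea, not the library's implementation.

```python
# Path-containment sketch: resolve the candidate under the root and reject
# anything (e.g. "../../etc/passwd") that escapes it.
from pathlib import Path

def resolve_under_root(root: Path, candidate: str) -> Path:
    resolved = (root / candidate).resolve()
    try:
        resolved.relative_to(root.resolve())  # raises ValueError if outside root
    except ValueError:
        raise ValueError(f"path escapes root {root}: {candidate!r}")
    return resolved
```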

### **Parameter Filtering**
Agent Orchestra automatically filters potentially unsafe parameters before passing them to underlying MCP agents, ensuring backward compatibility while maintaining security.
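
As a rough sketch of what that filtering looks like (the `ROUTING_KEYS` set and helper name are illustrative assumptions, not the library's actual code):

```python
# Strip orchestration-only keys (such as server_name) before forwarding
# keyword arguments to an underlying agent that does not accept them.
ROUTING_KEYS = {"server_name", "policy_id"}

def filter_agent_kwargs(kwargs: dict) -> dict:
    """Return only the keyword arguments the underlying MCP agent should see."""
    return {k: v for k, v in kwargs.items() if k not in ROUTING_KEYS}

call_kwargs = {"server_name": "fs_business", "max_steps": 5}
safe_kwargs = filter_agent_kwargs(call_kwargs)  # {"max_steps": 5}
```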

## 📈 **Performance & Monitoring**

### **Built-in Metrics**
```python
# Get real-time broker statistics
broker_stats = await broker.get_stats()
print(f"RPM usage: {broker_stats['openai:gpt-4']['rpm_used']}")

# Get agent pool statistics  
pool_stats = await agent_pool.get_pool_stats()
print(f"Active agents: {pool_stats['total_agents']}")
print(f"Profile usage: {pool_stats['profiles']}")
```

### **Event-Driven Telemetry**
```python
# Access structured events during execution
async for event in orchestrator.run_streaming(workflow, run_spec):
    if event.type == "AGENT_CHUNK":
        # Log or emit to external monitoring
        telemetry_system.emit({
            "timestamp": event.timestamp,
            "node_id": event.node_id, 
            "content": event.data
        })
```

## 🤝 **Migration from mcp-use**

Agent Orchestra is designed as a drop-in replacement. To migrate:

1. **Replace imports:**
   ```python
   # Old
   from mcp_use import MCPClient, MCPAgent
   
   # New  
   from agent_orchestra import SidecarMCPClient as MCPClient
   from agent_orchestra import SidecarMCPAgent as MCPAgent
   ```

2. **Optional: Add production features:**
   ```python
   # Add rate limiting
   from agent_orchestra.orchestrator.broker_config import create_development_broker
   broker = create_development_broker()
   
   # Add agent pooling
   from agent_orchestra.orchestrator.agent_pool import AgentPool
   pool = AgentPool(agent_factory)
   ```

3. **Optional: Use orchestration:**
   ```python
   # Define workflows instead of sequential calls
   from agent_orchestra.orchestrator import Orchestrator, GraphSpec, NodeSpec
   ```

## 📚 **Documentation**

- **[Architecture Guide](docs/ARCHITECTURE.md)** - System design and component overview
- **[Production Deployment](docs/DEPLOYMENT.md)** - Best practices for production use
- **[API Reference](docs/API.md)** - Comprehensive API documentation
- **[Migration Guide](docs/MIGRATION.md)** - Detailed migration from mcp-use
- **[Performance Tuning](docs/PERFORMANCE.md)** - Optimization strategies

## 🎯 **Production Readiness Checklist**

Agent Orchestra has been thoroughly tested and includes all features needed for production deployment:

- ✅ **Race-safe agent creation** with double-checked locking
- ✅ **Global rate limiting** with 429-aware retries
- ✅ **Profile-based agent pooling** with automatic cleanup
- ✅ **Multi-server routing** with parameter filtering
- ✅ **Security validations** preventing directory traversal
- ✅ **Comprehensive error handling** with graceful degradation
- ✅ **Resource lifecycle management** with proper async cleanup
- ✅ **Production monitoring** with structured events and metrics
- ✅ **Backward compatibility** with existing mcp-use code
- ✅ **Comprehensive test coverage** including race conditions

## 🛠️ **Development**

### **Setup Development Environment**
```bash
git clone https://github.com/your-org/agent-orchestra
cd agent-orchestra
pip install -e .
pip install -r requirements-dev.txt
```

### **Run Tests**
```bash
python -m pytest tests/ -v --cov=agent_orchestra
```

### **Code Quality**
```bash
ruff check .                    # Linting
ruff format .                   # Formatting  
mypy src/agent_orchestra/       # Type checking
```

## 📄 **License**

Agent Orchestra is licensed under the [MIT License](LICENSE).

## 🙏 **Contributing**

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

**Key Areas for Contribution:**
- Additional MCP server integrations
- Enhanced telemetry and monitoring features
- Performance optimizations
- Documentation improvements
- Example workflows and use cases

## 🌟 **Roadmap**

**Upcoming Features:**
- OpenTelemetry integration for distributed tracing
- Human-in-the-loop (HITL) workflow nodes
- Advanced policy enforcement with RBAC
- Workflow versioning and rollback
- Distributed execution across multiple nodes
- Enhanced security with request signing

---

**Agent Orchestra: Production-Ready Multi-Agent Orchestration** 🎼

*Built for enterprises, loved by developers.*

            
