llm-orchestra

Name: llm-orchestra
Version: 0.9.1
Summary: Multi-agent LLM communication system with ensemble orchestration
Upload time: 2025-07-26 05:53:32
Requires Python: >=3.11
License: MIT (Copyright (c) 2025 Nathan Green)
Keywords: agents, ai, ensemble, llm, multi-agent, orchestration
Requirements: No requirements were recorded.

# LLM Orchestra

[![PyPI version](https://badge.fury.io/py/llm-orchestra.svg)](https://badge.fury.io/py/llm-orchestra)
[![CI](https://github.com/mrilikecoding/llm-orc/workflows/CI/badge.svg)](https://github.com/mrilikecoding/llm-orc/actions)
[![codecov](https://codecov.io/gh/mrilikecoding/llm-orc/graph/badge.svg?token=FWHP257H9E)](https://codecov.io/gh/mrilikecoding/llm-orc)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Downloads](https://static.pepy.tech/badge/llm-orchestra)](https://pepy.tech/project/llm-orchestra)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/mrilikecoding/llm-orc)](https://github.com/mrilikecoding/llm-orc/releases)

A multi-agent LLM communication system for ensemble orchestration and intelligent analysis.

## Overview

LLM Orchestra lets you coordinate multiple AI agents for complex analysis tasks. Run code reviews with security and performance specialists, analyze architecture decisions from multiple angles, or get systematic coverage of any multi-faceted problem.

Mix expensive cloud models with free local models - use Claude for strategic insights while Llama3 handles systematic analysis tasks.

## Key Features

- **Multi-Agent Ensembles**: Coordinate specialized agents with flexible dependency graphs
- **Agent Dependencies**: Define which agents depend on others for sophisticated orchestration patterns
- **Model Profiles**: Simplified configuration with named shortcuts for model, provider, system prompt, and timeout combinations
- **Cost Optimization**: Mix expensive and free models based on what each task needs
- **Streaming Output**: Real-time progress updates during ensemble execution
- **CLI Interface**: Simple commands with piping support (`cat code.py | llm-orc invoke code-review`)
- **Secure Authentication**: Encrypted API key storage with easy credential management
- **YAML Configuration**: Easy ensemble setup with readable config files
- **Usage Tracking**: Token counting, cost estimation, and timing metrics

## Installation

### Option 1: Homebrew (macOS - Recommended)
```bash
# Add the tap
brew tap mrilikecoding/llm-orchestra

# Install LLM Orchestra
brew install llm-orchestra

# Verify installation
llm-orc --version
```

### Option 2: pip (All Platforms)
```bash
# Install from PyPI
pip install llm-orchestra

# Verify installation
llm-orc --version
```

### Option 3: Development Installation
```bash
# Clone the repository
git clone https://github.com/mrilikecoding/llm-orc.git
cd llm-orc

# Install with development dependencies
uv sync --dev

# Verify installation
uv run llm-orc --version
```

### Updates
```bash
# Homebrew users
brew update && brew upgrade llm-orchestra

# pip users
pip install --upgrade llm-orchestra
```

## Quick Start

### 1. Set Up Authentication

Before using LLM Orchestra, configure authentication for your LLM providers:

```bash
# Interactive setup wizard (recommended for first-time users)
llm-orc auth setup

# Or add providers individually
llm-orc auth add anthropic --api-key YOUR_ANTHROPIC_KEY
llm-orc auth add google --api-key YOUR_GOOGLE_KEY

# OAuth for Claude Pro/Max users
llm-orc auth add anthropic-claude-pro-max

# List configured providers
llm-orc auth list

# Remove a provider if needed
llm-orc auth remove anthropic
```

**Security**: API keys are encrypted and stored securely in `~/.config/llm-orc/credentials.yaml`.

### 2. Configuration Options

LLM Orchestra supports both global and local configurations:

#### Global Configuration
Create `~/.config/llm-orc/ensembles/code-review.yaml`:

```yaml
name: code-review
description: Multi-perspective code review ensemble

agents:
  - name: security-reviewer
    model_profile: free-local
    system_prompt: "You are a security analyst. Focus on identifying security vulnerabilities, authentication issues, and potential attack vectors."

  - name: performance-reviewer
    model_profile: free-local
    system_prompt: "You are a performance analyst. Focus on identifying bottlenecks, inefficient algorithms, and scalability issues."

  - name: quality-reviewer
    model_profile: free-local
    system_prompt: "You are a code quality analyst. Focus on maintainability, readability, and best practices."

  - name: senior-reviewer
    model_profile: default-claude
    depends_on: [security-reviewer, performance-reviewer, quality-reviewer]
    system_prompt: |
      You are a senior engineering lead. Synthesize the security, performance,
      and quality analysis into actionable recommendations.
    output_format: json
```

#### Local Project Configuration
For project-specific ensembles, initialize local configuration:

```bash
# Initialize local configuration in your project
llm-orc config init

# This creates .llm-orc/ directory with:
# - ensembles/   (project-specific ensembles)
# - models/      (shared model configurations)
# - scripts/     (project-specific scripts)
# - config.yaml  (project configuration)
```
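
As a minimal sketch of a project-local ensemble (the file name, agent name, and prompt are illustrative; `free-local` assumes a profile like the one defined under Model Profiles below), you could add:

```yaml
# .llm-orc/ensembles/quick-check.yaml
name: quick-check
description: Single-agent sanity check for this project

agents:
  - name: checker
    model_profile: free-local
    system_prompt: "Briefly sanity-check the input and flag obvious problems."
```

Once saved, it should show up in `llm-orc list-ensembles` when run from the project directory.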

#### View Current Configuration
```bash
# Check configuration status with visual indicators
llm-orc config check
```

### 3. Using LLM Orchestra

#### Basic Usage
```bash
# List available ensembles
llm-orc list-ensembles

# List available model profiles
llm-orc list-profiles

# Get help for any command
llm-orc --help
llm-orc invoke --help
```

#### Invoke Ensembles
```bash
# Analyze code from a file (pipe input)
cat mycode.py | llm-orc invoke code-review

# Provide input directly
llm-orc invoke code-review --input "Review this function: def add(a, b): return a + b"

# JSON output for integration with other tools
llm-orc invoke code-review --input "..." --output-format json

# Use specific configuration directory
llm-orc invoke code-review --config-dir ./custom-config

# Streaming progress is enabled by default; pass --streaming to request it explicitly
llm-orc invoke code-review --streaming
```
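
Because invocation reads stdin and can emit JSON, it composes with ordinary shell pipelines. A sketch (assuming `jq` is installed; the exact JSON schema depends on your ensemble and llm-orc version):

```bash
# Review only the staged changes in a git repository
git diff --cached | llm-orc invoke code-review

# Capture machine-readable results and pretty-print them for inspection
git diff --cached | llm-orc invoke code-review --output-format json | jq .
```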

#### Configuration Management
```bash
# Initialize local project configuration
llm-orc config init --project-name my-project

# Check configuration status with visual indicators
llm-orc config check                # Global + local status with legend
llm-orc config check-global        # Global configuration only  
llm-orc config check-local         # Local project configuration only

# Reset configurations with safety options
llm-orc config reset-global        # Reset global config (backup + preserve auth by default)
llm-orc config reset-local         # Reset local config (backup + preserve ensembles by default)

# Advanced reset options
llm-orc config reset-global --no-backup --reset-auth       # Complete reset including auth
llm-orc config reset-local --reset-ensembles --no-backup   # Reset including ensembles
```

## Ensemble Library

Looking for pre-built ensembles? Check out the [LLM Orchestra Library](https://github.com/mrilikecoding/llm-orchestra-library) - a curated collection of analytical ensembles for code review, research analysis, decision support, and more.

### Library CLI Commands

LLM Orchestra includes built-in commands to browse and copy ensembles from the library:

```bash
# Browse all available categories
llm-orc library categories
llm-orc l categories  # Using alias

# Browse ensembles in a specific category
llm-orc library browse code-analysis

# Show detailed information about an ensemble
llm-orc library show code-analysis/security-review

# Copy an ensemble to your local configuration
llm-orc library copy code-analysis/security-review

# Copy an ensemble to your global configuration
llm-orc library copy code-analysis/security-review --global
```
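
A typical discovery-to-invocation flow chains these commands; the final `list-ensembles` step confirms the copied ensemble's name before you invoke it:

```bash
llm-orc library browse code-analysis                 # find a candidate
llm-orc library show code-analysis/security-review   # inspect it
llm-orc library copy code-analysis/security-review   # copy into local config
llm-orc list-ensembles                               # confirm the name to invoke
```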

## Use Cases

### Code Review
Get systematic analysis across security, performance, and maintainability dimensions. Each agent focuses on their specialty while synthesis provides actionable recommendations.

### Architecture Review  
Analyze system designs from scalability, security, performance, and reliability perspectives. Identify bottlenecks and suggest architectural patterns.

### Product Strategy
Evaluate business decisions from market, financial, competitive, and user experience angles. Get comprehensive analysis for complex strategic choices.

### Research Analysis
Systematic literature review, methodology evaluation, or multi-dimensional analysis of research questions.

## Model Support

- **Claude** (Anthropic) - Strategic analysis and synthesis
- **Gemini** (Google) - Multi-modal and reasoning tasks  
- **Ollama** - Local deployment of open-source models (Llama3, etc.)
- **Custom models** - Extensible interface for additional providers

## Configuration

### Model Profiles

Model profiles simplify ensemble configuration by providing named shortcuts for complete agent configurations including model, provider, system prompts, and timeouts:

```yaml
# In ~/.config/llm-orc/config.yaml or .llm-orc/config.yaml
model_profiles:
  free-local:
    model: llama3
    provider: ollama
    cost_per_token: 0.0
    system_prompt: "You are a helpful assistant that provides concise, accurate responses for local development and testing."
    timeout_seconds: 30

  default-claude:
    model: claude-sonnet-4-20250514
    provider: anthropic-claude-pro-max
    system_prompt: "You are an expert assistant that provides high-quality, detailed analysis and solutions."
    timeout_seconds: 60

  high-context:
    model: claude-3-5-sonnet-20241022
    provider: anthropic-api
    cost_per_token: 3.0e-06
    system_prompt: "You are an expert assistant capable of handling complex, multi-faceted problems with detailed analysis."
    timeout_seconds: 120

  small:
    model: claude-3-haiku-20240307
    provider: anthropic-api
    cost_per_token: 1.0e-06
    system_prompt: "You are a quick, efficient assistant that provides concise and accurate responses."
    timeout_seconds: 30
```

**Profile Benefits:**
- **Complete Agent Configuration**: Includes model, provider, system prompts, and timeout settings
- **Simplified Configuration**: Use `model_profile: default-claude` instead of explicit model + provider + system_prompt + timeout
- **Consistency**: Same profile names work across all ensembles with consistent behavior
- **Cost Tracking**: Built-in cost information for budgeting
- **Flexibility**: Local profiles override global ones (see the example below), and explicit agent configs override profile defaults
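
For example (a sketch; the model names are illustrative), a project can shadow a global profile simply by reusing its name in its local config:

```yaml
# ~/.config/llm-orc/config.yaml (global)
model_profiles:
  free-local:
    model: llama3
    provider: ollama
```

```yaml
# .llm-orc/config.yaml (local) -- redefining the same name wins inside this project
model_profiles:
  free-local:
    model: llama3.1
    provider: ollama
```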

**Usage in Ensembles:**
```yaml
agents:
  - name: bulk-analyzer
    model_profile: free-local     # Complete config: model, provider, prompt, timeout
  - name: expert-reviewer
    model_profile: default-claude # High-quality config with appropriate timeout
  - name: document-processor
    model_profile: high-context   # Large context processing with extended timeout
    system_prompt: "Custom prompt override"  # Overrides profile default
```

**Override Behavior:**
Explicit agent configuration takes precedence over model profile defaults:
```yaml
agents:
  - name: custom-agent
    model_profile: free-local
    system_prompt: "Custom prompt"  # Overrides profile system_prompt
    timeout_seconds: 60            # Overrides profile timeout_seconds
```

### Ensemble Configuration
Ensemble configurations support:

- **Model profiles** for simplified, consistent model selection
- **Agent specialization** with role-specific prompts
- **Agent dependencies** using `depends_on` for sophisticated orchestration
- **Dependency validation** with automatic cycle detection and missing dependency checks
- **Timeout management** per agent with performance configuration
- **Mixed model strategies** combining local and cloud models
- **Output formatting** (text, JSON) for integration
- **Streaming execution** with real-time progress updates

#### Agent Dependencies

The dependency-based architecture lets agents depend on other agents, enabling sophisticated orchestration patterns:

```yaml
agents:
  # Independent agents execute in parallel
  - name: security-reviewer
    model_profile: free-local
    system_prompt: "Focus on security vulnerabilities..."

  - name: performance-reviewer  
    model_profile: free-local
    system_prompt: "Focus on performance issues..."

  # Dependent agent waits for dependencies to complete
  - name: senior-reviewer
    model_profile: default-claude
    depends_on: [security-reviewer, performance-reviewer]
    system_prompt: "Synthesize the security and performance analysis..."
```

**Benefits:**
- **Flexible orchestration**: Create complex dependency graphs beyond simple coordinator patterns (sketched below)
- **Parallel execution**: Independent agents run concurrently for better performance  
- **Automatic validation**: Circular dependencies and missing dependencies are detected at load time
- **Better maintainability**: Clear, explicit dependencies instead of implicit coordinator relationships
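
Dependencies can also be chained into multi-level graphs rather than a single fan-in. A sketch of a two-level ensemble (all agent names and prompts here are illustrative):

```yaml
agents:
  # Level 0: independent reviewers run in parallel
  - name: api-reviewer
    model_profile: free-local
    system_prompt: "Review the public API surface for consistency..."

  - name: test-reviewer
    model_profile: free-local
    system_prompt: "Review test coverage and quality..."

  # Level 1: intermediate synthesis waits for level 0
  - name: risk-assessor
    model_profile: free-local
    depends_on: [api-reviewer, test-reviewer]
    system_prompt: "Rank the combined findings by risk..."

  # Level 2: final recommendation waits for the assessor
  - name: release-gatekeeper
    model_profile: default-claude
    depends_on: [risk-assessor]
    system_prompt: "Recommend whether this change is safe to ship..."
```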

### Configuration Status Checking

LLM Orchestra provides visual status checking to quickly see which configurations are ready to use:

```bash
# Check all configurations with visual indicators
llm-orc config check
```

**Visual Indicators:**
- 🟢 **Ready to use** - Profile/provider is properly configured and available
- 🟥 **Needs setup** - Profile references an unavailable provider or is missing authentication

**Provider Availability Detection:**
- **Authenticated providers** - Checks for valid API credentials
- **Ollama service** - Tests the connection to the local Ollama instance (localhost:11434); a manual probe is shown below
- **Configuration validation** - Verifies model profiles reference available providers
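
If a profile shows 🟥 because of Ollama, you can probe the service directly before re-running the check. This uses Ollama's standard local HTTP API, not an llm-orc command:

```bash
# A non-empty "models" list means Ollama is running and has models pulled
curl -s http://localhost:11434/api/tags
```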

**Example Output:**
```
Configuration Status Legend:
🟢 Ready to use    🟥 Needs setup

=== Global Configuration Status ===
📁 Model Profiles:
🟢 local-free (llama3 via ollama)
🟢 quality (claude-sonnet-4 via anthropic-claude-pro-max)  
🟥 high-context (claude-3-5-sonnet via anthropic-api)

🌐 Available Providers: anthropic-claude-pro-max, ollama

=== Local Configuration Status: My Project ===
📁 Model Profiles:
🟢 security-auditor (llama3 via ollama)
🟢 senior-reviewer (claude-sonnet-4 via anthropic-claude-pro-max)
```

### Configuration Reset Commands

LLM Orchestra provides safe configuration reset with backup and selective retention options:

```bash
# Reset global configuration (safe defaults)
llm-orc config reset-global        # Creates backup, preserves authentication

# Reset local configuration (safe defaults)  
llm-orc config reset-local         # Creates backup, preserves ensembles

# Advanced reset options
llm-orc config reset-global --no-backup --reset-auth           # Complete global reset
llm-orc config reset-local --reset-ensembles --no-backup       # Complete local reset
llm-orc config reset-local --project-name "My Project"         # Set project name
```

**Safety Features:**
- **Automatic backups** - Creates timestamped `.backup` directories by default
- **Authentication preservation** - Keeps API keys and credentials safe by default
- **Ensemble retention** - Preserves local ensembles by default
- **Confirmation prompts** - Prevents accidental data loss

**Available Options:**

*Global Reset:*
- `--backup/--no-backup` - Create backup before reset (default: backup)
- `--preserve-auth/--reset-auth` - Keep authentication (default: preserve)

*Local Reset:*
- `--backup/--no-backup` - Create backup before reset (default: backup)
- `--preserve-ensembles/--reset-ensembles` - Keep ensembles (default: preserve)
- `--project-name` - Set project name (defaults to directory name)

### Configuration Hierarchy
LLM Orchestra resolves configuration in priority order:

1. **Command-line options** (highest priority)
2. **Local project configuration** (`.llm-orc/` in the current directory)
3. **Global user configuration** (`~/.config/llm-orc/`, lowest priority)

For example, if both the global and local directories define a `code-review` ensemble, invoking from the project directory uses the local one, while `--config-dir` overrides both.

### XDG Base Directory Support
Configurations follow the XDG Base Directory specification:
- Global config: `~/.config/llm-orc/` (or `$XDG_CONFIG_HOME/llm-orc/`)
- Automatic migration from old `~/.llm-orc/` location

## Cost Optimization

- **Local models** (free) for systematic analysis tasks
- **Cloud models** (paid) reserved for strategic insights
- **Usage tracking** shows exactly what each analysis costs
- **Intelligent routing** based on task complexity

## Development

```bash
# Run tests
uv run pytest

# Run linting and formatting
uv run ruff check .
uv run ruff format --check .

# Type checking
uv run mypy src/llm_orc
```
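
To run the full gate as a single chained command (the same tools, just combined):

```bash
uv run ruff check . && uv run ruff format --check . \
  && uv run mypy src/llm_orc && uv run pytest
```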

## Research

This project includes comparative analysis of multi-agent vs single-agent approaches. See [docs/ensemble_vs_single_agent_analysis.md](docs/ensemble_vs_single_agent_analysis.md) for detailed findings.

## Philosophy

**Reduce toil, don't replace creativity.** Use AI to handle systematic, repetitive analysis while preserving human creativity and strategic thinking.

## License

MIT License - see [LICENSE](LICENSE) for details.
            
