llm-orchestra


Name: llm-orchestra
Version: 0.1.3
Summary: Multi-agent LLM communication system with ensemble orchestration
Upload time: 2025-07-08 13:37:35
Requires Python: >=3.11
License: MIT License (Copyright (c) 2025 Nathan Green)
Keywords: agents, ai, ensemble, llm, multi-agent, orchestration
# LLM Orchestra

A multi-agent LLM communication system for ensemble orchestration and intelligent analysis.

## Overview

LLM Orchestra lets you coordinate multiple AI agents for complex analysis tasks. Run code reviews with security and performance specialists, analyze architecture decisions from multiple angles, or get systematic coverage of any multi-faceted problem.

Mix expensive cloud models with free local models: use Claude for strategic insights while Llama3 handles systematic analysis tasks.

## Key Features

- **Multi-Agent Ensembles**: Coordinate specialized agents for different aspects of analysis
- **Cost Optimization**: Mix expensive and free models based on what each task needs
- **CLI Interface**: Simple commands with piping support (`cat code.py | llm-orc invoke code-review`)
- **Secure Authentication**: Encrypted API key storage with easy credential management
- **YAML Configuration**: Easy ensemble setup with readable config files
- **Usage Tracking**: Token counting, cost estimation, and timing metrics

## Installation

### For End Users
```bash
pip install llm-orchestra
```

### For Development
```bash
# Clone the repository
git clone https://github.com/mrilikecoding/llm-orc.git
cd llm-orc

# Install with development dependencies
uv sync --dev
```

## Quick Start

### 1. Set Up Authentication

Before using LLM Orchestra, configure authentication for your LLM providers:

```bash
# Interactive setup wizard
llm-orc auth setup

# Or add providers individually
llm-orc auth add anthropic --api-key YOUR_ANTHROPIC_KEY
llm-orc auth add google --api-key YOUR_GOOGLE_KEY

# List configured providers
llm-orc auth list

# Test authentication
llm-orc auth test anthropic
```

**Security**: API keys are encrypted and stored securely in `~/.llm-orc/credentials.yaml`.

### 2. Create an Ensemble Configuration

Create `~/.llm-orc/ensembles/code-review.yaml`:

```yaml
name: code-review
description: Multi-perspective code review ensemble

agents:
  - name: security-reviewer
    role: security-analyst
    model: llama3
    timeout_seconds: 60

  - name: performance-reviewer
    role: performance-analyst  
    model: llama3
    timeout_seconds: 60

coordinator:
  synthesis_prompt: |
    You are a senior engineering lead. Synthesize the security and performance 
    analysis into actionable recommendations.
  output_format: json
  timeout_seconds: 90
```

### 3. Invoke an Ensemble

```bash
# Analyze code from a file
cat mycode.py | llm-orc invoke code-review

# Or provide input directly
llm-orc invoke code-review --input "Review this function: def add(a, b): return a + b"

# JSON output for integration
llm-orc invoke code-review --input "..." --output-format json

# List available ensembles
llm-orc list-ensembles
```
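The JSON output mode makes scripted use straightforward: run the CLI from another program and parse its stdout. A hedged sketch follows; the `invoke_ensemble` and `summarize` helpers are hypothetical, and the result field names are assumptions, so inspect real output before relying on a schema.

```python
import json
import subprocess


def invoke_ensemble(ensemble: str, text: str) -> dict:
    """Run `llm-orc invoke` with JSON output and parse stdout."""
    proc = subprocess.run(
        ["llm-orc", "invoke", ensemble,
         "--input", text, "--output-format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)


def summarize(result: dict) -> str:
    """One-line summary of a parsed result (field names illustrative)."""
    return f"{len(result.get('agents', []))} agent responses"


# Example (requires llm-orc on PATH and configured auth):
# result = invoke_ensemble("code-review", "def add(a, b): return a + b")
# print(summarize(result))
```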

## Use Cases

### Code Review
Get systematic analysis across security, performance, and maintainability dimensions. Each agent focuses on its specialty while the synthesis step provides actionable recommendations.

### Architecture Review  
Analyze system designs from scalability, security, performance, and reliability perspectives. Identify bottlenecks and suggest architectural patterns.

### Product Strategy
Evaluate business decisions from market, financial, competitive, and user experience angles. Get comprehensive analysis for complex strategic choices.

### Research Analysis
Systematic literature review, methodology evaluation, or multi-dimensional analysis of research questions.

## Model Support

- **Claude** (Anthropic) - Strategic analysis and synthesis
- **Gemini** (Google) - Multi-modal and reasoning tasks  
- **Ollama** - Local deployment of open-source models (Llama3, etc.)
- **Custom models** - Extensible interface for additional providers

## Configuration

Ensemble configurations support:

- **Agent specialization** with role-specific prompts
- **Timeout management** per agent and coordinator
- **Model selection** with local and cloud options
- **Synthesis strategies** for combining agent outputs
- **Output formatting** (text, JSON) for integration

## Cost Optimization

- **Local models** (free) for systematic analysis tasks
- **Cloud models** (paid) reserved for strategic insights
- **Usage tracking** shows exactly what each analysis costs
- **Intelligent routing** based on task complexity
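The local-vs-cloud split above is easy to reason about with per-token prices. A toy estimator is sketched below; the prices and model names are placeholder assumptions, not real provider rates, and this is not llm-orc's actual usage-tracking code.

```python
# Placeholder per-million-token prices; real rates vary by provider and model.
PRICE_PER_MTOK = {
    "llama3": 0.0,   # local via Ollama: no per-token cost
    "claude": 18.0,  # hypothetical blended cloud rate
}


def estimate_cost(usage: dict[str, int]) -> float:
    """Estimate a run's cost in dollars from tokens used per model."""
    return sum(
        tokens / 1_000_000 * PRICE_PER_MTOK.get(model, 0.0)
        for model, tokens in usage.items()
    )


# Two local agents did the systematic work; the cloud model only synthesized.
run_usage = {"llama3": 40_000, "claude": 2_000}
cost = estimate_cost(run_usage)  # only the 2,000 Claude tokens are billed
```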

## Development

```bash
# Run tests
uv run pytest

# Run linting
uv run ruff check .
uv run black --check .

# Type checking
uv run mypy src/llm_orc
```

## Research

This project includes comparative analysis of multi-agent vs single-agent approaches. See [docs/ensemble_vs_single_agent_analysis.md](docs/ensemble_vs_single_agent_analysis.md) for detailed findings.

## Philosophy

**Reduce toil, don't replace creativity.** Use AI to handle systematic, repetitive analysis while preserving human creativity and strategic thinking.

## License

MIT License - see [LICENSE](LICENSE) for details.