arc-advisor


Name arc-advisor
Version 0.1.0
home_page None
Summary The learning co-pilot for AI agents. Implements the Executor-Advisor pattern for building self-improving agentic systems.
upload_time 2025-07-15 02:17:42
maintainer None
docs_url None
author None
requires_python >=3.9
license Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
keywords ai agents machine-learning reinforcement-learning llm advisor executor
VCS
bugtrack_url
requirements No requirements were recorded.
# Arc Advisor - Learning Infrastructure for Agentic Systems

[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org)
[![PyPI](https://img.shields.io/pypi/v/arc-advisor.svg)](https://pypi.org/project/arc-advisor/)

> **Reference Implementation**: This is an experimental reference implementation of the Arc methodology, designed for developers to extend. The library provides a complete infrastructure for building self-improving agents through the Executor-Advisor pattern, with full data collection pipelines for GRPO training and multi-agent orchestration.

Arc Advisor implements the **Executor-Advisor pattern** - an architecture for building self-improving AI agents through separation of reasoning and learning. This library provides production-ready infrastructure for deploying agents that learn from their failures.

## Key Innovation: The Executor-Advisor Pattern

Traditional AI agents fail at complex, multi-step tasks due to lack of specialization. The Executor-Advisor pattern addresses this through architectural separation:

- **Executor**: General-purpose reasoning model (e.g., GPT-4.1) that handles task execution
- **Advisor**: Smaller, specialized model providing strategic guidance based on learned patterns
- **Learning Loop**: Continuous improvement through failure analysis and model updates

This pattern enables agents to improve performance without modifying the base LLM, reducing risk while enabling specialization.

![Arc Advisor Architecture](public/arc-advisor.png)
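
Conceptually, the separation reduces to the loop sketched below. The method names here are illustrative placeholders rather than the library API; the concrete `ArcAdvisorClient` calls appear in Quick Start.

```python
def run_with_advisor(task: str, advisor, executor) -> dict:
    """Illustrative Executor-Advisor loop: advise, execute, record the outcome."""
    strategy = advisor.advise(task)            # small, specialized advisor model
    result = executor.execute(task, strategy)  # general-purpose executor LLM
    advisor.record(task, strategy, result)     # failed outcomes feed the learning loop
    return result
```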

## Multi-Agent Evolution Roadmap

![Arc Progression](public/roadmap-background.png)

Arc Advisor enables a progressive deployment strategy for agentic systems, in which each new level of contextual intelligence depends on the quality of orchestration at the level below. Each stage builds on the previous one, letting teams start with human oversight and evolve toward autonomous agent networks.

**Stage 1: Human-in-the-Loop** represents the foundation where humans orchestrate agent control flow through the Arc API. This stage uses the `ArcAdvisorClient` with local advisor models and the `@monitor_and_learn` decorator to provide safe deployment with human oversight. The learning infrastructure captures all interactions for future training while maintaining human control over critical decisions.

**Stage 2: Mediated Agent-to-Sub-Agent Interaction** introduces autonomous operation with learned pattern matching. Here, production agents query the Arc Sub Agent for strategic guidance using `ToolAugmentedAdvisor` with semantic search capabilities. The advisor actively uses tools like `get_remediation_plan` and `query_success_patterns` to provide data-driven strategies based on historical failure analysis and success patterns.

**Stage 3: Autonomous Agent Network with Shared Learning** represents the full vision, in which the Arc Sub Agent orchestrates multiple specialized agents. The `multi_agent_demo()` showcases an A2A-compliant hub managing GPT-4.1, Claude Sonnet-4, and O4-Mini agents working in parallel on complex business scenarios. Each agent contributes its expertise while the Arc Sub Agent synthesizes results and collects reward signals for future GRPO training.

The open-source library provides the complete infrastructure for all three stages, with structured failure data collection preparing organizations for reinforcement learning-trained advisor models. This methodology combines semantic similarity clustering for failure pattern discovery with reward signal aggregation from multi-agent collaboration outcomes, creating a foundation for truly autonomous agentic systems.

## Technical Overview

Arc Advisor provides:

1. **Inference Pipeline**: Local execution of advisor models with automatic device optimization (CUDA/MPS/CPU; see the sketch after this list)
2. **Vector Database**: Semantic search powered by ChromaDB for intelligent pattern discovery
3. **Failure Tracking**: Structured logging with automatic indexing for similarity search
4. **Tool-Augmented Reasoning**: Advisor actively queries its knowledge base for data-driven strategies
5. **Model Agnostic**: Support for any HuggingFace causal language model as advisor
6. **Production Ready**: Robust error handling, configurable failure modes, and comprehensive logging
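
For item 1, the device fallback amounts to the standard PyTorch check sketched below. This is only an illustration of the CUDA/MPS/CPU preference order, not the library's actual implementation.

```python
import torch

def select_device() -> str:
    """Pick the best available backend in the order implied above: CUDA, then MPS, then CPU."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"
```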

## Installation

```bash
pip install arc-advisor
```

For development with latest features:
```bash
git clone https://github.com/arc-computer/arc-advisor.git
cd arc-advisor
pip install -e .
```

## Quick Start

### Try the Interactive Demos

Experience the Arc methodology across all three stages of the agentic evolution roadmap:

```bash
# Stage 1-2: Single agent with Arc Sub Agent advisor
arc-advisor single-agent

# Stage 3: Multi-agent autonomous network
arc-advisor multi-agent

# Export learning data for analysis
arc-advisor export
```

**Requirements for live inference:**
```bash
# Required for all demos
echo "OPENAI_API_KEY=your-key-here" > .env

# Additional requirement for multi-agent
echo "ANTHROPIC_API_KEY=your-key-here" >> .env
```
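
If you wire these keys into your own scripts rather than the bundled demos, a minimal loading sketch follows; it assumes python-dotenv is installed and may differ from how the demos load their configuration.

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads the .env file created above from the current directory
openai_key = os.environ["OPENAI_API_KEY"]
anthropic_key = os.environ.get("ANTHROPIC_API_KEY")  # only needed for the multi-agent demo
```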

The demos showcase real learning infrastructure with:
- **Live streaming inference** - No mocks, only production AI models
- **Semantic failure analysis** - ChromaDB vector search for pattern discovery
- **GRPO reward collection** - Structured signals for future RL training
- **A2A protocol compliance** - Industry-standard agent communication

### Integrate in Your Code

```python
from arc_advisor import ArcAdvisorClient

# Initialize with pre-trained advisor model
advisor = ArcAdvisorClient(
    agent_id="my-agent-001",
    hf_repo_id="Qwen/Qwen3-4B"  # Default general advisor
    # hf_repo_id="arc-computer/qwen3-4b-grpo"  # RL-trained advisor (coming soon)
)

# Decorate your agent's task function
@advisor.monitor_and_learn
def execute_task(query: str, context: dict) -> dict:
    # Get strategic advice before execution
    advice = advisor.get_advice(
        task_description="Complex multi-step workflow",
        context={"query": query, "business_context": context}
    )
    
    # Execute with your primary model
    result = your_executor_model(
        prompt=f"Task: {query}\nStrategy: {advice['strategy']}\nExecute:"
    )
    
    # Return structured outcome
    return {
        "success": validate_result(result),
        "output": result,
        "metrics": {"latency_ms": 150}
    }
```
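
With the placeholder helpers (`your_executor_model`, `validate_result`) defined, calling the decorated function is enough for the client to record the returned outcome. The arguments below are purely illustrative.

```python
outcome = execute_task(
    "Draft a renewal quote for an enterprise account",
    {"account_tier": "enterprise", "term_months": 24},
)
print(outcome["success"], outcome["metrics"]["latency_ms"])
```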

## Architecture Details

### System Components

The Arc Advisor system consists of three primary components:

1. **Advisor Model**: Provides strategic guidance based on task context
2. **Executor Agent**: Implements the actual task using advisor strategies  
3. **Learning Infrastructure**: Captures failures for continuous improvement

### Data Flow

1. Task request arrives with context
2. Advisor generates strategy based on learned patterns
3. Executor implements task using strategy
4. Outcome logged for learning
5. Failures trigger improvement requests
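
A compressed sketch of this flow is shown below. The `get_advice` call and the outcome dict mirror the Quick Start example; the executor callable and its result fields are illustrative assumptions.

```python
def handle_request(advisor, executor, task: str, context: dict) -> dict:
    # Steps 1-2: the advisor turns the request and its context into a strategy
    advice = advisor.get_advice(task_description=task, context=context)
    # Step 3: the executor carries out the task using that strategy
    result = executor(task, strategy=advice["strategy"])
    # Steps 4-5: this outcome dict is what @monitor_and_learn logs;
    # failed outcomes trigger improvement requests
    return {"success": result["ok"], "output": result["text"]}
```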

### Learning Loop

![Arc Learning Infrastructure](public/architecture-background.png)

The learning loop operates as follows:

- **Production Environment**: Agent traces collected during normal operation
- **Failure Bank**: Structured storage of failure patterns and context
- **Learning Orchestrator**: Converts failures into training data
- **RL Training**: Updates advisor model using policy gradient methods
- **Evaluation**: Validates improvements before deployment

**Note**: This open-source reference implementation provides complete data collection infrastructure including:
- Structured failure tracking with semantic embeddings
- GRPO reward signal collection with custom metrics
- A2A-compliant multi-agent orchestration
- Export pipelines for training data preparation

The full continuous learning loop with automated RL training shown above is available through Arc's managed cloud.

## Advanced Configuration

### Vector Database for Semantic Search

Arc Advisor includes a vector database that enables:
- **Semantic Similarity**: Find related failures beyond keyword matching
- **Failure Clustering**: Discover common patterns across failures
- **Intelligent Remediation**: Data-driven strategies based on historical patterns

```bash
# Migrate existing events to the vector DB
arc-advisor-migrate
```

```python
# Use the tool-augmented advisor with semantic search
from arc_advisor import ToolAugmentedAdvisor

advisor = ToolAugmentedAdvisor(
    agent_id="my-agent",
    on_failure="warn"
)

# Advisor now uses semantic search in its tools
advice = advisor.get_advice(
    task_description="Handle database connection timeout",
    context={"error": "Connection pool exhausted"},
    enable_tools=True
)
```

### Custom Advisor Models

Deploy your own trained advisor:

```python
advisor = ArcAdvisorClient(
    agent_id="domain-specific-agent",
    hf_repo_id="your-org/custom-advisor-7b",
    local_model_dir="~/.arc/models"
)
```

### Generation Parameters

Control advisor output characteristics:

```python
advice = advisor.get_advice(
    task_description="Generate SQL for complex join",
    context={"schema": database_schema},
    generation_config={
        "temperature": 0.3,
        "max_new_tokens": 512,
        "top_p": 0.9
    }
)
```

### Failure Handling

Configure behavior when advisor fails:

```python
# Default: Continue without advice
advisor = ArcAdvisorClient(agent_id="prod-agent", on_failure="warn")

# Strict: Raise exception on failure  
advisor = ArcAdvisorClient(agent_id="test-agent", on_failure="raise")
```

## Interactive Learning Methodology

Arc Advisor implements a novel training methodology inspired by Reinforcement Learning Teachers (RLT) and GRPO optimization, where **advisors learn to teach rather than solve**:

### Stage-Based Learning Architecture

```bash
# Stage 1-2: Single-agent with advisor learning
arc-advisor single-agent
```
- **ToolAugmentedAdvisor** queries semantic failure patterns 
- **Streaming inference** with real-time strategy generation
- **Semantic clustering** discovers failure categories automatically
- **Reward signal collection** for GRPO policy optimization

```bash  
# Stage 3: Multi-agent collaborative learning
arc-advisor multi-agent
```
- **A2A-compliant orchestration** of specialized agents (GPT-4.1, Claude, O4-Mini)
- **Competitive evaluation** through agent collaboration outcomes
- **Relative performance metrics** replace binary success/failure signals
- **Round-robin learning** where agents teach each other through shared experiences

### Learning Infrastructure Features

**Semantic Pattern Discovery:**
- Vector similarity search beyond keyword matching
- Automatic failure clustering using ChromaDB embeddings
- Context-aware remediation strategies from historical patterns

**GRPO Reward Collection:**
- Structured signals from multi-agent collaboration outcomes
- Comparative performance evaluation between agent strategies
- Policy gradient preparation for advisor model fine-tuning
- **Enhanced custom metrics capture** for domain-specific optimization

**Real-Time Learning:**
- Live streaming inference during strategy generation
- Immediate failure indexing and pattern recognition  
- Bidirectional A2A communication for collaborative improvement

## Data Export and Analysis

Export collected failure data for analysis:

```bash
# Export all events
arc-advisor export > agent_events.json

# Extract failure patterns
cat agent_events.json | jq '.[] | select(.event.message_type == "ArcImprovementRequest")'
```
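
For a quick look at the exported data from Python, a small tally script is sketched below. It assumes the export is a JSON array of records shaped like the `jq` filter above (`{"event": {"message_type": ...}}`); inspect your own export for the exact schema.

```python
import json
from collections import Counter

with open("agent_events.json") as f:
    events = json.load(f)

# Count events by message type, e.g. ArcLearningReport vs. ArcImprovementRequest
counts = Counter(e["event"]["message_type"] for e in events)
for message_type, n in counts.most_common():
    print(f"{message_type}: {n}")
```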

## Example: CRM Automation

See [examples/crm_pro_example.py](examples/crm_pro_example.py) for a complete implementation of a Salesforce CPQ agent using the Executor-Advisor pattern, demonstrating:

- Integration with GPT-4 as executor
- Structured context building for CRM workflows
- Failure tracking for quote generation tasks
- BANT qualification and compliance checking

## Performance Characteristics

- **Advisor Latency**: <100ms on consumer GPUs (MPS/CUDA)
- **Memory Requirements**: 8GB RAM for 4B parameter models
- **Disk Storage**: 10GB for model weights
- **Logging Overhead**: <5ms per event
- **Reward Signal Storage**: ~1KB per interaction (JSONL format)

## API Reference

### ArcAdvisorClient

```python
ArcAdvisorClient(
    agent_id: str,                    # Unique identifier for agent instance
    api_key: Optional[str] = None,    # For future cloud integration
    hf_repo_id: str = "Qwen/Qwen3-4B", # HuggingFace model repository
    local_model_dir: str = "~/.arc/models",  # Local model cache
    on_failure: str = "warn"          # Failure mode: "warn" or "raise"
)
```

### Core Methods

- `get_advice(task_description, context, generation_config)` - Retrieve strategic guidance
- `@monitor_and_learn` - Decorator for automatic outcome tracking
- Event logs: `~/.arc/logs/events.log` (JSON Lines format)
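
Since the event log is plain JSON Lines, it can be inspected directly. The sketch below assumes only one JSON object per line; the field names it prints are guesses, so check your own log for the exact keys.

```python
import json
from pathlib import Path

log_path = Path.home() / ".arc" / "logs" / "events.log"
with log_path.open() as f:
    for line in f:
        event = json.loads(line)
        # Field names below are assumptions; adjust to the actual schema.
        print(event.get("message_type"), event.get("timestamp"))
```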

## Protocol Specification

Arc Advisor implements A2A (Agent-to-Agent) protocol for learning communication:

- `ArcLearningReport`: Captures task execution outcomes
- `ArcImprovementRequest`: Signals need for learning from failures

See [arc_advisor/protocols.py](arc_advisor/protocols.py) for schema definitions.

## Contributing

We welcome contributions. See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## License

Apache 2.0. See [LICENSE](LICENSE) for details.

## Citation

If you use Arc Advisor in your research, please cite:

```bibtex
@software{arc_advisor,
  title = {Arc Advisor: Learning Infrastructure for Agentic Systems},
  author = {The Arc Intelligence Team},
  year = {2025},
  url = {https://github.com/arc-computer/arc-advisor}
}
```

---

Built by [The Arc Intelligence Team](https://arc.computer)

            
