ace-framework 0.1.0

- Name: ace-framework
- Version: 0.1.0
- Summary: Build self-improving AI agents that learn from experience
- Homepage: https://github.com/Kayba-ai/agentic-context-engine
- Author: Kayba.ai
- License: MIT
- Requires Python: >=3.9
- Requirements: litellm, python-dotenv, pydantic
- Uploaded: 2025-10-15 19:28:00
- Keywords: ai, llm, agents, machine-learning, self-improvement, context-engineering, ace, openai, anthropic, claude, gpt
            # Agentic Context Engine (ACE) 🚀

**Build self-improving AI agents that learn from experience**

[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Paper](https://img.shields.io/badge/Paper-arXiv:2510.04618-red.svg)](https://arxiv.org/abs/2510.04618)

🧠 **ACE** is a framework for building AI agents that get smarter over time by learning from their mistakes and successes.

💡 Based on "Agentic Context Engineering," a paper from Stanford and SambaNova, ACE helps your LLM agents build a "playbook" of strategies that improves with each task.

🔌 **Works with any LLM** - OpenAI, Anthropic Claude, Google Gemini, and 100+ more providers out of the box!

## Quick Start

**Minimum Python 3.9 required**

### Install ACE:
```bash
pip install ace-framework
# or for development:
pip install -r requirements.txt
```

### Set up your API key:
```bash
# Copy the example environment file
cp .env.example .env

# Add your OpenAI key (or Anthropic, Google, etc.)
echo "OPENAI_API_KEY=your-key-here" >> .env
```

### Run your first agent:
```python
from ace import LiteLLMClient, OfflineAdapter, Generator, Reflector, Curator
from ace import Playbook, Sample, TaskEnvironment, EnvironmentResult
from dotenv import load_dotenv

load_dotenv()

# Create your agent with any LLM
client = LiteLLMClient(model="gpt-3.5-turbo")  # or claude-3, gemini-pro, etc.

# Set up ACE components
adapter = OfflineAdapter(
    playbook=Playbook(),
    generator=Generator(client),
    reflector=Reflector(client),
    curator=Curator(client)
)

# Define a simple task
class SimpleEnv(TaskEnvironment):
    def evaluate(self, sample, output):
        correct = sample.ground_truth.lower() in output.final_answer.lower()
        return EnvironmentResult(
            feedback="Correct!" if correct else "Try again",
            ground_truth=sample.ground_truth
        )

# Train your agent
samples = [
    Sample(question="What is 2+2?", ground_truth="4"),
    Sample(question="Capital of France?", ground_truth="Paris"),
]

results = adapter.run(samples, SimpleEnv(), epochs=1)
print(f"Agent learned {len(adapter.playbook.bullets())} strategies!")
```

## How It Works

ACE uses three AI "roles" that work together to help your agent improve:

1. **🎯 Generator** - Tries to solve tasks using the current playbook
2. **🔍 Reflector** - Analyzes what went wrong (or right)
3. **📝 Curator** - Updates the playbook with new strategies

Think of it like a sports team reviewing game footage to get better!
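As a rough illustration only (these stubs are not the library's actual API, which calls an LLM for each role), the three-role loop can be sketched with plain functions operating on a shared playbook:

```python
# Toy sketch of the Generator -> Reflector -> Curator loop.
# Hypothetical stubs for illustration; the real ace-framework roles
# call an LLM and use richer data structures.

def generator(playbook, question):
    # Consult the playbook first; fall back to a naive guess.
    return playbook.get(question, "unknown")

def reflector(question, answer, ground_truth):
    # Diagnose the outcome: was the answer right, and what is the lesson?
    return {"correct": answer == ground_truth, "lesson": ground_truth}

def curator(playbook, question, reflection):
    # Fold the lesson back into the playbook for next time.
    if not reflection["correct"]:
        playbook[question] = reflection["lesson"]
    return playbook

playbook = {}
samples = [("Capital of France?", "Paris"), ("Capital of France?", "Paris")]
for question, truth in samples:
    answer = generator(playbook, question)
    reflection = reflector(question, answer, truth)
    playbook = curator(playbook, question, reflection)

print(playbook)  # the second pass answers correctly from the playbook
```

The first pass fails and the curator records the lesson; the second pass succeeds because the generator now consults the updated playbook, which is the core feedback loop ACE automates with LLM-powered roles.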

## Examples

### Simple Q&A Agent
```bash
python examples/simple_ace_example.py
```

### Advanced Examples with Different LLMs
```bash
python examples/quickstart_litellm.py
```

Check out the `examples/` folder for more!

## Supported LLM Providers

ACE works with **100+ LLM providers** through LiteLLM:

- **OpenAI** - GPT-4, GPT-3.5-turbo
- **Anthropic** - Claude 3 (Opus, Sonnet, Haiku)
- **Google** - Gemini Pro, PaLM
- **Cohere** - Command models
- **Local Models** - Ollama, Transformers
- **And many more!**

Just change the model name:
```python
# OpenAI
client = LiteLLMClient(model="gpt-4")

# Anthropic Claude
client = LiteLLMClient(model="claude-3-sonnet-20240229")

# Google Gemini
client = LiteLLMClient(model="gemini-pro")

# With fallbacks for reliability
client = LiteLLMClient(
    model="gpt-4",
    fallbacks=["claude-3-haiku", "gpt-3.5-turbo"]
)
```

## Key Features

- ✅ **Self-Improving** - Agents learn from experience and build knowledge
- ✅ **Provider Agnostic** - Switch LLMs with one line of code
- ✅ **Production Ready** - Automatic retries, fallbacks, and error handling
- ✅ **Cost Efficient** - Track costs and use cheaper models as fallbacks
- ✅ **Async Support** - Built for high-performance applications
- ✅ **Fully Typed** - Great IDE support and type safety

## Advanced Usage

### Online Learning (Learn While Running)
```python
from ace import OnlineAdapter

# Agent improves while processing real tasks
adapter = OnlineAdapter(
    playbook=existing_playbook,  # Can start with existing knowledge
    generator=Generator(client),
    reflector=Reflector(client),
    curator=Curator(client)
)

# Process tasks one by one, learning from each
for task in real_world_tasks:
    result = adapter.process(task, environment)
    # Agent automatically updates its strategies
```

### Custom Task Environments
```python
class CodeTestingEnv(TaskEnvironment):
    def evaluate(self, sample, output):
        # Run the generated code
        test_passed = run_tests(output.final_answer)

        return EnvironmentResult(
            feedback=f"Tests {'passed' if test_passed else 'failed'}",
            ground_truth=sample.ground_truth,
            metrics={"pass_rate": 1.0 if test_passed else 0.0}
        )
```

### Streaming Responses
```python
# Get responses token by token
for chunk in client.complete_with_stream("Write a story"):
    print(chunk, end="", flush=True)
```

### Async Operations
```python
import asyncio

async def main():
    response = await client.acomplete("Solve this problem...")
    print(response.text)

asyncio.run(main())
```

## Architecture

ACE implements the Agentic Context Engineering method from the research paper:

- **Playbook**: A structured memory that stores successful strategies
- **Bullets**: Individual strategies with helpful/harmful counters
- **Delta Operations**: Incremental updates that preserve knowledge
- **Three Roles**: Generator, Reflector, and Curator working together

The framework prevents "context collapse" - a common problem where agents forget important information over time.
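To make the bullet/delta idea concrete, here is a minimal sketch under assumed names (`Bullet`, `apply_delta`, and the field names are hypothetical; the real classes live in `ace/playbook.py` and differ):

```python
from dataclasses import dataclass, field

@dataclass
class Bullet:
    # One strategy plus counters tracking how often it helped or hurt.
    text: str
    helpful: int = 0
    harmful: int = 0

@dataclass
class Playbook:
    bullets: dict = field(default_factory=dict)

    def apply_delta(self, op, key, text=None, helpful=0, harmful=0):
        # Incremental delta: existing bullets are adjusted in place,
        # never rewritten wholesale, which is what preserves knowledge
        # and guards against context collapse.
        if op == "add":
            self.bullets[key] = Bullet(text)
        elif op == "update":
            bullet = self.bullets[key]
            bullet.helpful += helpful
            bullet.harmful += harmful

pb = Playbook()
pb.apply_delta("add", "capitals", text="For capital questions, answer with the city name only")
pb.apply_delta("update", "capitals", helpful=1)
print(pb.bullets["capitals"].helpful)  # 1
```

Because updates arrive as small deltas against named bullets, a bad reflection can at worst mis-count one strategy rather than overwrite the whole context.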

## Repository Structure

```
ace/
├── ace/                    # Core library
│   ├── playbook.py        # Strategy storage
│   ├── roles.py           # Generator, Reflector, Curator
│   ├── adaptation.py      # Training loops
│   └── llm_providers/     # LLM integrations
├── examples/              # Ready-to-run examples
├── tests/                 # Unit tests
└── docs/                  # Documentation
```

## Contributing

We welcome contributions! Feel free to:
- 🐛 Report bugs
- 💡 Suggest features
- 🔧 Submit PRs
- 📚 Improve documentation

## Citation

If you use ACE in your research or project, please cite the original papers:

### ACE Paper (Primary Reference)
```bibtex
@article{zhang2025ace,
  title={Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models},
  author={Zhang, Qizheng and Hu, Changran and Upasani, Shubhangi and Ma, Boyuan and Hong, Fenglu and
          Kamanuru, Vamsidhar and Rainton, Jay and Wu, Chen and Ji, Mengmeng and Li, Hanchen and
          Thakker, Urmish and Zou, James and Olukotun, Kunle},
  journal={arXiv preprint arXiv:2510.04618},
  year={2025}
}
```

### Dynamic Cheatsheet (Foundation Work)
ACE builds upon the adaptive memory concepts from Dynamic Cheatsheet:

```bibtex
@article{suzgun2025dynamiccheatsheet,
  title={Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory},
  author={Suzgun, Mirac and Yuksekgonul, Mert and Bianchi, Federico and Jurafsky, Dan and Zou, James},
  year={2025},
  eprint={2504.07952},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2504.07952}
}
```

### This Implementation
If you use this specific implementation, you can also reference:

```
This repository: https://github.com/Kayba-ai/agentic-context-engine
PyPI package: https://pypi.org/project/ace-framework/
Based on the open reproduction at: https://github.com/sci-m-wang/ACE-open
```

## License

MIT License - see [LICENSE](LICENSE) file for details.

---

**Note**: This is an independent implementation based on the ACE paper (arXiv:2510.04618) and builds upon concepts from Dynamic Cheatsheet. For the original reproduction scaffold, see [sci-m-wang/ACE-open](https://github.com/sci-m-wang/ACE-open).

Made with ❤️ by [Kayba](https://kayba.ai) and the open-source community

            
