<div align="center">
# AgentZ: Agent from Zero
**A Research-Oriented Multi-Agent System Platform**
</div>
AgentZ is a minimal, extensible codebase for multi-agent systems research. It lets you build intelligent agent workflows with very little code while keeping strong baseline performance. The platform supports autonomous reasoning, experience learning, and dynamic tool creation, serving both as a comparative baseline and as a production-ready foundation for multi-agent research.
## Features
- **🎯 Minimal Implementation** - Build new systems with just a few lines of code
- **🔄 Stateful Workflows** - Persistent memory and object management throughout the agent lifecycle
- **📚 Experience Learning** - Agents improve over time through memory-based reasoning
- **🛠️ Dynamic Tool Creation** - Agents can generate and use custom tools on demand
- **🧠 Autonomous Reasoning** - Built-in cognitive capabilities for complex multi-step tasks
- **⚙️ Config-Driven** - Easily modify behavior through configuration files
## Installation
This project uses [uv](https://docs.astral.sh/uv/) for fast, reliable package management.
### Install uv
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or via pip
pip install uv
```
See the [uv installation guide](https://docs.astral.sh/uv/getting-started/installation/) for more options.
### Setup Environment
```bash
# Clone the repository
git clone https://github.com/TimeLovercc/agentz.git
cd agentz
# Sync dependencies
uv sync
```
## Quick Start
```python
from pipelines.data_scientist import DataScientistPipeline
pipe = DataScientistPipeline("pipelines/configs/data_science.yaml")
pipe.run_sync()
```
## Building Your Own System
### 1. Create a Custom Pipeline
Inherit from `BasePipeline` to create your own agent workflow:
```python
from pipelines.base import BasePipeline


class MyCustomPipeline(BasePipeline):
    DEFAULT_CONFIG_PATH = "pipelines/configs/my_pipeline.yaml"

    def __init__(self, config=None):
        super().__init__(config)
        # Add your custom initialization

    async def run(self):
        # Implement your workflow logic
        pass
```
### 2. Add Custom Agents
Implement your agents following the standard interface:
```python
from agents import Agent
def create_my_agent(config):
    return Agent(
        name="my_agent",
        instructions="Your agent instructions here",
        model=config.main_model
    )
```
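If you need structured outputs or lightweight tools, the `agents` package imported above appears to follow the OpenAI Agents SDK interface, whose `Agent` also accepts `tools` and `output_type`. The sketch below is illustrative only: `DatasetSummary`, `row_count`, and `create_summary_agent` are hypothetical names, and you should confirm how tool agents are actually wired in `agentz/agents/tool_agents/`.

```python
from pydantic import BaseModel
from agents import Agent, function_tool  # assumes the OpenAI Agents SDK interface


class DatasetSummary(BaseModel):
    """Hypothetical structured output the agent must return."""
    n_rows: int
    notes: str


@function_tool
def row_count(path: str) -> int:
    """Toy tool: count data rows in a CSV file (header excluded)."""
    with open(path) as f:
        return sum(1 for _ in f) - 1


def create_summary_agent(config):
    return Agent(
        name="summary_agent",
        instructions="Summarise the dataset and report its row count.",
        model=config.main_model,
        tools=[row_count],
        output_type=DatasetSummary,
    )
```

Inside a pipeline's `run()`, such an agent would typically be invoked through the SDK's `Runner` (for example `result = await Runner.run(agent, prompt)`), with the validated `DatasetSummary` available as `result.final_output`; again, this assumes the stock SDK runner rather than an AgentZ-specific wrapper.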
### 3. Configure & Run
Create a config file and run your pipeline:
```python
pipe = MyCustomPipeline(
    data_path="your_data.csv",
    user_prompt="Your task description",
    provider="gemini",
    model="gemini-2.5-flash"
)
pipe.run_sync()
```
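The keyword arguments here presumably override or extend the values in the YAML config referenced by `DEFAULT_CONFIG_PATH`; check `BasePipeline.__init__` for the exact precedence. If you are composing several pipelines or are already inside an async application, you can also drive the coroutine directly. The sketch below assumes `run_sync()` is simply a blocking wrapper around the async `run()` defined in step 1.

```python
import asyncio

# from my_pipelines import MyCustomPipeline  # hypothetical import path; adjust
# to wherever you defined the class from step 1.


async def main():
    pipe = MyCustomPipeline("pipelines/configs/my_pipeline.yaml")
    await pipe.run()  # same work as run_sync(), awaited in our own event loop


if __name__ == "__main__":
    asyncio.run(main())
```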
## Architecture
AgentZ is organised around a **central conversation state** and a set of declarative
flow specifications that describe how agents collaborate. The main
components you will interact with are:
- **`pipelines/`** – High-level orchestration that wires agents together.
- **`agentz/agents/`** – Capability definitions for manager agents and tool agents.
- **`agentz/flow/`** – Flow primitives (`FlowRunner`, `FlowNode`, `IterationFlow`) that
  execute declarative pipelines.
- **`agentz/memory/`** – Structured state management (`ConversationState`,
  `ToolExecutionResult`, global memory helpers).
- **`examples/`** – Example scripts showing end-to-end usage.
```
agentz/
├── pipelines/
│   ├── base.py             # Base pipeline with config management & helpers
│   ├── flow_runner.py      # Declarative flow executor utilities
│   └── data_scientist.py   # Reference research pipeline
├── agentz/
│   ├── agents/
│   │   ├── manager_agents/ # Observe, evaluate, routing, writer agents
│   │   └── tool_agents/    # Specialised tool executors
│   ├── flow/               # Flow node definitions and runtime objects
│   ├── memory/             # Conversation state & persistence utilities
│   ├── llm/                # LLM adapters and setup helpers
│   └── tools/              # Built-in tools
└── examples/
    └── data_science.py     # Example workflows
```
### Declarative Pipeline Flow
The reference `DataScientistPipeline` models an entire research workflow using
three building blocks:
1. **Central ConversationState** – A shared store that captures every field any
   agent might read or write (iteration metadata, gaps, observations, tool
   results, timing, final report, etc.). Each loop creates a new
   `IterationRecord`, enabling partial updates and clean tracking of tool
   outcomes.
2. **Structured IO Contracts** – Each agent step declares the Pydantic model it
   expects and produces (for example `KnowledgeGapOutput` or
   `AgentSelectionPlan`). Input builders map slices of `ConversationState` into
   those models and output handlers merge the validated results back into the
   central state.
3. **Declarative FlowRunner** – The pipeline defines an `IterationFlow` of
   `FlowNode`s such as observe → evaluate → route → tools. Loop and termination
   logic are expressed with predicates (`loop_condition`, `condition`), so the
   executor can stop when evaluation marks `state.complete` or constraints are
   reached. Finalisation steps (like the writer agent) run after the iteration
   loop using the same structured IO.
Because the flow is declarative and all state is centralised, extending the
pipeline is as simple as adding a new node, output field, or tool capability; no
custom `run()` logic is required beyond sequencing the flow runner.
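To make the pattern concrete, here is a self-contained toy that mimics the shape of the flow described above. It deliberately defines its own tiny `ConversationState` and `FlowNode` rather than importing AgentZ's real classes, whose constructors and field names may differ; treat it as an illustration of the input-builder / output-handler / loop-predicate pattern, not as AgentZ's API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class ConversationState:
    """Toy stand-in for the central state every node reads and writes."""
    user_prompt: str
    observations: list[str] = field(default_factory=list)
    complete: bool = False


@dataclass
class FlowNode:
    """Toy stand-in for a declarative node: build input, act, merge output."""
    name: str
    input_builder: Callable[[ConversationState], Any]         # state slice -> step input
    step: Callable[[Any], Any]                                 # stands in for an agent call
    output_handler: Callable[[ConversationState, Any], None]  # merge result into state


def run_iteration_flow(nodes, state, loop_condition, max_iterations=5):
    """Minimal executor: run the nodes in order until the predicate says stop."""
    for _ in range(max_iterations):
        if not loop_condition(state):
            break
        for node in nodes:
            result = node.step(node.input_builder(state))
            node.output_handler(state, result)


# "observe" records something about the task; "evaluate" decides completion.
observe = FlowNode(
    name="observe",
    input_builder=lambda s: s.user_prompt,
    step=lambda task: f"noted: {task}",
    output_handler=lambda s, out: s.observations.append(out),
)
evaluate = FlowNode(
    name="evaluate",
    input_builder=lambda s: s.observations,
    step=lambda obs: len(obs) >= 2,  # pretend two observations are enough
    output_handler=lambda s, done: setattr(s, "complete", done),
)

state = ConversationState(user_prompt="profile the dataset")
run_iteration_flow([observe, evaluate], state, loop_condition=lambda s: not s.complete)
print(state.observations, state.complete)  # two observations, complete=True
```

The real pipeline has the same shape, except that each `step` is an LLM agent call validated against a Pydantic output model, and the node list also includes routing and tool-execution steps.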
## Benchmarks
AgentZ is being evaluated on several benchmarks relevant to multi-agent research:
- **Data Science Tasks**: State-of-the-art performance on automated ML pipelines
- **Complex Reasoning**: Competitive results on multi-step reasoning benchmarks
- **Tool Usage**: High accuracy in dynamic tool selection and execution
*Detailed benchmark results and comparisons coming soon.*
## Roadmap
- [x] Persistence Process - Stateful agent workflows
- [x] Experience Learning - Memory-based reasoning
- [x] Tool Design - Dynamic tool creation
- [ ] Workflow RAG - Retrieval-augmented generation for complex workflows
- [ ] MCPs - Model Context Protocol support for enhanced agent communication
## Key Design Principles
1. **Minimal Core** - Keep the base system simple and extensible
2. **Intelligent Defaults** - Provide strong baseline implementations
3. **Research-First** - Design for experimentation and comparison
4. **Modular Architecture** - Easy to swap components and test variations
5. **Production-Ready** - Scale from research prototypes to deployed systems
## Use Cases
- **Multi-Agent Research** - Baseline for comparing agent architectures
- **Automated Data Science** - End-to-end ML pipeline automation
- **Complex Task Decomposition** - Break down and solve multi-step problems
- **Tool-Using Agents** - Research on dynamic tool creation and usage
- **Agent Memory Systems** - Study persistence and experience learning
## Citation
If you use AgentZ in your research, please cite:
```bibtex
@software{agentz2025,
  title={AgentZ: A Research-Oriented Multi-Agent System Platform},
  author={Guo, Zhimeng},
  year={2025},
  url={https://github.com/TimeLovercc/agentz}
}
```
## Contributing
We welcome contributions! AgentZ is designed to be a community resource for multi-agent research. Please open an issue or submit a pull request.
## License
This project is released under the MIT License.
## Acknowledgements
AgentZ is built with inspiration from the multi-agent systems research community. We thank the developers of various LLM frameworks and tools that make this work possible.
---
<div align="center">
**AgentZ**: Building intelligent agents from zero to hero 🚀
</div>