<p align="center">
<a href="https://github.com/Alexeyisme/agentrylab/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/Alexeyisme/agentrylab/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://pypi.org/project/agentrylab/"><img alt="PyPI" src="https://img.shields.io/pypi/v/agentrylab.svg" /></a>
<a href="https://pypi.org/project/agentrylab/"><img alt="License" src="https://img.shields.io/pypi/l/agentrylab.svg" /></a>
<a href="https://pypi.org/project/agentrylab/"><img alt="Python" src="https://img.shields.io/pypi/pyversions/agentrylab.svg" /></a>
</p>
## Agentry Lab — Multi‑Agent Orchestration Laboratory
**Agentry Lab lets you experiment with multi-agent scenarios in minutes!**
Define a room, drop in agents, give them instructions — then sit back and watch the sparks fly. Jump in yourself with interactive turns, or just let the lab run. Works equally well from the CLI or Python API.
```bash
pip install agentrylab
agentrylab run standup_club.yaml --objective "remote work" --max-iters 4
```
## 📋 Table of Contents
<details>
<summary>Click to expand</summary>
- [🚀 Get Started](#-get-started-in-2-minutes)
- [✨ Why AgentryLab?](#-why-agentrylab)
- [📋 Requirements](#-requirements)
- [💾 Installation](#-installation)
- [🔧 Environment Setup](#-environment-setup)
- [🎭 Built-in Presets](#-built-in-presets)
- [🖥️ CLI Commands](#️-cli-commands)
- [⚙️ Configuration](#️-configuration)
- [💰 Tool Budgets](#-tool-budgets)
- [📜💾 Persistence](#-persistence)
- [🏗️ Architecture](#️-architecture-at-a-glance)
- [🧑💻 Development](#-development)
- [🐍 Python API](#-python-api)
- [📦 Releasing](#-releasing)
- [📋 Event Schema](#-event-schema)
- [💾 Checkpoint Snapshot Fields](#-checkpoint-snapshot-fields)
- [🍳 Recipes](#-recipes)
</details>
## 🔗 Docs quick links
- CLI reference: `src/agentrylab/docs/CLI.md`
- Config guide: `src/agentrylab/docs/CONFIG.md`
- Providers: `src/agentrylab/docs/PROVIDERS.md`
- Tools: `src/agentrylab/docs/TOOLS.md`
## 🧠 Concepts in 30 seconds
- **Agents**: Roles that speak (pro, con, comedian, scientist, aliens — no limits!)
- **Providers**: LLM backends (OpenAI, Ollama supported)
- **Tools**: External calls wrapped as actions (DuckDuckGo, Wolfram included — contribute your own!)
- **Scheduler**: Who talks when (Round‑Robin, Every‑N, or build your own)
- **State**: History window, budgets, summaries, actions — continue experiments anytime
Pick a preset lab setup or define your own (agents, tools, providers, schedules) in YAML, then run and iterate quickly from the CLI or Python. Stream outputs, save transcripts, stash checkpoints!
**10 preset lab environments - ready to have fun out of the box!** 🎭
*🎤 **Stand-Up Club** - Two comedians riff on any topic, MC closes the set*
*🏛️ **Debates** - Pro/con arguments with evidence, moderator keeps it civil*
*🧠 **Drifty Thoughts** - Three thinkers wander playfully through ideas*
*🔬 **Research** - Scientists collaborate, style coach polishes the output*
*🛋️ **Therapy Session** - Compassionate client-therapist conversations*
*💡 **Brainstorm Buddies** - Idea generation with a scribe pulling shortlists*
Want a new preset or tool? We welcome contributions — start with a tiny PR or open an issue. See CONTRIBUTING.md. Your idea might ship in the next release.
## 🚀 Get Started in 2 Minutes
```bash
pip install agentrylab
```
### 🦙 **llama3-friendly lab presets:**
```bash
# Simple chat (works great with local Ollama!)
agentrylab run solo_chat_user.yaml --max-iters 3
# Quick web research
agentrylab run ddg_quick_summary.yaml --objective "quantum computing"
```
### 🤖 **OpenAI-friendly lab presets:**
```bash
# Formal debates with evidence
agentrylab run debates.yaml --objective "Should we colonize Mars?" --max-iters 4
# Comedy club (hybrid: llama3 + GPT-4o-mini)
agentrylab run standup_club.yaml --objective "remote work" --max-iters 6
```
## ✨ Why AgentryLab?
**Because single agents are boring.** 🤖
- 📦 **YAML‑first presets** for agents/advisors/moderator/summarizer (your config, your rules)
- 🔌 **Pluggable LLM providers** (OpenAI, Ollama) and tools (DuckDuckGo, Wolfram Alpha)
- 📡 **Streaming CLI** with resume support and transcript/DB persistence (forget nothing, replay everything)
- ⏳ **Smart budgets** for tools (per‑run/per‑iteration) with shared‑per‑tick semantics (no more runaway tool spam)
- 🧩 **Small, readable runtime**: nodes, scheduler, engine, state (batteries included, drama optional)
- 🫵 **Human‑in‑the‑loop turns**: schedule `user` nodes and poke runs from CLI/API (`agentrylab say …`)
## 📋 Requirements
- 🐍 **Python 3.11+**
- 🧰 **Virtual environment** (recommended; sanity‑preserving)
- 🖥️ **Optional: Ollama** for local models (default: `http://localhost:11434`)
- 🔑 **API keys** as needed (e.g., `OPENAI_API_KEY`, `WOLFRAM_APP_ID`) — bring your own secrets
## 💾 Installation
### Option 1: From PyPI (Recommended)
```bash
pip install agentrylab
```
### Option 2: From Source (Development)
```bash
git clone https://github.com/Alexeyisme/agentrylab.git
cd agentrylab
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -U pip
pip install -e .
```
## 🔧 Environment Setup
Create a `.env` file (loaded via `python-dotenv`) with any secrets you need:
```bash
# For OpenAI models (optional)
OPENAI_API_KEY=sk-...
# For Wolfram Alpha (optional)
WOLFRAM_APP_ID=...
# For Ollama (optional, defaults to localhost:11434)
OLLAMA_BASE_URL=http://localhost:11434
```
> **💡 Pro tip**: You can start with just Ollama (free, local) and add API keys later!
## 🚀 Quick Start
### CLI Quickstart
Spin up a room and let the sparks fly:
```bash
# Simple chat (works with Ollama/llama3)
agentrylab run solo_chat_user.yaml --max-iters 3
# Or with a custom topic
agentrylab run standup_club.yaml --objective "remote work" --max-iters 4
# Or a debate (needs OpenAI API key)
agentrylab run debates.yaml --max-iters 4 --thread-id demo
```
> Cleanup outputs fast when experimenting:
> ```bash
> rm -rf outputs/ # transcripts + checkpoints
> ```
### CLI Cheat‑Sheet
```
--objective TEXT # Set topic on the fly
--thread-id ID # Name your run (enables resume)
--max-iters N # Number of iterations
--no-resume # Start fresh even if checkpoint exists
--no-stream # Print only final tail
--show-last K # Tail size at the end
```
Set a custom objective/topic at runtime:
```bash
agentrylab run debates.yaml --thread-id debate1 --objective "Proposition: apples — good or scam?" --max-iters 4
```
Interactive mode (prompt for user message each round when a user node exists):
```bash
# Solo chat with a scheduled user turn; prompt on each iteration
agentrylab run solo_chat_user.yaml --thread-id demo --resume --max-iters 3 --interactive --user-id user
```
Check version:
```bash
agentrylab --version
```
### User Messages (User-in-the-Loop)
Let a human chime in via API or CLI, and optionally schedule a user turn in cadence.
```bash
# 1) Post a user message into a thread
agentrylab say solo_chat_user.yaml demo 'Hello from Alice!'
# 2) Run one iteration to consume it (user turn then assistant)
agentrylab run solo_chat_user.yaml --thread-id demo --resume --max-iters 1
```
Python API:
```python
from agentrylab import init
lab = init("src/agentrylab/presets/solo_chat_user.yaml", experiment_id="demo")
lab.post_user_message("Hello from Alice!", user_id="user:alice")
lab.run(rounds=1)
```
### Python API Quickstart
Orchestrate from Python with minimal fuss:
```python
from agentrylab import init, list_threads
# 1. Create lab (using solo_chat_user preset - perfect for llama3!)
lab = init("src/agentrylab/presets/solo_chat_user.yaml",
           experiment_id="my-chat",
           prompt="Tell me about your favorite hobby!")

# 2. Run with callback
def callback(event):
    if event.get("event") == "provider_result":
        print(f"Agent responded: {event.get('content_len', 0)} chars")

status = lab.run(rounds=3, stream=True, on_event=callback)

# 3. Show conversation
for msg in lab.state.history:
    print(f"[{msg['role']}]: {msg['content']}")
# 4. Resume with new topic
lab.state.objective = "Now tell me about your dream vacation!"
lab.run(rounds=2)
# 5. List threads
threads = list_threads("src/agentrylab/presets/solo_chat_user.yaml")
```
Python examples:
- `user_in_the_loop_quick.py` — post once and run N rounds
- `user_in_the_loop_interactive.py` — type a line, run a round, repeat
> **📝 Note**: Output streams each iteration ("=== New events ===") and prints a final tail
> of the last N transcript entries. Transcripts are written to `outputs/*.jsonl`
> and checkpoints to `outputs/checkpoints.db`.
## 🖥️ CLI Commands
### Basic Commands
```bash
# Run a preset
agentrylab run <preset.yaml> [--thread-id ID] [--max-iters N] [--show-last K] [--objective TEXT]
# Inspect a thread's checkpoint
agentrylab status <preset.yaml> <thread-id>
# List all known threads
agentrylab ls <preset.yaml>
```
### Common Options
- `--max-iters N`: Run for N iterations (default: varies by preset)
- `--thread-id ID`: Use specific thread ID (enables resume)
- `--show-last K`: Show last K messages at the end
- `--stream/--no-stream`: Enable/disable real-time streaming (default: enabled)
- `--resume/--no-resume`: Resume from checkpoint or start fresh (default: resume)
- `--objective TEXT`: Override the preset `objective` (topic) just for this run
> **📚 Full docs**: See `src/agentrylab/docs/CLI.md` for complete command reference.
User-in-the-loop:
- `agentrylab say <preset.yaml> <thread-id> 'message' [--user-id USER]` appends a user message into a thread.
- Works with scheduled user nodes (role `user`) so messages are consumed on their turns.
## ⚙️ Configuration
Describe your room in YAML; everything else clicks into place.
- **Presets**: shipped with the package; the CLI accepts packaged names like `solo_chat_user.yaml` (file paths work too)
- **Providers**: OpenAI (HTTP), Ollama; add your own under `runtime/providers`
- **Tools**: DuckDuckGo search, Wolfram Alpha; add your own under `runtime/tools`
- **Scheduler**: Round‑robin and Every‑N; build your own in `runtime/scheduler`
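The programmatic preset dicts in the Recipes section mirror the YAML shape. As an illustrative sketch (field values such as the `impl` paths and model name are taken from the examples in this README; adjust them to your setup), a minimal room might look like:

```yaml
# Minimal illustrative preset — field names mirror the programmatic
# examples in this README; not a shipped preset.
id: my-room
providers:
  - id: p1
    impl: agentrylab.runtime.providers.openai.OpenAIProvider
    model: gpt-4o
tools: []
agents:
  - id: pro
    role: agent
    provider: p1
    system_prompt: You are the agent.
runtime:
  scheduler:
    impl: agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler
    params:
      order: [pro]
```

See `src/agentrylab/docs/CONFIG.md` for the authoritative schema.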
## 🎭 Built-in Presets
Have fun out of the box — **llama3‑friendly** and non‑strict by default.
### 📊 Quick Overview
| Preset | Description | Best For | Provider | OpenAI? |
|--------|-------------|----------|----------|--------|
| [Solo Chat](#-solo-chat-user-turn-solo_chat_useryaml---perfect-for-beginners) | Single friendly agent with user turns | Testing, beginners | llama3 | ❌ |
| [Stand‑Up Club](#-stand-up-club-standup_clubyaml---comedy-gold) | Two comedians + MC | Humor, creative writing | Hybrid | ⚠️ |
| [Debates](#-debates-debatesyaml---formal-arguments) | Pro/con + moderator, evidence | Structured arguments | OpenAI | ✅ |
| [Drifty Thoughts](#-drifty-thoughts-drifty_thoughtsyaml---free-form-thinking) | Three playful thinkers | Brainstorming | OpenAI | ✅ |
| [Research](#-research-collaboration-researchyaml---academic-vibes) | Scientists + style coach | Academic, structured | OpenAI | ✅ |
| [Therapy Session](#️-therapy-session-therapy_sessionyaml---compassionate-chat) | Client–therapist chat | Supportive conversations | OpenAI | ✅ |
| [DDG Quick Summary](#-ddg-quick-summary-ddg_quick_summaryyaml---web-research) | Web search + summary | Quick research | llama3 | ❌ |
| [Small Talk](#-small-talk-small_talkyaml---casual-chat) | Two voices + host | Casual chats | llama3 | ❌ |
| [Brainstorm Buddies](#-brainstorm-buddies-brainstorm_buddiesyaml---idea-generation) | Idea gen + scribe | Creative ideation | OpenAI | ✅ |
| [Simple Argument](#-simple-argument-argueyaml---casual-debates) | Casual debate, no rules | Opinions | Hybrid | ⚠️ |
### 🎤 **Solo Chat (User Turn)** (`solo_chat_user.yaml`) - **Perfect for beginners!**
- **What**: Single friendly agent with scheduled user turns
- **Best for**: Testing, simple conversations, llama3 users, human-in-the-loop
- **Run**: `agentrylab run solo_chat_user.yaml --max-iters 3`
- **Topic**: `--objective "your topic"`
### 🎭 **Stand‑Up Club** (`standup_club.yaml`) - **Comedy gold!**
- **What**: Two comedians riff on a topic, punch‑up advisor adds tweaks, MC closes the set
- **Best for**: Entertainment, creative writing, humor
- **Run**: `agentrylab run standup_club.yaml --objective "airports" --max-iters 6`
- **Topic**: `--objective "your topic"`
### 🧠 **Drifty Thoughts** (`drifty_thoughts.yaml`) - **Free-form thinking**
- **What**: Three "thinkers" drift playfully; gentle advisor nudges; optional summarizer
- **Best for**: Creative brainstorming, philosophical discussions
- **Run**: `agentrylab run drifty_thoughts.yaml --objective "surprising ideas"`
- **Topic**: `--objective "your topic"`
### 🔬 **Research Collaboration** (`research.yaml`) - **Academic vibes**
- **What**: Two scientists brainstorm, style coach gives clarity, summarizer wraps up
- **Best for**: Research, academic discussions, structured thinking
- **Run**: `agentrylab run research.yaml --objective "curious scientific question"`
- **Topic**: `--objective "your topic"`
### 🛋️ **Therapy Session** (`therapy_session.yaml`) - **Compassionate chat**
- **What**: Reflective client and gentle therapist; summarizer offers compassionate wrap‑up
- **Best for**: Emotional discussions, self-reflection, supportive conversations
- **Run**: `agentrylab run therapy_session.yaml --objective "something on your mind"`
- **Topic**: `--objective "your topic"`
### 🔍 **DDG Quick Summary** (`ddg_quick_summary.yaml`) - **Web research**
- **What**: One agent searches DuckDuckGo and writes a 5‑bullet web summary with URLs
- **Best for**: Quick research, web summaries, fact-finding
- **Run**: `agentrylab run ddg_quick_summary.yaml --objective "your topic"`
- **Topic**: `--objective "your topic"`
### ☕ **Small Talk** (`small_talk.yaml`) - **Casual chat**
- **What**: Two friendly voices chat; host recaps every few turns
- **Best for**: Casual conversations, social interactions
- **Run**: `agentrylab run small_talk.yaml --objective "coffee rituals"`
- **Topic**: `--objective "your topic"`
### 💡 **Brainstorm Buddies** (`brainstorm_buddies.yaml`) - **Idea generation**
- **What**: Two idea buddies riff; scribe pulls a shortlist
- **Best for**: Brainstorming, creative ideation, problem-solving
- **Run**: `agentrylab run brainstorm_buddies.yaml --objective "rainy day activities"`
- **Topic**: `--objective "your topic"`
### 🏛️ **Debates** (`debates.yaml`) - **Formal arguments**
- **What**: Pro/con debaters with moderator and evidence-based arguments
- **Best for**: Formal debates, argument analysis, structured discussions
- **Run**: `agentrylab run debates.yaml --max-iters 4`
- **Note**: Requires OpenAI API key for best results
### 🗣️ **Simple Argument** (`argue.yaml`) - **Casual debates**
- **What**: Two agents having a natural debate without strict rules
- **Best for**: Casual arguments, opinion discussions
- **Run**: `agentrylab run argue.yaml --objective "Should remote work become standard?"`
- **Topic**: `--objective "your topic"`
> **💡 Pro tip**: Start with **Solo Chat (User Turn)** for testing, then try **Stand‑Up Club** for fun!
> **📚 More tips**: See `src/agentrylab/docs/PRESET_TIPS.md` for advanced configuration.
## 💰 Tool Budgets
Control how many times tools can be called to prevent runaway costs:
- **`per_run_max`**: Total calls per tool across the entire run
- **`per_iteration_max`**: Calls per engine tick (resets each tick)
- **Scope**: Enforced per tool ID, shared across agents in the same tick
- **Minima** (`per_run_min`, `per_iteration_min`) are advisory (not enforced)
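In the programmatic examples in this README, budgets live under `runtime.budgets.tools`; a YAML sketch (the numeric values here are illustrative):

```yaml
runtime:
  budgets:
    tools:
      per_run_max: 8        # hard cap: total calls per tool across the run
      per_iteration_max: 2  # hard cap per engine tick (resets each tick)
```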
## 📜💾 Persistence
**Transcripts for storytelling; checkpoints for recovery.**
- **📜 Transcript JSONL**: `outputs/<thread-id>.jsonl` (human-readable conversation logs)
- **💾 Checkpoints (SQLite)**: `outputs/checkpoints.db` (resume from any point)
- **⏭️ Resume**: `--resume` (default) continues from last checkpoint; `--no-resume` starts fresh
- **🧠 Schemas**: See `src/agentrylab/docs/PERSISTENCE.md` for detailed field definitions
- **⏱️ Timestamps**: All recorded as Unix epoch seconds (UTC)
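Because transcripts are plain JSONL, you can inspect them with a few lines of stdlib Python. A sketch (the event keys match the Event Schema section; the `outputs/demo.jsonl` path assumes a prior run with `--thread-id demo`):

```python
import json

def read_transcript(path: str, limit: int = 20) -> list[dict]:
    """Return the last `limit` events from a JSONL transcript file."""
    with open(path, encoding="utf-8") as f:
        events = [json.loads(line) for line in f if line.strip()]
    return events[-limit:]

# e.g. for ev in read_transcript("outputs/demo.jsonl"):
#          print(ev["iter"], ev["agent_id"], ev["role"])
```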
### Cleaning outputs (all threads)
- Remove everything (default paths): `rm -rf outputs/`
- Or per-thread: `agentrylab ls <preset.yaml>` then `agentrylab reset <preset.yaml> <thread-id> --delete-transcript`
## 🏗️ Architecture (at a glance)
**Simple, readable runtime components:**
- **Engine**: Steps the scheduler, executes nodes, applies outputs/actions
- **Nodes**: Agent, Moderator, Summarizer, Advisor (see `runtime/nodes/*`)
- **Providers**: Thin HTTP adapters (OpenAI, Ollama)
- **Tools**: Simple callables with normalized envelopes (e.g., DuckDuckGo)
- **State**: History window composition, budgets, message contracts, rollback
## 🧑💻 Development
**Serious tooling for serious… tinkering.**
```bash
# Install development dependencies
pip install -e .[dev]
# Lint and test
ruff check . && pytest -q
# Coverage (uses pytest-cov; default fail-under=40%)
make coverage
# or: pytest --cov=src/agentrylab --cov-branch --cov-report=term-missing
```
> **☕️ Pro tip**: Keep a coffee nearby. Agents love to riff.
## 🐍 Python API
### 🚀 Beginner API
**Essential functions for getting started quickly:**
```python
from agentrylab import init
# Initialize a lab and run for N rounds
lab = init("src/agentrylab/presets/solo_chat_user.yaml",
           experiment_id="my-experiment",
           prompt="Tell me about your favorite hobby!")
status = lab.run(rounds=5)
print(f"Iterations: {status.iter}, Active: {status.is_active}")

# View conversation history
for msg in lab.state.history:
    print(f"[{msg['role']}]: {msg['content']}")
```
### 🔧 Advanced API
**Power user features for complex workflows:**
#### Posting User Messages
```python
from agentrylab import init
lab = init("src/agentrylab/presets/solo_chat_user.yaml", experiment_id="chat-1")
# Append a user line into history and transcript; also enqueue for scheduled user nodes
lab.post_user_message("Please keep it concise.", user_id="user:alice")
lab.run(rounds=1)
```
#### One-shot Run with Streaming
```python
from agentrylab import run
def on_event(ev: dict):
    print(f"Iteration {ev['iter']}: {ev['agent_id']} ({ev['role']})")
lab, status = run(
"src/agentrylab/presets/solo_chat_user.yaml",
prompt="What makes jokes funny?",
experiment_id="streaming-demo",
rounds=5,
stream=True,
on_event=on_event,
)
```
#### Budget Management
```python
from agentrylab import init
# Set budgets in preset, then inspect counters
preset = {
    "id": "budget-demo",
    "providers": [{"id": "p1", "impl": "tests.fake_impls.TestProvider", "model": "test"}],
    "tools": [{"id": "echo", "impl": "tests.fake_impls.EchoTool"}],
    "agents": [{"id": "pro", "role": "agent", "provider": "p1", "system_prompt": "You are the agent.", "tools": ["echo"]}],
    "runtime": {
        "scheduler": {"impl": "agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler", "params": {"order": ["pro"]}},
        "budgets": {"tools": {"per_run_max": 1}},
    },
}
lab = init(preset, experiment_id="budget-demo-1", resume=False)
lab.run(rounds=1)
snap = lab.store.load_checkpoint("budget-demo-1")
print("Total tool calls:", snap.get("_tool_calls_run_total"))
```
#### Logging & Tracing
```python
# Configure runtime logging/trace in the preset
preset = {
    # ... providers/tools/agents ...
    "runtime": {
        "logs": {"level": "INFO", "format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
        "trace": {"enabled": True},
        "scheduler": {"impl": "agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler", "params": {"order": ["pro"]}},
    },
}
lab = init(preset, experiment_id="log-1")
lab.run(rounds=1)
```
## ❓ FAQ
- **Do I need OpenAI?** No. Many presets run great on llama3 via Ollama.
- **Where are outputs stored?** `outputs/*.jsonl` (transcripts), `outputs/checkpoints.db` (state).
- **How do I resume?** Use `--thread-id` and keep `--resume` (default).
- **How do I set a topic?** `--objective "..."` on the CLI or `prompt="..."` in Python.
## 🧯 Troubleshooting
- **Empty replies with llama3**: set provider timeout to 30s; add “never leave empty” lines to system prompts; for strict multi‑agent tasks, prefer `gpt-4o-mini`.
- **Moderator JSON errors**: use `debates.yaml` (OpenAI) or remove moderator; see `runtime/nodes/moderator.py` for JSON contract.
## 🗺️ Roadmap
- Tool sandboxing + more built‑in tools
- More providers and local models
- Preset sharing/marketplace
- Richer moderation contracts and guardrails
## 📚 API Reference
### Core Functions
**`init(config, *, experiment_id=None, prompt=None, user_messages=None, resume=True) -> Lab`**
- `config`: YAML path, dict, or validated Preset object
- `experiment_id`: Logical run/thread ID; enables resume
- `prompt`: Sets `cfg.objective` for the run (used in prompts when enabled)
- `user_messages`: String or list of strings; seeds initial user message(s) into context
- `resume`: Attempts to load checkpoint for `experiment_id`
**`run(config, *, prompt=None, experiment_id=None, rounds=None, resume=True, stream=False, on_event=None, timeout_s=None, stop_when=None, on_tick=None, on_round=None) -> (Lab, LabStatus)`**
- One-shot helper; see `Lab.run` for parameters
### Lab Methods
**`Lab.run(*, rounds=None, stream=False, on_event=None, timeout_s=None, stop_when=None, on_tick=None, on_round=None) -> LabStatus`**
- `rounds`: Number of iterations to run
- `stream`: When True, calls `on_event(event: Event)` for newly appended transcript entries
- `timeout_s`: Optional wall-clock timeout for streaming runs
- `stop_when`: Optional predicate `Event -> bool`; when returns True, run stops
**`Lab.stream(*, rounds=None, timeout_s=None, stop_when=None, on_tick=None, on_round=None) -> Iterator[Event]`**
- Generator that yields transcript events as they occur
- Optional callbacks: `on_tick(info)`, `on_round(info)` where `info = {"iter": int, "elapsed_s": float}`
**Other Lab Methods:**
- `Lab.status` (property) -> `LabStatus`
- `Lab.history(limit=50)` -> `list[Event]`
- `Lab.clean(thread_id=None, delete_transcript=True, delete_checkpoint=True) -> None`: Delete outputs for a thread
- `list_threads(config) -> list[tuple[str, float]]`: List (thread_id, updated_at) in persistence
## 📦 Releasing
We publish on tags via GitHub Actions (see `.github/workflows/release.yml`).
**For maintainers:**
1. Bump `version` in `pyproject.toml`
2. Update `CHANGELOG.md`
3. `git tag -a vX.Y.Z -m 'vX.Y.Z' && git push --tags`
4. CI builds sdist/wheel and uploads to PyPI using `PYPI_API_TOKEN` secret
## 📋 Event Schema
```python
from agentrylab import Event
def handle(ev: Event) -> None:
    print(ev["iter"], ev["agent_id"], ev["role"], ev.get("latency_ms"))
# Keys: t, iter, agent_id, role, content (str|dict), metadata (dict|None), actions (dict|None), latency_ms
```
## 💾 Checkpoint Snapshot Fields
Returned by `lab.store.load_checkpoint(thread_id)` as a dict of state attributes:
- **`thread_id`**: Current experiment ID
- **`iter`**: Iteration counter
- **`stop_flag`**: Stop signal for the engine
- **`history`**: In‑memory context entries `{agent_id, role, content}` used by prompt composition
- **`running_summary`**: Summarizer running summary if set
- **`_tool_calls_run_total`, `_tool_calls_iteration`**: Global tool counters
- **`_tool_calls_run_by_id`, `_tool_calls_iter_by_id`**: Per‑tool counters
- **`cfg`, `contracts`**: Complex/opaque objects (implementation detail)
> **Note**: If a legacy/opaque pickle was saved, you'll get `{ "_pickled": ... }` instead
## 🍳 Recipes
### Programmatic Preset Construction
```python
from agentrylab import init
preset = {
    "id": "programmatic",
    "providers": [{"id": "p1", "impl": "agentrylab.runtime.providers.openai.OpenAIProvider", "model": "gpt-4o"}],
    "tools": [],
    "agents": [{"id": "pro", "role": "agent", "provider": "p1", "system_prompt": "You are the agent."}],
    "runtime": {
        "scheduler": {"impl": "agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler", "params": {"order": ["pro"]}}
    },
}
lab = init(preset, experiment_id="prog-1", user_messages=["Start topic: ..."])
lab.run(rounds=3)
```
### Multiple Runs in a Loop
```python
from agentrylab import init

topics = ["jokes", "puns", "metaphors"]
for i, topic in enumerate(topics):
    lab = init("src/agentrylab/presets/debates.yaml", experiment_id=f"exp-{i}", prompt=f"Explore {topic}")
    lab.run(rounds=2)
```
### Inspecting Transcripts
```python
from agentrylab import init

lab = init("src/agentrylab/presets/debates.yaml", experiment_id="inspect-1")
lab.run(rounds=1)
for ev in lab.history(limit=20):
    print(ev["iter"], ev["agent_id"], ev["role"], str(ev["content"])[:80])
# Or read directly from the store
rows = lab.store.read_transcript("inspect-1", limit=100)
```
### Cleaning Outputs (Transcript + Checkpoint)
```python
from agentrylab import init
lab = init("src/agentrylab/presets/debates.yaml", experiment_id="demo-clean")
lab.run(rounds=1)
# Remove persisted outputs for this experiment
lab.clean() # or lab.clean(thread_id="some-other-id")
```
---
## 📄 License
MIT
Raw data
{
"_id": null,
"home_page": null,
"name": "agentrylab",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": null,
"keywords": "multi-agent, llm, ai-agents, agent-orchestration, python, yaml-config, cli-tool, research, experimentation, ai-development, openai, ollama, llama3, transcripts, checkpoints, streaming, hackable, lightweight, lab, workflow-engine",
"author": "Alexey Kislitsin",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/93/42/ec1abd4f9760d2063a7470295354c02be15701f4ff764501c55c0adfd736/agentrylab-0.1.5.tar.gz",
"platform": null,
"description": "<p align=\"center\">\n <a href=\"https://github.com/Alexeyisme/agentrylab/actions/workflows/ci.yml\"><img alt=\"CI\" src=\"https://github.com/Alexeyisme/agentrylab/actions/workflows/ci.yml/badge.svg\" /></a>\n <a href=\"https://pypi.org/project/agentrylab/\"><img alt=\"PyPI\" src=\"https://img.shields.io/pypi/v/agentrylab.svg\" /></a>\n <a href=\"https://pypi.org/project/agentrylab/\"><img alt=\"License\" src=\"https://img.shields.io/pypi/l/agentrylab.svg\" /></a>\n <a href=\"https://pypi.org/project/agentrylab/\"><img alt=\"Python\" src=\"https://img.shields.io/pypi/pyversions/agentrylab.svg\" /></a>\n</p>\n\n## Agentry Lab \u2014 Multi\u2011Agent Orchestration Laboratory\n\n**Agentry Lab lets you experiment with multi-agent scenarios in minutes!**\n\nDefine a room, drop in agents, give them instructions \u2014 then sit back and watch the sparks fly. Jump in yourself with interactive turns, or just let the lab run. Works equally well from the CLI or Python API.\n\n```bash\npip install agentrylab\nagentrylab run standup_club.yaml --objective \"remote work\" --max-iters 4\n```\n\n## \ud83d\udccb Table of Contents\n\n<details>\n<summary>Click to expand</summary>\n\n- [\ud83d\ude80 Get Started](#-get-started-in-2-minutes)\n- [\u2728 Why AgentryLab?](#-why-agentrylab)\n- [\ud83d\udccb Requirements](#-requirements)\n- [\ud83d\udcbe Installation](#-installation)\n- [\ud83d\udd27 Environment Setup](#-environment-setup)\n- [\ud83c\udfad Built-in Presets](#-built-in-presets)\n- [\ud83d\udda5\ufe0f CLI Commands](#\ufe0f-cli-commands)\n- [\u2699\ufe0f Configuration](#\ufe0f-configuration)\n- [\ud83d\udcb0 Tool Budgets](#-tool-budgets)\n- [\ud83d\udcdc\ud83d\udcbe Persistence](#-persistence)\n- [\ud83c\udfd7\ufe0f Architecture](#\ufe0f-architecture-at-a-glance)\n- [\ud83e\uddd1\u200d\ud83d\udcbb Development](#-development)\n- [\ud83d\udc0d Python API](#-python-api)\n- [\ud83d\udce6 Releasing](#-releasing)\n- [\ud83d\udccb Event Schema](#-event-schema)\n- 
[\ud83d\udcbe Checkpoint Snapshot Fields](#-checkpoint-snapshot-fields)\n- [\ud83c\udf73 Recipes](#-recipes)\n\n</details>\n\n## \ud83d\udd17 Docs quick links\n\n- CLI reference: `src/agentrylab/docs/CLI.md`\n- Config guide: `src/agentrylab/docs/CONFIG.md`\n- Providers: `src/agentrylab/docs/PROVIDERS.md`\n- Tools: `src/agentrylab/docs/TOOLS.md`\n\n## \ud83e\udde0 Concepts in 30 seconds\n\n- **Agents**: Roles that speak (pro, con, comedian, scientist, aliens \u2014 no limits!)\n- **Providers**: LLM backends (OpenAI, Ollama supported)\n- **Tools**: External calls wrapped as actions (DuckDuckGo, Wolfram included \u2014 contribute your own!)\n- **Scheduler**: Who talks when (Round\u2011Robin, Every\u2011N \u2014 scheduling options available)\n- **State**: History window, budgets, summaries, actions \u2014 continue experiments anytime\n\nPick preset lab setup or define your own lab (agents, tools, providers, schedules) in YAML, then run and iterate quickly from the CLI or Python. Stream outputs, save transcripts, stash checkpoints!\n\n**10 preset lab environments - ready to have fun out of the box!** \ud83c\udfad\n\n*\ud83c\udfa4 **Stand-Up Club** - Two comedians riff on any topic, MC closes the set* \n*\ud83c\udfdb\ufe0f **Debates** - Pro/con arguments with evidence, moderator keeps it civil* \n*\ud83e\udde0 **Drifty Thoughts** - Three thinkers wander playfully through ideas* \n*\ud83d\udd2c **Research** - Scientists collaborate, style coach polishes the output* \n*\ud83d\udecb\ufe0f **Therapy Session** - Compassionate client-therapist conversations* \n*\ud83d\udca1 **Brainstorm Buddies** - Idea generation with a scribe pulling shortlists*\n\nWant a new preset or tool? We welcome contributions \u2014 start with a tiny PR or open an issue. See CONTRIBUTING.md. 
Your idea might ship in the next release.

## 🚀 Get Started in 2 Minutes

```bash
pip install agentrylab
```

### 🦙 **llama3-friendly lab presets:**
```bash
# Simple chat (works great with local Ollama!)
agentrylab run solo_chat_user.yaml --max-iters 3

# Quick web research
agentrylab run ddg_quick_summary.yaml --objective "quantum computing"
```

### 🤖 **OpenAI-friendly lab presets:**
```bash
# Formal debates with evidence
agentrylab run debates.yaml --objective "Should we colonize Mars?" --max-iters 4

# Comedy club (hybrid: llama3 + GPT-4o-mini)
agentrylab run standup_club.yaml --objective "remote work" --max-iters 6
```

## ✨ Why AgentryLab?

**Because single agents are boring.** 🤖

- 📦 **YAML-first presets** for agents/advisors/moderator/summarizer (your config, your rules)
- 🔌 **Pluggable LLM providers** (OpenAI, Ollama) and tools (DuckDuckGo, Wolfram Alpha)
- 📡 **Streaming CLI** with resume support and transcript/DB persistence (forget nothing, replay everything)
- ⏳ **Smart budgets** for tools (per-run/per-iteration) with shared-per-tick semantics (no more runaway tool spam)
- 🧩 **Small, readable runtime**: nodes, scheduler, engine, state (batteries included, drama optional)
- 🫵 **Human-in-the-loop turns**: schedule `user` nodes and poke runs from the CLI/API (`agentrylab say …`)

## 📋 Requirements

- 🐍 **Python 3.11+**
- 🧰 **Virtual environment** (recommended; sanity-preserving)
- 🖥️ **Optional: Ollama** for local models (default: `http://localhost:11434`)
- 🔑 **API keys** as needed (e.g., `OPENAI_API_KEY`, `WOLFRAM_APP_ID`) — bring your own secrets

## 💾 Installation

### Option 1: From PyPI (Recommended)
```bash
pip install agentrylab
```

### Option 2: From Source (Development)
```bash
git clone https://github.com/Alexeyisme/agentrylab.git
cd agentrylab
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -U pip
pip install -e .
```

## 🔧 Environment Setup

Create a `.env` file (loaded via `python-dotenv`) with any secrets you need:

```bash
# For OpenAI models (optional)
OPENAI_API_KEY=sk-...

# For Wolfram Alpha (optional)
WOLFRAM_APP_ID=...

# For Ollama (optional, defaults to localhost:11434)
OLLAMA_BASE_URL=http://localhost:11434
```

> **💡 Pro tip**: You can start with just Ollama (free, local) and add API keys later!

## 🚀 Quick Start

### CLI Quickstart
Spin up a room and let the sparks fly:

```bash
# Simple chat (works with Ollama/llama3)
agentrylab run solo_chat_user.yaml --max-iters 3

# Or with a custom topic
agentrylab run standup_club.yaml --objective "remote work" --max-iters 4

# Or a debate (needs an OpenAI API key)
agentrylab run debates.yaml --max-iters 4 --thread-id demo
```

> Clean up outputs quickly when experimenting:
> ```bash
> rm -rf outputs/  # transcripts + checkpoints
> ```

### CLI Cheat-Sheet

```
--objective TEXT   # Set topic on the fly
--thread-id ID     # Name your run (enables resume)
--max-iters N      # Number of iterations
--no-resume        # Start fresh even if a checkpoint exists
--no-stream        # Print only the final tail
--show-last K      # Tail size at the end
```

Set a custom objective/topic at runtime:

```bash
agentrylab run debates.yaml --thread-id debate1 --objective "Proposition: apples — good or scam?" --max-iters 4
```

Interactive mode (prompts for a user message each round when a user node exists):

```bash
# Solo chat with a scheduled user turn; prompt on each iteration
agentrylab run solo_chat_user.yaml --thread-id demo --resume --max-iters 3 --interactive --user-id user
```

Check version:

```bash
agentrylab --version
```

### User Messages (User-in-the-Loop)
Let a human chime in via API or CLI, and optionally schedule a user turn in the cadence.

```bash
# 1) Post a user message into a thread
agentrylab say solo_chat_user.yaml demo 'Hello from Alice!'

# 2) Run one iteration to consume it (user turn, then assistant)
agentrylab run solo_chat_user.yaml --thread-id demo --resume --max-iters 1
```

Python API:
```python
from agentrylab import init

lab = init("src/agentrylab/presets/solo_chat_user.yaml", experiment_id="demo")
lab.post_user_message("Hello from Alice!", user_id="user:alice")
lab.run(rounds=1)
```

### Python API Quickstart
Orchestrate from Python with minimal fuss:

```python
from agentrylab import init, list_threads

# 1. Create a lab (using the solo_chat_user preset - perfect for llama3!)
lab = init("src/agentrylab/presets/solo_chat_user.yaml",
           experiment_id="my-chat",
           prompt="Tell me about your favorite hobby!")

# 2. Run with a callback
def callback(event):
    if event.get("event") == "provider_result":
        print(f"Agent responded: {event.get('content_len', 0)} chars")

status = lab.run(rounds=3, stream=True, on_event=callback)

# 3. Show the conversation
for msg in lab.state.history:
    print(f"[{msg['role']}]: {msg['content']}")

# 4. Resume with a new topic
lab.state.objective = "Now tell me about your dream vacation!"
lab.run(rounds=2)

# 5. List threads
threads = list_threads("src/agentrylab/presets/solo_chat_user.yaml")
```

Python examples:
- `user_in_the_loop_quick.py` — post once and run N rounds
- `user_in_the_loop_interactive.py` — type a line, run a round, repeat

> **📝 Note**: Output streams each iteration ("=== New events ===") and prints a final tail
> of the last N transcript entries.
> Transcripts are written to `outputs/*.jsonl`
> and checkpoints to `outputs/checkpoints.db`.

## 🖥️ CLI Commands

### Basic Commands
```bash
# Run a preset
agentrylab run <preset.yaml> [--thread-id ID] [--max-iters N] [--show-last K] [--objective TEXT]

# Inspect a thread's checkpoint
agentrylab status <preset.yaml> <thread-id>

# List all known threads
agentrylab ls <preset.yaml>
```

### Common Options
- `--max-iters N`: Run for N iterations (default varies by preset)
- `--thread-id ID`: Use a specific thread ID (enables resume)
- `--show-last K`: Show the last K messages at the end
- `--stream/--no-stream`: Enable/disable real-time streaming (default: enabled)
- `--resume/--no-resume`: Resume from a checkpoint or start fresh (default: resume)
- `--objective TEXT`: Override the preset `objective` (topic) just for this run

> **📚 Full docs**: See `src/agentrylab/docs/CLI.md` for the complete command reference.

User-in-the-loop:
- `agentrylab say <preset.yaml> <thread-id> 'message' [--user-id USER]` appends a user message to a thread.
- Works with scheduled user nodes (role `user`), so messages are consumed on their turns.

## ⚙️ Configuration

Describe your room in YAML; everything else clicks into place.

- **Presets**: shipped with the package; the CLI accepts packaged names like `solo_chat_user.yaml` (file paths work too)
- **Providers**: OpenAI (HTTP), Ollama; add your own under `runtime/providers`
- **Tools**: DuckDuckGo search, Wolfram Alpha; add your own under `runtime/tools`
- **Scheduler**: Round-robin and Every-N; build your own in `runtime/scheduler`

## 🎭 Built-in Presets

Have fun out of the box — **llama3-friendly** and non-strict by default.
### 📊 Quick Overview

| Preset | Description | Best For | Provider | OpenAI? |
|--------|-------------|----------|----------|---------|
| [Solo Chat](#-solo-chat-user-turn-solo_chat_useryaml---perfect-for-beginners) | Single friendly agent with user turns | Testing, beginners | llama3 | ❌ |
| [Stand-Up Club](#-stand-up-club-standup_clubyaml---comedy-gold) | Two comedians + MC | Humor, creative writing | Hybrid | ⚠️ |
| [Debates](#-debates-debatesyaml---formal-arguments) | Pro/con + moderator, evidence | Structured arguments | OpenAI | ✅ |
| [Drifty Thoughts](#-drifty-thoughts-drifty_thoughtsyaml---free-form-thinking) | Three playful thinkers | Brainstorming | OpenAI | ✅ |
| [Research](#-research-collaboration-researchyaml---academic-vibes) | Scientists + style coach | Academic, structured | OpenAI | ✅ |
| [Therapy Session](#️-therapy-session-therapy_sessionyaml---compassionate-chat) | Client–therapist chat | Supportive conversations | OpenAI | ✅ |
| [DDG Quick Summary](#-ddg-quick-summary-ddg_quick_summaryyaml---web-research) | Web search + summary | Quick research | llama3 | ❌ |
| [Small Talk](#-small-talk-small_talkyaml---casual-chat) | Two voices + host | Casual chats | llama3 | ❌ |
| [Brainstorm Buddies](#-brainstorm-buddies-brainstorm_buddiesyaml---idea-generation) | Idea gen + scribe | Creative ideation | OpenAI | ✅ |
| [Simple Argument](#-simple-argument-argueyaml---casual-debates) | Casual debate, no rules | Opinions | Hybrid | ⚠️ |

### 🎤 **Solo Chat (User Turn)** (`solo_chat_user.yaml`) - **Perfect for beginners!**
- **What**: Single friendly agent with scheduled user turns
- **Best for**: Testing, simple conversations, llama3 users, human-in-the-loop
- **Run**: `agentrylab run solo_chat_user.yaml --max-iters 3`
- **Topic**: `--objective "your topic"`
### 🎭 **Stand-Up Club** (`standup_club.yaml`) - **Comedy gold!**
- **What**: Two comedians riff on a topic, a punch-up advisor adds tweaks, and the MC closes the set
- **Best for**: Entertainment, creative writing, humor
- **Run**: `agentrylab run standup_club.yaml --objective "airports" --max-iters 6`
- **Topic**: `--objective "your topic"`

### 🧠 **Drifty Thoughts** (`drifty_thoughts.yaml`) - **Free-form thinking**
- **What**: Three "thinkers" drift playfully; a gentle advisor nudges; optional summarizer
- **Best for**: Creative brainstorming, philosophical discussions
- **Run**: `agentrylab run drifty_thoughts.yaml --objective "surprising ideas"`
- **Topic**: `--objective "your topic"`

### 🔬 **Research Collaboration** (`research.yaml`) - **Academic vibes**
- **What**: Two scientists brainstorm, a style coach gives clarity, and a summarizer wraps up
- **Best for**: Research, academic discussions, structured thinking
- **Run**: `agentrylab run research.yaml --objective "curious scientific question"`
- **Topic**: `--objective "your topic"`

### 🛋️ **Therapy Session** (`therapy_session.yaml`) - **Compassionate chat**
- **What**: Reflective client and gentle therapist; a summarizer offers a compassionate wrap-up
- **Best for**: Emotional discussions, self-reflection, supportive conversations
- **Run**: `agentrylab run therapy_session.yaml --objective "something on your mind"`
- **Topic**: `--objective "your topic"`

### 🔍 **DDG Quick Summary** (`ddg_quick_summary.yaml`) - **Web research**
- **What**: One agent searches DuckDuckGo and writes a 5-bullet web summary with URLs
- **Best for**: Quick research, web summaries, fact-finding
- **Run**: `agentrylab run ddg_quick_summary.yaml --objective "your topic"`
- **Topic**: `--objective "your topic"`

### ☕ **Small Talk** (`small_talk.yaml`) - **Casual chat**
- **What**: Two friendly voices chat; a host recaps every few turns
- **Best for**: Casual conversations, social interactions
- **Run**: `agentrylab run small_talk.yaml --objective "coffee rituals"`
- **Topic**: `--objective "your topic"`
### 💡 **Brainstorm Buddies** (`brainstorm_buddies.yaml`) - **Idea generation**
- **What**: Two idea buddies riff; a scribe pulls a shortlist
- **Best for**: Brainstorming, creative ideation, problem-solving
- **Run**: `agentrylab run brainstorm_buddies.yaml --objective "rainy day activities"`
- **Topic**: `--objective "your topic"`

### 🏛️ **Debates** (`debates.yaml`) - **Formal arguments**
- **What**: Pro/con debaters with a moderator and evidence-based arguments
- **Best for**: Formal debates, argument analysis, structured discussions
- **Run**: `agentrylab run debates.yaml --max-iters 4`
- **Note**: Requires an OpenAI API key for best results

### 🗣️ **Simple Argument** (`argue.yaml`) - **Casual debates**
- **What**: Two agents having a natural debate without strict rules
- **Best for**: Casual arguments, opinion discussions
- **Run**: `agentrylab run argue.yaml --objective "Should remote work become standard?"`
- **Topic**: `--objective "your topic"`

> **💡 Pro tip**: Start with **Solo Chat (User Turn)** for testing, then try **Stand-Up Club** for fun!
> **📚 More tips**: See `src/agentrylab/docs/PRESET_TIPS.md` for advanced configuration.

## 💰 Tool Budgets

Control how many times tools can be called to prevent runaway costs:

- **`per_run_max`**: Total calls per tool across the entire run
- **`per_iteration_max`**: Calls per engine tick (resets each tick)
- **Scope**: Enforced per tool ID, shared across agents in the same tick
- **Minima** (`per_run_min`, `per_iteration_min`) are advisory (not enforced)

## 📜💾 Persistence

**Transcripts for storytelling; checkpoints for recovery.**

- **📜 Transcript JSONL**: `outputs/<thread-id>.jsonl` (human-readable conversation logs)
- **💾 Checkpoints (SQLite)**: `outputs/checkpoints.db` (resume from any point)
- **⏭️ Resume**: `--resume` (default) continues from the last checkpoint; `--no-resume` starts fresh
- **🧠 Schemas**: See `src/agentrylab/docs/PERSISTENCE.md` for detailed field definitions
- **⏱️ Timestamps**: All recorded as Unix epoch seconds (UTC)

### Cleaning outputs (all threads)
- Remove everything (default paths): `rm -rf outputs/`
- Or per thread: `agentrylab ls <preset.yaml>` then `agentrylab reset <preset.yaml> <thread-id> --delete-transcript`

## 🏗️ Architecture (at a glance)

**Simple, readable runtime components:**

- **Engine**: Steps the scheduler, executes nodes, applies outputs/actions
- **Nodes**: Agent, Moderator, Summarizer, Advisor (see `runtime/nodes/*`)
- **Providers**: Thin HTTP adapters (OpenAI, Ollama)
- **Tools**: Simple callables with normalized envelopes (e.g., DuckDuckGo)
- **State**: History window composition, budgets, message contracts, rollback
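To make the engine/scheduler/state split concrete, here is a deliberately toy sketch of a round-robin tick loop. `ToyAgent`, `ToyState`, and `run_engine` are illustrative stand-ins, not the actual runtime classes:

```python
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class ToyAgent:
    """Stand-in for a node: produces one message per turn."""
    agent_id: str

    def step(self, history: List[Dict]) -> Dict:
        # A real node would call a provider; here we fabricate content.
        return {"agent_id": self.agent_id, "role": "agent",
                "content": f"{self.agent_id} speaks (turn {len(history)})"}

@dataclass
class ToyState:
    """Stand-in for runtime state: history plus an iteration counter."""
    history: List[Dict] = field(default_factory=list)
    iter: int = 0

def run_engine(agents: List[ToyAgent], rounds: int) -> ToyState:
    """Round-robin scheduling: each tick, every agent takes one turn."""
    state = ToyState()
    for _ in range(rounds):
        for agent in agents:              # round-robin order within a tick
            state.history.append(agent.step(state.history))
        state.iter += 1                   # one engine tick completed
    return state

state = run_engine([ToyAgent("pro"), ToyAgent("con")], rounds=2)
print(state.iter, len(state.history))  # → 2 4
```

The real engine adds budgets, contracts, and persistence around this loop, but the tick/turn shape is the same idea.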
## 🧑‍💻 Development

**Serious tooling for serious… tinkering.**

```bash
# Install development dependencies
pip install -e .[dev]

# Lint and test
ruff check . && pytest -q

# Coverage (uses pytest-cov; default fail-under=40%)
make coverage
# or: pytest --cov=src/agentrylab --cov-branch --cov-report=term-missing
```

> **☕️ Pro tip**: Keep a coffee nearby. Agents love to riff.

## 🐍 Python API

### 🚀 Beginner API

**Essential functions for getting started quickly:**
```python
from agentrylab import init

# Initialize a lab and run for N rounds
lab = init("src/agentrylab/presets/solo_chat_user.yaml",
           experiment_id="my-experiment",
           prompt="Tell me about your favorite hobby!")
status = lab.run(rounds=5)
print(f"Iterations: {status.iter}, Active: {status.is_active}")

# View conversation history
for msg in lab.state.history:
    print(f"[{msg['role']}]: {msg['content']}")
```

### 🔧 Advanced API

**Power-user features for complex workflows:**

#### Posting User Messages
```python
from agentrylab import init

lab = init("src/agentrylab/presets/solo_chat_user.yaml", experiment_id="chat-1")
# Append a user line to history and the transcript; also enqueue it for scheduled user nodes
lab.post_user_message("Please keep it concise.", user_id="user:alice")
lab.run(rounds=1)
```

#### One-shot Run with Streaming
```python
from agentrylab import run

def on_event(ev: dict):
    print(f"Iteration {ev['iter']}: {ev['agent_id']} ({ev['role']})")

lab, status = run(
    "src/agentrylab/presets/solo_chat_user.yaml",
    prompt="What makes jokes funny?",
    experiment_id="streaming-demo",
    rounds=5,
    stream=True,
    on_event=on_event,
)
```
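Both `on_event` and the `stop_when` parameter operate on plain dict events, so filtering and early-stop logic can be developed in isolation. A self-contained sketch (the sample events below are fabricated for illustration, not real transcript output):

```python
# Fabricated events mimicking the documented shape: iter, agent_id, role, content.
events = [
    {"iter": 0, "agent_id": "pro", "role": "agent", "content": "Opening argument..."},
    {"iter": 0, "agent_id": "con", "role": "agent", "content": "Rebuttal..."},
    {"iter": 1, "agent_id": "moderator", "role": "moderator", "content": "VERDICT: tie"},
]

def stop_when(ev: dict) -> bool:
    """Stop once a moderator-style event carries a verdict marker."""
    return ev["role"] == "moderator" and "VERDICT" in str(ev["content"])

# Equivalent of a streaming loop that honors the predicate.
consumed = []
for ev in events:
    consumed.append(ev)
    if stop_when(ev):
        break

print(len(consumed))  # → 3
```

The same predicate can then be passed as `stop_when=stop_when` to `run(...)` or `Lab.run(...)`.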
#### Budget Management
```python
from agentrylab import init

# Set budgets in the preset, then inspect counters
preset = {
    "id": "budget-demo",
    "providers": [{"id": "p1", "impl": "tests.fake_impls.TestProvider", "model": "test"}],
    "tools": [{"id": "echo", "impl": "tests.fake_impls.EchoTool"}],
    "agents": [{"id": "pro", "role": "agent", "provider": "p1", "system_prompt": "You are the agent.", "tools": ["echo"]}],
    "runtime": {
        "scheduler": {"impl": "agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler", "params": {"order": ["pro"]}},
        "budgets": {"tools": {"per_run_max": 1}},
    },
}
lab = init(preset, experiment_id="budget-demo-1", resume=False)
lab.run(rounds=1)
snap = lab.store.load_checkpoint("budget-demo-1")
print("Total tool calls:", snap.get("_tool_calls_run_total"))
```

#### Logging & Tracing
```python
# Configure runtime logging/trace in the preset
preset = {
    # ... providers/tools/agents ...
    "runtime": {
        "logs": {"level": "INFO", "format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
        "trace": {"enabled": True},
        "scheduler": {"impl": "agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler", "params": {"order": ["pro"]}},
    },
}
lab = init(preset, experiment_id="log-1")
lab.run(rounds=1)
```
## ❓ FAQ

- **Do I need OpenAI?** No. Many presets run great on llama3 via Ollama.
- **Where are outputs stored?** `outputs/*.jsonl` (transcripts), `outputs/checkpoints.db` (state).
- **How do I resume?** Use `--thread-id` and keep `--resume` (the default).
- **How do I set a topic?** `--objective "..."` on the CLI or `prompt="..."` in Python.

## 🧯 Troubleshooting

- **Empty replies with llama3**: set the provider timeout to 30s; add "never leave empty" lines to system prompts; for strict multi-agent tasks, prefer `gpt-4o-mini`.
- **Moderator JSON errors**: use `debates.yaml` (OpenAI) or remove the moderator; see `runtime/nodes/moderator.py` for the JSON contract.

## 🗺️ Roadmap

- Tool sandboxing + more built-in tools
- More providers and local models
- Preset sharing/marketplace
- Richer moderation contracts and guardrails

## 📚 API Reference

### Core Functions

**`init(config, *, experiment_id=None, prompt=None, user_messages=None, resume=True) -> Lab`**
- `config`: YAML path, dict, or validated Preset object
- `experiment_id`: Logical run/thread ID; enables resume
- `prompt`: Sets `cfg.objective` for the run (used in prompts when enabled)
- `user_messages`: String or list of strings; seeds initial user message(s) into context
- `resume`: Attempts to load a checkpoint for `experiment_id`

**`run(config, *, prompt=None, experiment_id=None, rounds=None, resume=True, stream=False, on_event=None, timeout_s=None, stop_when=None, on_tick=None, on_round=None) -> (Lab, LabStatus)`**
- One-shot helper; see `Lab.run` for parameters

### Lab Methods

**`Lab.run(*, rounds=None, stream=False, on_event=None, timeout_s=None, stop_when=None, on_tick=None, on_round=None) -> LabStatus`**
- `rounds`: Number of iterations to run
- `stream`: When True, calls `on_event(event: Event)` for newly appended transcript entries
- `timeout_s`: Optional wall-clock timeout for streaming runs
- `stop_when`: Optional predicate `Event -> bool`; when it returns True, the run stops
**`Lab.stream(*, rounds=None, timeout_s=None, stop_when=None, on_tick=None, on_round=None) -> Iterator[Event]`**
- Generator that yields transcript events as they occur
- Optional callbacks: `on_tick(info)`, `on_round(info)` where `info = {"iter": int, "elapsed_s": float}`

**Other Lab Methods:**
- `Lab.status` (property) -> `LabStatus`
- `Lab.history(limit=50)` -> `list[Event]`
- `Lab.clean(thread_id=None, delete_transcript=True, delete_checkpoint=True) -> None`: Delete outputs for a thread
- `list_threads(config) -> list[tuple[str, float]]`: List `(thread_id, updated_at)` pairs in persistence

## 📦 Releasing

We publish on tags via GitHub Actions (see `.github/workflows/release.yml`).

**For maintainers:**
1. Bump `version` in `pyproject.toml`
2. Update `CHANGELOG.md`
3. `git tag -a vX.Y.Z -m 'vX.Y.Z' && git push --tags`
4. CI builds the sdist/wheel and uploads to PyPI using the `PYPI_API_TOKEN` secret

## 📋 Event Schema

```python
from agentrylab import Event

def handle(ev: Event) -> None:
    print(ev["iter"], ev["agent_id"], ev["role"], ev.get("latency_ms"))
    # Keys: t, iter, agent_id, role, content (str|dict), metadata (dict|None), actions (dict|None), latency_ms
```

## 💾 Checkpoint Snapshot Fields

Returned by `lab.store.load_checkpoint(thread_id)` as a dict of state attributes:

- **`thread_id`**: Current experiment ID
- **`iter`**: Iteration counter
- **`stop_flag`**: Stop signal for the engine
- **`history`**: In-memory context entries `{agent_id, role, content}` used by prompt composition
- **`running_summary`**: Summarizer running summary, if set
- **`_tool_calls_run_total`, `_tool_calls_iteration`**: Global tool counters
- **`_tool_calls_run_by_id`, `_tool_calls_iter_by_id`**: Per-tool counters
- **`cfg`, `contracts`**: Complex/opaque objects (implementation detail)

> **Note**: If a legacy/opaque pickle was saved, you'll get `{ "_pickled": ... }` instead
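Because a snapshot is a plain dict, the documented counter fields can be post-processed directly, e.g. to see which tool consumed the budget. A small sketch (the snapshot values here are fabricated for illustration):

```python
# Fabricated snapshot using the documented counter fields.
snapshot = {
    "thread_id": "demo",
    "iter": 4,
    "_tool_calls_run_total": 3,
    "_tool_calls_run_by_id": {"ddg": 2, "wolfram": 1},
}

def tool_usage(snap: dict) -> dict:
    """Summarize tool usage recorded in a checkpoint snapshot."""
    by_id = snap.get("_tool_calls_run_by_id", {})
    return {
        "total": snap.get("_tool_calls_run_total", 0),
        "most_used": max(by_id, key=by_id.get) if by_id else None,
    }

print(tool_usage(snapshot))  # → {'total': 3, 'most_used': 'ddg'}
```

With a real run you would feed `lab.store.load_checkpoint(thread_id)` into the same helper.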
## 🍳 Recipes

### Programmatic Preset Construction
```python
from agentrylab import init

preset = {
    "id": "programmatic",
    "providers": [{"id": "p1", "impl": "agentrylab.runtime.providers.openai.OpenAIProvider", "model": "gpt-4o"}],
    "tools": [],
    "agents": [{"id": "pro", "role": "agent", "provider": "p1", "system_prompt": "You are the agent."}],
    "runtime": {
        "scheduler": {"impl": "agentrylab.runtime.scheduler.round_robin.RoundRobinScheduler", "params": {"order": ["pro"]}}
    },
}
lab = init(preset, experiment_id="prog-1", user_messages=["Start topic: ..."])
lab.run(rounds=3)
```

### Multiple Runs in a Loop
```python
topics = ["jokes", "puns", "metaphors"]
for i, topic in enumerate(topics):
    lab = init("src/agentrylab/presets/debates.yaml", experiment_id=f"exp-{i}", prompt=f"Explore {topic}")
    lab.run(rounds=2)
```

### Inspecting Transcripts
```python
lab = init("src/agentrylab/presets/debates.yaml", experiment_id="inspect-1")
lab.run(rounds=1)
for ev in lab.history(limit=20):
    print(ev["iter"], ev["agent_id"], ev["role"], str(ev["content"])[:80])

# Or read directly from the store
rows = lab.store.read_transcript("inspect-1", limit=100)
```

### Cleaning Outputs (Transcript + Checkpoint)
```python
from agentrylab import init

lab = init("src/agentrylab/presets/debates.yaml", experiment_id="demo-clean")
lab.run(rounds=1)
# Remove persisted outputs for this experiment
lab.clean()  # or lab.clean(thread_id="some-other-id")
```

---

## 📄 License

MIT