Name | refinire |
Version | 0.2.16 |
home_page | None |
Summary | Refined simplicity for AI agents - Build intelligent workflows with automatic quality assurance and multi-provider LLM support |
upload_time | 2025-07-12 10:40:18 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.10 |
license | MIT |
keywords | agents, ai, llm, orchestration, workflow |
# Refinire ✨ - Refined Simplicity for Agentic AI
[Downloads](https://pepy.tech/projects/refinire) · [Python 3.10+](https://www.python.org/downloads/) · [OpenAI Agents SDK](https://github.com/openai/openai-agents-python)
**Transform ideas into working AI agents—intuitive agent framework**
---
## Why Refinire?
- **Simple installation** — Just `pip install refinire`
- **Simplify LLM-specific configuration** — No complex setup required
- **Unified API across providers** — OpenAI / Anthropic / Google / Ollama
- **Built-in evaluation & regeneration loops** — Quality assurance out of the box
- **One-line parallel processing** — Complex async operations with just `{"parallel": [...]}`
- **Comprehensive observability** — Automatic tracing with OpenTelemetry integration
## 30-Second Quick Start
```bash
pip install refinire
```
```python
from refinire import RefinireAgent
# Simple AI agent
agent = RefinireAgent(
name="assistant",
generation_instructions="You are a helpful assistant.",
model="gpt-4o-mini"
)
result = agent.run("Hello!")
print(result.content)
```
## The Core Components
Refinire is built around a few core components: RefinireAgent for generation with built-in evaluation, Flow for composing multi-step workflows, and Context for sharing state between steps.
## RefinireAgent - Integrated Generation and Evaluation
```python
from refinire import RefinireAgent
# Agent with automatic evaluation
agent = RefinireAgent(
name="quality_writer",
generation_instructions="Generate high-quality, informative content with clear structure and engaging writing style",
evaluation_instructions="""Evaluate the content quality on a scale of 0-100 based on:
- Clarity and readability (0-25 points)
- Accuracy and factual correctness (0-25 points)
- Structure and organization (0-25 points)
- Engagement and writing style (0-25 points)
Provide your evaluation as:
Score: [0-100]
Comments:
- [Specific feedback on strengths]
- [Areas for improvement]
- [Suggestions for enhancement]""",
threshold=85.0, # Automatically regenerate if score < 85
max_retries=3,
model="gpt-4o-mini"
)
result = agent.run("Write an article about AI")
print(f"Quality Score: {result.evaluation_score}")
print(f"Content: {result.content}")
```
## Orchestration Mode - Multi-Agent Coordination
**The Challenge**: Building complex multi-agent systems requires standardized communication protocols between agents. Different output formats make agent coordination difficult and error-prone.
**The Solution**: RefinireAgent's orchestration mode provides structured JSON output with standardized status, result, reasoning, and next-step hints. This enables seamless integration in multi-agent workflows where agents need to communicate their status and recommend next actions.
**Key Benefits**:
- **Standardized Communication**: Unified JSON protocol for agent-to-agent interaction
- **Smart Coordination**: Agents provide hints about recommended next steps
- **Error Handling**: Clear status reporting (completed/failed) for robust workflows
- **Type Safety**: Structured output with optional Pydantic model integration
### Basic Orchestration Mode
```python
from refinire import RefinireAgent
# Agent configured for orchestration
orchestrator_agent = RefinireAgent(
name="analysis_worker",
generation_instructions="Analyze the provided data and provide insights",
orchestration_mode=True, # Enable structured output
model="gpt-4o-mini"
)
# Returns structured JSON instead of Context
result = orchestrator_agent.run("Analyze user engagement trends")
# Standard orchestration response format
print(f"Status: {result['status']}") # "completed" or "failed"
print(f"Result: {result['result']}") # Analysis output
print(f"Reasoning: {result['reasoning']}") # Why this result was generated
print(f"Next Task: {result['next_hint']['task']}") # Recommended next step
print(f"Confidence: {result['next_hint']['confidence']}") # Confidence level (0-1)
```
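The `status` field also signals failures, so callers can branch before trusting the payload. A minimal sketch, assuming failed runs return the same dictionary shape with `status` set to `"failed"` and `reasoning` describing what went wrong:

```python
# Hedged sketch: branch on orchestration status before using the result
result = orchestrator_agent.run("Analyze user engagement trends")

if result["status"] == "completed":
    analysis = result["result"]              # the analysis output
    next_task = result["next_hint"]["task"]  # recommended next step
else:
    # Fall back, retry, or escalate; reasoning explains the failure
    print(f"Agent reported failure: {result['reasoning']}")
```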
### Orchestration with Structured Output
```python
from pydantic import BaseModel
from refinire import RefinireAgent
class AnalysisReport(BaseModel):
    findings: list[str]
    recommendations: list[str]
    confidence_score: float
# Orchestration mode with typed result
agent = RefinireAgent(
name="structured_analyst",
generation_instructions="Generate detailed analysis report",
orchestration_mode=True,
output_model=AnalysisReport, # Result will be typed
model="gpt-4o-mini"
)
result = agent.run("Analyze customer feedback data")
# result['result'] is now a typed AnalysisReport object
report = result['result']
print(f"Findings: {report.findings}")
print(f"Recommendations: {report.recommendations}")
print(f"Confidence: {report.confidence_score}")
```
### Multi-Agent Workflow Coordination
```python
from refinire import RefinireAgent, Flow, ConditionStep
# Orchestration-enabled agents
data_collector = RefinireAgent(
name="data_collector",
generation_instructions="Collect and prepare data for analysis",
orchestration_mode=True,
model="gpt-4o-mini"
)
analyzer = RefinireAgent(
name="analyzer",
generation_instructions="Perform deep analysis on collected data",
orchestration_mode=True,
model="gpt-4o-mini"
)
reporter = RefinireAgent(
name="reporter",
generation_instructions="Generate final report with recommendations",
orchestration_mode=True,
model="gpt-4o-mini"
)
def orchestration_router(ctx):
    """Route based on agent recommendations"""
    if hasattr(ctx, 'result') and isinstance(ctx.result, dict):
        next_task = ctx.result.get('next_hint', {}).get('task', 'unknown')
        if next_task == 'analysis':
            return 'analyzer'
        elif next_task == 'reporting':
            return 'reporter'
    return 'end'
# Workflow with orchestration-based routing
flow = Flow({
"collect": data_collector,
"route": ConditionStep("route", orchestration_router, "analyzer", "end"),
"analyzer": analyzer,
"report": reporter
})
result = await flow.run("Process customer survey data")
```
**Key Orchestration Features**:
- **Status Tracking**: Clear completion/failure status for workflow control
- **Result Typing**: Optional Pydantic model integration for type safety
- **Next-Step Hints**: Agents recommend optimal next actions with confidence levels
- **Reasoning Transparency**: Agents explain their decision-making process
- **Error Handling**: Structured error reporting for robust multi-agent systems
- **Backward Compatibility**: Normal mode continues to work unchanged (orchestration_mode=False)
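For comparison with the backward-compatibility note above, the same instructions without `orchestration_mode` return the familiar result object from the quick-start example; a small sketch:

```python
# Normal mode (orchestration_mode defaults to False): no JSON envelope
plain_agent = RefinireAgent(
    name="analysis_worker",
    generation_instructions="Analyze the provided data and provide insights",
    model="gpt-4o-mini"
)
result = plain_agent.run("Analyze user engagement trends")
print(result.content)  # access the generated content directly
```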
## Streaming Output - Real-time Response Display
**Stream responses in real-time** for improved user experience and immediate feedback. Both RefinireAgent and Flow support streaming output, perfect for chat interfaces, live dashboards, and interactive applications.
### Basic RefinireAgent Streaming
```python
from refinire import RefinireAgent
agent = RefinireAgent(
name="streaming_assistant",
generation_instructions="Provide detailed, helpful responses",
model="gpt-4o-mini"
)
# Stream response chunks as they arrive
async for chunk in agent.run_streamed("Explain quantum computing"):
print(chunk, end="", flush=True) # Real-time display
```
### Streaming with Callback Processing
```python
# Custom processing for each chunk
chunks_received = []
def process_chunk(chunk: str):
    chunks_received.append(chunk)
    # Send to websocket, update UI, save to file, etc.

async for chunk in agent.run_streamed(
    "Write a Python tutorial",
    callback=process_chunk
):
    print(chunk, end="", flush=True)
print(f"\nReceived {len(chunks_received)} chunks")
```
### Context-Aware Streaming
```python
from refinire import Context
# Maintain conversation context across streaming responses
ctx = Context()
# First message
async for chunk in agent.run_streamed("Hello, I'm learning Python", ctx=ctx):
print(chunk, end="", flush=True)
# Context-aware follow-up
ctx.add_user_message("What about async programming?")
async for chunk in agent.run_streamed("What about async programming?", ctx=ctx):
print(chunk, end="", flush=True)
```
### Flow Streaming
**Flows also support streaming** for complex multi-step workflows:
```python
from refinire import Flow, FunctionStep
flow = Flow({
"analyze": FunctionStep("analyze", analyze_input),
"generate": RefinireAgent(
name="writer",
generation_instructions="Write detailed content"
)
})
# Stream entire flow output
async for chunk in flow.run_streamed("Create a technical article"):
    print(chunk, end="", flush=True)
```
### Structured Output Streaming
**Important**: When using structured output (Pydantic models) with streaming, the response is streamed as **JSON chunks**, not parsed objects:
```python
from pydantic import BaseModel
class Article(BaseModel):
    title: str
    content: str
    tags: list[str]
agent = RefinireAgent(
name="structured_writer",
generation_instructions="Generate an article",
output_model=Article # Structured output
)
# Streams JSON chunks: {"title": "...", "content": "...", "tags": [...]}
async for json_chunk in agent.run_streamed("Write about AI"):
    print(json_chunk, end="", flush=True)
# For parsed objects, use regular run() method:
result = await agent.run_async("Write about AI")
article = result.content # Returns Article object
```
**Key Streaming Features**:
- **Real-time Output**: Immediate response as content is generated
- **Callback Support**: Custom processing for each chunk
- **Context Continuity**: Streaming works with conversation context
- **Flow Integration**: Stream complex multi-step workflows
- **JSON Streaming**: Structured output streams as JSON chunks
- **Error Handling**: Graceful handling of streaming interruptions
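As a hedged illustration of the error-handling point above, interruptions can be caught around the streaming loop itself (this assumes exceptions raised during streaming propagate to the caller; the agent is the one defined earlier):

```python
import asyncio

async def stream_with_fallback(agent, prompt: str) -> str:
    """Collect streamed chunks, keeping whatever arrived if the stream is interrupted."""
    collected = []
    try:
        async for chunk in agent.run_streamed(prompt):
            collected.append(chunk)
            print(chunk, end="", flush=True)
    except Exception as exc:  # network drop, cancellation, provider error, ...
        print(f"\n[stream interrupted: {exc}]")
    return "".join(collected)

# Example usage:
# text = asyncio.run(stream_with_fallback(agent, "Explain quantum computing"))
```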
## Flow Architecture: Orchestrate Complex Workflows
**The Challenge**: Building complex AI workflows requires managing multiple agents, conditional logic, parallel processing, and error handling. Traditional approaches lead to rigid, hard-to-maintain code.
**The Solution**: Refinire's Flow Architecture lets you compose workflows from reusable steps. Each step can be a function, condition, parallel execution, or AI agent. Flows handle routing, error recovery, and state management automatically.
**Key Benefits**:
- **Composable Design**: Build complex workflows from simple, reusable components
- **Visual Logic**: Workflow structure is immediately clear from the code
- **Automatic Orchestration**: Flow engine handles execution order and data passing
- **Built-in Parallelization**: Dramatic performance improvements with simple syntax
### Simple Yet Powerful
```python
from refinire import Flow, FunctionStep, ConditionStep
# Define your workflow as a composable flow
flow = Flow({
"start": FunctionStep("analyze", analyze_request),
"route": ConditionStep("route", route_by_complexity, "simple", "complex"),
"simple": RefinireAgent(name="simple", generation_instructions="Quick response"),
"complex": {
"parallel": [
RefinireAgent(name="expert1", generation_instructions="Deep analysis"),
RefinireAgent(name="expert2", generation_instructions="Alternative perspective")
],
"next_step": "aggregate"
},
"aggregate": FunctionStep("combine", combine_results)
})
result = await flow.run("Complex user request")
```
**🎯 Complete Flow Guide**: For comprehensive workflow construction learning, explore our detailed step-by-step guides:
**📖 English**: [Complete Flow Guide](docs/tutorials/flow_complete_guide_en.md) - From basics to advanced parallel processing
**📖 Japanese**: [Flow Complete Guide (Japanese)](docs/tutorials/flow_complete_guide_ja.md) - A comprehensive workflow construction guide
### Flow Design Patterns
**Simple Routing**:
```python
# Automatic routing based on user language
def detect_language(ctx):
    return "japanese" if any(char in ctx.user_input for char in "あいうえお") else "english"
flow = Flow({
"detect": ConditionStep("detect", detect_language, "jp_agent", "en_agent"),
"jp_agent": RefinireAgent(name="jp", generation_instructions="日本語で丁寧に回答"),
"en_agent": RefinireAgent(name="en", generation_instructions="Respond in English professionally")
})
```
**High-Performance Parallel Analysis**:
```python
# Execute multiple analyses simultaneously
flow = Flow(start="preprocess", steps={
"preprocess": FunctionStep("preprocess", clean_data),
"analysis": {
"parallel": [
RefinireAgent(name="sentiment", generation_instructions="Perform sentiment analysis"),
RefinireAgent(name="keywords", generation_instructions="Extract keywords"),
RefinireAgent(name="summary", generation_instructions="Create summary"),
RefinireAgent(name="classification", generation_instructions="Classify content")
],
"next_step": "report",
"max_workers": 4
},
"report": FunctionStep("report", generate_final_report)
})
```
**Compose steps like building blocks. Each step can be a function, condition, parallel execution, or LLM pipeline.**
---
## 1. Unified LLM Interface
**The Challenge**: Switching between AI providers requires different SDKs, APIs, and authentication methods. Managing multiple provider integrations creates vendor lock-in and complexity.
**The Solution**: RefinireAgent provides a single, consistent interface across all major LLM providers. Provider selection happens automatically based on your environment configuration, eliminating the need to manage multiple SDKs or rewrite code when switching providers.
**Key Benefits**:
- **Provider Freedom**: Switch between OpenAI, Anthropic, Google, and Ollama without code changes
- **Zero Vendor Lock-in**: Your agent logic remains independent of provider specifics
- **Automatic Resolution**: Environment variables determine the optimal provider automatically
- **Consistent API**: Same method calls work across all providers
```python
from refinire import RefinireAgent
# Just specify the model name—provider is resolved automatically
agent = RefinireAgent(
name="assistant",
generation_instructions="You are a helpful assistant.",
model="gpt-4o-mini" # OpenAI
)
# Anthropic, Google, and Ollama are also supported in the same way
agent2 = RefinireAgent(
name="anthropic_assistant",
generation_instructions="For Anthropic model",
model="claude-3-sonnet" # Anthropic
)
agent3 = RefinireAgent(
name="google_assistant",
generation_instructions="For Google Gemini",
model="gemini-pro" # Google
)
agent4 = RefinireAgent(
name="ollama_assistant",
generation_instructions="For Ollama model",
model="llama3.1:8b" # Ollama
)
```
This makes switching between providers and managing API keys extremely simple, greatly increasing development flexibility.
**📖 Tutorial:** [Quickstart Guide](docs/tutorials/quickstart.md) | **Details:** [Unified LLM Interface](docs/unified-llm-interface.md)
## 2. Autonomous Quality Assurance
**The Challenge**: AI outputs can be inconsistent, requiring manual review and regeneration. Quality control becomes a bottleneck in production systems.
**The Solution**: RefinireAgent includes built-in evaluation that automatically assesses output quality and regenerates content when it falls below your standards. This creates a self-improving system that maintains consistent quality without manual intervention.
**Key Benefits**:
- **Automatic Quality Control**: Set thresholds and let the system maintain standards
- **Self-Improving**: Failed outputs trigger regeneration with improved prompts
- **Production Ready**: Consistent quality without manual oversight
- **Configurable Standards**: Define your own evaluation criteria and thresholds
```python
from refinire import RefinireAgent
# Agent with evaluation loop
agent = RefinireAgent(
name="quality_assistant",
generation_instructions="Generate helpful responses",
evaluation_instructions="Rate accuracy and usefulness from 0-100",
threshold=85.0,
max_retries=3,
model="gpt-4o-mini"
)
result = agent.run("Explain quantum computing")
print(f"Evaluation Score: {result.evaluation_score}")
print(f"Content: {result.content}")
# With Context for workflow integration
from refinire import Context
ctx = Context()
result_ctx = agent.run("Explain quantum computing", ctx)
print(f"Evaluation Result: {result_ctx.evaluation_result}")
print(f"Score: {result_ctx.evaluation_result['score']}")
print(f"Passed: {result_ctx.evaluation_result['passed']}")
print(f"Feedback: {result_ctx.evaluation_result['feedback']}")
```
If evaluation falls below threshold, content is automatically regenerated for consistent high quality.
**📖 Tutorial:** [Advanced Features](docs/tutorials/advanced.md) | **Details:** [Autonomous Quality Assurance](docs/autonomous-quality-assurance.md)
## 3. Tool Integration - Automated Function Calling
**The Challenge**: AI agents often need to interact with external systems, APIs, or perform calculations. Manual tool integration is complex and error-prone.
**The Solution**: RefinireAgent automatically detects when to use tools and executes them seamlessly. Simply provide decorated functions, and the agent handles tool selection, parameter extraction, and execution automatically.
**Key Benefits**:
- **Zero Configuration**: Decorated functions are automatically available as tools
- **Intelligent Selection**: Agent chooses appropriate tools based on user requests
- **Error Handling**: Built-in retry and error recovery for tool execution
- **Extensible**: Easy to add custom tools for your specific use cases
```python
from refinire import RefinireAgent, tool
@tool
def calculate(expression: str) -> float:
    """Calculate mathematical expressions"""
    # Note: eval is used here for brevity; avoid eval on untrusted input
    return eval(expression)

@tool
def get_weather(city: str) -> str:
    """Get weather for a city"""
    return f"Weather in {city}: Sunny, 22°C"
# Agent with tools
agent = RefinireAgent(
name="tool_assistant",
generation_instructions="Answer questions using tools",
tools=[calculate, get_weather],
model="gpt-4o-mini"
)
result = agent.run("What's the weather in Tokyo? Also, what's 15 * 23?")
print(result.content) # Automatically answers both questions
```
### MCP Server Integration - Model Context Protocol
RefinireAgent natively supports **MCP (Model Context Protocol) servers**, providing standardized access to external data sources and tools:
```python
from refinire import RefinireAgent
# MCP server integrated agent
agent = RefinireAgent(
name="mcp_agent",
generation_instructions="Use MCP server tools to accomplish tasks",
mcp_servers=[
"stdio://filesystem-server", # Local filesystem access
"http://localhost:8000/mcp", # Remote API server
"stdio://database-server --config db.json" # Database access
],
model="gpt-4o-mini"
)
# MCP tools become automatically available
result = agent.run("Analyze project files and include database information in your report")
```
**MCP Server Types:**
- **stdio servers**: Run as local subprocess
- **HTTP servers**: Remote HTTP endpoints
- **WebSocket servers**: Real-time communication support
**Automatic Features:**
- Tool auto-discovery from MCP servers
- Dynamic tool registration and execution
- Error handling and retry logic
- Parallel management of multiple servers
**📖 Tutorial:** [Advanced Features](docs/tutorials/advanced.md) | **Details:** [Composable Flow Architecture](docs/composable-flow-architecture.md)
## 4. Automatic Parallel Processing: Dramatic Performance Boost
**The Challenge**: Sequential processing of independent tasks creates unnecessary bottlenecks. Manual async implementation is complex and error-prone.
**The Solution**: Refinire's parallel processing automatically identifies independent operations and executes them simultaneously. Simply wrap operations in a `parallel` block, and the system handles all async coordination.
**Key Benefits**:
- **Automatic Optimization**: System identifies parallelizable operations
- **Dramatic Speedup**: 4x+ performance improvements are common
- **Zero Complexity**: No async/await or thread management required
- **Scalable**: Configurable worker pools adapt to your workload
Dramatically improve performance with parallel execution:
```python
from refinire import Flow, FunctionStep
import asyncio
# Define parallel processing with DAG structure
flow = Flow(start="preprocess", steps={
"preprocess": FunctionStep("preprocess", preprocess_text),
"parallel_analysis": {
"parallel": [
FunctionStep("sentiment", analyze_sentiment),
FunctionStep("keywords", extract_keywords),
FunctionStep("topic", classify_topic),
FunctionStep("readability", calculate_readability)
],
"next_step": "aggregate",
"max_workers": 4
},
"aggregate": FunctionStep("aggregate", combine_results)
})
# Sequential execution → Parallel execution (significant speedup)
result = await flow.run("Analyze this comprehensive text...")
```
Run complex analysis tasks simultaneously without manual async implementation.
**📖 Tutorial:** [Advanced Features](docs/tutorials/advanced.md) | **Details:** [Composable Flow Architecture](docs/composable-flow-architecture.md)
### Conditional Intelligence
```python
# AI that makes decisions
from refinire import Flow, ConditionStep, RefinireAgent

def route_by_complexity(ctx):
    return "simple" if len(ctx.user_input) < 50 else "complex"

flow = Flow({
    "router": ConditionStep("router", route_by_complexity, "simple", "complex"),
    "simple": RefinireAgent(name="simple", generation_instructions="Quick response"),
    "complex": RefinireAgent(name="expert", generation_instructions="Deep analysis")
})
```
### Parallel Processing: Dramatic Performance Boost
```python
from refinire import Flow, FunctionStep
# Process multiple analysis tasks simultaneously
flow = Flow(start="preprocess", steps={
"preprocess": FunctionStep("preprocess", preprocess_text),
"parallel_analysis": {
"parallel": [
FunctionStep("sentiment", analyze_sentiment),
FunctionStep("keywords", extract_keywords),
FunctionStep("topic", classify_topic),
FunctionStep("readability", calculate_readability)
],
"next_step": "aggregate",
"max_workers": 4
},
"aggregate": FunctionStep("aggregate", combine_results)
})
# Sequential execution → Parallel execution (significant speedup)
result = await flow.run("Analyze this comprehensive text...")
```
**Intelligence flows naturally through your logic, now with lightning speed.**
---
## Interactive Conversations
```python
from refinire import create_simple_interactive_pipeline
def completion_check(result):
    return "finished" in str(result).lower()
# Multi-turn conversation agent
pipeline = create_simple_interactive_pipeline(
name="conversation_agent",
instructions="Have a natural conversation with the user.",
completion_check=completion_check,
max_turns=10,
model="gpt-4o-mini"
)
# Natural conversation flow
result = pipeline.run_interactive("Hello, I need help with my project")
while not result.is_complete:
    user_input = input(f"Turn {result.turn}: ")
    result = pipeline.continue_interaction(user_input)
print("Conversation complete:", result.content)
```
**Conversations that remember, understand, and evolve.**
---
## Monitoring and Insights
### Real-time Agent Analytics
```python
# Search and analyze your AI agents
from refinire import get_global_registry  # assumed import path for the trace registry

registry = get_global_registry()
# Find specific patterns
customer_flows = registry.search_by_agent_name("customer_support")
performance_data = registry.complex_search(
flow_name_pattern="support",
status="completed",
min_duration=100
)
# Understand performance patterns
for flow in performance_data:
    print(f"Flow: {flow.flow_name}")
    print(f"Average response time: {flow.avg_duration}ms")
    print(f"Success rate: {flow.success_rate}%")
```
### Quality Monitoring
```python
# Automatic quality tracking
quality_flows = registry.search_by_quality_threshold(min_score=80.0)
improvement_candidates = registry.search_by_quality_threshold(max_score=70.0)
# Continuous improvement insights
print(f"High-quality flows: {len(quality_flows)}")
print(f"Improvement opportunities: {len(improvement_candidates)}")
```
**Your AI's performance becomes visible, measurable, improvable.**
---
## Installation & Quick Start
### Install
```bash
pip install refinire
```
### Your First Agent (30 seconds)
```python
from refinire import RefinireAgent
# Create
agent = RefinireAgent(
name="hello_world",
generation_instructions="You are a friendly assistant.",
model="gpt-4o-mini"
)
# Run
result = agent.run("Hello!")
print(result.content)
```
### Provider Flexibility
```python
from refinire import get_llm
# Test multiple providers
providers = [
("openai", "gpt-4o-mini"),
("anthropic", "claude-3-haiku-20240307"),
("google", "gemini-1.5-flash"),
("ollama", "llama3.1:8b")
]
for provider, model in providers:
    try:
        llm = get_llm(provider=provider, model=model)
        print(f"✓ {provider}: {model} - Ready")
    except Exception as e:
        print(f"✗ {provider}: {model} - {str(e)}")
```
---
## Advanced Features
### Structured Output
```python
from pydantic import BaseModel
from refinire import RefinireAgent
class WeatherReport(BaseModel):
    location: str
    temperature: float
    condition: str
agent = RefinireAgent(
name="weather_reporter",
generation_instructions="Generate weather reports",
output_model=WeatherReport,
model="gpt-4o-mini"
)
result = agent.run("Weather in Tokyo")
weather = result.content # Typed WeatherReport object
```
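Because `weather` is a typed `WeatherReport`, its fields can be used directly, just like the typed orchestration result earlier:

```python
# weather comes from the snippet above and is a WeatherReport instance
print(f"{weather.location}: {weather.temperature}°C, {weather.condition}")
```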
### Guardrails and Safety
```python
from refinire import RefinireAgent
def content_filter(content: str) -> bool:
    """Filter inappropriate content"""
    return "inappropriate" not in content.lower()
agent = RefinireAgent(
name="safe_assistant",
generation_instructions="Be helpful and appropriate",
output_guardrails=[content_filter],
model="gpt-4o-mini"
)
```
### Custom Tool Integration
```python
from refinire import RefinireAgent, tool
@tool
def web_search(query: str) -> str:
    """Search the web for information"""
    # Your search implementation
    return f"Search results for: {query}"
agent = RefinireAgent(
name="research_assistant",
generation_instructions="Help with research using web search",
tools=[web_search],
model="gpt-4o-mini"
)
```
### Context Management - Intelligent Memory
**The Challenge**: AI agents lose context between conversations and lack awareness of relevant files or code. This leads to repetitive questions and less helpful responses.
**The Solution**: RefinireAgent's context management automatically maintains conversation history, analyzes relevant files, and searches your codebase for pertinent information. The agent builds a comprehensive understanding of your project and maintains it across conversations.
**Key Benefits**:
- **Persistent Memory**: Conversations build upon previous interactions
- **Code Awareness**: Automatic analysis of relevant source files
- **Dynamic Context**: Context adapts based on current conversation topics
- **Intelligent Filtering**: Only relevant information is included to avoid token limits
RefinireAgent provides sophisticated context management for enhanced conversations:
```python
from refinire import RefinireAgent
# Agent with conversation history and file context
agent = RefinireAgent(
name="code_assistant",
generation_instructions="Help with code analysis and improvements",
context_providers_config=[
{
"type": "conversation_history",
"max_items": 10
},
{
"type": "fixed_file",
"file_path": "src/main.py",
"description": "Main application file"
},
{
"type": "source_code",
"base_path": "src/",
"file_patterns": ["*.py"],
"max_files": 5
}
],
model="gpt-4o-mini"
)
# Context is automatically managed across conversations
result = agent.run("What's the main function doing?")
print(result.content)
# Context persists and evolves
result = agent.run("How can I improve the error handling?")
print(result.content)
```
**📖 Tutorial:** [Context Management](docs/tutorials/context_management.md) | **Details:** [Context Management Design](docs/context_management.md)
### Dynamic Prompt Generation - Variable Embedding
RefinireAgent's new variable embedding feature enables dynamic prompt generation based on context:
```python
from refinire import RefinireAgent, Context
# Variable embedding capable agent
agent = RefinireAgent(
name="dynamic_responder",
generation_instructions="You are a {{agent_role}} providing {{response_style}} responses to {{user_type}} users. Previous result: {{RESULT}}",
model="gpt-4o-mini"
)
# Context setup
ctx = Context()
ctx.shared_state = {
"agent_role": "customer support expert",
"user_type": "premium",
"response_style": "prompt and detailed"
}
ctx.result = "Customer inquiry reviewed"
# Execute with dynamic prompt
result = agent.run("Handle {{user_type}} user {{priority_level}} request", ctx)
```
**Key Variable Embedding Features:**
- **`{{RESULT}}`**: Previous step execution result
- **`{{EVAL_RESULT}}`**: Detailed evaluation information
- **`{{custom_variables}}`**: Any value from `ctx.shared_state`
- **Real-time Substitution**: Dynamic prompt generation at runtime
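A minimal sketch of the reserved variables in a two-step hand-off, assuming `{{RESULT}}` and `{{EVAL_RESULT}}` are resolved from the Context produced by the previous agent as described above:

```python
from refinire import RefinireAgent, Context

analyst = RefinireAgent(
    name="analyst",
    generation_instructions="Analyze the input thoroughly",
    evaluation_instructions="Rate analysis quality 0-100",
    threshold=70.0,
    model="gpt-4o-mini"
)

reviewer = RefinireAgent(
    name="reviewer",
    # Assumption: {{RESULT}} / {{EVAL_RESULT}} expand to the analyst's output and evaluation
    generation_instructions="Improve this analysis: {{RESULT}}. Evaluation so far: {{EVAL_RESULT}}",
    model="gpt-4o-mini"
)

ctx = Context()
ctx = analyst.run("Summarize customer churn drivers", ctx)
ctx = reviewer.run("Address the weakest points", ctx)
print(ctx.result)
```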
### Context-Based Result Access
**The Challenge**: Chaining multiple AI agents requires complex data passing and state management. Results from one agent need to flow seamlessly to the next.
**The Solution**: Refinire's Context system automatically tracks agent results, evaluation data, and shared state. Agents can access previous results, evaluation scores, and custom data without manual state management.
**Key Benefits**:
- **Automatic State Management**: Context handles data flow between agents
- **Rich Result Access**: Access not just outputs but also evaluation scores and metadata
- **Flexible Data Storage**: Store custom data for complex workflow requirements
- **Seamless Integration**: No boilerplate code for agent communication
Access agent results and evaluation data through Context for seamless workflow integration:
```python
from refinire import RefinireAgent, Context, create_evaluated_agent
# Create agent with evaluation
agent = create_evaluated_agent(
name="analyzer",
generation_instructions="Analyze the input thoroughly",
evaluation_instructions="Rate analysis quality 0-100",
threshold=80
)
# Run with Context
ctx = Context()
result_ctx = agent.run("Analyze this data", ctx)
# Simple result access
print(f"Result: {result_ctx.result}")
# Evaluation result access
if result_ctx.evaluation_result:
    score = result_ctx.evaluation_result["score"]
    passed = result_ctx.evaluation_result["passed"]
    feedback = result_ctx.evaluation_result["feedback"]
# Agent chain data passing
next_agent = create_simple_agent("summarizer", "Create summaries")
summary_ctx = next_agent.run(f"Summarize: {result_ctx.result}", result_ctx)
# Access previous agent outputs
analyzer_output = summary_ctx.prev_outputs["analyzer"]
summarizer_output = summary_ctx.prev_outputs["summarizer"]
# Custom data storage
result_ctx.shared_state["custom_data"] = {"key": "value"}
```
**Seamless data flow between agents with automatic result tracking.**
---
## Comprehensive Observability - Automatic Tracing
**The Challenge**: Debugging AI workflows and understanding agent behavior in production requires visibility into execution flows, performance metrics, and failure patterns. Manual logging is insufficient for complex multi-agent systems.
**The Solution**: Refinire provides comprehensive tracing capabilities with zero configuration. Every agent execution, workflow step, and evaluation is automatically captured and can be exported to industry-standard observability platforms like Grafana Tempo and Jaeger.
**Key Benefits**:
- **Zero Configuration**: Built-in console tracing works out of the box
- **Production Ready**: OpenTelemetry integration with OTLP export
- **Automatic Span Creation**: All agents and workflow steps traced automatically
- **Rich Metadata**: Captures inputs, outputs, evaluation scores, and performance metrics
- **Industry Standard**: Compatible with existing observability infrastructure
### Built-in Console Tracing
Every agent execution shows detailed, color-coded trace information by default:
```python
from refinire import RefinireAgent
agent = RefinireAgent(
name="traced_agent",
generation_instructions="You are a helpful assistant.",
model="gpt-4o-mini"
)
result = agent.run("What is quantum computing?")
# Console automatically shows:
# 🔵 [Instructions] You are a helpful assistant.
# 🟢 [User Input] What is quantum computing?
# 🟡 [LLM Output] Quantum computing is a revolutionary computing paradigm...
# ✅ [Result] Operation completed successfully
```
### Production OpenTelemetry Integration
For production environments, enable OpenTelemetry tracing with a single function call:
```python
from refinire import (
RefinireAgent,
enable_opentelemetry_tracing,
disable_opentelemetry_tracing
)
# Enable comprehensive tracing
enable_opentelemetry_tracing(
service_name="my-agent-app",
otlp_endpoint="http://localhost:4317", # Grafana Tempo endpoint
console_output=True # Also show console traces
)
# All agent executions are now automatically traced
agent = RefinireAgent(
name="production_agent",
generation_instructions="Generate high-quality responses",
evaluation_instructions="Rate quality from 0-100",
threshold=85.0,
model="gpt-4o-mini"
)
# This execution creates detailed spans with:
# - Agent name: "RefinireAgent(production_agent)"
# - Input/output text and instructions
# - Model name and parameters
# - Evaluation scores and pass/fail status
# - Success/error status and timing
result = agent.run("Explain machine learning concepts")
# Clean up when done
disable_opentelemetry_tracing()
```
### Disabling All Tracing
To completely disable all tracing (both console and OpenTelemetry):
```python
from refinire import disable_tracing
# Disable all tracing output
disable_tracing()
# Now all agent executions will run without any trace output
agent = RefinireAgent(name="silent_agent", model="gpt-4o-mini")
result = agent.run("This will execute silently") # No trace output
```
### Environment Variable Configuration
Use environment variables for streamlined configuration:
```bash
# Set tracing configuration
export REFINIRE_TRACE_OTLP_ENDPOINT="http://localhost:4317"
export REFINIRE_TRACE_SERVICE_NAME="my-agent-service"
export REFINIRE_TRACE_RESOURCE_ATTRIBUTES="environment=production,team=ai"
# Use oneenv for easy configuration management
oneenv init --template refinire.tracing
```
### Automatic Span Coverage
When tracing is enabled, Refinire automatically creates spans for:
#### **RefinireAgent Spans**
- Input text, generation instructions, and output
- Model name and evaluation scores
- Success/failure status and error details
#### **Workflow Step Spans**
- **ConditionStep**: Boolean results and routing decisions
- **FunctionStep**: Function execution and next steps
- **ParallelStep**: Parallel execution timing and success rates
#### **Flow Workflow Spans**
- Complete workflow execution with step counts
- Flow input/output and completion status
- Step names and execution sequence
### Grafana Tempo Integration
Set up complete observability with Grafana Tempo:
```yaml
# tempo.yaml
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces
```
```bash
# Start Tempo
./tempo -config.file=tempo.yaml
# Run your traced application
python my_agent_app.py
# View traces in Grafana at http://localhost:3000
# Search: {service.name="my-agent-service"}
```
### Advanced Workflow Tracing
For complex workflows, add custom spans around groups of operations:
```python
from refinire import get_tracer, enable_opentelemetry_tracing
enable_opentelemetry_tracing(
service_name="workflow-app",
otlp_endpoint="http://localhost:4317"
)
tracer = get_tracer("workflow-tracer")
with tracer.start_as_current_span("multi-agent-workflow") as span:
    span.set_attribute("workflow.type", "analysis-pipeline")
    span.set_attribute("user.id", "user123")

    # These agents automatically create spans within the workflow span
    analyzer = RefinireAgent(name="analyzer", model="gpt-4o-mini")
    expert = RefinireAgent(name="expert", model="gpt-4o-mini")

    # Each call automatically creates detailed spans
    analysis = analyzer.run("Analyze this data")
    response = expert.run("Provide expert analysis")

    span.set_attribute("workflow.status", "completed")
```
**📖 Complete Guide:** [Tracing and Observability Tutorial](docs/tutorials/tracing.md) - Comprehensive setup and usage
**🔗 Integration Examples:**
- [OpenTelemetry Example](examples/opentelemetry_tracing_example.py) - Basic OpenTelemetry setup
- [Grafana Tempo Example](examples/grafana_tempo_tracing_example.py) - Complete Tempo integration
- [Environment Configuration](examples/oneenv_tracing_example.py) - oneenv configuration management
---
## Why Refinire?
### For Developers
- **Immediate productivity**: Build AI agents in minutes, not days
- **Provider freedom**: Switch between OpenAI, Anthropic, Google, Ollama seamlessly
- **Quality assurance**: Automatic evaluation and improvement
- **Transparent operations**: Understand exactly what your AI is doing
### For Teams
- **Consistent architecture**: Unified patterns across all AI implementations
- **Reduced maintenance**: Automatic quality management and error handling
- **Performance visibility**: Real-time monitoring and analytics
- **Future-proof**: Provider-agnostic design protects your investment
### For Organizations
- **Faster time-to-market**: Dramatically reduced development cycles
- **Lower operational costs**: Automatic optimization and provider flexibility
- **Quality compliance**: Built-in evaluation and monitoring
- **Scalable architecture**: From prototype to production seamlessly
---
## Examples
Explore comprehensive examples in the `examples/` directory:
### Core Features
- `standalone_agent_demo.py` - Independent agent execution
- `trace_search_demo.py` - Monitoring and analytics
- `llm_pipeline_example.py` - RefinireAgent with tool integration
- `interactive_pipeline_example.py` - Multi-turn conversation agents
### Flow Architecture
- `flow_show_example.py` - Workflow visualization
- `simple_flow_test.py` - Basic flow construction
- `router_agent_example.py` - Conditional routing
- `dag_parallel_example.py` - High-performance parallel processing
### Specialized Agents
- `clarify_agent_example.py` - Requirement clarification
- `notification_agent_example.py` - Event notifications
- `extractor_agent_example.py` - Data extraction
- `validator_agent_example.py` - Content validation
### Context Management
- `context_management_basic.py` - Basic context provider usage
- `context_management_advanced.py` - Advanced context with source code analysis
- `context_management_practical.py` - Real-world context management scenarios
### Tracing and Observability
- `opentelemetry_tracing_example.py` - Basic OpenTelemetry setup and usage
- `grafana_tempo_tracing_example.py` - Complete Grafana Tempo integration
- `oneenv_tracing_example.py` - Environment configuration with oneenv
---
## Supported Environments
- **Python**: 3.10+
- **Platforms**: Windows, Linux, macOS
- **Dependencies**: OpenAI Agents SDK 0.0.17+
---
## License & Credits
MIT License. Built with gratitude on the [OpenAI Agents SDK](https://github.com/openai/openai-agents-python).
**Refinire**: Where complexity becomes clarity, and development becomes art.
---
## Release Notes
### v0.2.10 - MCP Server Integration
### 🔌 Model Context Protocol (MCP) Server Support
- **Native MCP Integration**: RefinireAgent now supports MCP (Model Context Protocol) servers through the `mcp_servers` parameter
- **Multiple Server Types**: Support for stdio, HTTP, and WebSocket MCP servers
- **Automatic Tool Discovery**: MCP server tools are automatically discovered and integrated
- **OpenAI Agents SDK Compatibility**: Leverages OpenAI Agents SDK MCP capabilities with simplified configuration
```python
# MCP server integrated agent
agent = RefinireAgent(
name="mcp_agent",
generation_instructions="Use MCP server tools to accomplish tasks",
mcp_servers=[
"stdio://filesystem-server", # Local filesystem access
"http://localhost:8000/mcp", # Remote API server
"stdio://database-server --config db.json" # Database access
],
model="gpt-4o-mini"
)
# MCP tools become automatically available
result = agent.run("Analyze project files and include database information in your report")
```
### 🚀 MCP Integration Benefits
- **Standardized Tool Access**: Use industry-standard MCP protocol for tool integration
- **Ecosystem Compatibility**: Works with existing MCP server implementations
- **Scalable Architecture**: Support for multiple concurrent MCP servers
- **Error Handling**: Built-in retry logic and error management for MCP connections
- **Context Integration**: MCP servers work seamlessly with RefinireAgent's context management system
### 💡 MCP Server Types and Use Cases
- **stdio servers**: Local subprocess execution for file system, databases, development tools
- **HTTP servers**: Remote API endpoints for web services and cloud integrations
- **WebSocket servers**: Real-time communication support for streaming data and live updates
### 🔧 Implementation Details
- **Minimal Code Changes**: Simple `mcp_servers` parameter addition maintains backward compatibility
- **SDK Pass-through**: Direct integration with OpenAI Agents SDK MCP functionality
- **Comprehensive Examples**: Complete MCP integration examples in `examples/mcp_server_example.py`
- **Documentation**: Updated guides showing MCP server configuration and usage patterns
**📖 Detailed Guide:** [MCP Server Example](examples/mcp_server_example.py) - Complete MCP integration demonstration
---
### v0.2.11 - Comprehensive Observability and Automatic Tracing
### 🔍 Complete OpenTelemetry Integration
- **Automatic Agent Tracing**: All RefinireAgent executions automatically create detailed spans with zero configuration
- **Workflow Step Tracing**: ConditionStep, FunctionStep, and ParallelStep operations automatically tracked
- **Flow-Level Spans**: Complete workflow execution visibility with comprehensive metadata
- **Rich Span Metadata**: Captures inputs, outputs, evaluation scores, model parameters, and performance metrics
```python
from refinire import enable_opentelemetry_tracing, RefinireAgent
# Enable comprehensive tracing
enable_opentelemetry_tracing(
service_name="my-agent-app",
otlp_endpoint="http://localhost:4317"
)
# All executions automatically create detailed spans
agent = RefinireAgent(
name="traced_agent",
generation_instructions="Generate responses",
evaluation_instructions="Rate quality 0-100",
threshold=85.0,
model="gpt-4o-mini"
)
# Automatic span with rich metadata
result = agent.run("Explain quantum computing")
```
### 🎯 Zero-Configuration Observability
- **Built-in Console Tracing**: Color-coded trace output works out of the box
- **Environment Variable Configuration**: `REFINIRE_TRACE_*` variables for streamlined setup
- **oneenv Template Support**: `oneenv init --template refinire.tracing` for easy configuration
- **Production Ready**: Industry-standard OTLP export to Grafana Tempo, Jaeger, and other platforms
### 🚀 Automatic Span Coverage
- **RefinireAgent Spans**: Input/output text, instructions, model name, evaluation scores, success/error status
- **ConditionStep Spans**: Boolean results, if_true/if_false branches, routing decisions
- **FunctionStep Spans**: Function name, execution success, next step information
- **ParallelStep Spans**: Parallel execution timing, success rates, worker utilization
- **Flow Spans**: Complete workflow metadata, step counts, execution sequence, completion status
### 📊 Advanced Observability Features
- **OpenAI Agents SDK Integration**: Leverages built-in tracing abstractions (`agent_span`, `custom_span`)
- **OpenTelemetry Bridge**: Seamless connection between Agents SDK spans and OpenTelemetry
- **Grafana Tempo Support**: Complete setup guide and integration examples
- **Custom Span Support**: Add business logic spans while maintaining automatic coverage
### 📖 Comprehensive Documentation
- **English Tutorial**: [Tracing and Observability](docs/tutorials/tracing.md) - Complete setup and usage guide
- **Japanese Tutorial**: [Tracing and Observability (Japanese)](docs/tutorials/tracing_ja.md) - Comprehensive setup and usage guide
- **Integration Examples**: Complete examples for OpenTelemetry, Grafana Tempo, and environment configuration
- **Best Practices**: Guidelines for production deployment and performance optimization
### 🔧 Technical Implementation
- **Minimal Overhead**: Efficient span creation with automatic metadata collection
- **Error Handling**: Robust error capture and reporting in trace data
- **Performance Monitoring**: Automatic timing and performance metrics collection
- **Memory Efficiency**: Optimized trace data structure and export batching
### 💡 Developer Benefits
- **Production Debugging**: Complete visibility into multi-agent workflows and complex flows
- **Performance Optimization**: Identify bottlenecks and optimization opportunities
- **Quality Monitoring**: Track evaluation scores and improvement patterns
- **Zero Maintenance**: Automatic tracing with no manual instrumentation required
**📖 Complete Guides:**
- [Tracing Tutorial](docs/tutorials/tracing.md) - Comprehensive setup and integration guide
- [Grafana Tempo Example](examples/grafana_tempo_tracing_example.py) - Production observability setup
---
### v0.2.9 - Variable Embedding and Advanced Flow Features
### 🎯 Dynamic Variable Embedding System
- **`{{variable}}` Syntax**: Support for dynamic variable substitution in user input and generation_instructions
- **Reserved Variables**: Access previous step results and evaluations with `{{RESULT}}` and `{{EVAL_RESULT}}`
- **Context-Based**: Dynamically reference any variable from `ctx.shared_state`
- **Real-time Substitution**: Generate and customize prompts dynamically at runtime
- **Agent Flexibility**: Same agent can behave differently based on context state
```python
# Dynamic prompt generation example
agent = RefinireAgent(
name="dynamic_agent",
generation_instructions="You are a {{agent_role}} providing {{response_style}} responses for {{target_audience}}. Previous result: {{RESULT}}",
model="gpt-4o-mini"
)
ctx = Context()
ctx.shared_state = {
"agent_role": "technical expert",
"target_audience": "developers",
"response_style": "detailed technical explanations"
}
result = agent.run("Handle {{user_type}} request for {{service_level}} at {{response_time}}", ctx)
```
### 📚 Complete Flow Guide
- **Step-by-Step Guide**: [Complete Flow Guide](docs/tutorials/flow_complete_guide_en.md) for comprehensive workflow construction
- **Bilingual Support**: [Japanese Guide](docs/tutorials/flow_complete_guide_ja.md) also available
- **Practical Examples**: Progressive learning from basic flows to complex parallel processing
- **Best Practices**: Guidelines for efficient flow design and performance optimization
- **Troubleshooting**: Common issues and their solutions
### 🔧 Enhanced Context Management
- **Variable Embedding Integration**: Added variable embedding examples to [Context Management Guide](docs/tutorials/context_management.md)
- **Dynamic Prompt Generation**: Change agent behavior based on context state
- **Workflow Integration**: Patterns for Flow and context provider collaboration
- **Memory Management**: Best practices for efficient context usage
### 🛠️ Developer Experience Improvements
- **Step Compatibility Fix**: Test environment preparation for `run()` to `run_async()` migration
- **Test Organization**: Organized test files from project root to tests/ directory
- **Performance Validation**: Comprehensive testing and performance optimization for variable embedding
- **Error Handling**: Robust error handling and fallbacks in variable substitution
### 🚀 Technical Improvements
- **Regex Optimization**: Efficient variable pattern matching and context substitution
- **Type Safety**: Proper type conversion and exception handling in variable embedding
- **Memory Efficiency**: Optimized variable processing for large-scale contexts
- **Backward Compatibility**: Full compatibility with existing RefinireAgent and Flow implementations
### 💡 Practical Benefits
- **Development Efficiency**: Dynamic prompt generation enables multiple roles with single agent
- **Maintainability**: Variable-based templating makes prompt management and updates easier
- **Flexibility**: Runtime customization of agent behavior based on execution state
- **Reusability**: Creation and sharing of generic prompt templates
**📖 Detailed Guides:**
- [Complete Flow Guide](docs/tutorials/flow_complete_guide_en.md) - Comprehensive workflow construction guide
- [Context Management](docs/tutorials/context_management.md) - Comprehensive context management, including variable embedding
---
### v0.2.8 - Revolutionary Tool Integration
### 🛠️ Revolutionary Tool Integration
- **New @tool Decorator**: Introduced intuitive `@tool` decorator for seamless tool creation
- **Simplified Imports**: Clean `from refinire import tool` replaces complex external SDK knowledge
- **Enhanced Debugging**: Added `get_tool_info()` and `list_tools()` for better tool introspection
- **Backward Compatibility**: Full support for existing `function_tool` decorated functions
- **Simplified Tool Development**: Streamlined tool creation process with intuitive decorator syntax
### 📚 Documentation Revolution
- **Concept-Driven Explanations**: READMEs now focus on Challenge-Solution-Benefits structure
- **Tutorial Integration**: Every feature section links to step-by-step tutorials
- **Improved Clarity**: Reduced cognitive load with clear explanations before code examples
- **Bilingual Enhancement**: Both English and Japanese documentation significantly improved
- **User-Centric Approach**: Documentation redesigned from developer perspective
### 🔄 Developer Experience Transformation
- **Unified Import Strategy**: All tool functionality available from single `refinire` package
- **Future-Proof Architecture**: Tool system insulated from external SDK changes
- **Enhanced Metadata**: Rich tool information for debugging and development
- **Intelligent Error Handling**: Better error messages and troubleshooting guidance
- **Streamlined Workflow**: From idea to working tool in under 5 minutes
### 🚀 Quality & Performance
- **Context-Based Evaluation**: New `ctx.evaluation_result` for workflow integration
- **Comprehensive Testing**: 100% test coverage for all new tool functionality
- **Migration Examples**: Complete migration guides and comparison demonstrations
- **API Consistency**: Unified patterns across all Refinire components
- **Zero Breaking Changes**: Existing code continues to work while new features enhance capability
### 💡 Key Benefits for Users
- **Faster Tool Development**: Significantly reduced tool creation time with streamlined workflow
- **Reduced Learning Curve**: No need to understand external SDK complexities
- **Better Debugging**: Rich metadata and introspection capabilities
- **Future Compatibility**: Protected from external SDK breaking changes
- **Intuitive Development**: Natural Python decorator patterns familiar to all developers
**This release represents a major step forward in making Refinire the most developer-friendly AI agent platform available.**
Raw data
{
"_id": null,
"home_page": null,
"name": "refinire",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "agents, ai, llm, orchestration, workflow",
"author": null,
"author_email": "Kitfactory <kitfactory@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/41/cd/0f97eee55ebbc1459e7a36f60d18f3360e6b998e0b2c1de3457f11a2d1a7/refinire-0.2.16.tar.gz",
"platform": null,
"description": "# Refinire \u2728 - Refined Simplicity for Agentic AI\n\n[](https://pepy.tech/projects/refinire)\n[](https://www.python.org/downloads/)\n[](https://github.com/openai/openai-agents-python)\n[]\n\n**Transform ideas into working AI agents\u2014intuitive agent framework**\n\n---\n\n## Why Refinire?\n\n- **Simple installation** \u2014 Just `pip install refinire`\n- **Simplify LLM-specific configuration** \u2014 No complex setup required\n- **Unified API across providers** \u2014 OpenAI / Anthropic / Google / Ollama \n- **Built-in evaluation & regeneration loops** \u2014 Quality assurance out of the box\n- **One-line parallel processing** \u2014 Complex async operations with just `{\"parallel\": [...]}`\n- **Comprehensive observability** \u2014 Automatic tracing with OpenTelemetry integration\n\n## 30-Second Quick Start\n\n```bash\npip install refinire\n```\n\n```python\nfrom refinire import RefinireAgent\n\n# Simple AI agent\nagent = RefinireAgent(\n name=\"assistant\",\n generation_instructions=\"You are a helpful assistant.\",\n model=\"gpt-4o-mini\"\n)\n\nresult = agent.run(\"Hello!\")\nprint(result.content)\n```\n\n## The Core Components\n\nRefinire provides key components to support AI agent development.\n\n## RefinireAgent - Integrated Generation and Evaluation\n\n```python\nfrom refinire import RefinireAgent\n\n# Agent with automatic evaluation\nagent = RefinireAgent(\n name=\"quality_writer\",\n generation_instructions=\"Generate high-quality, informative content with clear structure and engaging writing style\",\n evaluation_instructions=\"\"\"Evaluate the content quality on a scale of 0-100 based on:\n - Clarity and readability (0-25 points)\n - Accuracy and factual correctness (0-25 points) \n - Structure and organization (0-25 points)\n - Engagement and writing style (0-25 points)\n \n Provide your evaluation as:\n Score: [0-100]\n Comments:\n - [Specific feedback on strengths]\n - [Areas for improvement]\n - [Suggestions for enhancement]\"\"\",\n threshold=85.0, # Automatically regenerate if score < 85\n max_retries=3,\n model=\"gpt-4o-mini\"\n)\n\nresult = agent.run(\"Write an article about AI\")\nprint(f\"Quality Score: {result.evaluation_score}\")\nprint(f\"Content: {result.content}\")\n```\n\n## Orchestration Mode - Multi-Agent Coordination\n\n**The Challenge**: Building complex multi-agent systems requires standardized communication protocols between agents. Different output formats make agent coordination difficult and error-prone.\n\n**The Solution**: RefinireAgent's orchestration mode provides structured JSON output with standardized status, result, reasoning, and next-step hints. 
This enables seamless integration in multi-agent workflows where agents need to communicate their status and recommend next actions.\n\n**Key Benefits**:\n- **Standardized Communication**: Unified JSON protocol for agent-to-agent interaction\n- **Smart Coordination**: Agents provide hints about recommended next steps\n- **Error Handling**: Clear status reporting (completed/failed) for robust workflows\n- **Type Safety**: Structured output with optional Pydantic model integration\n\n### Basic Orchestration Mode\n\n```python\nfrom refinire import RefinireAgent\n\n# Agent configured for orchestration\norchestrator_agent = RefinireAgent(\n name=\"analysis_worker\",\n generation_instructions=\"Analyze the provided data and provide insights\",\n orchestration_mode=True, # Enable structured output\n model=\"gpt-4o-mini\"\n)\n\n# Returns structured JSON instead of Context\nresult = orchestrator_agent.run(\"Analyze user engagement trends\")\n\n# Standard orchestration response format\nprint(f\"Status: {result['status']}\") # \"completed\" or \"failed\"\nprint(f\"Result: {result['result']}\") # Analysis output\nprint(f\"Reasoning: {result['reasoning']}\") # Why this result was generated\nprint(f\"Next Task: {result['next_hint']['task']}\") # Recommended next step\nprint(f\"Confidence: {result['next_hint']['confidence']}\") # Confidence level (0-1)\n```\n\n### Orchestration with Structured Output\n\n```python\nfrom pydantic import BaseModel\nfrom refinire import RefinireAgent\n\nclass AnalysisReport(BaseModel):\n findings: list[str]\n recommendations: list[str]\n confidence_score: float\n\n# Orchestration mode with typed result\nagent = RefinireAgent(\n name=\"structured_analyst\",\n generation_instructions=\"Generate detailed analysis report\",\n orchestration_mode=True,\n output_model=AnalysisReport, # Result will be typed\n model=\"gpt-4o-mini\"\n)\n\nresult = agent.run(\"Analyze customer feedback data\")\n\n# result['result'] is now a typed AnalysisReport object\nreport = result['result']\nprint(f\"Findings: {report.findings}\")\nprint(f\"Recommendations: {report.recommendations}\")\nprint(f\"Confidence: {report.confidence_score}\")\n```\n\n### Multi-Agent Workflow Coordination\n\n```python\nfrom refinire import RefinireAgent, Flow, FunctionStep\n\n# Orchestration-enabled agents\ndata_collector = RefinireAgent(\n name=\"data_collector\",\n generation_instructions=\"Collect and prepare data for analysis\",\n orchestration_mode=True,\n model=\"gpt-4o-mini\"\n)\n\nanalyzer = RefinireAgent(\n name=\"analyzer\", \n generation_instructions=\"Perform deep analysis on collected data\",\n orchestration_mode=True,\n model=\"gpt-4o-mini\"\n)\n\nreporter = RefinireAgent(\n name=\"reporter\",\n generation_instructions=\"Generate final report with recommendations\",\n orchestration_mode=True,\n model=\"gpt-4o-mini\"\n)\n\ndef orchestration_router(ctx):\n \"\"\"Route based on agent recommendations\"\"\"\n if hasattr(ctx, 'result') and isinstance(ctx.result, dict):\n next_task = ctx.result.get('next_hint', {}).get('task', 'unknown')\n if next_task == 'analysis':\n return 'analyzer'\n elif next_task == 'reporting':\n return 'reporter'\n return 'end'\n\n# Workflow with orchestration-based routing\nflow = Flow({\n \"collect\": data_collector,\n \"route\": ConditionStep(\"route\", orchestration_router, \"analyzer\", \"end\"),\n \"analyzer\": analyzer,\n \"report\": reporter\n})\n\nresult = await flow.run(\"Process customer survey data\")\n```\n\n**Key Orchestration Features**:\n- **Status Tracking**: Clear 
## Streaming Output - Real-time Response Display

**Stream responses in real-time** for improved user experience and immediate feedback. Both RefinireAgent and Flow support streaming output, perfect for chat interfaces, live dashboards, and interactive applications.

### Basic RefinireAgent Streaming

```python
from refinire import RefinireAgent

agent = RefinireAgent(
    name="streaming_assistant",
    generation_instructions="Provide detailed, helpful responses",
    model="gpt-4o-mini"
)

# Stream response chunks as they arrive
async for chunk in agent.run_streamed("Explain quantum computing"):
    print(chunk, end="", flush=True)  # Real-time display
```

### Streaming with Callback Processing

```python
# Custom processing for each chunk
chunks_received = []
def process_chunk(chunk: str):
    chunks_received.append(chunk)
    # Send to websocket, update UI, save to file, etc.

async for chunk in agent.run_streamed(
    "Write a Python tutorial",
    callback=process_chunk
):
    print(chunk, end="", flush=True)

print(f"\nReceived {len(chunks_received)} chunks")
```

### Context-Aware Streaming

```python
from refinire import Context

# Maintain conversation context across streaming responses
ctx = Context()

# First message
async for chunk in agent.run_streamed("Hello, I'm learning Python", ctx=ctx):
    print(chunk, end="", flush=True)

# Context-aware follow-up
ctx.add_user_message("What about async programming?")
async for chunk in agent.run_streamed("What about async programming?", ctx=ctx):
    print(chunk, end="", flush=True)
```

### Flow Streaming

**Flows also support streaming** for complex multi-step workflows:

```python
from refinire import Flow, FunctionStep

flow = Flow({
    "analyze": FunctionStep("analyze", analyze_input),  # analyze_input: your own preprocessing function
    "generate": RefinireAgent(
        name="writer",
        generation_instructions="Write detailed content"
    )
})

# Stream entire flow output
async for chunk in flow.run_streamed("Create a technical article"):
    print(chunk, end="", flush=True)
```

### Structured Output Streaming

**Important**: When using structured output (Pydantic models) with streaming, the response is streamed as **JSON chunks**, not parsed objects:

```python
from pydantic import BaseModel

class Article(BaseModel):
    title: str
    content: str
    tags: list[str]

agent = RefinireAgent(
    name="structured_writer",
    generation_instructions="Generate an article",
    output_model=Article  # Structured output
)

# Streams JSON chunks: {"title": "...", "content": "...", "tags": [...]}
async for json_chunk in agent.run_streamed("Write about AI"):
    print(json_chunk, end="", flush=True)

# For parsed objects, use the non-streaming methods instead:
result = await agent.run_async("Write about AI")
article = result.content  # Returns Article object
```

**Key Streaming Features**:
- **Real-time Output**: Immediate response as content is generated
- **Callback Support**: Custom processing for each chunk
- **Context Continuity**: Streaming works with conversation context
- **Flow Integration**: Stream complex multi-step workflows
- **JSON Streaming**: Structured output streams as JSON chunks
- **Error Handling**: Graceful handling of streaming interruptions
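The error-handling bullet above is easy to overlook in practice. The sketch below shows one defensive pattern for collecting a streamed response; `stream_with_fallback` and the timeout value are this sketch's own choices, while `run_streamed` comes from the examples above.

```python
import asyncio
from refinire import RefinireAgent

agent = RefinireAgent(
    name="resilient_streamer",
    generation_instructions="Provide detailed, helpful responses",
    model="gpt-4o-mini"
)

async def stream_with_fallback(prompt: str, timeout: float = 60.0) -> str:
    """Accumulate streamed chunks, keeping whatever arrived if the stream breaks."""
    chunks: list[str] = []

    async def consume():
        async for chunk in agent.run_streamed(prompt):
            chunks.append(chunk)
            print(chunk, end="", flush=True)

    try:
        await asyncio.wait_for(consume(), timeout=timeout)
    except (asyncio.TimeoutError, ConnectionError) as exc:
        # Keep the partial response instead of losing it
        print(f"\n[stream interrupted: {exc!r}]")
    return "".join(chunks)

# asyncio.run(stream_with_fallback("Explain quantum computing"))
```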
## Flow Architecture: Orchestrate Complex Workflows

**The Challenge**: Building complex AI workflows requires managing multiple agents, conditional logic, parallel processing, and error handling. Traditional approaches lead to rigid, hard-to-maintain code.

**The Solution**: Refinire's Flow Architecture lets you compose workflows from reusable steps. Each step can be a function, condition, parallel execution, or AI agent. Flows handle routing, error recovery, and state management automatically.

**Key Benefits**:
- **Composable Design**: Build complex workflows from simple, reusable components
- **Visual Logic**: Workflow structure is immediately clear from the code
- **Automatic Orchestration**: Flow engine handles execution order and data passing
- **Built-in Parallelization**: Dramatic performance improvements with simple syntax

### Simple Yet Powerful

```python
from refinire import RefinireAgent, Flow, FunctionStep, ConditionStep

# Define your workflow as a composable flow
# analyze_request, route_by_complexity, combine_results: your own functions
flow = Flow({
    "start": FunctionStep("analyze", analyze_request),
    "route": ConditionStep("route", route_by_complexity, "simple", "complex"),
    "simple": RefinireAgent(name="simple", generation_instructions="Quick response"),
    "complex": {
        "parallel": [
            RefinireAgent(name="expert1", generation_instructions="Deep analysis"),
            RefinireAgent(name="expert2", generation_instructions="Alternative perspective")
        ],
        "next_step": "aggregate"
    },
    "aggregate": FunctionStep("combine", combine_results)
})

result = await flow.run("Complex user request")
```

**🎯 Complete Flow Guide**: For comprehensive workflow construction learning, explore our detailed step-by-step guides:

**📖 English**: [Complete Flow Guide](docs/tutorials/flow_complete_guide_en.md) - From basics to advanced parallel processing
**📖 Japanese**: [Flow Complete Guide (Japanese)](docs/tutorials/flow_complete_guide_ja.md) - Comprehensive workflow construction guide

### Flow Design Patterns

**Simple Routing**:
```python
# Automatic routing based on user language
def detect_language(ctx):
    # Checks for hiragana vowels as a rough Japanese-text signal
    return "japanese" if any(char in ctx.user_input for char in "あいうえお") else "english"

flow = Flow({
    "detect": ConditionStep("detect", detect_language, "jp_agent", "en_agent"),
    "jp_agent": RefinireAgent(name="jp", generation_instructions="日本語で丁寧に回答"),  # "Respond politely in Japanese"
    "en_agent": RefinireAgent(name="en", generation_instructions="Respond in English professionally")
})
```

**High-Performance Parallel Analysis**:
```python
# Execute multiple analyses simultaneously
flow = Flow(start="preprocess", steps={
    "preprocess": FunctionStep("preprocess", clean_data),
    "analysis": {
        "parallel": [
            RefinireAgent(name="sentiment", generation_instructions="Perform sentiment analysis"),
            RefinireAgent(name="keywords", generation_instructions="Extract keywords"),
            RefinireAgent(name="summary", generation_instructions="Create summary"),
            RefinireAgent(name="classification", generation_instructions="Classify content")
        ],
        "next_step": "report",
        "max_workers": 4
    },
    "report": FunctionStep("report", generate_final_report)
})
```

**Compose steps like building blocks. Each step can be a function, condition, parallel execution, or LLM pipeline.**

---

## 1. Unified LLM Interface

**The Challenge**: Switching between AI providers requires different SDKs, APIs, and authentication methods. Managing multiple provider integrations creates vendor lock-in and complexity.

**The Solution**: RefinireAgent provides a single, consistent interface across all major LLM providers. Provider selection happens automatically based on your environment configuration, eliminating the need to manage multiple SDKs or rewrite code when switching providers.

**Key Benefits**:
- **Provider Freedom**: Switch between OpenAI, Anthropic, Google, and Ollama without code changes
- **Zero Vendor Lock-in**: Your agent logic remains independent of provider specifics
- **Automatic Resolution**: Environment variables determine the optimal provider automatically
- **Consistent API**: Same method calls work across all providers

```python
from refinire import RefinireAgent

# Just specify the model name; the provider is resolved automatically
agent = RefinireAgent(
    name="assistant",
    generation_instructions="You are a helpful assistant.",
    model="gpt-4o-mini"  # OpenAI
)

# Anthropic, Google, and Ollama are also supported in the same way
agent2 = RefinireAgent(
    name="anthropic_assistant",
    generation_instructions="For Anthropic model",
    model="claude-3-sonnet"  # Anthropic
)

agent3 = RefinireAgent(
    name="google_assistant",
    generation_instructions="For Google Gemini",
    model="gemini-pro"  # Google
)

agent4 = RefinireAgent(
    name="ollama_assistant",
    generation_instructions="For Ollama model",
    model="llama3.1:8b"  # Ollama
)
```

This makes switching between providers and managing API keys extremely simple, greatly increasing development flexibility.

**📖 Tutorial:** [Quickstart Guide](docs/tutorials/quickstart.md) | **Details:** [Unified LLM Interface](docs/unified-llm-interface.md)
## 2. Autonomous Quality Assurance

**The Challenge**: AI outputs can be inconsistent, requiring manual review and regeneration. Quality control becomes a bottleneck in production systems.

**The Solution**: RefinireAgent includes built-in evaluation that automatically assesses output quality and regenerates content when it falls below your standards. This creates a self-improving system that maintains consistent quality without manual intervention.

**Key Benefits**:
- **Automatic Quality Control**: Set thresholds and let the system maintain standards
- **Self-Improving**: Failed outputs trigger regeneration with improved prompts
- **Production Ready**: Consistent quality without manual oversight
- **Configurable Standards**: Define your own evaluation criteria and thresholds

```python
from refinire import RefinireAgent

# Agent with evaluation loop
agent = RefinireAgent(
    name="quality_assistant",
    generation_instructions="Generate helpful responses",
    evaluation_instructions="Rate accuracy and usefulness from 0-100",
    threshold=85.0,
    max_retries=3,
    model="gpt-4o-mini"
)

result = agent.run("Explain quantum computing")
print(f"Evaluation Score: {result.evaluation_score}")
print(f"Content: {result.content}")

# With Context for workflow integration
from refinire import Context
ctx = Context()
result_ctx = agent.run("Explain quantum computing", ctx)
print(f"Evaluation Result: {result_ctx.evaluation_result}")
print(f"Score: {result_ctx.evaluation_result['score']}")
print(f"Passed: {result_ctx.evaluation_result['passed']}")
print(f"Feedback: {result_ctx.evaluation_result['feedback']}")
```

If evaluation falls below the threshold, content is automatically regenerated for consistent high quality.

**📖 Tutorial:** [Advanced Features](docs/tutorials/advanced.md) | **Details:** [Autonomous Quality Assurance](docs/autonomous-quality-assurance.md)

## 3. Tool Integration - Automated Function Calling

**The Challenge**: AI agents often need to interact with external systems, APIs, or perform calculations. Manual tool integration is complex and error-prone.

**The Solution**: RefinireAgent automatically detects when to use tools and executes them seamlessly. Simply provide decorated functions, and the agent handles tool selection, parameter extraction, and execution automatically.

**Key Benefits**:
- **Zero Configuration**: Decorated functions are automatically available as tools
- **Intelligent Selection**: Agent chooses appropriate tools based on user requests
- **Error Handling**: Built-in retry and error recovery for tool execution
- **Extensible**: Easy to add custom tools for your specific use cases

```python
from refinire import RefinireAgent, tool

@tool
def calculate(expression: str) -> float:
    """Calculate mathematical expressions"""
    return eval(expression)  # Demo only: do not use eval() on untrusted input

@tool
def get_weather(city: str) -> str:
    """Get weather for a city"""
    return f"Weather in {city}: Sunny, 22°C"

# Agent with tools
agent = RefinireAgent(
    name="tool_assistant",
    generation_instructions="Answer questions using tools",
    tools=[calculate, get_weather],
    model="gpt-4o-mini"
)

result = agent.run("What's the weather in Tokyo? Also, what's 15 * 23?")
print(result.content)  # Automatically answers both questions
```
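For debugging which tools an agent can see, the v0.2.8 release notes at the end of this README mention `get_tool_info()` and `list_tools()`. Their exact signatures are not documented here, so the call pattern below is an assumption to verify against your installed version.

```python
from refinire import tool, get_tool_info, list_tools

@tool
def ping(host: str) -> str:
    """Trivial example tool."""
    return f"pong from {host}"

# Assumed call patterns; check the actual signatures in your refinire version.
print(list_tools())          # assumed: enumerate known @tool functions
print(get_tool_info(ping))   # assumed: metadata (name, description, schema) for one tool
```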
### MCP Server Integration - Model Context Protocol

RefinireAgent natively supports **MCP (Model Context Protocol) servers**, providing standardized access to external data sources and tools:

```python
from refinire import RefinireAgent

# MCP server integrated agent
agent = RefinireAgent(
    name="mcp_agent",
    generation_instructions="Use MCP server tools to accomplish tasks",
    mcp_servers=[
        "stdio://filesystem-server",                # Local filesystem access
        "http://localhost:8000/mcp",                # Remote API server
        "stdio://database-server --config db.json"  # Database access
    ],
    model="gpt-4o-mini"
)

# MCP tools become automatically available
result = agent.run("Analyze project files and include database information in your report")
```

**MCP Server Types:**
- **stdio servers**: Run as local subprocess
- **HTTP servers**: Remote HTTP endpoints
- **WebSocket servers**: Real-time communication support

**Automatic Features:**
- Tool auto-discovery from MCP servers
- Dynamic tool registration and execution
- Error handling and retry logic
- Parallel management of multiple servers

**📖 Tutorial:** [Advanced Features](docs/tutorials/advanced.md) | **Details:** [Composable Flow Architecture](docs/composable-flow-architecture.md)

## 4. Automatic Parallel Processing: Dramatic Performance Boost

**The Challenge**: Sequential processing of independent tasks creates unnecessary bottlenecks. Manual async implementation is complex and error-prone.

**The Solution**: Refinire's parallel processing automatically identifies independent operations and executes them simultaneously. Simply wrap operations in a `parallel` block, and the system handles all async coordination.

**Key Benefits**:
- **Automatic Optimization**: System identifies parallelizable operations
- **Dramatic Speedup**: 4x+ performance improvements are common
- **Zero Complexity**: No async/await or thread management required
- **Scalable**: Configurable worker pools adapt to your workload

Dramatically improve performance with parallel execution:

```python
from refinire import Flow, FunctionStep
import asyncio

# Define parallel processing with DAG structure
flow = Flow(start="preprocess", steps={
    "preprocess": FunctionStep("preprocess", preprocess_text),
    "parallel_analysis": {
        "parallel": [
            FunctionStep("sentiment", analyze_sentiment),
            FunctionStep("keywords", extract_keywords),
            FunctionStep("topic", classify_topic),
            FunctionStep("readability", calculate_readability)
        ],
        "next_step": "aggregate",
        "max_workers": 4
    },
    "aggregate": FunctionStep("aggregate", combine_results)
})

# Sequential execution → Parallel execution (significant speedup)
result = await flow.run("Analyze this comprehensive text...")
```

Run complex analysis tasks simultaneously without manual async implementation.

**📖 Tutorial:** [Advanced Features](docs/tutorials/advanced.md) | **Details:** [Composable Flow Architecture](docs/composable-flow-architecture.md)
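The steps above are ordinary callables such as `preprocess_text` and `combine_results`, which this README never defines. The sketch below is one possible shape for them, assuming (as the routing examples suggest) that a step function receives the flow Context; reading parallel results from `ctx.prev_outputs` keyed by step name is likewise an assumption to verify.

```python
# Hypothetical step implementations; the ctx-based signature follows the routing
# examples, and ctx.prev_outputs access mirrors the Context section later on.
def preprocess_text(ctx):
    cleaned = " ".join(ctx.user_input.split())  # collapse stray whitespace
    ctx.shared_state["cleaned_text"] = cleaned
    return cleaned

def combine_results(ctx):
    parts = []
    for name in ("sentiment", "keywords", "topic", "readability"):
        parts.append(f"{name}: {ctx.prev_outputs.get(name, '(missing)')}")
    report = "\n".join(parts)
    ctx.shared_state["analysis_report"] = report  # stash for later steps
    return report
```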
### Conditional Intelligence

```python
# AI that makes decisions
def route_by_complexity(ctx):
    return "simple" if len(ctx.user_input) < 50 else "complex"

flow = Flow({
    "router": ConditionStep("router", route_by_complexity, "simple", "complex"),
    "simple": SimpleAgent(),   # SimpleAgent / ExpertAgent: placeholder agents for illustration
    "complex": ExpertAgent()
})
```

### Parallel Processing: Dramatic Performance Boost

```python
from refinire import Flow, FunctionStep

# Process multiple analysis tasks simultaneously
flow = Flow(start="preprocess", steps={
    "preprocess": FunctionStep("preprocess", preprocess_text),
    "parallel_analysis": {
        "parallel": [
            FunctionStep("sentiment", analyze_sentiment),
            FunctionStep("keywords", extract_keywords),
            FunctionStep("topic", classify_topic),
            FunctionStep("readability", calculate_readability)
        ],
        "next_step": "aggregate",
        "max_workers": 4
    },
    "aggregate": FunctionStep("aggregate", combine_results)
})

# Sequential execution → Parallel execution (significant speedup)
result = await flow.run("Analyze this comprehensive text...")
```

**Intelligence flows naturally through your logic, now with lightning speed.**

---

## Interactive Conversations

```python
from refinire import create_simple_interactive_pipeline

def completion_check(result):
    return "finished" in str(result).lower()

# Multi-turn conversation agent
pipeline = create_simple_interactive_pipeline(
    name="conversation_agent",
    instructions="Have a natural conversation with the user.",
    completion_check=completion_check,
    max_turns=10,
    model="gpt-4o-mini"
)

# Natural conversation flow
result = pipeline.run_interactive("Hello, I need help with my project")
while not result.is_complete:
    user_input = input(f"Turn {result.turn}: ")
    result = pipeline.continue_interaction(user_input)

print("Conversation complete:", result.content)
```

**Conversations that remember, understand, and evolve.**

---

## Monitoring and Insights

### Real-time Agent Analytics

```python
# Search and analyze your AI agents
# get_global_registry: assumed to be importable from refinire
registry = get_global_registry()

# Find specific patterns
customer_flows = registry.search_by_agent_name("customer_support")
performance_data = registry.complex_search(
    flow_name_pattern="support",
    status="completed",
    min_duration=100
)

# Understand performance patterns
for flow in performance_data:
    print(f"Flow: {flow.flow_name}")
    print(f"Average response time: {flow.avg_duration}ms")
    print(f"Success rate: {flow.success_rate}%")
```

### Quality Monitoring

```python
# Automatic quality tracking
quality_flows = registry.search_by_quality_threshold(min_score=80.0)
improvement_candidates = registry.search_by_quality_threshold(max_score=70.0)

# Continuous improvement insights
print(f"High-quality flows: {len(quality_flows)}")
print(f"Improvement opportunities: {len(improvement_candidates)}")
```

**Your AI's performance becomes visible, measurable, improvable.**

---

## Installation & Quick Start

### Install

```bash
pip install refinire
```

### Your First Agent (30 seconds)

```python
from refinire import RefinireAgent

# Create
agent = RefinireAgent(
    name="hello_world",
    generation_instructions="You are a friendly assistant.",
    model="gpt-4o-mini"
)

# Run
result = agent.run("Hello!")
print(result.content)
```

### Provider Flexibility

```python
from refinire import get_llm

# Test multiple providers
providers = [
    ("openai", "gpt-4o-mini"),
    ("anthropic", "claude-3-haiku-20240307"),
    ("google", "gemini-1.5-flash"),
    ("ollama", "llama3.1:8b")
]

for provider, model in providers:
    try:
        llm = get_llm(provider=provider, model=model)
        print(f"✓ {provider}: {model} - Ready")
    except Exception as e:
        print(f"✗ {provider}: {model} - {str(e)}")
```
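Building on the `get_llm` loop above, the provider can also come from configuration instead of being hard-coded. `LLM_PROVIDER` and `LLM_MODEL` below are names invented for this sketch, not Refinire settings; only `get_llm(provider=..., model=...)` is taken from the example above.

```python
import os
from refinire import get_llm

# LLM_PROVIDER / LLM_MODEL are this sketch's own convention, not Refinire settings
provider = os.environ.get("LLM_PROVIDER", "openai")
model = os.environ.get("LLM_MODEL", "gpt-4o-mini")

llm = get_llm(provider=provider, model=model)
print(f"Using {provider}:{model}")
```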
---

## Advanced Features

### Structured Output

```python
from pydantic import BaseModel
from refinire import RefinireAgent

class WeatherReport(BaseModel):
    location: str
    temperature: float
    condition: str

agent = RefinireAgent(
    name="weather_reporter",
    generation_instructions="Generate weather reports",
    output_model=WeatherReport,
    model="gpt-4o-mini"
)

result = agent.run("Weather in Tokyo")
weather = result.content  # Typed WeatherReport object
```

### Guardrails and Safety

```python
from refinire import RefinireAgent

def content_filter(content: str) -> bool:
    """Filter inappropriate content"""
    return "inappropriate" not in content.lower()

agent = RefinireAgent(
    name="safe_assistant",
    generation_instructions="Be helpful and appropriate",
    output_guardrails=[content_filter],
    model="gpt-4o-mini"
)
```

### Custom Tool Integration

```python
from refinire import RefinireAgent, tool

@tool
def web_search(query: str) -> str:
    """Search the web for information"""
    # Your search implementation
    return f"Search results for: {query}"

agent = RefinireAgent(
    name="research_assistant",
    generation_instructions="Help with research using web search",
    tools=[web_search],
    model="gpt-4o-mini"
)
```

### Context Management - Intelligent Memory

**The Challenge**: AI agents lose context between conversations and lack awareness of relevant files or code. This leads to repetitive questions and less helpful responses.

**The Solution**: RefinireAgent's context management automatically maintains conversation history, analyzes relevant files, and searches your codebase for pertinent information. The agent builds a comprehensive understanding of your project and maintains it across conversations.

**Key Benefits**:
- **Persistent Memory**: Conversations build upon previous interactions
- **Code Awareness**: Automatic analysis of relevant source files
- **Dynamic Context**: Context adapts based on current conversation topics
- **Intelligent Filtering**: Only relevant information is included to avoid token limits

RefinireAgent provides sophisticated context management for enhanced conversations:

```python
from refinire import RefinireAgent

# Agent with conversation history and file context
agent = RefinireAgent(
    name="code_assistant",
    generation_instructions="Help with code analysis and improvements",
    context_providers_config=[
        {
            "type": "conversation_history",
            "max_items": 10
        },
        {
            "type": "fixed_file",
            "file_path": "src/main.py",
            "description": "Main application file"
        },
        {
            "type": "source_code",
            "base_path": "src/",
            "file_patterns": ["*.py"],
            "max_files": 5
        }
    ],
    model="gpt-4o-mini"
)

# Context is automatically managed across conversations
result = agent.run("What's the main function doing?")
print(result.content)

# Context persists and evolves
result = agent.run("How can I improve the error handling?")
print(result.content)
```

**📖 Tutorial:** [Context Management](docs/tutorials/context_management.md) | **Details:** [Context Management Design](docs/context_management.md)

### Dynamic Prompt Generation - Variable Embedding

RefinireAgent's new variable embedding feature enables dynamic prompt generation based on context:

```python
from refinire import RefinireAgent, Context

# Variable embedding capable agent
agent = RefinireAgent(
    name="dynamic_responder",
    generation_instructions="You are a {{agent_role}} providing {{response_style}} responses to {{user_type}} users. Previous result: {{RESULT}}",
    model="gpt-4o-mini"
)

# Context setup
ctx = Context()
ctx.shared_state = {
    "agent_role": "customer support expert",
    "user_type": "premium",
    "response_style": "prompt and detailed"
}
ctx.result = "Customer inquiry reviewed"

# Execute with dynamic prompt
# Note: {{priority_level}} is also resolved from ctx.shared_state, so set it there as well
result = agent.run("Handle {{user_type}} user {{priority_level}} request", ctx)
```

**Key Variable Embedding Features:**
- **`{{RESULT}}`**: Previous step execution result
- **`{{EVAL_RESULT}}`**: Detailed evaluation information
- **`{{custom_variables}}`**: Any value from `ctx.shared_state`
- **Real-time Substitution**: Dynamic prompt generation at runtime
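As a minimal sketch of chaining two agents with `{{RESULT}}`: it assumes, as in the example above, that the previous result is read from `ctx.result` and that running the first agent with a Context populates it. Verify that behavior against your Refinire version.

```python
from refinire import RefinireAgent, Context

summarizer = RefinireAgent(
    name="summarizer",
    generation_instructions="Summarize the user's request in one sentence.",
    model="gpt-4o-mini"
)

replier = RefinireAgent(
    name="replier",
    # {{RESULT}} is assumed to expand to the previous step's result (ctx.result)
    generation_instructions="Write a reply based on this summary: {{RESULT}}",
    model="gpt-4o-mini"
)

ctx = Context()
summarizer.run("I was double-charged for my subscription last month.", ctx)
reply_ctx = replier.run("Draft the customer reply.", ctx)
print(reply_ctx.result)
```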
### Context-Based Result Access

**The Challenge**: Chaining multiple AI agents requires complex data passing and state management. Results from one agent need to flow seamlessly to the next.

**The Solution**: Refinire's Context system automatically tracks agent results, evaluation data, and shared state. Agents can access previous results, evaluation scores, and custom data without manual state management.

**Key Benefits**:
- **Automatic State Management**: Context handles data flow between agents
- **Rich Result Access**: Access not just outputs but also evaluation scores and metadata
- **Flexible Data Storage**: Store custom data for complex workflow requirements
- **Seamless Integration**: No boilerplate code for agent communication

Access agent results and evaluation data through Context for seamless workflow integration:

```python
from refinire import RefinireAgent, Context, create_evaluated_agent, create_simple_agent

# Create agent with evaluation
agent = create_evaluated_agent(
    name="analyzer",
    generation_instructions="Analyze the input thoroughly",
    evaluation_instructions="Rate analysis quality 0-100",
    threshold=80
)

# Run with Context
ctx = Context()
result_ctx = agent.run("Analyze this data", ctx)

# Simple result access
print(f"Result: {result_ctx.result}")

# Evaluation result access
if result_ctx.evaluation_result:
    score = result_ctx.evaluation_result["score"]
    passed = result_ctx.evaluation_result["passed"]
    feedback = result_ctx.evaluation_result["feedback"]

# Agent chain data passing
next_agent = create_simple_agent("summarizer", "Create summaries")
summary_ctx = next_agent.run(f"Summarize: {result_ctx.result}", result_ctx)

# Access previous agent outputs
analyzer_output = summary_ctx.prev_outputs["analyzer"]
summarizer_output = summary_ctx.prev_outputs["summarizer"]

# Custom data storage
result_ctx.shared_state["custom_data"] = {"key": "value"}
```

**Seamless data flow between agents with automatic result tracking.**

---
## Comprehensive Observability - Automatic Tracing

**The Challenge**: Debugging AI workflows and understanding agent behavior in production requires visibility into execution flows, performance metrics, and failure patterns. Manual logging is insufficient for complex multi-agent systems.

**The Solution**: Refinire provides comprehensive tracing capabilities with zero configuration. Every agent execution, workflow step, and evaluation is automatically captured and can be exported to industry-standard observability platforms like Grafana Tempo and Jaeger.

**Key Benefits**:
- **Zero Configuration**: Built-in console tracing works out of the box
- **Production Ready**: OpenTelemetry integration with OTLP export
- **Automatic Span Creation**: All agents and workflow steps traced automatically
- **Rich Metadata**: Captures inputs, outputs, evaluation scores, and performance metrics
- **Industry Standard**: Compatible with existing observability infrastructure

### Built-in Console Tracing

Every agent execution shows detailed, color-coded trace information by default:

```python
from refinire import RefinireAgent

agent = RefinireAgent(
    name="traced_agent",
    generation_instructions="You are a helpful assistant.",
    model="gpt-4o-mini"
)

result = agent.run("What is quantum computing?")
# Console automatically shows:
# 🔵 [Instructions] You are a helpful assistant.
# 🟢 [User Input] What is quantum computing?
# 🟡 [LLM Output] Quantum computing is a revolutionary computing paradigm...
# ✅ [Result] Operation completed successfully
```

### Production OpenTelemetry Integration

For production environments, enable OpenTelemetry tracing with a single function call:

```python
from refinire import (
    RefinireAgent,
    enable_opentelemetry_tracing,
    disable_opentelemetry_tracing
)

# Enable comprehensive tracing
enable_opentelemetry_tracing(
    service_name="my-agent-app",
    otlp_endpoint="http://localhost:4317",  # Grafana Tempo endpoint
    console_output=True                     # Also show console traces
)

# All agent executions are now automatically traced
agent = RefinireAgent(
    name="production_agent",
    generation_instructions="Generate high-quality responses",
    evaluation_instructions="Rate quality from 0-100",
    threshold=85.0,
    model="gpt-4o-mini"
)

# This execution creates detailed spans with:
# - Agent name: "RefinireAgent(production_agent)"
# - Input/output text and instructions
# - Model name and parameters
# - Evaluation scores and pass/fail status
# - Success/error status and timing
result = agent.run("Explain machine learning concepts")

# Clean up when done
disable_opentelemetry_tracing()
```

### Disabling All Tracing

To completely disable all tracing (both console and OpenTelemetry):

```python
from refinire import RefinireAgent, disable_tracing

# Disable all tracing output
disable_tracing()

# Now all agent executions will run without any trace output
agent = RefinireAgent(name="silent_agent", model="gpt-4o-mini")
result = agent.run("This will execute silently")  # No trace output
```
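A common pattern is to decide at startup whether to export traces at all. The sketch below drives that from an environment variable; `APP_ENV` is a name chosen for this sketch, while `enable_opentelemetry_tracing` and `disable_tracing` are the functions shown above.

```python
import os
from refinire import enable_opentelemetry_tracing, disable_tracing

# APP_ENV is this sketch's own convention, not a Refinire setting
if os.environ.get("APP_ENV") == "production":
    enable_opentelemetry_tracing(
        service_name="my-agent-app",
        otlp_endpoint=os.environ.get("REFINIRE_TRACE_OTLP_ENDPOINT", "http://localhost:4317"),
        console_output=False
    )
else:
    # Local development: keep runs quiet
    disable_tracing()
```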
### Environment Variable Configuration

Use environment variables for streamlined configuration:

```bash
# Set tracing configuration
export REFINIRE_TRACE_OTLP_ENDPOINT="http://localhost:4317"
export REFINIRE_TRACE_SERVICE_NAME="my-agent-service"
export REFINIRE_TRACE_RESOURCE_ATTRIBUTES="environment=production,team=ai"

# Use oneenv for easy configuration management
oneenv init --template refinire.tracing
```

### Automatic Span Coverage

When tracing is enabled, Refinire automatically creates spans for:

#### **RefinireAgent Spans**
- Input text, generation instructions, and output
- Model name and evaluation scores
- Success/failure status and error details

#### **Workflow Step Spans**
- **ConditionStep**: Boolean results and routing decisions
- **FunctionStep**: Function execution and next steps
- **ParallelStep**: Parallel execution timing and success rates

#### **Flow Workflow Spans**
- Complete workflow execution with step counts
- Flow input/output and completion status
- Step names and execution sequence

### Grafana Tempo Integration

Set up complete observability with Grafana Tempo:

```yaml
# tempo.yaml
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces
```

```bash
# Start Tempo
./tempo -config.file=tempo.yaml

# Run your traced application
python my_agent_app.py

# View traces in Grafana at http://localhost:3000
# Search: {service.name="my-agent-service"}
```

### Advanced Workflow Tracing

For complex workflows, add custom spans around groups of operations:

```python
from refinire import RefinireAgent, get_tracer, enable_opentelemetry_tracing

enable_opentelemetry_tracing(
    service_name="workflow-app",
    otlp_endpoint="http://localhost:4317"
)

tracer = get_tracer("workflow-tracer")

with tracer.start_as_current_span("multi-agent-workflow") as span:
    span.set_attribute("workflow.type", "analysis-pipeline")
    span.set_attribute("user.id", "user123")

    # These agents automatically create spans within the workflow span
    analyzer = RefinireAgent(name="analyzer", model="gpt-4o-mini")
    expert = RefinireAgent(name="expert", model="gpt-4o-mini")

    # Each call automatically creates detailed spans
    analysis = analyzer.run("Analyze this data")
    response = expert.run("Provide expert analysis")

    span.set_attribute("workflow.status", "completed")
```

**📖 Complete Guide:** [Tracing and Observability Tutorial](docs/tutorials/tracing.md) - Comprehensive setup and usage

**🔗 Integration Examples:**
- [OpenTelemetry Example](examples/opentelemetry_tracing_example.py) - Basic OpenTelemetry setup
- [Grafana Tempo Example](examples/grafana_tempo_tracing_example.py) - Complete Tempo integration
- [Environment Configuration](examples/oneenv_tracing_example.py) - oneenv configuration management
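When a step inside a custom span fails, it is worth recording that on the span as well. The sketch below uses standard OpenTelemetry span APIs (`record_exception`, `set_status`) and assumes, as the example above implies, that `get_tracer` returns a regular OpenTelemetry tracer.

```python
from opentelemetry.trace import Status, StatusCode
from refinire import RefinireAgent, get_tracer

tracer = get_tracer("workflow-tracer")
agent = RefinireAgent(name="analyzer", model="gpt-4o-mini")

with tracer.start_as_current_span("risky-analysis") as span:
    try:
        analysis = agent.run("Analyze this data")
        span.set_attribute("workflow.status", "completed")
    except Exception as exc:
        # Standard OpenTelemetry error reporting on the custom span
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))
        raise
```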
---

## Why Refinire?

### For Developers
- **Immediate productivity**: Build AI agents in minutes, not days
- **Provider freedom**: Switch between OpenAI, Anthropic, Google, Ollama seamlessly
- **Quality assurance**: Automatic evaluation and improvement
- **Transparent operations**: Understand exactly what your AI is doing

### For Teams
- **Consistent architecture**: Unified patterns across all AI implementations
- **Reduced maintenance**: Automatic quality management and error handling
- **Performance visibility**: Real-time monitoring and analytics
- **Future-proof**: Provider-agnostic design protects your investment

### For Organizations
- **Faster time-to-market**: Dramatically reduced development cycles
- **Lower operational costs**: Automatic optimization and provider flexibility
- **Quality compliance**: Built-in evaluation and monitoring
- **Scalable architecture**: From prototype to production seamlessly

---

## Examples

Explore comprehensive examples in the `examples/` directory:

### Core Features
- `standalone_agent_demo.py` - Independent agent execution
- `trace_search_demo.py` - Monitoring and analytics
- `llm_pipeline_example.py` - RefinireAgent with tool integration
- `interactive_pipeline_example.py` - Multi-turn conversation agents

### Flow Architecture
- `flow_show_example.py` - Workflow visualization
- `simple_flow_test.py` - Basic flow construction
- `router_agent_example.py` - Conditional routing
- `dag_parallel_example.py` - High-performance parallel processing

### Specialized Agents
- `clarify_agent_example.py` - Requirement clarification
- `notification_agent_example.py` - Event notifications
- `extractor_agent_example.py` - Data extraction
- `validator_agent_example.py` - Content validation

### Context Management
- `context_management_basic.py` - Basic context provider usage
- `context_management_advanced.py` - Advanced context with source code analysis
- `context_management_practical.py` - Real-world context management scenarios

### Tracing and Observability
- `opentelemetry_tracing_example.py` - Basic OpenTelemetry setup and usage
- `grafana_tempo_tracing_example.py` - Complete Grafana Tempo integration
- `oneenv_tracing_example.py` - Environment configuration with oneenv

---

## Supported Environments

- **Python**: 3.10+
- **Platforms**: Windows, Linux, macOS
- **Dependencies**: OpenAI Agents SDK 0.0.17+

---

## License & Credits

MIT License. Built with gratitude on the [OpenAI Agents SDK](https://github.com/openai/openai-agents-python).

**Refinire**: Where complexity becomes clarity, and development becomes art.

---

## Release Notes

### v0.2.10 - MCP Server Integration

### 🔌 Model Context Protocol (MCP) Server Support
- **Native MCP Integration**: RefinireAgent now supports MCP (Model Context Protocol) servers through the `mcp_servers` parameter
- **Multiple Server Types**: Support for stdio, HTTP, and WebSocket MCP servers
- **Automatic Tool Discovery**: MCP server tools are automatically discovered and integrated
- **OpenAI Agents SDK Compatibility**: Leverages OpenAI Agents SDK MCP capabilities with simplified configuration

```python
# MCP server integrated agent
agent = RefinireAgent(
    name="mcp_agent",
    generation_instructions="Use MCP server tools to accomplish tasks",
    mcp_servers=[
        "stdio://filesystem-server",                # Local filesystem access
        "http://localhost:8000/mcp",                # Remote API server
        "stdio://database-server --config db.json"  # Database access
    ],
    model="gpt-4o-mini"
)

# MCP tools become automatically available
result = agent.run("Analyze project files and include database information in your report")
```

### 🚀 MCP Integration Benefits
- **Standardized Tool Access**: Use industry-standard MCP protocol for tool integration
- **Ecosystem Compatibility**: Works with existing MCP server implementations
- **Scalable Architecture**: Support for multiple concurrent MCP servers
- **Error Handling**: Built-in retry logic and error management for MCP connections
- **Context Integration**: MCP servers work seamlessly with RefinireAgent's context management system

### 💡 MCP Server Types and Use Cases
- **stdio servers**: Local subprocess execution for file system, databases, development tools
- **HTTP servers**: Remote API endpoints for web services and cloud integrations
- **WebSocket servers**: Real-time communication support for streaming data and live updates

### 🔧 Implementation Details
- **Minimal Code Changes**: Simple `mcp_servers` parameter addition maintains backward compatibility
- **SDK Pass-through**: Direct integration with OpenAI Agents SDK MCP functionality
- **Comprehensive Examples**: Complete MCP integration examples in `examples/mcp_server_example.py`
- **Documentation**: Updated guides showing MCP server configuration and usage patterns

**📖 Detailed Guide:** [MCP Server Example](examples/mcp_server_example.py) - Complete MCP integration demonstration

---
### v0.2.11 - Comprehensive Observability and Automatic Tracing

### 🔍 Complete OpenTelemetry Integration
- **Automatic Agent Tracing**: All RefinireAgent executions automatically create detailed spans with zero configuration
- **Workflow Step Tracing**: ConditionStep, FunctionStep, and ParallelStep operations automatically tracked
- **Flow-Level Spans**: Complete workflow execution visibility with comprehensive metadata
- **Rich Span Metadata**: Captures inputs, outputs, evaluation scores, model parameters, and performance metrics

```python
from refinire import enable_opentelemetry_tracing, RefinireAgent

# Enable comprehensive tracing
enable_opentelemetry_tracing(
    service_name="my-agent-app",
    otlp_endpoint="http://localhost:4317"
)

# All executions automatically create detailed spans
agent = RefinireAgent(
    name="traced_agent",
    generation_instructions="Generate responses",
    evaluation_instructions="Rate quality 0-100",
    threshold=85.0,
    model="gpt-4o-mini"
)

# Automatic span with rich metadata
result = agent.run("Explain quantum computing")
```

### 🎯 Zero-Configuration Observability
- **Built-in Console Tracing**: Color-coded trace output works out of the box
- **Environment Variable Configuration**: `REFINIRE_TRACE_*` variables for streamlined setup
- **oneenv Template Support**: `oneenv init --template refinire.tracing` for easy configuration
- **Production Ready**: Industry-standard OTLP export to Grafana Tempo, Jaeger, and other platforms

### 🚀 Automatic Span Coverage
- **RefinireAgent Spans**: Input/output text, instructions, model name, evaluation scores, success/error status
- **ConditionStep Spans**: Boolean results, if_true/if_false branches, routing decisions
- **FunctionStep Spans**: Function name, execution success, next step information
- **ParallelStep Spans**: Parallel execution timing, success rates, worker utilization
- **Flow Spans**: Complete workflow metadata, step counts, execution sequence, completion status

### 📊 Advanced Observability Features
- **OpenAI Agents SDK Integration**: Leverages built-in tracing abstractions (`agent_span`, `custom_span`)
- **OpenTelemetry Bridge**: Seamless connection between Agents SDK spans and OpenTelemetry
- **Grafana Tempo Support**: Complete setup guide and integration examples
- **Custom Span Support**: Add business logic spans while maintaining automatic coverage

### 📖 Comprehensive Documentation
- **English Tutorial**: [Tracing and Observability](docs/tutorials/tracing.md) - Complete setup and usage guide
- **Japanese Tutorial**: [Tracing and Observability (Japanese)](docs/tutorials/tracing_ja.md) - Comprehensive setup and usage guide
- **Integration Examples**: Complete examples for OpenTelemetry, Grafana Tempo, and environment configuration
- **Best Practices**: Guidelines for production deployment and performance optimization

### 🔧 Technical Implementation
- **Minimal Overhead**: Efficient span creation with automatic metadata collection
- **Error Handling**: Robust error capture and reporting in trace data
- **Performance Monitoring**: Automatic timing and performance metrics collection
- **Memory Efficiency**: Optimized trace data structure and export batching
### 💡 Developer Benefits
- **Production Debugging**: Complete visibility into multi-agent workflows and complex flows
- **Performance Optimization**: Identify bottlenecks and optimization opportunities
- **Quality Monitoring**: Track evaluation scores and improvement patterns
- **Zero Maintenance**: Automatic tracing with no manual instrumentation required

**📖 Complete Guides:**
- [Tracing Tutorial](docs/tutorials/tracing.md) - Comprehensive setup and integration guide
- [Grafana Tempo Example](examples/grafana_tempo_tracing_example.py) - Production observability setup

---

### v0.2.9 - Variable Embedding and Advanced Flow Features

### 🎯 Dynamic Variable Embedding System
- **`{{variable}}` Syntax**: Support for dynamic variable substitution in user input and generation_instructions
- **Reserved Variables**: Access previous step results and evaluations with `{{RESULT}}` and `{{EVAL_RESULT}}`
- **Context-Based**: Dynamically reference any variable from `ctx.shared_state`
- **Real-time Substitution**: Generate and customize prompts dynamically at runtime
- **Agent Flexibility**: Same agent can behave differently based on context state

```python
# Dynamic prompt generation example
agent = RefinireAgent(
    name="dynamic_agent",
    generation_instructions="You are a {{agent_role}} providing {{response_style}} responses for {{target_audience}}. Previous result: {{RESULT}}",
    model="gpt-4o-mini"
)

ctx = Context()
ctx.shared_state = {
    "agent_role": "technical expert",
    "target_audience": "developers",
    "response_style": "detailed technical explanations"
}
# {{user_type}}, {{service_level}}, and {{response_time}} below are also resolved
# from ctx.shared_state, so set them there before running
result = agent.run("Handle {{user_type}} request for {{service_level}} at {{response_time}}", ctx)
```

### 📚 Complete Flow Guide
- **Step-by-Step Guide**: [Complete Flow Guide](docs/tutorials/flow_complete_guide_en.md) for comprehensive workflow construction
- **Bilingual Support**: [Japanese Guide](docs/tutorials/flow_complete_guide_ja.md) also available
- **Practical Examples**: Progressive learning from basic flows to complex parallel processing
- **Best Practices**: Guidelines for efficient flow design and performance optimization
- **Troubleshooting**: Common issues and their solutions

### 🔧 Enhanced Context Management
- **Variable Embedding Integration**: Added variable embedding examples to the [Context Management Guide](docs/tutorials/context_management.md)
- **Dynamic Prompt Generation**: Change agent behavior based on context state
- **Workflow Integration**: Patterns for Flow and context provider collaboration
- **Memory Management**: Best practices for efficient context usage

### 🛠️ Developer Experience Improvements
- **Step Compatibility Fix**: Test environment preparation for the `run()` to `run_async()` migration
- **Test Organization**: Organized test files from the project root into the tests/ directory
- **Performance Validation**: Comprehensive testing and performance optimization for variable embedding
- **Error Handling**: Robust error handling and fallbacks in variable substitution

### 🚀 Technical Improvements
- **Regex Optimization**: Efficient variable pattern matching and context substitution
- **Type Safety**: Proper type conversion and exception handling in variable embedding
- **Memory Efficiency**: Optimized variable processing for large-scale contexts
- **Backward Compatibility**: Full compatibility with existing RefinireAgent and Flow implementations
### 💡 Practical Benefits
- **Development Efficiency**: Dynamic prompt generation enables multiple roles with a single agent
- **Maintainability**: Variable-based templating makes prompt management and updates easier
- **Flexibility**: Runtime customization of agent behavior based on execution state
- **Reusability**: Creation and sharing of generic prompt templates

**📖 Detailed Guides:**
- [Complete Flow Guide](docs/tutorials/flow_complete_guide_en.md) - Comprehensive workflow construction guide
- [Context Management](docs/tutorials/context_management.md) - Comprehensive context management, including variable embedding

---

### v0.2.8 - Revolutionary Tool Integration

### 🛠️ Revolutionary Tool Integration
- **New @tool Decorator**: Introduced intuitive `@tool` decorator for seamless tool creation
- **Simplified Imports**: Clean `from refinire import tool` replaces complex external SDK knowledge
- **Enhanced Debugging**: Added `get_tool_info()` and `list_tools()` for better tool introspection
- **Backward Compatibility**: Full support for existing `function_tool` decorated functions
- **Simplified Tool Development**: Streamlined tool creation process with intuitive decorator syntax

### 📚 Documentation Revolution
- **Concept-Driven Explanations**: READMEs now focus on a Challenge-Solution-Benefits structure
- **Tutorial Integration**: Every feature section links to step-by-step tutorials
- **Improved Clarity**: Reduced cognitive load with clear explanations before code examples
- **Bilingual Enhancement**: Both English and Japanese documentation significantly improved
- **User-Centric Approach**: Documentation redesigned from the developer's perspective

### 🔄 Developer Experience Transformation
- **Unified Import Strategy**: All tool functionality available from the single `refinire` package
- **Future-Proof Architecture**: Tool system insulated from external SDK changes
- **Enhanced Metadata**: Rich tool information for debugging and development
- **Intelligent Error Handling**: Better error messages and troubleshooting guidance
- **Streamlined Workflow**: From idea to working tool in under 5 minutes

### 🚀 Quality & Performance
- **Context-Based Evaluation**: New `ctx.evaluation_result` for workflow integration
- **Comprehensive Testing**: 100% test coverage for all new tool functionality
- **Migration Examples**: Complete migration guides and comparison demonstrations
- **API Consistency**: Unified patterns across all Refinire components
- **Zero Breaking Changes**: Existing code continues to work while new features enhance capability

### 💡 Key Benefits for Users
- **Faster Tool Development**: Significantly reduced tool creation time with a streamlined workflow
- **Reduced Learning Curve**: No need to understand external SDK complexities
- **Better Debugging**: Rich metadata and introspection capabilities
- **Future Compatibility**: Protected from external SDK breaking changes
- **Intuitive Development**: Natural Python decorator patterns familiar to all developers

**This release represents a major step forward in making Refinire the most developer-friendly AI agent platform available.**
"bugtrack_url": null,
"license": "MIT",
"summary": "Refined simplicity for AI agents - Build intelligent workflows with automatic quality assurance and multi-provider LLM support",
"version": "0.2.16",
"project_urls": {
"Bug-Tracker": "https://github.com/kitfactory/refinire/issues",
"Documentation": "https://kitfactory.github.io/refinire/",
"Homepage": "https://github.com/kitfactory/refinire",
"Repository": "https://github.com/kitfactory/refinire"
},
"split_keywords": [
"agents",
" ai",
" llm",
" orchestration",
" workflow"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "367ad5b82c67ee2b54ec00174927c13922c970cd73df41907e2400bb095a6a30",
"md5": "d003af8fb91377619ece4b67d7addbfd",
"sha256": "36cb863892d25f4263d24787011259b6980cc8e0b28005605a097275603bff80"
},
"downloads": -1,
"filename": "refinire-0.2.16-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d003af8fb91377619ece4b67d7addbfd",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 153548,
"upload_time": "2025-07-12T10:40:15",
"upload_time_iso_8601": "2025-07-12T10:40:15.611154Z",
"url": "https://files.pythonhosted.org/packages/36/7a/d5b82c67ee2b54ec00174927c13922c970cd73df41907e2400bb095a6a30/refinire-0.2.16-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "41cd0f97eee55ebbc1459e7a36f60d18f3360e6b998e0b2c1de3457f11a2d1a7",
"md5": "bcf0b6a252b483d6a7ff2f082e4afb49",
"sha256": "34a93c4759607ec85d2d3ecf31822fa1f1b65df3d5d6bf8c8e407f587f26f5bf"
},
"downloads": -1,
"filename": "refinire-0.2.16.tar.gz",
"has_sig": false,
"md5_digest": "bcf0b6a252b483d6a7ff2f082e4afb49",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 1823729,
"upload_time": "2025-07-12T10:40:18",
"upload_time_iso_8601": "2025-07-12T10:40:18.227059Z",
"url": "https://files.pythonhosted.org/packages/41/cd/0f97eee55ebbc1459e7a36f60d18f3360e6b998e0b2c1de3457f11a2d1a7/refinire-0.2.16.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-12 10:40:18",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "kitfactory",
"github_project": "refinire",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "refinire"
}