# DUKE Agents
[PyPI version](https://badge.fury.io/py/duke-agents)
[Python versions](https://pypi.org/project/duke-agents/)
[License: MIT](https://opensource.org/licenses/MIT)
[Documentation](https://github.com/elmasson/duke-agents/tree/main/docs)
[Downloads](https://pepy.tech/project/duke-agents)
[Code style: black](https://github.com/psf/black)
DUKE Agents is an advanced AI agent framework implementing the IPO (Input-Process-Output) architecture with enriched memory and feedback loops. It provides autonomous agents powered by Mistral LLMs for complex task execution, enabling developers to build sophisticated AI-driven workflows with minimal effort.
## 🎯 Why DUKE Agents?
- **Production-Ready**: Built with enterprise-grade reliability and error handling
- **Memory-Enhanced**: Persistent memory across workflow steps enables context-aware processing
- **Self-Correcting**: Automatic retry with satisfaction scoring ensures quality outputs
- **Fully Typed**: Complete type annotations for better IDE support and fewer runtime errors
- **Extensible**: Easy to create custom agents and extend functionality
- **Secure**: Sandboxed code execution and configurable security policies
## 🚀 Features
### Core Capabilities
- **🏗️ IPO Architecture**: Structured Input-Process-Output workflow with memory persistence
- **🤖 Multiple Agent Types**:
  - `AtomicAgent`: For discrete, well-defined tasks
  - `CodeActAgent`: For code generation and execution
  - Custom agents through simple inheritance
- **🧠 Mistral Integration**: Native support for all Mistral models including Codestral
- **💾 Memory Management**: Rich workflow memory with feedback loops and context propagation
- **🔄 Auto-correction**: Built-in retry logic with configurable satisfaction thresholds
- **🎭 Flexible Orchestration**:
  - Linear workflows for predefined sequences
  - LLM-driven dynamic agent selection
- **✅ Type Safety**: Full Pydantic models for robust data validation
### Advanced Features
- **📊 Workflow Visualization**: Export workflows as diagrams
- **🔍 Debugging Tools**: Comprehensive logging and memory inspection
- **⚡ Async Support**: Asynchronous agent execution for better performance
- **🛡️ Security**: Sandboxed execution environment for generated code
- **📈 Metrics**: Built-in performance tracking and optimization hints
- **🔌 Extensible**: Plugin system for custom functionality
## 📦 Installation
### Standard Installation
```bash
pip install duke-agents
```
### Development Installation
```bash
# Clone the repository
git clone https://github.com/elmasson/duke-agents.git
cd duke-agents
# Install in development mode with all dependencies
pip install -e ".[dev,docs]"
```
### Prerequisites
- Python 3.8 or higher
- Mistral API key (get one at [console.mistral.ai](https://console.mistral.ai))
## 🔧 Quick Start
### 1. Set Up Your Environment
```python
import os
from duke_agents import ContextManager, Orchestrator
# Set your Mistral API key
os.environ["MISTRAL_API_KEY"] = "your-api-key"
# Or use a .env file
# MISTRAL_API_KEY=your-api-key
```
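If you keep the key in a `.env` file instead, one way to load it at startup is with `python-dotenv` (listed as a dependency of duke-agents); a minimal sketch:
```python
from dotenv import load_dotenv

# Reads MISTRAL_API_KEY (and any other variables) from a local .env file
# into os.environ before the orchestrator is created.
load_dotenv()
```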
### 2. Basic Agent Usage
```python
from duke_agents import AtomicAgent, ContextManager, Orchestrator
# Initialize context manager
context = ContextManager("Process customer feedback")
# Create orchestrator
orchestrator = Orchestrator(context)
# Create and register an agent
agent = AtomicAgent("feedback_analyzer")
orchestrator.register_agent(agent)
# Define workflow
workflow = [{
    'agent': 'feedback_analyzer',
    'input_type': 'atomic',
    'input_data': {
        'task_id': 'analyze_001',
        'parameters': {
            'feedback': 'Great product but shipping was slow',
            'analyze': ['sentiment', 'topics', 'actionable_insights']
        }
    }
}]
# Execute workflow
results = orchestrator.execute_linear_workflow(workflow)
# Access results
if results[0].success:
    print(f"Analysis: {results[0].result}")
    print(f"Confidence: {results[0].satisfaction_score}")
```
### 3. Code Generation and Execution
```python
from duke_agents import CodeActAgent, ContextManager, Orchestrator
# Create a code generation agent
context = ContextManager("Data Analysis Assistant")
orchestrator = Orchestrator(context)
code_agent = CodeActAgent("data_analyst", model="codestral-latest")
orchestrator.register_agent(code_agent)
# Generate and execute code
workflow = [{
    'agent': 'data_analyst',
    'input_type': 'codeact',
    'input_data': {
        'prompt': '''Create a function that:
        1. Loads sales data from a CSV file
        2. Calculates total revenue by product category
        3. Identifies top 5 performing products
        4. Generates a summary report with visualizations''',
        'context_data': {
            'csv_path': 'sales_data.csv',
            'date_column': 'transaction_date'
        }
    }
}]
results = orchestrator.execute_linear_workflow(workflow)
if results[0].success:
    print(f"Generated Code:\n{results[0].generated_code}")
    print(f"\nExecution Output:\n{results[0].execution_result}")
```
### 4. Multi-Agent Workflows
```python
# Create multiple specialized agents
data_agent = AtomicAgent("data_processor")
analysis_agent = CodeActAgent("analyzer")
report_agent = AtomicAgent("report_generator")
# Register all agents
for agent in [data_agent, analysis_agent, report_agent]:
    orchestrator.register_agent(agent)
# Define multi-step workflow
workflow = [
    {
        'agent': 'data_processor',
        'input_type': 'atomic',
        'input_data': {
            'task_id': 'load_data',
            'parameters': {'source': 'database', 'table': 'sales_2024'}
        }
    },
    {
        'agent': 'analyzer',
        'input_type': 'codeact',
        'input_data': {
            'prompt': 'Analyze the sales data and identify trends, anomalies, and opportunities'
        }
    },
    {
        'agent': 'report_generator',
        'input_type': 'atomic',
        'input_data': {
            'task_id': 'create_report',
            'parameters': {'format': 'pdf', 'include_visuals': True}
        }
    }
]
# Execute the complete workflow
results = orchestrator.execute_linear_workflow(workflow)
```
### 5. Custom Agent Creation
```python
from duke_agents.agents import BaseAgent
from duke_agents.models import AtomicInput, AtomicOutput
from pydantic import BaseModel
class TranslationOutput(BaseModel):
    translated_text: str
    source_language: str
    target_language: str
    confidence: float

class TranslationAgent(BaseAgent):
    """Custom agent for language translation."""

    def __init__(self, name: str, model: str = "mistral-large"):
        super().__init__(name, model)
        self.agent_type = "translator"

    def process(self, input_data: AtomicInput, context_data: dict = None) -> TranslationOutput:
        # Custom processing logic
        target_language = input_data.parameters.get('target_language', 'English')
        prompt = f"""Translate the following text to {target_language}:

{input_data.parameters['text']}

Also identify the source language."""

        response = self.llm_client.complete(prompt)

        # Parse response and create output
        return TranslationOutput(
            translated_text=response['translation'],
            source_language=response['source_language'],
            target_language=target_language,
            confidence=0.95
        )

# Use the custom agent
translator = TranslationAgent("translator")
orchestrator.register_agent(translator)
```
## 📖 Advanced Usage
### Dynamic Workflow with LLM-Driven Orchestration
```python
# Let the LLM decide which agents to use
context = ContextManager("Solve user problem: analyze and visualize climate data")
orchestrator = Orchestrator(context)

# Register multiple specialized agents
agents = {
    'data_fetcher': AtomicAgent("data_fetcher"),
    'data_cleaner': AtomicAgent("data_cleaner"),
    'statistician': CodeActAgent("statistician"),
    'visualizer': CodeActAgent("visualizer"),
    'reporter': AtomicAgent("reporter")
}

for agent in agents.values():
    orchestrator.register_agent(agent)

# Execute LLM-driven workflow
results = orchestrator.execute_llm_driven_workflow(
    user_request="Fetch climate data for the last 10 years, clean it, perform statistical analysis, create visualizations, and generate a comprehensive report",
    max_steps=10
)
```
### Memory and Context Management
```python
# Access workflow memory
memory = context.memory
# Inspect memory records
for record in memory.agent_records:
    print(f"Agent: {record.agent_name}")
    print(f"Input: {record.input_summary}")
    print(f"Output: {record.output_summary}")
    print(f"Timestamp: {record.timestamp}")
    print("---")
# Add custom feedback
memory.add_feedback("visualization", "Excellent charts, very clear and informative", 0.95)
# Get memory summary for LLM context
summary = memory.get_summary()
```
### Configuration and Customization
```python
from duke_agents.config import DukeConfig
# Custom configuration
config = DukeConfig(
    mistral_api_key="your-key",
    default_model="mistral-large",
    temperature=0.7,
    max_retries=5,
    satisfaction_threshold=0.8,
    code_execution_timeout=60,  # seconds
    enable_sandboxing=True
)
# Create orchestrator with custom config
orchestrator = Orchestrator(context, config=config)
```
### Error Handling and Debugging
```python
# Enable detailed logging
import logging
logging.getLogger('duke_agents').setLevel(logging.DEBUG)
# Execute with error handling
try:
    results = orchestrator.execute_linear_workflow(workflow)
except Exception as e:
    # Access detailed error information
    print(f"Workflow failed: {e}")

    # Inspect partial results
    for i, record in enumerate(context.memory.agent_records):
        if record.error:
            print(f"Step {i} failed: {record.error}")
# Export workflow for debugging
orchestrator.export_workflow("debug_workflow.json")
```
## 🏗️ Architecture
### Component Overview
```
duke-agents/
├── agents/                  # Agent implementations
│   ├── base_agent.py        # Abstract base class
│   ├── atomic_agent.py      # Simple task execution
│   └── codeact_agent.py     # Code generation/execution
├── models/                  # Data models
│   ├── atomic_models.py     # Input/Output for AtomicAgent
│   ├── codeact_models.py    # Input/Output for CodeActAgent
│   └── memory.py            # Memory management
├── orchestration/           # Workflow management
│   ├── context_manager.py   # Context and memory
│   └── orchestrator.py      # Workflow execution
├── executors/               # Code execution
│   └── code_executor.py     # Safe code execution
├── llm/                     # LLM integration
│   └── mistral_client.py    # Mistral API client
└── config.py                # Configuration
```
### Design Principles
1. **Separation of Concerns**: Each component has a single, well-defined responsibility
2. **Extensibility**: Easy to add new agent types and capabilities
3. **Type Safety**: Full type hints and runtime validation
4. **Memory-First**: All operations consider memory and context
5. **Fail-Safe**: Graceful error handling and recovery
## 🧪 Testing
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=duke_agents
# Run specific test file
pytest tests/test_agents.py
# Run with verbose output
pytest -v
```
## 📊 Performance Considerations
- **Concurrent Execution**: Agents can run in parallel when dependencies allow
- **Caching**: LLM responses are cached to reduce API calls
- **Memory Optimization**: Automatic memory pruning for long workflows
- **Batch Processing**: Support for processing multiple inputs efficiently (see the sketch below)
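Since `execute_linear_workflow` accepts a plain list of steps, batch processing can be as simple as building that list programmatically. A minimal sketch, reusing the `feedback_analyzer` agent from the Quick Start (the batch contents and task IDs are illustrative):
```python
# Build one linear workflow that processes a batch of feedback strings
# with the AtomicAgent registered earlier as "feedback_analyzer".
feedback_batch = [
    "Great product but shipping was slow",
    "Support resolved my issue in minutes",
    "The app crashes when exporting reports",
]

workflow = [
    {
        'agent': 'feedback_analyzer',
        'input_type': 'atomic',
        'input_data': {
            'task_id': f'analyze_{i:03d}',
            'parameters': {'feedback': text, 'analyze': ['sentiment', 'topics']}
        }
    }
    for i, text in enumerate(feedback_batch)
]

results = orchestrator.execute_linear_workflow(workflow)
print(sum(r.success for r in results), "of", len(results), "items succeeded")
```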
## 🔒 Security
- **Sandboxed Execution**: Code runs in isolated environments
- **Input Validation**: All inputs are validated before processing
- **API Key Protection**: Secure handling of sensitive credentials
- **Rate Limiting**: Built-in rate limiting for API calls
- **Audit Logging**: Complete audit trail of all operations
## 📚 Documentation
- **[Full Documentation](https://github.com/elmasson/duke-agents/tree/main/docs)**: Comprehensive guides and API reference (ReadTheDocs coming soon)
- **[Examples](https://github.com/elmasson/duke-agents/tree/main/examples)**: Ready-to-run example scripts
- **[API Reference](https://github.com/elmasson/duke-agents/tree/main/docs/api)**: Detailed API documentation
- **[Contributing Guide](CONTRIBUTING.md)**: How to contribute to the project
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details on:
- Code style and standards
- Development setup
- Testing requirements
- Pull request process
- Issue reporting
## 📈 Roadmap
- [ ] **v1.1**: Async/await support throughout
- [ ] **v1.2**: Additional LLM providers (OpenAI, Anthropic)
- [ ] **v1.3**: Web UI for workflow design
- [ ] **v1.4**: Distributed agent execution
- [ ] **v2.0**: Agent marketplace and sharing
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built with [Mistral AI](https://mistral.ai) models
- Inspired by IPO (Input-Process-Output) architecture
- Thanks to all [contributors](https://github.com/elmasson/duke-agents/graphs/contributors)
## 📬 Support
- **Documentation**: [GitHub Docs](https://github.com/elmasson/duke-agents/tree/main/docs)
- **Issues**: [GitHub Issues](https://github.com/elmasson/duke-agents/issues)
- **Discussions**: [GitHub Discussions](https://github.com/elmasson/duke-agents/discussions)
- **Email**: smasson@duke-ai.io
---
Made with ❤️ by the DUKE Analytics team