# Agent Framework Library
A comprehensive Python framework for building and serving conversational AI agents with FastAPI. Features automatic multi-provider support (OpenAI, Gemini), dynamic configuration, session management, streaming responses, and a rich web interface.
**🎉 NEW: PyPI Package** - The Agent Framework is now available as a pip-installable package from PyPI, making it easy to integrate into any Python project.
## Installation
```bash
# Install from PyPI (recommended)
uv pip install agent-framework-lib
# Install with development dependencies
uv pip install agent-framework-lib[dev]
# Install from local source (development)
uv pip install -e .
```
## 🚀 Features
### Core Capabilities
- **Multi-Provider Support**: Automatic routing between OpenAI and Gemini APIs
- **Dynamic System Prompts**: Session-based system prompt control
- **Agent Configuration**: Runtime model parameter adjustment
- **Session Management**: Persistent conversation handling with structured workflow
- **Session Workflow**: Initialize/end session lifecycle with immutable configurations
- **User Feedback System**: Message-level thumbs up/down and session-level flags
- **Media Detection**: Automatic detection and handling of generated images/videos
- **Web Interface**: Built-in test application with rich UI controls
- **Debug Logging**: Comprehensive logging for system prompts and model configuration
### Advanced Features
- **Model Auto-Detection**: Automatic provider selection based on model name
- **Parameter Filtering**: Provider-specific parameter validation (e.g., Gemini doesn't support frequency_penalty)
- **Configuration Validation**: Built-in validation and status endpoints
- **Correlation & Conversation Tracking**: Link sessions across agents and track individual exchanges
- **Manager Agent Support**: Built-in coordination features for multi-agent workflows
- **Persistent Session Storage**: MongoDB integration for scalable session persistence (see [MongoDB Session Storage Guide](docs/mongodb_session_storage.md))
- **Agent Identity Support**: Multi-agent deployment support with automatic agent identification in MongoDB (see [Agent Identity Guide](docs/agent-identity-support.md))
- **File Storage System**: Persistent file management with multiple storage backends (Local, S3, MinIO) and intelligent routing (see [File Storage Implementation Guide](docs/file_storage_system_implementation.md))
- **Generated File Tracking**: Automatic distinction between user-uploaded and agent-generated files with comprehensive metadata
- **Multi-Storage Architecture**: Route different file types to appropriate storage systems with backend fallbacks
- **Markdown Conversion**: Automatic conversion of uploaded files (PDF, DOCX, TXT, etc.) to Markdown for optimal LLM processing with complete dependency support for all file types (see [Markdown Conversion Guide](docs/markdown_conversion_feature.md) and [Complete Fix Documentation](docs/markdown_conversion_fix_complete.md))
- **Reverse Proxy Support**: Automatic path prefix detection for deployment behind reverse proxies (see [Reverse Proxy Setup Guide](REVERSE_PROXY_SETUP.md))
- **Backward Compatibility**: Existing implementations continue to work
## 🚀 Quick Start
### Option 1: AutoGen-Based Agents (Recommended for AutoGen)
The fastest way to create AutoGen agents with all boilerplate handled automatically:
```python
from typing import Any, Dict, List
from agent_framework import AutoGenBasedAgent, create_basic_agent_server
class MyAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return "You are a helpful assistant that can perform calculations."

    def get_agent_tools(self) -> List[callable]:
        return [self.add, self.subtract]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "Math Assistant",
            "description": "An agent that helps with basic math"
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create and configure the AutoGen agent."""
        from autogen_agentchat.agents import AssistantAgent

        return AssistantAgent(
            name="math_assistant",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=250,
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def add(a: float, b: float) -> float:
        """Add two numbers together."""
        return a + b

    @staticmethod
    def subtract(a: float, b: float) -> float:
        """Subtract one number from another."""
        return a - b

# Start server with one line - includes AutoGen, MCP tools, streaming, etc.
create_basic_agent_server(MyAgent, port=8000)
```
**✨ Benefits:**
- **95% less code** - No AutoGen boilerplate needed
- **Built-in streaming** - Real-time responses with tool visualization
- **MCP integration** - Add external tools easily
- **Session management** - Automatic state persistence
- **10-15 minutes** to create a full-featured agent
- **Full control** over AutoGen agent type and configuration
### Option 2: Generic Agent Interface
For non-AutoGen agents or custom implementations:
```python
from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server
class MyAgent(AgentInterface):
    async def get_metadata(self):
        return {"name": "My Agent", "version": "1.0.0"}

    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):
        return StructuredAgentOutput(response_text=f"Hello! You said: {agent_input.query}")

# Start server with one line - handles server setup, routing, and all framework features
create_basic_agent_server(MyAgent, port=8000)
```
See [docs/autogen_agent_guide.md](docs/autogen_agent_guide.md) for the complete AutoGen development guide.
## 📋 Table of Contents
- [Features](#-features)
- [Quick Start](#-quick-start)
- [Configuration](#️-configuration)
- [API Reference](#-api-reference)
- [Client Examples](#-client-examples)
- [Web Interface](#-web-interface)
- [Advanced Usage](#-advanced-usage)
- [Development](#️-development)
- [AutoGen Development Guide](#-autogen-development-guide)
- [Authentication](#-authentication)
- [Contributing](#-contributing)
- [License](#-license)
- [Support](#-support)
## 🛠️ Development
### Traditional Development Setup
For development within the AgentFramework repository:
### 1. Installation
```bash
# Clone the repository
git clone <your-repository-url>
cd AgentFramework
# Install dependencies
uv venv
uv pip install -e .[dev]
```
### 2. Configuration
```bash
# Copy configuration template
cp env-template.txt .env
# Edit .env with your API keys
```
**Minimal .env setup:**
```env
# At least one API key required
OPENAI_API_KEY=sk-your-openai-key-here
GEMINI_API_KEY=your-gemini-api-key-here
# Set default model
DEFAULT_MODEL=gpt-4
# Authentication (optional - set to true to enable)
REQUIRE_AUTH=false
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=password
API_KEYS=sk-your-secure-api-key-123
```
### 3. Start the Server
**Option A: Using convenience function (recommended for external projects)**
```python
# In your agent file
from agent_framework import create_basic_agent_server
create_basic_agent_server(MyAgent, port=8000)
```
**Option B: Traditional method**
```bash
# Start the development server
uv run python agent.py
# Or using uvicorn directly
export AGENT_CLASS_PATH="agent:Agent"
uvicorn server:app --reload --host 0.0.0.0 --port 8000
```
### 4. Test the Agent
Open your browser to `http://localhost:8000/ui` or make API calls:
```bash
# Without authentication (REQUIRE_AUTH=false)
curl -X POST http://localhost:8000/message \
-H "Content-Type: application/json" \
-d '{"query": "Hello, how are you?"}'
# With API Key authentication (REQUIRE_AUTH=true)
curl -X POST http://localhost:8000/message \
-H "Content-Type: application/json" \
-H "X-API-Key: sk-your-secure-api-key-123" \
-d '{"query": "Hello, how are you?"}'
# With Basic authentication (REQUIRE_AUTH=true)
curl -u admin:password -X POST http://localhost:8000/message \
-H "Content-Type: application/json" \
-d '{"query": "Hello, how are you?"}'
```
### Project Structure
```
AgentFramework/
├── agent_framework/          # Main framework package
│   ├── __init__.py           # Library exports and convenience functions
│   ├── agent_interface.py    # Abstract agent interface
│   ├── base_agent.py         # AutoGen-based agent implementation
│   ├── server.py             # FastAPI server
│   ├── model_config.py       # Multi-provider configuration
│   ├── model_clients.py      # Model client factory
│   └── session_storage.py    # Session storage implementations
├── examples/                 # Usage examples
├── docs/                     # Documentation
├── test_app.html             # Web interface
├── env-template.txt          # Configuration template
└── pyproject.toml            # Package configuration
```
### Creating Custom Agents
#### Option 1: AutoGen-Based Agents (Recommended)
For AutoGen-powered agents, inherit from `AutoGenBasedAgent` for maximum productivity:
```python
from typing import Any, Dict, List
from agent_framework import AutoGenBasedAgent, create_basic_agent_server
class MyAutoGenAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return """You are a specialized agent for [your domain].
        You can [list capabilities]."""

    def get_agent_tools(self) -> List[callable]:
        return [self.my_tool]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "My AutoGen Agent",
            "description": "A specialized agent with AutoGen superpowers",
            "capabilities": {
                "streaming": True,
                "tool_use": True,
                "mcp_integration": True
            }
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create and configure the AutoGen agent."""
        from autogen_agentchat.agents import AssistantAgent

        return AssistantAgent(
            name="my_agent",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=300,
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def my_tool(input_data: str) -> str:
        """Your custom tool implementation."""
        return f"Processed: {input_data}"

# Start server with full AutoGen capabilities
create_basic_agent_server(MyAutoGenAgent, port=8000)
```
**✨ What you get automatically:**
- Real-time streaming responses
- MCP tools integration
- Session state management
- Tool call visualization
- Error handling & logging
- Special block parsing (forms, charts, etc.)
#### Option 2: Generic AgentInterface
For non-AutoGen agents or when you need full control:
```python
from typing import Optional

from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server

class MyCustomAgent(AgentInterface):
    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput) -> StructuredAgentOutput:
        # Implement your logic here
        ...

    async def handle_message_stream(self, session_id: str, agent_input: StructuredAgentInput):
        # Implement streaming logic
        ...

    async def get_metadata(self):
        return {
            "name": "My Custom Agent",
            "description": "A custom agent implementation",
            "capabilities": {"streaming": True}
        }

    def get_system_prompt(self) -> Optional[str]:
        return "Your custom system prompt here..."

# Start server
create_basic_agent_server(MyCustomAgent, port=8000)
```
### Testing
The project includes a comprehensive test suite built with `pytest` and optimized for UV-based testing. The tests are located in the `tests/` directory and are configured to run in a self-contained environment with multiple test categories.
**🚀 Quick Start with UV (Recommended):**
```bash
# Install test dependencies
uv sync --group test
# Run all tests
uv run pytest
# Run tests with coverage
uv run pytest --cov=agent_framework --cov-report=html
# Run specific test types
uv run pytest -m unit # Fast unit tests
uv run pytest -m integration # Integration tests
uv run pytest -m "not slow" # Skip slow tests
```
**📚 Comprehensive Testing Guide:**
For detailed instructions on UV-based testing, test categories, CI/CD integration, and development workflows, see:
[**UV Testing Guide**](docs/UV_TESTING_GUIDE.md)
**🛠️ Alternative Methods:**
```bash
# Using Make (cross-platform)
make test # Run all tests
make test-coverage # Run with coverage
make test-fast # Run fast tests only
# Using test scripts
./scripts/test.sh coverage # Unix/macOS
scripts\test.bat coverage # Windows
# Using Python test runner
python scripts/test_runner.py --install-deps coverage
```
**📊 Test Categories:**
- `unit` - Fast, isolated component tests
- `integration` - Multi-component workflow tests
- `performance` - Benchmark and performance tests
- `multimodal` - Tests requiring AI vision/audio capabilities
- `storage` - File storage backend tests
- `slow` - Long-running tests (excluded from fast runs)
### Debug Logging
Set debug logging to see detailed system prompt and configuration information:
```bash
export AGENT_LOG_LEVEL=DEBUG
uv run python agent.py
```
Debug logs include:
- Model configuration loading and validation
- System prompt handling and persistence
- Agent configuration merging and application
- Provider selection and parameter filtering
- Client creation and model routing
## ⚙️ Configuration
### Session Storage Configuration
Configure persistent session storage (optional):
```env
# === Session Storage ===
# Use "memory" (default) for in-memory storage or "mongodb" for persistent storage
SESSION_STORAGE_TYPE=memory
# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions
MONGODB_COLLECTION_NAME=sessions
```
For detailed MongoDB setup and configuration, see the [MongoDB Session Storage Guide](docs/mongodb_session_storage.md).
## 📚 API Reference
### Core Endpoints
#### Send Message
Send a message to the agent and receive a complete response.
**Endpoint:** `POST /message`
**Request Body:**
```json
{
  "query": "Your message here",
  "parts": [],
  "system_prompt": "Optional custom system prompt",
  "agent_config": {
    "temperature": 0.8,
    "max_tokens": 1000,
    "model_selection": "gpt-4"
  },
  "session_id": "optional-session-id",
  "correlation_id": "optional-correlation-id-for-linking-sessions"
}
```
**Response:**
```json
{
  "response_text": "Agent's response",
  "parts": [
    {
      "type": "text",
      "text": "Agent's response"
    }
  ],
  "session_id": "generated-or-provided-session-id",
  "user_id": "user1",
  "correlation_id": "correlation-id-if-provided",
  "conversation_id": "unique-id-for-this-exchange"
}
```
#### Session Workflow (NEW)
**Initialize Session:** `POST /init`
```json
{
  "user_id": "string",           // required
  "correlation_id": "string",    // optional
  "session_id": "string",        // optional (auto-generated if not provided)
  "data": { ... },               // optional
  "configuration": {             // required
    "system_prompt": "string",
    "model_name": "string",
    "model_config": {
      "temperature": 0.7,
      "token_limit": 1000
    }
  }
}
```
Initializes a new chat session with an immutable configuration. Must be called before any chat interactions. Returns the session configuration, including the generated session ID if one was not provided.
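The "immutable configuration" contract can be sketched in plain Python with a frozen dataclass. This is an illustrative sketch, not the framework's actual implementation; the field names follow the `/init` payload above.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class SessionConfiguration:
    """Session configuration fixed at /init time (illustrative sketch)."""
    system_prompt: str
    model_name: str
    temperature: float = 0.7
    token_limit: int = 1000

config = SessionConfiguration(
    system_prompt="You are a talented poet",
    model_name="gpt-4",
)

# Any later attempt to change the configuration mid-session fails
try:
    config.temperature = 1.5
except FrozenInstanceError:
    print("configuration is immutable after init")
```

To change model parameters mid-workflow, end the session and initialize a new one with the new configuration.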
**End Session:** `POST /end`
```json
{
  "session_id": "string"
}
```
Closes a session and prevents further interactions. The final session state is persisted and the feedback system is locked.
**Submit Message Feedback:** `POST /feedback/message`
```json
{
  "session_id": "string",
  "message_id": "string",
  "feedback": "up" | "down"
}
```
Submit thumbs up/down feedback for a specific message. Can only be submitted once per message.
**Submit/Update Session Flag:** `POST|PUT /feedback/flag`
```json
{
  "session_id": "string",
  "flag_message": "string"
}
```
Submit or update a session-level flag message. Editable while session is active, locked after session ends.
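The two feedback rules above (one feedback per message; flags editable only while the session is active) can be sketched with a small in-memory store. The class and its semantics are an assumption for illustration, not the framework's actual storage layer.

```python
class FeedbackStore:
    """In-memory sketch of the feedback rules described above."""

    def __init__(self):
        self.message_feedback = {}   # (session_id, message_id) -> "up" | "down"
        self.session_flags = {}      # session_id -> flag message
        self.ended_sessions = set()

    def submit_message_feedback(self, session_id: str, message_id: str, feedback: str):
        # Thumbs up/down can only be submitted once per message
        key = (session_id, message_id)
        if key in self.message_feedback:
            raise ValueError("feedback already submitted for this message")
        self.message_feedback[key] = feedback

    def set_session_flag(self, session_id: str, flag_message: str):
        # Flags are editable while the session is active, locked afterwards
        if session_id in self.ended_sessions:
            raise ValueError("flag is locked after the session ends")
        self.session_flags[session_id] = flag_message

    def end_session(self, session_id: str):
        self.ended_sessions.add(session_id)
```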
#### Session Management
**List Sessions:** `GET /sessions`
```bash
curl http://localhost:8000/sessions
# Response: ["session1", "session2", ...]
```
**Get History:** `GET /sessions/{session_id}/history`
```bash
curl http://localhost:8000/sessions/abc123/history
```
**Find Sessions by Correlation ID:** `GET /sessions/by-correlation/{correlation_id}`
```bash
curl http://localhost:8000/sessions/by-correlation/task-123
# Response: [{"user_id": "user1", "session_id": "abc123", "correlation_id": "task-123"}]
```
### Correlation & Conversation Tracking
The framework provides advanced tracking capabilities for multi-agent workflows and detailed conversation analytics.
#### Correlation ID Support
**Purpose**: Link multiple sessions across different agents that are part of the same larger task or workflow.
**Usage**:
```python
# Start a task with a correlation ID
response1 = client.send_message(
    "Analyze this data set",
    correlation_id="data-analysis-task-001"
)

# Continue the task in another session/agent with the same correlation ID
response2 = client.send_message(
    "Generate visualizations for the analysis",
    correlation_id="data-analysis-task-001"  # Same correlation ID
)

# Find all sessions related to this task
sessions = requests.get("http://localhost:8000/sessions/by-correlation/data-analysis-task-001")
```
**Key Features**:
- **Optional field**: Can be set when sending messages or creating sessions
- **Persistent**: Correlation ID is maintained throughout the session lifecycle
- **Cross-agent**: Multiple agents can share the same correlation ID
- **Searchable**: Query all sessions by correlation ID
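The lookup behind `GET /sessions/by-correlation/{correlation_id}` amounts to filtering session records on their correlation ID. The sketch below uses hypothetical in-memory records shaped like the endpoint's documented response:

```python
# Hypothetical session records, shaped like the /sessions/by-correlation response
sessions = [
    {"user_id": "user1", "session_id": "abc123", "correlation_id": "task-123"},
    {"user_id": "user2", "session_id": "def456", "correlation_id": "task-123"},
    {"user_id": "user1", "session_id": "ghi789", "correlation_id": "task-999"},
]

def find_by_correlation(records: list, correlation_id: str) -> list:
    """Return all session records sharing a correlation ID."""
    return [r for r in records if r.get("correlation_id") == correlation_id]

related = find_by_correlation(sessions, "task-123")
# Both matching sessions, possibly served by different agents, belong to one task
print([r["session_id"] for r in related])  # ['abc123', 'def456']
```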
#### Conversation ID Support
**Purpose**: Track individual message exchanges (request/reply pairs) within sessions for detailed analytics and debugging.
**Key Features**:
- **Automatic generation**: Each request/reply pair gets a unique conversation ID
- **Shared between request/reply**: User message and agent response share the same conversation ID
- **Database-ready**: Designed for storing individual exchanges in databases
- **Analytics-friendly**: Enables detailed conversation flow analysis
**Example Response with IDs**:
```json
{
  "response_text": "Here's the analysis...",
  "session_id": "session-abc-123",
  "user_id": "data-scientist-1",
  "correlation_id": "data-analysis-task-001",
  "conversation_id": "conv-uuid-456-789"
}
```
#### Manager Agent Coordination
These features enable sophisticated multi-agent workflows:
```python
import uuid

class ManagerAgent:
    def __init__(self):
        self.correlation_id = f"task-{uuid.uuid4()}"

    async def coordinate_task(self, task_description):
        # Step 1: Data analysis agent
        analysis_response = await self.send_to_agent(
            "data-agent",
            f"Analyze: {task_description}",
            correlation_id=self.correlation_id
        )

        # Step 2: Visualization agent
        viz_response = await self.send_to_agent(
            "viz-agent",
            f"Create charts for: {analysis_response}",
            correlation_id=self.correlation_id
        )

        # Step 3: Find all related sessions
        related_sessions = await self.get_sessions_by_correlation(self.correlation_id)

        return {
            "task_id": self.correlation_id,
            "sessions": related_sessions,
            "final_result": viz_response
        }
```
#### Web Interface Features
The test application includes full support for correlation tracking:
- **Correlation ID Input**: Set correlation IDs when sending messages
- **Session Finder**: Search for all sessions sharing a correlation ID
- **ID Display**: Shows correlation and conversation IDs in chat history
- **Visual Indicators**: Clear display of tracking information
#### Configuration Endpoints
**Get Model Configuration:** `GET /config/models`
```json
{
  "default_model": "gpt-4",
  "configuration_status": {
    "valid": true,
    "warnings": [],
    "errors": []
  },
  "supported_models": {
    "openai": ["gpt-4", "gpt-3.5-turbo"],
    "gemini": ["gemini-1.5-pro", "gemini-pro"]
  },
  "supported_providers": {
    "openai": true,
    "gemini": true
  }
}
```
**Validate Model:** `GET /config/validate/{model_name}`
```json
{
  "model": "gpt-4",
  "provider": "openai",
  "supported": true,
  "api_key_configured": true,
  "client_available": true,
  "issues": []
}
```
**Get System Prompt:** `GET /system-prompt`
```json
{
  "system_prompt": "You are a helpful AI assistant that helps users accomplish their tasks efficiently..."
}
```
Returns the default system prompt configured for the agent. Returns 404 if no system prompt is configured.
**Response (404 if not configured):**
```json
{
  "detail": "System prompt not configured"
}
```
### Agent Configuration Parameters
| Parameter | Type | Range | Description | Providers |
| --------------------- | ------- | -------- | -------------------------- | -------------- |
| `temperature` | float | 0.0-2.0 | Controls randomness | OpenAI, Gemini |
| `max_tokens` | integer | 1+ | Maximum response tokens | OpenAI, Gemini |
| `top_p` | float | 0.0-1.0 | Nucleus sampling | OpenAI, Gemini |
| `frequency_penalty` | float | -2.0-2.0 | Reduce frequent tokens | OpenAI only |
| `presence_penalty` | float | -2.0-2.0 | Reduce any repetition | OpenAI only |
| `stop_sequences` | array | - | Custom stop sequences | OpenAI, Gemini |
| `timeout` | integer | 1+ | Request timeout (seconds) | OpenAI, Gemini |
| `max_retries`         | integer | 0+       | Retry attempts             | OpenAI, Gemini |
| `model_selection`     | string  | -        | Override model for session | OpenAI, Gemini |
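Parameter filtering (dropping `frequency_penalty` and `presence_penalty` for Gemini, per the table above) can be sketched as a small pure function. This is an illustrative sketch, not the framework's actual filtering code:

```python
# Provider restrictions taken from the parameter table above
GEMINI_UNSUPPORTED = {"frequency_penalty", "presence_penalty"}

def filter_agent_config(agent_config: dict, provider: str) -> dict:
    """Drop parameters the target provider does not accept."""
    if provider == "gemini":
        return {k: v for k, v in agent_config.items() if k not in GEMINI_UNSUPPORTED}
    return dict(agent_config)

config = {"temperature": 0.8, "max_tokens": 1000, "frequency_penalty": 0.5}
print(filter_agent_config(config, "gemini"))  # frequency_penalty removed
print(filter_agent_config(config, "openai"))  # unchanged
```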
#### File Storage Endpoints
The framework includes a comprehensive file storage system with support for multiple backends (Local, S3, MinIO) and automatic file management.
**Upload File:** `POST /files/upload`
```bash
curl -X POST "http://localhost:8000/files/upload" \
-H "Authorization: Bearer your-token" \
-F "file=@example.pdf" \
-F "user_id=user-123" \
-F "session_id=session-123"
```
**Response:**
```json
{
  "file_id": "uuid-here",
  "filename": "example.pdf",
  "size_bytes": 12345,
  "mime_type": "application/pdf"
}
```
**Download File:** `GET /files/{file_id}/download`
```bash
curl -X GET "http://localhost:8000/files/{file_id}/download" \
-H "Authorization: Bearer your-token" \
-o downloaded_file.pdf
```
**Get File Metadata:** `GET /files/{file_id}/metadata`
```json
{
  "file_id": "uuid-here",
  "filename": "example.pdf",
  "mime_type": "application/pdf",
  "size_bytes": 12345,
  "created_at": "2025-07-28T12:37:33Z",
  "updated_at": "2025-07-28T12:37:33Z",
  "user_id": "user-123",
  "session_id": "session-123",
  "agent_id": null,
  "is_generated": false,
  "tags": [],
  "storage_backend": "local"
}
```
**List Files:** `GET /files?user_id=user-123&session_id=session-123&is_generated=false`
**Delete File:** `DELETE /files/{file_id}`
**Storage Statistics:** `GET /files/stats`
```json
{
  "backends": ["local", "s3"],
  "default_backend": "local",
  "routing_rules": {
    "image/": "s3",
    "video/": "s3"
  }
}
```
**File Storage Configuration:**
The file storage system supports environment-based configuration for multiple backends:
```bash
# Local Storage (always enabled)
LOCAL_STORAGE_PATH=./file_storage
# AWS S3 (optional)
AWS_S3_BUCKET=my-agent-files
AWS_REGION=us-east-1
S3_AS_DEFAULT=false
# MinIO (optional)
MINIO_ENDPOINT=localhost:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
MINIO_BUCKET=agent-files
# Routing Rules
IMAGE_STORAGE_BACKEND=s3
VIDEO_STORAGE_BACKEND=s3
FILE_ROUTING_RULES=image/:s3,video/:minio
```
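The `FILE_ROUTING_RULES` value above pairs a MIME-type prefix with a backend name (`image/:s3,video/:minio`). A sketch of how such a string could be parsed and applied, assuming that format (the functions are illustrative, not the framework's API):

```python
def parse_routing_rules(rules: str) -> dict:
    """Parse 'image/:s3,video/:minio' into {mime_prefix: backend}."""
    parsed = {}
    for entry in rules.split(","):
        prefix, _, backend = entry.partition(":")
        if prefix and backend:
            parsed[prefix.strip()] = backend.strip()
    return parsed

def route_file(mime_type: str, rules: dict, default: str = "local") -> str:
    """Pick a storage backend by the longest matching MIME prefix."""
    for prefix, backend in sorted(rules.items(), key=lambda kv: -len(kv[0])):
        if mime_type.startswith(prefix):
            return backend
    return default

rules = parse_routing_rules("image/:s3,video/:minio")
print(route_file("image/png", rules))        # s3
print(route_file("application/pdf", rules))  # local (default fallback)
```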
**Key Features:**
- **Multi-Backend Support**: Local, S3, and MinIO storage backends
- **Intelligent Routing**: Route files to appropriate backends based on MIME type
- **Generated File Tracking**: Automatic distinction between uploaded and agent-generated files
- **Comprehensive Metadata**: Full file lifecycle tracking with user/session associations
- **Backward Compatibility**: Existing file handling continues to work seamlessly
For detailed configuration and usage examples, see the [File Storage Implementation Guide](docs/file_storage_system_implementation.md).
## 💻 Client Examples
### Python Client
```python
import requests

class AgentClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url
        self.session = requests.Session()
        # Add basic auth if required
        self.session.auth = ("admin", "password")

    def send_message(self, message, session_id=None, correlation_id=None):
        """Send a message and get the complete response."""
        payload = {"query": message, "parts": []}
        if session_id:
            payload["session_id"] = session_id
        if correlation_id:
            payload["correlation_id"] = correlation_id
        response = self.session.post(f"{self.base_url}/message", json=payload)
        response.raise_for_status()
        return response.json()

    def init_session(self, user_id, configuration, correlation_id=None, session_id=None, data=None):
        """Initialize a new session with configuration."""
        payload = {"user_id": user_id, "configuration": configuration}
        if correlation_id:
            payload["correlation_id"] = correlation_id
        if session_id:
            payload["session_id"] = session_id
        if data:
            payload["data"] = data
        response = self.session.post(f"{self.base_url}/init", json=payload)
        response.raise_for_status()
        return response.json()

    def end_session(self, session_id):
        """End a session."""
        response = self.session.post(f"{self.base_url}/end", json={"session_id": session_id})
        response.raise_for_status()
        return response.ok

    def submit_feedback(self, session_id, message_id, feedback):
        """Submit feedback for a message."""
        response = self.session.post(
            f"{self.base_url}/feedback/message",
            json={
                "session_id": session_id,
                "message_id": message_id,
                "feedback": feedback
            }
        )
        response.raise_for_status()
        return response.ok

    def get_model_config(self):
        """Get available models and configuration."""
        response = self.session.get(f"{self.base_url}/config/models")
        response.raise_for_status()
        return response.json()

# Usage example
client = AgentClient()

# Initialize session with configuration
session_data = client.init_session(
    user_id="user123",
    configuration={
        "system_prompt": "You are a creative writing assistant",
        "model_name": "gpt-4",
        "model_config": {
            "temperature": 1.2,
            "token_limit": 500
        }
    },
    correlation_id="creative-writing-session-001"
)
session_id = session_data["session_id"]

# Send messages using the initialized session
response = client.send_message(
    "Write a creative story about space exploration",
    session_id=session_id
)
print(response["response_text"])

# Submit feedback on the response
client.submit_feedback(session_id, response["conversation_id"], "up")

# Continue the conversation
response2 = client.send_message("Add more details about the characters", session_id=session_id)
print(response2["response_text"])

# End session when done
client.end_session(session_id)
```
### JavaScript Client
```javascript
class AgentClient {
  constructor(baseUrl = 'http://localhost:8000') {
    this.baseUrl = baseUrl;
    this.auth = btoa('admin:password'); // Basic auth
  }

  async sendMessage(message, options = {}) {
    const payload = { query: message, parts: [], ...options };
    const response = await fetch(`${this.baseUrl}/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.json();
  }

  async initSession(userId, configuration, options = {}) {
    const payload = { user_id: userId, configuration, ...options };
    const response = await fetch(`${this.baseUrl}/init`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.json();
  }

  async endSession(sessionId) {
    const response = await fetch(`${this.baseUrl}/end`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({ session_id: sessionId })
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.ok;
  }

  async submitFeedback(sessionId, messageId, feedback) {
    const response = await fetch(`${this.baseUrl}/feedback/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({
        session_id: sessionId,
        message_id: messageId,
        feedback
      })
    });
    return response.ok;
  }

  async getModelConfig() {
    const response = await fetch(`${this.baseUrl}/config/models`, {
      headers: { 'Authorization': `Basic ${this.auth}` }
    });
    return response.json();
  }
}

// Usage example
const client = new AgentClient();

// Initialize session with configuration
const sessionInit = await client.initSession('user123', {
  system_prompt: 'You are a helpful coding assistant',
  model_name: 'gpt-4',
  model_config: {
    temperature: 0.7,
    token_limit: 1000
  }
}, {
  correlation_id: 'coding-help-001'
});

// Send messages using the initialized session
const response = await client.sendMessage('Help me debug this Python code', {
  session_id: sessionInit.session_id
});
console.log(response.response_text);

// Submit feedback
await client.submitFeedback(sessionInit.session_id, response.conversation_id, 'up');

// End session when done
await client.endSession(sessionInit.session_id);
```
### curl Examples
```bash
# Basic message with correlation ID
curl -X POST http://localhost:8000/message \
-u admin:password \
-H "Content-Type: application/json" \
-d '{
"query": "Hello, world!",
"correlation_id": "greeting-task-001",
"agent_config": {
"temperature": 0.8,
"model_selection": "gpt-4"
}
}'
# Initialize session
curl -X POST http://localhost:8000/init \
-u admin:password \
-H "Content-Type: application/json" \
-d '{
"user_id": "user123",
"correlation_id": "poetry-session-001",
"configuration": {
"system_prompt": "You are a talented poet",
"model_name": "gpt-4",
"model_config": {
"temperature": 1.5,
"token_limit": 200
}
}
}'
# Submit feedback for a message
curl -X POST http://localhost:8000/feedback/message \
-u admin:password \
-H "Content-Type: application/json" \
-d '{
"session_id": "session-123",
"message_id": "msg-456",
"feedback": "up"
}'
# End session
curl -X POST http://localhost:8000/end \
-u admin:password \
-H "Content-Type: application/json" \
-d '{
"session_id": "session-123"
}'
# Get model configuration
curl http://localhost:8000/config/models -u admin:password
# Validate model support
curl http://localhost:8000/config/validate/gemini-1.5-pro -u admin:password
# Get system prompt
curl http://localhost:8000/system-prompt -u admin:password
# Find sessions by correlation ID
curl http://localhost:8000/sessions/by-correlation/greeting-task-001 -u admin:password
```
## 🌐 Web Interface
- TODO
## 🔧 Advanced Usage
### System Prompt Configuration
The framework supports configurable system prompts both at the server level and per-session:
#### Server-Level System Prompt
Agents can provide a default system prompt via the `get_system_prompt()` method:
```python
from typing import Optional

class MyAgent(AgentInterface):
    def get_system_prompt(self) -> Optional[str]:
        return """
        You are a helpful coding assistant specializing in Python.
        Always provide:
        1. Working code examples
        2. Clear explanations
        3. Best practices
        4. Error handling
        """
```
#### Accessing System Prompt via API
```python
import requests

# Get the default system prompt from the server
response = requests.get("http://localhost:8000/system-prompt")
if response.status_code == 200:
    system_prompt = response.json()["system_prompt"]
else:
    print("No system prompt configured")
```
#### Per-Session System Prompts
```python
# Set a system prompt for a specific use case
custom_prompt = """
You are a creative writing assistant.
Focus on storytelling and narrative structure.
"""

response = client.send_message(
    "Help me write a short story",
    system_prompt=custom_prompt
)
```
#### Web Interface System Prompt Management
The web interface provides comprehensive system prompt management:
- **Auto-loading**: Default system prompt loads automatically on new sessions
- **Session persistence**: Each session remembers its custom system prompt
- **Reset functionality**: "🔄 Reset to Default" button restores server default
- **Manual reload**: Refresh system prompt from server without losing session data
## 🤖 AutoGen Development Guide
The Agent Framework provides a comprehensive base class for AutoGen agents that eliminates 95% of boilerplate code. This allows you to focus on your agent's specific logic rather than AutoGen integration details.
### Quick Start with AutoGen
```python
from typing import Any, Dict, List
from agent_framework import AutoGenBasedAgent, create_basic_agent_server
class DataAnalysisAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return """You are a data analysis expert.
        You can analyze datasets, create visualizations, and generate insights.
        Always provide clear explanations and cite your sources."""

    def get_agent_tools(self) -> List[callable]:
        return [self.analyze_data, self.create_chart]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "Data Analysis Agent",
            "description": "Expert in statistical analysis and data visualization",
            "version": "1.0.0",
            "capabilities": {
                "data_analysis": True,
                "visualization": True,
                "statistical_modeling": True
            }
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create an AssistantAgent optimized for data analysis."""
        from autogen_agentchat.agents import AssistantAgent

        return AssistantAgent(
            name="data_analyst",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=400,  # More iterations for complex analysis
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def analyze_data(dataset: str, analysis_type: str = "descriptive") -> str:
        """Analyze a dataset with the specified analysis type."""
        # Your data analysis logic here
        return f"Analysis complete for {dataset} using {analysis_type} methods"

    @staticmethod
    def create_chart(data: str, chart_type: str = "bar") -> str:
        """Create a chart from data."""
        # Return a chart configuration
        return f'```chart\n{{"type": "chartjs", "chartConfig": {{"type": "{chart_type}"}}}}\n```'
# Start server - includes AutoGen, streaming, MCP tools, state management
create_basic_agent_server(DataAnalysisAgent, port=8000)
```
### What AutoGenBasedAgent Provides
**✅ Complete AutoGen Integration:**
- AssistantAgent setup and lifecycle management
- Model client factory integration
- AutoGen agent configuration
**✅ Advanced Features:**
- Real-time streaming with event handling
- MCP (Model Context Protocol) tools integration
- Session management and state persistence
- Special block parsing (forms, charts, tables, options)
- Tool call visualization and debugging
**✅ Error Handling:**
- Robust error handling and logging
- Graceful degradation for failed components
- Comprehensive debugging information
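The special-block parsing operates on fenced blocks embedded in the model's reply, such as the ```chart block returned by the `create_chart` tool in the Quick Start example. As a rough standalone sketch of the mechanism (the regex and block syntax here are assumptions based on this README's examples, not the framework's actual parser):

```python
import json
import re

# Hypothetical extractor for ```chart special blocks; for illustration only.
CHART_BLOCK = re.compile(r"```chart\s*\n(.*?)\n```", re.DOTALL)

def extract_charts(reply: str) -> list:
    """Return the parsed JSON payload of every chart block in a reply."""
    return [json.loads(body) for body in CHART_BLOCK.findall(reply)]

reply = (
    "Here is your chart:\n"
    '```chart\n{"type": "chartjs", "chartConfig": {"type": "bar"}}\n```'
)
charts = extract_charts(reply)
```

The same pattern extends to the other special blocks (forms, tables, options) by swapping the fence tag in the regex.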
### Adding MCP Tools
```python
from typing import List
from autogen_ext.tools.mcp import StdioServerParams

class AdvancedAgent(AutoGenBasedAgent):
    # ... implement required methods ...

    def get_mcp_server_params(self) -> List[StdioServerParams]:
        """Configure external MCP tools."""
        return [
            # Python execution server
            StdioServerParams(
                command='deno',
                args=['run', '-N', '-R=node_modules', '-W=node_modules',
                      '--node-modules-dir=auto', 'jsr:@pydantic/mcp-run-python', 'stdio'],
                read_timeout_seconds=120
            ),
            # File system access server
            StdioServerParams(
                command='npx',
                args=['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
                read_timeout_seconds=60
            )
        ]
```
### Development Benefits
**📉 95% Code Reduction:**
- **Before**: 970+ lines of boilerplate per agent
- **After**: 40-60 lines for a complete agent
**⚡ Faster Development:**
- **Before**: 2-3 hours to create new agent
- **After**: 10-15 minutes to create new agent
**🔧 Better Maintainability:**
- Framework updates benefit all agents automatically
- Consistent behavior across all AutoGen agents
- Single source of truth for AutoGen integration
### Complete Documentation
For comprehensive documentation, examples, and best practices, see:
- **[AutoGen Agent Development Guide](docs/autogen_agent_guide.md)** - Complete tutorial with examples
- **[AutoGen Refactoring Summary](docs/autogen_refactoring_summary.md)** - Architecture and benefits overview
### Model-Specific Configuration
```python
# OpenAI-specific configuration
openai_config = {
    "model_selection": "gpt-4",
    "temperature": 0.7,
    "frequency_penalty": 0.5,  # OpenAI only
    "presence_penalty": 0.3    # OpenAI only
}

# Gemini-specific configuration
gemini_config = {
    "model_selection": "gemini-1.5-pro",
    "temperature": 0.8,
    "top_p": 0.9,
    "max_tokens": 1000
    # Note: frequency_penalty not supported by Gemini
}
```
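The framework's parameter filtering strips unsupported keys before a request reaches the provider. A minimal standalone sketch of the idea (the supported-parameter sets below are illustrative assumptions, not the framework's authoritative lists):

```python
# Illustrative per-provider whitelists (assumptions for this sketch).
SUPPORTED_PARAMS = {
    "openai": {"model_selection", "temperature", "max_tokens", "top_p",
               "frequency_penalty", "presence_penalty"},
    "gemini": {"model_selection", "temperature", "max_tokens", "top_p"},
}

def filter_config(provider: str, config: dict) -> dict:
    """Drop any parameter the target provider does not accept."""
    allowed = SUPPORTED_PARAMS[provider]
    return {key: value for key, value in config.items() if key in allowed}

# frequency_penalty is silently dropped for Gemini
gemini_safe = filter_config("gemini", {
    "model_selection": "gemini-1.5-pro",
    "temperature": 0.8,
    "frequency_penalty": 0.5,
})
```

Filtering rather than rejecting lets one agent configuration be reused across providers without per-provider branching in client code.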
### Session Persistence
```python
# Start conversation with custom settings
response1 = client.send_message(
    "Let's start a coding session",
    system_prompt="You are my coding pair programming partner",
    config={"temperature": 0.3}
)
session_id = response1["session_id"]

# Continue conversation - settings persist
response2 = client.send_message(
    "Help me debug this function",
    session_id=session_id
)

# Override settings for this message only
response3 = client.send_message(
    "Now be creative and suggest alternatives",
    session_id=session_id,
    config={"temperature": 1.5}  # Temporary override
)
```
### Multi-Modal Support
```python
# Send image with message
payload = {
    "query": "What's in this image?",
    "parts": [
        {
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}
        }
    ]
}
```
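The image part can also be assembled programmatically from raw bytes. The helper name below is hypothetical; only the payload shape follows the multi-modal example above:

```python
import base64

def image_part_from_bytes(data: bytes, mime: str = "image/jpeg") -> dict:
    """Wrap raw image bytes as a data-URL part for the /message payload.

    Hypothetical helper for illustration; the part structure mirrors
    this README's multi-modal example.
    """
    encoded = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{encoded}"}}

payload = {
    "query": "What's in this image?",
    # JPEG magic bytes used as a stand-in for real image data
    "parts": [image_part_from_bytes(b"\xff\xd8\xff\xe0")],
}
```

The resulting `payload` can be posted to `/message` like any other request.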
## 🔒 Authentication
The framework supports two authentication methods that can be used simultaneously:
### 1. Basic Authentication (Username/Password)
HTTP Basic Authentication using username and password credentials.
**Configuration:**
```env
# Enable authentication
REQUIRE_AUTH=true
# Basic Auth credentials
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=your-secure-password
```
**Usage Examples:**
```bash
# cURL with Basic Auth
curl -u admin:password http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
# Python requests with Basic Auth
import requests

response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    auth=("admin", "password")
)
```
### 2. API Key Authentication
A more secure option for API clients, using bearer tokens or `X-API-Key` headers.
**Configuration:**
```env
# Enable authentication
REQUIRE_AUTH=true
# API Keys (comma-separated list of valid keys)
API_KEYS=sk-your-secure-key-123,ak-another-api-key-456,my-client-api-key-789
```
**Usage Examples:**
```bash
# cURL with Bearer Token
curl -H "Authorization: Bearer sk-your-secure-key-123" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'

# cURL with X-API-Key Header
curl -H "X-API-Key: sk-your-secure-key-123" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
# Python requests with Bearer Token
import requests

headers = {
    "Authorization": "Bearer sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)

# Python requests with X-API-Key
headers = {
    "X-API-Key": "sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)
```
### Authentication Priority
The framework tries authentication methods in this order:
1. **API Key via Bearer Token** (`Authorization: Bearer <key>`)
2. **API Key via X-API-Key Header** (`X-API-Key: <key>`)
3. **Basic Authentication** (username/password)
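The precedence above can be sketched as a plain resolution function. This is an illustration of the ordering only, using the example credentials from this section; the framework's actual auth dependency may differ:

```python
# Example credentials for illustration only.
API_KEYS = {"sk-your-secure-key-123"}
BASIC_CREDENTIALS = ("admin", "password")

def authenticate(headers: dict):
    """Return which method authenticated the request, or None."""
    import base64
    auth = headers.get("Authorization", "")
    # 1. API key via Bearer token
    if auth.startswith("Bearer ") and auth[len("Bearer "):] in API_KEYS:
        return "bearer"
    # 2. API key via X-API-Key header
    if headers.get("X-API-Key") in API_KEYS:
        return "x-api-key"
    # 3. HTTP Basic (username:password, base64-encoded)
    if auth.startswith("Basic "):
        try:
            decoded = base64.b64decode(auth[len("Basic "):]).decode()
        except Exception:
            return None
        username, _, password = decoded.partition(":")
        if (username, password) == BASIC_CREDENTIALS:
            return "basic"
    return None
```

A request carrying both a valid bearer token and Basic credentials is authenticated via the token, since the checks short-circuit in priority order.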
### Python Client Library Support
```python
from AgentClient import AgentClient
# Using Basic Auth
client = AgentClient("http://localhost:8000")
client.session.auth = ("admin", "password")
# Using API Key
client = AgentClient("http://localhost:8000")
client.session.headers.update({"X-API-Key": "sk-your-secure-key-123"})
# Send authenticated request
response = client.send_message("Hello, authenticated world!")
```
### Web Interface Authentication
The web interface (`/testapp`) supports both authentication methods. Update the JavaScript client:
```javascript
// Basic Auth
this.auth = btoa('admin:password');
headers['Authorization'] = `Basic ${this.auth}`;
// API Key
headers['X-API-Key'] = 'sk-your-secure-key-123';
```
### Security Best Practices
1. **Use Strong API Keys**: Generate cryptographically secure random keys
2. **Rotate Keys Regularly**: Update API keys periodically
3. **Environment Variables**: Never hardcode credentials in source code
4. **HTTPS Only**: Always use HTTPS in production to protect credentials
5. **Minimize Key Scope**: Use different keys for different applications/users
**Generate Secure API Keys:**
```bash
# Generate a secure API key (32 bytes, base64 encoded)
python -c "import secrets, base64; print('sk-' + base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip('='))"
# Or use openssl
openssl rand -base64 32 | sed 's/^/sk-/'
```
### Disable Authentication
To disable authentication completely:
```env
REQUIRE_AUTH=false
```
When disabled, all endpoints are publicly accessible without any authentication.
## 📝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request
## 📄 License
[Your License Here]
## 🤝 Support
- **Documentation**: This README and inline code comments
- **Examples**: See `test_*.py` files for usage examples
- **Issues**: Report bugs and feature requests via GitHub Issues
---
**Quick Links:**
- [Web Interface](http://localhost:8000/testapp) - Interactive testing
- [API Documentation](http://localhost:8000/docs) - OpenAPI/Swagger docs
- [Configuration Test](http://localhost:8000/config/models) - Validate setup
Raw data
{
"_id": null,
"home_page": null,
"name": "agent-framework-lib",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": "Sebastian Pavel <sebastian@cinco.ai>",
"keywords": "ai, agents, fastapi, autogen, framework, conversational-ai, multi-agent, llm, openai, gemini, chatbot, session-management",
"author": null,
"author_email": "Sebastian Pavel <sebastian@cinco.ai>",
"download_url": "https://files.pythonhosted.org/packages/ce/e0/e00f4dabc8fd31c1d043d9e12ea2e36001dae1b16cd3e4eda4bbdf7491a3/agent_framework_lib-0.1.9.tar.gz",
"platform": null,
"description": "# Agent Framework Library\n\nA comprehensive Python framework for building and serving conversational AI agents with FastAPI. Features automatic multi-provider support (OpenAI, Gemini), dynamic configuration, session management, streaming responses, and a rich web interface.\n\n**\ud83c\udf89 NEW: PyPI Package** - The Agent Framework is now available as a pip-installable package from PyPI, making it easy to integrate into any Python project.\n\n## Installation\n\n```bash\n# Install from PyPI (recommended)\nuv pip install agent-framework-lib\n\n# Install with development dependencies\nuv pip install agent-framework-lib[dev]\n\n# Install from local source (development)\nuv pip install -e .\n```\n\n## \ud83d\ude80 Features\n\n### Core Capabilities\n\n- **Multi-Provider Support**: Automatic routing between OpenAI and Gemini APIs\n- **Dynamic System Prompts**: Session-based system prompt control\n- **Agent Configuration**: Runtime model parameter adjustment\n- **Session Management**: Persistent conversation handling with structured workflow\n- **Session Workflow**: Initialize/end session lifecycle with immutable configurations\n- **User Feedback System**: Message-level thumbs up/down and session-level flags\n- **Media Detection**: Automatic detection and handling of generated images/videos\n- **Web Interface**: Built-in test application with rich UI controls\n- **Debug Logging**: Comprehensive logging for system prompts and model configuration\n\n### Advanced Features\n\n- **Model Auto-Detection**: Automatic provider selection based on model name\n- **Parameter Filtering**: Provider-specific parameter validation (e.g., Gemini doesn't support frequency_penalty)\n- **Configuration Validation**: Built-in validation and status endpoints\n- **Correlation & Conversation Tracking**: Link sessions across agents and track individual exchanges\n- **Manager Agent Support**: Built-in coordination features for multi-agent workflows\n- **Persistent Session Storage**: 
MongoDB integration for scalable session persistence (see [MongoDB Session Storage Guide](docs/mongodb_session_storage.md))\n- **Agent Identity Support**: Multi-agent deployment support with automatic agent identification in MongoDB (see [Agent Identity Guide](docs/agent-identity-support.md))\n- **File Storage System**: Persistent file management with multiple storage backends (Local, S3, MinIO) and intelligent routing (see [File Storage Implementation Guide](docs/file_storage_system_implementation.md))\n- **Generated File Tracking**: Automatic distinction between user-uploaded and agent-generated files with comprehensive metadata\n- **Multi-Storage Architecture**: Route different file types to appropriate storage systems with backend fallbacks\n- **Markdown Conversion**: Automatic conversion of uploaded files (PDF, DOCX, TXT, etc.) to Markdown for optimal LLM processing with complete dependency support for all file types (see [Markdown Conversion Guide](docs/markdown_conversion_feature.md) and [Complete Fix Documentation](docs/markdown_conversion_fix_complete.md))\n- **Reverse Proxy Support**: Automatic path prefix detection for deployment behind reverse proxies (see [Reverse Proxy Setup Guide](REVERSE_PROXY_SETUP.md))\n- **Backward Compatibility**: Existing implementations continue to work\n\n## \ud83d\ude80 Quick Start\n\n### Option 1: AutoGen-Based Agents (Recommended for AutoGen)\n\nThe fastest way to create AutoGen agents with all boilerplate handled automatically:\n\n```python\nfrom typing import Any, Dict, List\nfrom agent_framework import AutoGenBasedAgent, create_basic_agent_server\n\nclass MyAgent(AutoGenBasedAgent):\n def get_agent_prompt(self) -> str:\n return \"You are a helpful assistant that can perform calculations.\"\n \n def get_agent_tools(self) -> List[callable]:\n return [self.add, self.subtract]\n \n def get_agent_metadata(self) -> Dict[str, Any]:\n return {\n \"name\": \"Math Assistant\",\n \"description\": \"An agent that helps with basic 
math\"\n }\n \n def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):\n \"\"\"Create and configure the AutoGen agent.\"\"\"\n from autogen_agentchat.agents import AssistantAgent\n return AssistantAgent(\n name=\"math_assistant\",\n model_client=model_client,\n system_message=system_message,\n max_tool_iterations=250,\n reflect_on_tool_use=True,\n tools=tools,\n model_client_stream=True\n )\n \n @staticmethod\n def add(a: float, b: float) -> float:\n \"\"\"Add two numbers together.\"\"\"\n return a + b\n \n @staticmethod\n def subtract(a: float, b: float) -> float:\n \"\"\"Subtract one number from another.\"\"\"\n return a - b\n\n# Start server with one line - includes AutoGen, MCP tools, streaming, etc.\ncreate_basic_agent_server(MyAgent, port=8000)\n```\n\n**\u2728 Benefits:**\n\n- **95% less code** - No AutoGen boilerplate needed\n- **Built-in streaming** - Real-time responses with tool visualization\n- **MCP integration** - Add external tools easily\n- **Session management** - Automatic state persistence\n- **10-15 minutes** to create a full-featured agent\n- **Full control** over AutoGen agent type and configuration\n\n### Option 2: Generic Agent Interface\n\nFor non-AutoGen agents or custom implementations:\n\n```python\nfrom agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server\n\nclass MyAgent(AgentInterface):\n async def get_metadata(self):\n return {\"name\": \"My Agent\", \"version\": \"1.0.0\"}\n \n async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):\n return StructuredAgentOutput(response_text=f\"Hello! 
You said: {agent_input.query}\")\n\n# Start server with one line - handles server setup, routing, and all framework features\ncreate_basic_agent_server(MyAgent, port=8000)\n```\n\nSee [docs/autogen_agent_guide.md](docs/autogen_agent_guide.md) for the complete AutoGen development guide.\n\n## \ud83d\udccb Table of Contents\n\n- [Features](#-features)\n- [Quick Start](#-quick-start)\n- [Configuration](#\ufe0f-configuration)\n- [API Reference](#-api-reference)\n- [Client Examples](#-client-examples)\n- [Web Interface](#-web-interface)\n- [Advanced Usage](#-advanced-usage)\n- [Development](#\ufe0f-development)\n- [AutoGen Development Guide](#-autogen-development-guide)\n- [Authentication](#-authentication)\n- [Contributing](#-contributing)\n- [License](#-license)\n- [Support](#-support)\n\n## \ud83d\udee0\ufe0f Development\n\n### Traditional Development Setup\n\nFor development within the AgentFramework repository:\n\n### 1. Installation\n\n```bash\n# Clone the repository\ngit clone <your-repository-url>\ncd AgentFramework\n\n# Install dependencies\nuv venv\nuv pip install -e .[dev]\n```\n\n### 2. Configuration\n\n```bash\n# Copy configuration template\ncp env-template.txt .env\n\n# Edit .env with your API keys\n```\n\n**Minimal .env setup:**\n\n```env\n# At least one API key required\nOPENAI_API_KEY=sk-your-openai-key-here\nGEMINI_API_KEY=your-gemini-api-key-here\n\n# Set default model\nDEFAULT_MODEL=gpt-4\n\n# Authentication (optional - set to true to enable)\nREQUIRE_AUTH=false\nBASIC_AUTH_USERNAME=admin\nBASIC_AUTH_PASSWORD=password\nAPI_KEYS=sk-your-secure-api-key-123\n```\n\n### 3. 
Start the Server\n\n**Option A: Using convenience function (recommended for external projects)**\n\n```python\n# In your agent file\nfrom agent_framework import create_basic_agent_server\ncreate_basic_agent_server(MyAgent, port=8000)\n```\n\n**Option B: Traditional method**\n\n```bash\n# Start the development server\nuv run python agent.py\n\n# Or using uvicorn directly\nexport AGENT_CLASS_PATH=\"agent:Agent\"\nuvicorn server:app --reload --host 0.0.0.0 --port 8000\n```\n\n### 4. Test the Agent\n\nOpen your browser to `http://localhost:8000/ui` or make API calls:\n\n```bash\n# Without authentication (REQUIRE_AUTH=false)\ncurl -X POST http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello, how are you?\"}'\n\n# With API Key authentication (REQUIRE_AUTH=true)\ncurl -X POST http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -H \"X-API-Key: sk-your-secure-api-key-123\" \\\n -d '{\"query\": \"Hello, how are you?\"}'\n\n# With Basic authentication (REQUIRE_AUTH=true)\ncurl -u admin:password -X POST http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello, how are you?\"}'\n```\n\n### Project Structure\n\n```\nAgentFramework/\n\u251c\u2500\u2500 agent_framework/ # Main framework package\n\u2502 \u251c\u2500\u2500 __init__.py # Library exports and convenience functions\n\u2502 \u251c\u2500\u2500 agent_interface.py # Abstract agent interface\n\u2502 \u251c\u2500\u2500 base_agent.py # AutoGen-based agent implementation\n\u2502 \u251c\u2500\u2500 server.py # FastAPI server\n\u2502 \u251c\u2500\u2500 model_config.py # Multi-provider configuration\n\u2502 \u251c\u2500\u2500 model_clients.py # Model client factory\n\u2502 \u2514\u2500\u2500 session_storage.py # Session storage implementations\n\u251c\u2500\u2500 examples/ # Usage examples\n\u251c\u2500\u2500 docs/ # Documentation\n\u251c\u2500\u2500 test_app.html # Web interface\n\u251c\u2500\u2500 
env-template.txt # Configuration template\n\u2514\u2500\u2500 pyproject.toml # Package configuration\n```\n\n### Creating Custom Agents\n\n#### Option 1: AutoGen-Based Agents (Recommended)\n\nFor AutoGen-powered agents, inherit from `AutoGenBasedAgent` for maximum productivity:\n\n```python\nfrom typing import Any, Dict, List\nfrom agent_framework import AutoGenBasedAgent, create_basic_agent_server\n\nclass MyAutoGenAgent(AutoGenBasedAgent):\n def get_agent_prompt(self) -> str:\n return \"\"\"You are a specialized agent for [your domain].\n You can [list capabilities].\"\"\"\n \n def get_agent_tools(self) -> List[callable]:\n return [self.my_tool, self.another_tool]\n \n def get_agent_metadata(self) -> Dict[str, Any]:\n return {\n \"name\": \"My AutoGen Agent\",\n \"description\": \"A specialized agent with AutoGen superpowers\",\n \"capabilities\": {\n \"streaming\": True,\n \"tool_use\": True,\n \"mcp_integration\": True\n }\n }\n \n def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):\n \"\"\"Create and configure the AutoGen agent.\"\"\"\n from autogen_agentchat.agents import AssistantAgent\n return AssistantAgent(\n name=\"my_agent\",\n model_client=model_client,\n system_message=system_message,\n max_tool_iterations=300,\n reflect_on_tool_use=True,\n tools=tools,\n model_client_stream=True\n )\n \n @staticmethod\n def my_tool(input_data: str) -> str:\n \"\"\"Your custom tool implementation.\"\"\"\n return f\"Processed: {input_data}\"\n\n# Start server with full AutoGen capabilities\ncreate_basic_agent_server(MyAutoGenAgent, port=8000)\n```\n\n**\u2728 What you get automatically:**\n\n- Real-time streaming responses\n- MCP tools integration\n- Session state management\n- Tool call visualization\n- Error handling & logging\n- Special block parsing (forms, charts, etc.)\n\n#### Option 2: Generic AgentInterface\n\nFor non-AutoGen agents or when you need full control:\n\n```python\nfrom agent_framework import 
AgentInterface, StructuredAgentInput, StructuredAgentOutput\n\nclass MyCustomAgent(AgentInterface):\n async def handle_message(self, session_id: str, agent_input: StructuredAgentInput) -> StructuredAgentOutput:\n # Implement your logic here\n pass\n \n async def handle_message_stream(self, session_id: str, agent_input: StructuredAgentInput):\n # Implement streaming logic\n pass\n \n async def get_metadata(self):\n return {\n \"name\": \"My Custom Agent\",\n \"description\": \"A custom agent implementation\",\n \"capabilities\": {\"streaming\": True}\n }\n \n def get_system_prompt(self) -> Optional[str]:\n return \"Your custom system prompt here...\"\n\n# Start server\ncreate_basic_agent_server(MyCustomAgent, port=8000)\n```\n\n### Testing\n\nThe project includes a comprehensive test suite built with `pytest` and optimized for UV-based testing. The tests are located in the `tests/` directory and are configured to run in a self-contained environment with multiple test categories.\n\n**\ud83d\ude80 Quick Start with UV (Recommended):**\n\n```bash\n# Install test dependencies\nuv sync --group test\n\n# Run all tests\nuv run pytest\n\n# Run tests with coverage\nuv run pytest --cov=agent_framework --cov-report=html\n\n# Run specific test types\nuv run pytest -m unit # Fast unit tests\nuv run pytest -m integration # Integration tests\nuv run pytest -m \"not slow\" # Skip slow tests\n```\n\n**\ud83d\udcda Comprehensive Testing Guide:**\n\nFor detailed instructions on UV-based testing, test categories, CI/CD integration, and development workflows, see:\n\n[**UV Testing Guide**](docs/UV_TESTING_GUIDE.md)\n\n**\ud83d\udee0\ufe0f Alternative Methods:**\n\n```bash\n# Using Make (cross-platform)\nmake test # Run all tests\nmake test-coverage # Run with coverage\nmake test-fast # Run fast tests only\n\n# Using test scripts\n./scripts/test.sh coverage # Unix/macOS\nscripts\\test.bat coverage # Windows\n\n# Using Python test runner\npython scripts/test_runner.py --install-deps 
coverage\n```\n\n**\ud83d\udcca Test Categories:**\n\n- `unit` - Fast, isolated component tests\n- `integration` - Multi-component workflow tests \n- `performance` - Benchmark and performance tests\n- `multimodal` - Tests requiring AI vision/audio capabilities\n- `storage` - File storage backend tests\n- `slow` - Long-running tests (excluded from fast runs)\n\n### Debug Logging\n\nSet debug logging to see detailed system prompt and configuration information:\n\n```bash\nexport AGENT_LOG_LEVEL=DEBUG\nuv run python agent.py\n```\n\nDebug logs include:\n\n- Model configuration loading and validation\n- System prompt handling and persistence\n- Agent configuration merging and application\n- Provider selection and parameter filtering\n- Client creation and model routing\n\n## \u2699\ufe0f Configuration\n\n### Session Storage Configuration\n\nConfigure persistent session storage (optional):\n\n```env\n# === Session Storage ===\n# Use \"memory\" (default) for in-memory storage or \"mongodb\" for persistent storage\nSESSION_STORAGE_TYPE=memory\n\n# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)\nMONGODB_CONNECTION_STRING=mongodb://localhost:27017\nMONGODB_DATABASE_NAME=agent_sessions\nMONGODB_COLLECTION_NAME=sessions\n```\n\nFor detailed MongoDB setup and configuration, see the [MongoDB Session Storage Guide](docs/mongodb_session_storage.md).\n\n## \ud83d\udcda API Reference\n\n### Core Endpoints\n\n#### Send Message\n\nSend a message to the agent and receive a complete response.\n\n**Endpoint:** `POST /message`\n\n**Request Body:**\n\n```json\n{\n \"query\": \"Your message here\",\n \"parts\": [],\n \"system_prompt\": \"Optional custom system prompt\",\n \"agent_config\": {\n \"temperature\": 0.8,\n \"max_tokens\": 1000,\n \"model_selection\": \"gpt-4\"\n },\n \"session_id\": \"optional-session-id\",\n \"correlation_id\": \"optional-correlation-id-for-linking-sessions\"\n}\n```\n\n**Response:**\n\n```json\n{\n \"response_text\": \"Agent's 
response\",\n \"parts\": [\n {\n \"type\": \"text\",\n \"text\": \"Agent's response\"\n }\n ],\n \"session_id\": \"generated-or-provided-session-id\",\n \"user_id\": \"user1\",\n \"correlation_id\": \"correlation-id-if-provided\",\n \"conversation_id\": \"unique-id-for-this-exchange\"\n}\n```\n\n#### Session Workflow (NEW)\n\n**Initialize Session:** `POST /init`\n\n```json\n{\n \"user_id\": \"string\", // required\n \"correlation_id\": \"string\", // optional\n \"session_id\": \"string\", // optional (auto-generated if not provided)\n \"data\": { ... }, // optional\n \"configuration\": { // required\n \"system_prompt\": \"string\",\n \"model_name\": \"string\",\n \"model_config\": {\n \"temperature\": 0.7,\n \"token_limit\": 1000\n }\n }\n}\n```\n\nInitializes a new chat session with immutable configuration. Must be called before any chat interactions. Returns the session configuration and generated session ID if not provided.\n\n**End Session:** `POST /end`\n\n```json\n{\n \"session_id\": \"string\"\n}\n```\n\nCloses a session and prevents further interactions. Persists final session state and locks feedback system.\n\n**Submit Message Feedback:** `POST /feedback/message`\n\n```json\n{\n \"session_id\": \"string\",\n \"message_id\": \"string\",\n \"feedback\": \"up\" | \"down\"\n}\n```\n\nSubmit thumbs up/down feedback for a specific message. Can only be submitted once per message.\n\n**Submit/Update Session Flag:** `POST|PUT /feedback/flag`\n\n```json\n{\n \"session_id\": \"string\",\n \"flag_message\": \"string\"\n}\n```\n\nSubmit or update a session-level flag message. 
Editable while session is active, locked after session ends.\n\n#### Session Management\n\n**List Sessions:** `GET /sessions`\n\n```bash\ncurl http://localhost:8000/sessions\n# Response: [\"session1\", \"session2\", ...]\n```\n\n**Get History:** `GET /sessions/{session_id}/history`\n\n```bash\ncurl http://localhost:8000/sessions/abc123/history\n```\n\n**Find Sessions by Correlation ID:** `GET /sessions/by-correlation/{correlation_id}`\n\n```bash\ncurl http://localhost:8000/sessions/by-correlation/task-123\n# Response: [{\"user_id\": \"user1\", \"session_id\": \"abc123\", \"correlation_id\": \"task-123\"}]\n```\n\n### Correlation & Conversation Tracking\n\nThe framework provides advanced tracking capabilities for multi-agent workflows and detailed conversation analytics.\n\n#### Correlation ID Support\n\n**Purpose**: Link multiple sessions across different agents that are part of the same larger task or workflow.\n\n**Usage**:\n\n```python\n# Start a task with correlation ID\nresponse1 = client.send_message(\n \"Analyze this data set\",\n correlation_id=\"data-analysis-task-001\"\n)\n\n# Continue task in another session/agent with same correlation ID\nresponse2 = client.send_message(\n \"Generate visualizations for the analysis\",\n correlation_id=\"data-analysis-task-001\" # Same correlation ID\n)\n\n# Find all sessions related to this task\nsessions = requests.get(\"/sessions/by-correlation/data-analysis-task-001\")\n```\n\n**Key Features**:\n\n- **Optional field**: Can be set when sending messages or creating sessions\n- **Persistent**: Correlation ID is maintained throughout the session lifecycle\n- **Cross-agent**: Multiple agents can share the same correlation ID\n- **Searchable**: Query all sessions by correlation ID\n\n#### Conversation ID Support\n\n**Purpose**: Track individual message exchanges (request/reply pairs) within sessions for detailed analytics and debugging.\n\n**Key Features**:\n\n- **Automatic generation**: Each request/reply pair gets a 
unique conversation ID\n- **Shared between request/reply**: User message and agent response share the same conversation ID\n- **Database-ready**: Designed for storing individual exchanges in databases\n- **Analytics-friendly**: Enables detailed conversation flow analysis\n\n**Example Response with IDs**:\n\n```json\n{\n \"response_text\": \"Here's the analysis...\",\n \"session_id\": \"session-abc-123\",\n \"user_id\": \"data-scientist-1\",\n \"correlation_id\": \"data-analysis-task-001\",\n \"conversation_id\": \"conv-uuid-456-789\"\n}\n```\n\n#### Manager Agent Coordination\n\nThese features enable sophisticated multi-agent workflows:\n\n```python\nclass ManagerAgent:\n def __init__(self):\n self.correlation_id = f\"task-{uuid.uuid4()}\"\n \n async def coordinate_task(self, task_description):\n # Step 1: Data analysis agent\n analysis_response = await self.send_to_agent(\n \"data-agent\", \n f\"Analyze: {task_description}\",\n correlation_id=self.correlation_id\n )\n \n # Step 2: Visualization agent\n viz_response = await self.send_to_agent(\n \"viz-agent\",\n f\"Create charts for: {analysis_response}\",\n correlation_id=self.correlation_id\n )\n \n # Step 3: Find all related sessions\n related_sessions = await self.get_sessions_by_correlation(self.correlation_id)\n \n return {\n \"task_id\": self.correlation_id,\n \"sessions\": related_sessions,\n \"final_result\": viz_response\n }\n```\n\n#### Web Interface Features\n\nThe test application includes full support for correlation tracking:\n\n- **Correlation ID Input**: Set correlation IDs when sending messages\n- **Session Finder**: Search for all sessions sharing a correlation ID\n- **ID Display**: Shows correlation and conversation IDs in chat history\n- **Visual Indicators**: Clear display of tracking information\n\n#### Configuration Endpoints\n\n**Get Model Configuration:** `GET /config/models`\n\n```json\n{\n \"default_model\": \"gpt-4\",\n \"configuration_status\": {\n \"valid\": true,\n \"warnings\": 
[],\n \"errors\": []\n },\n \"supported_models\": {\n \"openai\": [\"gpt-4\", \"gpt-3.5-turbo\"],\n \"gemini\": [\"gemini-1.5-pro\", \"gemini-pro\"]\n },\n \"supported_providers\": {\n \"openai\": true,\n \"gemini\": true\n }\n}\n```\n\n**Validate Model:** `GET /config/validate/{model_name}`\n\n```json\n{\n \"model\": \"gpt-4\",\n \"provider\": \"openai\",\n \"supported\": true,\n \"api_key_configured\": true,\n \"client_available\": true,\n \"issues\": []\n}\n```\n\n**Get System Prompt:** `GET /system-prompt`\n\n```json\n{\n \"system_prompt\": \"You are a helpful AI assistant that helps users accomplish their tasks efficiently...\"\n}\n```\n\nReturns the default system prompt configured for the agent. Returns 404 if no system prompt is configured.\n\n**Response (404 if not configured):**\n\n```json\n{\n \"detail\": \"System prompt not configured\"\n}\n```\n\n### Agent Configuration Parameters\n\n| Parameter | Type | Range | Description | Providers |\n| --------------------- | ------- | -------- | -------------------------- | -------------- |\n| `temperature` | float | 0.0-2.0 | Controls randomness | OpenAI, Gemini |\n| `max_tokens` | integer | 1+ | Maximum response tokens | OpenAI, Gemini |\n| `top_p` | float | 0.0-1.0 | Nucleus sampling | OpenAI, Gemini |\n| `frequency_penalty` | float | -2.0-2.0 | Reduce frequent tokens | OpenAI only |\n| `presence_penalty` | float | -2.0-2.0 | Reduce any repetition | OpenAI only |\n| `stop_sequences` | array | - | Custom stop sequences | OpenAI, Gemini |\n| `timeout` | integer | 1+ | Request timeout (seconds) | OpenAI, Gemini |\n| `max_retries` | integer | 0+ | Retry attempts | OpenAI, Gemini |\n\n#### File Storage Endpoints\n\nThe framework includes a comprehensive file storage system with support for multiple backends (Local, S3, MinIO) and automatic file management.\n\n**Upload File:** `POST /files/upload`\n\n```bash\ncurl -X POST \"http://localhost:8000/files/upload\" \\\n -H \"Authorization: Bearer your-token\" \\\n -F 
\"file=@example.pdf\" \\\n -F \"user_id=user-123\" \\\n -F \"session_id=session-123\"\n```\n\n**Response:**\n```json\n{\n \"file_id\": \"uuid-here\",\n \"filename\": \"example.pdf\",\n \"size_bytes\": 12345,\n \"mime_type\": \"application/pdf\"\n}\n```\n\n**Download File:** `GET /files/{file_id}/download`\n\n```bash\ncurl -X GET \"http://localhost:8000/files/{file_id}/download\" \\\n -H \"Authorization: Bearer your-token\" \\\n -o downloaded_file.pdf\n```\n\n**Get File Metadata:** `GET /files/{file_id}/metadata`\n\n```json\n{\n \"file_id\": \"uuid-here\",\n \"filename\": \"example.pdf\",\n \"mime_type\": \"application/pdf\",\n \"size_bytes\": 12345,\n \"created_at\": \"2025-07-28T12:37:33Z\",\n \"updated_at\": \"2025-07-28T12:37:33Z\",\n \"user_id\": \"user-123\",\n \"session_id\": \"session-123\",\n \"agent_id\": null,\n \"is_generated\": false,\n \"tags\": [],\n \"storage_backend\": \"local\"\n}\n```\n\n**List Files:** `GET /files?user_id=user-123&session_id=session-123&is_generated=false`\n\n**Delete File:** `DELETE /files/{file_id}`\n\n**Storage Statistics:** `GET /files/stats`\n\n```json\n{\n \"backends\": [\"local\", \"s3\"],\n \"default_backend\": \"local\",\n \"routing_rules\": {\n \"image/\": \"s3\",\n \"video/\": \"s3\"\n }\n}\n```\n\n**File Storage Configuration:**\n\nThe file storage system supports environment-based configuration for multiple backends:\n\n```bash\n# Local Storage (always enabled)\nLOCAL_STORAGE_PATH=./file_storage\n\n# AWS S3 (optional)\nAWS_S3_BUCKET=my-agent-files\nAWS_REGION=us-east-1\nS3_AS_DEFAULT=false\n\n# MinIO (optional) \nMINIO_ENDPOINT=localhost:9000\nMINIO_ACCESS_KEY=minioadmin\nMINIO_SECRET_KEY=minioadmin\nMINIO_BUCKET=agent-files\n\n# Routing Rules\nIMAGE_STORAGE_BACKEND=s3\nVIDEO_STORAGE_BACKEND=s3\nFILE_ROUTING_RULES=image/:s3,video/:minio\n```\n\n**Key Features:**\n- **Multi-Backend Support**: Local, S3, and MinIO storage backends\n- **Intelligent Routing**: Route files to appropriate backends based on MIME type\n- 
## 💻 Client Examples

### Python Client

```python
import requests


class AgentClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url
        self.session = requests.Session()
        # Add basic auth if required
        self.session.auth = ("admin", "password")

    def send_message(self, message, session_id=None, correlation_id=None):
        """Send a message and get the complete response."""
        payload = {
            "query": message,
            "parts": []
        }

        if session_id:
            payload["session_id"] = session_id
        if correlation_id:
            payload["correlation_id"] = correlation_id

        response = self.session.post(
            f"{self.base_url}/message",
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def init_session(self, user_id, configuration, correlation_id=None, session_id=None, data=None):
        """Initialize a new session with configuration."""
        payload = {
            "user_id": user_id,
            "configuration": configuration
        }

        if correlation_id:
            payload["correlation_id"] = correlation_id
        if session_id:
            payload["session_id"] = session_id
        if data:
            payload["data"] = data

        response = self.session.post(
            f"{self.base_url}/init",
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def end_session(self, session_id):
        """End a session."""
        response = self.session.post(
            f"{self.base_url}/end",
            json={"session_id": session_id}
        )
        response.raise_for_status()
        return response.ok

    def submit_feedback(self, session_id, message_id, feedback):
        """Submit feedback for a message."""
        response = self.session.post(
            f"{self.base_url}/feedback/message",
            json={
                "session_id": session_id,
                "message_id": message_id,
                "feedback": feedback
            }
        )
        response.raise_for_status()
        return response.ok

    def get_model_config(self):
        """Get available models and configuration."""
        response = self.session.get(f"{self.base_url}/config/models")
        response.raise_for_status()
        return response.json()


# Usage example
client = AgentClient()

# Initialize session with configuration
session_data = client.init_session(
    user_id="user123",
    configuration={
        "system_prompt": "You are a creative writing assistant",
        "model_name": "gpt-4",
        "model_config": {
            "temperature": 1.2,
            "token_limit": 500
        }
    },
    correlation_id="creative-writing-session-001"
)

session_id = session_data["session_id"]

# Send messages using the initialized session
response = client.send_message(
    "Write a creative story about space exploration",
    session_id=session_id
)
print(response["response_text"])

# Submit feedback on the response
client.submit_feedback(session_id, response["conversation_id"], "up")

# Continue the conversation
response2 = client.send_message("Add more details about the characters", session_id=session_id)
print(response2["response_text"])

# End session when done
client.end_session(session_id)
```

### JavaScript Client

```javascript
class AgentClient {
  constructor(baseUrl = 'http://localhost:8000') {
    this.baseUrl = baseUrl;
    this.auth = btoa('admin:password'); // Basic auth
  }

  async sendMessage(message, options = {}) {
    const payload = {
      query: message,
      parts: [],
      ...options
    };

    const response = await fetch(`${this.baseUrl}/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return response.json();
  }

  async initSession(userId, configuration, options = {}) {
    const payload = {
      user_id: userId,
      configuration,
      ...options
    };

    const response = await fetch(`${this.baseUrl}/init`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return response.json();
  }

  async endSession(sessionId) {
    const response = await fetch(`${this.baseUrl}/end`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({ session_id: sessionId })
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return response.ok;
  }

  async submitFeedback(sessionId, messageId, feedback) {
    const response = await fetch(`${this.baseUrl}/feedback/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({
        session_id: sessionId,
        message_id: messageId,
        feedback
      })
    });

    return response.ok;
  }

  async getModelConfig() {
    const response = await fetch(`${this.baseUrl}/config/models`, {
      headers: { 'Authorization': `Basic ${this.auth}` }
    });
    return response.json();
  }
}

// Usage example
const client = new AgentClient();

// Initialize session with configuration
const sessionInit = await client.initSession('user123', {
  system_prompt: 'You are a helpful coding assistant',
  model_name: 'gpt-4',
  model_config: {
    temperature: 0.7,
    token_limit: 1000
  }
}, {
  correlation_id: 'coding-help-001'
});

// Send messages using the initialized session
const response = await client.sendMessage('Help me debug this Python code', {
  session_id: sessionInit.session_id
});
console.log(response.response_text);

// Submit feedback
await client.submitFeedback(sessionInit.session_id, response.conversation_id, 'up');

// End session when done
await client.endSession(sessionInit.session_id);
```

### curl Examples

```bash
# Basic message with correlation ID
curl -X POST http://localhost:8000/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Hello, world!",
    "correlation_id": "greeting-task-001",
    "agent_config": {
      "temperature": 0.8,
      "model_selection": "gpt-4"
    }
  }'

# Initialize session
curl -X POST http://localhost:8000/init \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "correlation_id": "poetry-session-001",
    "configuration": {
      "system_prompt": "You are a talented poet",
      "model_name": "gpt-4",
      "model_config": {
        "temperature": 1.5,
        "token_limit": 200
      }
    }
  }'

# Submit feedback for a message
curl -X POST http://localhost:8000/feedback/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123",
    "message_id": "msg-456",
    "feedback": "up"
  }'

# End session
curl -X POST http://localhost:8000/end \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123"
  }'

# Get model configuration
curl http://localhost:8000/config/models -u admin:password

# Validate model support
curl http://localhost:8000/config/validate/gemini-1.5-pro -u admin:password

# Get system prompt
curl http://localhost:8000/system-prompt -u admin:password

# Find sessions by correlation ID
curl http://localhost:8000/sessions/by-correlation/greeting-task-001 -u admin:password
```

## 🌐 Web Interface

- TODO

## 🔧 Advanced Usage

### System Prompt Configuration

The framework supports configurable system prompts both at the server level and per-session:

#### Server-Level System Prompt

Agents can provide a default system prompt via the `get_system_prompt()` method:

```python
class MyAgent(AgentInterface):
    def get_system_prompt(self) -> Optional[str]:
        return """
        You are a helpful coding assistant specializing in Python.
        Always provide:
        1. Working code examples
        2. Clear explanations
        3. Best practices
        4. Error handling
        """
```

#### Accessing System Prompt via API

```python
# Get the default system prompt from the server
response = requests.get("http://localhost:8000/system-prompt")
if response.status_code == 200:
    system_prompt = response.json()["system_prompt"]
else:
    print("No system prompt configured")
```

#### Per-Session System Prompts

```python
# Set a system prompt for a specific use case
custom_prompt = """
You are a creative writing assistant.
Focus on storytelling and narrative structure.
"""

response = client.send_message(
    "Help me write a short story",
    system_prompt=custom_prompt
)
```

#### Web Interface System Prompt Management

The web interface provides comprehensive system prompt management:

- **Auto-loading**: Default system prompt loads automatically on new sessions
- **Session persistence**: Each session remembers its custom system prompt
- **Reset functionality**: "🔄 Reset to Default" button restores the server default
- **Manual reload**: Refresh the system prompt from the server without losing session data

## 🤖 AutoGen Development Guide

The Agent Framework provides a comprehensive base class for AutoGen agents that eliminates 95% of boilerplate code.
This allows you to focus on your agent's specific logic rather than AutoGen integration details.

### Quick Start with AutoGen

```python
from typing import Any, Dict, List

from agent_framework import AutoGenBasedAgent, create_basic_agent_server


class DataAnalysisAgent(AutoGenBasedAgent):
    def get_agent_prompt(self) -> str:
        return """You are a data analysis expert.
        You can analyze datasets, create visualizations, and generate insights.
        Always provide clear explanations and cite your sources."""

    def get_agent_tools(self) -> List[callable]:
        return [self.analyze_data, self.create_chart, self.summarize_findings]

    def get_agent_metadata(self) -> Dict[str, Any]:
        return {
            "name": "Data Analysis Agent",
            "description": "Expert in statistical analysis and data visualization",
            "version": "1.0.0",
            "capabilities": {
                "data_analysis": True,
                "visualization": True,
                "statistical_modeling": True
            }
        }

    def create_autogen_agent(self, tools: List[callable], model_client: Any, system_message: str):
        """Create an AssistantAgent optimized for data analysis."""
        from autogen_agentchat.agents import AssistantAgent
        return AssistantAgent(
            name="data_analyst",
            model_client=model_client,
            system_message=system_message,
            max_tool_iterations=400,  # More iterations for complex analysis
            reflect_on_tool_use=True,
            tools=tools,
            model_client_stream=True
        )

    @staticmethod
    def analyze_data(dataset: str, analysis_type: str = "descriptive") -> str:
        """Analyze a dataset with the specified analysis type."""
        # Your data analysis logic here
        return f"Analysis complete for {dataset} using {analysis_type} methods"

    @staticmethod
    def create_chart(data: str, chart_type: str = "bar") -> str:
        """Create a chart from data."""
        # Return chart configuration
        return f'```chart\n{{"type": "chartjs", "chartConfig": {{"type": "{chart_type}"}}}}\n```'

    @staticmethod
    def summarize_findings(findings: str) -> str:
        """Summarize analysis findings."""
        # Your summarization logic here
        return f"Summary: {findings}"

# Start server - includes AutoGen, streaming, MCP tools, state management
create_basic_agent_server(DataAnalysisAgent, port=8000)
```

### What AutoGenBasedAgent Provides

**✅ Complete AutoGen Integration:**

- AssistantAgent setup and lifecycle management
- Model client factory integration
- AutoGen agent configuration

**✅ Advanced Features:**

- Real-time streaming with event handling
- MCP (Model Context Protocol) tools integration
- Session management and state persistence
- Special block parsing (forms, charts, tables, options)
- Tool call visualization and debugging

**✅ Error Handling:**

- Robust error handling and logging
- Graceful degradation for failed components
- Comprehensive debugging information

### Adding MCP Tools

```python
from autogen_ext.tools.mcp import StdioServerParams


class AdvancedAgent(AutoGenBasedAgent):
    # ... implement required methods ...

    def get_mcp_server_params(self) -> List[StdioServerParams]:
        """Configure external MCP tools."""
        return [
            # Python execution server
            StdioServerParams(
                command='deno',
                args=['run', '-N', '-R=node_modules', '-W=node_modules',
                      '--node-modules-dir=auto', 'jsr:@pydantic/mcp-run-python', 'stdio'],
                read_timeout_seconds=120
            ),
            # File system access server
            StdioServerParams(
                command='npx',
                args=['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
                read_timeout_seconds=60
            )
        ]
```

### Development Benefits

**📉 95% Code Reduction:**

- **Before**: 970+ lines of boilerplate per agent
- **After**: 40-60 lines for a complete agent

**⚡ Faster Development:**

- **Before**: 2-3 hours to create a new agent
- **After**: 10-15 minutes to create a new agent

**🔧 Better Maintainability:**

- Framework updates benefit all agents automatically
- Consistent behavior across all AutoGen agents
- Single source of truth for AutoGen integration

### Complete Documentation
For comprehensive documentation, examples, and best practices, see:

- **[AutoGen Agent Development Guide](docs/autogen_agent_guide.md)** - Complete tutorial with examples
- **[AutoGen Refactoring Summary](docs/autogen_refactoring_summary.md)** - Architecture and benefits overview

### Model-Specific Configuration

```python
# OpenAI-specific configuration
openai_config = {
    "model_selection": "gpt-4",
    "temperature": 0.7,
    "frequency_penalty": 0.5,  # OpenAI only
    "presence_penalty": 0.3    # OpenAI only
}

# Gemini-specific configuration
gemini_config = {
    "model_selection": "gemini-1.5-pro",
    "temperature": 0.8,
    "top_p": 0.9,
    "max_tokens": 1000
    # Note: frequency_penalty not supported by Gemini
}
```

### Session Persistence

```python
# Start a conversation with custom settings
response1 = client.send_message(
    "Let's start a coding session",
    system_prompt="You are my coding pair programming partner",
    config={"temperature": 0.3}
)

session_id = response1["session_id"]

# Continue the conversation - settings persist
response2 = client.send_message(
    "Help me debug this function",
    session_id=session_id
)

# Override settings for this message only
response3 = client.send_message(
    "Now be creative and suggest alternatives",
    session_id=session_id,
    config={"temperature": 1.5}  # Temporary override
)
```

### Multi-Modal Support

```python
# Send an image with a message
payload = {
    "query": "What's in this image?",
    "parts": [
        {
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}
        }
    ]
}
```
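Building that payload by hand is mostly base64 plumbing. A small sketch that encodes a local image file into the `parts` shape shown above; the helper name and file path are illustrative, and the payload layout follows the example in this README:

```python
# Sketch: construct a multi-modal /message payload from a local image file.
# image_message() is an illustrative helper, not part of the framework API.
import base64
import mimetypes


def image_message(query: str, path: str) -> dict:
    """Encode a local image as a data URL inside a /message payload."""
    mime = mimetypes.guess_type(path)[0] or "image/jpeg"
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode()
    return {
        "query": query,
        "parts": [{
            "type": "image_url",
            "image_url": {"url": f"data:{mime};base64,{data}"}
        }],
    }
```

The resulting dict can be POSTed to `/message` exactly like the text-only payloads in the client examples.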
## 🔒 Authentication

The framework supports two authentication methods that can be used simultaneously:

### 1. Basic Authentication (Username/Password)

HTTP Basic Authentication using username and password credentials.

**Configuration:**

```env
# Enable authentication
REQUIRE_AUTH=true

# Basic Auth credentials
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=your-secure-password
```

**Usage Examples:**

```bash
# cURL with Basic Auth
curl -u admin:password http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
# Python requests
import requests

response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    auth=("admin", "password")
)
```

### 2. API Key Authentication

More secure option for API clients using bearer tokens or `X-API-Key` headers.

**Configuration:**

```env
# Enable authentication
REQUIRE_AUTH=true

# API Keys (comma-separated list of valid keys)
API_KEYS=sk-your-secure-key-123,ak-another-api-key-456,my-client-api-key-789
```

**Usage Examples:**

```bash
# cURL with Bearer Token
curl -H "Authorization: Bearer sk-your-secure-key-123" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'

# cURL with X-API-Key Header
curl -H "X-API-Key: sk-your-secure-key-123" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
# Python requests with Bearer Token
import requests

headers = {
    "Authorization": "Bearer sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)

# Python requests with X-API-Key
headers = {
    "X-API-Key": "sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)
```

### Authentication Priority

The framework tries authentication methods in this order:

1. **API Key via Bearer Token** (`Authorization: Bearer <key>`)
2. **API Key via X-API-Key Header** (`X-API-Key: <key>`)
3. **Basic Authentication** (username/password)
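The priority order can be sketched as a single credential-resolution function: check for a Bearer token first, then the `X-API-Key` header, then Basic credentials. This is a hedged illustration of the ordering only; the function name and return shape are assumptions, not the framework's internals:

```python
# Illustrative sketch of the priority order above: Bearer token, then
# X-API-Key, then Basic. Header parsing only; not the framework's own code.
import base64


def extract_credentials(headers: dict):
    """Return ('api_key', key), ('basic', (user, password)), or None."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return ("api_key", auth[len("Bearer "):])
    if "X-API-Key" in headers:
        return ("api_key", headers["X-API-Key"])
    if auth.startswith("Basic "):
        decoded = base64.b64decode(auth[len("Basic "):]).decode()
        user, _, password = decoded.partition(":")
        return ("basic", (user, password))
    return None


print(extract_credentials({"Authorization": "Bearer sk-123"}))
# ('api_key', 'sk-123')
```

Note that under this ordering an `X-API-Key` header wins over Basic credentials even when both are present on the same request.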
### Python Client Library Support

```python
from AgentClient import AgentClient

# Using Basic Auth
client = AgentClient("http://localhost:8000")
client.session.auth = ("admin", "password")

# Using an API Key
client = AgentClient("http://localhost:8000")
client.session.headers.update({"X-API-Key": "sk-your-secure-key-123"})

# Send an authenticated request
response = client.send_message("Hello, authenticated world!")
```

### Web Interface Authentication

The web interface (`/testapp`) supports both authentication methods. Update the JavaScript client:

```javascript
// Basic Auth
this.auth = btoa('admin:password');
headers['Authorization'] = `Basic ${this.auth}`;

// API Key
headers['X-API-Key'] = 'sk-your-secure-key-123';
```

### Security Best Practices

1. **Use Strong API Keys**: Generate cryptographically secure random keys
2. **Rotate Keys Regularly**: Update API keys periodically
3. **Environment Variables**: Never hardcode credentials in source code
4. **HTTPS Only**: Always use HTTPS in production to protect credentials
5. **Minimize Key Scope**: Use different keys for different applications/users

**Generate Secure API Keys:**

```bash
# Generate a secure API key (32 bytes, base64 encoded)
python -c "import secrets, base64; print('sk-' + base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip('='))"

# Or use openssl
openssl rand -base64 32 | sed 's/^/sk-/'
```

### Disable Authentication

To disable authentication completely:

```env
REQUIRE_AUTH=false
```

When disabled, all endpoints are publicly accessible without any authentication.

## 📝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request

## 📄 License

MIT

## 🤝 Support

- **Documentation**: This README and inline code comments
- **Examples**: See `test_*.py` files for usage examples
- **Issues**: Report bugs and feature requests via GitHub Issues

---

**Quick Links:**

- [Web Interface](http://localhost:8000/testapp) - Interactive testing
- [API Documentation](http://localhost:8000/docs) - OpenAPI/Swagger docs
- [Configuration Test](http://localhost:8000/config/models) - Validate setup