# Agent Framework Library
A comprehensive Python framework for building and serving conversational AI agents with FastAPI. Features automatic multi-provider support (OpenAI, Gemini), dynamic configuration, session management, streaming responses, and a rich web interface.
**🎉 NEW: Library Usage** - The Agent Framework can now be installed as an external dependency from GitHub repositories. See [Library Usage Guide](docs/library_usage.md) for details.
## Library Installation
```bash
# Install from GitHub (HTTPS - works with public/private repos with token)
pip install git+https://github.com/Cinco-AI/AgentFramework.git
# Install from GitHub (SSH - requires SSH key setup)
pip install git+ssh://git@github.com/Cinco-AI/AgentFramework.git
# Install from local source (development)
pip install -e .
```
## 🚀 Features
### Core Capabilities
- **Multi-Provider Support**: Automatic routing between OpenAI and Gemini APIs
- **Dynamic System Prompts**: Session-based system prompt control
- **Agent Configuration**: Runtime model parameter adjustment
- **Session Management**: Persistent conversation handling with structured workflow
- **Session Workflow**: Initialize/end session lifecycle with immutable configurations
- **User Feedback System**: Message-level thumbs up/down and session-level flags
- **Media Detection**: Automatic detection and handling of generated images/videos
- **Web Interface**: Built-in test application with rich UI controls
- **Debug Logging**: Comprehensive logging for system prompts and model configuration
### Advanced Features
- **Model Auto-Detection**: Automatic provider selection based on model name
- **Parameter Filtering**: Provider-specific parameter validation (e.g., Gemini doesn't support frequency_penalty)
- **Configuration Validation**: Built-in validation and status endpoints
- **Correlation & Conversation Tracking**: Link sessions across agents and track individual exchanges
- **Manager Agent Support**: Built-in coordination features for multi-agent workflows
- **Persistent Session Storage**: MongoDB integration for scalable session persistence (see [MongoDB Session Storage Guide](docs/mongodb_session_storage.md))
- **Agent Identity Support**: Multi-agent deployment support with automatic agent identification in MongoDB (see [Agent Identity Guide](docs/agent-identity-support.md))
- **Reverse Proxy Support**: Automatic path prefix detection for deployment behind reverse proxies (see [Reverse Proxy Setup Guide](REVERSE_PROXY_SETUP.md))
- **Backward Compatibility**: Existing implementations continue to work
## 🚀 Quick Start
### Library Usage (Recommended)
The easiest way to use the Agent Framework is with the convenience function:
```python
from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server
class MyAgent(AgentInterface):
    async def get_metadata(self):
        return {"name": "My Agent", "version": "1.0.0"}

    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):
        return StructuredAgentOutput(response_text=f"Hello! You said: {agent_input.query}")

# Start the server with one line - no server.py file needed!
create_basic_agent_server(MyAgent, port=8000)
```
This automatically handles server setup, routing, and all framework features.
See [examples/](examples/) for complete examples and [docs/library_usage.md](docs/library_usage.md) for comprehensive documentation.
## 📋 Table of Contents
- [Features](#-features)
- [Quick Start](#-quick-start)
- [Configuration](#️-configuration)
- [API Reference](#-api-reference)
- [Client Examples](#-client-examples)
- [Web Interface](#-web-interface)
- [Advanced Usage](#-advanced-usage)
- [Development](#️-development)
- [Authentication](#-authentication)
- [Contributing](#-contributing)
- [License](#-license)
- [Support](#-support)
## 🛠️ Development
### Traditional Development Setup
For development within the AgentFramework repository:
### 1. Installation
```bash
# Clone the repository
git clone <your-repository-url>
cd AgentFramework
# Install dependencies
uv venv
uv pip install -e .[dev]
```
### 2. Configuration
```bash
# Copy configuration template
cp env-template.txt .env
# Edit .env with your API keys
```
**Minimal .env setup:**
```env
# At least one API key required
OPENAI_API_KEY=sk-your-openai-key-here
GEMINI_API_KEY=your-gemini-api-key-here
# Set default model
DEFAULT_MODEL=gpt-4
# Authentication (optional - set to true to enable)
REQUIRE_AUTH=false
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=password
API_KEYS=sk-your-secure-api-key-123
```
### 3. Start the Server
**Option A: Using convenience function (recommended for external projects)**
```python
# In your agent file
from agent_framework import create_basic_agent_server
create_basic_agent_server(MyAgent, port=8000)
```
**Option B: Traditional method**
```bash
# Start the development server
uv run python agent.py
# Or using uvicorn directly
export AGENT_CLASS_PATH="agent:Agent"
uvicorn server:app --reload --host 0.0.0.0 --port 8000
```
### 4. Test the Agent
Open your browser to `http://localhost:8000/testapp` or make API calls:
```bash
# Without authentication (REQUIRE_AUTH=false)
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'

# With API key authentication (REQUIRE_AUTH=true)
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -H "X-API-Key: sk-your-secure-api-key-123" \
  -d '{"query": "Hello, how are you?"}'

# With Basic authentication (REQUIRE_AUTH=true)
curl -u admin:password -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'
```
### Project Structure
```
AgentFramework/
├── agent_framework/           # Main framework package
│   ├── __init__.py            # Library exports and convenience functions
│   ├── agent_interface.py     # Abstract agent interface
│   ├── base_agent.py          # AutoGen-based agent implementation
│   ├── server.py              # FastAPI server
│   ├── model_config.py        # Multi-provider configuration
│   ├── model_clients.py       # Model client factory
│   └── session_storage.py     # Session storage implementations
├── examples/                  # Usage examples
├── docs/                      # Documentation
├── test_app.html              # Web interface
├── env-template.txt           # Configuration template
└── pyproject.toml             # Package configuration
```
### Creating Custom Agents
1. **Inherit from AgentInterface:**
```python
from typing import Optional

from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput

class MyCustomAgent(AgentInterface):
    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput) -> StructuredAgentOutput:
        # Implement your logic here
        pass

    async def handle_message_stream(self, session_id: str, agent_input: StructuredAgentInput):
        # Implement streaming logic
        pass

    async def get_metadata(self):
        return {
            "name": "My Custom Agent",
            "description": "A custom agent implementation",
            "capabilities": {"streaming": True}
        }

    def get_system_prompt(self) -> Optional[str]:
        return "Your custom system prompt here..."
```
2. **Start the server:**
```python
from agent_framework import create_basic_agent_server
create_basic_agent_server(MyCustomAgent, port=8000)
```
### Testing
The project includes a comprehensive test suite built with `pytest`. The tests are located in the `tests/` directory and are configured to run in a self-contained environment.
For detailed instructions on how to set up the test environment and run the tests, please refer to the README file inside the test directory:
[**Agent Framework Test Suite Guide**](tests/README.md)
A brief overview of the steps:
1. Navigate to the test directory: `cd tests`
2. Create a virtual environment: `uv venv`
3. Activate it: `source .venv/bin/activate`
4. Install dependencies: `uv pip install -e .. && uv pip install -r requirements.txt`
5. Run the tests: `pytest`
### Debug Logging
Set debug logging to see detailed system prompt and configuration information:
```bash
export AGENT_LOG_LEVEL=DEBUG
uv run python agent.py
```
Debug logs include:
- Model configuration loading and validation
- System prompt handling and persistence
- Agent configuration merging and application
- Provider selection and parameter filtering
- Client creation and model routing
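The `AGENT_LOG_LEVEL` variable is read at startup. A minimal sketch of how such a setting typically maps onto Python's `logging` module (the function and handler setup here are illustrative, not the framework's actual wiring):

```python
import logging
import os

def configure_logging() -> logging.Logger:
    """Set the 'agent_framework' logger level from AGENT_LOG_LEVEL.

    Illustrative only; the framework may configure logging differently.
    """
    level_name = os.environ.get("AGENT_LOG_LEVEL", "INFO").upper()
    level = getattr(logging, level_name, logging.INFO)
    logger = logging.getLogger("agent_framework")
    logger.setLevel(level)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
    return logger
```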
## ⚙️ Configuration
### Multi-Provider Setup
The framework automatically routes requests to the appropriate AI provider based on the model name:
```env
# === API Keys ===
OPENAI_API_KEY=sk-your-openai-key-here
GEMINI_API_KEY=your-gemini-api-key-here
# === Default Model ===
DEFAULT_MODEL=gpt-4
# === Model Lists (Optional) ===
OPENAI_MODELS=gpt-4,gpt-4-turbo,gpt-4o,gpt-3.5-turbo,o1-preview,o1-mini
GEMINI_MODELS=gemini-1.5-pro,gemini-1.5-flash,gemini-2.0-flash-exp,gemini-pro
# === Provider Defaults ===
FALLBACK_PROVIDER=openai
OPENAI_DEFAULT_TEMPERATURE=0.7
GEMINI_DEFAULT_TEMPERATURE=0.7
```
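The routing rule can be pictured roughly as follows. This is a hypothetical sketch of the behavior described above (model lists plus prefix matching, then `FALLBACK_PROVIDER`); the real logic lives in `model_config.py` and may differ:

```python
import os

# Defaults mirror the example model lists above; env vars override them.
OPENAI_MODELS = os.environ.get(
    "OPENAI_MODELS", "gpt-4,gpt-4-turbo,gpt-4o,gpt-3.5-turbo,o1-preview,o1-mini"
).split(",")
GEMINI_MODELS = os.environ.get(
    "GEMINI_MODELS", "gemini-1.5-pro,gemini-1.5-flash,gemini-2.0-flash-exp,gemini-pro"
).split(",")

def detect_provider(model_name: str) -> str:
    """Pick a provider from the model name, falling back to FALLBACK_PROVIDER."""
    if model_name in OPENAI_MODELS or model_name.startswith(("gpt-", "o1")):
        return "openai"
    if model_name in GEMINI_MODELS or model_name.startswith("gemini"):
        return "gemini"
    return os.environ.get("FALLBACK_PROVIDER", "openai")
```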
### Session Storage Configuration
Configure persistent session storage (optional):
```env
# === Session Storage ===
# Use "memory" (default) for in-memory storage or "mongodb" for persistent storage
SESSION_STORAGE_TYPE=memory
# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions
MONGODB_COLLECTION_NAME=sessions
```
For detailed MongoDB setup and configuration, see the [MongoDB Session Storage Guide](docs/mongodb_session_storage.md).
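Conceptually, the `SESSION_STORAGE_TYPE` switch selects a storage backend at startup. A sketch under illustrative names (the real factory and classes live in `agent_framework.session_storage`):

```python
import os

class InMemorySessionStorage:
    """Minimal stand-in for the framework's default memory backend."""
    def __init__(self):
        self.sessions: dict[str, dict] = {}

def create_session_storage():
    """Choose a backend from SESSION_STORAGE_TYPE ('memory' by default)."""
    storage_type = os.environ.get("SESSION_STORAGE_TYPE", "memory").lower()
    if storage_type == "mongodb":
        # The MongoDB backend needs the MONGODB_* settings shown above.
        raise NotImplementedError("see docs/mongodb_session_storage.md")
    return InMemorySessionStorage()
```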
### Configuration Validation
Test your configuration:
```bash
# Validate configuration
uv run python test_multi_provider.py
# Check specific model support
curl http://localhost:8000/config/validate/gpt-4
```
## 📚 API Reference
### Core Endpoints
#### Send Message
Send a message to the agent and receive a complete response.
**Endpoint:** `POST /message`
**Request Body:**
```json
{
  "query": "Your message here",
  "parts": [],
  "system_prompt": "Optional custom system prompt",
  "agent_config": {
    "temperature": 0.8,
    "max_tokens": 1000,
    "model_selection": "gpt-4"
  },
  "session_id": "optional-session-id",
  "correlation_id": "optional-correlation-id-for-linking-sessions"
}
```
**Response:**
```json
{
  "response_text": "Agent's response",
  "parts": [
    {
      "type": "text",
      "text": "Agent's response"
    }
  ],
  "session_id": "generated-or-provided-session-id",
  "user_id": "user1",
  "correlation_id": "correlation-id-if-provided",
  "conversation_id": "unique-id-for-this-exchange"
}
```
#### Session Workflow (NEW)
**Initialize Session:** `POST /init`
```json
{
  "user_id": "string",          // required
  "correlation_id": "string",   // optional
  "session_id": "string",       // optional (auto-generated if not provided)
  "data": { ... },              // optional
  "configuration": {            // required
    "system_prompt": "string",
    "model_name": "string",
    "model_config": {
      "temperature": 0.7,
      "token_limit": 1000
    }
  }
}
```
Initializes a new chat session with immutable configuration. Must be called before any chat interactions. Returns the session configuration and generated session ID if not provided.
**End Session:** `POST /end`
```json
{
  "session_id": "string"
}
```
Closes a session and prevents further interactions. Persists final session state and locks feedback system.
**Submit Message Feedback:** `POST /feedback/message`
```json
{
  "session_id": "string",
  "message_id": "string",
  "feedback": "up" | "down"
}
```
Submit thumbs up/down feedback for a specific message. Can only be submitted once per message.
**Submit/Update Session Flag:** `POST|PUT /feedback/flag`
```json
{
  "session_id": "string",
  "flag_message": "string"
}
```
Submit or update a session-level flag message. Editable while session is active, locked after session ends.
#### Session Management
**List Sessions:** `GET /sessions`
```bash
curl http://localhost:8000/sessions
# Response: ["session1", "session2", ...]
```
**Get History:** `GET /sessions/{session_id}/history`
```bash
curl http://localhost:8000/sessions/abc123/history
```
**Find Sessions by Correlation ID:** `GET /sessions/by-correlation/{correlation_id}`
```bash
curl http://localhost:8000/sessions/by-correlation/task-123
# Response: [{"user_id": "user1", "session_id": "abc123", "correlation_id": "task-123"}]
```
### Correlation & Conversation Tracking
The framework provides advanced tracking capabilities for multi-agent workflows and detailed conversation analytics.
#### Correlation ID Support
**Purpose**: Link multiple sessions across different agents that are part of the same larger task or workflow.
**Usage**:
```python
# Start a task with a correlation ID
response1 = client.send_message(
    "Analyze this data set",
    correlation_id="data-analysis-task-001"
)

# Continue the task in another session/agent with the same correlation ID
response2 = client.send_message(
    "Generate visualizations for the analysis",
    correlation_id="data-analysis-task-001"  # Same correlation ID
)

# Find all sessions related to this task
sessions = requests.get("/sessions/by-correlation/data-analysis-task-001")
```
**Key Features**:
- **Optional field**: Can be set when sending messages or creating sessions
- **Persistent**: Correlation ID is maintained throughout the session lifecycle
- **Cross-agent**: Multiple agents can share the same correlation ID
- **Searchable**: Query all sessions by correlation ID
#### Conversation ID Support
**Purpose**: Track individual message exchanges (request/reply pairs) within sessions for detailed analytics and debugging.
**Key Features**:
- **Automatic generation**: Each request/reply pair gets a unique conversation ID
- **Shared between request/reply**: User message and agent response share the same conversation ID
- **Database-ready**: Designed for storing individual exchanges in databases
- **Analytics-friendly**: Enables detailed conversation flow analysis
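As a sketch, a request/reply pair sharing one conversation ID might be recorded like this (illustrative data shapes, not the framework's storage schema):

```python
import uuid

def record_exchange(session_id: str, user_message: str, agent_reply: str) -> list[dict]:
    """Build two records (request and reply) that share one conversation ID."""
    conversation_id = f"conv-{uuid.uuid4()}"
    return [
        {"session_id": session_id, "conversation_id": conversation_id,
         "role": "user", "text": user_message},
        {"session_id": session_id, "conversation_id": conversation_id,
         "role": "agent", "text": agent_reply},
    ]
```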
**Example Response with IDs**:
```json
{
  "response_text": "Here's the analysis...",
  "session_id": "session-abc-123",
  "user_id": "data-scientist-1",
  "correlation_id": "data-analysis-task-001",
  "conversation_id": "conv-uuid-456-789"
}
```
#### Manager Agent Coordination
These features enable sophisticated multi-agent workflows:
```python
import uuid

class ManagerAgent:
    def __init__(self):
        self.correlation_id = f"task-{uuid.uuid4()}"

    async def coordinate_task(self, task_description):
        # Step 1: Data analysis agent
        analysis_response = await self.send_to_agent(
            "data-agent",
            f"Analyze: {task_description}",
            correlation_id=self.correlation_id
        )

        # Step 2: Visualization agent
        viz_response = await self.send_to_agent(
            "viz-agent",
            f"Create charts for: {analysis_response}",
            correlation_id=self.correlation_id
        )

        # Step 3: Find all related sessions
        related_sessions = await self.get_sessions_by_correlation(self.correlation_id)

        return {
            "task_id": self.correlation_id,
            "sessions": related_sessions,
            "final_result": viz_response
        }
```
#### Web Interface Features
The test application includes full support for correlation tracking:
- **Correlation ID Input**: Set correlation IDs when sending messages
- **Session Finder**: Search for all sessions sharing a correlation ID
- **ID Display**: Shows correlation and conversation IDs in chat history
- **Visual Indicators**: Clear display of tracking information
#### Configuration Endpoints
**Get Model Configuration:** `GET /config/models`
```json
{
  "default_model": "gpt-4",
  "configuration_status": {
    "valid": true,
    "warnings": [],
    "errors": []
  },
  "supported_models": {
    "openai": ["gpt-4", "gpt-3.5-turbo"],
    "gemini": ["gemini-1.5-pro", "gemini-pro"]
  },
  "supported_providers": {
    "openai": true,
    "gemini": true
  }
}
```
**Validate Model:** `GET /config/validate/{model_name}`
```json
{
  "model": "gpt-4",
  "provider": "openai",
  "supported": true,
  "api_key_configured": true,
  "client_available": true,
  "issues": []
}
```
**Get System Prompt:** `GET /system-prompt`
```json
{
  "system_prompt": "You are a helpful AI assistant that helps users accomplish their tasks efficiently..."
}
```
Returns the default system prompt configured for the agent, or 404 if none is configured.
**Response (404 if not configured):**
```json
{
  "detail": "System prompt not configured"
}
```
### Agent Configuration Parameters
| Parameter | Type | Range | Description | Providers |
|-----------|------|-------|-------------|-----------|
| `temperature` | float | 0.0-2.0 | Controls randomness | OpenAI, Gemini |
| `max_tokens` | integer | 1+ | Maximum response tokens | OpenAI, Gemini |
| `top_p` | float | 0.0-1.0 | Nucleus sampling | OpenAI, Gemini |
| `frequency_penalty` | float | -2.0 to 2.0 | Reduce frequent tokens | OpenAI only |
| `presence_penalty` | float | -2.0 to 2.0 | Reduce any repetition | OpenAI only |
| `stop_sequences` | array | - | Custom stop sequences | OpenAI, Gemini |
| `timeout` | integer | 1+ | Request timeout (seconds) | OpenAI, Gemini |
| `max_retries` | integer | 0+ | Retry attempts | OpenAI, Gemini |
| `model_selection` | string | - | Override model for session | OpenAI, Gemini |
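The "Providers" column implies provider-specific filtering before a request is sent. A sketch of that idea; `OPENAI_ONLY_PARAMS` and `filter_params` are illustrative names, not the framework's API:

```python
# OpenAI-only parameters are dropped before a request goes to Gemini.
OPENAI_ONLY_PARAMS = {"frequency_penalty", "presence_penalty"}

def filter_params(provider: str, agent_config: dict) -> dict:
    """Return a copy of agent_config with unsupported keys removed."""
    if provider == "gemini":
        return {k: v for k, v in agent_config.items() if k not in OPENAI_ONLY_PARAMS}
    return dict(agent_config)
```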
## 💻 Client Examples
### Python Client
```python
import requests

class AgentClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url
        self.session = requests.Session()
        # Add basic auth if required
        self.session.auth = ("admin", "password")

    def send_message(self, message, session_id=None, correlation_id=None):
        """Send a message and get the complete response."""
        payload = {
            "query": message,
            "parts": []
        }
        if session_id:
            payload["session_id"] = session_id
        if correlation_id:
            payload["correlation_id"] = correlation_id

        response = self.session.post(
            f"{self.base_url}/message",
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def init_session(self, user_id, configuration, correlation_id=None, session_id=None, data=None):
        """Initialize a new session with configuration."""
        payload = {
            "user_id": user_id,
            "configuration": configuration
        }
        if correlation_id:
            payload["correlation_id"] = correlation_id
        if session_id:
            payload["session_id"] = session_id
        if data:
            payload["data"] = data

        response = self.session.post(
            f"{self.base_url}/init",
            json=payload
        )
        response.raise_for_status()
        return response.json()

    def end_session(self, session_id):
        """End a session."""
        response = self.session.post(
            f"{self.base_url}/end",
            json={"session_id": session_id}
        )
        response.raise_for_status()
        return response.ok

    def submit_feedback(self, session_id, message_id, feedback):
        """Submit feedback for a message."""
        response = self.session.post(
            f"{self.base_url}/feedback/message",
            json={
                "session_id": session_id,
                "message_id": message_id,
                "feedback": feedback
            }
        )
        response.raise_for_status()
        return response.ok

    def get_model_config(self):
        """Get available models and configuration."""
        response = self.session.get(f"{self.base_url}/config/models")
        response.raise_for_status()
        return response.json()

# Usage example
client = AgentClient()

# Initialize a session with configuration
session_data = client.init_session(
    user_id="user123",
    configuration={
        "system_prompt": "You are a creative writing assistant",
        "model_name": "gpt-4",
        "model_config": {
            "temperature": 1.2,
            "token_limit": 500
        }
    },
    correlation_id="creative-writing-session-001"
)
session_id = session_data["session_id"]

# Send messages using the initialized session
response = client.send_message(
    "Write a creative story about space exploration",
    session_id=session_id
)
print(response["response_text"])

# Submit feedback on the response
client.submit_feedback(session_id, response["conversation_id"], "up")

# Continue the conversation
response2 = client.send_message("Add more details about the characters", session_id=session_id)
print(response2["response_text"])

# End the session when done
client.end_session(session_id)
```
### JavaScript Client
```javascript
class AgentClient {
  constructor(baseUrl = 'http://localhost:8000') {
    this.baseUrl = baseUrl;
    this.auth = btoa('admin:password'); // Basic auth
  }

  async sendMessage(message, options = {}) {
    const payload = {
      query: message,
      parts: [],
      ...options
    };
    const response = await fetch(`${this.baseUrl}/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.json();
  }

  async initSession(userId, configuration, options = {}) {
    const payload = {
      user_id: userId,
      configuration,
      ...options
    };
    const response = await fetch(`${this.baseUrl}/init`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify(payload)
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.json();
  }

  async endSession(sessionId) {
    const response = await fetch(`${this.baseUrl}/end`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({ session_id: sessionId })
    });
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return response.ok;
  }

  async submitFeedback(sessionId, messageId, feedback) {
    const response = await fetch(`${this.baseUrl}/feedback/message`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Basic ${this.auth}`
      },
      body: JSON.stringify({
        session_id: sessionId,
        message_id: messageId,
        feedback
      })
    });
    return response.ok;
  }

  async getModelConfig() {
    const response = await fetch(`${this.baseUrl}/config/models`, {
      headers: { 'Authorization': `Basic ${this.auth}` }
    });
    return response.json();
  }
}

// Usage example
const client = new AgentClient();

// Initialize a session with configuration
const sessionInit = await client.initSession('user123', {
  system_prompt: 'You are a helpful coding assistant',
  model_name: 'gpt-4',
  model_config: {
    temperature: 0.7,
    token_limit: 1000
  }
}, {
  correlation_id: 'coding-help-001'
});

// Send messages using the initialized session
const response = await client.sendMessage('Help me debug this Python code', {
  session_id: sessionInit.session_id
});
console.log(response.response_text);

// Submit feedback
await client.submitFeedback(sessionInit.session_id, response.conversation_id, 'up');

// End the session when done
await client.endSession(sessionInit.session_id);
```
### curl Examples
```bash
# Basic message with correlation ID
curl -X POST http://localhost:8000/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Hello, world!",
    "correlation_id": "greeting-task-001",
    "agent_config": {
      "temperature": 0.8,
      "model_selection": "gpt-4"
    }
  }'

# Initialize a session
curl -X POST http://localhost:8000/init \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user123",
    "correlation_id": "poetry-session-001",
    "configuration": {
      "system_prompt": "You are a talented poet",
      "model_name": "gpt-4",
      "model_config": {
        "temperature": 1.5,
        "token_limit": 200
      }
    }
  }'

# Submit feedback for a message
curl -X POST http://localhost:8000/feedback/message \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123",
    "message_id": "msg-456",
    "feedback": "up"
  }'

# End a session
curl -X POST http://localhost:8000/end \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123"
  }'

# Get model configuration
curl http://localhost:8000/config/models -u admin:password

# Validate model support
curl http://localhost:8000/config/validate/gemini-1.5-pro -u admin:password

# Get the system prompt
curl http://localhost:8000/system-prompt -u admin:password

# Find sessions by correlation ID
curl http://localhost:8000/sessions/by-correlation/greeting-task-001 -u admin:password
```
## 🌐 Web Interface
Access the built-in web interface at `http://localhost:8000/testapp`
### Features:
- **Model Selection**: Dropdown with all available models
- **System Prompt Management**:
- Dedicated textarea for custom prompts
- Auto-loads default system prompt from server
- Session-specific prompt persistence
- Reset to default functionality
- Manual reload from server option
- **Advanced Configuration**: Collapsible panel with all parameters
- **Parameter Validation**: Real-time validation with visual feedback
- **Provider Awareness**: Disables unsupported parameters (e.g., frequency_penalty for Gemini)
- **Session Management**: Create, load, and manage conversation sessions with structured workflow
- **Session Initialization**: Configure sessions with immutable system prompts and model settings
- **User Feedback**: Thumbs up/down feedback and session-level flags
- **Media Detection**: Automatic detection and display of generated images/videos
- **Correlation Tracking**:
- Set correlation IDs to link sessions across agents
- Search for sessions by correlation ID
- Visual display of correlation and conversation IDs
- Manager agent coordination support
### Configuration Presets:
- **Creative**: High temperature, relaxed parameters for creative tasks
- **Precise**: Low temperature, focused parameters for analytical tasks
- **Custom**: Manual parameter adjustment
## 🔧 Advanced Usage
### System Prompt Configuration
The framework supports configurable system prompts both at the server level and per-session:
#### Server-Level System Prompt
Agents can provide a default system prompt via the `get_system_prompt()` method:
```python
from typing import Optional

class MyAgent(AgentInterface):
    def get_system_prompt(self) -> Optional[str]:
        return """
        You are a helpful coding assistant specializing in Python.
        Always provide:
        1. Working code examples
        2. Clear explanations
        3. Best practices
        4. Error handling
        """
```
#### Accessing System Prompt via API
```python
import requests

# Get the default system prompt from the server
response = requests.get("http://localhost:8000/system-prompt")
if response.status_code == 200:
    system_prompt = response.json()["system_prompt"]
else:
    print("No system prompt configured")
```
#### Per-Session System Prompts
```python
# Set a system prompt for a specific use case
custom_prompt = """
You are a creative writing assistant.
Focus on storytelling and narrative structure.
"""

response = client.send_message(
    "Help me write a short story",
    system_prompt=custom_prompt
)
```
#### Web Interface System Prompt Management
The web interface provides comprehensive system prompt management:
- **Auto-loading**: Default system prompt loads automatically on new sessions
- **Session persistence**: Each session remembers its custom system prompt
- **Reset functionality**: "🔄 Reset to Default" button restores server default
- **Manual reload**: Refresh system prompt from server without losing session data
### Model-Specific Configuration
```python
# OpenAI-specific configuration
openai_config = {
    "model_selection": "gpt-4",
    "temperature": 0.7,
    "frequency_penalty": 0.5,  # OpenAI only
    "presence_penalty": 0.3    # OpenAI only
}

# Gemini-specific configuration
gemini_config = {
    "model_selection": "gemini-1.5-pro",
    "temperature": 0.8,
    "top_p": 0.9,
    "max_tokens": 1000
    # Note: frequency_penalty is not supported by Gemini
}
```
### Session Persistence
```python
# Start a conversation with custom settings
response1 = client.send_message(
    "Let's start a coding session",
    system_prompt="You are my coding pair programming partner",
    config={"temperature": 0.3}
)
session_id = response1["session_id"]

# Continue the conversation - settings persist
response2 = client.send_message(
    "Help me debug this function",
    session_id=session_id
)

# Override settings for this message only
response3 = client.send_message(
    "Now be creative and suggest alternatives",
    session_id=session_id,
    config={"temperature": 1.5}  # Temporary override
)
```
### Multi-Modal Support
```python
# Send an image with a message
payload = {
    "query": "What's in this image?",
    "parts": [
        {
            "type": "image_url",
            "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}
        }
    ]
}
```
## 🔒 Authentication
The framework supports two authentication methods that can be used simultaneously:
### 1. Basic Authentication (Username/Password)
HTTP Basic Authentication using username and password credentials.
**Configuration:**
```env
# Enable authentication
REQUIRE_AUTH=true
# Basic Auth credentials
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=your-secure-password
```
**Usage Examples:**
```bash
# cURL with Basic Auth
curl -u admin:password -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
# Python requests
import requests

response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    auth=("admin", "password")
)
```
### 2. API Key Authentication
More secure option for API clients using bearer tokens or X-API-Key headers.
**Configuration:**
```env
# Enable authentication
REQUIRE_AUTH=true
# API Keys (comma-separated list of valid keys)
API_KEYS=sk-your-secure-key-123,ak-another-api-key-456,my-client-api-key-789
```
**Usage Examples:**
```bash
# cURL with Bearer Token
curl -X POST http://localhost:8000/message \
  -H "Authorization: Bearer sk-your-secure-key-123" \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'

# cURL with X-API-Key header
curl -X POST http://localhost:8000/message \
  -H "X-API-Key: sk-your-secure-key-123" \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

```python
import requests

# Python requests with Bearer Token
headers = {
    "Authorization": "Bearer sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)

# Python requests with X-API-Key
headers = {
    "X-API-Key": "sk-your-secure-key-123",
    "Content-Type": "application/json"
}
response = requests.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers=headers
)
```
### Authentication Priority
The framework tries authentication methods in this order:
1. **API Key via Bearer Token** (`Authorization: Bearer <key>`)
2. **API Key via X-API-Key Header** (`X-API-Key: <key>`)
3. **Basic Authentication** (username/password)
### Python Client Library Support
```python
from AgentClient import AgentClient
# Using Basic Auth
client = AgentClient("http://localhost:8000")
client.session.auth = ("admin", "password")
# Using API Key
client = AgentClient("http://localhost:8000")
client.session.headers.update({"X-API-Key": "sk-your-secure-key-123"})
# Send authenticated request
response = client.send_message("Hello, authenticated world!")
```
### Web Interface Authentication
The web interface (`/testapp`) supports both authentication methods. Update the JavaScript client:
```javascript
// Basic Auth
this.auth = btoa('admin:password');
headers['Authorization'] = `Basic ${this.auth}`;
// API Key
headers['X-API-Key'] = 'sk-your-secure-key-123';
```
### Security Best Practices
1. **Use Strong API Keys**: Generate cryptographically secure random keys
2. **Rotate Keys Regularly**: Update API keys periodically
3. **Environment Variables**: Never hardcode credentials in source code
4. **HTTPS Only**: Always use HTTPS in production to protect credentials
5. **Minimize Key Scope**: Use different keys for different applications/users
**Generate Secure API Keys:**
```bash
# Generate a secure API key (32 bytes, base64 encoded)
python -c "import secrets, base64; print('sk-' + base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip('='))"
# Or use openssl
openssl rand -base64 32 | sed 's/^/sk-/'
```
### Disable Authentication
To disable authentication completely:
```env
REQUIRE_AUTH=false
```
When disabled, all endpoints are publicly accessible without any authentication.
## 📝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request
## 📄 License
[Your License Here]
## 🤝 Support
- **Documentation**: This README and inline code comments
- **Examples**: See `test_*.py` files for usage examples
- **Issues**: Report bugs and feature requests via GitHub Issues
---
**Quick Links:**
- [Web Interface](http://localhost:8000/testapp) - Interactive testing
- [API Documentation](http://localhost:8000/docs) - OpenAPI/Swagger docs
- [Configuration Test](http://localhost:8000/config/models) - Validate setup
individual exchanges\n- **Manager Agent Support**: Built-in coordination features for multi-agent workflows\n- **Persistent Session Storage**: MongoDB integration for scalable session persistence (see [MongoDB Session Storage Guide](docs/mongodb_session_storage.md))\n- **Agent Identity Support**: Multi-agent deployment support with automatic agent identification in MongoDB (see [Agent Identity Guide](docs/agent-identity-support.md))\n- **Reverse Proxy Support**: Automatic path prefix detection for deployment behind reverse proxies (see [Reverse Proxy Setup Guide](REVERSE_PROXY_SETUP.md))\n- **Backward Compatibility**: Existing implementations continue to work\n\n## \ud83d\ude80 Quick Start\n\n### Library Usage (Recommended)\n\nThe easiest way to use the Agent Framework is with the convenience function:\n\n```python\nfrom agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server\n\nclass MyAgent(AgentInterface):\n async def get_metadata(self):\n return {\"name\": \"My Agent\", \"version\": \"1.0.0\"}\n \n async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):\n return StructuredAgentOutput(response_text=f\"Hello! 
You said: {agent_input.query}\")\n\n# Start server with one line - no server.py file needed!\ncreate_basic_agent_server(MyAgent, port=8000)\n```\n\nThis automatically handles server setup, routing, and all framework features.\n\nSee [examples/](examples/) for complete examples and [docs/library_usage.md](docs/library_usage.md) for comprehensive documentation.\n\n## \ud83d\udccb Table of Contents\n\n- [Features](#-features)\n- [Quick Start](#-quick-start)\n- [Configuration](#\ufe0f-configuration)\n- [API Reference](#-api-reference)\n- [Client Examples](#-client-examples)\n- [Web Interface](#-web-interface)\n- [Advanced Usage](#-advanced-usage)\n- [Development](#\ufe0f-development)\n- [Authentication](#-authentication)\n- [Contributing](#-contributing)\n- [License](#-license)\n- [Support](#-support)\n\n## \ud83d\udee0\ufe0f Development\n\n### Traditional Development Setup\n\nFor development within the AgentFramework repository:\n\n### 1. Installation\n\n```bash\n# Clone the repository\ngit clone <your-repository-url>\ncd AgentFramework\n\n# Install dependencies\nuv venv\nuv pip install -e .[dev]\n```\n\n### 2. Configuration\n\n```bash\n# Copy configuration template\ncp env-template.txt .env\n\n# Edit .env with your API keys\n```\n\n**Minimal .env setup:**\n```env\n# At least one API key required\nOPENAI_API_KEY=sk-your-openai-key-here\nGEMINI_API_KEY=your-gemini-api-key-here\n\n# Set default model\nDEFAULT_MODEL=gpt-4\n\n# Authentication (optional - set to true to enable)\nREQUIRE_AUTH=false\nBASIC_AUTH_USERNAME=admin\nBASIC_AUTH_PASSWORD=password\nAPI_KEYS=sk-your-secure-api-key-123\n```\n\n### 3. 
Start the Server\n\n**Option A: Using convenience function (recommended for external projects)**\n```python\n# In your agent file\nfrom agent_framework import create_basic_agent_server\ncreate_basic_agent_server(MyAgent, port=8000)\n```\n\n**Option B: Traditional method**\n```bash\n# Start the development server\nuv run python agent.py\n\n# Or using uvicorn directly\nexport AGENT_CLASS_PATH=\"agent:Agent\"\nuvicorn server:app --reload --host 0.0.0.0 --port 8000\n```\n\n### 4. Test the Agent\n\nOpen your browser to `http://localhost:8000/testapp` or make API calls:\n\n```bash\n# Without authentication (REQUIRE_AUTH=false)\ncurl -X POST http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello, how are you?\"}'\n\n# With API Key authentication (REQUIRE_AUTH=true)\ncurl -X POST http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -H \"X-API-Key: sk-your-secure-api-key-123\" \\\n -d '{\"query\": \"Hello, how are you?\"}'\n\n# With Basic authentication (REQUIRE_AUTH=true)\ncurl -u admin:password -X POST http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello, how are you?\"}'\n```\n\n### Project Structure\n\n```\nAgentFramework/\n\u251c\u2500\u2500 agent_framework/ # Main framework package\n\u2502 \u251c\u2500\u2500 __init__.py # Library exports and convenience functions\n\u2502 \u251c\u2500\u2500 agent_interface.py # Abstract agent interface\n\u2502 \u251c\u2500\u2500 base_agent.py # AutoGen-based agent implementation\n\u2502 \u251c\u2500\u2500 server.py # FastAPI server\n\u2502 \u251c\u2500\u2500 model_config.py # Multi-provider configuration\n\u2502 \u251c\u2500\u2500 model_clients.py # Model client factory\n\u2502 \u2514\u2500\u2500 session_storage.py # Session storage implementations\n\u251c\u2500\u2500 examples/ # Usage examples\n\u251c\u2500\u2500 docs/ # Documentation\n\u251c\u2500\u2500 test_app.html # Web interface\n\u251c\u2500\u2500 
env-template.txt # Configuration template\n\u2514\u2500\u2500 pyproject.toml # Package configuration\n```\n\n### Creating Custom Agents\n\n1. **Inherit from AgentInterface:**\n\n```python\nfrom agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput\n\nclass MyCustomAgent(AgentInterface):\n async def handle_message(self, session_id: str, agent_input: StructuredAgentInput) -> StructuredAgentOutput:\n # Implement your logic here\n pass\n \n async def handle_message_stream(self, session_id: str, agent_input: StructuredAgentInput):\n # Implement streaming logic\n pass\n \n async def get_metadata(self):\n return {\n \"name\": \"My Custom Agent\",\n \"description\": \"A custom agent implementation\",\n \"capabilities\": {\"streaming\": True}\n }\n \n def get_system_prompt(self) -> Optional[str]:\n return \"Your custom system prompt here...\"\n```\n\n2. **Start the server:**\n\n```python\nfrom agent_framework import create_basic_agent_server\ncreate_basic_agent_server(MyCustomAgent, port=8000)\n```\n\n### Testing\n\nThe project includes a comprehensive test suite built with `pytest`. The tests are located in the `tests/` directory and are configured to run in a self-contained environment.\n\nFor detailed instructions on how to set up the test environment and run the tests, please refer to the README file inside the test directory:\n\n[**Agent Framework Test Suite Guide**](tests/README.md)\n\nA brief overview of the steps:\n1. Navigate to the test directory: `cd tests`\n2. Create a virtual environment: `uv venv`\n3. Activate it: `source .venv/bin/activate`\n4. Install dependencies: `uv pip install -e .. && uv pip install -r requirements.txt`\n5. 
Run the tests: `pytest`\n\n### Debug Logging\n\nSet debug logging to see detailed system prompt and configuration information:\n\n```bash\nexport AGENT_LOG_LEVEL=DEBUG\nuv run python agent.py\n```\n\nDebug logs include:\n- Model configuration loading and validation\n- System prompt handling and persistence\n- Agent configuration merging and application\n- Provider selection and parameter filtering\n- Client creation and model routing\n\n## \u2699\ufe0f Configuration\n\n### Multi-Provider Setup\n\nThe framework automatically routes requests to the appropriate AI provider based on the model name:\n\n```env\n# === API Keys ===\nOPENAI_API_KEY=sk-your-openai-key-here\nGEMINI_API_KEY=your-gemini-api-key-here\n\n# === Default Model ===\nDEFAULT_MODEL=gpt-4\n\n# === Model Lists (Optional) ===\nOPENAI_MODELS=gpt-4,gpt-4-turbo,gpt-4o,gpt-3.5-turbo,o1-preview,o1-mini\nGEMINI_MODELS=gemini-1.5-pro,gemini-1.5-flash,gemini-2.0-flash-exp,gemini-pro\n\n# === Provider Defaults ===\nFALLBACK_PROVIDER=openai\nOPENAI_DEFAULT_TEMPERATURE=0.7\nGEMINI_DEFAULT_TEMPERATURE=0.7\n```\n\n### Session Storage Configuration\n\nConfigure persistent session storage (optional):\n\n```env\n# === Session Storage ===\n# Use \"memory\" (default) for in-memory storage or \"mongodb\" for persistent storage\nSESSION_STORAGE_TYPE=memory\n\n# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)\nMONGODB_CONNECTION_STRING=mongodb://localhost:27017\nMONGODB_DATABASE_NAME=agent_sessions\nMONGODB_COLLECTION_NAME=sessions\n```\n\nFor detailed MongoDB setup and configuration, see the [MongoDB Session Storage Guide](docs/mongodb_session_storage.md).\n\n### Configuration Validation\n\nTest your configuration:\n\n```bash\n# Validate configuration\nuv run python test_multi_provider.py\n\n# Check specific model support\ncurl http://localhost:8000/config/validate/gpt-4\n```\n\n## \ud83d\udcda API Reference\n\n### Core Endpoints\n\n#### Send Message\nSend a message to the agent and receive a complete 
response.\n\n**Endpoint:** `POST /message`\n\n**Request Body:**\n```json\n{\n \"query\": \"Your message here\",\n \"parts\": [],\n \"system_prompt\": \"Optional custom system prompt\",\n \"agent_config\": {\n \"temperature\": 0.8,\n \"max_tokens\": 1000,\n \"model_selection\": \"gpt-4\"\n },\n \"session_id\": \"optional-session-id\",\n \"correlation_id\": \"optional-correlation-id-for-linking-sessions\"\n}\n```\n\n**Response:**\n```json\n{\n \"response_text\": \"Agent's response\",\n \"parts\": [\n {\n \"type\": \"text\",\n \"text\": \"Agent's response\"\n }\n ],\n \"session_id\": \"generated-or-provided-session-id\",\n \"user_id\": \"user1\",\n \"correlation_id\": \"correlation-id-if-provided\",\n \"conversation_id\": \"unique-id-for-this-exchange\"\n}\n```\n\n#### Session Workflow (NEW)\n\n**Initialize Session:** `POST /init`\n```json\n{\n \"user_id\": \"string\", // required\n \"correlation_id\": \"string\", // optional\n \"session_id\": \"string\", // optional (auto-generated if not provided)\n \"data\": { ... }, // optional\n \"configuration\": { // required\n \"system_prompt\": \"string\",\n \"model_name\": \"string\",\n \"model_config\": {\n \"temperature\": 0.7,\n \"token_limit\": 1000\n }\n }\n}\n```\n\nInitializes a new chat session with immutable configuration. Must be called before any chat interactions. Returns the session configuration and generated session ID if not provided.\n\n**End Session:** `POST /end`\n```json\n{\n \"session_id\": \"string\"\n}\n```\n\nCloses a session and prevents further interactions. Persists final session state and locks feedback system.\n\n**Submit Message Feedback:** `POST /feedback/message`\n```json\n{\n \"session_id\": \"string\",\n \"message_id\": \"string\",\n \"feedback\": \"up\" | \"down\"\n}\n```\n\nSubmit thumbs up/down feedback for a specific message. 
Can only be submitted once per message.\n\n**Submit/Update Session Flag:** `POST|PUT /feedback/flag`\n```json\n{\n \"session_id\": \"string\",\n \"flag_message\": \"string\"\n}\n```\n\nSubmit or update a session-level flag message. Editable while session is active, locked after session ends.\n\n#### Session Management\n\n**List Sessions:** `GET /sessions`\n```bash\ncurl http://localhost:8000/sessions\n# Response: [\"session1\", \"session2\", ...]\n```\n\n**Get History:** `GET /sessions/{session_id}/history`\n```bash\ncurl http://localhost:8000/sessions/abc123/history\n```\n\n**Find Sessions by Correlation ID:** `GET /sessions/by-correlation/{correlation_id}`\n```bash\ncurl http://localhost:8000/sessions/by-correlation/task-123\n# Response: [{\"user_id\": \"user1\", \"session_id\": \"abc123\", \"correlation_id\": \"task-123\"}]\n```\n\n### Correlation & Conversation Tracking\n\nThe framework provides advanced tracking capabilities for multi-agent workflows and detailed conversation analytics.\n\n#### Correlation ID Support\n\n**Purpose**: Link multiple sessions across different agents that are part of the same larger task or workflow.\n\n**Usage**:\n```python\n# Start a task with correlation ID\nresponse1 = client.send_message(\n \"Analyze this data set\",\n correlation_id=\"data-analysis-task-001\"\n)\n\n# Continue task in another session/agent with same correlation ID\nresponse2 = client.send_message(\n \"Generate visualizations for the analysis\",\n correlation_id=\"data-analysis-task-001\" # Same correlation ID\n)\n\n# Find all sessions related to this task\nsessions = requests.get(\"/sessions/by-correlation/data-analysis-task-001\")\n```\n\n**Key Features**:\n- **Optional field**: Can be set when sending messages or creating sessions\n- **Persistent**: Correlation ID is maintained throughout the session lifecycle\n- **Cross-agent**: Multiple agents can share the same correlation ID\n- **Searchable**: Query all sessions by correlation ID\n\n#### Conversation ID 
Support\n\n**Purpose**: Track individual message exchanges (request/reply pairs) within sessions for detailed analytics and debugging.\n\n**Key Features**:\n- **Automatic generation**: Each request/reply pair gets a unique conversation ID\n- **Shared between request/reply**: User message and agent response share the same conversation ID\n- **Database-ready**: Designed for storing individual exchanges in databases\n- **Analytics-friendly**: Enables detailed conversation flow analysis\n\n**Example Response with IDs**:\n```json\n{\n \"response_text\": \"Here's the analysis...\",\n \"session_id\": \"session-abc-123\",\n \"user_id\": \"data-scientist-1\",\n \"correlation_id\": \"data-analysis-task-001\",\n \"conversation_id\": \"conv-uuid-456-789\"\n}\n```\n\n#### Manager Agent Coordination\n\nThese features enable sophisticated multi-agent workflows:\n\n```python\nclass ManagerAgent:\n def __init__(self):\n self.correlation_id = f\"task-{uuid.uuid4()}\"\n \n async def coordinate_task(self, task_description):\n # Step 1: Data analysis agent\n analysis_response = await self.send_to_agent(\n \"data-agent\", \n f\"Analyze: {task_description}\",\n correlation_id=self.correlation_id\n )\n \n # Step 2: Visualization agent\n viz_response = await self.send_to_agent(\n \"viz-agent\",\n f\"Create charts for: {analysis_response}\",\n correlation_id=self.correlation_id\n )\n \n # Step 3: Find all related sessions\n related_sessions = await self.get_sessions_by_correlation(self.correlation_id)\n \n return {\n \"task_id\": self.correlation_id,\n \"sessions\": related_sessions,\n \"final_result\": viz_response\n }\n```\n\n#### Web Interface Features\n\nThe test application includes full support for correlation tracking:\n\n- **Correlation ID Input**: Set correlation IDs when sending messages\n- **Session Finder**: Search for all sessions sharing a correlation ID\n- **ID Display**: Shows correlation and conversation IDs in chat history\n- **Visual Indicators**: Clear display of 
tracking information\n\n#### Configuration Endpoints\n\n**Get Model Configuration:** `GET /config/models`\n```json\n{\n \"default_model\": \"gpt-4\",\n \"configuration_status\": {\n \"valid\": true,\n \"warnings\": [],\n \"errors\": []\n },\n \"supported_models\": {\n \"openai\": [\"gpt-4\", \"gpt-3.5-turbo\"],\n \"gemini\": [\"gemini-1.5-pro\", \"gemini-pro\"]\n },\n \"supported_providers\": {\n \"openai\": true,\n \"gemini\": true\n }\n}\n```\n\n**Validate Model:** `GET /config/validate/{model_name}`\n```json\n{\n \"model\": \"gpt-4\",\n \"provider\": \"openai\",\n \"supported\": true,\n \"api_key_configured\": true,\n \"client_available\": true,\n \"issues\": []\n}\n```\n\n**Get System Prompt:** `GET /system-prompt`\n```json\n{\n \"system_prompt\": \"You are a helpful AI assistant that helps users accomplish their tasks efficiently...\"\n}\n```\n\nReturns the default system prompt configured for the agent. Returns 404 if no system prompt is configured.\n\n**Response (404 if not configured):**\n```json\n{\n \"detail\": \"System prompt not configured\"\n}\n```\n\n### Agent Configuration Parameters\n\n| Parameter | Type | Range | Description | Providers |\n|-----------|------|-------|-------------|-----------|\n| `temperature` | float | 0.0-2.0 | Controls randomness | OpenAI, Gemini |\n| `max_tokens` | integer | 1+ | Maximum response tokens | OpenAI, Gemini |\n| `top_p` | float | 0.0-1.0 | Nucleus sampling | OpenAI, Gemini |\n| `frequency_penalty` | float | -2.0-2.0 | Reduce frequent tokens | OpenAI only |\n| `presence_penalty` | float | -2.0-2.0 | Reduce any repetition | OpenAI only |\n| `stop_sequences` | array | - | Custom stop sequences | OpenAI, Gemini |\n| `timeout` | integer | 1+ | Request timeout (seconds) | OpenAI, Gemini |\n| `max_retries` | integer | 0+ | Retry attempts | OpenAI, Gemini |\n| `model_selection` | string | - | Override model for session | OpenAI, Gemini |\n\n## \ud83d\udcbb Client Examples\n\n### Python Client\n\n```python\nimport 
requests\nimport json\n\nclass AgentClient:\n def __init__(self, base_url=\"http://localhost:8000\"):\n self.base_url = base_url\n self.session = requests.Session()\n # Add basic auth if required\n self.session.auth = (\"admin\", \"password\")\n \n def send_message(self, message, session_id=None, correlation_id=None):\n \"\"\"Send a message and get complete response.\"\"\"\n payload = {\n \"query\": message,\n \"parts\": []\n }\n \n if session_id:\n payload[\"session_id\"] = session_id\n if correlation_id:\n payload[\"correlation_id\"] = correlation_id\n \n response = self.session.post(\n f\"{self.base_url}/message\",\n json=payload\n )\n response.raise_for_status()\n return response.json()\n \n def init_session(self, user_id, configuration, correlation_id=None, session_id=None, data=None):\n \"\"\"Initialize a new session with configuration.\"\"\"\n payload = {\n \"user_id\": user_id,\n \"configuration\": configuration\n }\n \n if correlation_id:\n payload[\"correlation_id\"] = correlation_id\n if session_id:\n payload[\"session_id\"] = session_id\n if data:\n payload[\"data\"] = data\n \n response = self.session.post(\n f\"{self.base_url}/init\",\n json=payload\n )\n response.raise_for_status()\n return response.json()\n \n def end_session(self, session_id):\n \"\"\"End a session.\"\"\"\n response = self.session.post(\n f\"{self.base_url}/end\",\n json={\"session_id\": session_id}\n )\n response.raise_for_status()\n return response.ok\n \n def submit_feedback(self, session_id, message_id, feedback):\n \"\"\"Submit feedback for a message.\"\"\"\n response = self.session.post(\n f\"{self.base_url}/feedback/message\",\n json={\n \"session_id\": session_id,\n \"message_id\": message_id,\n \"feedback\": feedback\n }\n )\n response.raise_for_status()\n return response.ok\n \n def get_model_config(self):\n \"\"\"Get available models and configuration.\"\"\"\n response = self.session.get(f\"{self.base_url}/config/models\")\n response.raise_for_status()\n return 
response.json()\n\n# Usage example\nclient = AgentClient()\n\n# Initialize session with configuration\nsession_data = client.init_session(\n user_id=\"user123\",\n configuration={\n \"system_prompt\": \"You are a creative writing assistant\",\n \"model_name\": \"gpt-4\",\n \"model_config\": {\n \"temperature\": 1.2,\n \"token_limit\": 500\n }\n },\n correlation_id=\"creative-writing-session-001\"\n)\n\nsession_id = session_data[\"session_id\"]\n\n# Send messages using the initialized session\nresponse = client.send_message(\n \"Write a creative story about space exploration\",\n session_id=session_id\n)\nprint(response[\"response_text\"])\n\n# Submit feedback on the response\nclient.submit_feedback(session_id, response[\"conversation_id\"], \"up\")\n\n# Continue the conversation\nresponse2 = client.send_message(\"Add more details about the characters\", session_id=session_id)\nprint(response2[\"response_text\"])\n\n# End session when done\nclient.end_session(session_id)\n```\n\n### JavaScript Client\n\n```javascript\nclass AgentClient {\n constructor(baseUrl = 'http://localhost:8000') {\n this.baseUrl = baseUrl;\n this.auth = btoa('admin:password'); // Basic auth\n }\n \n async sendMessage(message, options = {}) {\n const payload = {\n query: message,\n parts: [],\n ...options\n };\n \n const response = await fetch(`${this.baseUrl}/message`, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Authorization': `Basic ${this.auth}`\n },\n body: JSON.stringify(payload)\n });\n \n if (!response.ok) {\n throw new Error(`HTTP ${response.status}: ${response.statusText}`);\n }\n \n return response.json();\n }\n \n async initSession(userId, configuration, options = {}) {\n const payload = {\n user_id: userId,\n configuration,\n ...options\n };\n \n const response = await fetch(`${this.baseUrl}/init`, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Authorization': `Basic ${this.auth}`\n },\n body: JSON.stringify(payload)\n });\n 
\n if (!response.ok) {\n throw new Error(`HTTP ${response.status}: ${response.statusText}`);\n }\n \n return response.json();\n }\n \n async endSession(sessionId) {\n const response = await fetch(`${this.baseUrl}/end`, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Authorization': `Basic ${this.auth}`\n },\n body: JSON.stringify({ session_id: sessionId })\n });\n \n if (!response.ok) {\n throw new Error(`HTTP ${response.status}: ${response.statusText}`);\n }\n \n return response.ok;\n }\n \n async submitFeedback(sessionId, messageId, feedback) {\n const response = await fetch(`${this.baseUrl}/feedback/message`, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Authorization': `Basic ${this.auth}`\n },\n body: JSON.stringify({\n session_id: sessionId,\n message_id: messageId,\n feedback\n })\n });\n \n return response.ok;\n }\n \n async getModelConfig() {\n const response = await fetch(`${this.baseUrl}/config/models`, {\n headers: { 'Authorization': `Basic ${this.auth}` }\n });\n return response.json();\n }\n}\n\n// Usage example\nconst client = new AgentClient();\n\n// Initialize session with configuration\nconst sessionInit = await client.initSession('user123', {\n system_prompt: 'You are a helpful coding assistant',\n model_name: 'gpt-4',\n model_config: {\n temperature: 0.7,\n token_limit: 1000\n }\n}, {\n correlation_id: 'coding-help-001'\n});\n\n// Send messages using the initialized session\nconst response = await client.sendMessage('Help me debug this Python code', {\n session_id: sessionInit.session_id\n});\nconsole.log(response.response_text);\n\n// Submit feedback\nawait client.submitFeedback(sessionInit.session_id, response.conversation_id, 'up');\n\n// End session when done\nawait client.endSession(sessionInit.session_id);\n```\n\n### curl Examples\n\n```bash\n# Basic message with correlation ID\ncurl -X POST http://localhost:8000/message \\\n -u admin:password \\\n -H \"Content-Type: application/json\" 
\\\n -d '{\n \"query\": \"Hello, world!\",\n \"correlation_id\": \"greeting-task-001\",\n \"agent_config\": {\n \"temperature\": 0.8,\n \"model_selection\": \"gpt-4\"\n }\n }'\n\n# Initialize session\ncurl -X POST http://localhost:8000/init \\\n -u admin:password \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"user_id\": \"user123\",\n \"correlation_id\": \"poetry-session-001\",\n \"configuration\": {\n \"system_prompt\": \"You are a talented poet\",\n \"model_name\": \"gpt-4\",\n \"model_config\": {\n \"temperature\": 1.5,\n \"token_limit\": 200\n }\n }\n }'\n\n# Submit feedback for a message\ncurl -X POST http://localhost:8000/feedback/message \\\n -u admin:password \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"session_id\": \"session-123\",\n \"message_id\": \"msg-456\",\n \"feedback\": \"up\"\n }'\n\n# End session\ncurl -X POST http://localhost:8000/end \\\n -u admin:password \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"session_id\": \"session-123\"\n }'\n\n# Get model configuration\ncurl http://localhost:8000/config/models -u admin:password\n\n# Validate model support\ncurl http://localhost:8000/config/validate/gemini-1.5-pro -u admin:password\n\n# Get system prompt\ncurl http://localhost:8000/system-prompt -u admin:password\n\n# Find sessions by correlation ID\ncurl http://localhost:8000/sessions/by-correlation/greeting-task-001 -u admin:password\n```\n\n## \ud83c\udf10 Web Interface\n\nAccess the built-in web interface at `http://localhost:8000/testapp`\n\n### Features:\n- **Model Selection**: Dropdown with all available models\n- **System Prompt Management**: \n - Dedicated textarea for custom prompts\n - Auto-loads default system prompt from server\n - Session-specific prompt persistence\n - Reset to default functionality\n - Manual reload from server option\n- **Advanced Configuration**: Collapsible panel with all parameters\n- **Parameter Validation**: Real-time validation with visual feedback\n- **Provider Awareness**: 
Disables unsupported parameters (e.g., frequency_penalty for Gemini)\n- **Session Management**: Create, load, and manage conversation sessions with structured workflow\n- **Session Initialization**: Configure sessions with immutable system prompts and model settings\n- **User Feedback**: Thumbs up/down feedback and session-level flags\n- **Media Detection**: Automatic detection and display of generated images/videos\n- **Correlation Tracking**: \n - Set correlation IDs to link sessions across agents\n - Search for sessions by correlation ID\n - Visual display of correlation and conversation IDs\n - Manager agent coordination support\n\n### Configuration Presets:\n- **Creative**: High temperature, relaxed parameters for creative tasks\n- **Precise**: Low temperature, focused parameters for analytical tasks\n- **Custom**: Manual parameter adjustment\n\n## \ud83d\udd27 Advanced Usage\n\n### System Prompt Configuration\n\nThe framework supports configurable system prompts both at the server level and per-session:\n\n#### Server-Level System Prompt\nAgents can provide a default system prompt via the `get_system_prompt()` method:\n\n```python\nclass MyAgent(AgentInterface):\n def get_system_prompt(self) -> Optional[str]:\n return \"\"\"\n You are a helpful coding assistant specializing in Python.\n Always provide:\n 1. Working code examples\n 2. Clear explanations\n 3. Best practices\n 4. 
Error handling\n \"\"\"\n```\n\n#### Accessing System Prompt via API\n```python\n# Get the default system prompt from server\nresponse = requests.get(\"http://localhost:8000/system-prompt\")\nif response.status_code == 200:\n system_prompt = response.json()[\"system_prompt\"]\nelse:\n print(\"No system prompt configured\")\n```\n\n#### Per-Session System Prompts\n```python\n# Set system prompt for specific use case\ncustom_prompt = \"\"\"\nYou are a creative writing assistant.\nFocus on storytelling and narrative structure.\n\"\"\"\n\nresponse = client.send_message(\n \"Help me write a short story\",\n system_prompt=custom_prompt\n)\n```\n\n#### Web Interface System Prompt Management\nThe web interface provides comprehensive system prompt management:\n- **Auto-loading**: Default system prompt loads automatically on new sessions\n- **Session persistence**: Each session remembers its custom system prompt\n- **Reset functionality**: \"\ud83d\udd04 Reset to Default\" button restores server default\n- **Manual reload**: Refresh system prompt from server without losing session data\n\n### Model-Specific Configuration\n\n```python\n# OpenAI-specific configuration\nopenai_config = {\n \"model_selection\": \"gpt-4\",\n \"temperature\": 0.7,\n \"frequency_penalty\": 0.5, # OpenAI only\n \"presence_penalty\": 0.3 # OpenAI only\n}\n\n# Gemini-specific configuration \ngemini_config = {\n \"model_selection\": \"gemini-1.5-pro\",\n \"temperature\": 0.8,\n \"top_p\": 0.9,\n \"max_tokens\": 1000\n # Note: frequency_penalty not supported by Gemini\n}\n```\n\n### Session Persistence\n\n```python\n# Start conversation with custom settings\nresponse1 = client.send_message(\n \"Let's start a coding session\",\n system_prompt=\"You are my coding pair programming partner\",\n config={\"temperature\": 0.3}\n)\n\nsession_id = response1[\"session_id\"]\n\n# Continue conversation - settings persist\nresponse2 = client.send_message(\n \"Help me debug this function\",\n 
session_id=session_id\n)\n\n# Override settings for this message only\nresponse3 = client.send_message(\n \"Now be creative and suggest alternatives\", \n session_id=session_id,\n config={\"temperature\": 1.5} # Temporary override\n)\n```\n\n### Multi-Modal Support\n\n```python\n# Send image with message\npayload = {\n \"query\": \"What's in this image?\",\n \"parts\": [\n {\n \"type\": \"image_url\",\n \"image_url\": {\"url\": \"data:image/jpeg;base64,/9j/4AAQ...\"}\n }\n ]\n}\n```\n\n## \ud83d\udd12 Authentication\n\nThe framework supports two authentication methods that can be used simultaneously:\n\n### 1. Basic Authentication (Username/Password)\n\nHTTP Basic Authentication using username and password credentials.\n\n**Configuration:**\n```env\n# Enable authentication\nREQUIRE_AUTH=true\n\n# Basic Auth credentials\nBASIC_AUTH_USERNAME=admin\nBASIC_AUTH_PASSWORD=your-secure-password\n```\n\n**Usage Examples:**\n\n```bash\n# cURL with Basic Auth\ncurl -u admin:password http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello!\"}'\n\n# Python requests\nimport requests\nresponse = requests.post(\n \"http://localhost:8000/message\",\n json={\"query\": \"Hello!\"},\n auth=(\"admin\", \"password\")\n)\n```\n\n### 2. 
API Key Authentication\n\nMore secure option for API clients using bearer tokens or X-API-Key headers.\n\n**Configuration:**\n```env\n# Enable authentication\nREQUIRE_AUTH=true\n\n# API Keys (comma-separated list of valid keys)\nAPI_KEYS=sk-your-secure-key-123,ak-another-api-key-456,my-client-api-key-789\n```\n\n**Usage Examples:**\n\n```bash\n# cURL with Bearer Token\ncurl -H \"Authorization: Bearer sk-your-secure-key-123\" \\\n http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello!\"}'\n\n# cURL with X-API-Key Header\ncurl -H \"X-API-Key: sk-your-secure-key-123\" \\\n http://localhost:8000/message \\\n -H \"Content-Type: application/json\" \\\n -d '{\"query\": \"Hello!\"}'\n\n# Python requests with Bearer Token\nimport requests\nheaders = {\n \"Authorization\": \"Bearer sk-your-secure-key-123\",\n \"Content-Type\": \"application/json\"\n}\nresponse = requests.post(\n \"http://localhost:8000/message\",\n json={\"query\": \"Hello!\"},\n headers=headers\n)\n\n# Python requests with X-API-Key\nheaders = {\n \"X-API-Key\": \"sk-your-secure-key-123\",\n \"Content-Type\": \"application/json\"\n}\nresponse = requests.post(\n \"http://localhost:8000/message\",\n json={\"query\": \"Hello!\"},\n headers=headers\n)\n```\n\n### Authentication Priority\n\nThe framework tries authentication methods in this order:\n1. **API Key via Bearer Token** (`Authorization: Bearer <key>`)\n2. **API Key via X-API-Key Header** (`X-API-Key: <key>`)\n3. 
### Python Client Library Support

```python
from AgentClient import AgentClient

# Using Basic Auth
client = AgentClient("http://localhost:8000")
client.session.auth = ("admin", "password")

# Using API Key
client = AgentClient("http://localhost:8000")
client.session.headers.update({"X-API-Key": "sk-your-secure-key-123"})

# Send authenticated request
response = client.send_message("Hello, authenticated world!")
```

### Web Interface Authentication

The web interface (`/testapp`) supports both authentication methods. Update the JavaScript client:

```javascript
// Basic Auth
this.auth = btoa('admin:password');
headers['Authorization'] = `Basic ${this.auth}`;

// API Key
headers['X-API-Key'] = 'sk-your-secure-key-123';
```

### Security Best Practices

1. **Use Strong API Keys**: Generate cryptographically secure random keys
2. **Rotate Keys Regularly**: Update API keys periodically
3. **Environment Variables**: Never hardcode credentials in source code
4. **HTTPS Only**: Always use HTTPS in production to protect credentials
5. **Minimize Key Scope**: Use different keys for different applications/users

**Generate Secure API Keys:**

```bash
# Generate a secure API key (32 bytes, base64 encoded)
python -c "import secrets, base64; print('sk-' + base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip('='))"

# Or use openssl
openssl rand -base64 32 | sed 's/^/sk-/'
```

### Disable Authentication

To disable authentication completely:

```env
REQUIRE_AUTH=false
```

When disabled, all endpoints are publicly accessible without any authentication.
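The key-generation one-liner above can also run as a normal script, and the resulting keys round-trip through the comma-separated `API_KEYS` format; the parsing here is a hypothetical sketch, not the framework's actual loader:

```python
import base64
import secrets

def generate_api_key():
    # Same recipe as the one-liner above: 32 random bytes, URL-safe base64, sk- prefix
    token = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip("=")
    return "sk-" + token

# Join two fresh keys into the comma-separated API_KEYS format...
api_keys_env = ",".join(generate_api_key() for _ in range(2))

# ...and parse them back into a set of valid keys (illustrative sketch)
valid_keys = {key.strip() for key in api_keys_env.split(",") if key.strip()}

print(sorted(len(k) for k in valid_keys))  # [46, 46]: "sk-" plus 43 base64 characters
```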
## 📝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request

## 📄 License

[Your License Here]

## 🤝 Support

- **Documentation**: This README and inline code comments
- **Examples**: See `test_*.py` files for usage examples
- **Issues**: Report bugs and feature requests via GitHub Issues

---

**Quick Links:**

- [Web Interface](http://localhost:8000/testapp) - Interactive testing
- [API Documentation](http://localhost:8000/docs) - OpenAPI/Swagger docs
- [Configuration Test](http://localhost:8000/config/models) - Validate setup
"bugtrack_url": null,
"license": null,
"summary": "A comprehensive Python framework for building and serving conversational AI agents with FastAPI",
"version": "0.1.1",
"project_urls": {
"Changelog": "https://github.com/Cinco-AI/AgentFramework/blob/main/CHANGELOG.md",
"Documentation": "https://github.com/Cinco-AI/AgentFramework/blob/main/README.md",
"Homepage": "https://github.com/Cinco-AI/AgentFramework",
"Issues": "https://github.com/Cinco-AI/AgentFramework/issues",
"Repository": "https://github.com/Cinco-AI/AgentFramework.git"
},
"split_keywords": [
"ai",
" agents",
" fastapi",
" autogen",
" framework",
" conversational-ai",
" multi-agent",
" llm",
" openai",
" gemini",
" chatbot",
" session-management"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "03d3ea487fc8b0e46d6bbade48bc998c8b2130137a4f2e38c2a9f337d05ef3a3",
"md5": "cddc238806092c423f29d2f68e9b6d3a",
"sha256": "2916e727c05732583bfcd42a20e8844c45f0d0853139128159f1387223377d05"
},
"downloads": -1,
"filename": "agent_framework_lib-0.1.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "cddc238806092c423f29d2f68e9b6d3a",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 155828,
"upload_time": "2025-07-12T08:00:40",
"upload_time_iso_8601": "2025-07-12T08:00:40.995593Z",
"url": "https://files.pythonhosted.org/packages/03/d3/ea487fc8b0e46d6bbade48bc998c8b2130137a4f2e38c2a9f337d05ef3a3/agent_framework_lib-0.1.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "3340221393e896f5e2db01e1c991d0d9a96b9ce2e7912ff5cec1aad02238f519",
"md5": "cd3e33dec6e2e69ec6b1f074e2cca082",
"sha256": "c8bb636c08a2491b9a3c58b037347017956f1959f808142600ae6dfa0b8a37c3"
},
"downloads": -1,
"filename": "agent_framework_lib-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "cd3e33dec6e2e69ec6b1f074e2cca082",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 188485,
"upload_time": "2025-07-12T08:00:42",
"upload_time_iso_8601": "2025-07-12T08:00:42.757612Z",
"url": "https://files.pythonhosted.org/packages/33/40/221393e896f5e2db01e1c991d0d9a96b9ce2e7912ff5cec1aad02238f519/agent_framework_lib-0.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-12 08:00:42",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Cinco-AI",
"github_project": "AgentFramework",
"github_not_found": true,
"lcname": "agent-framework-lib"
}