# Demiurg SDK
A powerful AI agent framework for building production-ready conversational agents with support for multiple LLM providers and external tool integrations.
## 🎉 What's New in v0.1.26
- **Scheduled Agents**: New `ScheduledAgent` class for creating agents that can execute tasks on schedules
- **Direct Tool Execution**: Schedule direct execution of Composio tools, OpenAI tools, and custom tools
- **Workflow Support**: Build complex multi-step workflows with conditional logic
- **Natural Language Scheduling**: Parse schedules like "every day at 9am" or "every 30 minutes"
- **Full Agent Capabilities**: Scheduled agents maintain all conversational features
## Features
- 🚀 **Clean API** - Simple, intuitive agent initialization
- 🔌 **Multi-Provider Support** - OpenAI with more providers coming soon
- 💰 **Flexible Billing** - Choose who pays for API calls (builder or end-user)
- 🛠️ **Composio Integration** - Connect to 150+ external services with OAuth
- 📬 **Built-in Messaging** - Queue management and conversation history
- 📁 **Multimodal Support** - Handle images, audio, text, and files
- 🎨 **OpenAI Tools** - Image generation (DALL-E 3), TTS, transcription
- ⚡ **Progress Indicators** - Real-time feedback for long operations
- 🏗️ **Production Ready** - Error handling, logging, and scalability
- ⏰ **Scheduled Agents** - Run tasks automatically on schedules
- 🔄 **Workflow Engine** - Build complex multi-step automations
## Installation
```bash
pip install demiurg
```
## Quick Start
### Simple Agent
```python
from demiurg import Agent, OpenAIProvider
# Create an agent with OpenAI
agent = Agent(OpenAIProvider())
# Or with user-based billing
agent = Agent(OpenAIProvider(), billing="user")
```
### Agent with External Tools (Composio)
```python
from demiurg import Agent, OpenAIProvider, Composio
# Create agent with Twitter and GitHub access
agent = Agent(
    OpenAIProvider(),
    Composio("TWITTER", "GITHUB"),
    billing="user"
)
```
### Custom Configuration
```python
from demiurg import Agent, OpenAIProvider, Config
config = Config(
    name="My Assistant",
    description="A helpful AI assistant",
    model="gpt-4o",
    temperature=0.7,
    show_progress_indicators=True
)
agent = Agent(OpenAIProvider(), config=config)
```
## Core Concepts
### Billing Modes
The SDK supports two billing modes:
- **`"builder"`** (default) - API calls are charged to the agent builder's account
- **`"user"`** - API calls are charged to the end user's account
```python
# Builder pays for all API calls
agent = Agent(OpenAIProvider(), billing="builder")
# End users pay for their own API calls
agent = Agent(OpenAIProvider(), billing="user")
```
### Composio Integration
Connect your agents to external services like Twitter, GitHub, Gmail, and 150+ more:
```python
# Configure Composio tools
agent = Agent(
    OpenAIProvider(),
    Composio("TWITTER", "GITHUB", "GMAIL"),
    billing="user"
)

# Check if user has connected their account
status = await agent.check_composio_connection("TWITTER", user_id)

# Handle OAuth flow in conversation
if not status["connected"]:
    await agent.handle_composio_auth_in_conversation(message, "TWITTER")
```
Create a `composio-tools.txt` file in your project root:
```txt
TWITTER=ac_your_twitter_config_id
GITHUB=ac_your_github_config_id
GMAIL=ac_your_gmail_config_id
```
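The file is plain `KEY=value` pairs, one toolkit per line. As an illustration of the format (this is a sketch, not the SDK's internal loader), such a file could be parsed like this:

```python
def parse_composio_tools(text: str) -> dict:
    """Map toolkit names (e.g. "TWITTER") to Composio auth-config IDs."""
    configs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        configs[key.strip()] = value.strip()
    return configs

configs = parse_composio_tools(
    "TWITTER=ac_your_twitter_config_id\nGITHUB=ac_your_github_config_id"
)
```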
### Progress Indicators
Long operations automatically show progress messages:
```python
config = Config(show_progress_indicators=True) # Enabled by default
# Users will see:
# "🎨 Creating your image... This may take a moment."
# "🎵 Transcribing audio... This may take a moment."
```
## Message Handling
### Sending Messages
```python
from demiurg import send_text, send_file
# Send text message
await send_text(conversation_id, "Hello from my agent!")
# Send file with caption
await send_file(
    conversation_id,
    "/path/to/image.png",
    caption="Here's your generated image!"
)
```
### Processing Messages
```python
from demiurg import Message
# Process user message
message = Message(
    content="Generate an image of a sunset",
    user_id="user123",
    conversation_id="conv456"
)
response = await agent.process_message(message)
```
### Conversation History
```python
from demiurg import get_conversation_history
# Get formatted history for LLM context
messages = await get_conversation_history(
    conversation_id,
    limit=50,
    provider="openai"  # Formats for specific provider
)
```
## Built-in OpenAI Tools
When using the OpenAI provider with tools enabled:
```python
config = Config(use_tools=True)
agent = Agent(OpenAIProvider(), config=config)
```
Available tools:
- **generate_image** - Create images with DALL-E 3
- **text_to_speech** - Convert text to natural speech
- **transcribe_audio** - Transcribe audio files
## Custom Agents
### Basic Custom Agent
```python
from demiurg import Agent, OpenAIProvider, Message
class MyCustomAgent(Agent):
    def __init__(self):
        super().__init__(
            OpenAIProvider(),
            billing="user"
        )

    async def process_message(self, message: Message, content=None) -> str:
        # Add custom preprocessing
        if "urgent" in message.content.lower():
            return await self.handle_urgent_request(message)

        # Use standard processing
        return await super().process_message(message, content)
```
### Agent with Custom Tools
```python
from demiurg import Agent, Config, OpenAIProvider

class ToolAgent(Agent):
    def __init__(self):
        config = Config(use_tools=True)
        super().__init__(OpenAIProvider(), config=config)

        # Register custom tool
        self.register_custom_tool(
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get current weather",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {"type": "string"}
                        },
                        "required": ["location"]
                    }
                }
            },
            self.get_weather
        )

    async def get_weather(self, location: str) -> str:
        # Implement weather fetching
        return f"Weather in {location}: Sunny, 72°F"
```
## File Handling
The SDK automatically handles various file types:
```python
# Images are analyzed with vision models
# Audio files are automatically transcribed
# Text files have their content extracted
# File size limit: 10MB
# Supported image formats: PNG, JPEG, WEBP, GIF
# Supported audio formats: MP3, WAV, M4A, and more
```
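Based on the limits above, a client-side pre-check before uploading could look like the following sketch. The constants mirror the documented limits; `classify_file` is a hypothetical helper, not an SDK function:

```python
from pathlib import Path

MAX_FILE_SIZE = 10 * 1024 * 1024  # documented 10MB limit
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".gif"}
AUDIO_EXTS = {".mp3", ".wav", ".m4a"}

def classify_file(path: str, size: int) -> str:
    """Reject oversized files and guess how the SDK would treat the rest."""
    if size > MAX_FILE_SIZE:
        raise ValueError(f"{path} exceeds the 10MB limit")
    ext = Path(path).suffix.lower()
    if ext in IMAGE_EXTS:
        return "image"     # analyzed with a vision model
    if ext in AUDIO_EXTS:
        return "audio"     # transcribed automatically
    return "document"      # content extracted as text
```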
## Error Handling
```python
from demiurg.exceptions import (
    DemiurgError,        # Base exception
    ConfigurationError,  # Configuration issues
    MessagingError,      # Messaging failures
    ProviderError,       # LLM provider errors
    FileError,           # File operation failures
    ToolError            # Tool execution errors
)

try:
    response = await agent.process_message(message)
except ProviderError as e:
    # Handle LLM provider issues
    logger.error(f"Provider error: {e}")
except DemiurgError as e:
    # Handle other Demiurg errors
    logger.error(f"Agent error: {e}")
```
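Transient provider failures are often worth retrying before giving up. Here is a generic async retry helper with exponential backoff, demonstrated with a stub rather than a real agent; in practice you might pass `retry_on=(ProviderError,)` and `call=lambda: agent.process_message(message)`:

```python
import asyncio

async def with_retries(call, retries=3, base_delay=1.0, retry_on=(Exception,)):
    """Await call() up to `retries` times, backing off exponentially."""
    for attempt in range(retries):
        try:
            return await call()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of attempts; re-raise for the caller
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo with a stub that fails twice, then succeeds.
attempts = []

async def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient provider error")
    return "ok"

result = asyncio.run(with_retries(flaky, base_delay=0.01, retry_on=(RuntimeError,)))
```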
## Environment Variables
Required environment variables:
```bash
# Core Configuration
DEMIURG_BACKEND_URL=http://backend:3000 # Backend API URL
DEMIURG_AGENT_TOKEN=your_token # Authentication token
DEMIURG_AGENT_ID=your_agent_id # Unique agent identifier
# Provider Keys
OPENAI_API_KEY=your_openai_key # For OpenAI provider
# Composio Integration (optional)
COMPOSIO_API_KEY=your_composio_key # For external tools
COMPOSIO_TOOLS=TWITTER,GITHUB,GMAIL # Comma-separated toolkits
# Advanced Settings
DEMIURG_USER_ID=builder_user_id # Builder's user ID (for billing)
TOOL_PROVIDER=composio # Tool provider selection
```
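A startup check that fails fast when a required variable is unset saves debugging time later. A minimal sketch, using the variable names from the list above:

```python
import os

REQUIRED_VARS = [
    "DEMIURG_BACKEND_URL",
    "DEMIURG_AGENT_TOKEN",
    "DEMIURG_AGENT_ID",
    "OPENAI_API_KEY",
]

def missing_env(env) -> list:
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# At startup:
missing = missing_env(os.environ)
if missing:
    print(f"Missing required environment variables: {', '.join(missing)}")
```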
## Advanced Features
### Message Queue System
The SDK includes automatic message queuing to prevent race conditions:
```python
# Messages are automatically queued per conversation
# Prevents issues when multiple messages arrive simultaneously
# No additional configuration needed - it just works!
```
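To see why per-conversation queuing matters, here is an illustrative sketch of the idea (not the SDK's internals): messages for the same conversation are serialized behind a lock, so two simultaneous arrivals never interleave their processing.

```python
import asyncio

class ConversationQueues:
    """Illustrative per-conversation serialization, not SDK code."""
    def __init__(self):
        self._locks = {}

    async def run(self, conversation_id: str, handler):
        lock = self._locks.setdefault(conversation_id, asyncio.Lock())
        async with lock:  # one message at a time per conversation
            return await handler()

order = []
queues = ConversationQueues()

async def handler(tag):
    order.append(f"start-{tag}")
    await asyncio.sleep(0.01)  # simulate LLM latency
    order.append(f"end-{tag}")

async def main():
    # Two messages arrive "simultaneously" for the same conversation.
    await asyncio.gather(
        queues.run("conv1", lambda: handler("a")),
        queues.run("conv1", lambda: handler("b")),
    )

asyncio.run(main())
```

Without the lock, the two handlers would interleave (`start-a, start-b, ...`); with it, each message completes before the next begins.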
### Multimodal Capabilities
```python
# Images: automatically analyzed with a vision model (GPT-4V)
# Audio messages: automatically transcribed with Whisper
# Text files: content extracted and provided to the LLM
```
### Production Deployment
```python
from fastapi import FastAPI

app = FastAPI()

# Health check endpoint
@app.get("/health")
async def health_check():
    return await agent.health_check()

# Queue status monitoring
@app.get("/queue-status")
async def queue_status():
    return await agent.get_queue_status()
```
## Architecture
The SDK follows a modular architecture:
- **Agent**: Core class that orchestrates everything
- **Providers**: LLM integrations (OpenAI, etc.)
- **ToolRegistry**: Centralized tool management system
- Model Provider Tools: LLM-specific tools (DALL-E, TTS, etc.)
- Managed Provider Tools: External services (Composio, etc.)
- Custom Tools: User-defined functions
- **Messaging**: Communication with Demiurg platform
- **Utils**: File handling, audio processing, etc.
## Best Practices
1. **Always use async/await** - The SDK is built for async operations
2. **Handle errors gracefully** - Use try/except blocks with specific exceptions
3. **Configure billing appropriately** - Choose who pays for API calls
4. **Set up Composio auth configs** - Store in composio-tools.txt
5. **Enable progress indicators** - Better UX for long operations
6. **Use appropriate models** - GPT-4o for complex tasks, GPT-3.5 for simple ones
## Advanced Usage
### Direct LLM Queries
Sometimes you need to make LLM calls without tools or conversation context:
```python
# Use the agent's LLM for analysis
analysis = await agent.query_llm(
    "Analyze this code for security issues: " + code,
    system_prompt="You are a security expert. Be thorough.",
    temperature=0.2
)

# Use a different model or provider
response = await agent.query_llm(
    prompt="Summarize this text",
    model="gpt-3.5-turbo",  # Use a faster model
    max_tokens=150
)
```
### Scheduled Agents
Create agents that can run tasks automatically on schedules:
```python
from demiurg import ScheduledAgent, OpenAIProvider, Composio
class DailyReportAgent(ScheduledAgent):
    def __init__(self):
        super().__init__(
            OpenAIProvider(),
            Composio("TWITTER", "GITHUB")
        )

        # Schedule daily report at 9 AM
        self.schedule_task(
            name="daily_report",
            schedule="0 9 * * *",  # Cron expression
            task_type="workflow",
            steps=[
                {
                    "type": "tool",
                    "tool": "GITHUB_LIST_ISSUES",
                    "arguments": {"state": "open"}
                },
                {
                    "type": "llm_query",
                    "prompt": "Summarize these GitHub issues: {{step_0_result}}"
                },
                {
                    "type": "tool",
                    "tool": "TWITTER_CREATE_TWEET",
                    "arguments": {"text": "{{step_1_llm}}"}
                }
            ]
        )

# The agent can chat AND run scheduled tasks
agent = DailyReportAgent()
agent.start_scheduler()
```
### Natural Language Scheduling
```python
# Schedule with natural language
agent.schedule_task(
    name="reminder",
    schedule="every day at 2:30 PM",
    task_type="llm_query",
    prompt="Generate a motivational quote",
    notify_channel=conversation_id
)

# Or use interval scheduling
agent.schedule_task(
    name="check_mentions",
    schedule={"type": "interval", "params": {"minutes": 30}},
    task_type="tool",
    tool_slug="TWITTER_GET_MENTIONS"
)
```
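To make the two schedule forms concrete, here is an illustrative sketch of how phrases like those above could map onto the interval and cron representations. This is not the SDK's parser; it only handles the two patterns shown:

```python
import re

def parse_schedule(text: str):
    """Toy parser: "every N minutes" -> interval dict, "every day at H[:MM] am/pm" -> cron."""
    t = text.strip().lower()
    m = re.fullmatch(r"every (\d+) minutes", t)
    if m:
        return {"type": "interval", "params": {"minutes": int(m.group(1))}}
    m = re.fullmatch(r"every day at (\d{1,2})(?::(\d{2}))?\s*(am|pm)?", t)
    if m:
        hour = int(m.group(1)) % 12 + (12 if m.group(3) == "pm" else 0)
        minute = int(m.group(2) or 0)
        return {"type": "cron", "expr": f"{minute} {hour} * * *"}
    raise ValueError(f"unrecognized schedule: {text}")
```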
### Complex Workflows
Build multi-step workflows with conditional logic:
```python
agent.schedule_task(
    name="content_pipeline",
    schedule="0 10 * * MON",  # Every Monday at 10 AM
    task_type="workflow",
    steps=[
        # Generate content ideas
        {
            "type": "llm_query",
            "prompt": "Generate 5 blog post ideas about AI"
        },
        # Create image for best idea
        {
            "type": "openai_tool",
            "tool_name": "generate_image",
            "arguments": {
                "prompt": "Illustration for: {{step_0_llm}}"
            }
        },
        # Conditional posting
        {
            "type": "condition",
            "condition": {
                "field": "step_1_openai.success",
                "operator": "==",
                "value": True
            },
            "true_steps": [{
                "type": "tool",
                "tool": "TWITTER_CREATE_TWEET_WITH_IMAGE",
                "arguments": {
                    "text": "New blog idea: {{step_0_llm}}",
                    "image_path": "{{step_1_openai.file_path}}"
                }
            }]
        }
    ]
)
```
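The `{{...}}` placeholders pass results between steps, with dotted paths reaching into nested results. As an illustration of that substitution pattern (a sketch, not the SDK's template engine):

```python
import re

def lookup(results: dict, path: str):
    """Resolve a dotted path like "step_1_openai.file_path" into nested dicts."""
    value = results
    for part in path.split("."):
        value = value[part]
    return value

def render(template: str, results: dict) -> str:
    """Replace every {{path}} placeholder with the matching step result."""
    return re.sub(
        r"\{\{([\w.]+)\}\}",
        lambda m: str(lookup(results, m.group(1))),
        template,
    )
```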
## Migration Guide
### From v0.1.17 to v0.1.18
```python
# Custom tools registration changed
# Old way:
self.register_tool(tool_def, handler)
# New way:
self.register_custom_tool(tool_def, handler)
```
### From v0.1.10 to v0.1.11
```python
# Old way
from demiurg import Agent, Config
config = Config(name="My Agent")
agent = Agent(config)
# New way (backward compatible)
from demiurg import Agent, OpenAIProvider
agent = Agent(OpenAIProvider())
```
## Support
- Documentation: https://docs.demiurg.ai
- GitHub Issues: https://github.com/demiurg-ai/demiurg-sdk/issues
- Email: support@demiurg.ai
## License
Copyright © 2024 Demiurg AI. All rights reserved.
This is proprietary software. See LICENSE file for details.