| Field | Value |
| --- | --- |
| Name | calute |
| Version | 0.0.23 |
| Summary | Agents for intelligence and coordination |
| Home page | None |
| Author | None |
| Maintainer | None |
| License | Apache-2.0 |
| Requires Python | <3.14,>=3.10 |
| Keywords | ai, agents, llm, calute, orchestration |
| Docs URL | None |
| Upload time | 2025-08-23 19:41:01 |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# Calute 🤖
**Calute** is a powerful, production-ready framework for building and orchestrating AI agents with advanced function calling, memory systems, and multi-agent collaboration capabilities. Designed for both researchers and developers, Calute provides enterprise-grade features for creating sophisticated AI systems.
## 🚀 Key Features
### Core Capabilities
- **🎭 Multi-Agent Orchestration**: Seamlessly manage and coordinate multiple specialized agents with dynamic switching based on context, capabilities, or custom triggers
- **⚡ Enhanced Function Execution**: Advanced function calling with timeout management, retry policies, parallel/sequential execution strategies, and comprehensive error handling
- **🧠 Advanced Memory Systems**: Sophisticated memory management with multiple types (short-term, long-term, episodic, semantic, working, procedural), vector search, caching, and persistence
- **🔄 Workflow Engine**: Define and execute complex multi-step workflows with conditional logic and state management
- **🌊 Streaming Support**: Real-time streaming responses with function execution tracking
- **🔌 LLM Flexibility**: Unified interface supporting OpenAI, Gemini, Anthropic, and custom models
### Enhanced Features
- **Memory Store with Indexing**: Fast retrieval with tag-based indexing and importance scoring
- **Function Registry**: Centralized function management with metrics and validation
- **Error Recovery**: Robust error handling with customizable retry policies and fallback strategies (a minimal timeout-and-retry sketch follows this list)
- **Performance Monitoring**: Built-in metrics collection for execution times, success rates, and resource usage
- **Context Management**: Sophisticated context passing between agents and functions
- **Security Features**: Function validation, safe execution environments, and access control
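To make the timeout and retry behavior concrete, here is a minimal sketch using only the standard library's `asyncio`. It illustrates the idea rather than calute's own retry/executor configuration, which is shown in the Quick Start below.

```python
# Illustrative only: per-attempt timeout plus exponential-backoff retry in
# plain asyncio. This is not calute's API; the executor in Quick Start
# exposes equivalent behavior via default_timeout and retry policies.
import asyncio


async def call_with_retry(fn, *args, retries=3, timeout=10.0, backoff=2.0):
    """Await fn(*args) with a per-attempt timeout, retrying with backoff."""
    delay = 1.0
    for attempt in range(1, retries + 1):
        try:
            return await asyncio.wait_for(fn(*args), timeout=timeout)
        except Exception:
            if attempt == retries:
                raise
            await asyncio.sleep(delay)
            delay *= backoff
```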
## 📦 Installation
### Core Installation (Lightweight)
```bash
# Minimal installation with only essential dependencies
pip install calute
```
### Feature-Specific Installations
```bash
# For web search capabilities
pip install "calute[search]"

# For image/vision processing
pip install "calute[vision]"

# For additional LLM providers (Gemini, Anthropic, Cohere)
pip install "calute[providers]"

# For database support (PostgreSQL, MongoDB, etc.)
pip install "calute[database]"

# For Redis caching/queuing
pip install "calute[redis]"

# For monitoring and observability
pip install "calute[monitoring]"

# For vector search and embeddings
pip install "calute[vectors]"
```
### Preset Configurations
```bash
# Research-focused installation (search, vision, vectors)
pip install "calute[research]"

# Enterprise installation (database, redis, monitoring, providers)
pip install "calute[enterprise]"

# Full installation with all features
pip install "calute[full]"
```
### Development Installation
```bash
git clone https://github.com/erfanzar/calute.git
cd calute
pip install -e ".[dev]"
```
## 🎯 Quick Start
### Basic Agent Setup
```python
import openai
from calute import Agent, Calute

# Initialize your LLM client
client = openai.OpenAI(api_key="your-key")

# Create an agent with functions
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

def analyze_data(data: str) -> dict:
    """Analyze provided data."""
    return {"summary": data, "insights": ["insight1", "insight2"]}

agent = Agent(
    id="research_agent",
    name="Research Assistant",
    model="gpt-4",
    instructions="You are a helpful research assistant.",
    functions=[search_web, analyze_data],
    temperature=0.7
)

# Initialize Calute and register agent
calute = Calute(client)
calute.register_agent(agent)

# Use the agent
response = await calute.create_response(
    prompt="Find information about quantum computing",
    agent_id="research_agent"
)
```
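`create_response` is a coroutine, so outside an existing event loop (for example in a plain script) wrap the call with `asyncio.run`. The names below match the example above; printing the result assumes the returned response object is printable.

```python
import asyncio

async def main():
    # Same call as above, just wrapped so it can run from a regular script.
    response = await calute.create_response(
        prompt="Find information about quantum computing",
        agent_id="research_agent"
    )
    print(response)  # assumes the response object has a useful repr

asyncio.run(main())
```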
### Advanced Memory-Enhanced Agent
```python
from calute.memory import MemoryStore, MemoryType

# Create memory store with persistence
memory = MemoryStore(
    max_short_term=100,
    max_long_term=1000,
    enable_persistence=True,
    persistence_path="./agent_memory"
)

# Add memories
memory.add_memory(
    content="User prefers technical explanations",
    memory_type=MemoryType.LONG_TERM,
    agent_id="assistant",
    tags=["preference", "user_profile"],
    importance_score=0.9
)

# Attach to Calute
calute.memory = memory
```
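Short-lived context goes through the same `add_memory` call with a different `MemoryType`. A quick sketch, reusing only calls that appear elsewhere in this README (`get_statistics` is used again under Performance & Monitoring):

```python
# Record transient, session-scoped context in short-term memory
memory.add_memory(
    content="Current task: summarize recent quantum computing papers",
    memory_type=MemoryType.SHORT_TERM,
    agent_id="assistant",
    tags=["task", "session"],
    importance_score=0.4
)

# Quick sanity check on what the store now holds
print(memory.get_statistics()["total_memories"])
```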
### Multi-Agent Collaboration
```python
from calute.executors import EnhancedAgentOrchestrator, EnhancedFunctionExecutor

# Create specialized agents
research_agent = Agent(id="researcher", name="Researcher", ...)
analyst_agent = Agent(id="analyst", name="Data Analyst", ...)
writer_agent = Agent(id="writer", name="Content Writer", ...)

# Set up orchestrator
orchestrator = EnhancedAgentOrchestrator(enable_metrics=True)
await orchestrator.register_agent(research_agent)
await orchestrator.register_agent(analyst_agent)
await orchestrator.register_agent(writer_agent)

# Enhanced executor with parallel execution
executor = EnhancedFunctionExecutor(
    orchestrator=orchestrator,
    default_timeout=30.0,
    max_concurrent_executions=5
)

# Execute functions across agents
from calute.types import RequestFunctionCall, FunctionCallStrategy

calls = [
    RequestFunctionCall(name="research_topic", arguments={"topic": "AI"}, id="1"),
    RequestFunctionCall(name="analyze_findings", arguments={"data": "..."}, id="2"),
    RequestFunctionCall(name="write_report", arguments={"content": "..."}, id="3")
]

results = await executor.execute_function_calls(
    calls=calls,
    strategy=FunctionCallStrategy.PARALLEL
)
```
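When later calls depend on earlier results, the same executor can run them in order. The sequential strategy name below is an assumption based on the "parallel/sequential execution strategies" described above; check `calute.types.FunctionCallStrategy` for the exact member.

```python
# Run the same calls one at a time, preserving order between dependent steps.
# FunctionCallStrategy.SEQUENTIAL is assumed here; verify the member name
# in calute.types before relying on it.
ordered_results = await executor.execute_function_calls(
    calls=calls,
    strategy=FunctionCallStrategy.SEQUENTIAL
)
```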
## 📚 Example Scenarios
The `examples/` directory contains comprehensive scenarios demonstrating Calute's capabilities:
1. **Conversational Assistant** (`scenario_1_conversational_assistant.py`)
   - Memory-enhanced chatbot with user preference learning
   - Sentiment analysis and context retention

2. **Code Analyzer** (`scenario_2_code_analyzer.py`)
   - Python code analysis with security scanning
   - Refactoring suggestions and test generation
   - Parallel analysis execution

3. **Multi-Agent Collaboration** (`scenario_3_multi_agent_collaboration.py`)
   - Coordinated task execution across specialized agents
   - Dynamic agent switching based on context
   - Shared memory and progress tracking

4. **Streaming Research Assistant** (`scenario_4_streaming_research_assistant.py`)
   - Real-time streaming responses
   - Knowledge graph building
   - Research synthesis and progress tracking
## 🏗️ Architecture
```mermaid
graph TB
    subgraph "Calute Core"
        A[Client Interface] --> B[Agent Registry]
        B --> C[Orchestrator]
        C --> D[Function Executor]
        D --> E[Memory Store]
    end

    subgraph "Enhanced Features"
        F[Retry Policy] --> D
        G[Timeout Manager] --> D
        H[Metrics Collector] --> D
        I[Vector Search] --> E
        J[Cache Layer] --> E
        K[Persistence] --> E
    end

    subgraph "Agents"
        L[Agent 1] --> C
        M[Agent 2] --> C
        N[Agent N] --> C
    end
```
## 🛠️ Core Components
### Memory System
- **MemoryStore**: Advanced memory management with indexing and caching
- **MemoryType**: SHORT_TERM, LONG_TERM, EPISODIC, SEMANTIC, WORKING, PROCEDURAL
- **Features**: Vector search, similarity matching, consolidation, pattern analysis
### Executors
- **EnhancedAgentOrchestrator**: Multi-agent coordination with metrics
- **EnhancedFunctionExecutor**: Parallel/sequential execution with timeout and retry
- **FunctionRegistry**: Centralized function management and validation
### Configuration
- **CaluteConfig**: Centralized configuration management (see the illustrative sketch after this list)
- **Environment-based settings**: Development, staging, production profiles
- **Logging configuration**: Structured logging with customizable levels
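As a purely illustrative sketch (this is not calute's `CaluteConfig` API; see the [Configuration Guide](docs/configuration.md) for the real fields), an environment-keyed configuration typically looks like this:

```python
# Hypothetical example of environment-based settings; field and variable
# names are illustrative, not calute's actual configuration schema.
import os
from dataclasses import dataclass


@dataclass
class ExampleConfig:
    environment: str = "development"  # development | staging | production
    log_level: str = "INFO"
    default_timeout: float = 30.0


config = ExampleConfig(
    environment=os.getenv("APP_ENV", "development"),
    log_level=os.getenv("LOG_LEVEL", "INFO"),
)
```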
## 📊 Performance & Monitoring
```python
# Access execution metrics
metrics = orchestrator.function_registry.get_metrics("function_name")
print(f"Total calls: {metrics.total_calls}")
print(f"Success rate: {metrics.successful_calls / metrics.total_calls:.0%}")
print(f"Avg duration: {metrics.average_duration:.2f}s")

# Memory statistics
stats = memory.get_statistics()
print(f"Cache hit rate: {stats['cache_hit_rate']:.1%}")
print(f"Total memories: {stats['total_memories']}")
```
## 🔒 Security & Best Practices
- Function validation before execution
- Timeout protection against hanging operations
- Secure memory persistence with encryption support
- Rate limiting and resource management
- Comprehensive error handling and logging
## 📖 Documentation
- [API Reference](docs/api.md)
- [Configuration Guide](docs/configuration.md)
- [Memory System](docs/memory.md)
- [Multi-Agent Patterns](docs/patterns.md)
- [Performance Tuning](docs/performance.md)
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Install with dev dependencies
poetry install --with dev

# Run tests
pytest

# Run linting
ruff check .

# Format code
black .
```
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
Built with ❤️ by [erfanzar](https://github.com/erfanzar) and contributors.
## 📬 Contact
- GitHub: [@erfanzar](https://github.com/erfanzar)
- Issues: [GitHub Issues](https://github.com/erfanzar/calute/issues)
---
**Note**: This is an active research project. APIs may change between versions. Please pin your dependencies for production use.
Raw data
{
"_id": null,
"home_page": null,
"name": "calute",
"maintainer": null,
"docs_url": null,
"requires_python": "<3.14,>=3.10",
"maintainer_email": null,
"keywords": "AI, Agents, LLM, calute, orchestration",
"author": null,
"author_email": "Erfan Zare Chavoshi <Erfanzare810@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/ef/63/3531c2321f31c3d27a8e0dc33cc47683d34b130337240675c97a218c071d/calute-0.0.23.tar.gz",
"platform": null,
"description": "# Calute \ud83e\udd16\n\n**Calute** is a powerful, production-ready framework for building and orchestrating AI agents with advanced function calling, memory systems, and multi-agent collaboration capabilities. Designed for both researchers and developers, Calute provides enterprise-grade features for creating sophisticated AI systems.\n\n## \ud83d\ude80 Key Features\n\n### Core Capabilities\n\n- **\ud83c\udfad Multi-Agent Orchestration**: Seamlessly manage and coordinate multiple specialized agents with dynamic switching based on context, capabilities, or custom triggers\n- **\u26a1 Enhanced Function Execution**: Advanced function calling with timeout management, retry policies, parallel/sequential execution strategies, and comprehensive error handling\n- **\ud83e\udde0 Advanced Memory Systems**: Sophisticated memory management with multiple types (short-term, long-term, episodic, semantic, working, procedural), vector search, caching, and persistence\n- **\ud83d\udd04 Workflow Engine**: Define and execute complex multi-step workflows with conditional logic and state management\n- **\ud83c\udf0a Streaming Support**: Real-time streaming responses with function execution tracking\n- **\ud83d\udd0c LLM Flexibility**: Unified interface supporting OpenAI, Gemini, Anthropic, and custom models\n\n### Enhanced Features\n\n- **Memory Store with Indexing**: Fast retrieval with tag-based indexing and importance scoring\n- **Function Registry**: Centralized function management with metrics and validation\n- **Error Recovery**: Robust error handling with customizable retry policies and fallback strategies\n- **Performance Monitoring**: Built-in metrics collection for execution times, success rates, and resource usage\n- **Context Management**: Sophisticated context passing between agents and functions\n- **Security Features**: Function validation, safe execution environments, and access control\n\n## \ud83d\udce6 Installation\n\n### Core Installation (Lightweight)\n\n```bash\n# Minimal installation with only essential dependencies\npip install calute\n```\n\n### Feature-Specific Installations\n\n```bash\n# For web search capabilities\npip install \"calute[search]\"\n\n# For image/vision processing\npip install \"calute[vision]\"\n\n# For additional LLM providers (Gemini, Anthropic, Cohere)\npip install \"calute[providers]\"\n\n# For database support (PostgreSQL, MongoDB, etc.)\npip install \"calute[database]\"\n\n# For Redis caching/queuing\npip install \"calute[redis]\"\n\n# For monitoring and observability\npip install \"calute[monitoring]\"\n\n# For vector search and embeddings\npip install \"calute[vectors]\"\n```\n\n### Preset Configurations\n\n```bash\n# Research-focused installation (search, vision, vectors)\npip install \"calute[research]\"\n\n# Enterprise installation (database, redis, monitoring, providers)\npip install \"calute[enterprise]\"\n\n# Full installation with all features\npip install \"calute[full]\"\n```\n\n### Development Installation\n\n```bash\ngit clone https://github.com/erfanzar/calute.git\ncd calute\npip install -e \".[dev]\"\n```\n\n## \ud83c\udfaf Quick Start\n\n### Basic Agent Setup\n\n```python\nimport openai\nfrom calute import Agent, Calute\n\n# Initialize your LLM client\nclient = openai.OpenAI(api_key=\"your-key\")\n\n# Create an agent with functions\ndef search_web(query: str) -> str:\n \"\"\"Search the web for information.\"\"\"\n return f\"Results for: {query}\"\n\ndef analyze_data(data: str) -> dict:\n \"\"\"Analyze provided data.\"\"\"\n 
return {\"summary\": data, \"insights\": [\"insight1\", \"insight2\"]}\n\nagent = Agent(\n id=\"research_agent\",\n name=\"Research Assistant\",\n model=\"gpt-4\",\n instructions=\"You are a helpful research assistant.\",\n functions=[search_web, analyze_data],\n temperature=0.7\n)\n\n# Initialize Calute and register agent\ncalute = Calute(client)\ncalute.register_agent(agent)\n\n# Use the agent\nresponse = await calute.create_response(\n prompt=\"Find information about quantum computing\",\n agent_id=\"research_agent\"\n)\n```\n\n### Advanced Memory-Enhanced Agent\n\n```python\nfrom calute.memory import MemoryStore, MemoryType\n\n# Create memory store with persistence\nmemory = MemoryStore(\n max_short_term=100,\n max_long_term=1000,\n enable_persistence=True,\n persistence_path=\"./agent_memory\"\n)\n\n# Add memories\nmemory.add_memory(\n content=\"User prefers technical explanations\",\n memory_type=MemoryType.LONG_TERM,\n agent_id=\"assistant\",\n tags=[\"preference\", \"user_profile\"],\n importance_score=0.9\n)\n\n# Attach to Calute\ncalute.memory = memory\n```\n\n### Multi-Agent Collaboration\n\n```python\nfrom calute.executors import EnhancedAgentOrchestrator, EnhancedFunctionExecutor\n\n# Create specialized agents\nresearch_agent = Agent(id=\"researcher\", name=\"Researcher\", ...)\nanalyst_agent = Agent(id=\"analyst\", name=\"Data Analyst\", ...)\nwriter_agent = Agent(id=\"writer\", name=\"Content Writer\", ...)\n\n# Set up orchestrator\norchestrator = EnhancedAgentOrchestrator(enable_metrics=True)\nawait orchestrator.register_agent(research_agent)\nawait orchestrator.register_agent(analyst_agent)\nawait orchestrator.register_agent(writer_agent)\n\n# Enhanced executor with parallel execution\nexecutor = EnhancedFunctionExecutor(\n orchestrator=orchestrator,\n default_timeout=30.0,\n max_concurrent_executions=5\n)\n\n# Execute functions across agents\nfrom calute.types import RequestFunctionCall, FunctionCallStrategy\n\ncalls = [\n RequestFunctionCall(name=\"research_topic\", arguments={\"topic\": \"AI\"}, id=\"1\"),\n RequestFunctionCall(name=\"analyze_findings\", arguments={\"data\": \"...\"}, id=\"2\"),\n RequestFunctionCall(name=\"write_report\", arguments={\"content\": \"...\"}, id=\"3\")\n]\n\nresults = await executor.execute_function_calls(\n calls=calls,\n strategy=FunctionCallStrategy.PARALLEL\n)\n```\n\n## \ud83d\udcda Example Scenarios\n\nThe `examples/` directory contains comprehensive scenarios demonstrating Calute's capabilities:\n\n1. **Conversational Assistant** (`scenario_1_conversational_assistant.py`)\n - Memory-enhanced chatbot with user preference learning\n - Sentiment analysis and context retention\n\n2. **Code Analyzer** (`scenario_2_code_analyzer.py`)\n - Python code analysis with security scanning\n - Refactoring suggestions and test generation\n - Parallel analysis execution\n\n3. **Multi-Agent Collaboration** (`scenario_3_multi_agent_collaboration.py`)\n - Coordinated task execution across specialized agents\n - Dynamic agent switching based on context\n - Shared memory and progress tracking\n\n4. 
**Streaming Research Assistant** (`scenario_4_streaming_research_assistant.py`)\n - Real-time streaming responses\n - Knowledge graph building\n - Research synthesis and progress tracking\n\n## \ud83c\udfd7\ufe0f Architecture\n\n```mermaid\ngraph TB\n subgraph \"Calute Core\"\n A[Client Interface] --> B[Agent Registry]\n B --> C[Orchestrator]\n C --> D[Function Executor]\n D --> E[Memory Store]\n end\n \n subgraph \"Enhanced Features\"\n F[Retry Policy] --> D\n G[Timeout Manager] --> D\n H[Metrics Collector] --> D\n I[Vector Search] --> E\n J[Cache Layer] --> E\n K[Persistence] --> E\n end\n \n subgraph \"Agents\"\n L[Agent 1] --> C\n M[Agent 2] --> C\n N[Agent N] --> C\n end\n```\n\n## \ud83d\udee0\ufe0f Core Components\n\n### Memory System\n\n- **MemoryStore**: Advanced memory management with indexing and caching\n- **MemoryType**: SHORT_TERM, LONG_TERM, EPISODIC, SEMANTIC, WORKING, PROCEDURAL\n- **Features**: Vector search, similarity matching, consolidation, pattern analysis\n\n### Executors\n\n- **EnhancedAgentOrchestrator**: Multi-agent coordination with metrics\n- **EnhancedFunctionExecutor**: Parallel/sequential execution with timeout and retry\n- **FunctionRegistry**: Centralized function management and validation\n\n### Configuration\n\n- **CaluteConfig**: Centralized configuration management\n- **Environment-based settings**: Development, staging, production profiles\n- **Logging configuration**: Structured logging with customizable levels\n\n## \ud83d\udcca Performance & Monitoring\n\n```python\n# Access execution metrics\nmetrics = orchestrator.function_registry.get_metrics(\"function_name\")\nprint(f\"Total calls: {metrics.total_calls}\")\nprint(f\"Success rate: {metrics.successful_calls / metrics.total_calls:.0%}\")\nprint(f\"Avg duration: {metrics.average_duration:.2f}s\")\n\n# Memory statistics\nstats = memory.get_statistics()\nprint(f\"Cache hit rate: {stats['cache_hit_rate']:.1%}\")\nprint(f\"Total memories: {stats['total_memories']}\")\n```\n\n## \ud83d\udd12 Security & Best Practices\n\n- Function validation before execution\n- Timeout protection against hanging operations\n- Secure memory persistence with encryption support\n- Rate limiting and resource management\n- Comprehensive error handling and logging\n\n## \ud83d\udcd6 Documentation\n\n- [API Reference](docs/api.md)\n- [Configuration Guide](docs/configuration.md)\n- [Memory System](docs/memory.md)\n- [Multi-Agent Patterns](docs/patterns.md)\n- [Performance Tuning](docs/performance.md)\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.\n\n### Development Setup\n\n```bash\n# Install with dev dependencies\npoetry install --with dev\n\n# Run tests\npytest\n\n# Run linting\nruff check .\n\n# Format code\nblack .\n```\n\n## \ud83d\udcc4 License\n\nThis project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.\n\n## \ud83d\ude4f Acknowledgments\n\nBuilt with \u2764\ufe0f by [erfanzar](https://github.com/erfanzar) and contributors.\n\n## \ud83d\udcec Contact\n\n- GitHub: [@erfanzar](https://github.com/erfanzar)\n- Issues: [GitHub Issues](https://github.com/erfanzar/calute/issues)\n\n---\n\n**Note**: This is an active research project. APIs may change between versions. Please pin your dependencies for production use.\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "Agents for intelligence and coordination",
"version": "0.0.23",
"project_urls": {
"Documentation": "https://erfanzar.github.io/Calute",
"Homepage": "https://github.com/erfanzar/Calute",
"Repository": "https://github.com/erfanzar/Calute"
},
"split_keywords": [
"ai",
" agents",
" llm",
" calute",
" orchestration"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "31ab4c419ad2873001c5c829a30498f2d1e90274a0de22b5fe783ce02c45bb28",
"md5": "4b86bd6b5473ab39170f657dcd994a9a",
"sha256": "4107220c76cd0b007d9c7bbd16e1589213ecd634c60ce4ac4bccdf59f6509bc3"
},
"downloads": -1,
"filename": "calute-0.0.23-py3-none-any.whl",
"has_sig": false,
"md5_digest": "4b86bd6b5473ab39170f657dcd994a9a",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.14,>=3.10",
"size": 169117,
"upload_time": "2025-08-23T19:40:59",
"upload_time_iso_8601": "2025-08-23T19:40:59.664613Z",
"url": "https://files.pythonhosted.org/packages/31/ab/4c419ad2873001c5c829a30498f2d1e90274a0de22b5fe783ce02c45bb28/calute-0.0.23-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "ef633531c2321f31c3d27a8e0dc33cc47683d34b130337240675c97a218c071d",
"md5": "9f80ac7c0d0cf450aa948a154feed3bf",
"sha256": "7e4a6c31786bfe8b0d75613cee0ac963c3e93167668068534eed797f330c6617"
},
"downloads": -1,
"filename": "calute-0.0.23.tar.gz",
"has_sig": false,
"md5_digest": "9f80ac7c0d0cf450aa948a154feed3bf",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.14,>=3.10",
"size": 396400,
"upload_time": "2025-08-23T19:41:01",
"upload_time_iso_8601": "2025-08-23T19:41:01.495935Z",
"url": "https://files.pythonhosted.org/packages/ef/63/3531c2321f31c3d27a8e0dc33cc47683d34b130337240675c97a218c071d/calute-0.0.23.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-23 19:41:01",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "erfanzar",
"github_project": "Calute",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "calute"
}