# 🧅 ConnectOnion
<div align="center">
[Website](https://connectonion.com) • [MIT License](https://opensource.org/licenses/MIT) • [Python 3.9+](https://python.org) • [Discord](https://discord.gg/4xfD9k8AUF) • [Documentation](http://docs.connectonion.com)
**A simple, elegant open-source framework for production-ready AI agents**
[📚 Documentation](http://docs.connectonion.com) • [💬 Discord](https://discord.gg/4xfD9k8AUF) • [⭐ Star Us](https://github.com/openonion/connectonion)
</div>
---
> ## 🌟 Philosophy: "Keep simple things simple, make complicated things possible"
>
> This is the core principle that drives every design decision in ConnectOnion.
## 🎯 Living Our Philosophy
```python
# Simple thing (2 lines) - Just works!
from connectonion import Agent
reply = Agent("assistant").input("Hello!")

# Complicated thing (still possible) - Production ready!
agent = Agent("production",
              model="gpt-5",                       # Latest models
              tools=[search, analyze, execute],    # Your functions as tools
              system_prompt=company_prompt,        # Custom behavior
              max_iterations=10,                   # Safety controls
              trust="prompt")                      # Multi-agent ready
```
## ✨ What Makes ConnectOnion Special
- **🎯 Simple API**: Just one `Agent` class and your functions as tools
- **🚀 Production Ready**: Battle-tested with GPT-5, Gemini 2.5, Claude Opus 4.1
- **🌍 Open Source**: MIT licensed, community-driven development
- **⚡ No Boilerplate**: Start building in 2 lines, not 200
- **🔧 Extensible**: Scale from prototypes to production systems
## 🚀 Quick Start
### Installation
```bash
pip install connectonion
```
### Quickest Start - Use the CLI
```bash
# Create a new agent project with one command
co create my-agent
# Navigate and run
cd my-agent
python agent.py
```
*The CLI guides you through API key setup automatically. No manual `.env` editing needed!*
### Manual Usage
```python
import os
from connectonion import Agent
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Demo only: eval runs arbitrary code, use a safe parser in production

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default - agent will try up to 10 tool calls per task
)
# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result) # Agent will use the calculate function
result = agent.input("Search for Python tutorials")
print(result) # Agent will use the search function
# 4. View behavior history (automatic!)
print(agent.history.summary())
```
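The `eval` in `calculate` keeps the demo short, but it executes arbitrary Python. If you need a calculator tool in production, a safer sketch (plain Python, independent of ConnectOnion) is to walk the expression's AST and allow only arithmetic:

```python
import ast
import operator

# Whitelisted operators for a restricted arithmetic evaluator
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return float(_eval(ast.parse(expression, mode="eval").body))
```

`safe_calculate` has the same signature as `calculate`, so it can be passed to `tools=[...]` in exactly the same way.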
## 🔧 Core Concepts
### Agent
The main class that orchestrates LLM calls and tool usage. Each agent:
- Has a unique name for tracking purposes
- Can be given a custom personality via `system_prompt`
- Automatically converts functions to tools
- Records all behavior to JSON files
### Function-Based Tools
**NEW**: Just write regular Python functions! ConnectOnion automatically converts them to tools:
```python
def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"
# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])
```
Key features:
- **Automatic Schema Generation**: Type hints become OpenAI function schemas (see the sketch after this list)
- **Docstring Integration**: First line becomes tool description
- **Parameter Handling**: Supports required and optional parameters
- **Type Conversion**: Handles different return types automatically
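To make the first two points concrete, here is roughly the kind of OpenAI function schema a tool like `my_tool` maps to. The exact structure ConnectOnion generates internally may differ; treat this as an illustration of how the type hints and docstring carry over, not as the framework's internals:

```python
# Illustrative only: the shape of an OpenAI-style function schema derived from my_tool.
my_tool_schema = {
    "name": "my_tool",
    "description": "This docstring becomes the tool description.",
    "parameters": {
        "type": "object",
        "properties": {
            "param": {"type": "string"},
            "optional_param": {"type": "integer"},
        },
        "required": ["param"],  # optional_param has a default, so it is not required
    },
}
```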
### System Prompts
Define your agent's personality and behavior with flexible input options:
```python
# 1. Direct string prompt
agent = Agent(
    name="helpful_tutor",
    system_prompt="You are an enthusiastic teacher who loves to educate.",
    tools=[my_tools]
)

# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
    name="support_agent",
    system_prompt="prompts/customer_support.md"  # Automatically loads file content
)

# 3. Using Path object
from pathlib import Path
agent = Agent(
    name="coder",
    system_prompt=Path("prompts") / "senior_developer.txt"
)
# 4. None for default prompt
agent = Agent("basic_agent") # Uses default: "You are a helpful assistant..."
```
Example prompt file (`prompts/customer_support.md`):
```markdown
# Customer Support Agent
You are a senior customer support specialist with expertise in:
- Empathetic communication
- Problem-solving
- Technical troubleshooting
## Guidelines
- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions
```
### Logging
Automatic logging of all agent activities including:
- User inputs and agent responses
- LLM calls with timing
- Tool executions with parameters and results
- Default storage in `.co/logs/{name}.log` (human-readable format)
## 🎯 Example Tools
You can still use the traditional Tool class approach, but the new functional approach is much simpler:
### Traditional Tool Classes (Still Supported)
```python
from connectonion.tools import Calculator, CurrentTime, ReadFile
agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])
```
### New Function-Based Approach (Recommended)
```python
def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Demo only: eval runs arbitrary code, use a safe parser in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()
# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])
```
The function-based approach is simpler, more Pythonic, and easier to test!
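Because tools are ordinary functions, you can unit-test them without creating an agent or calling an LLM at all. A minimal sketch with pytest (assuming the functions above live in a module named `tools.py`; adjust the import to your layout):

```python
# test_tools.py - plain pytest, no agent or API key required
from tools import get_time, read_file  # hypothetical module holding the tool functions above

def test_get_time_year_format():
    assert len(get_time("%Y")) == 4  # e.g. "2025"

def test_read_file(tmp_path):
    path = tmp_path / "hello.txt"
    path.write_text("hello")
    assert read_file(str(path)) == "hello"
```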
## 🎨 CLI Templates
ConnectOnion CLI provides templates to get you started quickly:
```bash
# Create a minimal agent (default)
co create my-agent
# Create with specific template
co create my-playwright-bot --template playwright
# Initialize in existing directory
co init # Adds .co folder only
co init --template playwright # Adds full template
```
**Available Templates:**
- `minimal` (default) - Simple agent starter
- `playwright` - Web automation with browser tools
Each template includes:
- Pre-configured agent ready to run
- Automatic API key setup
- Embedded ConnectOnion documentation
- Git-ready `.gitignore`
Learn more in the [CLI Documentation](docs/cli.md) and [Templates Guide](docs/templates.md).
## 🔨 Creating Custom Tools
The simplest way is to use functions (recommended):
```python
def weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22°C"
# That's it! Use it directly
agent = Agent(name="weather_agent", tools=[weather])
```
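If you want the stub above to call a real service, one possible implementation uses the public wttr.in endpoint via `requests` (shown only as an example; substitute whichever weather API you actually use):

```python
import requests

def weather(city: str) -> str:
    """Get current weather for a city (example using the public wttr.in service)."""
    try:
        # format=3 asks wttr.in for a one-line summary, e.g. "London: ⛅️ +11°C"
        resp = requests.get(f"https://wttr.in/{city}", params={"format": "3"}, timeout=5)
        resp.raise_for_status()
        return resp.text.strip()
    except requests.RequestException as exc:
        # Return an error string so the agent can report the failure instead of crashing
        return f"Could not fetch weather for {city}: {exc}"
```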
Or use the Tool class for more control:
```python
from connectonion.tools import Tool
class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )

    def run(self, city: str) -> str:
        return f"Weather in {city}: Sunny, 22°C"

    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }

agent = Agent(name="weather_agent", tools=[WeatherTool()])
```
## 📁 Project Structure
```
connectonion/
├── connectonion/
│   ├── __init__.py          # Main exports
│   ├── agent.py             # Agent class
│   ├── tools.py             # Tool interface and built-ins
│   ├── llm.py               # LLM interface and OpenAI implementation
│   ├── console.py           # Terminal output and logging
│   └── cli/                 # CLI module
│       ├── main.py          # CLI commands
│       ├── docs.md          # Embedded documentation
│       └── templates/       # Agent templates
│           ├── basic_agent.py
│           ├── chat_agent.py
│           ├── data_agent.py
│           └── *.md         # Prompt templates
├── docs/                    # Documentation
│   ├── getting-started.md
│   ├── cli.md
│   ├── templates.md
│   └── ...
├── examples/
│   └── basic_example.py
├── tests/
│   └── test_agent.py
└── requirements.txt
```
## 🧪 Running Tests
```bash
python -m pytest tests/
```
Or run individual test files:
```bash
python -m unittest tests.test_agent
```
## 📊 Automatic Logging
All agent activities are automatically logged to:
```
.co/logs/{agent_name}.log # Default location
```
Each log entry includes:
- Timestamp
- User input
- LLM calls with timing
- Tool executions with parameters and results
- Final responses
Control logging behavior:
```python
# Default: logs to .co/logs/assistant.log
agent = Agent("assistant")
# Log to current directory
agent = Agent("assistant", log=True) # β assistant.log
# Disable logging
agent = Agent("assistant", log=False)
# Custom log file
agent = Agent("assistant", log="my_logs/custom.log")
```
## 🔑 Configuration
### OpenAI API Key
Set your API key via environment variable:
```bash
export OPENAI_API_KEY="your-api-key-here"
```
Or pass directly to agent:
```python
agent = Agent(name="test", api_key="your-api-key-here")
```
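If you keep the key in a `.env` file (the layout the CLI sets up), a common framework-agnostic way to load it before constructing an agent is `python-dotenv`; this is standard Python tooling, not a ConnectOnion-specific API:

```python
from dotenv import load_dotenv  # pip install python-dotenv
from connectonion import Agent

load_dotenv()                   # copies OPENAI_API_KEY from .env into the environment
agent = Agent(name="test")      # the agent reads the key from the environment
```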
### Model Selection
```python
agent = Agent(name="test", model="gpt-5") # Default: gpt-5-mini
```
### Iteration Control
Control how many tool calling iterations an agent can perform:
```python
# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])
# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher",
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc",
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze all project files and generate comprehensive report",
    max_iterations=50  # Override for this specific task
)
```
When an agent reaches its iteration limit, it returns:
```
"Task incomplete: Maximum iterations (10) reached."
```
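Since the limit is reported as an ordinary return string, one simple way to handle it (a sketch, using the per-request override described above) is to check for the prefix and retry with a larger budget:

```python
result = agent.input("Analyze all project files and generate comprehensive report")
if result.startswith("Task incomplete"):
    # Retry just this request with a higher iteration budget
    result = agent.input(
        "Analyze all project files and generate comprehensive report",
        max_iterations=30,
    )
```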
**Choosing the Right Limit:**
- **Simple tasks (1-3 tools)**: 5-10 iterations
- **Standard workflows**: 10-15 iterations (default: 10)
- **Complex analysis**: 20-30 iterations
- **Research/multi-step**: 30+ iterations
## 🛠️ Advanced Usage
### Multiple Tool Calls
Agents can chain multiple tool calls automatically:
```python
result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
# Agent will use calculator first, then current_time tool
```
### Custom LLM Providers
```python
from connectonion.llm import LLM
class CustomLLM(LLM):
    def complete(self, messages, tools=None):
        # Your custom LLM implementation
        pass

agent = Agent(name="test", llm=CustomLLM())
```
## 🚧 Current Limitations (MVP)
This is an MVP version with intentional limitations:
- Single LLM provider (OpenAI)
- Synchronous execution only
- JSON file storage only
- Basic error handling
- No multi-agent collaboration
## 🗺️ Future Roadmap
- Multiple LLM provider support (Anthropic, Local models)
- Async/await support
- Database storage options
- Advanced memory systems
- Multi-agent collaboration
- Web interface for behavior monitoring
- Plugin system for tools
## 🔗 Connect With Us
- **💬 Discord**: [Join our community](https://discord.gg/4xfD9k8AUF) - Get help, share ideas, meet other developers
- **📚 Documentation**: [docs.connectonion.com](http://docs.connectonion.com) - Comprehensive guides and examples
- **⭐ GitHub**: [Star the repo](https://github.com/openonion/connectonion) - Show your support
- **🐛 Issues**: [Report bugs](https://github.com/openonion/connectonion/issues) - We respond quickly
## 🤝 Contributing
We welcome contributions! ConnectOnion is open source and community-driven.
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Submit a pull request
See our [Contributing Guide](http://docs.connectonion.com/website-maintenance) for more details.
## 📄 License
MIT License - Use it anywhere, even commercially. See [LICENSE](LICENSE) file for details.
---
<div align="center">
### 💫 Remember

## **"Keep simple things simple, make complicated things possible"**

*Built with ❤️ by the open-source community*

[⬆ Back to top](#-connectonion)
</div>