# ConnectOnion
> **🚧 Private Beta** - ConnectOnion is currently in private beta. [Join our waitlist](https://connectonion.com) to get early access!
A simple Python framework for creating AI agents that can use tools and track their behavior.
## ✨ What's New
- **🎯 Function-Based Tools**: Just write regular Python functions - no classes needed!
- **🎭 System Prompts**: Define your agent's personality and role
- **🔄 Automatic Conversion**: Functions become OpenAI-compatible tools automatically
- **📝 Smart Schema Generation**: Type hints become function schemas
## 🚀 Quick Start
### Installation
```bash
pip install -r requirements.txt
```
### Basic Usage
```python
import os
from connectonion import Agent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Caution: eval is unsafe on untrusted input; use a safe parser in production

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default - the agent will try up to 10 tool calls per task
)

# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result)  # Agent will use the calculate function

result = agent.input("Search for Python tutorials")
print(result)  # Agent will use the search function

# 4. View behavior history (automatic!)
print(agent.history.summary())
```
## 🔧 Core Concepts
### Agent
The main class that orchestrates LLM calls and tool usage. Each agent:
- Has a unique name for tracking purposes
- Can be given a custom personality via `system_prompt`
- Automatically converts functions to tools
- Records all behavior to JSON files
### Function-Based Tools
**NEW**: Just write regular Python functions! ConnectOnion automatically converts them to tools:
```python
def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])
```
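For illustration, the schema derived from `my_tool` should look roughly like the standard OpenAI function-calling format sketched below. This is an illustrative sketch only, not the framework's exact output:

```python
# Illustrative sketch of the schema generated for my_tool (exact shape may differ by version)
my_tool_schema = {
    "name": "my_tool",
    "description": "This docstring becomes the tool description.",
    "parameters": {
        "type": "object",
        "properties": {
            "param": {"type": "string"},
            "optional_param": {"type": "integer"},  # has a default, so not listed as required
        },
        "required": ["param"],
    },
}
```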
Key features:
- **Automatic Schema Generation**: Type hints become OpenAI function schemas
- **Docstring Integration**: First line becomes tool description
- **Parameter Handling**: Supports required and optional parameters
- **Type Conversion**: Handles different return types automatically
### System Prompts
Define your agent's personality and behavior with flexible input options:
```python
# 1. Direct string prompt
agent = Agent(
name="helpful_tutor",
system_prompt="You are an enthusiastic teacher who loves to educate.",
tools=[my_tools]
)
# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
name="support_agent",
system_prompt="prompts/customer_support.md" # Automatically loads file content
)
# 3. Using Path object
from pathlib import Path
agent = Agent(
name="coder",
system_prompt=Path("prompts") / "senior_developer.txt"
)
# 4. None for default prompt
agent = Agent("basic_agent") # Uses default: "You are a helpful assistant..."
```
Example prompt file (`prompts/customer_support.md`):
```markdown
# Customer Support Agent
You are a senior customer support specialist with expertise in:
- Empathetic communication
- Problem-solving
- Technical troubleshooting
## Guidelines
- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions
```
### History
Automatic tracking of all agent behaviors including:
- Tasks executed
- Tools called with parameters and results
- Agent responses and execution time
- Persistent storage in `~/.connectonion/agents/{name}/behavior.json`
## 🎯 Example Tools
You can still use the traditional Tool class approach, but the new functional approach is much simpler:
### Traditional Tool Classes (Still Supported)
```python
from connectonion.tools import Calculator, CurrentTime, ReadFile
agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])
```
### New Function-Based Approach (Recommended)
```python
def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Caution: eval is unsafe on untrusted input; use a safe parser in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()

# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])
```
The function-based approach is simpler, more Pythonic, and easier to test!
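For example, the tools defined above can be unit-tested directly, with no agent and no API key. A minimal sketch (the import path is hypothetical and depends on where you define the functions):

```python
# test_tools.py - run with pytest; no agent or API key needed
# from my_tools import calculate, get_time  # hypothetical module holding the functions above

def test_calculate():
    assert calculate("15 * 8") == 120

def test_get_time_format():
    # Checks only the shape of the output, not the actual time
    assert len(get_time("%Y-%m-%d")) == 10
```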
## 🔨 Creating Custom Tools
```python
from connectonion.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )

    def run(self, city: str) -> str:
        # Your weather API logic here
        return f"Weather in {city}: Sunny, 22°C"

    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Name of the city"
                }
            },
            "required": ["city"]
        }

# Use with agent
agent = Agent(name="weather_agent", tools=[WeatherTool()])
```
## 📁 Project Structure
```
connectonion/
├── connectonion/
│   ├── __init__.py       # Main exports
│   ├── agent.py          # Agent class
│   ├── tools.py          # Tool interface and built-ins
│   ├── llm.py            # LLM interface and OpenAI implementation
│   └── history.py        # Behavior tracking
├── examples/
│   └── basic_example.py
├── tests/
│   └── test_agent.py
└── requirements.txt
```
## 🧪 Running Tests
```bash
python -m pytest tests/
```
Or run individual test files:
```bash
python -m unittest tests.test_agent
```
## 📊 Behavior Tracking
All agent behaviors are automatically tracked and saved to:
```
~/.connectonion/agents/{agent_name}/behavior.json
```
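Because this is a plain JSON file, you can inspect it directly with the standard library. A minimal sketch (the top-level structure and key names depend on the library version, so treat this as illustrative):

```python
import json
from pathlib import Path

# Path documented above; "my_assistant" is the agent name from the Quick Start example
history_path = Path.home() / ".connectonion" / "agents" / "my_assistant" / "behavior.json"
data = json.loads(history_path.read_text())

# Print the most recent record if the file holds a list, otherwise the whole object
print(data[-1] if isinstance(data, list) else data)
```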
Each record includes:
- Timestamp
- Task description
- Tool calls with parameters and results
- Final result
- Execution duration
View behavior summary:
```python
print(agent.history.summary())
# Agent: my_assistant
# Total tasks completed: 5
# Total tool calls: 8
# Total execution time: 12.34 seconds
# History file: ~/.connectonion/agents/my_assistant/behavior.json
#
# Tool usage:
# calculator: 5 calls
# current_time: 3 calls
```
## 🔑 Configuration
### OpenAI API Key
Set your API key via environment variable:
```bash
export OPENAI_API_KEY="your-api-key-here"
```
Or pass directly to agent:
```python
agent = Agent(name="test", api_key="your-api-key-here")
```
### Model Selection
```python
agent = Agent(name="test", model="gpt-5") # Default: gpt-5-mini
```
### Iteration Control
Control how many tool calling iterations an agent can perform:
```python
# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])

# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher",
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc",
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze all project files and generate comprehensive report",
    max_iterations=50  # Override for this specific task
)
```
When an agent reaches its iteration limit, it returns:
```
"Task incomplete: Maximum iterations (10) reached."
```
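If you want to handle that case programmatically, one option is to detect the marker and retry once with a higher per-request limit. A minimal sketch (the retry policy is an assumption, not framework behavior):

```python
task = "Analyze all project files and generate a report"
result = agent.input(task)

# Retry once with a larger budget if the agent ran out of iterations
if result.startswith("Task incomplete"):
    result = agent.input(task, max_iterations=30)
```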
**Choosing the Right Limit:**
- **Simple tasks (1-3 tools)**: 5-10 iterations
- **Standard workflows**: 10-15 iterations (default: 10)
- **Complex analysis**: 20-30 iterations
- **Research/multi-step**: 30+ iterations
## 🛠️ Advanced Usage
### Multiple Tool Calls
Agents can chain multiple tool calls automatically:
```python
result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
# Agent will use calculator first, then current_time tool
```
### Custom LLM Providers
```python
from connectonion.llm import LLM

class CustomLLM(LLM):
    def complete(self, messages, tools=None):
        # Your custom LLM implementation: call your provider here and
        # return a completion in the format the framework expects
        pass

agent = Agent(name="test", llm=CustomLLM())
```
## 🚧 Current Limitations (MVP)
This is an MVP version with intentional limitations:
- Single LLM provider (OpenAI)
- Synchronous execution only
- JSON file storage only
- Basic error handling
- No multi-agent collaboration
## 🗺️ Future Roadmap
- Multiple LLM provider support (Anthropic, Local models)
- Async/await support
- Database storage options
- Advanced memory systems
- Multi-agent collaboration
- Web interface for behavior monitoring
- Plugin system for tools
## 📄 License
MIT License - see LICENSE file for details.
## 🤝 Contributing
This is an MVP. Contributions toward the roadmap are welcome:
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Submit a pull request
## 📞 Support
For issues and questions:
- Create an issue on GitHub
- Check the examples/ directory for usage patterns
- Review the test files for implementation details