| Name | py-calling-agent |
| Version | 0.4.4 |
| home_page | None |
| Summary | A Python agent framework that enables function-calling through LLM code generation |
| upload_time | 2025-08-23 09:10:39 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.11 |
| license | None |
| keywords | agent, function-calling, llm |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# 🤖 PyCallingAgent
**🚀 AI that executes, not just generates!**
[MIT License](https://opensource.org/licenses/MIT)
[Python 3.11+](https://www.python.org/downloads/)
[PyPI](https://pypi.org/project/py-calling-agent)
PyCallingAgent is a tool-augmented agent framework that enables function-calling through LLM code generation and provides runtime state management. Unlike traditional JSON-schema approaches, it leverages an LLM's inherent coding capabilities to interact with tools through a Python runtime environment, allowing direct access to execution results and runtime state.
> *"When your AI needs to run code, not just write it"*
## Why PyCallingAgent?
**Traditional function calling is broken.** JSON schemas are rigid, error-prone, and limit what your AI can do. PyCallingAgent unleashes your LLM's natural coding abilities:
- 🧠 **Native Code Generation** - LLMs excel at writing code, not parsing JSON
- ⚡ **Fewer Iterations** - Execute complex multi-step workflows in a single turn
- 🔄 **Persistent State** - Maintain variables and objects across conversations
- 🎯 **Maximum Flexibility** - Handle dynamic workflows that JSON schemas can't express
- 🛡️ **Secure by Design** - AST validation prevents dangerous code execution
- 📡 **Real-time Streaming** - Watch your AI think and execute in real-time
- 🌐 **Universal LLM Support** - Works with OpenAI, Anthropic, Google, and 100+ providers
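To make the contrast concrete, here is the kind of code an agent might generate in a single turn (a hypothetical sketch; `reorder` stands in for an injected tool function and is not part of the library):

```python
# Hypothetical sketch of LLM-generated code: a loop plus conditional logic,
# using a tool's return value directly -- something a single JSON-schema
# function call cannot express in one turn.
inventory = {"widgets": 3, "gadgets": 0}

def reorder(item: str, qty: int) -> str:
    """Stand-in for an injected tool function."""
    return f"ordered {qty} x {item}"

# Reorder every out-of-stock item and collect the confirmations
orders = [reorder(item, 10) for item, count in inventory.items() if count == 0]
print(orders)  # -> ['ordered 10 x gadgets']
```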
## Quick Start
```bash
pip install 'py-calling-agent[all]'
```
Or install only the provider you need:
```bash
# OpenAI support
pip install 'py-calling-agent[openai]'
# 100+ LLM providers via LiteLLM
pip install 'py-calling-agent[litellm]'
```
### Simple Function Calling
```python
import asyncio
from py_calling_agent import PyCallingAgent
from py_calling_agent.models import OpenAIServerModel
from py_calling_agent.python_runtime import PythonRuntime, Function, Variable

async def main():
    # Initialize LLM model
    model = OpenAIServerModel(
        model_id="your-model",
        api_key="your-api-key",
        base_url="your-base-url"
    )

    # Define tool functions
    def add_task(task_name: str) -> str:
        """Add a new task to the task list"""
        tasks.append({"name": task_name, "done": False})
        return f"Added task: {task_name}"

    def complete_task(task_name: str) -> str:
        """Mark a task as completed"""
        for task in tasks:
            if task_name.lower() in task["name"].lower():
                task["done"] = True
                return f"Completed: {task['name']}"
        return f"Task '{task_name}' not found"

    def send_reminder(message: str) -> str:
        """Send a reminder notification"""
        return f"Reminder: {message}"

    # Initialize data
    tasks = []

    # Setup Runtime
    runtime = PythonRuntime(
        variables=[
            Variable("tasks", tasks, "List of user's tasks. Example: [{'name': 'walk the dog', 'done': False}]")
        ],
        functions=[
            Function(add_task),
            Function(complete_task),
            Function(send_reminder)
        ]
    )

    agent = PyCallingAgent(model, runtime=runtime)

    await agent.run("Add buy groceries and call mom to my tasks")
    print(f"Current tasks: {runtime.get_variable_value('tasks')}")

    await agent.run("Mark groceries done and remind me about mom")
    print(f"Final state: {runtime.get_variable_value('tasks')}")

    response = await agent.run("What's my progress?")
    print(response.content)

if __name__ == "__main__":
    asyncio.run(main())
```
### Advanced: Stateful Object Interactions
```python
import asyncio
from py_calling_agent import PyCallingAgent
from py_calling_agent.models import LiteLLMModel
from py_calling_agent.python_runtime import PythonRuntime, Function, Variable

async def main():
    # Initialize LLM model
    model = LiteLLMModel(
        model_id="your-model",
        api_key="your-api-key",
        base_url="your-base-url"
    )

    # Define a class with methods
    class DataProcessor:
        """A utility class for processing and filtering data collections.

        This class provides methods for basic data processing operations such as
        sorting, removing duplicates, and filtering based on thresholds.

        Example:
            >>> processor = DataProcessor()
            >>> processor.process_list([3, 1, 2, 1, 3])
            [1, 2, 3]
            >>> processor.filter_numbers([1, 5, 3, 8, 2], 4)
            [5, 8]
        """
        def process_list(self, data: list) -> list:
            """Sort a list and remove duplicates"""
            return sorted(set(data))

        def filter_numbers(self, data: list, threshold: int) -> list:
            """Filter numbers greater than threshold"""
            return [x for x in data if x > threshold]

    # Prepare context
    processor = DataProcessor()
    numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5]

    # Create runtime with injected variables
    runtime = PythonRuntime(
        variables=[
            Variable(
                name="processor",
                value=processor,
                description="Data processing tool with various methods"
            ),
            Variable(
                name="numbers",
                value=numbers,
                description="Input list of numbers"
            ),
            Variable(
                name="processed_data",
                description="Store processed data in this variable"
            ),
            Variable(
                name="filtered_data",
                description="Store filtered data in this variable"
            )
        ]
    )

    # Create agent
    agent = PyCallingAgent(model, runtime=runtime)

    # Process data
    await agent.run("Use processor to sort and deduplicate numbers")
    processed_data = agent.runtime.get_variable_value('processed_data')
    print("Processed data:", processed_data)

    # Filter data
    await agent.run("Filter numbers greater than 4")
    filtered_data = agent.runtime.get_variable_value('filtered_data')
    print("Filtered data:", filtered_data)

if __name__ == "__main__":
    asyncio.run(main())
```
### Real-time Streaming
Watch your AI think and execute code in real-time:
```python
async for event in agent.stream_events("Analyze this data and create a summary"):
    if event.type.value == 'CODE':
        print(f"🔧 Executing: {event.content}")
    elif event.type.value == 'EXECUTION_RESULT':
        print(f"✅ Result: {event.content}")
    elif event.type.value == 'TEXT':
        print(event.content, end="", flush=True)
```
### Security Features
PyCallingAgent includes rule-based security to prevent dangerous code execution:
```python
import asyncio
from py_calling_agent import PyCallingAgent
from py_calling_agent.models import OpenAIServerModel
from py_calling_agent.python_runtime import PythonRuntime
from py_calling_agent.security_checker import (
    SecurityChecker, ImportRule, FunctionRule, AttributeRule, RegexRule
)

async def main():
    model = OpenAIServerModel(
        model_id="gpt-4",
        api_key="your-api-key",
        base_url="https://api.openai.com/v1"
    )

    # Configure security with specific rules
    rules = [
        ImportRule({"os", "subprocess", "sys", "socket"}),   # Block dangerous imports
        FunctionRule({"eval", "exec", "compile", "open"}),   # Block dangerous functions
        AttributeRule({"__globals__", "__builtins__"}),      # Block attribute access
        RegexRule("no_print", "Block print statements", r"print\s*\(")  # Custom regex
    ]

    checker = SecurityChecker(rules)
    runtime = PythonRuntime(security_checker=checker)

    agent = PyCallingAgent(model, runtime=runtime)

    # This will be blocked by security
    try:
        await agent.run("import os and list files")
    except Exception as e:
        print(f"Blocked: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```
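As a conceptual illustration (not the library's actual implementation), an AST-based import rule can be sketched with the standard library: parse the code without executing it and report any blocked modules it would import.

```python
# Conceptual sketch of AST-based import checking; PyCallingAgent's real
# rules may differ. The source is parsed, never executed.
import ast

BLOCKED_IMPORTS = {"os", "subprocess", "sys", "socket"}

def find_blocked_imports(source: str) -> list[str]:
    """Return the blocked module names that `source` imports."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name in BLOCKED_IMPORTS]
        elif isinstance(node, ast.ImportFrom) and node.module in BLOCKED_IMPORTS:
            hits.append(node.module)
    return hits

print(find_blocked_imports("import os\nfrom subprocess import run"))  # -> ['os', 'subprocess']
```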
## Key Features
- **🤖 Code-Based Function Calling**: Leverages an LLM's natural coding abilities instead of rigid JSON schemas
- **🛡️ Secure Runtime Environment**:
- Inject Python objects, variables, and functions as tools
- Rule-based security validation prevents dangerous code execution
- Flexible security rules: ImportRule, FunctionRule, AttributeRule, RegexRule
- Customizable security policies for different use cases
- Access execution results and maintain state across interactions
- **💬 Multi-Turn Conversations**: Persistent context and runtime state across multiple interactions
- **⚡ Streaming & Async**: Real-time event streaming and full async/await support for optimal performance
- **🛡️ Execution Control**: Configurable step limits and error handling to prevent infinite loops
- **🎯 Unmatched Flexibility**: JSON schemas break with dynamic workflows. Python code adapts to any situation - conditional logic, loops, and complex data transformations.
- **🌐 Flexible LLM Support**: Works with any LLM provider via OpenAI-compatible APIs or LiteLLM
## Real-World Examples
For more examples, check out the [examples](examples) directory:
- [Basic Usage](examples/basic_usage.py): Simple function calling and object processing
- [Runtime State](examples/runtime_state.py): Managing runtime state across interactions
- [Object Methods](examples/object_methods.py): Using class methods and complex objects
- [Multi-Turn](examples/multi_turn.py): Complex analysis conversations with state persistence
- [Stream](examples/stream.py): Streaming responses and execution events
## LLM Provider Support
PyCallingAgent supports multiple LLM providers:
### OpenAI-Compatible Models
```python
from py_calling_agent.models import OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-4",
    api_key="your-api-key",
    base_url="https://api.openai.com/v1"  # or your custom endpoint
)
```
### LiteLLM Models (Recommended)
LiteLLM provides unified access to hundreds of LLM providers:
```python
from py_calling_agent.models import LiteLLMModel

# OpenAI
model = LiteLLMModel(
    model_id="gpt-4",
    api_key="your-api-key",
    custom_llm_provider='openai'
)

# Anthropic Claude
model = LiteLLMModel(
    model_id="claude-3-sonnet-20240229",
    api_key="your-api-key",
    custom_llm_provider='anthropic'
)

# Google Gemini
model = LiteLLMModel(
    model_id="gemini/gemini-pro",
    api_key="your-api-key"
)
```
## Contributing
Contributions are welcome! Please feel free to submit a PR.
For more details, see [CONTRIBUTING.md](CONTRIBUTING.md).
## License
MIT License - see [LICENSE](LICENSE) for details.
Raw data
{
    "_id": null,
    "home_page": null,
    "name": "py-calling-agent",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.11",
    "maintainer_email": null,
    "keywords": "agent, function-calling, llm",
    "author": null,
    "author_email": "Ram <codermao@gmail.com>, Cooper <cooperimmaculate@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/0b/e9/86e12aca033b1ea2421b6b72effafd7ce4b6cb29017027fdc0db60d38905/py_calling_agent-0.4.4.tar.gz",
    "platform": null,
    "bugtrack_url": null,
    "license": null,
    "summary": "A Python agent framework that enables function-calling through LLM code generation",
    "version": "0.4.4",
    "project_urls": {
        "Homepage": "https://github.com/acodercat/py-calling-agent",
        "Repository": "https://github.com/acodercat/py-calling-agent"
    },
    "split_keywords": [
        "agent",
        " function-calling",
        " llm"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "d09ad91937f1f972ba5cb373fa9e496d78b774a316c220d211f5b9a87dfac0f0",
                "md5": "ccc87591f5b0f6180f79c19299be8a94",
                "sha256": "cc7c4a2fd17c03728b87cd9f935a8ce363470d5f0013369014eeeab2817bf1a7"
            },
            "downloads": -1,
            "filename": "py_calling_agent-0.4.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "ccc87591f5b0f6180f79c19299be8a94",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.11",
            "size": 23472,
            "upload_time": "2025-08-23T09:10:37",
            "upload_time_iso_8601": "2025-08-23T09:10:37.234973Z",
            "url": "https://files.pythonhosted.org/packages/d0/9a/d91937f1f972ba5cb373fa9e496d78b774a316c220d211f5b9a87dfac0f0/py_calling_agent-0.4.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "0be986e12aca033b1ea2421b6b72effafd7ce4b6cb29017027fdc0db60d38905",
                "md5": "9fe3d9868ad6778bd79e695ae1ca485a",
                "sha256": "80499c6f41bce2409d948653c53e5b2709216b44bcd561a589f533b651375818"
            },
            "downloads": -1,
            "filename": "py_calling_agent-0.4.4.tar.gz",
            "has_sig": false,
            "md5_digest": "9fe3d9868ad6778bd79e695ae1ca485a",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.11",
            "size": 147955,
            "upload_time": "2025-08-23T09:10:39",
            "upload_time_iso_8601": "2025-08-23T09:10:39.385252Z",
            "url": "https://files.pythonhosted.org/packages/0b/e9/86e12aca033b1ea2421b6b72effafd7ce4b6cb29017027fdc0db60d38905/py_calling_agent-0.4.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-08-23 09:10:39",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "acodercat",
    "github_project": "py-calling-agent",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "py-calling-agent"
}