# functioncalming
A Python library for reliable structured responses from OpenAI models. It wraps function calling and Structured Outputs with automatic validation, retries, and error handling.
## Why functioncalming?
Working with OpenAI's function calling can be frustrating:
- Models sometimes call functions incorrectly
- Validation errors require manual retry logic
- Message history gets cluttered with failed attempts
- Converting between functions and Pydantic models is tedious
- Cost tracking and token optimization are manual work
functioncalming solves these problems by providing:
- **Automatic retries** with intelligent error handling
- **Clean message history** that hides failed attempts
- **Seamless integration** with both functions and Pydantic models
- **Built-in validation** with helpful error messages to the model
- **Cost tracking** and token optimization features
- **Structured Outputs** used automatically when available for better reliability
## Installation
```bash
pip install functioncalming
```
## Quick Start
```python
from functioncalming import get_completion
from pydantic import BaseModel, field_validator
import asyncio
def get_weather(city: str, country: str = "US") -> str:
    """Get current weather for a city"""
    return f"Sunny, 72°F in {city}, {country}"

class PersonInfo(BaseModel):
    """Extract person information from text"""
    name: str
    age: int
    occupation: str

    @field_validator('age')
    @classmethod
    def validate_age(cls, v):
        if v < 0 or v > 150:
            raise ValueError("Age must be between 0 and 150")
        return v

async def main():
    # Function calling example
    response = await get_completion(
        user_message="What's the weather like in Paris?",
        tools=[get_weather],
        model="gpt-4o"
    )
    print(response.tool_call_results[0])  # "Sunny, 72°F in Paris, US"

    # Structured output example with validation
    response = await get_completion(
        user_message="Extract info: John is a 30-year-old teacher",
        tools=[PersonInfo],
        model="gpt-4o"
    )
    person = response.tool_call_results[0]
    print(f"{person.name}, age {person.age}, works as {person.occupation}")

if __name__ == "__main__":
    asyncio.run(main())
```
## Core Features
### Automatic Retries with Clean History
When the model makes mistakes, functioncalming automatically retries and keeps your message history clean:
```python
class EmailAddress(BaseModel):
    email: str

    @field_validator('email')
    @classmethod
    def validate_email(cls, v):
        if '@' not in v:
            raise ValueError("Email must contain @ symbol")
        return v

response = await get_completion(
    user_message="My email is john.doe.gmail.com",  # Missing @
    tools=[EmailAddress],
    retries=2  # Will automatically retry up to 2 times
)
# response.messages contains clean history as if the model got it right first time
# response.messages_raw contains the actual history with retries
print(f"Retries needed: {response.retries_done}")
print(f"Final result: {response.tool_call_results[0].email}")
```
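One way to see the difference between the two histories (a minimal sketch, assuming both are plain message lists):

```python
# The raw history is at least as long as the clean one, since it also
# contains the failed attempt(s) and the correction messages.
assert len(response.messages_raw) >= len(response.messages)
```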
### Mixed Function and Model Tools
Use Python functions and Pydantic models together seamlessly:
```python
def calculate_tip(bill_amount: float, tip_percentage: float = 15.0) -> float:
    """Calculate tip amount"""
    return bill_amount * (tip_percentage / 100)

class Receipt(BaseModel):
    """Generate a structured receipt"""
    items: list[str]
    subtotal: float
    tip: float
    total: float

response = await get_completion(
    user_message="Calculate tip for $45.50 bill and create receipt for coffee and sandwich",
    tools=[calculate_tip, Receipt],
    model="gpt-4o"
)
```
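Because `tool_call_results` can mix plain return values (from functions) and Pydantic instances (from models), an `isinstance` check is one way to tell them apart. A minimal sketch; the result ordering and exact types here are assumptions:

```python
for result in response.tool_call_results:
    if isinstance(result, Receipt):
        # Structured result produced via the Receipt model
        print(f"Receipt total: ${result.total:.2f}")
    else:
        # Plain float returned by calculate_tip
        print(f"Tip amount: ${result:.2f}")
```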
### Cost Tracking and Usage Statistics
Built-in cost tracking for all major OpenAI models:
```python
response = await get_completion(
    user_message="Hello world",
    model="gpt-4o"
)

print(f"Cost: ${response.cost:.4f}")
print(f"Tokens used: {response.usage.total_tokens}")
print(f"Model: {response.model}")
print(f"Unknown costs: {response.unknown_costs}")  # True for very new models
```
### Tool Abbreviation for Large Schemas
Save tokens when using many tools with large schemas:
```python
# When you have many tools with complex schemas
response = await get_completion(
    user_message="Parse this document",
    tools=[ComplexTool1, ComplexTool2, ComplexTool3, ...],
    abbreviate_tools=True  # First shows tool names only, then the full schema for the chosen tool
)
```
### Multiple Tool Calls in Parallel
The model can call multiple tools in a single response:
```python
response = await get_completion(
    user_message="Get weather for NYC and LA, then create a travel comparison",
    tools=[get_weather, TravelComparison],
    model="gpt-4o"
)

# response.tool_call_results contains results from all tool calls
for i, result in enumerate(response.tool_call_results):
    print(f"Tool {i + 1} result: {result}")
```
### Structured Outputs Integration
Automatically uses OpenAI's Structured Outputs when available for better reliability:
```python
class CodeGeneration(BaseModel):
    """Generate code with strict structure"""
    language: str
    code: str
    explanation: str

# Uses Structured Outputs automatically on compatible models
response = await get_completion(
    user_message="Write a Python function to reverse a string",
    tools=[CodeGeneration],
    model="gpt-4o"  # Structured Outputs supported
)
```
## Advanced Usage
### Custom OpenAI Client
```python
from openai import AsyncOpenAI
from functioncalming.client import set_openai_client

custom_client = AsyncOpenAI(api_key="your-key", timeout=30.0)

with set_openai_client(custom_client):
    response = await get_completion(
        user_message="Hello",
        tools=[SomeTool]
    )
```
### Request Middleware
Wrap OpenAI API calls with custom logic:
```python
from functioncalming.client import calm_middleware

@calm_middleware
async def log_requests(*, model, messages, tools, **kwargs):
    print(f"Making request to {model}")
    try:
        completion = yield  # The OpenAI API call happens here
        print(f"Got response with {completion.usage.total_tokens} tokens")
    except Exception as e:
        print(f"Call to OpenAI failed: {e}")
        # Always re-raise here; you can still wrap the get_completion call itself in a try/except
        raise

response = await get_completion(
    user_message="Hello",
    tools=[SomeTool],
    middleware=log_requests
)
```
### Default Values and Escaped Output
Handle default values and return non-JSON data:
```python
from functioncalming.types import EscapedOutput

class Config(BaseModel):
    name: str
    debug_mode: bool = False  # Defaults work automatically

def generate_report():
    # Return complex data that shouldn't be shown to the model
    return EscapedOutput(
        result_for_model="Report generated successfully",
        data={"complex": "internal", "data": [1, 2, 3]}
    )

response = await get_completion(
    user_message="Generate config and report",
    tools=[Config, generate_report]
)

# The model sees "Report generated successfully"
# You get the full data object
report_data = response.tool_call_results[1].data
```
### Access to Tool Call Context
Get information about the current tool call from within your functions:
```python
from functioncalming.context import calm_context

def my_tool(param: str) -> str:
    context = calm_context.get()
    tool_call_id = context.tool_call.id
    function_name = context.tool_call.function.name
    return f"Called {function_name} with ID {tool_call_id}"
```
## API Reference
### `get_completion()`
Main function for getting completions with tool calling.
**Parameters:**
- `messages`: Existing message history
- `system_prompt`: System message (added to start of history)
- `user_message`: User message (added to end of history)
- `tools`: List of functions or Pydantic models to use as tools
- `tool_choice`: Control tool selection ("auto", "required", a specific tool, or "none"); see the sketch after this list
- `model`: OpenAI model name
- `retries`: Number of retry attempts for failed tool calls
- `abbreviate_tools`: Use tool abbreviation to save tokens
- `openai_client`: Custom OpenAI client
- `openai_request_context_manager`: Middleware for API requests
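
A hedged sketch of steering tool selection via `tool_choice`, using the string values from the parameter description above (how a specific tool would be passed is an assumption):

```python
# Force the model to call a tool instead of answering in prose
response = await get_completion(
    user_message="Extract info: Jane is a 41-year-old engineer",
    tools=[PersonInfo],
    tool_choice="required",  # or "auto", "none", or a specific tool
)
```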
**Returns:**
`CalmResponse` object with the following fields (a short usage sketch follows the list):
- `success`: Whether all tool calls succeeded
- `tool_call_results`: List of tool call results
- `messages`: Clean message history
- `messages_raw`: Raw history including retries
- `cost`: Estimated cost in USD
- `usage`: Token usage statistics
- `model`: Model used
- `error`: Exception if any tool calls failed
- `retries_done`: Number of retries performed
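
A minimal sketch of inspecting these fields after a call, using only the attributes listed above:

```python
response = await get_completion(
    user_message="Extract info: John is a 30-year-old teacher",
    tools=[PersonInfo],
)
if response.success:
    print(f"Result: {response.tool_call_results[0]} (cost ${response.cost:.4f})")
else:
    print(f"Failed after {response.retries_done} retries: {response.error}")
```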
### Model Registration
Register new models for cost tracking:
```python
from functioncalming.client import register_model

register_model(
    model_name="gpt-new-model",
    supports_structured_outputs=True,
    cost_per_1mm_input_tokens=2.0,
    cost_per_1mm_output_tokens=8.0
)
```
## Error Handling
functioncalming handles various error scenarios automatically:
- **Validation errors**: Shown to model for retry
- **JSON parsing errors**: Handled and reported for retry
- **Unknown function calls**: Model is corrected and retries
- **Inner validation errors**: Raised immediately (no retry)
- **Tool call errors**: Can trigger retries based on error type
```python
from functioncalming.utils import ToolCallError

def strict_function(value: int) -> str:
    if value < 0:
        # This will trigger a retry
        raise ToolCallError("Value must be positive")
    return str(value)
```
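Per the `success` and `error` fields of `CalmResponse` above, an exhausted retry budget surfaces on the response object rather than as an uncaught exception. A hedged sketch of checking for it:

```python
response = await get_completion(
    user_message="Run strict_function with the value -5",
    tools=[strict_function],
    retries=1,
)
if not response.success:
    # response.error holds the exception from the final failed attempt
    print(f"Tool calling failed: {response.error}")
```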
## Requirements
- Python 3.12+
- OpenAI API key
- Dependencies: `openai`, `pydantic`, `docstring-parser`
## Environment Variables
```bash
export OPENAI_API_KEY="your-api-key"
export OPENAI_ORGANIZATION="your-org-id" # Optional
export OPENAI_MODEL="gpt-4o" # Default model
export OPENAI_MAX_RETRIES="2" # Default retry count
```
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions welcome! Please check the issues page for current development priorities.