# APL Python Implementation
A minimal Python implementation of the Agentic Prompting Language (APL), following the v1.1 specification.
## Features
- **Full Jinja2 Support** - Use Jinja2 templates in all phases with variable assignment and control flow
- **Multi-step Workflows** - Create complex branching workflows with `next_step` control
- **Native Tool Calling** - Execute Python functions directly from LLM tool calls
- **Provider Agnostic** - Works with OpenAI API or custom providers
- **Multimodal Support** - Handle images, audio, and files with inline attachments
- **Minimal Dependencies** - Only requires `jinja2`, optionally `openai`
## Installation
```bash
pip install defuss-apl
```
For OpenAI support:
```bash
pip install defuss-apl[openai]
```
## Quick Start
### Simple Example
```python
import asyncio
from defuss_apl import start

async def main():
    agent = """
# prompt: greet
Hello! How can I help you today?
"""

    result = await start(agent)
    print(result["result_text"])

asyncio.run(main())
```
### With Variables and Control Flow
```python
import asyncio
from defuss_apl import start

async def main():
    agent = """
# pre: setup
{% set user_name = "Alice" %}
{% set max_retries = 3 %}

# prompt: setup
## system
You are a helpful assistant.

## user
Hello {{ user_name }}! Please help me with my question.

# post: setup
{% if errors and runs < max_retries %}
    {% set next_step = "setup" %}
{% else %}
    {% set next_step = "return" %}
{% endif %}
"""

    result = await start(agent)
    print(f"Final result: {result['result_text']}")
    print(f"Runs: {result['global_runs']}")

asyncio.run(main())
```
### With Tool Calling
```python
import asyncio
from defuss_apl import start

def calculator(operation: str, a: float, b: float) -> float:
    """Perform basic math operations"""
    if operation == "add":
        return a + b
    elif operation == "multiply":
        return a * b
    else:
        raise ValueError(f"Unknown operation: {operation}")

async def main():
    agent = """
# pre: setup
{% set allowed_tools = ["calculator"] %}

# prompt: setup
Please calculate 15 + 25 and then multiply the result by 2.
"""

    options = {
        "with_tools": {
            "calculator": {
                "fn": calculator,
                # Descriptor is auto-generated from the function signature
            }
        }
    }

    result = await start(agent, options)
    print(f"Result: {result['result_text']}")
    print(f"Tool calls: {result['result_tool_calls']}")

asyncio.run(main())
```
### With Custom Provider
#### Option 1: Custom OpenAI Provider
```python
import asyncio
from defuss_apl import start, create_openai_provider

async def main():
    agent = """
# prompt: test
Test using a custom OpenAI API endpoint
"""

    # Create a custom OpenAI provider with specific options
    custom_openai = create_openai_provider(
        base_url="https://api.my-deployment.com/v1",
        options={
            "api_key": "sk-my-custom-key",
            "timeout": 60.0,
            "max_retries": 3,
            "default_headers": {"X-Organization": "my-org-id"}
        }
    )

    options = {
        "with_providers": {
            "gpt-4-turbo": custom_openai,
            "my-ft-model": custom_openai
        }
    }

    result = await start(agent, options)
    print(result["result_text"])

asyncio.run(main())
```
#### Option 2: Fully Custom Provider
```python
import asyncio
from defuss_apl import start, create_custom_provider

async def my_provider(context):
    """Custom LLM provider"""
    prompts = context["prompts"]

    # Your custom LLM logic here
    response_text = "Custom response from my LLM"

    return {
        "choices": [
            {
                "message": {
                    "role": "assistant",
                    "content": response_text
                }
            }
        ],
        "usage": {
            "prompt_tokens": 10,
            "completion_tokens": 5,
            "total_tokens": 15
        }
    }

async def main():
    options = {
        "with_providers": {
            "my-model": create_custom_provider(my_provider)
        }
    }

    # Override the default model via the pre phase
    agent = """
# pre: setup
{% set model = "my-model" %}

# prompt: setup
Test message for custom provider
"""

    result = await start(agent, options)
    print(result["result_text"])

asyncio.run(main())
```
### Multimodal Example
```python
import asyncio
from defuss_apl import start

async def main():
    agent = """
# pre: setup
{% set model = "gpt-4o" %}
{% set temperature = 0.1 %}

# prompt: setup
## system
You are a helpful assistant that can analyze images.

## user
Please describe what you see in this image:
@image_url https://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/Sunrise.PNG/330px-Sunrise.PNG

And also process this document:
@file https://example.com/document.pdf
"""

    result = await start(agent)
    print(result["result_text"])

asyncio.run(main())
```
## API Reference
### Core Functions
#### `start(apl: str, options: Dict = None) -> Dict`
Execute an APL template and return the final context.
**Parameters:**
- `apl`: APL template string
- `options`: Optional configuration dict
**Returns:** Final execution context with all variables and results.
#### `check(apl: str) -> bool`
Validate APL template syntax. Returns `True` on success, raises `ValidationError` on failure.
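A minimal sketch of validating a template before running it. Assumptions: `check` is importable from `defuss_apl` as documented above; the exact import path of `ValidationError` is not shown in this README, so the sketch catches a generic exception instead.

```python
from defuss_apl import check

agent = """
# prompt: greet
Hello!
"""

try:
    if check(agent):
        print("Template is valid")
except Exception as e:  # e.g. ValidationError on malformed templates (import path not shown here)
    print("Validation failed:", e)
```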
#### `create_openai_provider(api_key=None, base_url=None, options=None) -> callable`
Create an OpenAI provider function with custom options.
**Parameters:**
- `api_key`: Optional API key (overrides context api_key and env var)
- `base_url`: Optional base URL (overrides context base_url)
- `options`: Optional dict with provider-specific options
**Returns:** Provider function compatible with APL runtime.
**Example:**
```python
openai_provider = create_openai_provider(
    base_url="https://api.my-custom-deployment.com",
    options={
        "api_key": "sk-my-custom-key",
        "timeout": 60.0,
        "max_retries": 2,
        "default_headers": {"X-Custom-Header": "value"}
    }
)

options = {
    "with_providers": {
        "gpt-4-turbo": openai_provider,
    }
}
```
#### `create_custom_provider(provider_fn) -> callable`
Wrap a custom provider function to ensure proper format.
**Parameters:**
- `provider_fn`: Custom provider function that takes context dict
**Returns:** Provider function compatible with APL runtime.
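A minimal sketch, mirroring the fully custom provider example above: any async function that accepts the context dict and returns a chat-completion-shaped response can be wrapped. The assumption that `context["prompts"]` holds OpenAI-style `{"role": ..., "content": ...}` messages is inferred from this README and may differ in detail.

```python
from defuss_apl import create_custom_provider

async def echo_provider(context):
    """Toy provider: echoes the content of the last prompt message."""
    prompts = context.get("prompts") or []
    last = prompts[-1].get("content", "") if prompts else ""  # message shape is an assumption
    return {
        "choices": [{"message": {"role": "assistant", "content": f"Echo: {last}"}}],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }

options = {"with_providers": {"my-model": create_custom_provider(echo_provider)}}
```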
### Configuration Options
```python
options = {
    # Tool functions
    "with_tools": {
        "tool_name": {
            "fn": tool_function,
            "descriptor": {...},    # Optional; auto-generated if not provided
            "with_context": False   # Whether to pass the execution context to the tool
        }
    },

    # Custom providers
    "with_providers": {
        "model_name": provider_function
    },

    # Execution limits
    "max_timeout": 120000,  # milliseconds
    "max_runs": float('inf')
}
```
### Context Variables
The execution context contains the following variables; a short access example follows the lists.
**Executor-maintained variables:**
- `result_text`: Text output from LLM
- `result_json`: Parsed JSON output (if `output_mode` is "json")
- `result_tool_calls`: List of executed tool calls
- `result_image_urls`: List of image URLs from LLM response
- `usage`: Token usage statistics
- `runs`: Number of runs for current step
- `global_runs`: Total runs across all steps
- `errors`: List of error messages
- `time_elapsed`: Time elapsed for current step (ms)
- `time_elapsed_global`: Total time elapsed (ms)
**User-settable variables:**
- `model`: LLM model name (default: "gpt-4o")
- `temperature`: Sampling temperature
- `max_tokens`: Maximum tokens to generate
- `allowed_tools`: List of allowed tool names
- `output_mode`: "json" or "structured_output"
- `stop_sequences`: List of stop sequences
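A minimal sketch of how these fit together: user-settable variables are assigned in a `# pre` phase, and executor-maintained variables are read from the context returned by `start` (keys as listed above).

```python
import asyncio
from defuss_apl import start

agent = """
# pre: main
{% set model = "gpt-4o" %}
{% set temperature = 0.2 %}
{% set max_tokens = 256 %}

# prompt: main
Summarize APL in one sentence.
"""

async def main():
    result = await start(agent)
    print(result["result_text"])                    # text output from the LLM
    print(result["usage"])                          # token usage statistics
    print(result["runs"], result["global_runs"])    # per-step and total run counts
    print(result["time_elapsed_global"], "ms")      # total elapsed time

asyncio.run(main())
```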
## Tool Development
### Simple Tool
```python
def my_tool(param1: str, param2: int = 42) -> str:
    """Tool description for the LLM"""
    return f"Processed {param1} with {param2}"
```
### Tool with Context Access
```python
def context_tool(message: str, context) -> str:
    """Tool that accesses the execution context"""
    user_name = context.get("user_name", "User")
    return f"Hello {user_name}, you said: {message}"

# Register with context access
options = {
    "with_tools": {
        "context_tool": {
            "fn": context_tool,
            "with_context": True
        }
    }
}
```
### Custom Tool Descriptor
```python
options = {
    "with_tools": {
        "my_tool": {
            "fn": my_tool_function,
            "descriptor": {
                "type": "function",
                "function": {
                    "name": "my_tool",
                    "description": "Custom tool description",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "input": {"type": "string", "description": "Input text"}
                        },
                        "required": ["input"]
                    }
                }
            }
        }
    }
}
```
## Error Handling
APL provides comprehensive error handling:
```python
try:
    result = await start(agent, options)

    # Check for execution errors
    if result["errors"]:
        print("Errors occurred:", result["errors"])

    print("Success:", result["result_text"])

except ValidationError as e:
    print("Template validation failed:", e)

except RuntimeError as e:
    print("Execution failed:", e)
```
## Development
### Running Tests
```bash
# Clone the repository
git clone https://github.com/kyr0/defuss.git && cd defuss/packages/apl/python
# Create virtual environment
python -m venv .venv
# Activate virtual environment
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Run tests
pytest tests/
```
### Testing Without OpenAI
The implementation includes a mock provider for testing without OpenAI API access:
```python
# Will use mock provider automatically if openai is not installed
result = await start(agent)
```
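For deterministic tests you can also register an explicit stub provider instead of relying on the automatic fallback. A sketch using `create_custom_provider` as documented above; the model name `test-model` and the canned response are arbitrary choices for illustration.

```python
import asyncio
from defuss_apl import start, create_custom_provider

async def stub_provider(context):
    """Always returns the same canned answer, useful for unit tests."""
    return {
        "choices": [{"message": {"role": "assistant", "content": "stubbed answer"}}],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }

options = {"with_providers": {"test-model": create_custom_provider(stub_provider)}}

agent = """
# pre: test
{% set model = "test-model" %}

# prompt: test
Anything
"""

result = asyncio.run(start(agent, options))
print(result["result_text"])  # text output mapped from the provider response
```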
## License
MIT License - see LICENSE file for details.