<div align="center">
<img src="https://tellurio-public-assets.s3.us-west-1.amazonaws.com/static/images/afnio-logo-1024x1024.png" width="250">
</div>
# Afnio: Making AI System Optimization Easy for Everyone
<div align="center">
<p align="center">
<a href="#quickstart">Quickstart</a> •
<a href="#key-concepts">Key Concepts</a> •
<a href="#contributing-guidelines">Contributing Guidelines</a>
</p>
</div>
Afnio is a framework for automatic prompt and hyperparameter optimization, particularly designed for complex AI systems where Language Models (LMs) are employed multiple times in workflows, such as in LM pipelines and agent-driven architectures. Effortlessly build and optimize AI systems for classification, information retrieval, question-answering, etc.
## Quickstart
Get started with Afnio in six steps, or try it instantly in Colab: [Open in Colab](https://colab.research.google.com/github/Tellurio-AI/tutorials/blob/main/facility_support/facility_support_sentiment.ipynb)
1. Install the Afnio SDK with [pip](https://pip.pypa.io/en/stable/):
```bash
pip install afnio
```
2. Set the API key for the LM provider you want to use as an environment variable (OpenAI in this quickstart). You can get your key from the [OpenAI dashboard](https://platform.openai.com/api-keys).
```bash
export OPENAI_API_KEY="your-api-key"
```
3. Log in to [Tellurio Studio](https://platform.tellurio.ai/) and paste your Tellurio API key when prompted. You can create or view your API keys on the [API Keys](https://platform.tellurio.ai/settings/api-keys) page.
```bash
afnio login
```
4. Copy and run this sample code to optimize your AI agent and track its quality metrics. Your first Run will appear in [Tellurio Studio](https://platform.tellurio.ai/), and your system's checkpoints are saved to a local `checkpoint/` directory created where you run the script.
_This example uses [Meta's Facility Support Analyzer dataset](https://github.com/meta-llama/prompt-ops/tree/main/use-cases/facility-support-analyzer) to classify enterprise support emails as positive, neutral, or negative. **Expect accuracy to improve from 66.4% ±1.5% to 80.8% ±12.5% — a +14.5% absolute gain.**_
````python
import json
import re
import afnio
import afnio.cognitive as cog
import afnio.cognitive.functional as F
import afnio.tellurio as te
from afnio.models.openai import AsyncOpenAI
from afnio.trainer import Trainer
from afnio.utils.data import DataLoader, WeightedRandomSampler
from afnio.utils.datasets import FacilitySupport
# Initialize Project and experiment Run
run = te.init("your-username", "Facility Support")
# Compute per-sample weights to balance the training set
def compute_sample_weights(data):
    with te.suppress_variable_notifications():
        labels = [y.data for _, (_, y, _) in data]
        counts = {label: labels.count(label) for label in set(labels)}
        total = len(data)
        return [total / counts[label] for label in labels]
# Prepare data and loaders
train_data = FacilitySupport(split="train", root="data")
test_data = FacilitySupport(split="test", root="data")
val_data = FacilitySupport(split="val", root="data")
weights = compute_sample_weights(train_data)
sampler = WeightedRandomSampler(weights, num_samples=len(train_data), replacement=True)
BATCH_SIZE = 33
train_dataloader = DataLoader(train_data, sampler=sampler, batch_size=BATCH_SIZE)
val_dataloader = DataLoader(val_data, batch_size=BATCH_SIZE, seed=42)
test_dataloader = DataLoader(test_data, batch_size=BATCH_SIZE, seed=42)
# Define prompt and response format
sentiment_task = "Read the provided message and determine the sentiment."
sentiment_user = "Read the provided message and determine the sentiment.\n\n**Message:**\n\n{message}\n\n"
SENTIMENT_RESPONSE_FORMAT = {
    "type": "json_schema",
    "json_schema": {
        "strict": True,
        "name": "sentiment_response_schema",
        "schema": {
            "type": "object",
            "properties": {
                "sentiment": {
                    "type": "string",
                    "enum": ["positive", "neutral", "negative"],
                },
            },
            "additionalProperties": False,
            "required": ["sentiment"],
        },
    },
}
# Set up LM model clients used for forward, backward passes and optimization step
afnio.set_backward_model_client("openai/gpt-5", completion_args={"temperature": 1.0, "max_completion_tokens": 32000, "reasoning_effort": "low"})
fw_model_client = AsyncOpenAI()
optim_model_client = AsyncOpenAI()
# Define the sentiment classification agent
class FacilitySupportAnalyzer(cog.Module):
    def __init__(self):
        super().__init__()
        self.sentiment_task = cog.Parameter(data=sentiment_task, role="system prompt for sentiment classification", requires_grad=True)
        self.sentiment_user = afnio.Variable(data=sentiment_user, role="input template to sentiment classifier")
        self.sentiment_classifier = cog.ChatCompletion()

    def forward(self, fwd_model, inputs, **completion_args):
        sentiment_messages = [
            {"role": "system", "content": [self.sentiment_task]},
            {"role": "user", "content": [self.sentiment_user]},
        ]
        return self.sentiment_classifier(fwd_model, sentiment_messages, inputs=inputs, response_format=SENTIMENT_RESPONSE_FORMAT, **completion_args)

    def training_step(self, batch, batch_idx):
        X, y = batch
        _, gold_sentiment, _ = y
        pred_sentiment = self(fw_model_client, inputs={"message": X}, model="gpt-4.1-nano", temperature=0.0)
        pred_sentiment.data = [json.loads(re.sub(r"^```json\n|\n```$", "", item))["sentiment"].lower() for item in pred_sentiment.data]
        loss = F.exact_match_evaluator(pred_sentiment, gold_sentiment)
        return {"loss": loss, "accuracy": loss[0].data / len(gold_sentiment.data)}

    def validation_step(self, batch, batch_idx):
        return self.training_step(batch, batch_idx)

    def test_step(self, batch, batch_idx):
        return self.validation_step(batch, batch_idx)

    def configure_optimizers(self):
        constraints = [
            afnio.Variable(
                data="The improved variable must never include or reference the characters `{` or `}`. Do not output them, mention them, or describe them in any way.",
                role="optimizer constraint",
            )
        ]
        optimizer = afnio.optim.TGD(self.parameters(), model_client=optim_model_client, constraints=constraints, momentum=3, model="gpt-5", temperature=1.0, max_completion_tokens=32000, reasoning_effort="low")
        return optimizer
# Instantiate agent and trainer
agent = FacilitySupportAnalyzer()
trainer = Trainer(max_epochs=5)
# Evaluate the agent on the test set before training (baseline performance)
llm_clients = [fw_model_client, afnio.get_backward_model_client(), optim_model_client]
trainer.test(agent=agent, test_dataloader=test_dataloader, llm_clients=llm_clients)
# Train the agent on the training set and validate on the validation set
trainer.fit(agent=agent, train_dataloader=train_dataloader, val_dataloader=val_dataloader, llm_clients=llm_clients)
run.finish()
````
5. View live metrics, compare Runs, and share results with your team.
<div align="center">
<img src="https://tellurio-public-assets.s3.us-west-1.amazonaws.com/static/images/tellurio-studio-quickstart-plots.png" width="90%">
</div>
6. Run your optimized AI agent on the test set to see how it performs, or try it on new data! See the full walkthrough in our Colab notebook: [Open in Colab](https://colab.research.google.com/github/Tellurio-AI/tutorials/blob/main/facility_support/facility_support_sentiment.ipynb)
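As a quick local check, here is a minimal sketch of one way to classify new messages with the trained agent. It assumes the script from step 4 has already run in the same session (so `agent` and `fw_model_client` are in scope); the two example messages are made up, and the JSON parsing mirrors `training_step` above.

````python
# Minimal sketch: reuse the trained agent on new, unlabeled messages.
# Assumes `agent` and `fw_model_client` from the quickstart script are in scope.
import json
import re

new_messages = [
    "The HVAC unit on floor 3 is still broken after two service visits.",  # hypothetical example
    "Thanks for the quick turnaround on the lighting repair!",  # hypothetical example
]

# The forward pass returns a Variable whose `.data` holds one JSON string per input message.
pred = agent(fw_model_client, inputs={"message": new_messages}, model="gpt-4.1-nano", temperature=0.0)
sentiments = [json.loads(re.sub(r"^```json\n|\n```$", "", item))["sentiment"].lower() for item in pred.data]
print(sentiments)  # e.g. ["negative", "positive"]
````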
## Key Concepts
- **Accelerated AI System Development:** Ship complex AI systems faster thanks to high-level UX and easy-to-debug runtime.
- **State-of-the-Art Performance:** Leverage built-in optimizers to automatically refine prompts and tune model parameters for any LM task, ensuring optimal performance.
- **LM Agnostic:** Decouple prompts and parameters from application logic, reducing LM model selection to a single hyperparameter in Afnio’s optimizers. Seamlessly switch between models without any additional rework.
- **Minimal and Flexible:** Pure Python with no API calls or dependencies, ensuring seamless integration with any tools or libraries.
- **Progressive Disclosure of Complexity:** Leverage diverse UX workflows, from high-level abstractions to fine-grained control, designed to suit various user profiles. Start simple and customize as needed, without ever feeling like you’re falling off a complexity cliff.
- **_Define-by-Run_ Scheme:** Your compound AI system is dynamically defined at runtime through forward computation, allowing for seamless handling of complex control flows like conditionals and loops, common in agent-based AI applications. With no need for precompilation, Afnio adapts on the fly to your evolving system (see the sketch below).
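The sketch below illustrates the define-by-run idea with a hypothetical `TriageAgent`. It reuses the `cog.Module`, `cog.Parameter`, `afnio.Variable`, and `cog.ChatCompletion` patterns from the quickstart; the prompts, roles, and escalation rule are illustrative assumptions, not part of the quickstart.

```python
# Illustrative sketch: ordinary Python control flow decides which LM calls
# happen on each forward pass, so the computation graph is defined at runtime.
import afnio
import afnio.cognitive as cog


class TriageAgent(cog.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical prompts; in a real system these would be task-specific.
        self.triage_prompt = cog.Parameter(data="Decide if the message needs escalation.", role="triage system prompt", requires_grad=True)
        self.escalation_prompt = cog.Parameter(data="Draft an escalation summary for the facilities team.", role="escalation system prompt", requires_grad=True)
        self.user_template = afnio.Variable(data="{message}", role="user message template")
        self.triage = cog.ChatCompletion()
        self.escalate = cog.ChatCompletion()

    def forward(self, fwd_model, inputs, **completion_args):
        decision = self.triage(
            fwd_model,
            [{"role": "system", "content": [self.triage_prompt]}, {"role": "user", "content": [self.user_template]}],
            inputs=inputs,
            **completion_args,
        )
        # A plain Python `if` shapes the run: the escalation call only becomes
        # part of this forward pass when the triage output asks for it.
        if any("escalate" in item.lower() for item in decision.data):
            return self.escalate(
                fwd_model,
                [{"role": "system", "content": [self.escalation_prompt]}, {"role": "user", "content": [self.user_template]}],
                inputs=inputs,
                **completion_args,
            )
        return decision
```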
## Contributing Guidelines
:computer: Want to contribute? Please follow our [contribution guidelines](CONTRIBUTING.md).