| Field | Value |
| --- | --- |
| Name | pydantic-ai-optimizers |
| Version | 0.0.2 |
| Summary | A library for optimizing PydanticAI agent prompts through iterative improvement and evaluation, built on top of PydanticAI + Pydantic Evals. |
| Author | Jan Siml |
| Homepage | https://github.com/svilupp/pydantic-ai-optimizers |
| Requires Python | >=3.11 |
| License | MIT |
| Keywords | pydantic, ai, optimizers, llm |
| Upload time | 2025-08-31 20:29:22 |
# PydanticAI Optimizers
> ⚠️ **Super Opinionated**: This library is specifically built on top of PydanticAI + Pydantic Evals. If you don't use both together, this is useless to you.
A Python library for systematically improving PydanticAI agent prompts through iterative optimization. **Heavily inspired by the [GEPA paper](https://arxiv.org/abs/2507.19457)** with practical extensions for prompt optimization when switching model classes or providers.
## Acknowledgments
This work builds upon the excellent research in **"GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning"** by Agrawal et al. We're grateful for their foundational work on reflective prompt evolution and have adapted (some of) their methodology with several practical tweaks for the PydanticAI ecosystem.
**Why this exists**: Every time you switch model classes (GPT-4.1 → GPT-5 → Claude Sonnet 4) or providers, your prompting needs change. Instead of manually tweaking prompts each time, this automates the optimization process for your existing PydanticAI agents with minimal effort.
## What It Does
This library optimizes prompts by:
1. **Mini-batch Testing**: Each candidate prompt is tested against a small subset of cases to see if it beats its parent before full evaluation
2. **Individual Case Tracking**: Performance on each test case is tracked, enabling weighted sampling that favors prompts that win on more individual cases
3. **Memory for Failed Attempts**: When optimization gets stuck (children keep failing mini-batch tests), the system provides previous failed attempts to the reflection agent with the message: "You've tried these approaches and they didn't work - think outside the box!"
The core insight is that you don't lose learning between iterations, and the weighted sampling based on individual case win rates helps explore more diverse and effective prompt variations.
## Quick Start
### Installation

From a clone of the repository:

```bash
uv sync
```
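
Since the package is published on PyPI, you can also add it to your own project:

```bash
pip install pydantic-ai-optimizers
```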
### Run the Chef Example

From the project root:

```bash
uv run examples/chef/optimize.py
```

This optimizes a chef assistant prompt that helps users find recipes while avoiding allergens. You'll see the optimization process with real-time feedback and the final best prompt.

### Run the Customer Support Example

```bash
uv run examples/customer_support/optimize.py
```
## Repository Structure
```
.
├── src/pydantic_ai_optimizers/
│   ├── agents/
│   │   └── reflection_agent.py
│   ├── optimizer.py
│   ├── config.py
│   └── cli.py
├── examples/
│   ├── chef/
│   └── customer_support/
├── tests/
└── docs/
```
### Basic Usage in Your Project
```python
from pathlib import Path

from pydantic_ai_optimizers import Optimizer, make_reflection_agent
from your_domain import create_your_agent, build_dataset, YourInputType, YourOutputType

# CRITICAL: Define your run_case function with the correct signature
async def run_case(prompt_file: str, user_input: YourInputType) -> YourOutputType:
    """
    Run your agent with a specific prompt file and user input.

    Args:
        prompt_file: ABSOLUTE path to the prompt file (e.g., "/path/to/prompts/candidate_001.txt")
        user_input: The input from your dataset cases

    Returns:
        The agent's output that will be evaluated
    """
    # Load the prompt and create the agent
    agent = create_your_agent(prompt_file=prompt_file, model="your-model")
    result = await agent.run(user_input.message)  # Or however you pass inputs
    return result.output

# Set up your dataset
dataset = build_dataset("your_cases.json")

# Optional: Customize the reflection agent
reflection_agent = make_reflection_agent(
    model="openai:gpt-5-mini",  # Use a different model
    special_instructions="Focus on conciseness and clarity",  # Add custom instructions
)
# Or use the default: reflection_agent = None (will use make_reflection_agent() internally)

# Create optimizer
optimizer = Optimizer(
    dataset=dataset,
    run_case=run_case,  # Your async function with the signature above
    reflection_agent=reflection_agent,  # Optional, uses default if None
)

# Run optimization (inside an async context)
best = await optimizer.optimize(
    seed_prompt_file=Path("prompts/seed.txt"),
    full_validation_budget=20,
)
print(f"Best prompt: {best.prompt_path}")
```
## How It Works
### 1. Start with a Seed Prompt
The optimizer begins with your initial prompt and evaluates it on all test cases.
### 2. Mini-batch Gating (Key Innovation #1)
- Select a parent prompt using weighted sampling (prompts that win more individual cases are more likely to be selected)
- Generate a new candidate through reflection on failed cases
- Test the candidate on a small mini-batch of cases
- Only if it beats the parent on the mini-batch does it get added to the candidate pool
### 3. Individual Case Performance Tracking (Key Innovation #2)
- Track which prompt wins each individual test case
- Use this for Pareto-efficient weighted sampling of parents
- This ensures diverse exploration and prevents getting stuck in local optima
### 4. Memory for Failed Attempts (Our Addition)
- When candidates keep failing mini-batch tests, record the failed attempts
- Provide these to the reflection agent as context: "Here's what you've tried that didn't work"
- This increases pressure over time to try more creative approaches when stuck
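
Putting the pieces together, the loop looks roughly like this (an illustrative sketch only; `reflect` and `evaluate` are stand-in helpers, not the library's actual internals, and parent sampling is simplified to total score):

```python
import random

def optimize_sketch(seed, cases, reflect, evaluate, iterations=20, minibatch_size=4):
    """Illustrative sketch of the loop described above, not the real implementation."""
    pool = [seed]
    case_scores = {seed: evaluate(seed, cases)}  # one score per test case
    failed_attempts = []

    for _ in range(iterations):
        # Weighted parent sampling: prompts that win more cases are picked more
        # often (epsilon avoids a zero total weight on an all-failing pool).
        weights = [sum(case_scores[p]) + 1e-9 for p in pool]
        parent = random.choices(pool, weights=weights, k=1)[0]

        # Reflection sees the parent's per-case results, plus any children that
        # recently failed the gate ("these didn't work - think outside the box").
        child = reflect(parent, case_scores[parent], failed_attempts)

        # Mini-batch gate: promote the child only if it beats its parent
        # on a small random subset of cases.
        batch = random.sample(cases, minibatch_size)
        if sum(evaluate(child, batch)) > sum(evaluate(parent, batch)):
            pool.append(child)
            case_scores[child] = evaluate(child, cases)  # full evaluation
            failed_attempts.clear()  # fresh start once progress resumes
        else:
            failed_attempts.append(child)

    # Best candidate by total score across all cases
    return max(pool, key=lambda p: sum(case_scores[p]))
```

The real `Optimizer` layers Pareto-aware parent sampling, a bounded pool (`max_pool_size`), and prompt files on disk (`pool_dir`) on top of this skeleton.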
## Creating Your Own Optimization
### 1. Set Up Your Domain
Copy the `examples/chef/` structure:
```
your_domain/
├── agent.py       # Your complete agent (tools, setup, everything)
├── optimize.py    # Your evaluation logic + optimization loop
├── data/          # Your domain data
└── prompts/       # Seed prompt and reflection instructions
```
### 2. Implement Required Functions
**Agent** (`agent.py`):
```python
# CRITICAL: Your run_case function must have this exact signature
async def run_case(prompt_file: str, user_input: YourInputType) -> YourOutputType:
    """
    Run your agent with a specific prompt file and user input.

    Args:
        prompt_file: ABSOLUTE path to the prompt file (optimizer passes full paths)
        user_input: Input from your dataset cases (your domain-specific type)

    Returns:
        Agent output that matches your evaluators' expectations
    """
    # Example implementation:
    agent = create_your_agent(prompt_file=prompt_file, model="gpt-4")
    result = await agent.run(user_input.message)
    return result.output

# Optional: Customize the reflection agent
# If you don't provide one, the optimizer uses make_reflection_agent() internally
def create_custom_reflection_agent():
    from pydantic_ai_optimizers import make_reflection_agent

    return make_reflection_agent(
        model="gpt-4o",  # Your preferred model for reflection
        special_instructions="""
        Focus on:
        - Brevity and clarity
        - Domain-specific accuracy
        - Better error handling
        """,  # Custom instructions for prompt improvement
    )
```
**Optimization** (`optimize.py`):
```python
from pathlib import Path

from pydantic_ai_optimizers import Optimizer, make_reflection_agent
from agent import run_case  # Your run_case function from agent.py (defined above)

def build_dataset(cases_file):
    # Load test cases and evaluators using pydantic-evals
    # Return a dataset that can evaluate your agent's outputs
    pass

async def main():
    # Set up the dataset
    dataset = build_dataset("cases.yaml")

    # Optional: Use a custom reflection agent
    reflection_agent = make_reflection_agent(
        model="gpt-4o",
        special_instructions="Focus on accuracy and brevity",
    )
    # Or use the default: reflection_agent = None

    # Create the optimizer
    optimizer = Optimizer(
        dataset=dataset,
        run_case=run_case,  # Your async function; pass it directly, no wrapping needed
        reflection_agent=reflection_agent,  # Optional
        pool_dir=Path("prompt_pool"),
        minibatch_size=4,
        max_pool_size=16,
    )

    # Run optimization
    best = await optimizer.optimize(
        seed_prompt_file=Path("prompts/seed.txt"),
        full_validation_budget=20,
    )

    print(f"Best prompt saved to: {best.prompt_path}")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
### 3. Run Optimization
```bash
python optimize.py
```
## Key Integrations
This library is designed to work seamlessly with:
### [textprompts](https://github.com/svilupp/textprompts)
Makes it easy to use standard text files with placeholders for prompt evolution. Perfect for diffing prompts and version control:
```python
import textprompts

# In your prompt file:
# "You are a {role}. Your task is to {task}..."

# textprompts handles loading and placeholder substitution
prompt = textprompts.load_prompt("my_prompt.txt", role="chef", task="find recipes")
```
### [pydantic-ai-helpers](https://github.com/svilupp/pydantic-ai-helpers)
Provides utilities that make PydanticAI much more convenient:
- Quick tool parsing and setup
- Simple evaluation comparisons between outputs and expected results
- Streamlined agent configuration
These integrations save significant development time when building optimization pipelines.
## Reflection Agent Options
The optimizer uses a reflection agent to generate improved prompts based on evaluation feedback. You have several options:
### Use Default Reflection Agent
```python
# Pass None or omit the parameter - uses make_reflection_agent() with defaults
optimizer = Optimizer(
    dataset=dataset,
    run_case=run_case,
    # reflection_agent=None,  # Uses default
)
```
### Customize the Model
```python
from pydantic_ai_optimizers import make_reflection_agent
# Use a different model for reflection
reflection_agent = make_reflection_agent(model="openai:gpt-5-mini")
optimizer = Optimizer(
    dataset=dataset,
    run_case=run_case,
    reflection_agent=reflection_agent,
)
```
### Add Special Instructions (e.g., GPT-5 prompting tips)
```python
import textprompts
from pathlib import Path
from pydantic_ai_optimizers import make_reflection_agent
# Load GPT-5 prompting tips from a file and pass to the reflection agent
tips = str(textprompts.load_prompt(
    Path("examples/customer_support/prompts/gpt5_tips.txt")
))

reflection_agent = make_reflection_agent(
    model="openai:gpt-5-mini",
    special_instructions=tips,
)

optimizer = Optimizer(
    dataset=dataset,
    run_case=run_case,
    reflection_agent=reflection_agent,
)
```
### Bring Your Own Reflection Agent
```python
from pydantic_ai import Agent
# Create completely custom reflection agent
reflection_agent = Agent(
    model="your-model",
    instructions="Your custom reflection instructions...",
)

optimizer = Optimizer(
    dataset=dataset,
    run_case=run_case,
    reflection_agent=reflection_agent,
)
```
## Configuration
Settings can be supplied through environment variables or configuration files:
```bash
export OPENAI_API_KEY="your-key"
export REFLECTION_MODEL="openai:gpt-5"
export AGENT_MODEL="openai:gpt-5-nano"
export VALIDATION_BUDGET=20
export MAX_POOL_SIZE=16
```
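
As a rough sketch, a script could pick these up like so (illustrative only; the library's own `config.py` may read them differently):

```python
import os

# Hypothetical consumption of the environment variables above
reflection_model = os.getenv("REFLECTION_MODEL", "openai:gpt-5")
agent_model = os.getenv("AGENT_MODEL", "openai:gpt-5-nano")
validation_budget = int(os.getenv("VALIDATION_BUDGET", "20"))
max_pool_size = int(os.getenv("MAX_POOL_SIZE", "16"))
```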
## Development
```bash
# Install with dev dependencies
uv pip install -e ".[dev]"
# Run tests
make test
# Format and lint
make format && make lint
# Type check
make type-check
```
## Why This Approach Works
The combination of mini-batch gating and individual case tracking prevents two common optimization problems:
1. **Expensive Evaluation**: Mini-batches mean you only pay for full evaluation on promising candidates; with `minibatch_size=4`, a rejected candidate costs just four case runs rather than a full validation pass
2. **Premature Convergence**: Weighted sampling based on individual case wins maintains diversity
The memory system addresses a key weakness in memoryless optimization: when you get stuck, the system learns from its failures and tries more creative approaches.
## License
MIT License