| Field | Value |
|---|---|
| Name | pydantic-ai |
| Version | 1.0.6 |
| Summary | Agent Framework / shim to use Pydantic with LLMs |
| Upload time | 2025-09-12 23:16:58 |
| Home page | None |
| Author | None |
| Maintainer | None |
| Docs URL | None |
| Requires Python | >=3.10 |
| License | None |
| Keywords | None |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
<div align="center">
<a href="https://ai.pydantic.dev/">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://ai.pydantic.dev/img/pydantic-ai-dark.svg">
<img src="https://ai.pydantic.dev/img/pydantic-ai-light.svg" alt="Pydantic AI">
</picture>
</a>
</div>
<div align="center">
<h3>GenAI Agent Framework, the Pydantic way</h3>
</div>
<div align="center">
<a href="https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml?query=branch%3Amain"><img src="https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml/badge.svg?event=push" alt="CI"></a>
<a href="https://coverage-badge.samuelcolvin.workers.dev/redirect/pydantic/pydantic-ai"><img src="https://coverage-badge.samuelcolvin.workers.dev/pydantic/pydantic-ai.svg" alt="Coverage"></a>
<a href="https://pypi.python.org/pypi/pydantic-ai"><img src="https://img.shields.io/pypi/v/pydantic-ai.svg" alt="PyPI"></a>
<a href="https://github.com/pydantic/pydantic-ai"><img src="https://img.shields.io/pypi/pyversions/pydantic-ai.svg" alt="versions"></a>
<a href="https://github.com/pydantic/pydantic-ai/blob/main/LICENSE"><img src="https://img.shields.io/github/license/pydantic/pydantic-ai.svg?v" alt="license"></a>
<a href="https://logfire.pydantic.dev/docs/join-slack/"><img src="https://img.shields.io/badge/Slack-Join%20Slack-4A154B?logo=slack" alt="Join Slack" /></a>
</div>
---
**Documentation**: [ai.pydantic.dev](https://ai.pydantic.dev/)
---
### <em>Pydantic AI is a Python agent framework designed to help you quickly, confidently, and painlessly build production-grade applications and workflows with Generative AI.</em>
FastAPI revolutionized web development by offering an innovative and ergonomic design, built on the foundation of [Pydantic Validation](https://docs.pydantic.dev) and modern Python features like type hints.
Yet despite virtually every Python agent framework and LLM library using Pydantic Validation, when we began to use LLMs in [Pydantic Logfire](https://pydantic.dev/logfire), we couldn't find anything that gave us the same feeling.
We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI app and agent development.
## Why use Pydantic AI
1. **Built by the Pydantic Team**:
[Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:
2. **Model-agnostic**:
Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
3. **Seamless Observability**:
Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and tracking of behavior, traces, and costs. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).
4. **Fully Type-safe**:
Designed to give your IDE or AI coding agent as much context as possible for auto-completion and [type checking](https://ai.pydantic.dev/agents#static-type-checking), moving entire classes of errors from runtime to write-time for a bit of that Rust "if it compiles, it works" feel.
5. **Powerful Evals**:
Enables you to systematically test and [evaluate](https://ai.pydantic.dev/evals) the performance and accuracy of the agentic systems you build, and monitor the performance over time in Pydantic Logfire.
6. **MCP, A2A, and AG-UI**:
Integrates the [Model Context Protocol](https://ai.pydantic.dev/mcp/client), [Agent2Agent](https://ai.pydantic.dev/a2a), and [AG-UI](https://ai.pydantic.dev/ag-ui) standards to give your agent access to external tools and data, let it interoperate with other agents, and build interactive applications with streaming event-based communication.
7. **Human-in-the-Loop Tool Approval**:
Easily lets you flag that certain tool calls [require approval](https://ai.pydantic.dev/deferred-tools#human-in-the-loop-tool-approval) before they can proceed, possibly depending on tool call arguments, conversation history, or user preferences.
8. **Durable Execution**:
Enables you to build [durable agents](https://ai.pydantic.dev/temporal) that can preserve their progress across transient API failures and application errors or restarts, and handle long-running, asynchronous, and human-in-the-loop workflows with production-grade reliability.
9. **Streamed Outputs**:
Provides the ability to [stream](https://ai.pydantic.dev/output#streamed-results) structured output continuously, with immediate validation, ensuring real-time access to generated data (see the sketch after this list).
10. **Graph Support**:
Provides a powerful way to define [graphs](https://ai.pydantic.dev/graph) using type hints, for use in complex applications where standard control flow can degrade to spaghetti code.
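As a small taste of the streamed-output support from item 9, here is a minimal sketch, assuming a model provider is configured (e.g. an OpenAI API key in the environment); the model name is arbitrary and the streaming helpers (`run_stream`, `stream_text`) are the ones described in the streamed-results documentation linked above:

```python
import asyncio

from pydantic_ai import Agent

# A throwaway agent; any supported 'provider:model_name' identifier could be used here.
agent = Agent('openai:gpt-4o', instructions='Be concise.')


async def main():
    # run_stream is used as an async context manager; stream_text() yields the
    # response text incrementally as the model produces it.
    async with agent.run_stream('Summarize what Pydantic AI is in one sentence.') as response:
        async for text in response.stream_text():
            print(text)


if __name__ == '__main__':
    asyncio.run(main())
```

Streaming validated *structured* output (a Pydantic model rather than plain text) works through the same `run_stream` entry point, as covered in the docs linked above.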
Realistically though, no list is going to be as convincing as [giving it a try](#next-steps) and seeing how it makes you feel!
## Hello World Example
Here's a minimal example of Pydantic AI:
```python
from pydantic_ai import Agent

# Define a very simple agent, including the model to use; you can also set the model when running the agent.
agent = Agent(
    'anthropic:claude-sonnet-4-0',
    # Register static instructions using a keyword argument to the agent.
    # For more complex dynamically-generated instructions, see the example below.
    instructions='Be concise, reply with one sentence.',
)

# Run the agent synchronously, conducting a conversation with the LLM.
result = agent.run_sync('Where does "hello world" come from?')
print(result.output)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
_(This example is complete; it can be run "as is", assuming you've [installed the `pydantic_ai` package](https://ai.pydantic.dev/install).)_
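As the comment in the code above notes, the model can also be chosen per run rather than fixed at construction time; a minimal sketch (the alternative model identifier below is just an illustration):

```python
# Override the agent's default model for a single run; any supported
# 'provider:model_name' identifier can be passed via the `model` keyword.
result = agent.run_sync('Where does "hello world" come from?', model='openai:gpt-4o')
print(result.output)
```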
The exchange will be very short: Pydantic AI will send the instructions and the user prompt to the LLM, and the model will return a text response.
Not very interesting yet, but we can easily add [tools](https://ai.pydantic.dev/tools), [dynamic instructions](https://ai.pydantic.dev/agents#instructions), and [structured outputs](https://ai.pydantic.dev/output) to build more powerful agents.
## Tools & Dependency Injection Example
Here is a concise example using Pydantic AI to build a support agent for a bank:
**(Better documented example [in the docs](https://ai.pydantic.dev/#tools-dependency-injection-example))**
```python
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


# SupportDependencies is used to pass data, connections, and logic into the model that will be needed when running
# instructions and tool functions. Dependency injection provides a type-safe way to customise the behavior of your agents.
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


# This Pydantic model defines the structure of the output returned by the agent.
class SupportOutput(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# This agent will act as first-tier support in a bank.
# Agents are generic in the type of dependencies they accept and the type of output they return.
# In this case, the support agent has type `Agent[SupportDependencies, SupportOutput]`.
support_agent = Agent(
    'openai:gpt-5',
    deps_type=SupportDependencies,
    # The response from the agent is guaranteed to be a SupportOutput;
    # if validation fails, the agent is prompted to try again.
    output_type=SupportOutput,
    instructions=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)


# Dynamic instructions can make use of dependency injection.
# Dependencies are carried via the `RunContext` argument, which is parameterized with the `deps_type` from above.
# If the type annotation here is wrong, static type checkers will catch it.
@support_agent.instructions
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


# The `tool` decorator lets you register functions which the LLM may call while responding to a user.
# Again, dependencies are carried via `RunContext`; any other arguments become the tool schema passed to the LLM.
# Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
    """Returns the customer's current account balance."""
    # The docstring of a tool is also passed to the LLM as the description of the tool.
    # Parameter descriptions are extracted from the docstring and added to the parameter schema sent to the LLM.
    balance = await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
    return balance


...  # In a real use case, you'd add more tools and a longer system prompt


async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())
    # Run the agent asynchronously, conducting a conversation with the LLM until a final response is reached.
    # Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve an output.
    result = await support_agent.run('What is my balance?', deps=deps)
    # The `result.output` will be validated with Pydantic to guarantee it is a `SupportOutput`. Since the agent is generic,
    # it'll also be typed as a `SupportOutput` to aid with static type checking.
    print(result.output)
    """
    support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
    """

    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.output)
    """
    support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
    """
```
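The example defines `main()` but never invokes it; to actually execute it you could, for instance, drive it with `asyncio.run` (a sketch that assumes the illustrative `bank_database.DatabaseConn` from the example is importable and can connect):

```python
import asyncio

if __name__ == '__main__':
    # DatabaseConn is the illustrative dependency used in the example above.
    asyncio.run(main())
```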
## Next Steps
To try Pydantic AI for yourself, [install it](https://ai.pydantic.dev/install) and follow the instructions [in the examples](https://ai.pydantic.dev/examples/setup).
Read the [docs](https://ai.pydantic.dev/agents/) to learn more about building applications with Pydantic AI.
Read the [API Reference](https://ai.pydantic.dev/api/agent/) to understand Pydantic AI's interface.
Join [Slack](https://logfire.pydantic.dev/docs/join-slack/) or file an issue on [GitHub](https://github.com/pydantic/pydantic-ai/issues) if you have any questions.