| Name | fasta2a |
| Version | 0.5.0 |
| Summary | Convert an AI Agent into an A2A server! ✨ |
| upload_time | 2025-07-10 16:31:01 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | None |
| keywords | None |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# FastA2A
[CI](https://github.com/pydantic/pydantic-ai/actions/workflows/ci.yml?query=branch%3Amain)
[Coverage](https://coverage-badge.samuelcolvin.workers.dev/redirect/pydantic/pydantic-ai)
[PyPI](https://pypi.python.org/pypi/fasta2a)
[GitHub](https://github.com/pydantic/pydantic-ai)
[License](https://github.com/pydantic/pydantic-ai/blob/main/LICENSE)
**FastA2A** is a framework-agnostic implementation of the A2A protocol in Python.
The library is designed to work with any agentic framework and is not exclusive to PydanticAI.

## Installation
**FastA2A** is available on PyPI as [`fasta2a`](https://pypi.org/project/fasta2a/) so installation is as simple as:
```bash
pip install fasta2a # or `uv add fasta2a`
```
The only dependencies are:
- [starlette](https://www.starlette.io): to expose the A2A server as an [ASGI application](https://asgi.readthedocs.io/en/latest/)
- [pydantic](https://pydantic.dev): to validate the request/response messages
- [opentelemetry-api](https://opentelemetry-python.readthedocs.io/en/latest): to provide tracing capabilities
## Usage
To use **FastA2A**, you need to bring the `Storage`, `Broker` and `Worker` components.

**FastA2A** was designed with the mindset that the worker can, and should, live outside the web server:
you can have a worker that runs in a different process, or even on a different machine.

You can use the `InMemoryStorage` and `InMemoryBroker` to get started, but you'll need to implement the `Worker`
to be able to execute tasks with your agentic framework. Let's see an example:
```python
import uuid
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from typing import Any

from fasta2a import FastA2A, Worker
from fasta2a.broker import InMemoryBroker
from fasta2a.schema import Artifact, Message, TaskIdParams, TaskSendParams, TextPart
from fasta2a.storage import InMemoryStorage

Context = list[Message]
"""The shape of the context you store in the storage."""


class InMemoryWorker(Worker[Context]):
    async def run_task(self, params: TaskSendParams) -> None:
        task = await self.storage.load_task(params['id'])
        assert task is not None

        await self.storage.update_task(task['id'], state='working')

        context = await self.storage.load_context(task['context_id']) or []
        context.extend(task.get('history', []))

        # Call your agent here...
        message = Message(
            role='agent',
            parts=[TextPart(text=f'Your context is {len(context) + 1} messages long.', kind='text')],
            kind='message',
            message_id=str(uuid.uuid4()),
        )

        # Append the new message to the context.
        context.append(message)

        artifacts = self.build_artifacts(123)
        await self.storage.update_context(task['context_id'], context)
        await self.storage.update_task(task['id'], state='completed', new_messages=[message], new_artifacts=artifacts)

    async def cancel_task(self, params: TaskIdParams) -> None: ...

    def build_message_history(self, history: list[Message]) -> list[Any]: ...

    def build_artifacts(self, result: Any) -> list[Artifact]: ...


storage = InMemoryStorage()
broker = InMemoryBroker()
worker = InMemoryWorker(storage=storage, broker=broker)


@asynccontextmanager
async def lifespan(app: FastA2A) -> AsyncIterator[None]:
    async with app.task_manager:
        async with worker.run():
            yield


app = FastA2A(storage=storage, broker=broker, lifespan=lifespan)
```
_You can run this example as is with `uvicorn main:app --reload`._
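Once the server is running, clients talk to it over JSON-RPC. As a rough, hedged sketch (the method and field names below follow the A2A specification as understood here, not an API confirmed by this document; verify them against the spec before relying on them), a request body for sending a message might be built like this:

```python
import json
import uuid

# Hypothetical JSON-RPC 2.0 envelope for A2A's message-sending method.
# The message shape (role/parts/kind/message_id) mirrors the Message
# used in the worker example above; the envelope itself is an assumption.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello, agent!"}],
            "kind": "message",
            "message_id": str(uuid.uuid4()),
        }
    },
}
body = json.dumps(request)
```

You would POST this body to the running server with any HTTP client; the response would contain the created Task, including its `id` and `context_id`.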
### Using PydanticAI
Initially, **FastA2A** lived in the **PydanticAI** repository, but based on community feedback
we've decided to move it to a separate repository.
> [!NOTE]
> Other agentic frameworks are welcome to implement the `Worker` component, and we'll be happy to add the reference here.
For reference, you can [check the PydanticAI implementation of the `Worker`](https://github.com/pydantic/pydantic-ai/blob/3ef42ed9a1a2c799bb94a5a69c80aa9e8968ca72/pydantic_ai_slim/pydantic_ai/_a2a.py#L115-L304).
Let's see how to use it in practice:
```python
from pydantic_ai import Agent
agent = Agent('openai:gpt-4.1')
app = agent.to_a2a()
```
_You can run this example as is with `uvicorn main:app --reload`._
As you can see, it's pretty easy from the point of view of a developer using your agentic framework.
## Design
**FastA2A** is built on top of [Starlette](https://www.starlette.io/), which means it's fully compatible
with any ASGI server.
Given the nature of the A2A protocol, it's important to understand the design before using it.
As a developer, you'll need to provide some components:
- **Storage**: to save and load tasks and the conversation context
- **Broker**: to schedule tasks
- **Worker**: to execute tasks
Let's have a look at how those components fit together:
```mermaid
flowchart TB
    Server["HTTP Server"] <--> |Sends Requests/<br>Receives Results| TM

    subgraph CC[Core Components]
        direction RL
        TM["TaskManager<br>(coordinates)"] --> |Schedules Tasks| Broker
        TM <--> Storage
        Broker["Broker<br>(queues & schedules)"] <--> Storage["Storage<br>(persistence)"]
        Broker --> |Delegates Execution| Worker
    end

    Worker["Worker<br>(implementation)"]
```
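To make the flow concrete, here is a minimal, stdlib-only sketch of the same pipeline. These classes are simplified stand-ins for illustration, not FastA2A's actual interfaces: the broker queues task ids, and the worker pops an id, loads the task from storage, and writes the result back.

```python
import asyncio


class SketchStorage:
    """Stand-in for Storage: persists tasks by id."""

    def __init__(self) -> None:
        self.tasks: dict[str, dict] = {}


class SketchBroker:
    """Stand-in for Broker: queues task ids for execution."""

    def __init__(self) -> None:
        self.queue: asyncio.Queue = asyncio.Queue()

    async def schedule(self, task_id: str) -> None:
        await self.queue.put(task_id)


class SketchWorker:
    """Stand-in for Worker: pops task ids and executes them."""

    def __init__(self, storage: SketchStorage, broker: SketchBroker) -> None:
        self.storage = storage
        self.broker = broker

    async def run_one(self) -> None:
        task_id = await self.broker.queue.get()
        task = self.storage.tasks[task_id]
        task["state"] = "completed"  # a real worker calls your agent here


async def main() -> str:
    storage, broker = SketchStorage(), SketchBroker()
    worker = SketchWorker(storage, broker)
    storage.tasks["t1"] = {"state": "submitted"}
    await broker.schedule("t1")  # the TaskManager schedules the task
    await worker.run_one()       # the Worker executes it
    return storage.tasks["t1"]["state"]


state = asyncio.run(main())
print(state)  # completed
```

Because the broker is just a queue of task ids, the real worker can live in another process or on another machine, as long as it shares the broker and storage.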
**FastA2A** allows you to bring your own `Storage`, `Broker` and `Worker`.
You can also leverage the in-memory implementations of `Storage` and `Broker` by using
the `InMemoryStorage` and `InMemoryBroker`:
```python
from fasta2a import InMemoryStorage, InMemoryBroker
storage = InMemoryStorage()
broker = InMemoryBroker()
```
### Tasks and Context
**FastA2A** is opinionated about the A2A protocol.
When the server receives a message, the specification allows it to choose between:
- sending a **stateless** message back to the client, or
- creating a **stateful** Task and running it in the background.

**FastA2A** will **always** create a Task and run it in the background (on the `Worker`).
> [!NOTE]
> You can read more about it [here](https://a2aproject.github.io/A2A/latest/topics/life-of-a-task/).
- **Task**: Represents one complete execution of an agent. When a client sends a message to the agent,
a new task is created. The agent runs until completion (or failure), and this entire execution is
considered one task. The final output should be stored as a task artifact.
- **Context**: Represents a conversation thread that can span multiple tasks. The A2A protocol uses a
`context_id` to maintain conversation continuity:
- When a new message is sent without a `context_id`, the server generates a new one
- Subsequent messages can include the same `context_id` to continue the conversation
- All tasks sharing the same `context_id` have access to the complete message history
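The `context_id` rules above can be sketched in a few lines. `resolve_context_id` is a hypothetical helper written for illustration, not part of FastA2A's API:

```python
import uuid
from typing import Optional

# context_id -> message history shared by all tasks in that thread
contexts: dict = {}


def resolve_context_id(context_id: Optional[str]) -> str:
    """Reuse the client's context_id if given; otherwise start a new thread."""
    if context_id is None:
        context_id = str(uuid.uuid4())  # server generates a fresh id
    contexts.setdefault(context_id, [])
    return context_id


# First message without a context_id: the server generates one.
cid = resolve_context_id(None)
# A follow-up carrying the same context_id continues the conversation.
assert resolve_context_id(cid) == cid
```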
#### Storage
The `Storage` component serves two purposes:
1. **Task Storage**: Stores tasks in A2A protocol format, including their status, artifacts, and message history
2. **Context Storage**: Stores conversation context in a format optimized for the specific agent implementation
This design allows for agents to store rich internal state (e.g., tool calls, reasoning traces) as well as store task-specific A2A-formatted messages and artifacts.
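A minimal sketch of this dual-purpose design (illustrative only; FastA2A's real `Storage` is async and typed against the A2A schema):

```python
class DualStorage:
    """Keeps A2A-format tasks and framework-specific context separately."""

    def __init__(self) -> None:
        self.tasks: dict[str, dict] = {}     # A2A protocol format
        self.contexts: dict[str, object] = {}  # whatever your agent needs

    def update_task(self, task_id: str, **fields: object) -> None:
        self.tasks.setdefault(task_id, {"id": task_id}).update(fields)

    def update_context(self, context_id: str, context: object) -> None:
        # Rich internal state, e.g. tool calls and reasoning traces,
        # not just the A2A-formatted messages.
        self.contexts[context_id] = context


store = DualStorage()
store.update_task("t1", state="completed")
store.update_context("c1", {"tool_calls": [], "messages": []})
```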
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Raw data

{
    "_id": null,
    "home_page": null,
    "name": "fasta2a",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "Marcelo Trylesinski <marcelotryle@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/5d/2a/f9d212026bdc74068ef9aef493a2b37ce0d4201694d158180759e07489b5/fasta2a-0.5.0.tar.gz",
    "platform": null,
    "bugtrack_url": null,
    "license": null,
    "summary": "Convert an AI Agent into a A2A server! \u2728",
    "version": "0.5.0",
    "project_urls": {
        "Changelog": "https://github.com/pydantic/fasta2a/releases",
        "Documentation": "https://pydantic.github.io/fasta2a",
        "Homepage": "https://pydantic.github.io/fasta2a",
        "Source": "https://github.com/pydantic/fasta2a"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "c508d25f303013a04e2bec68ed97c4f4f85ad9c178fc582e8e4345147fd141fb",
                "md5": "ba7612b64ed1fb8dcd2dae58cbc00ce8",
                "sha256": "806f4bbd6cd2858ca631d47e75f3bbf4746ff0752ccca38edbfe85930c4ffbe2"
            },
            "downloads": -1,
            "filename": "fasta2a-0.5.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "ba7612b64ed1fb8dcd2dae58cbc00ce8",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 25198,
            "upload_time": "2025-07-10T16:30:59",
            "upload_time_iso_8601": "2025-07-10T16:30:59.938033Z",
            "url": "https://files.pythonhosted.org/packages/c5/08/d25f303013a04e2bec68ed97c4f4f85ad9c178fc582e8e4345147fd141fb/fasta2a-0.5.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "5d2af9d212026bdc74068ef9aef493a2b37ce0d4201694d158180759e07489b5",
                "md5": "cea9f45d9de71fa57a5096f22ca64293",
                "sha256": "0bca45f675fb3354ae6cd0e6dd0be1d504ee135b8e802b4058fb3485521f61e9"
            },
            "downloads": -1,
            "filename": "fasta2a-0.5.0.tar.gz",
            "has_sig": false,
            "md5_digest": "cea9f45d9de71fa57a5096f22ca64293",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 1436123,
            "upload_time": "2025-07-10T16:31:01",
            "upload_time_iso_8601": "2025-07-10T16:31:01.502148Z",
            "url": "https://files.pythonhosted.org/packages/5d/2a/f9d212026bdc74068ef9aef493a2b37ce0d4201694d158180759e07489b5/fasta2a-0.5.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-10 16:31:01",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "pydantic",
    "github_project": "fasta2a",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "fasta2a"
}