| Field | Value |
| --- | --- |
| Name | fast-agent-mcp |
| Version | 0.2.43 |
| home_page | None |
| Summary | Define, Prompt and Test MCP enabled Agents and Workflows |
| upload_time | 2025-07-16 21:32:53 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.12 |
| license | Apache License, Version 2.0 (full text at http://www.apache.org/licenses/LICENSE-2.0); Copyright 2025 llmindset.co.uk |
| keywords | None |
| requirements | No requirements were recorded. |
<p align="center">
<a href="https://pypi.org/project/fast-agent-mcp/"><img src="https://img.shields.io/pypi/v/fast-agent-mcp?color=%2334D058&label=pypi" /></a>
<a href="#"><img src="https://github.com/evalstate/fast-agent/actions/workflows/main-checks.yml/badge.svg" /></a>
<a href="https://github.com/evalstate/fast-agent/issues"><img src="https://img.shields.io/github/issues-raw/evalstate/fast-agent" /></a>
<a href="https://discord.gg/xg5cJ7ndN6"><img src="https://img.shields.io/discord/1358470293990936787" alt="discord" /></a>
<img alt="Pepy Total Downloads" src="https://img.shields.io/pepy/dt/fast-agent-mcp?label=pypi%20%7C%20downloads"/>
<a href="https://github.com/evalstate/fast-agent-mcp/blob/main/LICENSE"><img src="https://img.shields.io/pypi/l/fast-agent-mcp" /></a>
</p>
## Overview
> [!TIP]
> The documentation site is live at https://fast-agent.ai. Feel free to feed back what's helpful and what's not. There is also an llms.txt [here](https://fast-agent.ai/llms.txt).
**`fast-agent`** enables you to create and interact with sophisticated Agents and Workflows in minutes. It is the first framework with complete, end-to-end tested MCP Feature support, including Sampling. Model support is comprehensive, with native support for Anthropic, OpenAI and Google, as well as Azure, Ollama, Deepseek and dozens of others via TensorZero.

The simple declarative syntax lets you concentrate on composing your Prompts and MCP Servers to [build effective agents](https://www.anthropic.com/research/building-effective-agents).
`fast-agent` is multi-modal, supporting Images and PDFs for both Anthropic and OpenAI endpoints via Prompts, Resources and MCP Tool Call results. The inclusion of passthrough and playback LLMs enables rapid development and testing of Python glue code for your applications.
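For example, a minimal test sketch using the `passthrough` model - assuming, as here, that it simply echoes the message back rather than calling a real LLM:
```python
import asyncio
from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("Glue-code Test")

@fast.agent("echo", "test agent", model="passthrough")  # no real LLM call
async def main():
    async with fast.run() as agent:
        # assumed behaviour: the passthrough model returns its input verbatim
        reply = await agent.echo.send("hello")
        assert reply == "hello"

if __name__ == "__main__":
    asyncio.run(main())
```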
> [!IMPORTANT]
>
> The fast-agent documentation repo is here: https://github.com/evalstate/fast-agent-docs. Please feel free to submit PRs for documentation, experience reports or other content you think others may find helpful. All help and feedback is warmly received.
### Agent Application Development
Prompts and configurations that define your Agent Applications are stored in simple files with minimal boilerplate, enabling easy management and version control.
Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application. Agents can request human input to get additional context for task completion.
Simple model selection makes testing Model <-> MCP Server interaction painless. You can read more about the motivation behind this project [here](https://llmindset.co.uk/resources/fast-agent/).

## Get Started
Start by installing the [uv package manager](https://docs.astral.sh/uv/) for Python. Then:
```bash
uv pip install fast-agent-mcp # install fast-agent!
fast-agent go # start an interactive session
fast-agent go https://hf.co/mcp # with a remote MCP
fast-agent go --model=generic.qwen2.5 # use ollama qwen 2.5
fast-agent setup # create an example agent and config files
uv run agent.py # run your first agent
uv run agent.py --model=o3-mini.low # specify a model
fast-agent quickstart workflow # create "building effective agents" examples
```
Other quickstart examples include a Researcher Agent (with Evaluator-Optimizer workflow) and Data Analysis Agent (similar to the ChatGPT experience), demonstrating MCP Roots support.
> [!TIP]
> Windows users: a couple of configuration changes are needed for the Filesystem and Docker MCP Servers; the necessary changes are detailed within the configuration files.
### Basic Agents
Defining an agent is as simple as:
```python
@fast.agent(
instruction="Given an object, respond only with an estimate of its size."
)
```
We can then send messages to the Agent:
```python
async with fast.run() as agent:
    moon_size = await agent("the moon")
    print(moon_size)
```
Or start an interactive chat with the Agent:
```python
async with fast.run() as agent:
    await agent.interactive()
```
Here is the complete `sizer.py` Agent application, with boilerplate code:
```python
import asyncio
from mcp_agent.core.fastagent import FastAgent

# Create the application
fast = FastAgent("Agent Example")

@fast.agent(
    instruction="Given an object, respond only with an estimate of its size."
)
async def main():
    async with fast.run() as agent:
        await agent.interactive()

if __name__ == "__main__":
    asyncio.run(main())
```
The Agent can then be run with `uv run sizer.py`.
Specify a model with the `--model` switch - for example `uv run sizer.py --model sonnet`.
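The model can also be pinned in code via the decorator's `model` parameter (documented under Defining Agents below) - a minimal sketch:
```python
@fast.agent(
    instruction="Given an object, respond only with an estimate of its size.",
    model="sonnet",  # fix the model in code instead of passing --model on the command line
)
```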
### Combining Agents and using MCP Servers
_To generate examples use `fast-agent quickstart workflow`. This example can be run with `uv run workflow/chaining.py`. fast-agent looks for configuration files in the current directory before checking parent directories recursively._
Agents can be chained to build a workflow, using MCP Servers defined in the `fastagent.config.yaml` file:
```python
@fast.agent(
    "url_fetcher",
    "Given a URL, provide a complete and comprehensive summary",
    servers=["fetch"],  # Name of an MCP Server defined in fastagent.config.yaml
)
@fast.agent(
    "social_media",
    """
    Write a 280 character social media post for any given text.
    Respond only with the post, never use hashtags.
    """,
)
@fast.chain(
    name="post_writer",
    sequence=["url_fetcher", "social_media"],
)
async def main():
    async with fast.run() as agent:
        # using the chain workflow
        await agent.post_writer("http://llmindset.co.uk")
```
All Agents and Workflows respond to `.send("message")` or `.prompt()` to begin a chat session.
Saved as `social.py`, we can now run this workflow from the command line with:
```bash
uv run social.py --agent post_writer --message "<url>"
```
Add the `--quiet` switch to disable progress and message display and return only the final response - useful for simple automations.
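Programmatically, the same workflow responds to `.send()` and `.prompt()` as described above - a minimal sketch:
```python
async with fast.run() as agent:
    # one-shot message to the chain
    result = await agent.post_writer.send("http://llmindset.co.uk")
    # or open an interactive chat session with it
    await agent.post_writer.prompt()
```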
## Workflows
### Chain
The `chain` workflow offers a more declarative approach to calling Agents in sequence:
```python
@fast.chain(
    "post_writer",
    sequence=["url_fetcher", "social_media"]
)

# we can then prompt it directly:
async with fast.run() as agent:
    await agent.post_writer()
```
This starts an interactive session, which produces a short social media post for a given URL. If a _chain_ is prompted, it returns to a chat with the last Agent in the chain. You can switch the agent to prompt by typing `@agent-name`.
Chains can be incorporated in other workflows, or contain other workflow elements (including other Chains). You can set an `instruction` to precisely describe its capabilities to other workflow steps if needed.
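For example, a sketch of a chain embedded in a router (the `helpdesk` agent here is hypothetical):
```python
@fast.chain(
    name="post_writer",
    sequence=["url_fetcher", "social_media"],
    instruction="Fetches a URL and writes a short social media post about it.",
)
@fast.router(
    name="front_desk",
    agents=["post_writer", "helpdesk"],  # "helpdesk" is a hypothetical second agent
)
```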
### Human Input
Agents can request Human Input to assist with a task or get additional context:
```python
@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
    human_input=True,
)

await agent("print the next number in the sequence")
```
In the example `human_input.py`, the Agent will prompt the User for additional information to complete the task.
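A complete, runnable version might look like this - a sketch following the same boilerplate as `sizer.py` above:
```python
import asyncio
from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("Human Input Example")

@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
    human_input=True,
)
async def main():
    async with fast.run() as agent:
        # the agent may pause here and ask the user which sequence is meant
        await agent("print the next number in the sequence")

if __name__ == "__main__":
    asyncio.run(main())
```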
### Parallel
The Parallel Workflow sends the same message to multiple Agents simultaneously (`fan-out`), then uses the `fan-in` Agent to process the combined content.
```python
@fast.agent("translate_fr", "Translate the text to French")
@fast.agent("translate_de", "Translate the text to German")
@fast.agent("translate_es", "Translate the text to Spanish")
@fast.parallel(
name="translate",
fan_out=["translate_fr","translate_de","translate_es"]
)
@fast.chain(
"post_writer",
sequence=["url_fetcher","social_media","translate"]
)
```
If you don't specify a `fan-in` agent, the `parallel` returns the combined Agent results verbatim.
`parallel` is also useful for ensembling ideas from different LLMs.
When using `parallel` in other workflows, specify an `instruction` to describe its operation.
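For example, a sketch that adds a hypothetical `aggregator` agent as the `fan-in`:
```python
@fast.agent("aggregator", "Combine the translations into a single multilingual post.")  # hypothetical fan-in agent

@fast.parallel(
    name="translate",
    fan_out=["translate_fr", "translate_de", "translate_es"],
    fan_in="aggregator",  # processes the combined fan-out results
)
```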
### Evaluator-Optimizer
Evaluator-Optimizers combine two agents: one to generate content (the `generator`), and the other to judge that content and provide actionable feedback (the `evaluator`). Messages are sent to the generator first, then the pair run in a loop until either the evaluator is satisfied with the quality or the maximum number of refinements is reached. The final result from the Generator is returned.
If the Generator has `use_history` off, the previous iteration is returned when asking for improvements - otherwise conversational context is used.
```python
@fast.evaluator_optimizer(
    name="researcher",
    generator="web_searcher",
    evaluator="quality_assurance",
    min_rating="EXCELLENT",
    max_refinements=3
)

async with fast.run() as agent:
    await agent.researcher.send("produce a report on how to make the perfect espresso")
```
When used in a workflow, it returns the last `generator` message as the result.
See the `evaluator.py` workflow example, or `fast-agent quickstart researcher` for a more complete example.
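Like other workflows, an Evaluator-Optimizer can be used as a step in a larger composition - a sketch (the `formatter` agent here is hypothetical):
```python
@fast.chain(
    name="report_pipeline",
    sequence=["researcher", "formatter"],  # "researcher" is the evaluator-optimizer defined above
)
```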
### Router
Routers use an LLM to assess a message, and route it to the most appropriate Agent. The routing prompt is automatically generated based on the Agent instructions and available Servers.
```python
@fast.router(
    name="route",
    agents=["agent1", "agent2", "agent3"]
)
```
Look at the `router.py` workflow for an example.
### Orchestrator
Given a complex task, the Orchestrator uses an LLM to generate a plan to divide the task amongst the available Agents. The planning and aggregation prompts are generated by the Orchestrator, which benefits from using more capable models. Plans can either be built once at the beginning (`plan_type="full"`) or iteratively (`plan_type="iterative"`).
```python
@fast.orchestrator(
    name="orchestrate",
    agents=["task1", "task2", "task3"]
)
```
See the `orchestrator.py` or `agent_build.py` workflow examples.
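For iterative planning, a sketch using the `plan_type` parameter from the reference section below:
```python
@fast.orchestrator(
    name="orchestrate",
    agents=["task1", "task2", "task3"],
    plan_type="iterative",  # re-plan after each step instead of building one up-front plan
)
```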
## Agent Features
### Calling Agents
All definitions allow omitting the name and instruction arguments for brevity:
```python
@fast.agent("You are a helpful agent") # Create an agent with a default name.
@fast.agent("greeter","Respond cheerfully!") # Create an agent with the name "greeter"
moon_size = await agent("the moon") # Call the default (first defined agent) with a message
result = await agent.greeter("Good morning!") # Send a message to an agent by name using dot notation
result = await agent.greeter.send("Hello!") # You can call 'send' explicitly
await agent.greeter() # If no message is specified, a chat session will open
await agent.greeter.prompt() # that can be made more explicit
await agent.greeter.prompt(default_prompt="OK") # and supports setting a default prompt
agent["greeter"].send("Good Evening!") # Dictionary access is supported if preferred
```
### Defining Agents
#### Basic Agent
```python
@fast.agent(
    name="agent",                                   # name of the agent
    instruction="You are a helpful Agent",          # base instruction for the agent
    servers=["filesystem"],                         # list of MCP Servers for the agent
    model="o3-mini.high",                           # specify a model for the agent
    use_history=True,                               # agent maintains chat history
    request_params=RequestParams(temperature=0.7),  # additional parameters for the LLM (or RequestParams())
    human_input=True,                               # agent can request human input
)
)
```
#### Chain
```python
@fast.chain(
    name="chain",                        # name of the chain
    sequence=["agent1", "agent2", ...],  # list of agents in execution order
    instruction="instruction",           # instruction to describe the chain for other workflows
    cumulative=False,                    # whether to accumulate messages through the chain
    continue_with_final=True,            # open chat with agent at end of chain after prompting
)
)
```
#### Parallel
```python
@fast.parallel(
    name="parallel",               # name of the parallel workflow
    fan_out=["agent1", "agent2"],  # list of agents to run in parallel
    fan_in="aggregator",           # name of agent that combines results (optional)
    instruction="instruction",     # instruction to describe the parallel for other workflows
    include_request=True,          # include original request in fan-in message
)
)
```
#### Evaluator-Optimizer
```python
@fast.evaluator_optimizer(
    name="researcher",              # name of the workflow
    generator="web_searcher",       # name of the content generator agent
    evaluator="quality_assurance",  # name of the evaluator agent
    min_rating="GOOD",              # minimum acceptable quality (EXCELLENT, GOOD, FAIR, POOR)
    max_refinements=3,              # maximum number of refinement iterations
)
)
```
#### Router
```python
@fast.router(
    name="route",                           # name of the router
    agents=["agent1", "agent2", "agent3"],  # list of agent names router can delegate to
    model="o3-mini.high",                   # specify routing model
    use_history=False,                      # whether the router maintains conversation history
    human_input=False,                      # whether router can request human input
)
)
```
#### Orchestrator
```python
@fast.orchestrator(
    name="orchestrator",          # name of the orchestrator
    instruction="instruction",    # base instruction for the orchestrator
    agents=["agent1", "agent2"],  # list of agent names this orchestrator can use
    model="o3-mini.high",         # specify orchestrator planning model
    use_history=False,            # orchestrator doesn't maintain chat history (no effect)
    human_input=False,            # whether orchestrator can request human input
    plan_type="full",             # planning approach: "full" or "iterative"
    plan_iterations=5,            # maximum number of full plan attempts, or iterations
)
)
```
### Multimodal Support
Add Resources to prompts using either the inbuilt `prompt-server` or MCP Types directly. Convenience classes are available to do this simply, for example:
```python
summary: str = await agent.with_resource(
    "Summarise this PDF please",
    "mcp_server",
    "resource://fast-agent/sample.pdf",
)
```
#### MCP Tool Result Conversion
LLM APIs have restrictions on the content types that can be returned as Tool Call/Function results via their Chat Completions APIs:
- OpenAI supports Text
- Anthropic supports Text and Image
For MCP Tool Results, `ImageResources` and `EmbeddedResources` are converted to User Messages and added to the conversation.
### Prompts
MCP Prompts are supported with `apply_prompt(name, arguments)`, which always returns an Assistant Message. If the last message from the MCP Server is a 'User' message, it is sent to the LLM for processing. Prompts applied to the Agent's Context are retained - meaning that with `use_history=False`, Agents can act as finely tuned responders.
Prompts can also be applied interactively through the interactive interface by using the `/prompt` command.
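A programmatic sketch (the prompt name and arguments here are hypothetical):
```python
async with fast.run() as agent:
    # apply an MCP Prompt by name; the Assistant's reply is returned
    result = await agent.apply_prompt("summarise_document", {"style": "brief"})
```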
### Sampling
Sampling LLMs are configured per Client/Server pair. Specify the model name in `fastagent.config.yaml` as follows:
```yaml
mcp:
  servers:
    sampling_resource:
      command: "uv"
      args: ["run", "sampling_resource_server.py"]
      sampling:
        model: "haiku"
```
### Secrets File
> [!TIP]
> fast-agent will look recursively for a `fastagent.secrets.yaml` file, so you only need to manage this at the root folder of your agent definitions.
### Interactive Shell

## Project Notes
`fast-agent` builds on the [`mcp-agent`](https://github.com/lastmile-ai/mcp-agent) project by Sarmad Qadri.
### Contributing
Contributions and PRs are welcome - feel free to raise issues to discuss. Full guidelines for contributing and roadmap coming very soon. Get in touch!