# gpt-multi-atomic-agents
A simple dynamic multi-agent framework based on [atomic-agents](https://github.com/BrainBlend-AI/atomic-agents) and [Instructor](https://github.com/instructor-ai/instructor). Uses the power of [Pydantic](https://docs.pydantic.dev) for data and schema validation and serialization.
- compose Agents made of a system prompt, with a shared language of either **Function Calls** or else **GraphQL mutations**
- convert user input into data modifications (functions or GraphQL mutations)
- to maximise user engagement, uses a 2-phase process:
  - Planning Phase:
    - a router uses an LLM to process complex 'composite' user prompts, and automatically route them to the best sequence of your agents
    - the router rewrites the user prompt, to best suit each agent
    - an execution plan is generated
    - the client can use the `router` to iterate over the execution plan, with user feedback
  - Generation Phase:
    - when the user is happy -> the client can use the `generator` to execute the plan, using the recommended agents
    - the client then receives function calls (or GraphQL mutations) to update the data
- generate via OpenAI, AWS Bedrock, or groq
- usage:
  1. as a library
  2. OR run out-of-the-box as a REST API, accepting Agents from the client
     - there is a simple [TypeScript framework](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/clients/gpt-maa-ts) for writing Agent-based TypeScript clients of the REST API
  3. OR as a command line chat-loop
- note: `!! the framework is at an early stage !!` - breaking changes will be indicated by increasing the *minor* version (the major version is still at zero).
[url_repo]: https://github.com/mrseanryan/gpt-multi-atomic-agents
[url_semver_org]: https://semver.org/
[![MIT License][img_license]][url_license]
[![Supported Python Versions][img_pyversions]][url_pyversions]
[![gpt-multi-atomic-agents][img_version]][url_version]
[![PyPI Releases][img_pypi]][url_pypi]
[![PyPI - Downloads](https://img.shields.io/pypi/dm/gpt-multi-atomic-agents.svg)](https://pypi.org/project/gpt-multi-atomic-agents)
[img_license]: https://img.shields.io/badge/License-MIT-blue.svg
[url_license]: https://github.com/mrseanryan/gpt-multi-atomic-agents/blob/master/LICENSE
[url_version]: https://pypi.org/project/gpt-multi-atomic-agents/
[img_version]: https://img.shields.io/static/v1.svg?label=SemVer&message=gpt-multi-atomic-agents&color=blue
[img_pypi]: https://img.shields.io/badge/PyPI-wheels-green.svg
[url_pypi]: https://pypi.org/project/gpt-multi-atomic-agents/#files
[img_pyversions]: https://img.shields.io/pypi/pyversions/gpt-multi-atomic-agents.svg
[url_pyversions]: https://pypi.python.org/pypi/gpt-multi-atomic-agents
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/K3K73ALBJ)
## Introduction
An LLM based Agents Framework using an Agent Oriented Programming approach to orchestrate agents using a shared language.
The agent language can either be **Function Calling** based, or else **GraphQL** based.
The framework is generic and allows agents to be defined in terms of a name, description, accepted input calls, and allowed output calls.
The agents communicate indirectly using a blackboard. The language is composed of (Function or GraphQL mutation) calls: each agent specifies what it understands as input, and what calls it is able to generate. Each agent can be configured to understand a subset of the output of the other agents. In this way, the agents can understand each other's output and collaborate.
![System overview](https://raw.githubusercontent.com/mrseanryan/gpt-multi-atomic-agents/master/images/diagram-Multi-LLM-based-Agent-collaboration-via-Dynamic-Router-GraphQL-context.jpg)
A router takes the user prompt and generates an agent execution plan.
The execution plan uses the best sequence of the most suitable agents, to handle the user prompt.
The router rewrites the user prompt to suit each agent, which improves quality and avoids unwanted output.
note: Optionally, the router can be run separately, allowing for human-in-the-loop feedback on the execution plan that the router generated. In this way, the user can collaborate more with the router, before the generative agents are actually executed.
- this allows the user to have more control over the output, and has the added benefit of reducing the *perceived* time taken to generate, since the user has intermediate interaction with the router.
Finally, the output is returned in the form of an ordered list of (Function or GraphQL) calls.
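The blackboard mechanism described above can be sketched as follows. This is a minimal illustration only; the class and method names here are hypothetical, not the framework's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of blackboard-style collaboration - NOT the framework's real API.
@dataclass
class FunctionCall:
    function_name: str
    parameters: dict

@dataclass
class Blackboard:
    calls: list = field(default_factory=list)

    def add(self, call: FunctionCall) -> None:
        self.calls.append(call)

    def calls_understood_by(self, accepted_function_names: set) -> list:
        # Each agent sees only the subset of previous calls it declares as accepted input.
        return [c for c in self.calls if c.function_name in accepted_function_names]

board = Blackboard()
board.add(FunctionCall("AddCreature", {"creature_name": "sheep"}))
board.add(FunctionCall("AddVegetation", {"vegetation_name": "grass"}))

# The Creature Creator accepts creature and relationship calls, but not vegetation:
visible = board.calls_understood_by({"AddCreature", "AddCreatureRelationship"})
```

Here the Creature Creator would see only the `AddCreature` call, while an agent configured to accept `AddVegetation` would see the other.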
To read more about this approach, you can see this [Medium article](https://medium.com/@mr.sean.ryan/multi-llm-based-agent-collaboration-via-dynamic-router-and-graphql-handle-composite-prompts-with-83e16a22a1cb).
- note: the `framework is at an early stage`. The Evaluator is not currently implemented.
## Integration
When integrating, depending on which kind of Agent Definitions are used, the client needs to:
- **Function Calling Agents:** the client implements the functions. The client executes the functions according to the results (function calls) generated by this framework.
  - this approach is less flexible, but good for simple use cases where GraphQL may be too complicated
- **GraphQL based Agents:** the client executes the GraphQL mutations on the GraphQL document they earlier submitted to the framework.
  - this approach provides the most flexibility:
    - the input is a GraphQL schema with any previously made mutation calls; the output is a set of mutation calls
    - the agents can communicate generations (modifications to data) by generating GraphQL mutations that match the given schema
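For the Function Calling case, the client-side execution might be sketched like this. The dictionary shape of the generated calls is an assumption for illustration, not the framework's actual output format:

```python
# Hedged sketch: dispatching generated function calls to client-implemented handlers.
# The call structure below is hypothetical, not the framework's wire format.
world = {"creatures": [], "relationships": []}

def add_creature(creature_name: str, **kwargs) -> None:
    world["creatures"].append({"creature_name": creature_name, **kwargs})

def add_creature_relationship(from_name: str, to_name: str, relationship_name: str) -> None:
    world["relationships"].append((from_name, to_name, relationship_name))

# Map generated function names to the client's own implementations:
HANDLERS = {
    "AddCreature": add_creature,
    "AddCreatureRelationship": add_creature_relationship,
}

generated_calls = [
    {"function_name": "AddCreature", "parameters": {"creature_name": "sheep", "age": 1}},
    {"function_name": "AddCreatureRelationship",
     "parameters": {"from_name": "sheep", "to_name": "grass", "relationship_name": "eats"}},
]

for call in generated_calls:
    HANDLERS[call["function_name"]](**call["parameters"])
```

The framework generates the calls; executing them against application data remains entirely the client's responsibility.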
## Examples [Function Calls Based Approach]
### Sim Life world builder
This is a demo 'Sim Life' world builder.
It uses 3 agents (Creature Creator, Vegetation Creator, Relationship Creator) to process user prompts.
The agents are defined in terms of functions.
The output is a series of Function Calls which can be implemented by the client, to build the Sim Life world.
#### Definitions [Function Calls Based Approach]
The AddCreature function:
```python
function_add_creature = FunctionSpecSchema(
    function_name="AddCreature",
    description="Adds a new creature to the world (not vegetation)",
    parameters=[
        ParameterSpec(name="creature_name", type=ParameterType.string),
        ParameterSpec(name="allowed_terrain", type=ParameterType.string, allowed_values=terrain_types),
        ParameterSpec(name="age", type=ParameterType.int),
        ParameterSpec(name="icon_name", type=ParameterType.string, allowed_values=creature_icons),
    ]
)
```
The AddCreatureRelationship function:
```python
function_add_relationship = FunctionSpecSchema(
    function_name="AddCreatureRelationship",
    description="Adds a new relationship between two creatures",
    parameters=[
        ParameterSpec(
            name="from_name", type=ParameterType.string
        ),
        ParameterSpec(
            name="to_name", type=ParameterType.string
        ),
        ParameterSpec(
            name="relationship_name",
            type=ParameterType.string,
            allowed_values=["eats", "buys", "feeds", "sells"],
        ),
    ],
)
```
#### Agent Definitions [Function Calls Based Approach]
The Creature Creator agent is defined declaratively in terms of:
- its description (a very short prompt)
- its input schema (a list of accepted function definitions)
- its output schema (a list of output function definitions)
Agents can collaborate and exchange information indirectly, by reusing the same function definitions via a blackboard.
```python
def build_creature_agent():
    agent_definition = build_function_agent_definition(
        agent_name="Creature Creator",
        description="Creates new creatures given the user prompt. Ensures that ALL creatures mentioned by the user are created.",
        accepted_functions=[function_add_creature, function_add_relationship],
        functions_allowed_to_generate=[function_add_creature],
        topics=["creature", "summary"]
    )

    return agent_definition
```
Notes about the Creature Creator agent:
- this agent can only generate "AddCreature" function calls.
- the agent also accepts (understands) previous "AddCreature" calls, so that it knows what has already been created.
- additionally, this agent understands a subset of the function calls of other agents: here, it understands the "AddCreatureRelationship" function defined by `function_add_relationship`. This allows the agents to collaborate. See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples/sim_life) for more details.
## Examples [GraphQL Based Approach]
### Sim Life world builder
This is a demo 'Sim Life' world builder.
It uses 3 agents (Creature Creator, Vegetation Creator, Relationship Creator) to process user prompts.
The agents are defined declaratively in terms of GraphQL input schema, and allowed generated mutations.
The output is a series of GraphQL mutations which can be executed by the client, to build the Sim Life world.
#### Definitions [GraphQL Based Approach]
The GraphQL schema:
```graphql
type Creature {
  id: ID!
  creature_name: String!
  allowed_terrain: TerrainType!
  age: Int!
  icon_name: IconType!
}

type Vegetation {
  id: ID!
  vegetation_name: String!
  icon_name: IconType!
  allowed_terrain: TerrainType!
}

type Relationship {
  id: ID!
  from_name: String!
  to_name: String!
  relationship_kind: RelationshipType!
}
...
```
The GraphQL mutations that we want the Agents to generate are distinct for each agent:
Creature Creator agent:
```graphql
type Mutation {
  addCreature(input: CreatureInput!): Creature!
}

input CreatureInput {
  creature_name: String!
  allowed_terrain: TerrainType!
  age: Int!
  icon_name: IconType!
}
```
Vegetation Creator agent:
```graphql
type Mutation {
  addVegetation(input: VegetationInput!): Vegetation!
}

input VegetationInput {
  vegetation_name: String!
  icon_name: IconType!
  allowed_terrain: TerrainType!
}
```
#### Agent Definitions [GraphQL Based Approach]
The Creature Creator agent is defined declaratively in terms of:
- its description (a very short prompt)
- its input schema (a list of accepted GraphQL schemas)
- its output schema (a list of output GraphQL mutation calls)
An agent is basically a composition of input and output schemas, together with a prompt.
Agents collaborate and exchange information indirectly via a blackboard, by reusing the same GraphQL schemas and mutation calls.
```python
creatures_graphql = _read_schema("creature.graphql")
creature_mutations_graphql = _read_schema("creature.mutations.graphql")

def build_creature_agent():
    agent_definition = build_graphql_agent_definition(
        agent_name="Creature Creator",
        description="Creates new creatures given the user prompt. Ensures that ALL creatures mentioned by the user are created.",
        accepted_graphql_schemas=[creatures_graphql, creature_mutations_graphql],
        mutations_allowed_to_generate=[creature_mutations_graphql],
        topics=["creature", "summary"]
    )

    return agent_definition
```
Notes about this agent:
- this agent can only generate mutations that are defined by `creature_mutations_graphql` from the file "creature.mutations.graphql".
- the agent also accepts (understands) previous mutation calls, so that it knows what has already been created (`creature_mutations_graphql`).
- additionally, this agent understands the shared GraphQL schema defined by `creatures_graphql` from the file "creature.graphql".
- This array of GraphQL files can also be used to allow an Agent to understand a subset of the mutations output by other agents. This allows the agents to collaborate.
- See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples/sim_life_via_graphql) for more details.
## Using the Agents in a chat loop
The agents can be used together to form a chat bot:
```python
from gpt_multi_atomic_agents import functions_expert_service, config
from .agents import build_creature_agent, build_relationship_agent, build_vegatation_agent

def run_chat_loop(given_user_prompt: str | None = None) -> list:
    CHAT_AGENT_DESCRIPTION = "Handles users questions about an ecosystem game like Sim Life"

    agent_definitions = [
        build_creature_agent(), build_relationship_agent(), build_vegatation_agent()  # for more capabilities, add more agents here
    ]

    _config = config.Config(
        ai_platform=config.AI_PLATFORM_Enum.bedrock_anthropic,
        model=config.ANTHROPIC_MODEL,
        max_tokens=config.ANTHROPIC_MAX_TOKENS,
        is_debug=False,
    )

    return functions_expert_service.run_chat_loop(
        agent_definitions=agent_definitions,
        chat_agent_description=CHAT_AGENT_DESCRIPTION,
        _config=_config,
        given_user_prompt=given_user_prompt,
    )
```
> note: if `given_user_prompt` is not set, then `run_chat_loop()` will wait for user input from the keyboard
See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples) for more details.
## Example Execution [Function Calls Based Approach]
USER INPUT:
```
Add a sheep that eats grass
```
OUTPUT:
```
Generated 3 function calls
[Agent: Creature Creator] AddCreature( creature_name=sheep, icon_name=sheep-icon, land_type=prairie, age=1 )
[Agent: Plant Creator] AddPlant( plant_name=grass, icon_name=grass-icon, land_type=prairie )
[Agent: Relationship Creator] AddCreatureRelationship( from_name=sheep, to_name=grass, relationship_name=eats )
```
Because the framework has a dynamic router, it can handle more complex 'composite' prompts, such as:
> Add a cow that eats grass. Add a human - the cow feeds the human. Add an alien that eats the human. The human also eats cows.
The router figures out which agents to use, what order to run them in, and what prompt to send to each agent.
Optionally, the router can be re-executed with user feedback on its generated plan, before actually executing the agents.
The recommended agents are then executed in order, building up their results in the shared blackboard.
Finally, the framework combines the resulting calls together and returns them to the client.
### Example run via Function Call based agents:
![example run - function calls](https://raw.githubusercontent.com/mrseanryan/gpt-multi-atomic-agents/master/images/screenshot-example-run.png)
## Example Execution [GraphQL Based Approach]
USER INPUT:
```
Add a sheep that eats grass
```
OUTPUT:
```
['mutation {\n addCreature(input: {\n creature_name: "sheep",\n allowed_terrain: GRASSLAND,\n age: 2,\n icon_name: SHEEP\n }) {\n creature_name\n allowed_terrain\n age\n icon_name\n }\n }']
['mutation {', ' addVegetation(input: {', ' vegetation_name: "Grass",', ' icon_name: GRASS,', ' allowed_terrain: LAND', ' }) {', ' vegetation_name', ' icon_name', ' allowed_terrain', ' }', '}']
['mutation {', ' addCreatureRelationship(input: {', ' from_name: "Sheep",', ' to_name: "Grass",', ' relationship_kind: EATS', ' }) {', ' id', ' }', '}']
```
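Note that each generated mutation above is a list of text fragments; a client can join each list into a single GraphQL document before executing it against its endpoint (illustrative sketch only):

```python
# Sketch: joining the fragments of one generated mutation into a single
# GraphQL document, ready to send to the client's GraphQL endpoint.
mutation_fragments = [
    "mutation {",
    "  addCreatureRelationship(input: {",
    '    from_name: "Sheep",',
    '    to_name: "Grass",',
    "    relationship_kind: EATS",
    "  }) {",
    "    id",
    "  }",
    "}",
]
mutation_document = "\n".join(mutation_fragments)
```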
### Example run via GraphQL based agents:
![example run - GraphQL](https://raw.githubusercontent.com/mrseanryan/gpt-multi-atomic-agents/master/images/screenshot-example-run.graphql.png)
## Setup
0. Install Python 3.11 and [poetry](https://github.com/python-poetry/install.python-poetry.org)
1. Install dependencies.
```
poetry install
```
2. Set up your credentials for your preferred AI platform.
For OpenAI:
- You need an OpenAI API key.
- Set an environment variable with your OpenAI key:
```
export OPENAI_API_KEY="xxx"
```
Add that to your shell initialization script (`~/.zprofile` or similar).
Load it in the current terminal:
```
source ~/.zprofile
```
## Usage
gpt-multi-atomic-agents can be used in three ways:
1. as a framework for your application or service
2. as a REST API, where a client provides the agents and user prompts
3. as a command line tool to chat and generate functions to modify your data
### 1. Usage as a framework (library)
See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples) for more details.
### 2. Usage as REST API (with Swagger examples):
```
./run-rest-api.sh
```
The REST API URL and Swagger URLs are printed to the console.
The available REST methods:
- generate_plan: Optionally call this before generate_function_calls, in order to generate an Execution Plan separately, and get user feedback. This can help reduce *perceived* latency for the user.
- generate_function_calls: Generates Function Calls to fulfill the user's prompt, given the available Agents in the user's request. If an Execution Plan is included in the request, then that is used to decide which Agents to execute. Otherwise an Execution Plan will be internally generated.
- [Not yet implemented] generate_graphql
#### TypeScript REST API Client
There is a simple TypeScript framework for writing Agent-based TypeScript clients of the REST API.
For an example with simple Agents, see the [TypeScript Framework README](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/clients/gpt-maa-ts) and the [TypeScript Example Agents](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/clients/gpt-maa-ts/src/test_gpt_maa_client.ts).
### 3. Usage as a command line chat tool
Chat to generate mutations (Function Calls or GraphQL) with the configured Agents. The Blackboard can be saved out for later chatting, or it can be consumed by other tools, for example to execute against application data.
The example command line chats are set up with the same Sim Life style example agents.
Via function calling:
```
./run-example.sh
```
Via GraphQL:
```
./run-example.graphql.sh
```
```
🤖 Assistant : Welcome to multi-agent chat
Type in a question for the AI. If you are not sure what to type, then ask it a question like 'What can you do?'
To exit, use the quit command
Available commands:
clear - Clear the blackboard, starting over. (alias: reset)
dump - Dump the current blackboard state to the console (alias: show)
help - Display help text
list - List the local data files from previous blackboards
load - Load a blackboard from the local data store
save - Save the blackboard to the local data store
quit - Exit the chat loop (alias: bye, exit, stop)
```
## Tests
```
./test.sh
```
Raw data
{
"_id": null,
"home_page": "https://github.com/mrseanryan/gpt-multi-atomic-agents",
"name": "gpt-multi-atomic-agents",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.11",
"maintainer_email": null,
"keywords": "python, ai, artificial-intelligence, multi-agent, openai, multi-agent-systems, openai-api, large-language-models, llm, large-language-model, genai, genai-chatbot",
"author": "Sean Ryan",
"author_email": "mr.sean.ryan@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/d9/c3/7bd03350cc35983b244a242a2fe44bec082f7d002168230235be61802100/gpt_multi_atomic_agents-0.6.3.tar.gz",
"platform": null,
"description": "# gpt-multi-atomic-agents\nA simple dynamic multi-agent framework based on [atomic-agents](https://github.com/BrainBlend-AI/atomic-agents) and [Instructor](https://github.com/instructor-ai/instructor). Uses the power of [Pydantic](https://docs.pydantic.dev) for data and schema validation and serialization.\n\n- compose Agents made of a system prompt, with a shared language of either **Function Calls** or else **GraphQL mutations**\n- convert user input into data modifications (functions or GraphQL mutations)\n- to maximise user engagement, uses a 2-phase process:\n - Planning Phase:\n - a router uses an LLM to process complex 'composite' user prompts, and automatically route them to the best sequence of your agents\n - the router rewrites the user prompt, to best suit each agent\n - an execution plan is generated\n - the client can use the `router` to iterate over the execution plan, with user feedback\n - Generation Phase:\n - when the user is happy -> the client can use the `generator` to execute the plan, using the recommended agents\n - the client then receives function calls (or GraphQL mutations) to update the data\n- generate via OpenAI or AWS Bedrock or groq\n- usage:\n 1. as a library\n 2. OR run out-of-the-box as a REST API, accepting Agents from the client\n - there is a simple [TypeScript framework](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/clients/gpt-maa-ts), for writing Agent-based TypeScript clients of the REST API\n 3. OR as a command line chat-loop\n\n- note: the `!! 
framework is at an early stage !!` - breaking changes will be indicated by increasing the *minor* version (major is still at zero).\n\n[url_repo]: https://github.com/mrseanryan/gpt-multi-atomic-agents\n[url_semver_org]: https://semver.org/\n\n[![MIT License][img_license]][url_license]\n[![Supported Python Versions][img_pyversions]][url_pyversions]\n[![gpt-multi-atomic-agents][img_version]][url_version]\n\n[![PyPI Releases][img_pypi]][url_pypi]\n[![PyPI - Downloads](https://img.shields.io/pypi/dm/gpt-multi-atomic-agents.svg)](https://pypi.org/project/gpt-multi-atomic-agents)\n\n[img_license]: https://img.shields.io/badge/License-MIT-blue.svg\n[url_license]: https://github.com/mrseanryan/gpt-multi-atomic-agents/blob/master/LICENSE\n\n[url_version]: https://pypi.org/project/gpt-multi-atomic-agents/\n\n[img_version]: https://img.shields.io/static/v1.svg?label=SemVer&message=gpt-multi-atomic-agents&color=blue\n[url_version]: https://pypi.org/project/bumpver/\n\n[img_pypi]: https://img.shields.io/badge/PyPI-wheels-green.svg\n[url_pypi]: https://pypi.org/project/gpt-multi-atomic-agents/#files\n\n[img_pyversions]: https://img.shields.io/pypi/pyversions/gpt-multi-atomic-agents.svg\n[url_pyversions]: https://pypi.python.org/pypi/gpt-multi-atomic-agents\n\n[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/K3K73ALBJ)\n\n## Introduction\n\nAn LLM based Agents Framework using an Agent Oriented Programming approach to orchestrate agents using a shared language.\n\nThe agent language can either be **Function Calling** based, or else **GraphQL** based.\n\nThe framework is generic and allows agents to be defined in terms of a name, description, accepted input calls, and allowed output calls.\n\nThe agents communicate indirectly using a blackboard. The language is a composed of (Function or GraphQL mutation) calls: each agent specifies what it understands as input, and what calls it is able to generate. 
Each agent can be configured to understand a subset of the output of the other agents. In this way, the agents can understand each other's output and collaborate together.\n\n![System overview](https://raw.githubusercontent.com/mrseanryan/gpt-multi-atomic-agents/master/images/diagram-Multi-LLM-based-Agent-collaboration-via-Dynamic-Router-GraphQL-context.jpg)\n\nA router takes the user prompt and generates an agent execution plan.\n\nThe execution plan uses the best sequence of the most suitable agents, to handle the user prompt.\n\nThe router rewrites the user prompt to suit each agent, which improves quality and avoids unwanted output.\n\nnote: Optionally, the router can be run separately, allowing for human-in-the-loop feedback on the execution plan that the router generated. In this way, the user can collaborate more with the router, before the generative agents are actually executed.\n\n- this allows the user to have more control over the output, and has the added benefit of reducing the *perceived* time taken to generate, since the user has intermediate interaction with the router.\n\nFinally, the output is returned in the form of an ordered list of (Function or GraphQL) calls.\n\nTo read more about this approach, you can see this [Medium article](https://medium.com/@mr.sean.ryan/multi-llm-based-agent-collaboration-via-dynamic-router-and-graphql-handle-composite-prompts-with-83e16a22a1cb).\n\n- note: the `framework is at an early stage`. The Evaluator is not currently implemented.\n\n## Integration\n\nWhen integrating, depending on which kind of Agent Definitions are used, the client needs to:\n\n- **Function Calling Agents:** client implements the functions. 
The client executes the functions according to the results (function calls) generated by this framework.\n - this approach is less flexible but good for simple use cases where GraphQL may be too complicated\n- **GraphQL based Agents:** The client executes the GraphQL mutations on the GraphQL document they earler submitted to the framework.\n - this approach provides the most flexibility:\n - the input is a GraphQL schema with any previouly made mutation calls, the output is a set of mutation calls\n - the agents can communicate generations (modifications to data) by generating GraphQL mutations that match the given schema\n\n## Examples [Function Calls Based Approach]\n\n### Sim Life world builder\n\nThis is a demo 'Sim Life' world builder.\nIt uses 3 agents (Creature Creature, Vegetation Creator, Relationship Creator) to process user prompts.\nThe agents are defined in terms of functions.\nThe output is a series of Function Calls which can be implemented by the client, to build the Sim Life world.\n\n#### Definitions [Function Calls Based Approach]\n\nThe AddCreature function:\n\n```python\nfunction_add_creature = FunctionSpecSchema(\n function_name=\"AddCreature\",\n description=\"Adds a new creature to the world (not vegetation)\",\n parameters=[\n ParameterSpec(name=\"creature_name\", type=ParameterType.string),\n ParameterSpec(name=\"allowed_terrain\", type=ParameterType.string, allowed_values=terrain_types),\n ParameterSpec(name=\"age\", type=ParameterType.int),\n ParameterSpec(name=\"icon_name\", type=ParameterType.string, allowed_values=creature_icons),\n ]\n)\n```\n\nThe AddCreatureRelationship function:\n\n```python\nfunction_add_relationship = FunctionSpecSchema(\n function_name=\"AddCreatureRelationship\",\n description=\"Adds a new relationship between two creatures\",\n parameters=[\n ParameterSpec(\n name=\"from_name\", type=ParameterType.string\n ),\n ParameterSpec(\n name=\"to_name\", type=ParameterType.string\n ),\n ParameterSpec(\n 
name=\"relationship_name\",\n type=ParameterType.string,\n allowed_values=[\"eats\", \"buys\", \"feeds\", \"sells\"],\n ),\n ],\n)\n```\n\n#### Agent Definitions [Function Calls Based Approach]\n\nThe Creature Creator agent is defined declaratively in terms of:\n\n- its description (a very short prompt)\n- its input schema (a list of accepted function definitions)\n- its output schema (a list of output function definitions)\n\nAgents can collaborate and exchange information indirectly, by reusing the same function defintions via a blackboard.\n\n```python\ndef build_creature_agent():\n agent_definition = build_function_agent_definition(\n agent_name=\"Creature Creator\",\n description=\"Creates new creatures given the user prompt. Ensures that ALL creatures mentioned by the user are created.\",\n accepted_functions=[function_add_creature, function_add_relationship],\n functions_allowed_to_generate=[function_add_creature],\n topics=[\"creature\", \"summary\"]\n )\n\n return agent_definition\n```\n\nNotes about the Creature Creator agent:\n- this agent can only generate \"AddCreature\" function calls.\n- the agent also accepts (understands) previous \"AddCreature\" calls, so that it knows what has already been created.\n- additionally, this agent understands a subset of function calls from agents: here, it understands the \"AddRelationship\" function defined by `function_add_relationship`. This allows the agents to collaborate. 
See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples/sim_life) for more details.\n\n## Examples [GraphQL Based Approach]\n\n### Sim Life world builder\n\nThis is a demo 'Sim Life' world builder.\nIt uses 3 agents (Creature Creature, Vegetation Creator, Relationship Creator) to process user prompts.\nThe agents are defined declaratively in terms of GraphQL input schema, and allowed generated mutations.\nThe output is a series of GraphQL mutations which can be executed by the client, to build the Sim Life world.\n\n#### Definitions [GraphQL Based Approach]\n\nThe GraphQL schema:\n\n```graphql\ntype Creature {\n id: ID!\n creature_name: String!\n allowed_terrain: TerrainType!\n age: Int!\n icon_name: IconType!\n}\n\ntype Vegetation {\n id: ID!\n vegetation_name: String!\n icon_name: IconType!\n allowed_terrain: TerrainType!\n}\n\ntype Relationship {\n id: ID!\n from_name: String!\n to_name: String!\n relationship_kind: RelationshipType!\n}\n...\n```\n\nThe GraphQL mutations that we want the Agents to generate, are distinct for each agent:\n\nCreature Creator agent:\n\n```graphql\ntype Mutation {\n addCreature(input: CreatureInput!): Creature!\n}\n\ninput CreatureInput {\n creature_name: String!\n allowed_terrain: TerrainType!\n age: Int!\n icon_name: IconType!\n}\n```\n\nVegetation Creator agent:\n\n```graphql\ntype Mutation {\n addVegetation(input: VegetationInput!): Vegetation!\n}\n\ninput VegetationInput {\n vegetation_name: String!\n icon_name: IconType!\n allowed_terrain: TerrainType!\n}\n```\n\n#### Agent Definitions [GraphQL Based Approach]\n\nThe Creature Creator agent is defined declaratively in terms of:\n\n- its description (a very short prompt)\n- its input schema (a list of accepted GraphQL schemas)\n- its output schema (a list of output GraphQL mutation calls)\n\nAn agent is basically a composition of input and output schemas, together with a prompt.\n\nAgents collaborate and exchange information 
indirectly via a blackboard, by reusing the same GraphQL schemas and mutation calls.\n\n```python\ncreatures_graphql = _read_schema(\"creature.graphql\")\ncreature_mutations_graphql = _read_schema(\"creature.mutations.graphql\")\n\ndef build_creature_agent():\n agent_definition = build_graphql_agent_definition(\n agent_name=\"Creature Creator\",\n description=\"Creates new creatures given the user prompt. Ensures that ALL creatures mentioned by the user are created.\",\n accepted_graphql_schemas=[creatures_graphql, creature_mutations_graphql],\n mutations_allowed_to_generate=[creature_mutations_graphql],\n topics=[\"creature\", \"summary\"]\n )\n\n return agent_definition\n```\n\nNotes about this agent:\n- this agent can only generate mutations that are defined by `creature_mutations_graphql` from the file \"creature.mutations.graphql\".\n- the agent also accepts (understands) previous mutations calls, so that it knows what has already been created (`creature_mutations_graphql`).\n- additionally, this agent understands the shared GraphQL schema defined by `creatures_graphql` from the file \"creature.graphql\".\n - This array of GraphQL files can also be used to allow an Agent to understand a subset of the mutations output by other agents. This allows the agents to collaborate.\n - See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples/sim_life_via_graphql) for more details.\n\n## Using the Agents in a chat loop\n\nThe agents can be used together to form a chat bot:\n\n```python\nfrom gpt_multi_atomic_agents import functions_expert_service, config\nfrom . 
import agents\n\ndef run_chat_loop(given_user_prompt: str|None = None) -> list:\n CHAT_AGENT_DESCRIPTION = \"Handles users questions about an ecosystem game like Sim Life\"\n\n agent_definitions = [\n build_creature_agent(), build_relationship_agent(), build_vegatation_agent() # for more capabilities, add more agents here\n ]\n\n _config = config.Config(\n ai_platform = config.AI_PLATFORM_Enum.bedrock_anthropic,\n model = config.ANTHROPIC_MODEL,\n max_tokens = config.ANTHROPIC_MAX_TOKENS,\n is_debug = False\n )\n\n return functions_expert_service.run_chat_loop(agent_definitions=agent_definitions, chat_agent_description=CHAT_AGENT_DESCRIPTION, _config=_config, given_user_prompt=given_user_prompt)\n```\n\n> note: if `given_user_prompt` is not set, then `run_chat_loop()` will wait for user input from the keyboard\n\nSee the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples) for more details.\n\n## Example Execution [Function Calls Based Approach]\n\nUSER INPUT:\n```\nAdd a sheep that eats grass\n```\n\nOUTPUT:\n```\nGenerated 3 function calls\n[Agent: Creature Creator] AddCreature( creature_name=sheep, icon_name=sheep-icon, land_type=prairie, age=1 )\n[Agent: Plant Creator] AddPlant( plant_name=grass, icon_name=grass-icon, land_type=prairie )\n[Agent: Relationship Creator] AddCreatureRelationship( from_name=sheep, to_name=grass, relationship_name=eats )\n```\n\nBecause the framework has a dynamic router, it can handle more complex 'composite' prompts, such as:\n\n> Add a cow that eats grass. Add a human - the cow feeds the human. Add and alien that eats the human. 
Because the framework has a dynamic router, it can handle more complex 'composite' prompts, such as:

> Add a cow that eats grass. Add a human - the cow feeds the human. Add an alien that eats the human. The human also eats cows.

The router figures out which agents to use, what order to run them in, and what prompt to send to each agent.

Optionally, the router can be re-executed with user feedback on its generated plan, before actually executing the agents.

The recommended agents are then executed in order, building up their results in the shared blackboard.

Finally, the framework combines the resulting calls together and returns them to the client.

### Example run via Function Call based agents:

![example run - function calls](https://raw.githubusercontent.com/mrseanryan/gpt-multi-atomic-agents/master/images/screenshot-example-run.png)

## Example Execution [GraphQL Based Approach]

USER INPUT:

```
Add a sheep that eats grass
```

OUTPUT:

```
['mutation {\n addCreature(input: {\n creature_name: "sheep",\n allowed_terrain: GRASSLAND,\n age: 2,\n icon_name: SHEEP\n }) {\n creature_name\n allowed_terrain\n age\n icon_name\n }\n }']
['mutation {', ' addVegetation(input: {', ' vegetation_name: "Grass",', ' icon_name: GRASS,', ' allowed_terrain: LAND', ' }) {', ' vegetation_name', ' icon_name', ' allowed_terrain', ' }', '}']
['mutation {', ' addCreatureRelationship(input: {', ' from_name: "Sheep",', ' to_name: "Grass",', ' relationship_kind: EATS', ' }) {', ' id', ' }', '}']
```
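As the output above shows, each generated mutation arrives as a list of text fragments. Before sending one to a GraphQL endpoint, a client would typically join the fragments back into a single mutation document (illustrative only; this is client-side code, not a framework API):

```python
# Join one of the generated fragment lists (taken verbatim from the example
# output above) back into a single GraphQL mutation document.
fragments = [
    "mutation {",
    " addCreatureRelationship(input: {",
    ' from_name: "Sheep",',
    ' to_name: "Grass",',
    " relationship_kind: EATS",
    " }) {",
    " id",
    " }",
    "}",
]

mutation_doc = "\n".join(fragments)
print(mutation_doc)
```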
### Example run via GraphQL based agents:

![example run - GraphQL](https://raw.githubusercontent.com/mrseanryan/gpt-multi-atomic-agents/master/images/screenshot-example-run.graphql.png)

## Setup

0. Install Python 3.11 and [poetry](https://github.com/python-poetry/install.python-poetry.org)

1. Install dependencies.

```
poetry install
```

2. Set up your credentials for your preferred AI platform.

For OpenAI:

- You need an OpenAI key.
- Set an environment variable with your OpenAI key:

```
export OPENAI_API_KEY="xxx"
```

Add that to your shell initialization script (`~/.zprofile` or similar).

Load it in the current terminal:

```
source ~/.zprofile
```

## Usage

gpt-multi-atomic-agents can be used in three ways:

1. as a framework for your application or service
2. as a REST API, where a client provides the agents and user prompts
3. as a command line tool to chat and generate functions to modify your data

### 1. Usage as a framework (library)

See the [example source code](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/examples) for more details.

### 2. Usage as REST API (with Swagger examples):

```
./run-rest-api.sh
```

The REST API URL and Swagger URLs are printed to the console.

The available REST methods:

- generate_plan: Optionally call this before generate_function_calls, in order to generate an Execution Plan separately and get user feedback. This can help reduce *perceived* latency for the user.
- generate_function_calls: Generates Function Calls to fulfill the user's prompt, given the available Agents in the user's request. If an Execution Plan is included in the request, then that is used to decide which Agents to execute. Otherwise an Execution Plan will be internally generated.
- [Not yet implemented] generate_graphql

#### TypeScript REST API Client

There is a simple TypeScript framework, for writing Agent-based TypeScript clients of the REST API.

For an example with simple Agents, see the [TypeScript Framework README](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/clients/gpt-maa-ts) and the [TypeScript Example Agents](https://github.com/mrseanryan/gpt-multi-atomic-agents/tree/master/clients/gpt-maa-ts/src/test_gpt_maa_client.ts).
### 3. Usage as a command line chat tool

Chat to generate mutations (Function Calls or GraphQL) with the configured Agents. The Blackboard can be saved out for later chatting, or it can be consumed by other tools, for example to execute against application data.

The example command line chats are set up with the same Sim Life style example agents.

Via function calling:

```
./run-example.sh
```

Via GraphQL:

```
./run-example.graphql.sh
```

```
🤖 Assistant : Welcome to multi-agent chat
Type in a question for the AI. If you are not sure what to type, then ask it a question like 'What can you do?'
To exit, use the quit command
Available commands:
  clear - Clear the blackboard, starting over. (alias: reset)
  dump - Dump the current blackboard state to the console (alias: show)
  help - Display help text
  list - List the local data files from previous blackboards
  load - Load a blackboard from the local data store
  save - Save the blackboard to the local data store
  quit - Exit the chat loop (alias: bye, exit, stop)
```

## Tests

```
./test.sh
```
"bugtrack_url": null,
"license": "MIT",
"summary": "Combine multiple graphql-based or function-based agents with dynamic routing - based on atomic-agents.",
"version": "0.6.3",
"project_urls": {
"Documentation": "https://github.com/mrseanryan/gpt-multi-atomic-agents",
"Homepage": "https://github.com/mrseanryan/gpt-multi-atomic-agents",
"Repository": "https://github.com/mrseanryan/gpt-multi-atomic-agents"
},
"split_keywords": [
"python",
" ai",
" artificial-intelligence",
" multi-agent",
" openai",
" multi-agent-systems",
" openai-api",
" large-language-models",
" llm",
" large-language-model",
" genai",
" genai-chatbot"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "3ff8c2eb75e178eb7040ec8e99e70aaec8b3ba56bf6c8b3e083e8272394bdfcd",
"md5": "eac163fe962361b547472ebe096acac9",
"sha256": "3d691a9b20e8e2e690eefa89fae9cddd45480383bd463af54b155ee3dbf5388e"
},
"downloads": -1,
"filename": "gpt_multi_atomic_agents-0.6.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "eac163fe962361b547472ebe096acac9",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.11",
"size": 33759,
"upload_time": "2024-12-12T19:40:11",
"upload_time_iso_8601": "2024-12-12T19:40:11.489649Z",
"url": "https://files.pythonhosted.org/packages/3f/f8/c2eb75e178eb7040ec8e99e70aaec8b3ba56bf6c8b3e083e8272394bdfcd/gpt_multi_atomic_agents-0.6.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "d9c37bd03350cc35983b244a242a2fe44bec082f7d002168230235be61802100",
"md5": "50f897cc7e269c7b3b0234f6b9648b77",
"sha256": "fcf9bcf3d5cc05e5fb405b55aa474c5525e37af4622e537c4a12c34c37a9a36e"
},
"downloads": -1,
"filename": "gpt_multi_atomic_agents-0.6.3.tar.gz",
"has_sig": false,
"md5_digest": "50f897cc7e269c7b3b0234f6b9648b77",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.11",
"size": 24937,
"upload_time": "2024-12-12T19:40:13",
"upload_time_iso_8601": "2024-12-12T19:40:13.917166Z",
"url": "https://files.pythonhosted.org/packages/d9/c3/7bd03350cc35983b244a242a2fe44bec082f7d002168230235be61802100/gpt_multi_atomic_agents-0.6.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-12-12 19:40:13",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "mrseanryan",
"github_project": "gpt-multi-atomic-agents",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "gpt-multi-atomic-agents"
}