Name | reflex-agent
Version | 0.1.0a30
home_page | None
Summary | A flexible agent framework.
upload_time | 2025-01-16 23:54:41
maintainer | None
docs_url | None
author | Nikhil Rao
requires_python | <4.0,>=3.10
license | None
keywords | None
VCS | None
bugtrack_url | None
requirements | No requirements were recorded.
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
# Reflex Agent
`reflex-agent` is a flexible framework to create AI agents with modular capabilities and tool integrations.
## Installation
```bash
pip install reflex-agent
```
## Features
- **Zero Dependencies**: The core install has no extra dependencies - bring your own LLM.
- **Modular Capabilities**: The base agent loop is simple and can be modified through capabilities to add features like memory and chain of thought.
- **Simple Tool Use**: Use any Python function as a tool that the agent can call.
- **Client-Server Architecture**: Designed for easy use in a client-server architecture - allowing for stepping through the agent loop.
- **Async by Default**: All functions are async by default.
## Usage
### Agents
Agents are entities that you can chat with. Create a basic agent by specifying an LLM and a system prompt.
We currently have built-in support for OpenAI and Anthropic models.
```python
from flexai import Agent
from flexai.llm.openai import OpenAIClient
from flexai.llm.anthropic import AnthropicClient
openai_agent = Agent(
    llm=OpenAIClient(model="gpt-4o-mini"),
    prompt="You are a helpful assistant.",
)

anthropic_agent = Agent(
    llm=AnthropicClient(model="claude-3-5-sonnet-20240620"),
    prompt="You are a helpful assistant.",
)
```
To interact with an agent, pass in a list of messages. There are two ways to interact with an agent - `stream` and `step`.
Streaming lets the agent emit multiple messages (such as intermediate tool calls) before returning a final response.
```python
import asyncio
from flexai import Agent, UserMessage
from flexai.llm.openai import OpenAIClient
agent = Agent(
    llm=OpenAIClient(model="gpt-4o-mini"),
    prompt="You are an expert mathematician.",
)

async def get_agent_response(messages):
    async for response in agent.stream(messages):
        print(response)

asyncio.run(get_agent_response([UserMessage("What's 2 + 2?")]))
```
Stepping runs one iteration of the agent loop at a time, letting you inspect the agent's internal state at each step.
```python
import asyncio
from flexai import Agent, UserMessage
from flexai.llm.openai import OpenAIClient
agent = Agent(
    llm=OpenAIClient(model="gpt-4o-mini"),
    prompt="You are an expert mathematician.",
)

async def get_agent_response(messages):
    response = await agent.step(messages)
    print(response)

asyncio.run(get_agent_response([UserMessage("What's 2 + 2?")]))
```
### Memory
Agents are stateless by default - all state management is done on the user's end.
You can save the agent's output messages to continue an extended conversation.
```python
import asyncio
from flexai import Agent, UserMessage
from flexai.llm.openai import OpenAIClient
agent = Agent(
    llm=OpenAIClient(model="gpt-4o-mini"),
    prompt="You are a creative storyteller.",
)

async def get_agent_response(messages):
    response = await agent.step(messages)
    print(response)
    messages.append(response)
    messages.append(UserMessage("Tell me some key themes from your story."))
    response = await agent.step(messages)
    print(response)

asyncio.run(get_agent_response([UserMessage("Tell me a random story")]))
```
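Since agents are stateless, the append-and-continue pattern above is plain list management. It can be sketched with a stub in place of a real `Agent` (the `EchoAgent` class below is hypothetical, for illustration only):

```python
import asyncio


class EchoAgent:
    """Hypothetical stand-in for a real Agent; step() returns one response."""

    async def step(self, messages: list) -> str:
        # A real agent would call the LLM here; we just echo the last message.
        return f"response to: {messages[-1]}"


async def conversation() -> list:
    agent = EchoAgent()
    messages = ["Tell me a random story"]
    # Save the agent's response so the next step sees the full history.
    messages.append(await agent.step(messages))
    messages.append("Tell me some key themes from your story.")
    messages.append(await agent.step(messages))
    return messages


history = asyncio.run(conversation())
print(len(history))  # 4
```

The key point is that the caller, not the agent, owns the `messages` list; every response is appended back before the next `step` call.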
### Tools
Tools are Python functions that the agent can call. The function's signature and docstring are used to determine when and how the tool is called.
```python
import asyncio
from flexai import Agent, UserMessage
from flexai.llm.openai import OpenAIClient
def read_file(file_path: str) -> str:
    """Read a file and get the contents."""
    with open(file_path, "r") as file:
        return file.read()

def write_file(file_path: str, contents: str) -> None:
    """Write contents to a file."""
    with open(file_path, "w") as file:
        file.write(contents)

agent = Agent(
    llm=OpenAIClient(model="gpt-4o-mini"),
    prompt="You are a helpful assistant.",
    tools=[
        read_file,
        write_file,
    ],
)

async def get_agent_response(messages):
    # Streaming allows the agent to make multiple tool calls in one go.
    async for message in agent.stream(messages):
        print(message)

asyncio.run(get_agent_response([UserMessage("Read the README.md file and create a copy at README2.md with a high level summary.")]))
```
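The framework presumably derives a tool description from each function's signature and docstring. A rough, standard-library-only sketch of what that introspection could look like (this is an illustration, not flexai's actual schema format):

```python
import inspect


def read_file(file_path: str) -> str:
    """Read a file and get the contents."""
    with open(file_path, "r") as file:
        return file.read()


def tool_schema(fn) -> dict:
    """Build a simple tool description from an annotated function."""
    signature = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: parameter.annotation.__name__
            for name, parameter in signature.parameters.items()
        },
    }


schema = tool_schema(read_file)
print(schema)
```

A description like this is what ultimately gets passed to the model's tool-calling API; the real framework likely records more detail (required fields, return types).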
### Capabilities
Capabilities allow you to modify the core agent loop and change the behavior of the agent.
You can plug into the agent loop to modify messages, responses, and system messages. For example, the `TruncateMessages` capability truncates the input messages to the LLM to a maximum number.
```python
from dataclasses import dataclass

from flexai import Message  # Assumed top-level export, like UserMessage.
from flexai.capabilities import Capability


@dataclass
class TruncateMessages(Capability):
    """Truncate the input messages to the LLM to a maximum number."""

    # The maximum number of messages to keep.
    max_messages: int

    async def modify_messages(self, messages: list[Message]) -> list[Message]:
        return messages[-self.max_messages:]
```
This capability is built into the framework and ready to use:
```python
import asyncio
from flexai import Agent, UserMessage
from flexai.llm.openai import OpenAIClient
from flexai.capabilities import TruncateMessages
agent = Agent(
    llm=OpenAIClient(model="gpt-4o-mini"),
    prompt="You are an expert mathematician.",
    capabilities=[TruncateMessages(max_messages=3)],
)
```
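Assuming the same `modify_messages` hook shown above, a custom capability can be sketched without flexai installed by using a local stand-in base class (the `Capability` class and `DropEmptyMessages` below are illustrative only):

```python
import asyncio
from dataclasses import dataclass


class Capability:
    """Local stand-in for flexai's Capability base class."""

    async def modify_messages(self, messages: list) -> list:
        # Default: pass messages through unchanged.
        return messages


@dataclass
class DropEmptyMessages(Capability):
    """Hypothetical capability: drop empty messages before they reach the LLM."""

    async def modify_messages(self, messages: list) -> list:
        return [m for m in messages if m]


result = asyncio.run(DropEmptyMessages().modify_messages(["hi", "", "there"]))
print(result)  # ['hi', 'there']
```

Because the hook is just an async method, capabilities compose naturally: each one receives the message list produced by the previous one.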
Raw data
{
"_id": null,
"home_page": null,
"name": "reflex-agent",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.10",
"maintainer_email": null,
"keywords": null,
"author": "Nikhil Rao",
"author_email": "nikhil@reflex.dev",
"download_url": "https://files.pythonhosted.org/packages/de/b2/18b600c0b87bcffa9670aa9c2a5b0e23d8fa007e15f7d0c08d8d5bf90cf2/reflex_agent-0.1.0a30.tar.gz",
"platform": null,
"description": "# Reflex Agent\n\n`reflex-agent` is a flexible framework to create AI agents with modular capabilities and tool integrations.\n\n\n## Installation\n\n```bash\npip install reflex-agent\n```\n\n## Features\n\n- **Zero Dependencies**: The core install has no extra dependencies - bring your own LLM.\n- **Modular Capabilities**: The base agent loop is simple and can be modified through capabilities to add features like memory and chain of thought.\n- **Simple Tool Use**: Use any Python function as a tool that the agent can call.\n- **Client-Server Architecture**: Designed for easy use in a client-server architecture - allowing for stepping through the agent loop.\n- **Async by Default**: All functions are async by default.\n\n## Usage\n\n### Agents\n\nAgents are entities that you can chat with. To create a basic agent, by specifiying an LLM and a system prompt.\n\nWe currently have built-in support for OpenAI and Anthropic models.\n\n```python\nfrom flexai import Agent\nfrom flexai.llm.openai import OpenAIClient\nfrom flexai.llm.anthropic import AnthropicClient\n\n\nopenai_agent = Agent(\n llm=OpenAIClient(model=\"gpt-4o-mini\"),\n prompt=\"You are a helpful assistant.\",\n)\n\nanthropic_client = Agent(\n llm=AnthropicClient(model=\"claude-3-5-sonnet-20240620\"),\n prompt=\"You are a helpful assistant.\",\n)\n```\n\nTo interact with an agent, pass in a list of messages. 
There are two ways to interact with an agent - `stream` and `step`.\n\nStreaming allows the agent to use multiple messages (such as inner tool uses) before returning a final response.\n\n```python\nimport asyncio\nfrom flexai import Agent, UserMessage\nfrom flexai.llm.openai import OpenAIClient\n\nagent = Agent(\n llm=OpenAIClient(model=\"gpt-4o-mini\"),\n prompt=\"You are an expert mathematician.\",\n)\n\nasync def get_agent_response(messages):\n async for response in agent.stream(messages):\n print(response)\n\n\nasyncio.run(get_agent_response([UserMessage(\"What's 2 + 2?\")]))\n```\n\nStepping allows you to step through the agent loop, allowing you to see the agent's internal state at each step.\n\n```python\nimport asyncio\nfrom flexai import Agent, UserMessage\nfrom flexai.llm.openai import OpenAIClient\n\nagent = Agent(\n llm=OpenAIClient(model=\"gpt-4o-mini\"),\n prompt=\"You are an expert mathematician.\",\n)\n\nasync def get_agent_response(messages):\n response = await agent.step(messages)\n print(response)\n\nasyncio.run(get_agent_response([UserMessage(\"What's 2 + 2?\")]))\n```\n\n### Memory\n\nAgents are stateless by default - all state management is done on the user end.\nYou can save the agent's output messages to have an extended conversation.\n\n```python\nimport asyncio\nfrom flexai import Agent, UserMessage\nfrom flexai.llm.openai import OpenAIClient\n\nagent = Agent(\n llm=OpenAIClient(model=\"gpt-4o-mini\"),\n prompt=\"You are an expert mathematician.\",\n)\n\nasync def get_agent_response(messages):\n response = await agent.step(messages):\n print(response)\n messages.append(response)\n messages.append(UserMessage(\"Tell me some key themes from your story.\"))\n response = await agent.step(messages):\n print(response)\n\nasyncio.run(get_agent_response([UserMessage(\"Tell me a random story\")]))\n```\n\n### Tools\n\nTools are Python functions that the agent can call. 
The function's signature and docstring are used to determine when and how the tool is called.\n\n```python\nimport asyncio\nfrom flexai import Agent, UserMessage\nfrom flexai.llm.openai import OpenAIClient\n\ndef read_file(file_path: str) -> str:\n \"\"\"Read a file and get the contents.\"\"\"\n with open(file_path, \"r\") as file:\n return file.read()\n\ndef write_file(file_path: str, contents: str) -> None:\n \"\"\"Write contents to a file.\"\"\"\n with open(file_path, \"w\") as file:\n file.write(contents)\n\nagent = Agent(\n llm=OpenAIClient(model=\"gpt-4o-mini\"),\n prompt=\"You are an expert mathematician.\",\n tools=[\n read_file,\n write_file,\n ]\n)\n\n\nasync def get_agent_response(messages):\n # Stream allows the agent to use multiple tool uses in one go.\n async for message in agent.stream(messages):\n print(message)\n\nasyncio.run(get_agent_response([UserMessage(\"Read the README.md file and create a copy at README2.md with a high level summary.\")]))\n```\n\n### Capabilities\n\nCapabilities allow you to modify the core agent loop and change the behavior of the agent.\n\nYou can plug in to the agent loop to modify messages, responses, and system messages. For example, the `TruncateMessages` capability truncates the input messages to the LLM to a maximum.\n\n```python\n@dataclass\nclass TruncateMessages(Capability):\n \"\"\"Truncate the input messages to the LLM to a maximum number.\"\"\"\n\n # The maximum number of messages to keep.\n max_messages: int\n\n async def modify_messages(self, messages: list[Message]) -> list[Message]:\n return messages[-self.max_messages :]\n```\n\nThis capability is built-in to the framework to use:\n\n```python\nimport asyncio\nfrom flexai import Agent, UserMessage\nfrom flexai.llm.openai import OpenAIClient\nfrom flexai.capabilities import TruncateMessages\n\nagent = Agent(\n llm=OpenAIClient(model=\"gpt-4o-mini\"),\n prompt=\"You are an expert mathematician.\",\n capabilities=[TruncateMessages(max_messages=3)],\n)\n```",
"bugtrack_url": null,
"license": null,
"summary": "A flexible agent framework.",
"version": "0.1.0a30",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "a8d2373e574cb9cc96250d4b03ef68ff8be35c6faf92ac30adedda2608540c41",
"md5": "51c43c4826e0673afde1956a93afea18",
"sha256": "b4b25e7694a8e51b06964a456a041943771f692010ed7fba27173ecb74084f4c"
},
"downloads": -1,
"filename": "reflex_agent-0.1.0a30-py3-none-any.whl",
"has_sig": false,
"md5_digest": "51c43c4826e0673afde1956a93afea18",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.10",
"size": 20670,
"upload_time": "2025-01-16T23:54:37",
"upload_time_iso_8601": "2025-01-16T23:54:37.595174Z",
"url": "https://files.pythonhosted.org/packages/a8/d2/373e574cb9cc96250d4b03ef68ff8be35c6faf92ac30adedda2608540c41/reflex_agent-0.1.0a30-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "deb218b600c0b87bcffa9670aa9c2a5b0e23d8fa007e15f7d0c08d8d5bf90cf2",
"md5": "6d28219ff29ffe4ed8666d91b6a279f9",
"sha256": "2cf4450408ea942c0267f5105a33a2e67c9e92cb3546d0d34c975466edb78d96"
},
"downloads": -1,
"filename": "reflex_agent-0.1.0a30.tar.gz",
"has_sig": false,
"md5_digest": "6d28219ff29ffe4ed8666d91b6a279f9",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.10",
"size": 13244,
"upload_time": "2025-01-16T23:54:41",
"upload_time_iso_8601": "2025-01-16T23:54:41.570173Z",
"url": "https://files.pythonhosted.org/packages/de/b2/18b600c0b87bcffa9670aa9c2a5b0e23d8fa007e15f7d0c08d8d5bf90cf2/reflex_agent-0.1.0a30.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-16 23:54:41",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "reflex-agent"
}