| Field | Value |
|---|---|
| Name | axm-agent |
| Version | 0.2.3 |
| download | |
| home_page | None |
| Summary | A simple, elegant Python framework for building AI agents with decorators, MCP support, and powerful utilities |
| upload_time | 2025-10-30 08:09:38 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | MIT |
| keywords | ai, agent, llm, mcp, function-calling, openai, anthropic, langchain |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# 🤖 AXM Agent
**A simple, elegant Python framework for building AI agents with decorators, MCP support, and powerful utilities**
[PyPI version](https://badge.fury.io/py/axm-agent) · [Python 3.9+](https://www.python.org/downloads/) · [MIT License](https://opensource.org/licenses/MIT)
## ✨ Why AXM Agent?
AXM Agent is designed with **simplicity** and **developer experience** in mind. Unlike heavyweight frameworks, AXM Agent lets you build powerful AI agents with just a few lines of code using elegant decorators and intuitive APIs.
```python
from axm import Agent, tool

# Create an agent
agent = Agent("gpt-4")

# Define tools with a simple decorator
@agent.tool
def get_weather(city: str) -> str:
    """Get the weather for a city"""
    return f"Sunny in {city}"

# Run the agent
response = agent.run("What's the weather in Paris?")
print(response)
```
## 🚀 Features
- **🎯 Simple Decorator API** - Define tools and agents with intuitive decorators
- **🔌 MCP Support** - Full Model Context Protocol integration
- **📞 Function Calling** - Automatic function calling with type validation
- **📋 Planning & Scheduling** - Built-in task planning and execution
- **✅ Format-Constrained Output** - JSON, Pydantic models, and custom schemas
- **⚡ Async & Streaming** - Full async support with streaming responses
- **🎨 Multi-Agent Systems** - Easy collaboration between multiple agents
- **🔄 Memory & Context** - Conversation memory and context management
- **🛠️ Multiple LLM Support** - OpenAI, Anthropic, and custom providers
- **📊 Observable** - Built-in logging and tracing
## 📦 Installation
```bash
pip install axm-agent
```
For OpenAI support:
```bash
pip install axm-agent[openai]
```
For Anthropic (Claude) support:
```bash
pip install axm-agent[anthropic]
```
For all providers:
```bash
pip install axm-agent[all]
```
## 🔑 Configuration
AXM Agent uses environment variables for API credentials:
### OpenAI
```bash
export AXM_OPENAI_API_KEY="sk-..."
```
### Anthropic (Claude)
```bash
export AXM_ANTHROPIC_API_KEY="sk-ant-..."
```
### OpenAI-Compatible Providers (DeepSeek, local LLMs, etc.)
```bash
export AXM_OPENAI_COMPATIBLE_API_KEY="your-api-key"
export AXM_OPENAI_COMPATIBLE_BASE_URL="https://your-endpoint.com/v1"
```
You can also pass credentials directly when creating agents:
```python
agent = Agent("gpt-4", api_key="sk-...", base_url="https://custom-endpoint.com/v1")
```
## 🎓 Quick Start
### Basic Agent
```python
from axm import Agent
agent = Agent("gpt-4")
response = agent.run("Tell me a joke")
print(response)
```
### Agent with Tools
```python
from axm import Agent, tool
import datetime

agent = Agent("gpt-4")

@agent.tool
def get_current_time() -> str:
    """Get the current time"""
    return datetime.datetime.now().strftime("%H:%M:%S")

@agent.tool
def calculate(expression: str) -> float:
    """Safely evaluate a mathematical expression"""
    return eval(expression, {"__builtins__": {}})

response = agent.run("What time is it and what is 25 * 4?")
print(response)
```
### Structured Output
```python
from axm import Agent
from pydantic import BaseModel

class WeatherReport(BaseModel):
    city: str
    temperature: float
    conditions: str
    humidity: int

agent = Agent("gpt-4")
report = agent.run(
    "Generate a weather report for Paris",
    response_format=WeatherReport
)
print(f"{report.city}: {report.temperature}°C, {report.conditions}")
```
### Planning Agent
```python
from axm import PlanningAgent

agent = PlanningAgent("gpt-4")

# The agent will break down the task into steps and execute them
result = agent.execute_plan(
    "Research the top 3 programming languages in 2025 and create a comparison"
)
```
### Multi-Agent System
```python
from axm import Agent, MultiAgent
researcher = Agent("gpt-4", role="researcher")
writer = Agent("gpt-4", role="writer")
critic = Agent("gpt-4", role="critic")
team = MultiAgent([researcher, writer, critic])
result = team.collaborate("Write an article about AI agents")
```
### Async Support
```python
from axm import Agent
import asyncio

async def main():
    agent = Agent("gpt-4")

    # Async execution
    response = await agent.arun("Tell me about async programming")

    # Streaming
    async for chunk in agent.stream("Write a story"):
        print(chunk, end="", flush=True)

asyncio.run(main())
```
### MCP Integration
```python
from axm import Agent
from axm.mcp import MCPServer

# Create an MCP server
mcp = MCPServer()

@mcp.tool
def search_database(query: str) -> list:
    """Search the database"""
    return ["result1", "result2"]

# Connect agent to MCP
agent = Agent("gpt-4", mcp_server=mcp)
response = agent.run("Search for user data")
```
## 📚 Advanced Features
### Custom LLM Provider
```python
from axm import Agent, LLMProvider

class CustomLLM(LLMProvider):
    def generate(self, messages, **kwargs):
        # Your custom LLM logic
        pass

agent = Agent(CustomLLM())
```
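The `LLMProvider` interface above is all you need to plug in your own backend. As a minimal sketch (assuming only the `generate(self, messages, **kwargs)` method shown above and OpenAI-style message dicts; the exact return type expected by `Agent` may differ), a canned provider for offline testing could look like this:

```python
from axm import Agent, LLMProvider

class EchoLLM(LLMProvider):
    """Toy provider for offline tests (illustrative only, not part of axm)."""

    def generate(self, messages, **kwargs):
        # Assumes `messages` is a list of {"role": ..., "content": ...} dicts,
        # as used by the OpenAI-style providers this framework targets.
        last_user = next(
            (m["content"] for m in reversed(messages) if m.get("role") == "user"),
            "",
        )
        # Returns a plain string; adjust if Agent expects a richer response object.
        return f"echo: {last_user}"

agent = Agent(EchoLLM())
print(agent.run("ping"))  # e.g. "echo: ping"
```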
### Memory Management
```python
from axm import Agent
from axm.memory import ConversationMemory
agent = Agent("gpt-4", memory=ConversationMemory(max_messages=10))
agent.run("My name is Alice")
agent.run("What's my name?") # Will remember "Alice"
```
### Retry & Error Handling
```python
from axm import Agent

agent = Agent("gpt-4", max_retries=3, timeout=30)

@agent.tool
def risky_operation() -> str:
    """An operation that might fail"""
    # Will automatically retry on failure
    pass
```
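If a call still fails after the configured retries, it surfaces as a normal Python exception. The specific exception classes raised by axm are not listed here, so the sketch below uses a broad handler; narrow it to the framework's own exception types once you know them:

```python
from axm import Agent

agent = Agent("gpt-4", max_retries=3, timeout=30)

try:
    answer = agent.run("Summarize today's deployment checklist")
except Exception as exc:  # replace with axm's specific exception types if available
    # Fall back to a degraded path: log, alert, or serve a cached answer
    print(f"Agent call failed after retries: {exc}")
else:
    print(answer)
```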
## 🏗️ Architecture
AXM Agent is built on three core principles:
1. **Simplicity First** - Easy things should be easy, complex things should be possible
2. **Type Safety** - Full Pydantic integration for validation
3. **Composability** - Mix and match components to build what you need, as in the sketch below
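To make the composability point concrete, here is a minimal sketch that combines features already shown in this README (tools, conversation memory, and Pydantic-structured output). It reuses only the APIs demonstrated above; the prompt and tool are illustrative:

```python
from pydantic import BaseModel

from axm import Agent
from axm.memory import ConversationMemory

class Summary(BaseModel):
    topic: str
    key_points: list[str]

# Components from earlier sections, combined in one agent
agent = Agent("gpt-4", memory=ConversationMemory(max_messages=10))

@agent.tool
def word_count(text: str) -> int:
    """Count the words in a piece of text"""
    return len(text.split())

agent.run("Remember that my report is about container orchestration.")
summary = agent.run(
    "Summarize what my report is about in three key points",
    response_format=Summary,
)
print(summary.topic, summary.key_points)
```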
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📄 License
MIT License - see LICENSE file for details
## 🙏 Acknowledgments
Inspired by the best ideas from LangChain, CrewAI, and AutoGen, but designed for simplicity.
## 📖 Documentation
For full documentation, visit [our docs](https://github.com/AIxMath/axm-agent/docs).
## 🐛 Issues
Found a bug? Please [open an issue](https://github.com/AIxMath/axm-agent/issues).
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "axm-agent",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "ai, agent, llm, mcp, function-calling, openai, anthropic, langchain",
"author": null,
"author_email": "Your Name <your.email@example.com>",
"download_url": "https://files.pythonhosted.org/packages/0d/3f/5d0f6dc56b4f98a58cd44e24cc284fe673fb8c33f482574549b448fb09f6/axm_agent-0.2.3.tar.gz",
"platform": null,
"description": "# \ud83e\udd16 AXM Agent\n\n**A simple, elegant Python framework for building AI agents with decorators, MCP support, and powerful utilities**\n\n[](https://badge.fury.io/py/axm-agent)\n[](https://www.python.org/downloads/)\n[](https://opensource.org/licenses/MIT)\n\n## \u2728 Why AXM Agent?\n\nAXM Agent is designed with **simplicity** and **developer experience** in mind. Unlike heavyweight frameworks, AXM Agent lets you build powerful AI agents with just a few lines of code using elegant decorators and intuitive APIs.\n\n```python\nfrom axm import Agent, tool\n\n# Create an agent\nagent = Agent(\"gpt-4\")\n\n# Define tools with a simple decorator\n@agent.tool\ndef get_weather(city: str) -> str:\n \"\"\"Get the weather for a city\"\"\"\n return f\"Sunny in {city}\"\n\n# Run the agent\nresponse = agent.run(\"What's the weather in Paris?\")\nprint(response)\n```\n\n## \ud83d\ude80 Features\n\n- **\ud83c\udfaf Simple Decorator API** - Define tools and agents with intuitive decorators\n- **\ud83d\udd0c MCP Support** - Full Model Context Protocol integration\n- **\ud83d\udcde Function Calling** - Automatic function calling with type validation\n- **\ud83d\udccb Planning & Scheduling** - Built-in task planning and execution\n- **\u2705 Format-Constrained Output** - JSON, Pydantic models, and custom schemas\n- **\u26a1 Async & Streaming** - Full async support with streaming responses\n- **\ud83c\udfa8 Multi-Agent Systems** - Easy collaboration between multiple agents\n- **\ud83d\udd04 Memory & Context** - Conversation memory and context management\n- **\ud83d\udee0\ufe0f Multiple LLM Support** - OpenAI, Anthropic, and custom providers\n- **\ud83d\udcca Observable** - Built-in logging and tracing\n\n## \ud83d\udce6 Installation\n\n```bash\npip install axm-agent\n```\n\nFor OpenAI support:\n```bash\npip install axm-agent[openai]\n```\n\nFor Anthropic (Claude) support:\n```bash\npip install axm-agent[anthropic]\n```\n\nFor all providers:\n```bash\npip install axm-agent[all]\n```\n\n## \ud83d\udd11 Configuration\n\nAXM Agent uses environment variables for API credentials:\n\n### OpenAI\n```bash\nexport AXM_OPENAI_API_KEY=\"sk-...\"\n```\n\n### Anthropic (Claude)\n```bash\nexport AXM_ANTHROPIC_API_KEY=\"sk-ant-...\"\n```\n\n### OpenAI-Compatible Providers (DeepSeek, local LLMs, etc.)\n```bash\nexport AXM_OPENAI_COMPATIBLE_API_KEY=\"your-api-key\"\nexport AXM_OPENAI_COMPATIBLE_BASE_URL=\"https://your-endpoint.com/v1\"\n```\n\nYou can also pass credentials directly when creating agents:\n```python\nagent = Agent(\"gpt-4\", api_key=\"sk-...\", base_url=\"https://custom-endpoint.com/v1\")\n```\n\n## \ud83c\udf93 Quick Start\n\n### Basic Agent\n\n```python\nfrom axm import Agent\n\nagent = Agent(\"gpt-4\")\nresponse = agent.run(\"Tell me a joke\")\nprint(response)\n```\n\n### Agent with Tools\n\n```python\nfrom axm import Agent, tool\nimport datetime\n\nagent = Agent(\"gpt-4\")\n\n@agent.tool\ndef get_current_time() -> str:\n \"\"\"Get the current time\"\"\"\n return datetime.datetime.now().strftime(\"%H:%M:%S\")\n\n@agent.tool\ndef calculate(expression: str) -> float:\n \"\"\"Safely evaluate a mathematical expression\"\"\"\n return eval(expression, {\"__builtins__\": {}})\n\nresponse = agent.run(\"What time is it and what is 25 * 4?\")\nprint(response)\n```\n\n### Structured Output\n\n```python\nfrom axm import Agent\nfrom pydantic import BaseModel\n\nclass WeatherReport(BaseModel):\n city: str\n temperature: float\n conditions: str\n humidity: int\n\nagent = Agent(\"gpt-4\")\nreport = 
agent.run(\n \"Generate a weather report for Paris\",\n response_format=WeatherReport\n)\nprint(f\"{report.city}: {report.temperature}\u00b0C, {report.conditions}\")\n```\n\n### Planning Agent\n\n```python\nfrom axm import PlanningAgent\n\nagent = PlanningAgent(\"gpt-4\")\n\n# The agent will break down the task into steps and execute them\nresult = agent.execute_plan(\n \"Research the top 3 programming languages in 2025 and create a comparison\"\n)\n```\n\n### Multi-Agent System\n\n```python\nfrom axm import Agent, MultiAgent\n\nresearcher = Agent(\"gpt-4\", role=\"researcher\")\nwriter = Agent(\"gpt-4\", role=\"writer\")\ncritic = Agent(\"gpt-4\", role=\"critic\")\n\nteam = MultiAgent([researcher, writer, critic])\nresult = team.collaborate(\"Write an article about AI agents\")\n```\n\n### Async Support\n\n```python\nfrom axm import Agent\nimport asyncio\n\nasync def main():\n agent = Agent(\"gpt-4\")\n\n # Async execution\n response = await agent.arun(\"Tell me about async programming\")\n\n # Streaming\n async for chunk in agent.stream(\"Write a story\"):\n print(chunk, end=\"\", flush=True)\n\nasyncio.run(main())\n```\n\n### MCP Integration\n\n```python\nfrom axm import Agent\nfrom axm.mcp import MCPServer\n\n# Create an MCP server\nmcp = MCPServer()\n\n@mcp.tool\ndef search_database(query: str) -> list:\n \"\"\"Search the database\"\"\"\n return [\"result1\", \"result2\"]\n\n# Connect agent to MCP\nagent = Agent(\"gpt-4\", mcp_server=mcp)\nresponse = agent.run(\"Search for user data\")\n```\n\n## \ud83d\udcda Advanced Features\n\n### Custom LLM Provider\n\n```python\nfrom axm import Agent, LLMProvider\n\nclass CustomLLM(LLMProvider):\n def generate(self, messages, **kwargs):\n # Your custom LLM logic\n pass\n\nagent = Agent(CustomLLM())\n```\n\n### Memory Management\n\n```python\nfrom axm import Agent\nfrom axm.memory import ConversationMemory\n\nagent = Agent(\"gpt-4\", memory=ConversationMemory(max_messages=10))\n\nagent.run(\"My name is Alice\")\nagent.run(\"What's my name?\") # Will remember \"Alice\"\n```\n\n### Retry & Error Handling\n\n```python\nfrom axm import Agent\n\nagent = Agent(\"gpt-4\", max_retries=3, timeout=30)\n\n@agent.tool\ndef risky_operation() -> str:\n \"\"\"An operation that might fail\"\"\"\n # Will automatically retry on failure\n pass\n```\n\n## \ud83c\udfd7\ufe0f Architecture\n\nAXM Agent is built on three core principles:\n\n1. **Simplicity First** - Easy things should be easy, complex things should be possible\n2. **Type Safety** - Full Pydantic integration for validation\n3. **Composability** - Mix and match components to build what you need\n\n## \ud83e\udd1d Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request.\n\n## \ud83d\udcc4 License\n\nMIT License - see LICENSE file for details\n\n## \ud83d\ude4f Acknowledgments\n\nInspired by the best ideas from LangChain, CrewAI, and AutoGen, but designed for simplicity.\n\n## \ud83d\udcd6 Documentation\n\nFor full documentation, visit [our docs](https://github.com/AIxMath/axm-agent/docs)\n\n## \ud83d\udc1b Issues\n\nFound a bug? Please [open an issue](https://github.com/AIxMath/axm-agent/issues)\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A simple, elegant Python framework for building AI agents with decorators, MCP support, and powerful utilities",
"version": "0.2.3",
"project_urls": {
"Documentation": "https://github.com/AIxMath/axm-agent#readme",
"Homepage": "https://github.com/AIxMath/axm-agent",
"Issues": "https://github.com/AIxMath/axm-agent/issues",
"Repository": "https://github.com/AIxMath/axm-agent"
},
"split_keywords": [
"ai",
" agent",
" llm",
" mcp",
" function-calling",
" openai",
" anthropic",
" langchain"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "ce7cb352ced7f770b214ae53fad6723253192b382f1326f1ad1114a54cd27a49",
"md5": "2fcdc7a990c8e36bf630c01e5a58a4d0",
"sha256": "fe47bf4b9fe91644e1c7dd9c9a4e436e86b71d10822aadf84ead74377dc2b878"
},
"downloads": -1,
"filename": "axm_agent-0.2.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "2fcdc7a990c8e36bf630c01e5a58a4d0",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 26735,
"upload_time": "2025-10-30T08:09:36",
"upload_time_iso_8601": "2025-10-30T08:09:36.835139Z",
"url": "https://files.pythonhosted.org/packages/ce/7c/b352ced7f770b214ae53fad6723253192b382f1326f1ad1114a54cd27a49/axm_agent-0.2.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "0d3f5d0f6dc56b4f98a58cd44e24cc284fe673fb8c33f482574549b448fb09f6",
"md5": "02de7fe9e93f2698ad0050a6757ca207",
"sha256": "a7316b08dbc2172c05a29fa085a82f73219ad3fa2c54348d6b2fc1c3d5c1ad54"
},
"downloads": -1,
"filename": "axm_agent-0.2.3.tar.gz",
"has_sig": false,
"md5_digest": "02de7fe9e93f2698ad0050a6757ca207",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 38567,
"upload_time": "2025-10-30T08:09:38",
"upload_time_iso_8601": "2025-10-30T08:09:38.081512Z",
"url": "https://files.pythonhosted.org/packages/0d/3f/5d0f6dc56b4f98a58cd44e24cc284fe673fb8c33f482574549b448fb09f6/axm_agent-0.2.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-30 08:09:38",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "AIxMath",
"github_project": "axm-agent#readme",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "axm-agent"
}
```