# Loom Agent
<div align="center">
**Production-ready Python Agent framework with enterprise-grade reliability and observability**
[PyPI](https://pypi.org/project/loom-agent/)
[Python](https://www.python.org/downloads/)
[License: MIT](https://opensource.org/licenses/MIT)
[Tests](tests/)
[Documentation](docs/user/user-guide.md) | [API Reference](docs/user/api-reference.md) | [Contributing](CONTRIBUTING.md)
</div>
---
## 🎯 What is Loom Agent?
Loom Agent is a Python framework for building reliable AI agents with production-grade features like automatic retries, context compression, persistent memory, and comprehensive observability.
**Key Features:**
- 🚀 **Simple API** - Get started with just 3 lines of code
- 🔧 **Tool System** - Easy decorator-based tool creation
- 💾 **Persistent Memory** - Cross-session conversation history
- 📊 **Observability** - Structured logging with correlation IDs
- 🛡️ **Production Ready** - Circuit breakers, retries, and failover
- ⚡ **High Performance** - Parallel tool execution and smart context compression (40% faster in v0.0.3)
- 🌐 **Multi-LLM** - OpenAI, Anthropic, and more
- 🎯 **Unified Coordination** - Advanced multi-agent coordination system
- 🔄 **TT Recursive Mode** - Enhanced task handling with improved recursion
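The reliability bullets above lean on two standard patterns: retry on transient failures and circuit breaking to fail fast when a provider is down. As a concept sketch in plain Python (this illustrates the pattern, not Loom's actual API, whose names may differ), a minimal circuit breaker looks like:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `max_failures` consecutive errors."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        # While the breaker is open, fail fast until the reset window elapses.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open; failing fast")
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Wrapping each LLM call in something like `breaker.call(client.complete, prompt)` is the essence of the fail-fast behavior the feature list describes.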
## 📦 Installation
```bash
# Basic installation
pip install loom-agent
# With OpenAI support
pip install loom-agent[openai]
# With all features
pip install loom-agent[all]
```
**Requirements:** Python 3.11+
## 🚀 Quick Start
### Basic Agent
```python
import asyncio
from loom import agent
from loom.builtin.llms import MockLLM
async def main():
    # Create an agent
    my_agent = agent(llm=MockLLM())

    # Run it
    result = await my_agent.run("Hello, world!")
    print(result)

asyncio.run(main())
```
### With OpenAI
```python
import asyncio
from loom import agent

async def main():
    # Create agent with OpenAI
    my_agent = agent(
        provider="openai",
        model="gpt-4",
        api_key="sk-..."  # or set OPENAI_API_KEY env var
    )

    result = await my_agent.run("What is the capital of France?")
    print(result)

asyncio.run(main())
```
### Custom Tools
```python
import asyncio
from loom import agent, tool

@tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

async def main():
    my_agent = agent(
        provider="openai",
        model="gpt-4",
        tools=[add()]
    )

    result = await my_agent.run("What is 15 plus 27?")
    print(result)

asyncio.run(main())
```
## 📚 Documentation
- **[Getting Started](docs/user/getting-started.md)** - Your first Loom agent in 5 minutes
- **[User Guide](docs/user/user-guide.md)** - Complete usage documentation
- **[API Reference](docs/user/api-reference.md)** - Detailed API documentation
- **[Contributing Guide](CONTRIBUTING.md)** - How to contribute
## 🛠️ Core Components
### Agent Builder
```python
from loom import agent

my_agent = agent(
    provider="openai",   # LLM provider
    model="gpt-4",       # Model name
    tools=[...],         # Custom tools
    memory=...,          # Memory system
    callbacks=[...]      # Observability
)
```
### Tool Decorator
```python
from loom import tool

@tool(description="Fetch weather data")
def get_weather(city: str) -> dict:
    return {"temp": 72, "condition": "sunny"}
```
### Memory System
```python
from loom import agent, PersistentMemory

memory = PersistentMemory()  # Conversations persist across restarts
my_agent = agent(llm=..., memory=memory)
```
### Observability
```python
from loom import agent, ObservabilityCallback, MetricsAggregator

obs = ObservabilityCallback()
metrics = MetricsAggregator()

my_agent = agent(llm=..., callbacks=[obs, metrics])
```
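Structured logging with correlation IDs, as mentioned in the feature list, can be illustrated with the standard `logging` module. This is a generic sketch of the technique, not the internals of Loom's `ObservabilityCallback`:

```python
import logging
import uuid

class CorrelationFilter(logging.Filter):
    """Attach a per-run correlation ID to every log record."""

    def __init__(self, correlation_id=None):
        super().__init__()
        self.correlation_id = correlation_id or uuid.uuid4().hex

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True

def make_logger(name="agent"):
    # Note: calling this twice for the same name adds duplicate handlers;
    # a real setup would configure logging once at startup.
    logger = logging.getLogger(name)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '{"ts": "%(asctime)s", "cid": "%(correlation_id)s", "msg": "%(message)s"}'
    ))
    logger.addHandler(handler)
    logger.addFilter(CorrelationFilter())
    logger.setLevel(logging.INFO)
    return logger
```

Every log line emitted during one agent run then carries the same `cid`, so a single request can be traced across tool calls and retries.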
## 🎯 Supported Platforms
- **Python:** 3.11, 3.12
- **Operating Systems:** Linux, macOS, Windows
- **LLM Providers:** OpenAI, Anthropic, Ollama
## 🎊 What's New in v0.0.3
**Major Performance & Reliability Improvements:**
- ⚡ **40% Performance Boost** - Optimized execution pipeline and context management
- 🔧 **Unified Coordination System** - Advanced multi-agent coordination with improved reliability
- 🔄 **Enhanced TT Recursive Mode** - Better task handling and recursion management
- 🛡️ **Bug Fixes** - All known issues resolved; the build passes cleanly
- 📚 **Improved Documentation** - Comprehensive guides and API references
**Production-Ready Features:**
- ✅ Core agent execution (stable)
- ✅ Tool system and decorators (enhanced)
- ✅ Memory and context management (optimized)
- ✅ Multi-LLM provider support (OpenAI, Anthropic, Ollama)
- ✅ Structured logging and observability
- ✅ Circuit breakers and retry mechanisms
- ✅ Unified coordination for complex workflows
## 🤝 Contributing
We welcome contributions! Here's how to get started:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes and add tests
4. Run tests: `poetry run pytest`
5. Submit a pull request
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## 📊 Project Status
- **Version:** 0.0.3 (Alpha)
- **Status:** Active Development
- **Tests:** 18/18 passing ✅
- **Python:** 3.11+ supported
- **Performance:** 40% improvement over v0.0.2
## 🗺️ Roadmap
### v0.1.0 (Planned)
- API stabilization
- More examples and tutorials
- Performance optimizations
- Extended documentation
### v0.2.0 (Planned)
- Additional LLM providers
- Plugin system
- Web UI for debugging
### v1.0.0 (Goal)
- Stable API
- Production-grade quality
- Comprehensive documentation
- Community ecosystem
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🔗 Links
- **PyPI:** https://pypi.org/project/loom-agent/
- **GitHub:** https://github.com/kongusen/loom-agent
- **Issues:** https://github.com/kongusen/loom-agent/issues
- **Releases:** [v0.0.3](releases/v0.0.3.md) | [v0.0.2](releases/v0.0.2.md) | [v0.0.1](releases/v0.0.1.md)
## 🙏 Acknowledgments
Special thanks to the Claude Code project for inspiration and to all early testers and contributors!
---
**Built with ❤️ for the AI community**
<div align="center">
<sub>If you find Loom Agent useful, please consider giving it a ⭐ on GitHub!</sub>
</div>