| Field | Value |
|-------|-------|
| Name | alloy-ai |
| Version | 0.1.0 |
| Summary | Agent-first programming for Python - clean, Pythonic API for AI agents |
| Author | Alloy Team |
| License | MIT |
| Requires Python | >=3.8 |
| Upload time | 2025-07-29 07:26:48 |
| Requirements | None recorded |
# Alloy DSL
Agent-first programming for Python. A clean, Pythonic API for AI agents with first-class support for:
- **Agents as first-class citizens** with multi-provider support
- **Structured output** with automatic parsing and validation
- **Tool calling** and autonomous agentic behavior
- **Pipeline operations** with clean error handling
- **Design-by-contract** patterns
- **Natural language commands** as reusable templates
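
The "clean error handling" above is built on a Result type (`core/result.py` in the architecture below). As a rough illustration of the idea only — the names and fields here are hypothetical, not Alloy's actual API — a minimal Result looks like:

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

@dataclass
class Result(Generic[T]):
    """Holds either a value or an error, never both."""
    value: Optional[T] = None
    error: Optional[str] = None

    @property
    def ok(self) -> bool:
        return self.error is None

    def map(self, fn):
        """Apply fn only on success; propagate errors unchanged."""
        if not self.ok:
            return self
        try:
            return Result(value=fn(self.value))
        except Exception as exc:
            return Result(error=str(exc))

r = Result(value=3).map(lambda x: x * 2)
print(r.value)  # 6
```

The point of the pattern: downstream steps can be chained without try/except at every call site, because errors short-circuit through `map`.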
## Quick Start
### 1. Installation
```bash
# Clone and set up development environment
git clone <repo-url>
cd alloy-ai
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -e ".[dev]"
```
### 2. Set up API Keys
Create a `.env` file:
```bash
# .env
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
OPENROUTER_API_KEY=your_openrouter_key_here
```
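
Alloy reads these keys from the process environment. If you are not using a loader like python-dotenv, a quick sanity check that the expected variables are set (a sketch, not part of Alloy) can save you from a cryptic failure deep inside a provider call:

```python
import os

REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "OPENROUTER_API_KEY"]

def missing_keys(env=os.environ):
    """Return the names of any expected API keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Example with a partial environment: two keys are reported missing.
print(missing_keys({"OPENAI_API_KEY": "sk-..."}))
```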
### 3. Basic Usage
```python
from alloy import Agent
from dataclasses import dataclass
# Agentic by default - autonomous reasoning and tool use
# (tools are plain Python functions, defined as in section 5;
#  the awaits below assume an async context)
agent = Agent("gpt-4o", tools=[calculate_area, get_weather])
response = await agent("What's the area of a 5x3 room?")
print(response.value)  # Agent autonomously uses the calculate_area tool
# Simple chat mode (bypasses agentic reasoning)
response = await agent.async_chat("What's the capital of France?")
print(response.value) # Direct LLM response: "Paris"
# Synchronous usage
response = agent.sync("What's the area of a 5x3 room?")
print(response.value) # Blocks until completion
# Structured output works in both modes
@dataclass
class WeatherInfo:
    location: str
    temperature: int
    condition: str
weather_agent = Agent("gpt-4o", output_schema=WeatherInfo)
result = await weather_agent("Get weather for Paris: 22°C, sunny") # Agentic mode
weather = result.value # Automatically parsed WeatherInfo object
print(f"{weather.location}: {weather.temperature}°C, {weather.condition}")
```
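
Conceptually, structured output amounts to asking the model for JSON and hydrating the dataclass from the reply. A self-contained sketch of that last step — `parse_into` is a hypothetical helper, not Alloy's actual parser:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class WeatherInfo:
    location: str
    temperature: int
    condition: str

def parse_into(cls, raw: str):
    """Hydrate a dataclass from a model's JSON reply, ignoring extra keys."""
    data = json.loads(raw)
    names = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in names})

reply = '{"location": "Paris", "temperature": 22, "condition": "sunny"}'
weather = parse_into(WeatherInfo, reply)
print(f"{weather.location}: {weather.temperature}°C")  # Paris: 22°C
```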
### 4. Multi-Provider Support
```python
# Automatic provider detection
openai_agent = Agent("gpt-4o") # Uses OpenAI
claude_agent = Agent("claude-3.5-sonnet") # Uses Anthropic
router_agent = Agent("anthropic/claude-3.5-sonnet") # Uses OpenRouter
# All agents have the same interface
for agent in [openai_agent, claude_agent, router_agent]:
    result = await agent("Hello!")
    print(result.value)
```
### 5. Autonomous Agent Behavior
```python
# Define tools as regular Python functions
def calculate_area(length: float, width: float) -> float:
    """Calculate the area of a rectangle."""
    return length * width

def get_current_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 22°C"
# Agent automatically becomes agentic when given tools
agent = Agent("gpt-4o", tools=[calculate_area, get_current_weather])
# Agent autonomously reasons and uses tools as needed
result = await agent("What's the area of a 5x3 meter room and the weather in Paris?")
print(result.value)
# Agent will:
# 1. Use calculate_area(5, 3) → 15
# 2. Use get_current_weather("Paris") → "sunny and 22°C"
# 3. Synthesize: "The area is 15 square meters and the weather in Paris is sunny and 22°C"
```
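
Plain functions work as tools because their signatures and docstrings carry enough information to build a tool description for the model. A minimal sketch of that extraction using the standard `inspect` module (not Alloy's internal implementation):

```python
import inspect

def calculate_area(length: float, width: float) -> float:
    """Calculate the area of a rectangle."""
    return length * width

def tool_schema(fn):
    """Derive a simple tool description from a function's signature and docstring."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            name: getattr(p.annotation, "__name__", str(p.annotation))
            for name, p in sig.parameters.items()
        },
    }

schema = tool_schema(calculate_area)
print(schema["name"], schema["parameters"])
# calculate_area {'length': 'float', 'width': 'float'}
```

A real implementation would convert this into each provider's function-calling format (e.g. JSON Schema for OpenAI), but the signature-driven idea is the same.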
## Provider Support
| Provider | Models | Function Calling | Structured Output | Status |
|----------|--------|-----------------|-------------------|---------|
| **OpenAI** | GPT-4o, GPT-4.1, GPT-3.5+ | ✅ | ✅ JSON Schema | ✅ Ready |
| **Anthropic** | Claude 3+, Claude 4 | ✅ | ✅ Tool-based | ✅ Ready |
| **OpenRouter** | 50+ models | ✅ Variable* | ✅ Variable* | ✅ Ready |
| **xAI** | Grok models | ✅ | ✅ | 🚧 Stub |
| **Gemini** | Gemini Pro+ | ✅ | ✅ | 🚧 Stub |
| **Ollama** | Local models | ⚠️ Limited | ⚠️ Limited | 🚧 Stub |
*Variable support depends on the underlying model
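
The automatic provider detection shown in section 4 can be sketched as a simple match on the model name — hypothetical logic, not the actual `registry.py` implementation:

```python
def detect_provider(model: str) -> str:
    """Guess the provider from a model name; 'vendor/model' names route to OpenRouter."""
    if "/" in model:                 # e.g. "anthropic/claude-3.5-sonnet"
        return "openrouter"
    if model.startswith("gpt-"):     # e.g. "gpt-4o"
        return "openai"
    if model.startswith("claude"):   # e.g. "claude-3.5-sonnet"
        return "anthropic"
    raise ValueError(f"No provider registered for model {model!r}")

print(detect_provider("gpt-4o"))                       # openai
print(detect_provider("anthropic/claude-3.5-sonnet"))  # openrouter
```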
## Development
### Running Tests
```bash
# Quick smoke tests
make test-quick
# All integration tests (requires API keys)
make test
# Specific test categories
make test-basic # Provider basics
make test-structured # Structured output
make test-agentic # Autonomous agents
# Provider-specific tests
make test-openai
make test-anthropic
make test-openrouter
```
### Code Quality
```bash
make format # Format code with black/isort
make lint # Check with ruff
make type-check # Run mypy
```
### Using pytest directly
```bash
# Run all integration tests
pytest tests/ -m requires_api_key -v
# Run specific tests
pytest tests/test_providers_integration.py::TestProviderBasics -v
pytest tests/ -k "openai and basic" -v
```
See [TESTING.md](TESTING.md) for comprehensive testing documentation.
## Architecture
```
src/alloy/
├── core/                    # Core DSL components
│   ├── agent.py             # Agent class with dual interface
│   ├── result.py            # Result monad for error handling
│   ├── memory.py            # Conversation & explicit memory
│   ├── command.py           # Natural language commands
│   ├── agentic_loop.py      # Autonomous agent behavior
│   └── contracts.py         # Design-by-contract (TODO)
├── providers/               # Multi-provider system
│   ├── base.py              # Abstract provider interface
│   ├── openai_provider.py
│   ├── anthropic_provider.py
│   ├── openrouter_provider.py
│   └── registry.py          # Provider auto-detection
└── utilities/               # Helper utilities
```
## Roadmap
- ✅ Multi-provider system with capability detection
- ✅ Structured output with dataclass parsing
- ✅ Agentic loop with tool calling
- ✅ Memory system and conversation history
- 🚧 Design-by-contract decorators (`@require`, `@ensure`, `@invariant`)
- 🚧 Pipeline system with `>>` operator
- 🚧 Complete xAI, Gemini, Ollama providers
- 🚧 Full Alloy language parser/interpreter
- 🚧 Claude Code integration
- 🚧 Advanced agent patterns (ReAct, multi-agent coordination)
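
Since `contracts.py` is still a TODO, here is only a hypothetical sketch of the shape the planned `@require`/`@ensure` decorators could take, following the classic design-by-contract pattern:

```python
import functools

def require(predicate, message="precondition failed"):
    """Check a predicate on the arguments before the wrapped call runs."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(message)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require(lambda length, width: length > 0 and width > 0, "sides must be positive")
def calculate_area(length: float, width: float) -> float:
    return length * width

print(calculate_area(5, 3))  # 15
```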
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes with tests
4. Run the test suite: `make test`
5. Submit a pull request
## License
MIT License - see [LICENSE](LICENSE) for details.