# SimpleModelRouter
A Python library for interfacing with various Large Language Model (LLM) inference endpoints, including OpenAI, Anthropic, and Ollama. The library provides a unified, async-first interface for interacting with different LLM providers.
## Features
- Support for multiple LLM providers:
  - OpenAI (GPT-3.5, GPT-4)
  - Anthropic (Claude)
  - Ollama (local models)
- Async HTTP support using httpx
- Streaming responses for real-time text generation
- Unified interface across providers
- Type hints and comprehensive documentation
- Configurable API endpoints and models
- Error handling and retries
- Resource cleanup and connection management
## Installation
### Using pip
```bash
pip install simplemodelrouter
```
### Using Poetry (recommended)
```bash
poetry add simplemodelrouter
```
For development:
```bash
# Clone the repository
git clone https://github.com/yourusername/simplemodelrouter.git
cd simplemodelrouter
# Install Poetry if you haven't already
curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies including development tools
poetry install
```
## Quick Start
```python
import asyncio
from simplemodelrouter import OpenAIProvider, Message
async def main():
    provider = OpenAIProvider(api_key="your-api-key")
    messages = [Message(role="user", content="Hello!")]

    response = await provider.chat(messages)
    print(response.message.content)

    await provider.close()

asyncio.run(main())
```
## Detailed Usage
### Provider Configuration
Each provider can be configured with:
- API key (required for OpenAI and Anthropic)
- Base URL (optional, for custom deployments)
- Default model (optional)
```python
# OpenAI with custom configuration
openai = OpenAIProvider(
api_key="your-api-key",
base_url="https://api.custom-deployment.com/v1",
default_model="gpt-4"
)
# Anthropic with default configuration
anthropic = AnthropicProvider(
api_key="your-api-key"
)
# Ollama for local deployment
ollama = OllamaProvider(
base_url="http://localhost:11434",
default_model="llama2"
)
```
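Because every provider exposes the same interface, the choice of backend can be made at runtime. A minimal sketch follows; the `make_provider` helper and its `name` argument are illustrative, not part of the library:

```python
from simplemodelrouter import OpenAIProvider, AnthropicProvider, OllamaProvider

# Hypothetical helper: pick a provider by name at runtime.
def make_provider(name: str, api_key: str | None = None):
    if name == "openai":
        return OpenAIProvider(api_key=api_key, default_model="gpt-4")
    if name == "anthropic":
        return AnthropicProvider(api_key=api_key)
    if name == "ollama":
        return OllamaProvider(base_url="http://localhost:11434", default_model="llama2")
    raise ValueError(f"Unknown provider: {name}")

provider = make_provider("ollama")
```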
### Chat Interface
The chat interface supports conversations with multiple messages:
```python
messages = [
    Message(role="system", content="You are a helpful assistant."),
    Message(role="user", content="What's the weather like?"),
    Message(role="assistant", content="I don't have access to current weather data."),
    Message(role="user", content="What can you help me with?")
]

response = await provider.chat(
    messages=messages,
    temperature=0.7,
    stream=False
)
```
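To continue the conversation, append the assistant's reply and the next user turn before calling `chat` again. This sketch assumes `response.message` is a `Message` you can append directly; if not, reconstruct one from its role and content:

```python
# Carry the full history forward in `messages`.
messages.append(response.message)
messages.append(Message(role="user", content="Summarize our conversation so far."))

followup = await provider.chat(messages, temperature=0.7)
print(followup.message.content)
```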
### Streaming Responses
All providers support streaming for both chat and completion endpoints:
```python
# Stream a chat response chunk by chunk
async for chunk in await provider.chat(messages, stream=True):
    print(chunk.message.content, end="", flush=True)

# Stream a plain completion (prompt defined here for completeness)
prompt = "Tell me a story."
async for chunk in await provider.complete(prompt, stream=True):
    print(chunk.text, end="", flush=True)
```
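Putting it together, a minimal end-to-end streaming script might look like this (a sketch, assuming chunks are shaped as above):

```python
import asyncio
from simplemodelrouter import OpenAIProvider, Message

async def main():
    provider = OpenAIProvider(api_key="your-api-key")
    messages = [Message(role="user", content="Write a haiku about rivers.")]
    try:
        # Each chunk carries a partial message; print it as it arrives.
        async for chunk in await provider.chat(messages, stream=True):
            print(chunk.message.content, end="", flush=True)
        print()
    finally:
        await provider.close()

asyncio.run(main())
```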
### Error Handling
The library provides consistent error handling across providers:
```python
import httpx

try:
    response = await provider.chat(messages)
except httpx.HTTPStatusError as e:
    print(f"API error: {e.response.status_code}")
except httpx.RequestError as e:
    print(f"Network error: {e}")
finally:
    await provider.close()
```
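If you want explicit control over retries on top of whatever the library does internally, a simple backoff loop over transient network errors is easy to layer on. The `chat_with_retries` helper below is illustrative, not part of the library:

```python
import asyncio
import httpx

async def chat_with_retries(provider, messages, attempts: int = 3, backoff: float = 1.0):
    """Retry transient network failures with exponential backoff (illustrative helper)."""
    for attempt in range(attempts):
        try:
            return await provider.chat(messages)
        except httpx.RequestError:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(backoff * 2 ** attempt)
```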
### Resource Management
Always close providers when done to clean up resources:
```python
provider = OpenAIProvider(api_key="your-api-key")
try:
    ...  # use the provider
finally:
    await provider.close()
```
Or use async context managers (coming soon):
```python
async with OpenAIProvider(api_key="your-api-key") as provider:
    response = await provider.chat(messages)
```
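Until native support lands, you can get the same ergonomics today with `contextlib.asynccontextmanager` (an illustrative wrapper, not part of the library):

```python
from contextlib import asynccontextmanager
from simplemodelrouter import OpenAIProvider

@asynccontextmanager
async def openai_provider(api_key: str):
    # Ensure close() runs even if the body raises.
    provider = OpenAIProvider(api_key=api_key)
    try:
        yield provider
    finally:
        await provider.close()

# Usage:
# async with openai_provider("your-api-key") as provider:
#     response = await provider.chat(messages)
```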
## Examples
Check out the `examples/` directory for more detailed examples:
- `chat_comparison.py`: Compare responses from different providers
- `streaming_example.py`: Demonstrate streaming capabilities
- `error_handling.py`: Show error handling scenarios
## Development
1. Clone the repository:
```bash
git clone https://github.com/yourusername/simplemodelrouter.git
cd simplemodelrouter
```
2. Install Poetry:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
3. Install dependencies:
```bash
poetry install
```
4. Activate the virtual environment:
```bash
poetry shell
```
5. Run tests:
```bash
poetry run pytest
```
6. Format code:
```bash
poetry run black simplemodelrouter
poetry run isort simplemodelrouter
```
7. Type check:
```bash
poetry run mypy simplemodelrouter
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
Apache-2.0