# Hyper LLM
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
A unified Python interface for multiple LLM providers with intelligent caching and cost-saving interactive development mode. **Stop burning money on API calls during development** - use interactive mode to prototype with real LLMs at zero cost!
## 🚀 Why Hyper LLM?
**Save Development Costs**: Interactive mode eliminates expensive API calls during prototyping. Copy prompts to your clipboard, paste responses back, and build your application without the API meter running.
**Provider Freedom**: Switch between OpenAI, Anthropic, local Ollama, or any custom API with a single line of code. No vendor lock-in, no rewriting.
**Smart Caching**: Never pay for the same prompt twice. Responses are automatically cached and reused across development and production.
**Developer Experience**: Built by developers, for developers. Clipboard integration, file I/O, and seamless workflows that just work.
## ✨ Features

- 🔄 **Unified Interface**: One API for all LLM providers - OpenAI, Anthropic, Ollama, and more
- 💰 **Cost-Saving Interactive Mode**: Develop without API costs using copy-paste workflow
- 💾 **Intelligent Caching**: Automatic response caching with prompt-based deduplication
- 🔌 **Extensible Provider System**: Easy to add new providers or custom APIs
- 📋 **Clipboard Integration**: Automatic prompt copying for seamless workflows
- 📁 **File I/O Support**: Read prompts from and write responses to files
- 🛡️ **Production Ready**: Robust error handling, validation, and monitoring
- 🎯 **Zero Configuration**: Works out of the box with sensible defaults
## 📦 Installation

```bash
# Basic installation
pip install hyperllm

# With specific provider support
pip install hyperllm[openai]      # OpenAI GPT models
pip install hyperllm[anthropic]   # Anthropic Claude models
pip install hyperllm[clipboard]   # Enhanced clipboard features

# Install everything
pip install hyperllm[all]
```
## 🚀 Quick Start

```python
from hyperllm import HyperLLM

# Initialize once, use everywhere
interface = HyperLLM()

# Configure your preferred provider
interface.set_llm('openai',
                  api_key='your-api-key',
                  model='gpt-4')

# Get responses (cached automatically)
response = interface.get_response("Explain quantum computing simply")
print(response)

# Subsequent identical prompts use cache - zero cost!
cached_response = interface.get_response("Explain quantum computing simply")
```
## 🔥 Interactive Development Mode
**The game-changer for LLM development costs:**
```bash
# Enable interactive mode - save money during development!
export LLM_INTERACTIVE_MODE=true
```
```python
from hyperllm import HyperLLM

interface = HyperLLM()
interface.set_llm('openai')  # Provider doesn't matter in interactive mode

# This copies prompt to clipboard and waits for your response
response = interface.get_response("Write a Python function to sort a list")

# Workflow:
# 1. Prompt automatically copied to clipboard ✨
# 2. Paste into ChatGPT/Claude/etc.
# 3. Copy response and paste back
# 4. Response cached for production use
# 5. Zero API costs during development! 💰
```
**Perfect for:**
- Prompt engineering and iteration
- Building demos and prototypes
- Testing different prompt variations
- Learning and experimentation
## 🌐 Supported Providers
### OpenAI GPT Models
```python
interface.set_llm('openai',
                  api_key='sk-...',
                  model='gpt-4o',  # or gpt-3.5-turbo, gpt-4, etc.
                  temperature=0.7,
                  max_tokens=2000)
```
### Anthropic Claude
```python
interface.set_llm('anthropic',  # or 'claude'
                  api_key='sk-ant-...',
                  model='claude-3-sonnet-20240229',
                  max_tokens=2000)
```
### Local Ollama
```python
interface.set_llm('ollama',
                  base_url='http://localhost:11434',
                  model='llama2',  # or codellama, mistral, etc.
                  temperature=0.7)
```
### Custom APIs (OpenAI Compatible)
```python
interface.set_llm('custom',
                  base_url='https://api.your-provider.com',
                  api_key='your-key',
                  model='your-model',
                  headers={'Custom-Header': 'value'})
```
### Check Available Providers
```python
from hyperllm import get_available_providers

providers = get_available_providers()
for name, info in providers.items():
    status = "✅ Available" if info['available'] else "❌ Missing deps"
    print(f"{name}: {status}")
```
## 💾 Smart Caching System
Responses are automatically cached using prompt hashing:
```python
# First call - hits API/interactive mode
response1 = interface.get_response("What is machine learning?")

# Subsequent calls - instant cache retrieval, zero cost
response2 = interface.get_response("What is machine learning?")

# Manage cache
stats = interface.get_cache_stats()
print(f"Cached responses: {stats['total_entries']}")

interface.clear_cache()  # Clean slate when needed
```
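The exact key format is internal to hyperllm, but the idea of prompt-based deduplication can be sketched with a plain hash: byte-identical prompt strings map to the same key, so a repeated prompt is served from the cache. A hypothetical illustration (`cache_key` is not part of the hyperllm API):

```python
import hashlib

def cache_key(prompt: str) -> str:
    # Hypothetical sketch: a stable digest of the prompt text serves as
    # the cache key, so identical prompts deduplicate to one entry.
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

cache = {}
cache[cache_key("What is machine learning?")] = "ML is..."  # stored on first call

# A repeated prompt produces the same key and hits the cache;
# even a trailing space yields a different key (a cache miss).
assert cache_key("What is machine learning?") in cache
assert cache_key("What is machine learning? ") not in cache
```

One consequence of keying on exact text: rephrasing a prompt, or even changing whitespace, creates a fresh cache entry.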
## 🛠️ Advanced Usage
### One-Line Setup
```python
from hyperllm import create_interface

# Create and configure in one step
interface = create_interface('anthropic',
                             cache_dir='/custom/cache',
                             api_key='sk-ant-...',
                             model='claude-3-opus-20240229')
```
### File-Based Workflows
```bash
# Configure file I/O for automated workflows
export PROMPT_OUTPUT_FILE="/tmp/prompt.txt"
export RESPONSE_INPUT_FILE="/tmp/response.txt"
export LLM_INTERACTIVE_MODE=true
```
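With these variables set, prompts and responses flow through plain text files, so another process (or a human with an editor) can service them. A minimal stand-alone simulation of that handshake, assuming the files behave as their names suggest (hyperllm itself drives the real loop when `LLM_INTERACTIVE_MODE` is enabled):

```python
import os
import tempfile

# Simulated file-based handshake (illustration only).
tmp = tempfile.mkdtemp()
prompt_file = os.path.join(tmp, "prompt.txt")      # PROMPT_OUTPUT_FILE
response_file = os.path.join(tmp, "response.txt")  # RESPONSE_INPUT_FILE

# 1. The library writes the prompt out...
with open(prompt_file, "w") as f:
    f.write("Write a haiku about caching")

# 2. ...an external tool reads it and writes an answer...
with open(prompt_file) as f:
    prompt = f.read()
with open(response_file, "w") as f:
    f.write(f"(answer to: {prompt})")

# 3. ...and the library reads the response back.
with open(response_file) as f:
    response = f.read()
```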
### Provider Comparison
```python
# Easy A/B testing between providers
providers = [
    ('openai', {'model': 'gpt-4'}),
    ('anthropic', {'model': 'claude-3-sonnet-20240229'}),
    ('ollama', {'model': 'llama2'})
]

for name, config in providers:
    interface.set_llm(name, **config)
    response = interface.get_response("Compare these approaches...")
    print(f"{name}: {response[:100]}...")
```
### Custom Provider Registration
```python
from hyperllm.providers import register_provider, BaseLLMProvider

class MyProvider(BaseLLMProvider):
    def validate_config(self):
        return True

    def _setup_client(self):
        pass

    def generate_response(self, prompt, **kwargs):
        return f"Custom response for: {prompt}"

# Register and use
register_provider('myprovider', MyProvider)
interface.set_llm('myprovider')
```
## 🎯 Real-World Examples
### Development to Production Pipeline
```python
import os
from hyperllm import HyperLLM

# Development phase - zero API costs
os.environ['LLM_INTERACTIVE_MODE'] = 'true'

interface = HyperLLM()
interface.set_llm('openai')

# Build your prompts interactively
prompts = [
    "Generate a REST API design for a blog",
    "Write error handling for user authentication",
    "Create database schema for user profiles"
]

for prompt in prompts:
    response = interface.get_response(prompt)
    # Responses cached automatically

# Production deployment - use cached responses + API
os.environ['LLM_INTERACTIVE_MODE'] = 'false'
interface.set_llm('openai', api_key=os.environ['OPENAI_API_KEY'])

# Cached responses used when available, new prompts hit API
response = interface.get_response("Generate a REST API design for a blog")  # From cache!
new_response = interface.get_response("Add OAuth2 to the API")  # New API call
```
### Multi-Model Analysis
```python
# Compare responses across providers effortlessly
interface = HyperLLM()
test_prompt = "Explain the trade-offs of microservices architecture"

# provider_configs: a dict of per-provider kwargs, defined elsewhere
results = {}
for provider in ['openai', 'anthropic', 'ollama']:
    try:
        interface.set_llm(provider, **provider_configs[provider])
        results[provider] = interface.get_response(test_prompt)
    except Exception as e:
        results[provider] = f"Error: {e}"

# Analyze differences in responses
for provider, response in results.items():
    print(f"\n=== {provider.title()} ===")
    print(response)
```
## 📊 Use Cases

- **🧪 Prototype Development**: Build LLM features without burning budget
- **🔬 Prompt Engineering**: Iterate on prompts using interactive mode
- **⚡ Production Applications**: Seamless transition from development to production
- **📈 A/B Testing**: Compare providers and models effortlessly
- **🎓 Learning & Experimentation**: Explore LLMs without cost concerns
- **🏗️ Enterprise Integration**: Unified interface for multiple LLM services
## 📈 Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `LLM_INTERACTIVE_MODE` | Enable interactive development mode | `false` |
| `PROMPT_OUTPUT_FILE` | File to write prompts (interactive mode) | None |
| `RESPONSE_INPUT_FILE` | File to read responses (interactive mode) | None |
| `OPENAI_API_KEY` | Default OpenAI API key | None |
| `ANTHROPIC_API_KEY` | Default Anthropic API key | None |
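Environment variables are plain strings, so a boolean flag like `LLM_INTERACTIVE_MODE` has to be parsed. A sketch of one reasonable interpretation (the library's actual parsing may differ; `interactive_mode_enabled` is a hypothetical helper, not part of the hyperllm API):

```python
import os

def interactive_mode_enabled(env=None) -> bool:
    # Hypothetical helper: common truthy spellings enable the flag;
    # an unset variable falls back to the documented default, "false".
    env = os.environ if env is None else env
    return env.get("LLM_INTERACTIVE_MODE", "false").strip().lower() in ("1", "true", "yes")

print(interactive_mode_enabled({"LLM_INTERACTIVE_MODE": "true"}))   # True
print(interactive_mode_enabled({"LLM_INTERACTIVE_MODE": "FALSE"}))  # False
print(interactive_mode_enabled({}))                                 # False
```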
## 🔧 Error Handling
```python
from hyperllm import HyperLLM
from hyperllm.providers.base import ConfigurationError, APIError

try:
    interface = HyperLLM()
    interface.set_llm('openai', api_key='invalid-key')
    response = interface.get_response("Test prompt")
except ConfigurationError as e:
    print(f"Configuration error: {e}")
except APIError as e:
    print(f"API error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
```
## 🤝 Contributing
We welcome contributions! Hyper LLM is designed to be extensible and community-driven.
### Quick Start for Contributors
1. **Fork and Clone**
   ```bash
   git clone https://github.com/your-username/hyperllm.git
   cd hyperllm
   ```

2. **Set Up Development Environment**
   ```bash
   # Create virtual environment
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate

   # Install in development mode with all dependencies
   pip install -e ".[all,dev]"
   ```

3. **Run Tests**
   ```bash
   pytest tests/ -v
   ```
### 🛠️ Development Setup
```bash
# Install development dependencies
pip install -e ".[dev]"

# Run tests with coverage
pytest tests/ --cov=hyperllm --cov-report=html

# Run linting
flake8 hyperllm tests
black hyperllm tests

# Type checking
mypy hyperllm
```
### 📋 Contributing Guidelines
#### Adding New Providers
We're always looking for new LLM provider integrations! Here's how to add one:
1. **Create Provider File**
   ```python
   # hyperllm/providers/newprovider_provider.py
   from .base import BaseLLMProvider

   class NewProviderProvider(BaseLLMProvider):
       def validate_config(self):
           # Validate configuration
           pass

       def _setup_client(self):
           # Initialize client
           pass

       def generate_response(self, prompt, **kwargs):
           # Implement response generation
           pass
   ```

2. **Register Provider**
   ```python
   # Add to hyperllm/providers/__init__.py
   from .newprovider_provider import NewProviderProvider

   PROVIDER_REGISTRY['newprovider'] = NewProviderProvider
   ```

3. **Add Tests**
   ```python
   # tests/test_providers/test_newprovider.py
   import unittest
   from hyperllm.providers.newprovider_provider import NewProviderProvider

   class TestNewProviderProvider(unittest.TestCase):
       def test_validation(self):
           # Test provider validation
           pass
   ```
4. **Update Documentation**
- Add usage example to README
- Document configuration options
- Add to provider comparison table
#### Code Style
- **Black** for code formatting
- **flake8** for linting
- **mypy** for type hints
- **pytest** for testing
- **Google style** docstrings
#### Commit Guidelines
```bash
# Use conventional commits
git commit -m "feat: add support for NewProvider LLM"
git commit -m "fix: handle timeout errors in OpenAI provider"
git commit -m "docs: add examples for interactive mode"
```
### 🐛 Bug Reports
Found a bug? Please open an issue with:
1. **Environment details** (Python version, OS, package version)
2. **Minimal reproduction code**
3. **Expected vs actual behavior**
4. **Error messages/stack traces**
### 💡 Feature Requests
Have an idea? We'd love to hear it! Open an issue with:
1. **Use case description**
2. **Proposed API/interface**
3. **Benefits and alternatives considered**
### 🎯 Good First Issues
Look for issues labeled `good-first-issue`:
- Adding new provider integrations
- Improving error messages
- Adding configuration examples
- Writing documentation
- Adding tests for edge cases
### 📚 Development Resources
- **Provider Base Class**: `hyperllm/providers/base.py`
- **Main Interface**: `hyperllm/interface.py`
- **Test Examples**: `tests/test_providers/`
- **Integration Examples**: `examples/`
### 🏆 Contributors
Thanks to all contributors who make Hyper LLM better!
<!-- Will be auto-generated -->
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🔗 Links
- **PyPI**: https://pypi.org/project/hyperllm/
- **Documentation**: https://github.com/hyper-swe/hyperllm#readme
- **Issues**: https://github.com/hyper-swe/hyperllm/issues
- **Changelog**: https://github.com/hyper-swe/hyperllm/releases
## ⭐ Support the Project
If Hyper LLM saves you development time and costs, please:
- ⭐ **Star the repository**
- 🐛 **Report bugs** and suggest features
- 🤝 **Contribute** new providers or improvements
- 📢 **Share** with other developers

---

**Made with ❤️ by developers who got tired of expensive LLM development cycles.**
*Hyper LLM - One interface, all providers, zero waste.*