# LLM Fallbacks
[![Python Package](https://github.com/bodencrouch/llm-fallbacks/actions/workflows/python-package.yml/badge.svg)](https://github.com/bodencrouch/llm-fallbacks/actions/workflows/python-package.yml)
[![Python Publish](https://github.com/bodencrouch/llm-fallbacks/actions/workflows/python-publish.yml/badge.svg)](https://github.com/bodencrouch/llm-fallbacks/actions/workflows/python-publish.yml)
[![Daily Config Update](https://github.com/bodencrouch/llm-fallbacks/actions/workflows/daily-config-update.yml/badge.svg)](https://github.com/bodencrouch/llm-fallbacks/actions/workflows/daily-config-update.yml)
A Python library for managing fallback mechanisms for Large Language Model (LLM) API calls, built on [LiteLLM](https://github.com/BerriAI/liteLLM). It helps you handle API failures gracefully by providing alternative models to try when a primary model fails.
## Features
- **Comprehensive Model Management**: Access to thousands of LLM models across multiple providers
- **Intelligent Fallback Strategies**: Automatic fallback configuration based on model capabilities and cost
- **Cost Optimization**: Built-in cost calculation and model sorting by price and performance
- **Multi-Modal Support**: Chat, completion, embedding, vision, audio, and other model types
- **Provider Management**: Custom provider configuration and API key management
- **LiteLLM Integration**: Seamless integration with LiteLLM proxy for production deployments
- **GUI Interface**: Interactive model filtering and selection interface
- **Configuration Export**: Generate ready-to-use LiteLLM YAML configurations
## Installation
```bash
pip install llm-fallbacks
```
### Development Installation
```bash
git clone https://github.com/bodencrouch/llm-fallbacks.git
cd llm-fallbacks
pip install -e .
```
## Quick Start
### Clone the Repository
```bash
git clone https://github.com/th3w1zard1/llm_fallbacks.git
cd llm_fallbacks
uv sync
uv run src/tests/test_core.py
```
### Basic Usage
```python
from llm_fallbacks import get_chat_models, get_fallback_list
# Get all available chat models
chat_models = get_chat_models()
# Get fallback models for a specific model
fallbacks = get_fallback_list("gpt-4", model_type="chat")
print(f"Fallback models for GPT-4: {fallbacks}")
```
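If you also make the calls through LiteLLM, the returned list plugs into LiteLLM's client-side fallback support. A minimal sketch, assuming LiteLLM's `fallbacks` parameter on `completion` (a list of alternative model names tried in order):

```python
import litellm

# Primary model first; on failure, LiteLLM walks the fallback list in order.
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    fallbacks=fallbacks,  # the list returned by get_fallback_list above
)
print(response.choices[0].message.content)
```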
### Model Filtering
```python
from llm_fallbacks import filter_models
# Get free chat models only
free_models = filter_models(
    model_type="chat",
    free_only=True,
)

# Get models with specific capabilities
vision_models = filter_models(
    model_type="chat",
    vision=True,
)
```
### Cost Analysis Examples
Sort the models by cost:
```python
from llm_fallbacks import get_litellm_models, sort_models_by_cost_and_limits
# Get all models with cost information
models = get_litellm_models()
# Sort models by cost (cheapest first), keeping only free models
sorted_models = sort_models_by_cost_and_limits(models, free_only=True)
print(repr(sorted_models))
```
Calculate cost for a specific model:
```python
from llm_fallbacks import get_litellm_models, calculate_cost_per_token
model_spec = get_litellm_models()["gpt-5"]
cost_per_token = calculate_cost_per_token(model_spec)
print(f"Cost per token: ${cost_per_token:.6f}")
```
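With a per-token rate in hand, estimating a request's cost is simple arithmetic. A back-of-the-envelope sketch, assuming the returned rate is a single blended per-token figure (the token counts here are illustrative):

```python
# Illustrative token counts for one request.
prompt_tokens = 1_200
completion_tokens = 300

estimated_cost = cost_per_token * (prompt_tokens + completion_tokens)
print(f"Estimated request cost: ${estimated_cost:.4f}")
```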
### Configuration Generation
```python
import yaml

from llm_fallbacks.generate_configs import to_litellm_config_yaml

# Generate a LiteLLM configuration
config = to_litellm_config_yaml(
    providers=[],  # your custom providers
    free_only=True,
)

# Save it to a YAML file
with open("litellm_config.yaml", "w") as f:
    yaml.dump(config, f)
```
Or run `generate_configs.py` directly:
```bash
uv run src/generate_configs.py
```
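To sanity-check the generated file, load it back and list the configured deployments. A quick sketch, assuming the conventional LiteLLM proxy layout with a top-level `model_list` key:

```python
import yaml

with open("litellm_config.yaml") as f:
    config = yaml.safe_load(f)

# Each model_list entry maps a public model_name to provider-specific params.
for entry in config.get("model_list", [])[:5]:
    print(entry.get("model_name"))
```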
## Core Components
### 1. Model Management (`core.py`)
- **`get_litellm_models()`**: Retrieve all available LiteLLM models with specifications
- **`get_chat_models()`**: Get models supporting chat completion
- **`get_completion_models()`**: Get models supporting text completion
- **`get_embedding_models()`**: Get models supporting text embeddings
- **`get_vision_models()`**: Get models supporting vision tasks
- **`get_audio_models()`**: Get models supporting audio processing
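Taken together, these getters make it easy to survey the catalog. A small sketch, assuming each getter returns a sized collection (for example, a mapping of model names to specs):

```python
from llm_fallbacks import (
    get_audio_models,
    get_chat_models,
    get_completion_models,
    get_embedding_models,
    get_vision_models,
)

# Count the models available in each capability bucket.
for label, models in {
    "chat": get_chat_models(),
    "completion": get_completion_models(),
    "embedding": get_embedding_models(),
    "vision": get_vision_models(),
    "audio": get_audio_models(),
}.items():
    print(f"{label}: {len(models)} models")
```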
### 2. Configuration (`config.py`)
- **Model Specifications**: Comprehensive model metadata including capabilities, costs, and limits
- **Provider Configuration**: Custom provider setup for private or specialized models
- **Fallback Strategies**: Intelligent fallback configuration based on model compatibility
### 3. Configuration Generation (`generate_configs.py`)
- **LiteLLM YAML Export**: Generate production-ready LiteLLM proxy configurations
- **Fallback Mapping**: Automatic fallback model assignment based on capabilities
- **Cost Optimization**: Prioritize models by cost and performance
### 4. Interactive Interface (`__main__.py`)
- **GUI Application**: Tkinter-based interface for model exploration (experimental)
- **Advanced Filtering**: Multiple filtering methods (regex, quantile, outlier detection)
- **Data Export**: Export filtered results to various formats
## Configuration Files
The library generates several configuration files that are stored in the `configs/` directory:
- **`litellm_config.yaml`**: Full LiteLLM configuration with all models
- **`litellm_config_free.yaml`**: Configuration with free models only
- **`all_models.json`**: Complete model database in JSON format
- **`free_chat_models.json`**: Free chat models only
- **`custom_providers.json`**: Custom provider configurations
These files are automatically updated daily at 12:00 AM UTC via GitHub Actions to ensure you always have the latest model information and configurations.
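Because the outputs are plain JSON and YAML, consuming them needs no special tooling. A minimal sketch, assuming `free_chat_models.json` deserializes to a collection of model entries and paths are relative to the repository root:

```python
import json
from pathlib import Path

free_chat = json.loads(Path("configs/free_chat_models.json").read_text())
print(f"{len(free_chat)} free chat models in the latest daily snapshot")
```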
## Advanced Features
### Custom Provider Configuration
```python
from llm_fallbacks.config import CustomProviderConfig
custom_provider = CustomProviderConfig(
    name="my-custom-provider",
    base_url="https://api.myprovider.com",
    api_key="your-api-key",
    models=["custom-model-1", "custom-model-2"],
)
```
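A provider defined this way can be fed straight into the config generator from the Configuration Generation example above; a sketch reusing `to_litellm_config_yaml`:

```python
from llm_fallbacks.generate_configs import to_litellm_config_yaml

# Include the custom provider's models alongside the public catalog.
config = to_litellm_config_yaml(
    providers=[custom_provider],
    free_only=False,
)
```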
### Fallback Strategy Customization
```python
from llm_fallbacks.config import RouterSettings
router_settings = RouterSettings(
    allowed_fails=3,
    cooldown_time=30,
    fallbacks=[{"gpt-4": ["gpt-3.5-turbo", "claude-3-sonnet"]}],
)
```
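Conceptually, these settings drive a retry loop over an ordered candidate list. The sketch below is illustrative only (LiteLLM's actual router tracks failures and cooldowns per deployment); it simply shows what `allowed_fails`, `cooldown_time`, and the `fallbacks` mapping mean:

```python
import time

def call_with_fallbacks(call_model, primary, fallback_map,
                        allowed_fails=3, cooldown_time=30):
    """Try the primary model, then each configured fallback in order."""
    candidates = [primary, *fallback_map.get(primary, [])]
    failures = 0
    for model in candidates:
        try:
            return call_model(model)
        except Exception:
            failures += 1
            if failures >= allowed_fails:
                # Too many consecutive failures: back off before continuing.
                time.sleep(cooldown_time)
                failures = 0
    raise RuntimeError(f"all candidates failed for {primary!r}")
```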
## CLI Usage
### Interactive GUI
```bash
python -m llm_fallbacks
```
### Generate Configurations
```bash
python -m llm_fallbacks.generate_configs
```
### System Testing
```bash
python test_system.py
```
## Development
### Prerequisites
- Python 3.12+
- uv, Poetry, or pip for dependency management
### Setup Development Environment
```bash
# Install development dependencies
pip install -r requirements-dev.txt
# Install pre-commit hooks
pre-commit install
# Run tests
pytest tests/
```
## CI/CD
The project includes automated workflows:
- **Python Package**: Runs on every push/PR with linting, testing, and building
- **Python Publish**: Automatically publishes to PyPI on releases
- **Daily Config Update**: Updates model configurations daily at 12:00 AM UTC
### Automated Configuration Updates
The library automatically maintains up-to-date model configurations through:
1. **Daily Updates**: GitHub Actions workflow runs every day at 12:00 AM UTC
2. **Model Database**: Fetches latest model information from LiteLLM
3. **Fallback Strategies**: Generates intelligent fallback configurations
4. **Version Control**: All changes are automatically committed and tracked
This ensures your applications always have access to the latest models, pricing, and capabilities without manual intervention.
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the Business Source License 1.1 (BSL 1.1) - see the [LICENSE](LICENSE) file for details.
## Support
For support and questions:
- Open an issue on GitHub
- Check the [documentation](https://github.com/bodencrouch/llm-fallbacks)
- Contact: boden.crouch@gmail.com
## Acknowledgments
- Built on top of [LiteLLM](https://github.com/BerriAI/liteLLM)
- Inspired by the need for robust LLM fallback strategies
- Community contributions and feedback
---
**Note**: This library is designed to work with the LiteLLM ecosystem and provides fallback mechanisms for production LLM applications. Always test fallback configurations in your specific environment before deploying to production.