# llm-invoker
[CI](https://github.com/JlassiRAed/llm-invoker/actions/workflows/ci.yml)
[PyPI version](https://badge.fury.io/py/llm-invoker)
[PyPI](https://pypi.org/project/llm-invoker/)
[License: MIT](https://opensource.org/licenses/MIT)
A Python library for managing multi-agent model invocation with automatic failover strategies, designed for POC development with seamless provider switching and conversation history management.
## 🎯 Why This Project Exists
### The Problem
During the development of multi-agent systems and proof-of-concept projects, developers face several recurring challenges:
1. **Rate Limiting**: Free and low-cost LLM providers impose strict rate limits, causing interruptions during active development
2. **Provider Reliability**: Individual providers can experience downtime or temporary service issues
3. **Model Comparison**: Developers need to test the same prompts across different models and providers to find the best fit
4. **Context Loss**: When switching between providers manually, conversation history and context are often lost
5. **Configuration Complexity**: Managing multiple API keys and provider configurations becomes cumbersome
### The Solution
`llmInvoker` was created to solve these exact problems by providing:
- **Automatic Provider Switching**: When one provider hits rate limits or fails, automatically switch to the next available provider
- **Context Preservation**: Maintain conversation history across provider switches, ensuring continuity
- **Unified Interface**: Single API to interact with multiple LLM providers (GitHub Models, OpenRouter, Google, OpenAI, Anthropic, etc.)
- **Development-Focused**: Optimized for rapid prototyping and POC development workflows
- **Zero Configuration**: Works out of the box with sensible defaults, but fully customizable when needed
This library was born from real-world frustration during multi-agent system development, where hitting rate limits would halt development flow and require manual intervention to switch providers.
## ✨ Features

- **🔄 Automatic Failover**: Seamlessly switch between providers when rate limits or errors occur
- **⚡ Parallel Invocation**: Compare responses from multiple models simultaneously
- **💭 Conversation History**: Maintain context across provider switches
- **🔌 Multi-Provider Support**: GitHub Models, OpenRouter, Google Generative AI, Hugging Face, OpenAI, Anthropic
- **🔍 LangSmith Integration**: Monitor token usage and trace executions
- **🛠️ LangChain Compatible**: Easy integration with existing multi-agent frameworks
- **⚙️ Simple Configuration**: Environment-based API key management with code-level provider setup
## 🚀 Installation
```bash
# Using uv (recommended for modern Python projects)
uv add llm-invoker

# Using pip
pip install llm-invoker

# For development/contribution
git clone https://github.com/RaedJlassi/llm-invoker.git
cd llm-invoker
uv sync --dev
```
## ⚙️ Environment Setup
Create a `.env` file in your project root with your API keys (add only the providers you plan to use):
```bash
# OpenAI API Key
OPENAI_API_KEY=your_openai_api_key_here

# Anthropic API Key
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# GitHub Models API Key (free tier available)
GITHUB_TOKEN=your_github_token_here

# Google Generative AI API Key
GOOGLE_API_KEY=your_google_api_key_here

# Hugging Face API Key
HUGGINGFACE_API_KEY=your_huggingface_api_key_here

# OpenRouter API Key (aggregates multiple providers)
OPENROUTER_API_KEY=your_openrouter_api_key_here

# LangSmith Configuration (optional - for monitoring)
LANGSMITH_API_KEY=your_langsmith_api_key_here
LANGSMITH_PROJECT=multiagent_failover_poc
```
> **Note**: You don't need all API keys. The library will automatically detect which providers are available based on your environment variables.
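If you want to confirm which keys are actually visible to your process, a quick check is easy to write. This is a hypothetical sketch, not part of the library's API, and it assumes `python-dotenv` is installed (the library may well load `.env` on its own):

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is available

load_dotenv()  # read .env from the project root

# Environment variables listed above, keyed by provider name
provider_keys = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "github": "GITHUB_TOKEN",
    "google": "GOOGLE_API_KEY",
    "huggingface": "HUGGINGFACE_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

available = [name for name, var in provider_keys.items() if os.getenv(var)]
print(f"Providers with keys configured: {available}")
```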
## 🎯 Use Cases
This library is particularly useful for:
### 🔬 Research & Prototyping
- **Multi-agent system development** where different agents might use different models
- **POC development** where you need reliable access to LLMs without manual intervention
- **Comparing model outputs** across different providers for research purposes
### 🏗️ Development Workflows
- **Rate limit management** during intensive development sessions
- **Provider redundancy** for production applications that can't afford downtime
- **Cost optimization** by utilizing free tiers across multiple providers
### 🤖 Multi-Agent Applications
- **Agent swarms** where different agents can use different models
- **Fallback strategies** for critical agent communications
- **Context preservation** when agents switch between conversation partners
### 📊 Model Evaluation
- **A/B testing** different models on the same prompts
- **Performance benchmarking** across providers
- **Response quality comparison** for specific use cases
## 🚀 Quick Start

### 1. Convenience Function Usage
```python
# Convenience function (recommended for simple use cases)
from llmInvoker import invoke_failover

response = invoke_failover(
    message="Explain quantum computing in simple terms",
    providers={
        "github": ["gpt-4o", "gpt-4o-mini"],
        "google": ["gemini-2.0-flash-exp"]
    }
)

if response['success']:
    print(response['response'])
```
### 2. Class-based Usage
```python
from llmInvoker import llmInvoker

# Initialize with custom configuration
invoker = llmInvoker(
    strategy="failover",
    max_retries=3,
    timeout=30,
    enable_history=True
)

# Configure providers, then invoke (details in the sections below)
invoker.configure_providers(github=["gpt-4o"], google=["gemini-2.0-flash-exp"])
response = invoker.invoke_sync("Explain quantum computing in simple terms")
```

### 3. Convenience Functions
```python
from llmInvoker import invoke_failover, invoke_parallel

# Quick failover
response = invoke_failover(
    "What are the benefits of renewable energy?",
    providers={
        "github": ["gpt-4o"],
        "google": ["gemini-2.0-flash-exp"]
    }
)

# Parallel comparison
response = invoke_parallel(
    "Explain machine learning in one sentence",
    providers={
        "github": ["gpt-4o"],
        "openrouter": ["deepseek/deepseek-r1"],
        "google": ["gemini-2.0-flash-exp"]
    }
)

# Compare responses from all providers
for result in response['successful_responses']:
    print(f"{result['provider']}: {result['response']}")
```
## 📋 Strategies
### 1. Failover Strategy
Tries providers in order until one succeeds:
```python
invoker = llmInvoker(strategy="failover")
invoker.configure_providers(
    github=["gpt-4o", "gpt-4o-mini"],
    google=["gemini-2.0-flash-exp"],
    openrouter=["deepseek/deepseek-r1"]
)
```
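Once configured, a synchronous call walks the providers in the order given above until one succeeds; a short sketch using the response fields shown in the Quick Start:

```python
response = invoker.invoke_sync("Summarize the failover strategy in one sentence.")

if response['success']:
    print(response['response'])
else:
    # Every configured provider/model failed (rate limits, outages, missing keys, ...)
    print(f"All providers failed: {response['error']}")
```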
### 2. Parallel Strategy
Invokes all providers simultaneously for comparison:
```python
invoker = llmInvoker(strategy="parallel")
# Same configuration as above
response = invoker.invoke_sync("Your question here")
# Get multiple responses to compare
```
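The aggregated parallel result exposes the individual answers in the same `successful_responses` field used by `invoke_parallel` above, so comparing them is a simple loop:

```python
for result in response['successful_responses']:
    print(f"{result['provider']}: {result['response']}")
```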
## 🔧 Advanced Configuration
### String-based Configuration
```python
# Configure providers using string format
invoker.configure_from_string(
    "github['gpt-4o','gpt-4o-mini'],google['gemini-2.0-flash-exp'],openrouter['deepseek/deepseek-r1']"
)
```
### Default Configurations
```python
# Use default configurations for all available providers
invoker.use_defaults()
```
### Custom Parameters
```python
# Add model parameters
response = invoker.invoke_sync(
    "Your question",
    temperature=0.7,
    max_tokens=500,
    top_p=0.9
)
```
## 🤖 LangChain Integration
Seamlessly integrate with existing LangChain workflows:
```python
from llmInvoker import llmInvoker

class LangChainWrapper:
    def __init__(self):
        self.invoker = llmInvoker(strategy="failover")
        self.invoker.use_defaults()

    async def __call__(self, prompt: str) -> str:
        response = await self.invoker.invoke(prompt)
        if response['success']:
            return self._extract_content(response['response'])
        raise Exception(f"All providers failed: {response['error']}")

    def _extract_content(self, raw) -> str:
        # Placeholder: adapt this to the structure of the provider response you use
        return str(raw)

# Use in LangChain chains
llm_wrapper = LangChainWrapper()
```
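Because `__call__` is asynchronous, calling the wrapper from a plain script needs an event loop; a minimal usage sketch:

```python
import asyncio

async def main():
    answer = await llm_wrapper("Summarize the key trade-offs of provider failover.")
    print(answer)

asyncio.run(main())
```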
## 📊 Monitoring & History
### Conversation History
```python
# Enable history (default)
invoker = llmInvoker(enable_history=True)
# Get conversation summary
history = invoker.get_history()
summary = history.get_summary()
print(f"Total interactions: {summary['total_entries']}")
print(f"Providers used: {summary['providers_used']}")
# Export/import history
invoker.export_history("conversation_history.json")
invoker.import_history("conversation_history.json")
```
### Provider Statistics
```python
stats = invoker.get_provider_stats()
print(f"Total providers: {stats['total_providers']}")
print(f"Total models: {stats['total_models']}")
print(f"Provider details: {stats['providers']}")
```
### LangSmith Integration
Automatic token usage tracking and execution tracing when LangSmith is configured:
```bash
# In .env file
LANGSMITH_API_KEY=your_langsmith_api_key
LANGSMITH_PROJECT=your_project_name
```
## 🔍 Examples
Check the `examples/` directory for complete examples:
- `failover_example.py` - Comprehensive failover strategy examples
- `parallel_invoke_example.py` - Parallel invocation and response comparison
- `langchain_integration.py` - Integration with LangChain and multi-agent frameworks
## 🛠️ Development
### Project Structure
```
multiagent_failover_invoke/
├── multiagent_failover_invoke/
│   ├── __init__.py          # Main exports
│   ├── core.py              # Core llmInvoker class
│   ├── providers.py         # Provider implementations
│   ├── strategies.py        # Strategy implementations
│   ├── history.py           # Conversation history management
│   ├── config.py            # Configuration management
│   └── utils.py             # Utility functions
├── examples/                # Usage examples
├── tests/                   # Test suite
├── .env.example             # Environment template
├── pyproject.toml           # Project configuration
└── README.md                # This file
```
### Supported Providers
| Provider | Models | Free Tier | Rate Limits |
|----------|--------|-----------|-------------|
| GitHub Models | gpt-4o, gpt-4o-mini | ✅ | High |
| OpenRouter | deepseek/deepseek-r1, llama-3.2-3b:free | ✅ | Medium |
| Google AI | gemini-2.0-flash-exp, gemini-1.5-pro | ✅ | Medium |
| Hugging Face | Various open models | ✅ | Variable |
| OpenAI | gpt-4o, gpt-4o-mini, gpt-3.5-turbo | 💰 | High |
| Anthropic | claude-3-5-haiku, claude-3-haiku | 💰 | High |
## 🤝 Use Cases
Perfect for:
- **POC Development**: Rapid prototyping without worrying about rate limits
- **Multi-Agent Systems**: LangGraph, CrewAI, AutoGen integration
- **Model Comparison**: A/B testing different models on same tasks
- **Reliability**: Production backup strategies for mission-critical applications
- **Cost Optimization**: Prefer free models and fall back to paid ones only when needed (see the sketch below)
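For the cost-optimization case, the failover ordering does the work: list free-tier providers first so paid models are only reached when the free ones are rate-limited or unavailable. A sketch using the configuration API shown above (the `openai` keyword is assumed to follow the same pattern as the other providers):

```python
from llmInvoker import llmInvoker

invoker = llmInvoker(strategy="failover")
invoker.configure_providers(
    github=["gpt-4o-mini"],            # free tier, tried first
    google=["gemini-2.0-flash-exp"],   # free tier
    openai=["gpt-4o-mini"],            # paid, only reached as a last resort
)
```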
## 📄 License
MIT License - see LICENSE file for details.
## 🤝 Contributing
Contributions are welcome! This project was created to solve real-world development challenges, and we'd love to hear about your use cases and improvements.
### Getting Started
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes and add tests
4. Run tests: `pytest tests/`
5. Submit a pull request
### Development Setup
```bash
git clone https://github.com/yourusername/multiagent-failover-invoke.git
cd multiagent-failover-invoke
uv sync --dev
```
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
pytest tests/
# Run with coverage
pytest tests/ --cov=llmInvoker
# Run specific test file
pytest tests/test_core.py
```
## 📚 Examples
The `examples/` directory contains comprehensive examples:
- **`failover_example.py`** - Basic and advanced failover strategies
- **`parallel_invoke_example.py`** - Parallel model invocation
- **`multimodal_example.py`** - Working with images and multimodal content
- **`langchain_integration.py`** - Integration with LangChain workflows
- **`quickstart.py`** - Quick start guide examples
## 📞 Support & Community
- **Issues**: [GitHub Issues](https://github.com/RaedJlassi/llm-invoker/issues)
- **Discussions**: [GitHub Discussions](https://github.com/RaedJlassi/llm-invoker/discussions)
- **Documentation**: Comprehensive examples in the `examples/` directory
## 👨‍💻 Author
**Jlassi Raed**
- Email: raed.jlassi@etudiant-enit.utm.tn
- GitHub: [@RaedJlassi](https://github.com/RaedJlassi)
*Created during multi-agent system development at ENIT (École Nationale d'Ingénieurs de Tunis) to solve real-world rate limiting and provider reliability challenges during the POC phase.*
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Thanks to all the LLM providers for their APIs and free tiers that make development accessible
- Inspired by real-world challenges in multi-agent system development
- Built for the developer community facing similar rate limiting and reliability issues
---
**⭐ If this project helps you in your development workflow, please consider giving it a star!**