# Generic LLM API Client
A unified, provider-agnostic Python client for multiple LLM APIs. Query any LLM (OpenAI, Anthropic Claude, Google Gemini, Mistral, DeepSeek, Qwen, OpenRouter, and more) through a single, consistent interface.
**Perfect for**: Research workflows, benchmarking studies, automated testing, and applications that need to work with multiple LLM providers without dealing with their individual APIs.
## Important Note
This package is a **convenience wrapper** for working with multiple LLM providers through a unified interface. It is **not intended as a replacement** for the official provider libraries (openai, anthropic, google-genai, etc.).
### Use this package when:
- You need to query multiple LLM providers in the same project
- You're building benchmarking or comparison tools
- You want a consistent interface across providers
- You need provider-agnostic code for research workflows
### Use the official libraries when:
- You need cutting-edge features on day one of release
- You require provider-specific advanced features
- You only work with a single provider
**Update pace:** This package is maintained by a small team and may not immediately support every new feature from upstream providers. We prioritize stability and cross-provider compatibility over bleeding-edge feature coverage.
## Features
- **Provider-Agnostic**: Single interface for OpenAI, Anthropic, Google, Mistral, DeepSeek, Qwen, and OpenRouter
- **Multimodal Support**: Text + image inputs across all providers that support them
- **Structured Output**: Unified Pydantic model support across providers
- **Rich Response Objects**: Detailed token usage, costs, timing, and metadata
- **Async Support**: Parallel processing for faster benchmarks
- **Built-in Retry Logic**: Automatic exponential backoff for rate limits
- **Custom Base URLs**: Easy integration with OpenRouter, sciCORE, and other OpenAI-compatible APIs
## Installation
```bash
pip install generic-llm-api-client
```
## Quick Start
```python
from ai_client import create_ai_client

# Create a client for any provider
client = create_ai_client('openai', api_key='sk-...')

# Send a prompt
response, duration = client.prompt('gpt-4', 'What is 2+2?')

print(f"Response: {response.text}")
print(f"Tokens used: {response.usage.total_tokens}")
print(f"Time: {duration:.2f}s")
```
## Supported Providers
| Provider | ID | Multimodal | Structured Output |
|----------|-----|-----------|-------------------|
| OpenAI | `openai` | Yes | Yes |
| Anthropic Claude | `anthropic` | Yes | Yes (via tools) |
| Google Gemini | `genai` | Yes | Yes |
| Mistral | `mistral` | Yes | Yes |
| DeepSeek | `deepseek` | Yes | Yes |
| Qwen | `qwen` | Yes | Yes |
| OpenRouter | `openrouter` | Yes | Yes |
| sciCORE | `scicore` | Yes | Yes |
## Usage Examples
### Basic Text Prompt
```python
from ai_client import create_ai_client

client = create_ai_client('anthropic', api_key='sk-ant-...')

response, duration = client.prompt(
    'claude-3-5-sonnet-20241022',
    'Explain quantum computing in simple terms'
)

print(response.text)
```
### Multimodal (Text + Images)
```python
from ai_client import create_ai_client

client = create_ai_client('openai', api_key='sk-...')

response, duration = client.prompt(
    'gpt-4o',
    'Describe this image in detail',
    images=['path/to/image.jpg']
)

print(response.text)
```
### Multiple Images
```python
response, duration = client.prompt(
    'gpt-4o',
    'Compare these two images',
    images=['image1.jpg', 'image2.jpg']
)
```
### Structured Output with Pydantic
```python
import json

from pydantic import BaseModel
from ai_client import create_ai_client

class Person(BaseModel):
    name: str
    age: int
    occupation: str

client = create_ai_client('openai', api_key='sk-...')

response, duration = client.prompt(
    'gpt-4',
    'Extract: John Smith is a 35-year-old software engineer',
    response_format=Person
)

# Parse and validate the response
person_data = json.loads(response.text)
person = Person(**person_data)

print(f"{person.name}, {person.age}, {person.occupation}")
```
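If you are on Pydantic v2, the two parsing steps above collapse into a single call that also reports schema mismatches; a minimal sketch, assuming the model's JSON arrives in `response.text` as shown:

```python
from pydantic import ValidationError

try:
    # Pydantic v2: parse the JSON string and validate the fields in one step
    person = Person.model_validate_json(response.text)
except ValidationError as e:
    print(f"Model returned JSON that does not match the schema: {e}")
```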
### Async for Parallel Processing
```python
import asyncio

from ai_client import create_ai_client

async def process_batch():
    client = create_ai_client('openai', api_key='sk-...')

    # Process multiple prompts in parallel
    tasks = [
        client.prompt_async('gpt-4', f'Tell me about {topic}')
        for topic in ['Python', 'JavaScript', 'Rust']
    ]

    results = await asyncio.gather(*tasks)

    for response, duration in results:
        print(f"({duration:.2f}s) {response.text[:100]}...")

asyncio.run(process_batch())
```
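For larger batches, an unbounded `asyncio.gather` can trip provider rate limits even with the built-in retries. A common remedy is to cap concurrency with a semaphore; a sketch under the same assumptions as above (the limit of 5 is arbitrary):

```python
import asyncio

from ai_client import create_ai_client

async def process_batch_bounded(topics, max_concurrent=5):
    client = create_ai_client('openai', api_key='sk-...')
    semaphore = asyncio.Semaphore(max_concurrent)

    async def ask(topic):
        # At most max_concurrent requests are in flight at any time
        async with semaphore:
            return await client.prompt_async('gpt-4', f'Tell me about {topic}')

    return await asyncio.gather(*(ask(t) for t in topics))

results = asyncio.run(process_batch_bounded(['Python', 'JavaScript', 'Rust']))
```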
### Custom Base URLs (OpenRouter, sciCORE)
```python
from ai_client import create_ai_client

# OpenRouter - access to 100+ models
client = create_ai_client(
    'openrouter',
    api_key='sk-or-...',
    base_url='https://openrouter.ai/api/v1',
    default_headers={
        "HTTP-Referer": "https://your-site.com",
        "X-Title": "Your App"
    }
)

response, _ = client.prompt('anthropic/claude-3-opus', 'Hello!')

# sciCORE (University of Basel HPC)
client = create_ai_client(
    'scicore',
    api_key='your-key',
    base_url='https://llm-api-h200.ceda.unibas.ch/litellm/v1'
)

response, _ = client.prompt('deepseek/deepseek-chat', 'Hello!')
```
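The same mechanism should cover other OpenAI-compatible endpoints, such as a local server. A sketch, assuming the factory forwards `base_url` for the `openai` provider the way it does for `openrouter` and `scicore` above; the URL and model name below are illustrative (a local Ollama instance):

```python
# Local OpenAI-compatible server (illustrative URL and model name)
client = create_ai_client(
    'openai',
    api_key='not-needed-locally',
    base_url='http://localhost:11434/v1'
)

response, _ = client.prompt('llama3', 'Hello!')
```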
### Accessing Response Metadata
```python
response, duration = client.prompt('gpt-4', 'Hello')

# Response text
print(response.text)

# Token usage
print(f"Input tokens: {response.usage.input_tokens}")
print(f"Output tokens: {response.usage.output_tokens}")
print(f"Total tokens: {response.usage.total_tokens}")

# Metadata
print(f"Model: {response.model}")
print(f"Provider: {response.provider}")
print(f"Finish reason: {response.finish_reason}")
print(f"Duration: {response.duration}s")

# Raw provider response (for detailed analysis)
raw = response.raw_response

# Convert to dict (for JSON serialization)
response_dict = response.to_dict()
```
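Since `to_dict()` returns a plain dictionary, responses are easy to persist for later analysis. A minimal sketch that appends each response as one JSON object per line (the `responses.jsonl` filename is ours):

```python
import json

# Append the response as one JSON object per line (JSONL)
with open('responses.jsonl', 'a', encoding='utf-8') as f:
    f.write(json.dumps(response.to_dict()) + '\n')
```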
## Configuration
### Provider-Specific Settings
```python
from ai_client import create_ai_client

# OpenAI
client = create_ai_client(
    'openai',
    api_key='sk-...',
    temperature=0.7,
    max_tokens=500,
    frequency_penalty=0.5
)

# Claude
client = create_ai_client(
    'anthropic',
    api_key='sk-ant-...',
    temperature=1.0,
    max_tokens=4096,
    top_k=40
)

# Settings can also be passed per request
response, _ = client.prompt(
    'gpt-4',
    'Hello',
    temperature=0.9,
    max_tokens=100
)
```
### Custom System Prompts
```python
from ai_client import create_ai_client

client = create_ai_client(
    'openai',
    api_key='sk-...',
    system_prompt="You are a helpful coding assistant specialized in Python."
)

# Override for a specific request
response, _ = client.prompt(
    'gpt-4',
    'Write a haiku',
    system_prompt="You are a poetic assistant."
)
```
## Use Case: Benchmarking
Perfect for research workflows that need to evaluate multiple models:
```python
from ai_client import create_ai_client
import asyncio

async def benchmark_models():
    providers = [
        ('openai', 'gpt-4'),
        ('anthropic', 'claude-3-5-sonnet-20241022'),
        ('genai', 'gemini-2.0-flash-exp'),
    ]

    prompt = 'Explain quantum entanglement'

    for provider_id, model in providers:
        # Placeholder: substitute your real API key for each provider
        client = create_ai_client(provider_id, api_key=f'{provider_id}_key')

        response, duration = await client.prompt_async(model, prompt)

        print(f"\n=== {provider_id}/{model} ===")
        print(f"Duration: {duration:.2f}s")
        print(f"Tokens: {response.usage.total_tokens}")
        print(f"Response: {response.text[:200]}...")

asyncio.run(benchmark_models())
```
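In practice the placeholder keys above would come from real credentials, typically read from the environment. A minimal sketch, assuming environment variables named like `OPENAI_API_KEY` (the naming convention is ours, not the package's):

```python
import os

from ai_client import create_ai_client

def api_key_for(provider_id: str) -> str:
    # e.g. 'openai' -> OPENAI_API_KEY; raises KeyError if the variable is unset
    return os.environ[f'{provider_id.upper()}_API_KEY']

client = create_ai_client('openai', api_key=api_key_for('openai'))
```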
## Error Handling
The package includes built-in retry logic with exponential backoff:
```python
from ai_client import create_ai_client, RateLimitError, APIError

client = create_ai_client('openai', api_key='sk-...')

try:
    response, duration = client.prompt('gpt-4', 'Hello')
    # Automatically retries up to 3 times on rate limit errors
except RateLimitError as e:
    print(f"Rate limited after retries: {e}")
except APIError as e:
    print(f"API error: {e}")
except Exception as e:
    print(f"Unknown error: {e}")
```
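The three built-in retries may not be enough for long unattended batch jobs. If you need more headroom, the same exponential-backoff idea can be layered on top; a minimal sketch (the attempt count and delays are arbitrary):

```python
import time

from ai_client import create_ai_client, RateLimitError

client = create_ai_client('openai', api_key='sk-...')

for attempt in range(4):
    try:
        response, duration = client.prompt('gpt-4', 'Hello')
        break
    except RateLimitError:
        if attempt == 3:
            raise
        # Exponential backoff on top of the built-in retries: 2s, 4s, 8s
        time.sleep(2 ** (attempt + 1))
```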
## Advanced Features
### Get Available Models
```python
from ai_client import create_ai_client

client = create_ai_client('openai', api_key='sk-...')
models = client.get_model_list()

for model_id, created_date in models:
    print(f"{model_id} (created: {created_date})")
```
### Check Multimodal Support
```python
client = create_ai_client('openai', api_key='sk-...')

if client.has_multimodal_support():
    print("This provider supports images!")
```
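This check makes it easy to degrade gracefully when a provider cannot take images; a short sketch reusing the `client` and prompt API from the examples above (`chart.png` and the prompts are illustrative):

```python
if client.has_multimodal_support():
    response, _ = client.prompt('gpt-4o', 'Describe this chart', images=['chart.png'])
else:
    # Fall back to a text-only prompt for providers without image support
    response, _ = client.prompt('gpt-4o', 'Describe a typical quarterly sales chart')
```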
## Package Structure
```
ai_client/
    __init__.py         # Package exports
    base_client.py      # BaseAIClient + factory
    response.py         # LLMResponse, Usage dataclasses
    utils.py            # Retry logic, exceptions, utilities
    openai_client.py    # OpenAI implementation
    claude_client.py    # Anthropic Claude
    gemini_client.py    # Google Gemini
    mistral_client.py   # Mistral AI
    deepseek_client.py  # DeepSeek
    qwen_client.py      # Qwen
```
## Requirements
- Python >=3.9
- anthropic ~=0.71.0
- openai ~=2.6.1
- mistralai ~=1.9.11
- google-genai ~=1.46.0
- requests ~=2.32.5
## Development
```bash
# Clone the repository
git clone https://github.com/RISE-UNIBAS/generic-llm-api-client.git
cd generic-llm-api-client

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Run integration tests (requires API keys)
pytest -m integration

# Format code
black ai_client tests

# Type checking
mypy ai_client/
```
## Documentation
- **[EXAMPLES.md](EXAMPLES.md)** - Comprehensive usage examples
- **[PUBLISHING.md](PUBLISHING.md)** - Guide for maintainers on publishing releases
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Citation
If you use this package in your research, please cite:
```bibtex
@software{generic_llm_api_client,
  author = {Sorin Marti},
  title  = {Generic LLM API Client: A Unified Interface for Multiple LLM Providers},
  year   = {2025},
  url    = {https://github.com/RISE-UNIBAS/generic-llm-api-client}
}
```
## Support
- GitHub Issues: [Report bugs or request features](https://github.com/RISE-UNIBAS/generic-llm-api-client/issues)
- Documentation: [Full documentation](https://github.com/RISE-UNIBAS/generic-llm-api-client#readme)
## Roadmap
- [ ] Tool use / function calling support
- [ ] Streaming support
- [ ] Conversation history management
- [ ] More providers (Cohere, AI21, etc.)
- [ ] Cost estimation utilities
- [ ] Prompt caching support