# LLM Provider Factory
A unified, extensible Python library for interacting with multiple Large Language Model (LLM) providers through a single, consistent interface. Support for OpenAI, Anthropic Claude, Google Gemini, VertexAI, and Ollama (local LLMs).
## 🌟 Features
- **Unified Interface**: Single API for multiple LLM providers
- **Cloud & Local LLMs**: Support for both cloud-based and local LLM providers
- **Async Support**: Full async/await support for better performance
- **Streaming**: Real-time streaming responses from all providers
- **Type Safety**: Complete type hints and Pydantic models
- **Error Handling**: Comprehensive error handling with specific exceptions
- **Configuration Management**: Flexible configuration system
- **Extensible**: Easy to add new providers
- **Testing**: Full test coverage with mocking support
## 🔌 Supported Providers
| Provider | Models | Features | Type |
|----------|--------|----------|------|
| **OpenAI** | GPT-3.5, GPT-4, GPT-4o | Generate, Stream, Conversation | Cloud |
| **Anthropic** | Claude-3 (Haiku, Sonnet, Opus) | Generate, Stream, Conversation | Cloud |
| **Google Gemini** | Gemini Pro, Gemini Flash | Generate, Stream, Conversation | Cloud |
| **VertexAI** | Mistral, Gemini | Generate, Conversation | Cloud |
| **Ollama** | Llama, CodeLlama, Mistral, etc. | Generate, Stream, Conversation | Local |
## 🚀 Quick Start
### Installation
```bash
pip install llm-provider-factory
```
### Basic Usage
```python
import asyncio
from llm_provider import LLMProviderFactory, OpenAIConfig, OllamaConfig
async def main():
    # Cloud LLM - OpenAI
    openai_config = OpenAIConfig(api_key="your-api-key", model="gpt-4")
    openai_provider = LLMProviderFactory().create_provider("openai", openai_config)
    response = await openai_provider.generate("Hello, world!")
    print(f"OpenAI: {response.content}")

    # Local LLM - Ollama
    ollama_config = OllamaConfig(
        base_url="http://localhost:11434",
        model="llama3.1:latest"
    )
    ollama_provider = LLMProviderFactory().create_provider("ollama", ollama_config)
    response = await ollama_provider.generate("Hello, world!")
    print(f"Ollama: {response.content}")

asyncio.run(main())
```
## 📖 Detailed Usage
### Configuration
#### Environment Variables
```bash
# Cloud LLM Providers
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
export GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
# Local LLM Providers
export OLLAMA_BASE_URL="http://localhost:11434" # Default Ollama server
```
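The same variables can be read in code when you prefer explicit configuration objects; a minimal sketch, assuming the config classes accept the fields shown in the next section:
```python
import os

from llm_provider import OpenAIConfig, OllamaConfig

# Build configs from the environment instead of hard-coding secrets.
openai_config = OpenAIConfig(
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4"
)

ollama_config = OllamaConfig(
    base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    model="llama3.1:latest"
)
```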
#### Programmatic Configuration
```python
from llm_provider import OpenAIConfig, AnthropicConfig, GeminiConfig, VertexAIConfig, OllamaConfig

# OpenAI Configuration
openai_config = OpenAIConfig(
    api_key="your-key",
    model="gpt-4",
    max_tokens=1000,
    temperature=0.7
)

# Anthropic Configuration
anthropic_config = AnthropicConfig(
    api_key="your-key",
    model="claude-3-sonnet-20240229",
    max_tokens=1000,
    temperature=0.7
)

# Gemini Configuration
gemini_config = GeminiConfig(
    api_key="your-key",
    model="gemini-pro",
    max_tokens=1000,
    temperature=0.7
)

# VertexAI Configuration
vertexai_config = VertexAIConfig(
    project_id="your-gcp-project",
    location="us-central1",
    model="gemini-1.5-pro",
    credentials_path="path/to/service-account.json"  # Optional if using GOOGLE_APPLICATION_CREDENTIALS
)

# Ollama Configuration (Local LLM)
ollama_config = OllamaConfig(
    base_url="http://localhost:11434",  # Default Ollama server
    model="llama3.1:latest",            # Any Ollama model
    max_tokens=1000,
    temperature=0.7
)
```
### Multiple Provider Usage
```python
from llm_provider import LLMProviderFactory
async def compare_providers():
    factory = LLMProviderFactory()
    prompt = "Explain quantum computing in simple terms"

    # Generate with different providers
    openai_response = await factory.generate(prompt, provider="openai")
    anthropic_response = await factory.generate(prompt, provider="anthropic")
    gemini_response = await factory.generate(prompt, provider="gemini")
    vertexai_response = await factory.generate(prompt, provider="vertexai")

    print(f"OpenAI: {openai_response.content}")
    print(f"Anthropic: {anthropic_response.content}")
    print(f"Gemini: {gemini_response.content}")
    print(f"VertexAI: {vertexai_response.content}")
```
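Since each call is a coroutine, the same comparison can run the providers concurrently; a minimal sketch using `asyncio.gather`, assuming `factory.generate` behaves as shown above:
```python
import asyncio

from llm_provider import LLMProviderFactory

async def compare_providers_concurrently():
    factory = LLMProviderFactory()
    prompt = "Explain quantum computing in simple terms"
    names = ["openai", "anthropic", "gemini", "vertexai"]

    # Issue all provider calls at once and wait for every response.
    responses = await asyncio.gather(
        *(factory.generate(prompt, provider=name) for name in names)
    )
    for name, response in zip(names, responses):
        print(f"{name}: {response.content}")
```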
### Conversation History
```python
from llm_provider import Message, MessageRole
async def conversation_example():
    factory = LLMProviderFactory.create_openai()

    history = [
        Message(role=MessageRole.USER, content="Hello, I'm learning Python"),
        Message(role=MessageRole.ASSISTANT, content="Hello! I'd be happy to help you learn Python."),
        Message(role=MessageRole.USER, content="Can you explain variables?")
    ]

    response = await factory.generate(
        "Now explain functions",
        history=history
    )
    print(response.content)
```
### Streaming Responses
```python
async def streaming_example():
    factory = LLMProviderFactory.create_openai()

    async for chunk in factory.stream_generate("Write a short story about AI"):
        if chunk.content:
            print(chunk.content, end="", flush=True)

        if chunk.is_final:
            print(f"\nFinish reason: {chunk.finish_reason}")
            break
```
### Error Handling
```python
from llm_provider import (
    AuthenticationError,
    RateLimitError,
    ModelNotAvailableError,
    GenerationError
)

async def robust_generation():
    factory = LLMProviderFactory.create_openai()

    try:
        response = await factory.generate("Hello world")
        return response.content
    except AuthenticationError:
        print("Check your API key")
    except RateLimitError:
        print("Rate limit exceeded, try again later")
    except ModelNotAvailableError:
        print("Model not available")
    except GenerationError as e:
        print(f"Generation failed: {e}")
```
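Because rate limiting raises its own exception type, a simple retry policy is easy to layer on top; a minimal back-off sketch (the helper below is illustrative, not part of the library):
```python
import asyncio

from llm_provider import LLMProviderFactory, RateLimitError

async def generate_with_retry(prompt: str, max_attempts: int = 3) -> str:
    factory = LLMProviderFactory.create_openai()

    for attempt in range(1, max_attempts + 1):
        try:
            response = await factory.generate(prompt)
            return response.content
        except RateLimitError:
            if attempt == max_attempts:
                raise
            # Exponential back-off: 1s, 2s, 4s, ...
            await asyncio.sleep(2 ** (attempt - 1))
```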
## 🔧 Advanced Usage
### Custom Provider
```python
from llm_provider import BaseLLMProvider, LLMProviderFactory, ProviderConfig

class CustomProvider(BaseLLMProvider):
    async def initialize(self):
        # Initialize your custom provider
        pass

    async def generate(self, request):
        # Implement generation logic
        pass

    async def stream_generate(self, request):
        # Implement streaming logic
        pass

    def get_supported_models(self):
        return ["custom-model-1", "custom-model-2"]

    def validate_config(self):
        return True

    def get_provider_info(self):
        # Return provider information
        pass

# Register custom provider
factory = LLMProviderFactory()
factory.register_provider("custom", CustomProvider)
```
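Once registered, the custom provider is created and called like any built-in one; a minimal usage sketch (the `ProviderConfig` fields are assumptions about the base config class, and the stub above would need a real `generate` implementation before it returns content):
```python
import asyncio

from llm_provider import LLMProviderFactory, ProviderConfig

async def use_custom_provider():
    factory = LLMProviderFactory()
    factory.register_provider("custom", CustomProvider)  # CustomProvider from the block above

    # Hypothetical config values; adjust to whatever your provider needs.
    config = ProviderConfig(model="custom-model-1")
    provider = factory.create_provider("custom", config)
    response = await provider.generate("Hello from a custom provider")
    print(response.content)

asyncio.run(use_custom_provider())
```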
### Provider Information
```python
async def provider_info_example():
    factory = LLMProviderFactory()

    # Get all provider information
    all_providers = factory.get_provider_info()
    for info in all_providers:
        print(f"{info.display_name}: {info.supported_models}")

    # Get specific provider info
    openai_info = factory.get_provider_info("openai")
    print(f"OpenAI models: {openai_info.supported_models}")
```
## 📁 Project Structure
```
llm-provider-factory/
├── src/
│   └── llm_provider/
│       ├── __init__.py
│       ├── factory.py
│       ├── base_provider.py
│       ├── settings.py
│       ├── providers/
│       │   ├── __init__.py
│       │   ├── openai_provider.py
│       │   ├── anthropic_provider.py
│       │   └── gemini_provider.py
│       └── utils/
│           ├── __init__.py
│           ├── config.py
│           ├── exceptions.py
│           └── logger.py
├── tests/
├── pyproject.toml
└── README.md
```
## 🧪 Testing
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run tests with coverage
pytest --cov=src/llm_provider --cov-report=html
# Run specific test file
pytest tests/test_factory.py
```
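A typical provider test replaces the network call with a mock; a minimal sketch, assuming `pytest-asyncio` is installed (the patch target and response shape are assumptions about this project's internals):
```python
# tests/test_generate_mocked.py -- illustrative only
from unittest.mock import AsyncMock

import pytest

from llm_provider import LLMProviderFactory, OpenAIConfig

@pytest.mark.asyncio
async def test_generate_returns_content(monkeypatch):
    factory = LLMProviderFactory()
    provider = factory.create_provider("openai", OpenAIConfig(api_key="test", model="gpt-4"))

    # Swap the real generate call for a canned response object.
    fake_response = type("FakeResponse", (), {"content": "mocked"})()
    monkeypatch.setattr(provider, "generate", AsyncMock(return_value=fake_response))

    response = await provider.generate("Hello")
    assert response.content == "mocked"
```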
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/new-provider`)
3. Make your changes
4. Add tests for new functionality
5. Ensure all tests pass (`pytest`)
6. Submit a pull request
### Adding a New Provider
1. Create a new provider class in `src/llm_provider/providers/`
2. Inherit from `BaseLLMProvider`
3. Implement all abstract methods
4. Add a configuration class in `utils/config.py` (see the sketch after this list)
5. Register the provider in `factory.py`
6. Add tests in `tests/`
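For step 4, a provider configuration is typically a small Pydantic model; a minimal sketch with illustrative field names modeled on the configs shown earlier:
```python
# src/llm_provider/utils/config.py -- illustrative addition
from typing import Optional

from pydantic import BaseModel

class MyProviderConfig(BaseModel):
    """Configuration for a hypothetical new provider."""
    api_key: str
    model: str = "my-model-small"
    max_tokens: int = 1000
    temperature: float = 0.7
    base_url: Optional[str] = None
```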
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments
- OpenAI for their excellent API and documentation
- Anthropic for Claude's capabilities
- Google for Gemini's multimodal features
- The Python community for inspiration and tools
# LLM Provider Factory
A unified factory for multiple LLM providers (OpenAI, Anthropic, Google Gemini, Google Vertex AI).
## VertexAI + Mistral Support
This package now supports Mistral models through Google Cloud Vertex AI!
### Supported Models
**VertexAI Provider:**
- Gemini models: `gemini-1.5-pro`, `gemini-1.5-flash`, `gemini-1.0-pro`
- Text/Chat models: `text-bison`, `chat-bison`
- **Mistral models**: `mistral-large-2411`, `mistral-7b-instruct`
### Quick Start - VertexAI
1. **Install the packages:**
```bash
python setup_vertexai.py
```
2. **Google Cloud Setup:**
   - Create a service account
   - Download the JSON credentials file
   - Set the environment variable:
```bash
export GOOGLE_APPLICATION_CREDENTIALS='/path/to/your/credentials.json'
```
3. **Usage:**
```python
from llm_provider import LLMProviderFactory
from llm_provider.utils.config import VertexAIConfig
# Configuration
config = VertexAIConfig(
    project_id="your-project-id",
    location="us-central1",
    model="mistral-large-2411",  # Mistral model!
    credentials_path="/path/to/credentials.json",
    temperature=0.1,
    max_tokens=1000
)
# Create provider
factory = LLMProviderFactory()
provider = factory.create_provider("vertexai", config)
# Generate response (await this inside an async function)
response = await provider.generate(request)  # `request`: your prompt or request object
```
4. **Test:**
```bash
python test_vertexai_mistral.py
```

Built with clean architecture principles and SOLID design patterns.
## 🚀 Quick Start
```bash
pip install llm-provider
```
```python
from llm_provider import LLMProviderFactory, OpenAI
provider = LLMProviderFactory(OpenAI(api_key="your-key"))
response = provider.generate(prompt="Hello", history=[])
print(response.content)
```
## ✨ Features
- 🏭 **Factory Pattern**: Clean, consistent interface
- 🔌 **Extensible**: Easy to add new providers
- 🛡️ **Type Safe**: Full typing support
- 🚀 **Production Ready**: Comprehensive error handling
- 📦 **Zero Dependencies**: Only requires `requests`
## 🔗 Links
- **PyPI**: https://pypi.org/project/llm-provider/
- **Test PyPI**: https://test.pypi.org/project/llm-provider/
## 📦 Supported Providers
- **OpenAI** (GPT-3.5, GPT-4)
- **Anthropic** (Claude models)
- **Google Gemini** (Gemini Pro, Flash)
## 📚 Documentation
See the package source code and examples in the repository.