# LLM Relic
A lightweight Python library that provides easy access to popular LLM model names and allows you to define which models your application supports.
## Why LLM Relic?
- **No more hardcoded model names**: Access standardized model names from major providers
- **Easy support definition**: Fluent interface to define which models your app supports
- **Validation**: Built-in validation to ensure only supported models are used
- **Zero dependencies**: Lightweight library with no external dependencies
- **Type hints**: Full type hint support for better IDE experience
## Installation
```bash
pip install llmrelic
```
## Quick Start
### Access Model Names
```python
from llmrelic import OpenAI, Anthropic, Google
# Access model names directly
print(OpenAI.gpt_4) # "gpt-4"
print(Anthropic.claude_3_opus) # "claude-3-opus-20240229"
print(Google.gemini_pro) # "gemini-pro"
# List all models from a provider
print(OpenAI.list_models())
```
### Define Supported Models
```python
from llmrelic import SupportedModels
# Define which models your app supports
supported = (SupportedModels.create()
    .openai()  # All OpenAI models
    .anthropic(["claude-3-opus-20240229", "claude-3-sonnet-20240229"])  # Specific models
    .google()  # All Google models
    .custom(["my-custom-model"])  # Your custom models
    .build())

# Validate model support
if supported.is_supported("gpt-4"):
    print("GPT-4 is supported!")

# Get all supported models
print(supported.get_supported_models())
```
### Use in Your Application
```python
from llmrelic import OpenAI, SupportedModels
class MyLLMApp:
    def __init__(self):
        # Define what models your app supports
        self.supported_models = (SupportedModels.create()
            .openai(["gpt-4", "gpt-3.5-turbo"])
            .anthropic()
            .build())

    def chat(self, model_name: str, message: str):
        if not self.supported_models.is_supported(model_name):
            available = ", ".join(self.supported_models.get_supported_models())
            raise ValueError(f"Model {model_name} not supported. Available: {available}")
        # Your chat logic here
        return f"Response from {model_name}"

# Usage
app = MyLLMApp()
app.chat(OpenAI.gpt_4, "Hello!") # Works
app.chat("gpt-4", "Hello!") # Works
# app.chat("unsupported-model", "Hello!") # Raises ValueError
```
## Supported Providers
- **OpenAI**: GPT-4, GPT-3.5-turbo, and more
- **Anthropic**: Claude 3 Opus, Sonnet, Haiku, and more
- **Google**: Gemini Pro, Bard, PaLM-2, and more
- **Cohere**: Command, Command-Light, Command-R, and more
- **Mistral**: Mistral 7B, Mixtral 8x7B, and more
- **Meta**: Llama 2, Code Llama, and more
- **Hugging Face**: Popular open-source models
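To see exactly which names ship with the library, you can list each provider's catalog programmatically. A minimal sketch, assuming each provider object exposes the `list_models()` method shown in the Quick Start:

```python
from llmrelic import OpenAI, Anthropic, Google, Cohere, Mistral, Meta, Huggingface

# Print every bundled model name, grouped by provider.
providers = {
    "OpenAI": OpenAI,
    "Anthropic": Anthropic,
    "Google": Google,
    "Cohere": Cohere,
    "Mistral": Mistral,
    "Meta": Meta,
    "Hugging Face": Huggingface,
}
for name, provider in providers.items():
    print(name)
    for model in provider.list_models():
        print(f"  {model}")
```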
## API Reference
### Model Providers
Each provider exposes models as attributes:
```python
from llmrelic import OpenAI, Anthropic, Google, Cohere, Mistral, Meta, Huggingface
# Access models
OpenAI.gpt_4 # "gpt-4"
Anthropic.claude_3_opus # "claude-3-opus-20240229"
Google.gemini_pro # "gemini-pro"
# List all models
OpenAI.list_models()
# Check if model exists
"gpt-4" in OpenAI # True
```
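Because providers support membership checks, you can guard a single provider's names without building a full registry. A minimal sketch using only the `in` check documented above:

```python
from llmrelic import OpenAI

def require_openai_model(name: str) -> str:
    # Relies on the documented membership check on the provider object.
    if name not in OpenAI:
        raise ValueError(f"{name} is not a known OpenAI model")
    return name

require_openai_model("gpt-4")  # returns "gpt-4"
```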
### SupportedModels (Fluent Interface)
```python
from llmrelic import SupportedModels
supported = (SupportedModels.create()
    .openai()  # All OpenAI models
    .openai(["gpt-4", "gpt-3.5-turbo"])  # ...or specific OpenAI models
    .anthropic()  # All Anthropic models
    .google(["gemini-pro"])  # Specific Google models
    .custom(["my-model"])  # Custom models
    .build())

# Check support
supported.is_supported("gpt-4")  # True

# Get models
supported.get_supported_models()  # List of all supported models
```
### ModelRegistry (Direct Interface)
```python
from llmrelic import ModelRegistry
registry = ModelRegistry()
registry.add_provider("openai")
registry.add_models(["custom-model-1", "custom-model-2"])
registry.add_model("another-model")
# Check support
registry.is_supported("gpt-4") # True
"gpt-4" in registry # True
# Get models
registry.get_supported_models()
registry.get_supported_by_provider()
# Iterate
for model in registry:
    print(model)
```
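A registry built this way works well as a runtime gate, for example to fall back to a default model when a requested name is not supported. A minimal sketch using only the calls documented above; the custom model name is hypothetical:

```python
from llmrelic import ModelRegistry

registry = ModelRegistry()
registry.add_provider("openai")
registry.add_model("my-finetuned-model")  # hypothetical custom model name

def resolve_model(requested: str, fallback: str = "gpt-4") -> str:
    """Return the requested model if supported, otherwise the fallback."""
    return requested if requested in registry else fallback

print(resolve_model("gpt-4"))          # "gpt-4"
print(resolve_model("unknown-model"))  # "gpt-4" (falls back)
```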
## Utility Functions
```python
from llmrelic import get_all_models, find_model
# Get all available models by provider
all_models = get_all_models()
# Find which provider a model belongs to
provider = find_model("gpt-4") # "openai"
```
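`find_model` is handy for routing a request to the right backend client. The sketch below is illustrative: the client mapping is hypothetical, and it assumes `find_model` returns a lowercase provider name (as in the example above) or `None` for unknown models:

```python
from llmrelic import find_model

# Hypothetical mapping from provider name to your own client setup.
CLIENTS = {"openai": "OpenAIClient", "anthropic": "AnthropicClient"}

def pick_client(model_name: str) -> str:
    provider = find_model(model_name)  # assumed to return None if unknown
    if provider is None:
        raise ValueError(f"Unknown model: {model_name}")
    return CLIENTS.get(provider, "GenericClient")

print(pick_client("gpt-4"))  # "OpenAIClient"
```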
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add your changes
4. Run tests: `pytest`
5. Submit a pull request
## License
MIT License
## Changelog
### 0.1.0
- Initial release
- Basic model name access
- Fluent interface for defining supported models
- Model validation and registry functionality