# JustLLMs
A production-ready Python library that simplifies working with multiple Large Language Model providers through intelligent routing, comprehensive analytics, and enterprise-grade features.
## Why JustLLMs?
Managing multiple LLM providers is complex. You need to handle different APIs, optimize costs, monitor usage, and ensure reliability. JustLLMs solves these challenges by providing a unified interface that automatically routes requests to the best provider based on your criteria, whether that's cost, speed, or quality.
## Installation
```bash
# Basic installation
pip install justllms

# With PDF export capabilities
pip install justllms[pdf]

# All optional dependencies (PDF export, Redis caching, advanced analytics)
pip install justllms[all]
```
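Note: some shells (zsh in particular) treat the square brackets in extras as glob patterns, so quote the spec to avoid a `no matches found` error:

```bash
pip install "justllms[all]"
```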
**Package size**: 1.1MB | **Lines of code**: ~11K | **Dependencies**: Minimal production requirements
## Quick Start
```python
from justllms import JustLLM

# Initialize with your API keys
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-openai-key"},
        "google": {"api_key": "your-google-key"},
        "anthropic": {"api_key": "your-anthropic-key"}
    }
})

# Simple completion - automatically routes to best provider
response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)
print(response.content)
```
## Core Features
### Multi-Provider Support
Connect to all major LLM providers with a single, consistent interface:
- **OpenAI** (GPT-5, GPT-4, and more; yes, you can use GPT-5)
- **Google** (Gemini 2.5, Gemini 1.5 models)
- **Anthropic** (Claude 3.5, Claude 3 models)
- **Azure OpenAI** (with deployment mapping)
- **xAI Grok**, **DeepSeek**, and more
```python
# Switch between providers seamlessly
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "google": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"}
    }
})

# Same interface, different providers automatically chosen
response1 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}],
    provider="openai"  # Force specific provider
)

response2 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}]
    # Auto-routes to best provider based on your strategy
)
```
### Intelligent Routing
**The game-changing feature that sets JustLLMs apart.** Instead of manually choosing models, let our intelligent routing engine automatically select the optimal provider and model for each request based on your priorities.
#### How It Works
Our routing engine analyzes each request and considers:
- **Cost efficiency** - Real-time pricing across all providers
- **Performance metrics** - Historical latency and success rates
- **Model capabilities** - Task complexity and model strengths
- **Provider health** - Current availability and response times
```python
# Cost-optimized: Always picks the cheapest option
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "cost"}
})

# Speed-optimized: Prioritizes fastest response times
# Routes to providers with lowest latency in your region
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "latency"}
})

# Quality-optimized: Uses the best models for complex tasks
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "quality"}
})

# Advanced: Custom routing with business rules
client = JustLLM({
    "providers": {...},
    "routing": {
        "strategy": "hybrid",
        "cost_weight": 0.4,
        "quality_weight": 0.6,
        "max_cost_per_request": 0.05,
        "fallback_provider": "openai"
    }
})
```
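For intuition, here is a minimal sketch of how a hybrid strategy can trade cost against quality under a hard cost cap. The function and its normalization are illustrative assumptions, not JustLLMs' actual routing internals:

```python
def hybrid_score(cost_usd, quality, cost_weight=0.4, quality_weight=0.6,
                 max_cost_per_request=0.05):
    """Score a candidate model; higher is better. `quality` is assumed in [0, 1]."""
    if cost_usd > max_cost_per_request:
        return float("-inf")  # hard cap: candidate is excluded outright
    cost_score = 1.0 - cost_usd / max_cost_per_request  # cheaper -> closer to 1
    return cost_weight * cost_score + quality_weight * quality

# A cheap mid-quality model can outrank an expensive top-quality one:
print(hybrid_score(0.002, 0.70))  # 0.804
print(hybrid_score(0.040, 0.95))  # 0.650
```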
**Result**: 60% cost reduction on average while maintaining quality, with automatic failover to backup providers.
### Real-time Streaming
Full streaming support with proper token handling across all providers:
```python
stream = client.completion.create(
    messages=[{"role": "user", "content": "Write a short story"}],
    stream=True
)

for chunk in stream:
    print(chunk.content, end="", flush=True)
```
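If you also need the complete text once streaming finishes, accumulate the chunks as they arrive (assuming, as above, that each chunk exposes a `content` field):

```python
# Stream a response and collect the chunks into the full text.
parts = []
for chunk in client.completion.create(
    messages=[{"role": "user", "content": "Write a short story"}],
    stream=True,
):
    if chunk.content:  # guard: a chunk may carry no text
        parts.append(chunk.content)
full_text = "".join(parts)
```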
### Conversation Management
Built-in conversation state management with context preservation:
```python
# Create a managed conversation
conversation = client.conversations.create_sync(
    system_prompt="You are a helpful coding assistant"
)

# Context is automatically maintained
response1 = conversation.send_sync("How do I sort a list in Python?")
response2 = conversation.send_sync("What about in reverse order?")

# Export conversations for analysis
conversation.export_sync(format="markdown", path="chat_history.md")
```
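Exports can also feed programmatic analysis; a small sketch using the `json` format listed under the features below (the exact export schema is version-dependent):

```python
import json

# Export to JSON and load it back with the standard library.
conversation.export_sync(format="json", path="chat_history.json")
with open("chat_history.json") as f:
    history = json.load(f)  # inspect the structure before relying on specific keys
```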
**Conversation Features:**
- **Auto-save**: Persist conversations automatically
- **Context management**: Smart context window handling
- **Export/Import**: JSON, Markdown, and TXT formats
- **Analytics**: Track usage, costs, and performance per conversation
- **Search**: Find conversations by content or metadata
### Smart Caching
Intelligent response caching that dramatically reduces costs and improves response times:
```python
client = JustLLM({
    "providers": {...},
    "caching": {
        "enabled": True,
        "ttl": 3600,  # 1 hour
        "max_size": 1000
    }
})

# First call - cache miss
response1 = client.completion.create(
    messages=[{"role": "user", "content": "What is AI?"}]
)  # ~2 seconds, full cost

# Second call - cache hit
response2 = client.completion.create(
    messages=[{"role": "user", "content": "What is AI?"}]
)  # ~50ms, no cost
```
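Conceptually, the cache keys responses by request content, so byte-identical message lists hit the same entry until the TTL expires. A minimal sketch of that idea (illustrative only, not JustLLMs' implementation):

```python
import hashlib
import json
import time

_cache = {}  # key -> (timestamp, response_text)

def _key(messages):
    # Stable hash of the message list; identical prompts map to the same key.
    return hashlib.sha256(json.dumps(messages, sort_keys=True).encode()).hexdigest()

def get_cached(messages, ttl=3600):
    entry = _cache.get(_key(messages))
    if entry and time.time() - entry[0] < ttl:
        return entry[1]  # fresh hit
    return None  # miss or expired

def put_cached(messages, text):
    _cache[_key(messages)] = (time.time(), text)
```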
### Enterprise Analytics
**Comprehensive usage tracking and cost analysis** that gives you complete visibility into your LLM operations. Unlike other solutions that require external tools, JustLLMs provides built-in analytics that finance and engineering teams actually need.
#### What You Get
- **Cross-provider metrics**: Compare performance across providers
- **Cost tracking**: Detailed cost analysis per model/provider
- **Performance insights**: Latency, throughput, success rates
- **Export capabilities**: CSV, PDF with charts
- **Time series analysis**: Usage patterns over time
- **Top models/providers**: Usage and cost rankings
```python
# Generate detailed reports
report = client.analytics.generate_report()
print(f"Total requests: {report.cross_provider_metrics.total_requests}")
print(f"Total cost: ${report.cross_provider_metrics.total_cost:.2f}")
print(f"Fastest provider: {report.cross_provider_metrics.fastest_provider}")
print(f"Cost per request: ${report.cross_provider_metrics.avg_cost_per_request:.4f}")

# Get granular insights
print(f"Cache hit rate: {report.performance_metrics.cache_hit_rate:.1f}%")
print(f"Token efficiency: {report.optimization_suggestions.token_savings:.1f}%")

# Export reports for finance teams
from justllms.analytics.reports import CSVExporter, PDFExporter

csv_exporter = CSVExporter()
csv_exporter.export(report, "monthly_llm_costs.csv")

pdf_exporter = PDFExporter(include_charts=True)
pdf_exporter.export(report, "executive_summary.pdf")
```
**Business Impact**: Teams typically save 40-70% on LLM costs within the first month by identifying usage patterns and optimizing model selection.
### Business Rule Validation
**Enterprise-grade content filtering and compliance** built for regulated industries. Ensure your LLM applications meet security, privacy, and business requirements without custom development.
#### Compliance Features
- **PII Detection** - Automatically detect and handle social security numbers, credit cards, phone numbers
- **Content Filtering** - Block inappropriate content, profanity, or sensitive topics
- **Custom Business Rules** - Define your own validation logic with regex patterns or custom functions
- **Audit Trail** - Complete logging of all validation actions for compliance reporting
```python
from justllms.validation import ValidationConfig, BusinessRule, RuleType, ValidationAction

client = JustLLM({
    "providers": {...},
    "validation": ValidationConfig(
        enabled=True,
        business_rules=[
            # Block sensitive data patterns
            BusinessRule(
                name="no_ssn",
                type=RuleType.PATTERNS,
                pattern=r"\b\d{3}-\d{2}-\d{4}\b",
                action=ValidationAction.BLOCK,
                message="SSN detected - request blocked for privacy"
            ),
            # Content filtering
            BusinessRule(
                name="professional_content",
                type=RuleType.CONTENT_FILTER,
                categories=["hate", "violence", "adult"],
                action=ValidationAction.SANITIZE
            ),
            # Custom business logic
            BusinessRule(
                name="company_policy",
                type=RuleType.CUSTOM,
                validator=lambda content: "competitor" not in content.lower(),
                action=ValidationAction.WARN
            )
        ],
        # Compliance presets
        compliance_mode="GDPR",  # or "HIPAA", "PCI_DSS"
        audit_logging=True
    )
})

# All requests are automatically validated
response = client.completion.create(
    messages=[{"role": "user", "content": "My SSN is 123-45-6789"}]
)
# This request would be blocked and logged for compliance
```
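You can sanity-check the SSN pattern with the standard `re` module before wiring it into a rule:

```python
import re

# Same pattern as the "no_ssn" rule above.
ssn = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
assert ssn.search("My SSN is 123-45-6789") is not None
assert ssn.search("Call 555-0100 today") is None  # wrong shape, no match
```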
**Regulatory Compliance**: Built-in support for major compliance frameworks saves months of custom security development.
## Advanced Usage
### Async Operations
Full async/await support for high-performance applications:
```python
import asyncio

prompts = ["Explain AI", "Explain ML", "Explain NLP"]  # example inputs

async def process_batch():
    # Fire off all requests concurrently, then gather the responses.
    tasks = [
        client.completion.acreate(
            messages=[{"role": "user", "content": prompt}]
        )
        for prompt in prompts
    ]
    return await asyncio.gather(*tasks)

responses = asyncio.run(process_batch())
```
### Error Handling & Reliability
Automatic retries and fallback providers ensure high availability:
```python
client = JustLLM({
    "providers": {...},
    "retry": {
        "max_attempts": 3,
        "backoff_factor": 2,
        "retry_on": ["timeout", "rate_limit", "server_error"]
    }
})

# Automatically retries on failures
try:
    response = client.completion.create(
        messages=[{"role": "user", "content": "Hello"}],
        provider="invalid-provider"  # Will fail and retry
    )
except Exception as e:
    print(f"All retries failed: {e}")
```
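The `backoff_factor` controls how quickly retry delays grow. Assuming the common exponential form `delay = backoff_factor ** attempt` (an assumption; the library's exact schedule isn't documented here), the config above waits roughly 2s and then 4s between its three attempts:

```python
backoff_factor = 2
max_attempts = 3
for attempt in range(1, max_attempts):
    print(f"attempt {attempt} failed -> wait {backoff_factor ** attempt}s, retry")
# attempt 1 failed -> wait 2s, retry
# attempt 2 failed -> wait 4s, retry
```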
### Configuration Management
Flexible configuration with environment variable support:
```python
# Environment-based config
import os

client = JustLLM({
    "providers": {
        "openai": {"api_key": os.getenv("OPENAI_API_KEY")},
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_KEY"),
            "resource_name": os.getenv("AZURE_RESOURCE_NAME"),
            "api_version": "2024-12-01-preview"
        }
    }
})

# File-based config
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)
client = JustLLM(config)
```
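A matching `config.yaml` might look like the sketch below; the keys are assumed to mirror the Python dict structure above, and note that `yaml.safe_load` does not expand environment variables for you:

```yaml
providers:
  openai:
    api_key: "your-openai-key"
  google:
    api_key: "your-google-key"
routing:
  strategy: cost
caching:
  enabled: true
  ttl: 3600
```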
## 🏆 Comparison with Alternatives
| Feature | JustLLMs | LangChain | LiteLLM | OpenAI SDK | Haystack |
|---------|----------|-----------|---------|------------|----------|
| **Package Size** | 1.1MB | ~50MB | ~5MB | ~1MB | ~20MB |
| **Setup Complexity** | Simple config | Complex chains | Medium | Simple | Complex |
| **Multi-Provider** | ✅ 6+ providers | ✅ Many integrations | ✅ 100+ providers | ❌ OpenAI only | ✅ Limited LLMs |
| **Intelligent Routing** | ✅ Cost/speed/quality | ❌ Manual only | ⚠️ Basic routing | ❌ None | ❌ Pipeline-based |
| **Built-in Analytics** | ✅ Enterprise-grade | ❌ External tools needed | ⚠️ Basic metrics | ❌ None | ⚠️ Pipeline metrics |
| **Conversation Management** | ✅ Full lifecycle | ⚠️ Memory components | ❌ None | ❌ Manual handling | ✅ Dialog systems |
| **Business Rules** | ✅ Content validation | ❌ Custom implementation | ❌ None | ❌ None | ⚠️ Custom filters |
| **Cost Optimization** | ✅ Automatic routing | ❌ Manual optimization | ⚠️ Basic cost tracking | ❌ None | ❌ None |
| **Streaming Support** | ✅ All providers | ✅ Provider-dependent | ✅ Most providers | ✅ OpenAI only | ⚠️ Limited |
| **Production Ready** | ✅ Out of the box | ⚠️ Requires setup | ✅ Minimal setup | ⚠️ Basic features | ✅ Complex setup |
| **Learning Curve** | Low | High | Low | Low | High |
| **Enterprise Features** | ✅ Full suite | ⚠️ Custom development | ❌ Limited | ❌ None | ✅ Workflow focus |
| **Async Support** | ✅ Native async/await | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| **Caching** | ✅ Multi-backend | ⚠️ Custom implementation | ✅ Basic caching | ❌ None | ✅ Document stores |
### Key Differentiators
**JustLLMs is the sweet spot** for teams who need:
- **Production-ready LLM orchestration** without the complexity of LangChain
- **Enterprise features** that LiteLLM and OpenAI SDK lack
- **Intelligent cost optimization** that elsewhere requires manual implementation
- **Lightweight package** compared to heavy frameworks
- **Minimal learning curve** while maintaining powerful capabilities
## Enterprise Configuration
For production deployments with advanced features:
```python
enterprise_config = {
    "providers": {
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_KEY"),
            "resource_name": "my-enterprise-resource",
            "deployment_mapping": {
                "gpt-4": "my-gpt4-deployment",
                "gpt-3.5-turbo": "my-gpt35-deployment"
            }
        },
        "anthropic": {"api_key": os.getenv("ANTHROPIC_KEY")},
        "google": {"api_key": os.getenv("GOOGLE_KEY")}
    },
    "routing": {
        "strategy": "cost",
        "fallback_provider": "azure_openai",
        "fallback_model": "gpt-3.5-turbo"
    },
    "validation": {
        "enabled": True,
        "business_rules": [
            # PII detection, content filtering, compliance rules
        ]
    },
    "analytics": {
        "enabled": True,
        "track_usage": True,
        "track_performance": True
    },
    "caching": {
        "enabled": True,
        "backend": "redis",
        "ttl": 3600
    },
    "conversations": {
        "backend": "disk",
        "auto_save": True,
        "auto_title": True,
        "max_context_tokens": 8000
    }
}

client = JustLLM(enterprise_config)
```
## Monitoring & Observability
Real-time insights into your LLM usage:
```python
# Live metrics
metrics = client.analytics.get_live_metrics()
print(f"Requests (last 5 min): {metrics['recent_requests_5min']}")
print(f"Cache hit rate: {metrics['cache_hit_rate']:.1f}%")
print(f"Active providers: {metrics['active_providers']}")

# Detailed reporting
report = client.analytics.generate_report()
print(f"Most cost-efficient provider: {report.cross_provider_metrics.cost_efficiency_ranking[0]}")
print(f"Average latency: {report.cross_provider_metrics.average_latency_ms:.0f}ms")

# Export for business intelligence
from justllms.analytics.reports import PDFExporter

pdf_exporter = PDFExporter(include_charts=True)
pdf_exporter.export(report, "executive_llm_report.pdf")
```
## 🚀 Upcoming Features
**Next Release (v1.1.0)** - Coming Soon
### Function Calling & Multi-modal Support
Advanced model capabilities for complex workflows:
```python
# Function calling with automatic tool routing
functions = [{
    "name": "get_weather",
    "description": "Get weather for a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
    }
}]

response = client.completion.create(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=functions
)

# Vision capabilities across all compatible providers
response = client.completion.create(
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Analyze this chart"},
            {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
        ]
    }],
    model="auto"  # Automatically selects best vision model
)
```
### Additional Planned Features
- **Web-based Analytics Dashboard** - Visual insights and real-time monitoring
- **Advanced Conversation Analytics** - Sentiment analysis, topic modeling, conversation scoring
- **Custom Model Fine-tuning Integration** - Train and deploy custom models seamlessly
- **Enterprise SSO Support** - OAuth, SAML, and directory integration
- **Enhanced Compliance Tools** - SOC 2, ISO 27001 audit trails
- **Multi-region Deployment** - Automatic geographic routing for performance
## Contributing
We welcome contributions! Whether it's adding new providers, improving routing strategies, or enhancing analytics capabilities.
```bash
# Development setup
git clone https://github.com/just-llms/justllms.git
cd justllms
pip install -e ".[dev]"
pytest
```
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Support
- **Documentation**: Comprehensive guides and API reference
- **Examples**: Ready-to-run code samples in the `examples/` directory
- **Issues**: Report bugs and request features via GitHub Issues
- **Discussions**: Community support and ideas via GitHub Discussions
---
**JustLLMs** - Simple to start, powerful to scale, intelligent by design.