llmswap

Name: llmswap
Version: 4.1.4
Uploaded: 2025-09-14 12:45:26
Requires Python: >=3.8
Author: Sreenath Menon
License: MIT
Summary: Universal AI SDK + Code Generation CLI: GitHub Copilot alternative with 7 providers (OpenAI GPT-4o/o1, Claude, Gemini, Cohere, Perplexity, IBM watsonx, Groq), natural language to code, cost optimization, enterprise analytics.
Keywords: ai, ai-assistant, ai-chat, ai-cli, ai-code-assistant, ai-code-generation, ai-code-review, ai-debugging, ai-logs, ai-pair-programming, anthropic, bash-generator, chatgpt-alternative, chromadb, claude, cli, code-completion-api, code-generation, code-review, cohere, command-generation, command-line-ai, copilot-alternative, copilot-cli, copilot-cli-alternative, copilot-replacement, cost-analytics, cost-optimization, debugging, developer-ai-tools, developer-tools, editor-integration, embeddings, enterprise-ai, fast-inference, fastapi, gemini, github-copilot-alternative, gpt-4, granite, groq, groq-inference, hackathon, hackathon-starter, ibm-watson, ibm-watsonx, langchain, langchain-alternative, litellm-alternative, llama, llm, llm-api, llm-gateway, llm-switching, log-analysis, mistral, multi-llm, multi-llm-copilot, natural-language-to-code, ollama, open-source-copilot, openai, perplexity, pinecone, provider-fallback, python-generator, rag, response-caching, retrieval-augmented-generation, self-hosted-ai, shell-integration, streamlit, terminal-ai, terminal-assistant, text-to-code, token-tracking, usage-analytics, vector-database, vim-integration, vim-plugin, watson-ai, watsonx

# llmswap - Universal AI SDK + Code Generation CLI

[![PyPI version](https://badge.fury.io/py/llmswap.svg)](https://badge.fury.io/py/llmswap)
[![PyPI Downloads](https://static.pepy.tech/badge/llmswap)](https://pepy.tech/projects/llmswap)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

**Universal AI SDK for Developers**: Switch between 7 AI providers (OpenAI GPT-4o/o1, Claude, Gemini, Cohere, Perplexity, IBM watsonx, Groq) with natural language code generation, cost optimization and enterprise analytics.

**🆕 GitHub Copilot CLI Alternative**: Generate commands and code from natural language using any AI provider. **Save 50-90% on AI Costs**: Intelligent caching, provider comparison, usage analytics.

```bash
# 🆕 NEW: GitHub Copilot CLI Alternative
llmswap generate "sort files by size in reverse order"
# Output: du -sh * | sort -hr

llmswap generate "Python function to read JSON with error handling" --language python
# Output: Complete Python function with try/except blocks

# Vim Integration - Direct code insertion into your editor!
:r !llmswap generate "Express.js REST API with CRUD operations"
# Inserts complete code directly into your vim buffer
```

```python
# Before: Provider lock-in, complex setup
import openai  # Locked to OpenAI
client = openai.Client(api_key="...")
response = client.chat.completions.create(...)  # $$$ every API call

# After: Freedom + savings with llmswap
from llmswap import LLMClient
client = LLMClient()  # Auto-detects any provider
response = client.query("Hello")  # Automatic caching = 50-90% savings
```

## ⚡ Get Started in 30 Seconds

```bash
pip install llmswap
```

```python
from llmswap import LLMClient

# Works with any provider you have
client = LLMClient()  # Auto-detects from environment
response = client.query("Explain quantum computing in 50 words")
print(response.content)
```

## 🎯 Why llmswap for AI Development?

| Feature | llmswap | LangChain | LiteLLM | Direct APIs |
|---------|---------|-----------|---------|-------------|
| **AI Providers** | 7 providers, 1 line switch | 50+ complex setup | 20+ basic support | 1 per codebase |
| **Integration** | `pip install llmswap` | Complex framework | Moderate setup | Per-provider SDKs |
| **Cost Control** | Built-in optimization | Manual configuration | Basic tracking | No tracking |
| **Enterprise Analytics** | Native cost/usage tracking | External tools required | Limited insights | Manual logging |
| **CLI Tools** | 6 powerful commands | Separate packages | None included | None |
| **Caching** | Intelligent built-in | Manual implementation | Basic support | DIY solution |
| **Learning Curve** | 5 minutes | Hours of documentation | 30 minutes | Per-API learning |
| **Self-Hosted** | Full control | Complex deployment | Basic options | Manual setup |

## 🚀 Three Ways to Use llmswap:

**📚 1. Python Library/SDK**
```python
from llmswap import LLMClient
client = LLMClient()  # Import into any codebase
response = client.query("Analyze this data")
```

**⚡ 2. CLI Tools**  
```bash
llmswap generate "sort files by size"           # GitHub Copilot alternative
llmswap generate "Python function to read JSON" # Multi-language code generation
llmswap ask "Debug this error"                  # Terminal AI assistant
llmswap costs                                    # Cost optimization insights
```

**📊 3. Enterprise Analytics**
```python
stats = client.get_usage_stats()         # Track AI spend
comparison = client.get_provider_comparison()  # Compare costs
```

## 🚀 Complete Feature Set

### 1️⃣ **Python SDK** - Multi-Provider Intelligence
```python
from llmswap import LLMClient

# Auto-detects available providers
client = LLMClient()  

# Or specify your preference
client = LLMClient(provider="anthropic")  # Claude 3 Opus/Sonnet/Haiku
client = LLMClient(provider="openai")     # GPT-4, GPT-3.5
client = LLMClient(provider="gemini")     # Google Gemini Pro/Flash
client = LLMClient(provider="watsonx")    # IBM watsonx.ai Granite
client = LLMClient(provider="ollama")     # Llama, Mistral, Phi, 100+ local
client = LLMClient(provider="groq")       # Groq ultra-fast inference
client = LLMClient(provider="cohere")     # Cohere Command models for RAG
client = LLMClient(provider="perplexity") # Perplexity web-connected AI

# Automatic failover
client = LLMClient(fallback=True)
response = client.query("Hello")  # Tries multiple providers

# Save 50-90% with intelligent caching
client = LLMClient(cache_enabled=True)
response1 = client.query("Expensive question")  # $$$ API call
response2 = client.query("Expensive question")  # FREE from cache
```
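
To confirm the cache is actually serving repeats, time two identical queries. This is a minimal sketch using only the `LLMClient` API shown above; the exact speedup depends on your provider and prompt:

```python
import time
from llmswap import LLMClient

client = LLMClient(cache_enabled=True)

# First call goes out to the provider API
start = time.perf_counter()
client.query("Summarize the CAP theorem in two sentences")
print(f"Cold call: {time.perf_counter() - start:.2f}s")

# The identical prompt should come back from the local cache
start = time.perf_counter()
client.query("Summarize the CAP theorem in two sentences")
print(f"Cached call: {time.perf_counter() - start:.2f}s")
```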

### 2️⃣ **CLI Suite** - 6 Powerful Terminal Tools
```bash
# 🆕 Generate code from natural language (GitHub Copilot alternative)
llmswap generate "sort files by size in reverse order"
llmswap generate "Python function to read JSON file" --language python
llmswap generate "find large files over 100MB" --execute

# Ask one-line questions
llmswap ask "How to optimize PostgreSQL queries?"

# Interactive AI chat
llmswap chat

# AI code review
llmswap review app.py --focus security

# Debug errors instantly
llmswap debug --error "ConnectionTimeout at line 42"

# Analyze logs with AI
llmswap logs --analyze app.log --since "2h ago"
```

### 3️⃣ **Analytics & Cost Optimization** (v4.0 NEW!)
```bash
# Compare provider costs before choosing
llmswap compare --input-tokens 1000 --output-tokens 500
# Output: Gemini $0.0005 | OpenAI $0.014 | Claude $0.011

# Track your actual usage and spending
llmswap usage --days 30 --format table
# Shows: queries, tokens, costs by provider, response times

# Get AI spend optimization recommendations
llmswap costs
# Suggests: Switch to Gemini, enable caching, use Ollama for dev
```

```python
# Python SDK - Full analytics suite
client = LLMClient(analytics_enabled=True)

# Automatic conversation memory
response = client.chat("What is Python?")
response = client.chat("How is it different from Java?")  # Remembers context

# Real-time cost tracking
stats = client.get_usage_stats()
print(f"Total queries: {stats['totals']['queries']}")
print(f"Total cost: ${stats['totals']['cost']:.4f}")
print(f"Avg response time: {stats['avg_response_time_ms']}ms")

# Cost optimization insights
analysis = client.get_cost_breakdown()
print(f"Potential savings: ${analysis['optimization_opportunities']['potential_provider_savings']:.2f}")
print(f"Recommended provider: {analysis['recommendations'][0]}")

# Compare providers for your specific use case
comparison = client.get_provider_comparison(input_tokens=1500, output_tokens=500)
print(f"Cheapest: {comparison['cheapest']} (${comparison['cheapest_cost']:.6f})")
print(f"Savings vs current: {comparison['max_savings_percentage']:.1f}%")
```

### 4️⃣ **Advanced Features**

**Async/Streaming Support**
```python
import asyncio
from llmswap import AsyncLLMClient

async def main():
    client = AsyncLLMClient()
    
    # Async queries
    response = await client.query("Explain AI")
    
    # Streaming responses
    async for chunk in client.stream("Write a story"):
        print(chunk, end="")

asyncio.run(main())
```

**Multi-User Security**
```python
# Context-aware caching for multi-tenant apps
response = client.query(
    "Get user data",
    cache_context={"user_id": "user123"}  # Isolated cache
)
```

**Provider Comparison**
```python
# Compare responses from different models
comparison = client.compare_providers(
    "Solve this problem",
    providers=["anthropic", "openai", "gemini"]
)
```

## 📊 Real-World Use Cases & Examples

### 🏢 **Enterprise: Content Generation at Scale**
**Netflix-style recommendation descriptions for millions of items:**
```python
from llmswap import LLMClient

# Start with OpenAI, switch to Gemini for 96% cost savings
client = LLMClient(provider="gemini", cache_enabled=True)

def generate_descriptions(items):
    for item in items:
        # Cached responses save 90% on similar content
        description = client.query(
            f"Create engaging description for {item['title']}",
            cache_context={"category": item['category']}
        )
        yield description.content

# Cost: $0.0005 per description vs $0.015 with OpenAI
```

### 👨‍💻 **Developers: AI-Powered Code Review**
**GitHub Copilot alternative for your team:**
```bash
# CLI for instant code review
llmswap review api_handler.py --focus security
```

```python
# Python SDK for CI/CD integration
from llmswap import LLMClient

client = LLMClient(analytics_enabled=True)

# pr_diff holds the diff text collected by your CI job
review = client.query(f"Review this PR for bugs: {pr_diff}")

# Track costs across your team
stats = client.get_usage_stats()
print(f"This month's AI costs: ${stats['totals']['cost']:.2f}")
```

### 🎓 **Education: AI Tutoring Platform**
**Khan Academy-style personalized learning:**
```python
from llmswap import LLMClient

client = LLMClient(provider="ollama")  # Free for schools!

def ai_tutor(student_question, subject, grade_level):
    # Use watsonx for STEM, Ollama for general subjects
    if subject in ["math", "science"]:
        client.set_provider("watsonx")

    response = client.query(
        f"Explain {student_question} for a {subject} student",
        cache_context={"grade_level": grade_level}
    )
    return response.content

# Zero cost with Ollama, enterprise-grade with watsonx
```

### 🚀 **Startups: Multi-Modal Customer Support**
**Shopify-scale merchant assistance:**
```python
from llmswap import LLMClient

# Start with Anthropic; fall back to other providers if rate-limited
client = LLMClient(fallback=True, cache_enabled=True)

async def handle_support_ticket(ticket):
    # 90% of questions are similar - cache saves thousands
    response = await client.aquery(
        f"Help with: {ticket.issue}",
        cache_context={"type": ticket.category}
    )
    
    # Auto-escalate complex issues
    if response.confidence < 0.8:
        client.set_provider("anthropic")  # Use best model
        response = await client.aquery(ticket.issue)
    
    return response.content
```

### 📱 **Content Creators: Writing Assistant**
**Medium/Substack article generation:**
```bash
# Quick blog post ideas
llmswap ask "10 trending topics in AI for developers"

# Full article draft
llmswap chat
> Write a 1000-word article on prompt engineering
> Make it more technical
> Add code examples
```

### 🔧 **DevOps Engineers: Infrastructure as Code**
**Kubernetes and Docker automation:**
```bash
# Generate Kubernetes deployment
llmswap generate "Kubernetes deployment for React app with 3 replicas" --save k8s-deploy.yaml

# Docker multi-stage build
llmswap generate "Docker multi-stage build for Node.js app with Alpine" --language dockerfile

# Terraform AWS infrastructure
llmswap generate "Terraform script for AWS VPC with public/private subnets" --save main.tf
```

### 🎯 **Data Scientists: Analysis Workflows**
**Pandas, visualization, and ML pipeline generation:**
```bash
# Data analysis scripts
llmswap generate "Pandas script to clean CSV and handle missing values" --language python

# Visualization code
llmswap generate "Matplotlib script for correlation heatmap" --save plot.py

# ML pipeline
llmswap generate "scikit-learn pipeline for text classification with TF-IDF" --language python
```

### 💬 **App Developers: Full Applications**
**Complete app generation with modern frameworks:**
```bash
# Streamlit chatbot
llmswap generate "Streamlit chatbot app with session state and file upload" --save chatbot.py

# FastAPI REST API
llmswap generate "FastAPI app with CRUD operations for user management" --save api.py

# React component
llmswap generate "React component for data table with sorting and filtering" --language javascript --save DataTable.jsx
```

### 🤖 **AI/ML Engineers: Model Deployment**
**Production-ready ML workflows and deployments:**
```bash
# LangChain RAG pipeline
llmswap generate "LangChain RAG system with ChromaDB and OpenAI embeddings" --language python --save rag_pipeline.py

# Hugging Face model fine-tuning
llmswap generate "Script to fine-tune BERT for sentiment analysis with Hugging Face" --save finetune.py

# Gradio ML demo app
llmswap generate "Gradio app for image classification with drag and drop" --save demo.py

# Vector database setup
llmswap generate "Pinecone vector database setup for semantic search" --language python
```

### 🔒 **Security Engineers: Vulnerability Scanning**  
**Security automation and compliance scripts:**
```bash
# Security audit script
llmswap generate "Python script to scan for exposed API keys in codebase" --save security_scan.py

# OAuth2 implementation
llmswap generate "FastAPI OAuth2 with JWT tokens implementation" --language python

# Rate limiting middleware
llmswap generate "Redis-based rate limiting for Express.js" --language javascript
```

### 🛠️ **AI Agent Development: Tool Creation**
**Build tools and functions for AI agents (inspired by Anthropic's writing tools):**
```bash
# Create tool functions for agents
llmswap generate "Python function for web scraping with BeautifulSoup error handling" --save tools/scraper.py

# Database interaction tools
llmswap generate "SQLAlchemy functions for CRUD operations with type hints" --save tools/database.py

# File manipulation utilities
llmswap generate "Python class for safe file operations with context managers" --save tools/file_ops.py

# API integration tools
llmswap generate "Async Python functions for parallel API calls with rate limiting" --save tools/api_client.py

# Agent orchestration
llmswap generate "LangChain agent with custom tools for research tasks" --language python
```

### 🏆 **Hackathon Power Kit: Win Your Next Hackathon**
**Build complete MVPs in minutes, not hours:**
```bash
# RAG Chatbot for Document Q&A (Most requested hackathon project)
llmswap generate "Complete RAG chatbot with OpenAI embeddings, Pinecone vector store, and Streamlit UI for PDF document Q&A" --save rag_chatbot.py

# Full-Stack SaaS Starter (0 to production in 5 minutes)
llmswap generate "Next.js 14 app with Clerk auth, Stripe payments, Prisma ORM, and PostgreSQL schema for SaaS platform" --save saas_mvp.js
```

## 🛠️ Installation & Setup

```bash
# Install package
pip install llmswap

# Set any API key (one is enough to get started)
export ANTHROPIC_API_KEY="sk-..."       # For Claude
export OPENAI_API_KEY="sk-..."          # For GPT-4
export GEMINI_API_KEY="..."             # For Google Gemini
export WATSONX_API_KEY="..."            # For IBM watsonx
export WATSONX_PROJECT_ID="..."         # watsonx project
export GROQ_API_KEY="gsk_..."           # For Groq ultra-fast inference
export COHERE_API_KEY="co_..."          # For Cohere Command models
export PERPLEXITY_API_KEY="pplx-..."    # For Perplexity web search
# Or run Ollama locally for 100% free usage
```
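
To see which providers auto-detection can find, you can check the same environment variables yourself. This sketch mirrors, but does not call, llmswap's own detection logic; it uses only the standard library and the variable names listed above:

```python
import os

# Environment variables from the setup instructions above
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "watsonx": "WATSONX_API_KEY",
    "groq": "GROQ_API_KEY",
    "cohere": "COHERE_API_KEY",
    "perplexity": "PERPLEXITY_API_KEY",
}

configured = [name for name, var in PROVIDER_ENV_VARS.items() if os.environ.get(var)]
print("Configured providers:", ", ".join(configured) or "none (try Ollama locally)")
```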

## 📈 Token Usage Guidelines

| Task Type | Input Tokens | Output Tokens | Estimated Cost |
|-----------|--------------|---------------|----------------|
| Simple Q&A | 100 | 50 | ~$0.001 |
| Code Review | 1000 | 300 | ~$0.010 |
| Document Analysis | 3000 | 800 | ~$0.025 |
| Creative Writing | 500 | 2000 | ~$0.020 |
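
These figures are rough, provider-dependent averages; the underlying arithmetic is just tokens divided by 1,000 times the per-1K-token price. A worked sketch with assumed prices (the rates below are illustrative, not any provider's published pricing; use `llmswap compare` for real numbers):

```python
# Assumed per-1K-token prices, for illustration only
PRICES = {"input": 0.003, "output": 0.015}  # dollars per 1K tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost: (tokens / 1000) * per-1K price, summed over input and output."""
    return (input_tokens / 1000) * PRICES["input"] + (output_tokens / 1000) * PRICES["output"]

# The "Code Review" row above: 1000 input + 300 output tokens
print(f"~${estimate_cost(1000, 300):.3f}")  # ~$0.008 at these assumed rates
```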

## 🔗 Quick Links

- **GitHub**: [github.com/sreenathmmenon/llmswap](https://github.com/sreenathmmenon/llmswap)
- **Documentation**: [Full API Reference](https://github.com/sreenathmmenon/llmswap#readme)
- **PyPI**: [pypi.org/project/llmswap](https://pypi.org/project/llmswap)
- **Issues**: [Report bugs or request features](https://github.com/sreenathmmenon/llmswap/issues)

## 🚀 Get Started Now

```bash
pip install llmswap
```

```python
from llmswap import LLMClient
client = LLMClient()
print(client.query("Hello, AI!").content)
```

**That's it!** You're now using AI with automatic provider detection, failover support, and cost optimization.

---

Built with ❤️ for developers who value simplicity and efficiency. Star us on [GitHub](https://github.com/sreenathmmenon/llmswap) if llmswap saves you time or money!
            
