llmbridge-py

- Name: llmbridge-py
- Version: 0.2.2
- Summary: Unified Python interface for OpenAI, Anthropic, Google, and Ollama LLMs with optional model registry and usage tracking
- Author: Juan Reyero <juan@juanreyero.com>
- Upload time: 2025-08-13 17:02:57
- Requires Python: >=3.10
- License: MIT
- Keywords: ai, anthropic, api, claude, gemini, gpt, llm, ollama, openai
- Homepage: https://juanreyero.com/open/llmbridge/
- Repository: https://github.com/juanre/llmbridge
# LLMBridge

Unified Python service to call multiple LLM providers (OpenAI, Anthropic, Google, Ollama) with one API. Optional DB logging and model registry. Built-in response caching.

## Highlights

- **One interface** for all providers
- **Optional DB** (SQLite or PostgreSQL) for logging/models
- **Built-in caching** (opt‑in, TTL, deterministic requests)
- **Model registry & costs** (optional)
- **Files/images** helpers

## Installation

```bash
# Basic installation (SQLite support only)
uv add llmbridge-py

# With PostgreSQL support  
uv add "llmbridge-py[postgres]"

# Development installation (includes example dependencies)
uv add --dev llmbridge-py
# Or from source:
uv pip install -e ".[dev]"
```

### Requirements
- Python 3.10+
- API keys for the LLM providers you want to use

## Quick Start

### 1) No database (just call a model)

```python
import asyncio
from llmbridge.service import LLMBridge
from llmbridge.schemas import LLMRequest, Message

async def main():
    # Initialize service without database
    service = LLMBridge(enable_db_logging=False)
    
    # Make a request
    request = LLMRequest(
        messages=[Message(role="user", content="Hello!")],
        model="gpt-4o-mini"
    )
    
    response = await service.chat(request)
    print(response.content)

asyncio.run(main())
```

### 2) With SQLite (local logging)

```python
import asyncio
from llmbridge.service_sqlite import LLMBridgeSQLite
from llmbridge.schemas import LLMRequest, Message

async def main():
    # Initialize with SQLite (default: llmbridge.db)
    service = LLMBridgeSQLite()
    
    # Or specify a custom SQLite file
    service = LLMBridgeSQLite(db_path="my_app.db")
    
    # Calls are logged to SQLite
    request = LLMRequest(
        messages=[Message(role="user", content="Hello!")],
        model="claude-3-5-haiku-20241022"
    )
    
    response = await service.chat(request)
    print(f"Response: {response.content}")
    print(f"Cost: ${response.usage.get('cost', 0):.4f} if tracked")

asyncio.run(main())
```

### 3) With PostgreSQL (production logging)

```python
import asyncio
from llmbridge.service import LLMBridge

async def main():
    # Initialize with PostgreSQL
    service = LLMBridge(
        db_connection_string="postgresql://user:pass@localhost/dbname"
    )
    
    # Use service.chat(...) as above

asyncio.run(main())
```

## Database setup (optional)

- SQLite: no setup, tables auto-created on first use.
- PostgreSQL: point `db_connection_string` to an existing DB; schema/tables are created on first use.
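
A minimal sketch of picking a backend at startup, following the `DATABASE_URL` convention from the Configuration section below:

```python
import os

from llmbridge.service import LLMBridge
from llmbridge.service_sqlite import LLMBridgeSQLite

# Use PostgreSQL when DATABASE_URL is set; otherwise fall back to local SQLite.
database_url = os.environ.get("DATABASE_URL")
if database_url:
    service = LLMBridge(db_connection_string=database_url)
else:
    service = LLMBridgeSQLite(db_path="llmbridge.db")
```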

## Caching

Response cache is opt‑in per request and applies only to deterministic calls (temperature ≤ 0.1). You control the TTL.

```python
request = LLMRequest(
    messages=[Message(role="user", content="What is RAG?")],
    model="gpt-4o-mini",
    temperature=0.0,
    cache={"enabled": True, "ttl_seconds": 600},
)
response = await service.chat(request)
```

Notes:
- Cache key is provider‑agnostic (messages, model, format, tools, limits, temperature).
- If DB logging is on, a small DB‑backed cache is used; otherwise in‑memory.
- Anthropic additionally uses provider‑side prompt caching for the system prompt when `cache.enabled` is true.
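
For intuition, a provider-agnostic key can be built by hashing a canonical serialization of exactly those fields. This is an illustrative sketch of the idea, not the library's actual implementation:

```python
import hashlib
import json

def cache_key(messages, model, response_format=None, tools=None,
              max_tokens=None, temperature=0.0) -> str:
    # messages here are plain dicts, e.g. [{"role": "user", "content": "..."}].
    # Sorted keys and compact separators make the serialization stable,
    # so equivalent requests hash to the same key.
    payload = json.dumps(
        {
            "messages": messages,
            "model": model,
            "response_format": response_format,
            "tools": tools,
            "max_tokens": max_tokens,
            "temperature": temperature,
        },
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```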

## Model registry (optional)

When DB logging is enabled, you can query models and usage via the service:

```python
# List active models
models = await service.get_models_from_db()
for m in models:
    print(m.provider, m.model_name)

# Per-user usage (id_at_origin is your user/session ID)
stats = await service.get_usage_stats(id_at_origin="user-123", days=30)
```

## CLI (optional)

The `llmbridge` CLI manages database initialization and the model registry.

```bash
# Initialize database schema and seed default models

# PostgreSQL (use DATABASE_URL)
export DATABASE_URL=postgresql://user:pass@localhost/dbname
llmbridge init-db

# SQLite
llmbridge --sqlite ./llmbridge.db init-db

# Load curated JSONs (PostgreSQL or SQLite)
llmbridge json-refresh

# With SQLite file
llmbridge --sqlite ./llmbridge.db json-refresh
```

## Configuration

Set env vars or a `.env` file:

```bash
# API Keys (at least one required)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...  # or GEMINI_API_KEY
OLLAMA_BASE_URL=http://localhost:11434  # Optional

# Database (optional)
DATABASE_URL=postgresql://user:pass@localhost/dbname  # For PostgreSQL
# Or leave unset to use SQLite
```
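
If you keep these in a `.env` file, one way to load them before constructing the service is `python-dotenv` (an assumed choice here; any loader that populates `os.environ` works):

```python
from dotenv import load_dotenv  # pip install python-dotenv
from llmbridge.service import LLMBridge

load_dotenv()  # copies .env entries into os.environ
# Provider API keys are picked up from the environment.
service = LLMBridge(enable_db_logging=False)
```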

### Provider selection

```python
# Explicitly specify provider
response = await service.chat(
    LLMRequest(
        messages=[Message(role="user", content="Hello")],
        model="anthropic:claude-3-5-sonnet-20241022"  # Provider prefix
    )
)

# Auto-detection also works
response = await service.chat(
    LLMRequest(
        messages=[Message(role="user", content="Hello")],
        model="gpt-4o"  # Automatically uses OpenAI
    )
)
```

## Files and images

```python
from llmbridge.file_utils import analyze_image

# Analyze an image
image_content = analyze_image(
    "path/to/image.png",
    "What's in this image?"
)

request = LLMRequest(
    messages=[Message(role="user", content=image_content)],
    model="gpt-4o"  # Use a vision-capable model
)

response = await service.chat(request)
```

Notes:
- When sending messages that include PDF documents with OpenAI models, the service automatically routes to the Assistants API for analysis. Tools and custom response formats are not supported in this PDF path.
- OpenAI reasoning models (`o1`, `o1-mini`) are routed via the Responses API. These do not support tools or custom response formats; attempting to use them will raise a validation error.

## Reference (minimal)

- `LLMBridge` (service) → `await service.chat(LLMRequest(...))`
- `LLMRequest` fields: `messages`, `model`, optional `temperature`, `max_tokens`, `tools`, `response_format`, `cache={enabled: bool, ttl_seconds: int}`
- Provider string: `"provider:model"` or just `"model"` (auto-detected)
- Optional DB helpers: `await service.get_models_from_db()`, `await service.get_usage_stats(id_at_origin, days)`
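
Putting those together (inside an async context, with `service` constructed as in Quick Start; values are illustrative):

```python
request = LLMRequest(
    messages=[Message(role="user", content="Summarize RAG in one sentence.")],
    model="openai:gpt-4o-mini",  # "provider:model" form; bare "gpt-4o-mini" also works
    temperature=0.0,
    max_tokens=200,
    cache={"enabled": True, "ttl_seconds": 300},
)
response = await service.chat(request)  # inside an async function
```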

## Initialization patterns

There are two production modes for database usage:

- Managed by llmbridge (recommended default)
  - PostgreSQL: `service = LLMBridge(db_connection_string=os.environ["DATABASE_URL"])`. The service initializes the connection, applies migrations (creates schema/tables/functions), and seeds curated models on first use.
  - SQLite (local/dev): `service = LLMBridgeSQLite(db_path="llmbridge.db")`. Tables are created and default models inserted automatically on initialization.

- Managed by host app (pgdbm)
  - Create an `AsyncDatabaseManager` in your application and pass it in. llmbridge will apply migrations and seed models within the provided schema but will not own the pool.

```python
from pgdbm import AsyncDatabaseManager
from llmbridge.service import LLMBridge

# Example: use an externally created manager (pool & config omitted here)
db_manager = AsyncDatabaseManager(..., schema="llmbridge")
service = LLMBridge(db_manager=db_manager, origin="myapp")
# On first use, llmbridge will initialize and migrate the llmbridge schema
```

PostgreSQL migrations require the `pgcrypto` extension; the migrations enable it if missing and use `gen_random_uuid()` for primary keys.
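
The migrations handle this automatically, but if your database user may lack permission to create extensions, a pre-flight check is easy; a sketch using `asyncpg` (an assumed driver choice):

```python
import asyncio
import asyncpg

async def pgcrypto_installed(dsn: str) -> bool:
    # True if the pgcrypto extension is already installed in this database.
    conn = await asyncpg.connect(dsn)
    try:
        return await conn.fetchval(
            "SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pgcrypto')"
        )
    finally:
        await conn.close()

print(asyncio.run(pgcrypto_installed("postgresql://user:pass@localhost/dbname")))
```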

## Development

```bash
# Clone and install
git clone https://github.com/juanreyero/llmbridge.git
cd llmbridge
uv pip install -e ".[dev]"

# Run tests
uv run pytest tests/

# Format code
uv run black src/ tests/
uv run isort src/ tests/
```

### Contributing

1. Fork the repo and create a feature branch
2. Make changes and add tests
3. Ensure tests pass and code is formatted
4. Submit a pull request

Note: The repo may contain symlinks (pgdbm, mcp-client) for local development. These are gitignored and not required.

## License

MIT

Pull requests welcome! Please ensure all tests pass and add new tests for new features.
            
