api-jongler

Name: api-jongler
Version: 1.1.0
Home page: https://github.com/antonpavlenko/api-jongler
Summary: A middleware utility for calling Google AI APIs (Gemini and Gemma) using multiple API keys to reduce the need for paid tiers
Upload time: 2025-07-16 01:44:57
Author: Anton Pavlenko
Requires Python: >=3.8
# API Jongler

A middleware utility for calling Google AI APIs (Gemini and Gemma) using multiple API keys to reduce the need for paid tiers.

## Description

APIJongler is a Python utility that manages multiple API keys for Google AI services (Gemini) and Hugging Face Gemma models, automatically handles key rotation, and provides optional Tor connectivity for enhanced privacy. It's designed to help developers distribute API calls across multiple keys to stay within free tier limits and seamlessly work with both Google's Gemini API and open-source Gemma models.

## Features

- **Google AI Integration**: Seamless access to both Gemini API and Gemma models via Hugging Face
- **Multiple API Key Management**: Automatically rotates between available API keys to maximize free tier usage
- **Lock Management**: Prevents concurrent use of the same API key across multiple processes
- **Error Handling**: Tracks and avoids problematic API keys automatically
- **Tor Support**: Optional routing through Tor network for enhanced privacy
- **Extensible**: Easy to add new API connectors via JSON configuration
- **Comprehensive Logging**: Configurable logging with colored console output
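The key-rotation and lock-management behavior described above can be sketched with per-key lock files. This is an illustrative sketch only, not APIJongler's actual implementation; the helper names `acquire_key` and `release_key` are hypothetical:

```python
import os
import tempfile

# Hypothetical sketch: one lock file per API key prevents two processes
# from using the same key at the same time.
LOCK_DIR = os.path.join(tempfile.gettempdir(), "apijongler_locks")

def acquire_key(keys):
    """Return the first (name, key) pair not currently locked."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    for name, key in keys.items():
        lock_path = os.path.join(LOCK_DIR, name + ".lock")
        try:
            # O_EXCL makes creation atomic: it fails if another
            # process already holds this key's lock file.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return name, key
        except FileExistsError:
            continue  # key in use; try the next one
    raise RuntimeError("All API keys are currently locked")

def release_key(name):
    """Release a key by deleting its lock file."""
    os.remove(os.path.join(LOCK_DIR, name + ".lock"))

name, key = acquire_key({"key1": "abc-123", "key2": "def-456"})
print(f"Using {name}")
release_key(name)
```

A real implementation would also need to handle stale locks left behind by crashed processes, which is one reason the library exposes `APIJongler.cleanUp()`.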

## Installation

```bash
pip install api-jongler
```

## Configuration

1. Set the configuration file path:
```bash
export APIJONGLER_CONFIG=/path/to/your/APIJongler.ini
```

2. Create your configuration file (APIJongler.ini):
```ini
[generativelanguage.googleapis.com]
key1 = your-gemini-api-key-1
key2 = your-gemini-api-key-2
key3 = your-gemini-api-key-3

[api-inference.huggingface.co]
key1 = hf_your-huggingface-token-1
key2 = hf_your-huggingface-token-2
key3 = hf_your-huggingface-token-3
```

**Note**:
- Google Gemini API keys are available for free at [Google AI Studio](https://aistudio.google.com/app/apikey).
- Hugging Face API tokens (for Gemma models) are available at [Hugging Face Settings](https://huggingface.co/settings/tokens).
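The configuration file above uses standard INI syntax, so you can inspect it with the standard library's `configparser` (a sketch; APIJongler parses this file internally):

```python
import configparser

# Parse an APIJongler.ini-style config: each section is one connector,
# each entry within it is one API key.
sample = """
[generativelanguage.googleapis.com]
key1 = your-gemini-api-key-1
key2 = your-gemini-api-key-2
"""

config = configparser.ConfigParser()
config.read_string(sample)

for section in config.sections():
    keys = dict(config[section])
    print(f"{section}: {len(keys)} key(s)")
```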

## Usage

### Basic Example with Google Gemini (Free Tier)

```python
from api_jongler import APIJongler

# Initialize with Gemini connector
jongler = APIJongler("generativelanguage.googleapis.com", is_tor_enabled=False)

# Use Gemini 1.5 Flash (free tier) for text generation
response, status_code = jongler.request(
    method="POST",
    endpoint="/v1beta/models/gemini-1.5-flash:generateContent",
    request='{"contents":[{"parts":[{"text":"Hello, how are you?"}]}]}'
)

print(f"Response: {response}")
print(f"Status Code: {status_code}")

# Clean up when done (automatically called on destruction)
del jongler

# Or manually clean up all locks and errors
APIJongler.cleanUp()
```

### Working with JSON Data (Recommended)

```python
from api_jongler import APIJongler

# Initialize with Gemini connector
jongler = APIJongler("generativelanguage.googleapis.com")

# Use requestJSON() for automatic JSON handling (recommended)
response_data = jongler.requestJSON(
    endpoint="/v1beta/models/gemini-1.5-flash:generateContent",
    data={
        "contents": [{"parts": [{"text": "Explain machine learning"}]}]
    }
)

# Response is automatically parsed as dictionary
print(response_data["candidates"][0]["content"]["parts"][0]["text"])
```
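The chained indexing above raises `KeyError` or `IndexError` if the API returns an error payload instead of candidates. A small defensive helper (not part of APIJongler's API; `extract_text` is a name chosen here for illustration) makes this safer:

```python
def extract_text(response_data):
    """Safely pull the generated text out of a Gemini generateContent response."""
    try:
        return response_data["candidates"][0]["content"]["parts"][0]["text"]
    except (KeyError, IndexError, TypeError):
        return None  # e.g. an error payload or an empty candidate list

# Example with a response-shaped dict:
sample = {"candidates": [{"content": {"parts": [{"text": "Hi there"}]}}]}
print(extract_text(sample))            # -> Hi there
print(extract_text({"error": {}}))     # -> None
```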

### Method Comparison

APIJongler provides two methods for making requests:

| Method | Input | Output | Use Case |
|--------|--------|---------|----------|
| `request()` | Raw string | `(response_text, status_code)` | Low-level control, non-JSON APIs |
| `requestJSON()` | Python dict | Parsed dictionary | JSON APIs (recommended) |

**Example with both methods:**

```python
# Low-level with request()
response_text, status_code = jongler.request(
    method="POST",
    endpoint="/v1beta/models/gemini-1.5-flash:generateContent", 
    request='{"contents":[{"parts":[{"text":"Hello"}]}]}'  # Raw JSON string
)
import json
data = json.loads(response_text)  # Manual parsing

# High-level with requestJSON() 
data = jongler.requestJSON(
    endpoint="/v1beta/models/gemini-1.5-flash:generateContent",
    data={"contents": [{"parts": [{"text": "Hello"}]}]}  # Python dict
)
# No manual parsing needed
```

### Available Gemini Models

The Gemini connector provides access to these models:

| Model | Description | Free Tier | Best For |
|-------|-------------|-----------|----------|
| `gemini-1.5-flash` | Fast and versatile | ✅ Yes | General tasks, quick responses |
| `gemini-2.0-flash` | Latest generation | ✅ Yes | Modern features, enhanced speed |
| `gemini-2.5-flash` | Best price/performance | Paid | Cost-effective quality responses |
| `gemini-2.5-pro` | Most powerful | Paid | Complex reasoning, advanced tasks |
| `gemini-1.5-pro` | Complex reasoning | Paid | Advanced analysis, coding |

### CLI Usage Examples

```bash
# Quick text generation (free tier)
apijongler generativelanguage.googleapis.com POST /v1beta/models/gemini-1.5-flash:generateContent '{"contents":[{"parts":[{"text":"Hello"}]}]}' --pretty

# Code generation (free tier)  
apijongler generativelanguage.googleapis.com POST /v1beta/models/gemini-2.0-flash:generateContent '{"contents":[{"parts":[{"text":"Write a Python function"}]}]}' --pretty

# Advanced reasoning (requires paid tier)
apijongler generativelanguage.googleapis.com POST /v1beta/models/gemini-2.5-pro:generateContent '{"contents":[{"parts":[{"text":"Analyze this problem"}]}]}' --pretty
```

## API Connectors

API connectors are defined in JSON files in the `connectors/` directory. Example:

```json
{
    "name": "generativelanguage.googleapis.com",
    "host": "generativelanguage.googleapis.com",
    "port": 443,
    "protocol": "https",
    "format": "json",
    "requires_api_key": true
}
```
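A connector definition like the one above can be sanity-checked before use. This validation sketch assumes only the fields shown in the example and is not part of APIJongler itself:

```python
import json

# Fields present in the example connector above (assumed to be required).
REQUIRED_FIELDS = {"name", "host", "port", "protocol", "format", "requires_api_key"}

def validate_connector(raw_json):
    """Parse a connector definition and check the expected fields are present."""
    connector = json.loads(raw_json)
    missing = REQUIRED_FIELDS - connector.keys()
    if missing:
        raise ValueError(f"connector missing fields: {sorted(missing)}")
    if connector["protocol"] not in ("http", "https"):
        raise ValueError("protocol must be http or https")
    return connector

connector = validate_connector("""{
    "name": "generativelanguage.googleapis.com",
    "host": "generativelanguage.googleapis.com",
    "port": 443,
    "protocol": "https",
    "format": "json",
    "requires_api_key": true
}""")
print(connector["host"])
```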

### Pre-configured Connectors

- **generativelanguage.googleapis.com**: Access to Google's Gemini API models (gemini-1.5-flash, gemini-2.0-flash, gemini-2.5-flash, etc.)
- **api-inference.huggingface.co**: Open-source Gemma models via Hugging Face Inference API (gemma-2-9b-it, gemma-2-27b-it, etc.)
- **httpbin.org**: For testing and development purposes only

### Gemma vs Gemini Models

**Important**: Gemma and Gemini are different model families:

| Model Family | Access Method | API Keys Source | Example Model |
|--------------|---------------|-----------------|---------------|
| **Gemini** | Google's Cloud API | [Google AI Studio](https://aistudio.google.com/app/apikey) | gemini-1.5-flash |
| **Gemma** | Hugging Face Inference API | [HuggingFace Tokens](https://huggingface.co/settings/tokens) | google/gemma-2-9b-it |

#### Gemma Usage Examples

```python
from api_jongler import APIJongler

# Use Gemma 2 9B model
jongler = APIJongler("api-inference.huggingface.co")
response = jongler.requestJSON(
    endpoint="/models/google/gemma-2-9b-it",
    data={
        "inputs": "What is machine learning?",
        "parameters": {"max_new_tokens": 100, "temperature": 0.7}
    }
)
print(response)
```

```bash
# CLI usage for Gemma
apijongler api-inference.huggingface.co POST /models/google/gemma-2-27b-it '{"inputs":"Explain Python","parameters":{"max_new_tokens":150}}' --pretty
```
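The Hugging Face Inference API typically returns a list of objects with a `generated_text` field, rather than Gemini's `candidates` structure. A defensive extraction helper (illustrative only; `extract_gemma_text` is not part of APIJongler, and the response shape can vary by task):

```python
def extract_gemma_text(response):
    """Pull generated text from a Hugging Face text-generation response.

    The Inference API usually returns a list like
    [{"generated_text": "..."}]; anything else (e.g. an error dict
    while the model is loading) yields None.
    """
    if isinstance(response, list) and response and "generated_text" in response[0]:
        return response[0]["generated_text"]
    return None

sample = [{"generated_text": "Machine learning is..."}]
print(extract_gemma_text(sample))  # -> Machine learning is...
```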

**Note**: The Gemini connector provides access to Google's **Gemini API** models, not Gemma models. Available models include:
- `gemini-1.5-flash` - Fast and versatile (free tier)
- `gemini-2.0-flash` - Latest generation (free tier)  
- `gemini-2.5-flash` - Best price/performance
- `gemini-2.5-pro` - Most powerful model
- `gemini-1.5-pro` - Complex reasoning tasks

## License

MIT License

            
