speedy-openai

Name: speedy-openai
Version: 0.2.0
Home page: https://github.com/lucafirefox/speedy-openai
Summary: Async OpenAI client for fast and efficient API requests using the aiohttp module.
Upload time: 2024-12-16 15:28:37
Author: Luca Ferrario
Requires Python: <3.13,>=3.9
License: MIT
Keywords: openai, async, aiohttp, api

# Speedy OpenAI

[![PyPI version](https://badge.fury.io/py/speedy-openai.svg)](https://badge.fury.io/py/speedy-openai)
[![Python](https://img.shields.io/pypi/pyversions/speedy-openai.svg)](https://pypi.org/project/speedy-openai/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)


A high-performance, asynchronous Python client for the OpenAI API with built-in rate limiting and concurrency control.

## Features

- ⚡ Asynchronous request handling for optimal performance
- 🔄 Built-in rate limiting for both requests and tokens
- 🎛️ Configurable concurrency control
- 🔁 Automatic retry mechanism with backoff
- 📊 Progress tracking for batch requests
- 🎯 Token counting and management
- 📝 Comprehensive logging

## Installation

```bash
pip install speedy-openai
```

## Quick Start

```python
import asyncio
from speedy_openai import OpenAIClient

async def main():
    # Initialize the client
    client = OpenAIClient(
        api_key="your-api-key",
        max_requests_per_min=5000,  # Optional: default 5000
        max_tokens_per_min=15000000,  # Optional: default 15M
        max_concurrent_requests=250  # Optional: default 250
    )

    # Single request
    request = {
        "custom_id": "req1",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Hello!"}]
        }
    }
    
    response = await client.process_request(request)

    # Batch requests
    requests = [request, request]  # List of requests
    responses = await client.process_batch(requests)

if __name__ == "__main__":
    asyncio.run(main())
```

## Configuration Options

| Parameter | Default | Description |
|-----------|---------|-------------|
| `api_key` | Required | Your OpenAI API key |
| `max_requests_per_min` | 5000 | Maximum API requests per minute |
| `max_tokens_per_min` | 15000000 | Maximum tokens per minute |
| `max_concurrent_requests` | 250 | Maximum concurrent requests |
| `max_retries` | 5 | Maximum retry attempts |
| `max_sleep_time` | 60 | Maximum sleep time between retries (seconds) |
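
For reference, here is an instantiation that sets every option in the table (the values are illustrative, not recommendations):

```python
from speedy_openai import OpenAIClient

client = OpenAIClient(
    api_key="your-api-key",
    max_requests_per_min=1000,     # keep under your account's requests-per-minute limit
    max_tokens_per_min=2_000_000,  # keep under your account's tokens-per-minute limit
    max_concurrent_requests=100,   # cap on simultaneous in-flight requests
    max_retries=3,                 # give up after three failed attempts
    max_sleep_time=30,             # never wait longer than 30 s between retries
)
```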

## Features in Detail

### Rate Limiting

The client includes a sophisticated rate limiter that manages both request frequency and token usage (a simplified sketch follows the list):
- Automatically tracks remaining requests and tokens
- Updates limits from API response headers
- Implements waiting periods when limits are reached
- Supports dynamic limit adjustments
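
The package's internals aren't reproduced here; the following is a simplified, hypothetical sketch of the pattern described above (a per-minute request/token budget, refreshed from OpenAI's `x-ratelimit-*` response headers when they are present), not the library's actual implementation:

```python
import asyncio
import time

class SimpleRateLimiter:
    """Illustrative dual budget: requests per minute and tokens per minute."""

    def __init__(self, max_requests_per_min: int, max_tokens_per_min: int):
        self.max_requests = max_requests_per_min
        self.max_tokens = max_tokens_per_min
        self.remaining_requests = max_requests_per_min
        self.remaining_tokens = max_tokens_per_min
        self.window_start = time.monotonic()

    def _maybe_reset_window(self) -> None:
        # Refill both budgets once the one-minute window has elapsed.
        if time.monotonic() - self.window_start >= 60:
            self.remaining_requests = self.max_requests
            self.remaining_tokens = self.max_tokens
            self.window_start = time.monotonic()

    async def acquire(self, tokens_needed: int) -> None:
        # Block until both a request slot and enough token budget are free.
        while True:
            self._maybe_reset_window()
            if self.remaining_requests > 0 and self.remaining_tokens >= tokens_needed:
                self.remaining_requests -= 1
                self.remaining_tokens -= tokens_needed
                return
            await asyncio.sleep(1)  # budget exhausted: wait for the window to roll over

    def update_from_headers(self, headers: dict) -> None:
        # OpenAI responses carry x-ratelimit-remaining-* headers; trust them if present.
        if "x-ratelimit-remaining-requests" in headers:
            self.remaining_requests = int(headers["x-ratelimit-remaining-requests"])
        if "x-ratelimit-remaining-tokens" in headers:
            self.remaining_tokens = int(headers["x-ratelimit-remaining-tokens"])
```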

### Concurrency Control

- Manages concurrent requests using asyncio semaphores (see the sketch after this list)
- Prevents overwhelming the API with too many simultaneous requests
- Configurable maximum concurrent requests
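
In spirit, the semaphore pattern looks like this minimal sketch (illustrative, not the package's actual code):

```python
import asyncio

# Mirrors max_concurrent_requests: at most 250 coroutines may pass at once.
semaphore = asyncio.Semaphore(250)

async def bounded_request(send_coro):
    # Coroutines beyond the cap wait here until a slot frees up.
    async with semaphore:
        return await send_coro
```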

### Retry Mechanism

Built-in retry logic for handling common API errors (illustrated below):
- Automatic retries with fixed wait times
- Configurable maximum retry attempts
- Specific exception handling for API-related errors
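
Since tenacity is among the dependencies, the behavior plausibly resembles the following sketch; the decorator arguments here mirror the documented defaults but are assumptions, not the library's actual code:

```python
import aiohttp
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed

# Illustrative only: retry an async call up to 5 times with a fixed wait,
# retrying only on aiohttp client errors.
@retry(
    stop=stop_after_attempt(5),
    wait=wait_fixed(10),
    retry=retry_if_exception_type(aiohttp.ClientError),
)
async def send_with_retries(session: aiohttp.ClientSession, url: str, payload: dict):
    async with session.post(url, json=payload) as resp:
        resp.raise_for_status()  # 4xx/5xx raises ClientResponseError, triggering a retry
        return await resp.json()
```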

### Progress Tracking

Batch requests include the following (a generic tqdm sketch follows the list):
- Progress bar visualization using tqdm
- Processing time logging
- Detailed success/failure reporting
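
If you want the same kind of progress bar around your own batches, tqdm's asyncio helper is one option (this is generic tqdm usage, not speedy-openai internals):

```python
from tqdm.asyncio import tqdm

async def run_batch(client, requests):
    # tqdm.gather behaves like asyncio.gather but renders a progress bar
    # that advances as each request completes.
    return await tqdm.gather(*(client.process_request(r) for r in requests))
```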

## Error Handling

The client includes comprehensive error handling (a call-site sketch follows the list):
- API response validation
- Rate limit handling
- Network error recovery
- Invalid request detection
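
You can also layer your own handling at the call site. The sketch below assumes `process_request` propagates standard aiohttp/asyncio exceptions; the exception types shown come from those libraries, not from this package:

```python
import asyncio
import aiohttp

async def safe_request(client, request):
    try:
        return await client.process_request(request)
    except aiohttp.ClientResponseError as exc:
        # Non-2xx responses that survived the retry policy.
        print(f"API error {exc.status} for {request['custom_id']}")
    except aiohttp.ClientError as exc:
        # Connection resets, DNS failures, and other transport problems.
        print(f"Network error for {request['custom_id']}: {exc}")
    except asyncio.TimeoutError:
        print(f"Request {request['custom_id']} timed out")
    return None
```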

## Requirements

- Python >=3.9, <3.13 (per the package metadata)
- aiohttp
- tiktoken
- tenacity
- tqdm
- loguru
- pydantic

## Common Use Cases

### 1. Chat Completion with GPT-4

```python
import asyncio
from speedy_openai import OpenAIClient

async def chat_with_gpt4():
    client = OpenAIClient(api_key="your-api-key")
    
    request = {
        "custom_id": "chat-1",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Explain quantum computing in simple terms."}
            ],
            "temperature": 0.7
        }
    }
    
    response = await client.process_request(request)
    print(response["response"]["choices"][0]["message"]["content"])

asyncio.run(chat_with_gpt4())
```

### 2. Batch Processing Multiple Conversations

```python
import asyncio
from speedy_openai import OpenAIClient

async def process_multiple_conversations():
    client = OpenAIClient(api_key="your-api-key")
    
    conversations = [
        {"role": "user", "content": "What is AI?"},
        {"role": "user", "content": "Explain machine learning."},
        {"role": "user", "content": "What is deep learning?"}
    ]
    
    requests = [
        {
            "custom_id": f"batch-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-3.5-turbo",
                "messages": [conv],
                "temperature": 0.7
            }
        }
        for i, conv in enumerate(conversations)
    ]
    
    responses = await client.process_batch(requests)
    return responses

if __name__ == "__main__":
    asyncio.run(process_multiple_conversations())
```

## Testing

The project uses pytest for testing. To run the tests:

1. Clone the repository:
```bash
git clone https://github.com/lucafirefox/speedy-openai.git
cd speedy-openai
```

2. Install development dependencies:
```bash
poetry install
```

3. Run tests:
```bash
poetry run pytest
```

### Test Structure

The test suite includes:

- Unit tests for core functionality
- Integration tests for API interactions
- Rate limiting tests
- Concurrency tests
- Error handling tests

### Running Specific Tests

Run specific test categories:
```bash
# Run only rate limiter tests
poetry run pytest tests/test_rate_limiter.py

# Run only client tests
poetry run pytest tests/test_client.py

# Run with verbose output
poetry run pytest -v

# Run with coverage report
poetry run pytest --cov=speedy_openai
```

### Writing Tests

When contributing new features, please ensure:
- All new features have corresponding tests
- Test coverage remains above 80%
- Tests are properly documented
- Both success and failure cases are covered

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

For issues, questions, or contributions, please create an issue in the GitHub repository.


            
