openlimit-lite

Name: openlimit-lite
Version: 1.3.1
Summary: Rate limiter for the OpenAI API
Homepage: https://github.com/dragoneyeAI/openlimit-lite
Author email: alexliao <support@dragoneye.ai>, shobrook <shobrookj@gmail.com>
Upload time: 2024-05-24 20:16:00
Requires Python: >=3.0
Keywords: openai, rate-limit, limit, api, request, token, leaky-bucket, gcra, asyncio
# openlimit-lite

Forked from https://github.com/shobrook/openlimit. Maintained for our own use and stripped down to remove the Redis dependencies.

A simple tool for maximizing usage of the OpenAI API without hitting the rate limit.

- Handles both _request_ and _token_ limits
- Precisely (to the millisecond) enforces rate limits with one line of code
- Handles _synchronous_ and _asynchronous_ requests

Implements the [generic cell rate algorithm](https://en.wikipedia.org/wiki/Generic_cell_rate_algorithm) (GCRA), a variant of the leaky bucket pattern.
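
For intuition, here is a minimal zero-burst GCRA sketch (illustrative only, not the library's actual implementation): each request is charged a fixed time increment, and a request must wait until the clock catches up with the bucket's "theoretical arrival time".

```python
import time

class GCRA:
    """Minimal zero-burst GCRA sketch (illustrative, not the library's code)."""

    def __init__(self, limit_per_minute: float):
        self.increment = 60.0 / limit_per_minute  # seconds charged per unit
        self.tat = time.monotonic()               # theoretical arrival time

    def acquire(self, units: float = 1.0) -> None:
        now = time.monotonic()
        self.tat = max(self.tat, now)   # never schedule in the past
        wait = self.tat - now           # time until the bucket has drained
        if wait > 0:
            time.sleep(wait)
        self.tat += units * self.increment  # charge this request
```

With both a request limit and a token limit, a limiter built this way would keep one bucket per limit and wait for whichever drains later.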

## Usage

### Define a rate limit

First, define your rate limits for the OpenAI model you're using. For example:

```python
from openlimit_lite import ChatRateLimiter

rate_limiter = ChatRateLimiter(request_limit=200, token_limit=40000)
```

This sets a rate limit for a chat completion model (e.g. gpt-4, gpt-3.5-turbo). `openlimit-lite` offers different rate limiter objects for different OpenAI models, all with the same parameters: `request_limit` and `token_limit`. Both limits are measured _per minute_ and may vary by account.

| Rate limiter            | Supported models                                                                   |
| ----------------------- | ---------------------------------------------------------------------------------- |
| `ChatRateLimiter`       | gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301    |
| `CompletionRateLimiter` | text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001 |
| `EmbeddingRateLimiter`  | text-embedding-ada-002                                                             |
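
Each limiter is constructed the same way. Assuming the other limiter classes are importable from the package root like `ChatRateLimiter` (an assumption based on the table above), and using placeholder limit values:

```python
from openlimit_lite import (
    ChatRateLimiter,
    CompletionRateLimiter,
    EmbeddingRateLimiter,
)

# Placeholder per-minute limits; substitute the values OpenAI assigns to
# your account for each model family.
chat_limiter = ChatRateLimiter(request_limit=200, token_limit=40000)
completion_limiter = CompletionRateLimiter(request_limit=3000, token_limit=250000)
embedding_limiter = EmbeddingRateLimiter(request_limit=3000, token_limit=250000)
```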

### Apply the rate limit

To apply the rate limit, add a `with` statement to your API calls:

```python
chat_params = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
}

with rate_limiter.limit(**chat_params):
    response = openai.ChatCompletion.create(**chat_params)
```

Ensure that `rate_limiter.limit` receives the same parameters as the actual API call. This is important for calculating expected token usage.

Alternatively, you can decorate functions that make API calls, as long as the decorated function receives the same parameters as the API call:

```python
@rate_limiter.is_limited()
def call_openai(**chat_params):
    response = openai.ChatCompletion.create(**chat_params)
    return response
```
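
The decorated function can then be called like any other, and the rate limit is applied automatically:

```python
response = call_openai(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```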

### Asynchronous requests

Rate limits can be enforced for asynchronous requests too:

```python
chat_params = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
}

async with rate_limiter.limit(**chat_params):
    response = await openai.ChatCompletion.acreate(**chat_params)
```
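
Because the limiter coordinates the waiting, a single limiter can be shared across many concurrent coroutines. A sketch, assuming the async context manager suspends until capacity is available, as the synchronous version does (the function names here are illustrative):

```python
import asyncio
import openai

async def ask(rate_limiter, question: str):
    chat_params = {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": question}]
    }
    # Each coroutine suspends inside the limiter until capacity frees up.
    async with rate_limiter.limit(**chat_params):
        return await openai.ChatCompletion.acreate(**chat_params)

async def main(rate_limiter, questions):
    # Launch all requests at once; the limiter spaces them out.
    return await asyncio.gather(*(ask(rate_limiter, q) for q in questions))
```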

### Token counting

Aside from rate limiting, `openlimit-lite` also provides functions for counting the tokens consumed by requests.

#### Chat requests

To count the _maximum_ number of tokens that could be consumed by a chat request (e.g. `gpt-3.5-turbo`, `gpt-4`), pass the [request arguments](https://platform.openai.com/docs/api-reference/chat/create) into the following function:

```python
from openlimit_lite.utilities import num_tokens_consumed_by_chat_request

request_args = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "...", "content": "..."}, ...],
    "max_tokens": 15,
    "n": 1
}
num_tokens = num_tokens_consumed_by_chat_request(**request_args)
```
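
One way to use this count is a pre-flight check against the per-minute token budget. A hypothetical example (`TOKEN_BUDGET` is an illustrative constant, not part of the library):

```python
TOKEN_BUDGET = 40000  # illustrative; match the token_limit configured above

num_tokens = num_tokens_consumed_by_chat_request(**request_args)
if num_tokens > TOKEN_BUDGET:
    raise ValueError(f"Request could consume {num_tokens} tokens, over budget")
```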

#### Completion requests

As with chat requests, to count tokens for completion requests (e.g. `text-davinci-003`), pass the [request arguments](https://platform.openai.com/docs/api-reference/completions/create) into the following function:

```python
from openlimit_lite.utilities import num_tokens_consumed_by_completion_request

request_args = {
    "model": "text-davinci-003",
    "prompt": "...",
    "max_tokens": 15,
    "n": 1
}
num_tokens = num_tokens_consumed_by_completion_request(**request_args)
```

#### Embedding requests

For embedding requests (e.g. `text-embedding-ada-002`), pass the [request arguments](https://platform.openai.com/docs/api-reference/embeddings/create) into the following function:

```python
from openlimit_lite.utilities import num_tokens_consumed_by_embedding_request

request_args = {
    "model": "text-embedding-ada-002",
    "input": "..."
}
num_tokens = num_tokens_consumed_by_embedding_request(**request_args)
```
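
The OpenAI embeddings endpoint also accepts a list of strings as `input`; assuming the counter mirrors the API's accepted shapes (an assumption, not confirmed by this README), a batch can be estimated the same way:

```python
# Assumption: the helper accepts a list for "input", mirroring the API.
batch_args = {
    "model": "text-embedding-ada-002",
    "input": ["first document", "second document", "third document"]
}
num_tokens = num_tokens_consumed_by_embedding_request(**batch_args)
```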

            
