<a href="https://pypi.org/project/self-limiters/"><img alt="PyPI" src="https://img.shields.io/pypi/v/self-limiters.svg"></a>
<a href="https://github.com/sondrelg/self-limiters/actions/workflows/publish.yml"><img alt="test status" src="https://github.com/sondrelg/self-limiters/actions/workflows/publish.yml/badge.svg"></a>
<a href="https://codecov.io/gh/sondrelg/self-limiters/"><img alt="coverage" src="https://codecov.io/gh/sondrelg/self-limiters/branch/main/graph/badge.svg?token=Q4YJPOFC1F"></a>
<a href="https://codecov.io/gh/sondrelg/self-limiters/"><img alt="python version" src="https://img.shields.io/badge/python-3.9%2B-blue"></a>
# Self limiters
A library for regulating traffic with respect to **concurrency** or **time**.
It implements a [semaphore](https://en.wikipedia.org/wiki/Semaphore_(programming)) to be used when you need to
limit the number of concurrent requests to an API (or other resource). For example, if you can only
send 5 concurrent requests.
It also implements the [token bucket algorithm](https://en.wikipedia.org/wiki/Token_bucket) which can be used
to limit the number of requests made in a given time interval. For example if you're restricted to 10 requests
per second.
Both limiters are async, FIFO, and distributed using Redis. You should probably only use this if you need
distributed queues.
This was written with rate-limiting in mind, but the semaphore and token bucket
implementations can be used for anything.
# Installation
```
pip install self-limiters
```
# Usage
Both implementations are written as async context managers.
### Semaphore
The `Semaphore` can be used like this:
```python
from self_limiters import Semaphore
# 5 concurrent requests at a time
async with Semaphore(name="", capacity=5, max_sleep=60, redis_url=""):
    client.get(...)
```
Under the hood, we use [`blpop`](https://redis.io/commands/blpop/) to wait for the semaphore to free up; the call is non-blocking for the event loop.
If you specify a non-zero `max_sleep`, a `MaxSleepExceededError` is raised if `blpop` waits longer than the specified value.
### Token bucket
The `TokenBucket` context manager is used the same way, like this:
```python
from self_limiters import TokenBucket
# 1 request per minute
async with TokenBucket(
    name="",
    capacity=1,
    refill_amount=1,
    refill_frequency=60,
    max_sleep=600,
    redis_url="",
):
    client.get(...)
```
The limiter first estimates when there will be capacity in the bucket - i.e., when it's this instance's turn to go -
then sleeps asynchronously until then.
If `max_sleep` is set and the estimated sleep time exceeds this, a `MaxSleepExceededError`
is raised immediately.
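To make that check concrete, here is a rough pure-Python sketch of the behavior described above. The function name `sleep_for` and the locally defined exception class are illustrative stand-ins, not the library's internals:

```python
import time


class MaxSleepExceededError(Exception):
    """Stand-in for the library's exception, for illustration only."""


def sleep_for(scheduled: float, max_sleep: float) -> float:
    """Return how long to sleep until the scheduled wake-up time.

    Raises immediately if the wait would exceed max_sleep; a
    max_sleep of 0 means "no limit", mirroring the behavior above.
    """
    sleep_time = max(scheduled - time.time(), 0.0)
    if max_sleep and sleep_time > max_sleep:
        raise MaxSleepExceededError(
            f"Estimated sleep of {sleep_time:.1f}s exceeds max_sleep={max_sleep}s"
        )
    return sleep_time
```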
### As a decorator
The package doesn't ship any decorators, but if you would
like to limit the rate at which a whole function is run,
you can create your own, like this:
```python
from self_limiters import Semaphore


# Define a decorator function
def limit(name, capacity):
    def middle(f):
        async def inner(*args, **kwargs):
            async with Semaphore(
                name=name,
                capacity=capacity,
                redis_url="redis://127.0.0.1:6389",
            ):
                return await f(*args, **kwargs)
        return inner
    return middle


# Then pass the relevant limiter arguments like this
@limit(name="foo", capacity=5)
async def fetch_foo(id: UUID) -> Foo:
    ...
```
# Implementation and performance breakdown
The library is written in Rust (for fun) and relies on [Lua](http://www.lua.org/about.html)
scripts and [pipelining](https://docs.rs/redis/0.22.0/redis/struct.Pipeline.html) to
improve the performance of each implementation.
Redis lets users upload and execute Lua scripts on the server directly, meaning we can write,
e.g., the entire token bucket logic in Lua. This presents a couple of nice benefits:
- Since they are executed on the redis instance, we can make 1 request to redis
where we would otherwise have to make 3 or 4. The time saved by reducing
the number of requests is significant.
- Redis is single-threaded and guarantees atomic execution of scripts, meaning
we don't have to worry about data races. In a prior iteration, when we had to make 4 requests
to estimate the wake-up time for a token bucket instance, we needed to use the
[redlock](https://redis.com/redis-best-practices/communication-patterns/redlock/)
algorithm to ensure fairness. With Lua scripts, our implementations are FIFO out of the box.
In summary, the scripts make our implementation faster, since we save several round-trips to the server
**and** no longer need locks - and distributed locks are expensive. They simultaneously make
the code much, much simpler.
This is how each implementation has ended up looking:
### The semaphore implementation
1. Run a [lua script](https://github.com/sondrelg/self-limiters/blob/main/src/semaphore.rs#L59:L109) to create a list data structure in redis, as the foundation of the semaphore.
This script is idempotent, and is skipped if the list has already been created.
2. Run [`BLPOP`](https://redis.io/commands/blpop/) to non-blockingly wait until the semaphore has capacity, and pop from the list when it does.
3. Then run a [pipelined command](https://github.com/sondrelg/self-limiters/blob/main/src/semaphore.rs#L139:L144) to release the semaphore by adding back the capacity.
In total we make 3 calls to redis, all of them non-blocking, where we would have made 6 without the scripts.
### The token bucket implementation
The token bucket implementation is even simpler. The steps are:
1. Run a [lua script](https://github.com/sondrelg/self-limiters/blob/main/src/token_bucket.rs#L69:L159) to estimate and return a wake-up time.
2. Sleep until the given timestamp.
We make 1 call instead of 3, then sleep. Both are non-blocking.
In other words, the vast majority of time is spent waiting in a non-blocking way, meaning the limiters' impact on an application's event loop should be close to negligible.
## Benchmarks
We run benchmarks in CI with GitHub Actions. On a normal `ubuntu-latest` runner, when creating 100 instances
of each implementation and calling them at the same time, the average runtimes are:
- Semaphore implementation: ~0.6ms per instance
- Token bucket implementation: ~0.03ms per instance
Take a look at the [benchmarking script](https://github.com/sondrelg/self-limiters/blob/main/src/bench.py) if you want
to run your own tests!
# Implementation reference
## The semaphore implementation
The semaphore implementation is useful when you need to limit a process
to `n` concurrent actions. For example if you have several web servers, and
you're interacting with an API that will only tolerate a certain number of
concurrent requests before locking you out.
The flow can be broken down as follows:
<img width=500 src="docs/semaphore.png">
The initial [lua script](https://github.com/sondrelg/self-limiters/blob/main/src/semaphore.rs#L59:L109)
first checks if the redis list we will build the semaphore on exists or not.
It does this by calling [`SETNX`](https://redis.io/commands/setnx/) on the key of the queue plus a postfix
(if the `name` specified in the class instantiation is "my-queue", then the queue name will be
`__self-limiters:my-queue` and `SETNX` will be called for `__self-limiters:my-queue-exists`). If the returned
value is 1, it means the queue we will use for our semaphore does not exist yet and needs to be created.
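The key scheme can be illustrated with a small hypothetical helper (`queue_keys` is not part of the library's API; it just mirrors the naming described above):

```python
def queue_keys(name: str):
    """Derive the Redis keys for a semaphore with the given name:
    the list key for the queue itself, plus the companion string
    key used with SETNX to flag that the queue exists."""
    queue_key = f"__self-limiters:{name}"
    exists_key = f"{queue_key}-exists"
    return queue_key, exists_key
```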
It might strike you as weird to maintain a separate value, just to indicate whether a list exists,
when we could just check the list itself. It would be nice if we could use
[`EXISTS`](https://redis.io/commands/exists/) on the list directly, but unfortunately a list is considered
not to exist when all elements are popped (i.e., when a semaphore is fully acquired), so I don't see
another way of doing this. Contributions are very welcome if you do!
<br><br>
Then, if the queue needs to be created, we call [`RPUSH`](https://redis.io/commands/rpush/) with a number of arguments
equal to the `capacity` value used when initializing the semaphore instance. For a semaphore with
a capacity of 5, we call `RPUSH <queue-name> 1 1 1 1 1`, where the pushed values are completely arbitrary.
Once the list/queue has been created, we [`BLPOP`](https://redis.io/commands/blpop/) to block until it's
our turn. `BLPOP` is FIFO by default. We also make sure to specify the `max_sleep` based on the initialized
semaphore instance setting. If nothing was passed we allow sleeping forever.
On `__aexit__` we run three commands in a pipelined query. We [`RPUSH`](https://redis.io/commands/rpush/) a `1`
back into the queue to "release" the semaphore, and set an expiry on the queue and the string value we called
`SETNX` on.
<br><br>
The expiries are a half-measure for dealing with dropped capacity. If a node holding the semaphore dies,
the capacity might never be returned. If, however, there is no one using the semaphore for the duration of the
expiry value, all values will be cleared, and the semaphore will be recreated at full capacity next time it's used.
The expiry is 30 seconds at the time of writing, but could be made configurable.
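The whole flow can be modeled in-process with an `asyncio.Queue` standing in for the Redis list (`put_nowait` ~ `RPUSH`, `get` ~ `BLPOP`). This is an illustrative sketch of the algorithm above, not the actual Rust/Redis implementation:

```python
import asyncio


class LocalSemaphoreSketch:
    """In-process model of the Redis-list semaphore described above."""

    def __init__(self, capacity: int, max_sleep: float = 0.0):
        self.queue: asyncio.Queue = asyncio.Queue()
        for _ in range(capacity):  # like RPUSH 1 1 1 ... on creation
            self.queue.put_nowait(1)
        self.max_sleep = max_sleep

    async def __aenter__(self):
        # Like BLPOP: wait (FIFO) until there's capacity,
        # honoring max_sleep; 0 means sleep forever.
        timeout = self.max_sleep or None
        await asyncio.wait_for(self.queue.get(), timeout=timeout)

    async def __aexit__(self, *exc):
        # Like RPUSH on __aexit__: return capacity to release
        self.queue.put_nowait(1)
```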
## The token bucket implementation
The token bucket implementation is useful when you need to limit a process by
a time interval. For example, to 1 request per minute, or 50 requests every 10 seconds.
The implementation is forward-looking. It works out the time there *would have been*
capacity in the bucket for a given client and returns that time. From there we can
asynchronously sleep until it's time to perform our rate limited action.
The flow can be broken down as follows:
<img width=700 src="docs/token_bucket.png">
Call the [schedule Lua script](https://github.com/sondrelg/self-limiters/blob/main/src/scripts/schedule.lua)
which first [`GET`](https://redis.io/commands/get/)s the *state* of the bucket.
The bucket state contains the last time slot scheduled and the number of tokens left for that time slot.
With a capacity of 1, having a `tokens_left_for_slot` variable makes no sense, but if there's
capacity of 2 or more, it is possible that we will need to schedule multiple clients to the
same time slot.
The script then works out whether to decrement the `tokens_left_for_slot` value, or to
advance the time slot by the refill frequency.
Finally, we store the bucket state again using [`SETEX`](https://redis.io/commands/setex/).
This allows us to store the state and set its expiry in a single command. The default expiry
is 30 seconds at the time of writing, but could be made configurable.
Note that none of this would work if Redis were not single-threaded: that is what makes
Lua scripts on Redis FIFO. Without it, we would need locks and a lot more logic.
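A rough pure-Python rendition of the scheduling logic may help; the real logic lives in the Lua script linked above, and the function below is illustrative only:

```python
def schedule(state, now, capacity, refill_amount, refill_frequency):
    """Compute a wake-up time from the bucket state described above.

    state is (slot, tokens_left_for_slot) or None;
    returns (wake_up_time, new_state).
    """
    if state is None or state[0] < now:
        # No usable state: open a fresh, full slot at the current time
        slot, tokens_left = now, capacity
    else:
        slot, tokens_left = state
    if tokens_left > 0:
        # Capacity left in the current slot: consume a token
        tokens_left -= 1
    else:
        # Slot exhausted: advance one refill interval, then refill
        slot += refill_frequency
        tokens_left = min(tokens_left + refill_amount, capacity) - 1
    return slot, (slot, tokens_left)
```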
Then we just sleep!
# Contributing
Please do! Feedback on the implementation, issues, and PRs are all welcome. See [`CONTRIBUTING.md`](https://github.com/sondrelg/self-limiters/blob/main/CONTRIBUTING.md) for more details.
Please also consider starring the repo to raise visibility.