:Name: mpire
:Version: 2.10.1
:Summary: A Python package for easy multiprocessing, but faster than multiprocessing
:Home page: https://github.com/sybrenjansen/mpire
:Author: Sybren Jansen
:License: MIT
:Upload time: 2024-03-19 08:26:34

MPIRE (MultiProcessing Is Really Easy)
======================================
|Build status| |Docs status| |PyPI status| |Python versions|

.. |Build status| image:: https://github.com/sybrenjansen/mpire/workflows/Build/badge.svg?branch=master
    :target: https://github.com/sybrenjansen/mpire/actions/workflows/python-package.yml
    :alt: Build status
.. |Docs status| image:: https://github.com/sybrenjansen/mpire/workflows/Docs/badge.svg?branch=master
    :target: https://sybrenjansen.github.io/mpire/
    :alt: Documentation
.. |PyPI status| image:: https://img.shields.io/pypi/v/mpire
    :target: https://pypi.org/project/mpire/
    :alt: PyPI project page
.. |Python versions| image:: https://img.shields.io/pypi/pyversions/mpire
    :target: https://pypi.org/project/mpire/
    :alt: Supported Python versions

``MPIRE``, short for MultiProcessing Is Really Easy, is a Python package for multiprocessing. ``MPIRE`` is faster in
most scenarios, packs more features, and is generally more user-friendly than the default multiprocessing package. It
combines the convenient map-like functions of ``multiprocessing.Pool`` with the benefits of using copy-on-write shared
objects of ``multiprocessing.Process``, together with easy-to-use worker state, worker insights, worker init and exit
functions, timeouts, and progress bar functionality.
Full documentation is available at https://sybrenjansen.github.io/mpire/.
Features
--------
- Faster execution than other multiprocessing libraries. See benchmarks_.
- Intuitive, Pythonic syntax
- Multiprocessing with ``map``/``map_unordered``/``imap``/``imap_unordered``/``apply``/``apply_async`` functions
- Easy use of copy-on-write shared objects with a pool of workers (copy-on-write is only available for start method
  ``fork``)
- Each worker can have its own state, and with the convenient worker init and exit functionality this state can be
  easily manipulated (e.g., to load a memory-intensive model only once per worker without the need to send it through
  a queue)
- Progress bar support using tqdm_ (``rich`` and notebook widgets are supported)
- Progress dashboard support
- Worker insights that provide insight into your multiprocessing efficiency
- Graceful and user-friendly exception handling
- Timeouts, including for worker init and exit functions
- Automatic task chunking for all available map functions to speed up processing of small task queues (including numpy
  arrays)
- Adjustable maximum number of active tasks to avoid memory problems
- Automatic restarting of workers after a specified number of tasks to reduce the memory footprint
- Nested pools of workers are allowed when setting the ``daemon`` option
- Child processes can be pinned to specific CPUs or a range of CPUs
- Optionally uses dill_ as serialization backend through multiprocess_, enabling parallelization of more exotic
  objects, lambdas, and functions in IPython and Jupyter notebooks
MPIRE is tested on Linux, macOS, and Windows. For Windows and macOS users, there are a few minor known caveats, which
are documented in the Troubleshooting_ chapter.
.. _benchmarks: https://towardsdatascience.com/mpire-for-python-multiprocessing-is-really-easy-d2ae7999a3e9
.. _multiprocess: https://github.com/uqfoundation/multiprocess
.. _dill: https://pypi.org/project/dill/
.. _tqdm: https://tqdm.github.io/
.. _Troubleshooting: https://sybrenjansen.github.io/mpire/troubleshooting.html
Installation
------------
Through pip (PyPI):

.. code-block:: bash

    pip install mpire
MPIRE is also available through conda-forge:

.. code-block:: bash

    conda install -c conda-forge mpire
Getting started
---------------
Suppose you have a time-consuming function that receives some input and returns its results. Simple functions like
these are known as `embarrassingly parallel`_ problems: functions that require little to no effort to turn into
parallel tasks. Parallelizing a simple function like this can be as easy as importing ``multiprocessing`` and using
the ``multiprocessing.Pool`` class:
.. _embarrassingly parallel: https://en.wikipedia.org/wiki/Embarrassingly_parallel

.. code-block:: python

    import time
    from multiprocessing import Pool

    def time_consuming_function(x):
        time.sleep(1)  # Simulate that this function takes long to complete
        return ...

    with Pool(processes=5) as pool:
        results = pool.map(time_consuming_function, range(10))
MPIRE can be used almost as a drop-in replacement for ``multiprocessing``. We use the ``mpire.WorkerPool`` class and
call one of the available ``map`` functions:

.. code-block:: python

    from mpire import WorkerPool

    with WorkerPool(n_jobs=5) as pool:
        results = pool.map(time_consuming_function, range(10))
The differences in code are small: there's no need to learn a completely new multiprocessing syntax if you're used to
vanilla ``multiprocessing``. The additional available functionality, though, is what sets MPIRE apart.
Progress bar
~~~~~~~~~~~~
Suppose we want to know the status of the current task: how many tasks are completed and how long before the work is
done? It's as simple as setting the ``progress_bar`` parameter to ``True``:

.. code-block:: python

    with WorkerPool(n_jobs=5) as pool:
        results = pool.map(time_consuming_function, range(10), progress_bar=True)
And it will output a nicely formatted tqdm_ progress bar.
MPIRE also offers a dashboard, for which you need to install additional dependencies_. See Dashboard_ for more
information.
.. _dependencies: https://sybrenjansen.github.io/mpire/install.html#dashboard
.. _Dashboard: https://sybrenjansen.github.io/mpire/usage/dashboard.html
Shared objects
~~~~~~~~~~~~~~
Note: Copy-on-write shared objects are only available for start method ``fork``. For ``threading`` the objects are
shared as-is. For other start methods the shared objects are copied once for each worker, which can still be better
than once per task.

If you have one or more objects that you want to share between all workers, you can make use of the copy-on-write
``shared_objects`` option of MPIRE. MPIRE passes on these objects only once per worker, without copying or
serialization. Only when you alter the object in the worker function will it be copied for that worker.

.. code-block:: python

    def time_consuming_function(some_object, x):
        time.sleep(1)  # Simulate that this function takes long to complete
        return ...

    def main():
        some_object = ...
        with WorkerPool(n_jobs=5, shared_objects=some_object) as pool:
            results = pool.map(time_consuming_function, range(10), progress_bar=True)
See shared_objects_ for more details.
.. _shared_objects: https://sybrenjansen.github.io/mpire/usage/workerpool/shared_objects.html
Worker initialization
~~~~~~~~~~~~~~~~~~~~~
Workers can be initialized using the ``worker_init`` feature. Together with ``worker_state`` you can load a model, or
set up a database connection, etc.:
.. code-block:: python

    def init(worker_state):
        # Load a big dataset or model and store it in a worker-specific worker_state
        worker_state['dataset'] = ...
        worker_state['model'] = ...

    def task(worker_state, idx):
        # Let the model predict a specific instance of the dataset
        return worker_state['model'].predict(worker_state['dataset'][idx])

    with WorkerPool(n_jobs=5, use_worker_state=True) as pool:
        results = pool.map(task, range(10), worker_init=init)
Similarly, you can use the ``worker_exit`` feature to let MPIRE call a function whenever a worker terminates. You can
even let this exit function return results, which can be obtained later on. See the `worker_init and worker_exit`_
section for more information.
.. _worker_init and worker_exit: https://sybrenjansen.github.io/mpire/usage/map/worker_init_exit.html
Worker insights
~~~~~~~~~~~~~~~
When your multiprocessing setup isn't performing as you want it to and you have no clue what's causing it, there's the
worker insights functionality. This gives you insight into your setup, but it does not profile the function you're
running (there are other libraries for that). Instead, it profiles the worker start-up time, waiting time, and working
time. When worker init and exit functions are provided, it times those as well.

Perhaps you're sending a lot of data over the task queue, which makes the waiting time go up. Whatever the case, you
can enable and grab the insights using the ``enable_insights`` flag and the ``mpire.WorkerPool.get_insights`` function,
respectively:

.. code-block:: python

    with WorkerPool(n_jobs=5, enable_insights=True) as pool:
        results = pool.map(time_consuming_function, range(10))
        insights = pool.get_insights()
See `worker insights`_ for a more detailed example and expected output.
.. _worker insights: https://sybrenjansen.github.io/mpire/usage/workerpool/worker_insights.html
Timeouts
~~~~~~~~
Timeouts can be set separately for the target, ``worker_init``, and ``worker_exit`` functions. When a set timeout is
reached, a ``TimeoutError`` is raised:

.. code-block:: python

    def init():
        ...

    def exit_():
        ...

    # Will raise TimeoutError, provided that the target function takes longer
    # than half a second to complete
    with WorkerPool(n_jobs=5) as pool:
        pool.map(time_consuming_function, range(10), task_timeout=0.5)

    # Will raise TimeoutError, provided that the worker_init function takes longer
    # than 3 seconds to complete or the worker_exit function takes longer than
    # 150.5 seconds to complete
    with WorkerPool(n_jobs=5) as pool:
        pool.map(time_consuming_function, range(10), worker_init=init, worker_exit=exit_,
                 worker_init_timeout=3.0, worker_exit_timeout=150.5)
When using ``threading`` as start method MPIRE won't be able to interrupt certain functions, like ``time.sleep``.
See timeouts_ for more details.
.. _timeouts: https://sybrenjansen.github.io/mpire/usage/map/timeouts.html
Benchmarks
----------
MPIRE has been benchmarked on three tasks: numerical computation, stateful computation, and expensive initialization.
More details on these benchmarks can be found in this `blog post`_. All code for the benchmarks can be found in this
project_.
In short, the main reasons why MPIRE is faster are:
- When ``fork`` is available, we can make use of copy-on-write shared objects, which reduces the need to copy objects
  that need to be shared over child processes
- Workers can hold state over multiple tasks, so you can choose to load a big file or send resources over only once
  per worker
- Automatic task chunking
The following graph shows the average normalized results of all three benchmarks. Results for individual benchmarks
can be found in the `blog post`_. The benchmarks were run on a Linux machine with 20 cores, with hyperthreading
disabled, and 200GB of RAM. For each task, experiments were run with different numbers of processes/workers and
results were averaged over 5 runs.
.. image:: images/benchmarks_averaged.png
    :width: 600px
    :alt: Average normalized benchmark results
.. _blog post: https://towardsdatascience.com/mpire-for-python-multiprocessing-is-really-easy-d2ae7999a3e9
.. _project: https://github.com/sybrenjansen/multiprocessing_benchmarks
Documentation
-------------
See the full documentation at https://sybrenjansen.github.io/mpire/ for information on all the other features of MPIRE.
If you want to build the documentation yourself, please install the documentation dependencies by executing:

.. code-block:: bash

    pip install mpire[docs]

or

.. code-block:: bash

    pip install .[docs]
Documentation can then be built using Python <= 3.9 by executing:

.. code-block:: bash

    python setup.py build_docs
Documentation can also be built from the ``docs`` folder directly. In that case, ``MPIRE`` should be installed and
available in your current working environment. Then execute:

.. code-block:: bash

    make html
in the ``docs`` folder.