# zwembad

- **Version:** 1.2.2
- **Home page:** https://github.com/Helveg/zwembad
- **Summary:** Parallel MPIPoolExecutor implementing the concurrent.futures interface
- **Author:** Robin De Schepper
- **Keywords:** mpi, pool, mpipool, zwembad
- **Upload time:** 2021-03-09 17:26:36
[![Documentation Status](https://readthedocs.org/projects/zwembad/badge/?version=latest)](https://zwembad.readthedocs.io/en/latest/?badge=latest)

# About

`zwembad` offers an `MPIPoolExecutor` class, an implementation of the standard
library's `concurrent.futures.Executor` interface.

# Example usage

```python
from zwembad import MPIPoolExecutor
from mpi4py import MPI

def menial_task(x):
  return x ** MPI.COMM_WORLD.Get_rank()

with MPIPoolExecutor() as pool:
  pool.workers_exit()
  print("Only the master executes this code.")

  # Submit some tasks to the pool
  fs = [pool.submit(menial_task, i) for i in range(100)]

  # Wait for all of the results and print them
  print([f.result() for f in fs])

  # A shorter notation to dispatch the same function with different args
  # and to wait for all results is the `.map` method:
  results = pool.map(menial_task, range(100))

print("All processes join again here.")
```
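
Because `MPIPoolExecutor` implements the standard `Executor` interface, the
futures it returns should also work with the helpers from `concurrent.futures`.
A minimal sketch, assuming the futures follow standard `Future` semantics (the
README does not state this outright):

```python
# Hedged sketch: consume results as they complete, using the
# standard-library helper on the pool's futures. Assumes standard
# Future semantics, which the Executor interface implies.
from concurrent.futures import as_completed

from zwembad import MPIPoolExecutor

def square(x):
    return x * x

with MPIPoolExecutor() as pool:
    pool.workers_exit()
    futures = [pool.submit(square, i) for i in range(10)]
    for future in as_completed(futures):
        print(future.result())
```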

In the first example you'll see that the results have been raised to different
powers (1, 2, ..., n), depending on the rank of the worker each task was sent
to. It's also important to keep your workers from running the master's code,
which is what the `pool.workers_exit()` call does. As a failsafe, any attribute
access on the `pool` object from a worker will make that worker exit anyway.
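
Conceptually, `workers_exit()` splits the program flow by MPI rank. The sketch
below shows the underlying idea with plain mpi4py; it is an illustration of the
pattern, not zwembad's actual implementation:

```python
# Illustrative sketch only, using plain mpi4py: an explicit rank
# check approximating what `pool.workers_exit()` does for you.
# This is NOT zwembad's actual implementation.
from mpi4py import MPI

comm = MPI.COMM_WORLD

if comm.Get_rank() != 0:
    # Worker ranks would serve tasks in a loop here and exit
    # once the master shuts the pool down.
    pass
else:
    # Only rank 0, the master, runs the code past this point.
    print("Master-only code runs here.")
```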

The `MPIPoolExecutor` of zwembad is designed to function without `MPI.Spawn()`,
for cases where that approach isn't feasible, such as supercomputers where
`MPI.Spawn` is deliberately not implemented (for example CrayMPI).

Therefore, the pool can only use MPI processes that are spawned when the MPI
world is initialised, and the script must be launched from the command line
using an MPI helper such as `mpirun`, `mpiexec`, or SLURM's `srun`:

```
$ mpirun -n 4 python example.py
```
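
Under SLURM, the equivalent launch would look something like this (a sketch;
the exact `srun` options depend on your cluster and allocation):

```
$ srun -n 4 python example.py
```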



            
