# simplempi

- Version: 0.1.5
- Home page: https://github.com/taobrienlbl/simplempi
- Summary: A wrapper around mpi4py that offers simple scattering of iterable objects.
- Author: Travis A. O'Brien
- License: BSD-3-Clause
- Requires Python: not specified
- Requirements: none recorded
- Upload time: 2024-11-11 21:36:19

            [![PyPI version](https://badge.fury.io/py/simplempi.svg)](https://badge.fury.io/py/simplempi)
![GitHub Workflow Status (with event)](https://img.shields.io/github/actions/workflow/status/taobrienlbl/simplempi/main.yml?event=push&label=tests)


A wrapper around mpi4py that offers simple scattering of iterable objects.

This is useful for embarrassingly parallel, SPMD-type tasks that simply need to work on a list of things.

Example usage:

`parfor_test.py`
```python
# import the parfor function; note
# that this will automatically initialize MPI
# also import pprint for parallel-friendly printing
from simplempi.parfor import parfor, pprint

# define a list to loop over
my_list = list(range(10)) 

# define a function that does something with each item in my_list
def func(i):
    return i**2

# loop in parallel over my_list
for i in parfor(my_list):
    result = func(i)
    pprint(f"{i}**2 = {result}")
```

Running this with mpirun on 4 processors shows that the list of 10 numbers gets
scattered as evenly as possible across all 4 processors; it also shows that the order of evaluation in the for loop is not well-defined (which is fine for embarrassingly parallel code like this):

```bash
$ mpirun -n 4 python parfor_test.py 
(rank 1/4):  0**2 = 0
(rank 1/4):  4**2 = 16
(rank 1/4):  8**2 = 64
(rank 3/4):  2**2 = 4
(rank 3/4):  6**2 = 36
(rank 4/4):  3**2 = 9
(rank 4/4):  7**2 = 49
(rank 2/4):  1**2 = 1
(rank 2/4):  5**2 = 25
(rank 2/4):  9**2 = 81
```
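
The round-robin pattern visible above (rank 1 handles items 0, 4, and 8; rank 2 handles 1, 5, and 9; and so on) can be reproduced with plain mpi4py. The sketch below is only an illustration of that distribution pattern, not simplempi's actual implementation:

```python
# Minimal mpi4py sketch of the round-robin scattering suggested by the output
# above; illustrative only, not simplempi's source.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # 0-based rank of this process
size = comm.Get_size()   # total number of processes

my_list = list(range(10))

# Round-robin slice: with 4 processes, rank 0 gets 0, 4, 8; rank 1 gets 1, 5, 9; ...
my_items = my_list[rank::size]

for i in my_items:
    result = i ** 2
    # Plain print; simplempi's pprint adds the "(rank r/N):" prefix automatically.
    print(f"(rank {rank + 1}/{size}): {i}**2 = {result}")
```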

Alternatively, one can use the object-oriented interface:

`simpleMPI_test.py`
```python
import simplempi

# Initialize MPI
smpi = simplempi.simpleMPI()

# Make an iterable of things (20 numbers in this case)
testList = range(20)

# Scatter the list to all processors (myList now differs among processes)
myList = smpi.scatterList(testList)

# Print the list contents (along with the rank of the printing process)
smpi.pprint(myList)
```

Running this with mpirun on 6 processors shows that the list of 20 numbers gets
scattered as evenly as possible across all 6 processors:

```bash
$ mpirun -n 6 python simpleMPI_test.py 
(rank 1/6): [0, 6, 12, 18]
(rank 2/6): [1, 7, 13, 19]
(rank 4/6): [3, 9, 15]
(rank 6/6): [5, 11, 17]
(rank 5/6): [4, 10, 16]
(rank 3/6): [2, 8, 14]
```
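
This README does not show a companion gather step, but because simplempi is built on mpi4py, per-rank results can be collected back onto one process with mpi4py's standard `comm.gather`. The sketch below is a hedged example of that combination (the squaring and the `localResults` name are just placeholders):

```python
# Sketch: scatter with simplempi, do per-rank work, then gather the results
# on rank 0 using plain mpi4py (this README documents no gather helper in
# simplempi itself, so mpi4py is used directly for that step).
from mpi4py import MPI
import simplempi

smpi = simplempi.simpleMPI()
comm = MPI.COMM_WORLD

myList = smpi.scatterList(range(20))
localResults = [i ** 2 for i in myList]  # work on this rank's portion

# gather returns a list of per-rank result lists on the root, None elsewhere
allResults = comm.gather(localResults, root=0)
if comm.Get_rank() == 0:
    flat = sorted(x for chunk in allResults for x in chunk)
    print(flat)  # all 20 squared values, collected on rank 0
```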

# Install
`python3 -m pip install simplempi`
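
To sanity-check that the package, mpi4py, and the MPI launcher all work together, a quick test is to run the `pprint` helper from the example above under `mpirun`. Note that the PyPI metadata records no dependencies, so mpi4py (and an MPI library such as MPICH or Open MPI) may need to be installed separately:

```bash
# assumes mpi4py and an MPI implementation are already installed
$ mpirun -n 2 python3 -c "from simplempi.parfor import pprint; pprint('hello')"
```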


            
