# dlp_mpi - Data-level parallelism with mpi for python
[![PyPI](https://img.shields.io/pypi/v/dlp_mpi.svg)](https://pypi.org/project/dlp-mpi)
[![PyPI](https://img.shields.io/pypi/dm/dlp_mpi)](https://pypi.org/project/dlp-mpi)
[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/fgnt/dlp_mpi/master/LICENSE)
<table>
<tr>
<th>
Run a serial algorithm on multiple examples
</th>
<th>
Use dlp_mpi to run the loop body in parallel
</th>
<th>
Use dlp_mpi to run a function in parallel
</th>
</tr>
<tr>
<td>
```python
# python script.py
import time
examples = list(range(10))
results = []
for example in examples:
# Some heavy workload:
# CPU or IO
time.sleep(0.2)
result = example
# Remember the results
results.append(result)
# Summarize your experiment
print(sum(results))
```
</td>
<td>
```python
# mpiexec -np 8 python script.py
import time
import dlp_mpi
examples = list(range(10))
results = []
for example in dlp_mpi.split_managed(
examples):
# Some heavy workload:
# CPU or IO
time.sleep(0.2)
result = example
# Remember the results
results.append(result)
results = dlp_mpi.gather(results)
if dlp_mpi.IS_MASTER:
results = [
result
for worker_results in results
for result in worker_results
]
# Summarize your experiment
print(results)
```
</td>
<td>
```python
# mpiexec -np 8 python script.py
import time
import dlp_mpi
examples = list(range(10))
results = []
def work_load(example):
# Some heavy workload:
# CPU or IO
time.sleep(0.2)
result = example
return result
for result in dlp_mpi.map_unordered(
work_load, examples):
# Remember the results
results.append(result)
if dlp_mpi.IS_MASTER:
# Summarize your experiment
print(results)
```
</td>
</tr>
</table>
This package uses `mpi4py` to provide utilities to parallelize algorithms that are applied to multiple examples.
The core idea is: Start `N` processes and each process works on a subset of all examples.
To start the processes, `mpiexec` can be used. Most HPC systems support MPI to scatter the workload across multiple hosts; for the exact command, consult your HPC system's documentation on how to launch MPI jobs.
Since each process should operate on different examples, MPI provides the variables `RANK` and `SIZE`, where `SIZE` is the number of workers and `RANK` is a unique identifier from `0` to `SIZE - 1`.
The easiest way to improve the execution time is to process `examples[RANK::SIZE]` on each worker.
This is a round robin load balancing (`dlp_mpi.split_round_robin`).
A more advanced load balancing is `dlp_mpi.split_managed`, where one process manages the load and assigns a new task to a worker once it finishes its current task.
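The round-robin slice can be illustrated without any MPI at all. This is a minimal pure-Python sketch in which the worker ranks are simulated (the worker count of 4 is made up for illustration):

```python
# Simulate the round-robin split: worker `rank` takes every `size`-th example.
def split_round_robin(examples, rank, size):
    """Return the subset of examples that worker `rank` processes."""
    return examples[rank::size]

examples = list(range(10))
size = 4  # pretend we started 4 processes with `mpiexec -np 4`

# Every example is handled by exactly one worker, with zero communication.
subsets = [split_round_robin(examples, rank, size) for rank in range(size)]
assert subsets[0] == [0, 4, 8]
assert sorted(x for subset in subsets for x in subset) == examples
```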
When, at the end of a program, all results should be summarized or written to a single file, communication between the processes is necessary.
For this purpose `dlp_mpi.gather` (`mpi4py.MPI.COMM_WORLD.gather`) can be used. This function sends all data to the root process (here, `pickle` is used for serialization).
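On the master process, `gather` returns one result list per worker, so the results usually need to be flattened afterwards. A pure-Python sketch of that flattening step (the per-worker lists below are made-up illustration data):

```python
# What gather returns on the master: one list per worker (illustrative data).
gathered = [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]

# Flatten into a single result list, as in the table example above.
results = [r for worker_results in gathered for r in worker_results]
assert sorted(results) == list(range(10))
```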
As an alternative to splitting the data, this package also provides a `map` style parallelization (see example in the beginning):
The function `dlp_mpi.map_unordered` calls `work_load` in parallel and executes the `for` body in serial.
The communication between the processes consists only of the `result` and the index needed to get the `i`th example from `examples`, i.e., the examples themselves aren't transferred between the processes.
# Available utilities and functions
Note: `dlp_mpi` falls back to dummy implementations when `mpi4py` is not installed and the environment indicates that no MPI is used (useful for running on a laptop).
- `dlp_mpi.RANK` or `mpi4py.MPI.COMM_WORLD.rank`: The rank of the process. To avoid programming errors, using `dlp_mpi.RANK` in a boolean context (`if dlp_mpi.RANK: ...`) will fail.
- `dlp_mpi.SIZE` or `mpi4py.MPI.COMM_WORLD.size`: The number of processes.
- `dlp_mpi.IS_MASTER`: A flag that indicates whether the process is the default master/controller/root.
- `dlp_mpi.bcast(...)` or `mpi4py.MPI.COMM_WORLD.bcast(...)`: Broadcast the data from the root to all workers.
- `dlp_mpi.gather(...)` or `mpi4py.MPI.COMM_WORLD.gather(...)`: Send data from all workers to the root.
- `dlp_mpi.barrier()` or `mpi4py.MPI.COMM_WORLD.Barrier()`: Sync all processes.
The advanced functions provided by this package are:
- `split_round_robin(examples)`: Zero-communication split of the data. The default is identical to `examples[dlp_mpi.RANK::dlp_mpi.SIZE]`.
- `split_managed(examples)`: The master process manages the load balance, while the others do the work. Note: The master process does not distribute the examples; it is assumed that the examples have the same order on each worker.
- `map_unordered(work_load, examples)`: The master process manages the load balance, while the others execute the `work_load` function. The result is sent back to the master process.
# Runtime
Without this package your code runs in serial.
The following block diagrams illustrate the execution time of the code snippets above when run with this package.
Regarding the colors: `examples = ...` is the setup code, so that code and the block representing its execution time are shown in blue.
![(Serial Worker)](doc/tikz_split_managed_serial.svg)
The easiest way to parallelize the workload (dark orange) is a round-robin assignment of the load:
`for example in dlp_mpi.split_round_robin(examples)`.
This function call is equivalent to `for example in examples[dlp_mpi.RANK::dlp_mpi.SIZE]`.
Thus, there is zero communication between the workers.
Communication is only necessary when some final work has to be done on the results of all examples (e.g. calculating average metrics).
This is done with the `gather` function.
This function returns the worker results as a list on the master process; the worker processes get a `None` return value.
Depending on the workload, the round-robin assignment can be suboptimal.
See the example block diagram: Worker 1 got tasks that are relatively long, so it needed much more time than the others.
![(Round Robin)](doc/tikz_split_managed_rr.svg)
To overcome the limitations of the round-robin assignment, this package provides a manager that assigns the work to the workers.
This improves worker utilization: once a worker finishes an example, it requests a new one from the manager and gets one assigned.
Note: Only the index of the example to be processed is communicated, not the example itself.
![(Managed Split)](doc/tikz_split_managed_split.svg)
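The benefit of managed assignment over round robin can be sketched with a small pure-Python simulation (no MPI; the task durations and worker count are made up for illustration):

```python
# Contrast round-robin with managed assignment for uneven task durations.
import heapq

durations = [5, 1, 1, 1, 5, 1, 1, 1]  # per-task runtimes (made up)
size = 2                              # simulated worker count

# Round robin: worker r gets durations[r::size]; makespan is the slowest worker.
rr_makespan = max(sum(durations[r::size]) for r in range(size))

# Managed: each worker that becomes free immediately pulls the next task.
finish = [0.0] * size      # next-free time per worker, kept as a min-heap
heapq.heapify(finish)
for d in durations:
    t = heapq.heappop(finish)       # worker that becomes free first
    heapq.heappush(finish, t + d)   # it processes the next task
managed_makespan = max(finish)

assert rr_makespan == 12       # one round-robin worker got all long tasks
assert managed_makespan == 8   # managed assignment balances the load
```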
An alternative to splitting the iterator is to use a `map` function.
The function is then executed on a worker and the return value is sent back to the manager.
Be careful that the loop body is fast enough; otherwise it can become a bottleneck.
Use the loop body only for bookkeeping, not for the actual workload.
When a worker sends a result to the manager, the manager sends back a new task and enters the `for` loop body.
While the manager is in the loop body, it cannot react to requests from other workers, see the block diagram:
![(Managed Map)](doc/tikz_split_managed_map.svg)
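The recommended division of labor can be sketched with the built-in `map` as a serial stand-in (no MPI): the work function carries the heavy computation, while the loop body only does cheap bookkeeping.

```python
def work_load(example):
    # Heavy computation goes here; under MPI this runs on a worker.
    return example * example

results = []
for result in map(work_load, range(5)):  # dlp_mpi.map_unordered(...) under MPI
    results.append(result)  # cheap bookkeeping only: the manager runs this

assert sorted(results) == [0, 1, 4, 9, 16]
```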
# Installation
You can install this package from pypi:
```bash
pip install dlp_mpi
```
To check if the installation was successful, try the following command:
```bash
$ mpiexec -np 4 python -c 'import dlp_mpi; print(dlp_mpi.RANK)'
3
0
1
2
```
The command should print the numbers 0, 1, 2 and 3 in random order.
If the command prints four zeros instead, something went wrong: either no MPI is installed or the installation is broken.
On Debian-based Linux you can install MPI with `sudo apt install libopenmpi-dev`.
If you do not have the rights to install packages with `apt`, you can also install `mpi4py` with `conda`.
The `pip install` above installs `mpi4py` from PyPI.
Be careful: an installation from `conda` may conflict with the locally installed `mpi`, which can cause trouble, especially in High Performance Computing (HPC) environments.
# FAQ
**Q**: Can I run a script that uses `dlp_mpi` on my laptop, which has no working MPI (e.g. a broken installation)?

**A**: Yes, if you uninstall `mpi4py` (i.e. `pip uninstall mpi4py`) after installing this package. Code written with `dlp_mpi` works whether MPI is available or missing.