Name | mpi4pytools |
Version | 0.2.0 |
home_page | None |
Summary | Simple decorators and utilities for MPI parallel computing |
upload_time | 2025-08-13 07:11:29 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.8 |
license | MIT |
keywords | mpi, parallel, computing, decorators, hpc |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# mpitools




> **⚠️ Development Notice**: This package is in active development. The API may change significantly between versions until v1.0.0. Use in production environments is not recommended.

A Python package providing simple decorators and utilities for MPI (Message Passing Interface) parallel computing. Built on top of mpi4py, mpitools makes it easy to write parallel code with minimal boilerplate.
## Features
- **Work distribution decorators**: Execute functions on specific ranks or groups of processes
- **Communication decorators**: Collective communications and reduce operations made simple
- **Error handling**: Graceful error handling across all MPI processes
- **Task queue system**: Distributed task processing with the queue submodule
## Installation
```bash
pip install mpi4pytools
```
**Requirements:**
- Python 3.8+
- mpi4py
- An MPI implementation (OpenMPI, MPICH, etc.)
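Note that pip only installs the Python side; the MPI implementation itself must already be present on the system. A quick sanity check that mpi4py can talk to your MPI installation (assuming `mpirun` is on your PATH):

```bash
# Should print one line per process, e.g. "rank 0 of 2" and "rank 1 of 2"
mpirun -n 2 python -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print(f'rank {c.Get_rank()} of {c.Get_size()}')"
```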
## Quick Start
### Basic Usage
```python
from mpitools import setup_mpi, broadcast_from_main, gather_to_main, eval_on_main

# Initialize MPI environment
comm, rank, size = setup_mpi()

# Execute only on rank 0, broadcast result to all processes
@broadcast_from_main()
def load_config():
    return {"num_iterations": 1000, "tolerance": 1e-6}

# Execute on all processes, gather results to rank 0
@gather_to_main()
def compute_partial_sum():
    return sum(range(rank * 100, (rank + 1) * 100))

# Execute only on rank 0
@eval_on_main()
def save_results(data):
    with open("results.txt", "w") as f:
        f.write(str(data))

# Usage
config = load_config()                # Same config on all processes
partial_sums = compute_partial_sum()  # List of sums on rank 0, None elsewhere
save_results(partial_sums)            # Only saves on rank 0
```
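If you already know mpi4py, the decorators above correspond to the usual collective calls. As a rough sketch (for intuition only, not the package's actual internals), the broadcast/gather pattern in plain mpi4py looks like:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Roughly what @broadcast_from_main does: run on rank 0, bcast the result
config = comm.bcast(
    {"num_iterations": 1000, "tolerance": 1e-6} if rank == 0 else None,
    root=0,
)

# Roughly what @gather_to_main does: every rank computes, rank 0 collects
partial = sum(range(rank * 100, (rank + 1) * 100))
partial_sums = comm.gather(partial, root=0)  # list on rank 0, None elsewhere
```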
### Task Queue System
```python
from mpitools import setup_mpi
from mpitools.queue import MPIQueue, Task

# Initialize MPI environment
comm, rank, size = setup_mpi()

# Define a task class
class MyTask(Task):
    def __init__(self, task_id: str, data: int):
        super().__init__(task_id)
        self.data = data

    def execute(self):
        # Perform some computation
        result = self.data * 2  # Example computation
        return result

# Create a distributed task queue
queue = MPIQueue()

# Add tasks to the queue
if rank == 0:
    tasks = [MyTask(f"task_{i}", i) for i in range(10)]
    queue.add_tasks(tasks)

# Run the task queue
results = queue.run()
```
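The README doesn't document `MPIQueue`'s internals, but MPI task queues generally follow a manager/worker pattern: rank 0 hands out tasks with point-to-point messages and refills each worker as it reports back. A minimal sketch of that general pattern in plain mpi4py (our own illustration; the tags and structure here are assumptions, not `MPIQueue`'s actual design):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
WORK, DONE, STOP = 1, 2, 3  # hypothetical message tags

if rank == 0:  # manager; run with at least 2 processes, e.g. mpirun -n 4
    tasks, results = list(range(10)), []
    status = MPI.Status()
    active = size - 1
    # Hand an initial task to each worker (or stop it if none remain)
    for w in range(1, size):
        if tasks:
            comm.send(tasks.pop(), dest=w, tag=WORK)
        else:
            comm.send(None, dest=w, tag=STOP)
            active -= 1
    # Collect results; refill whichever worker reports back
    while active:
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=DONE, status=status))
        w = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=w, tag=WORK)
        else:
            comm.send(None, dest=w, tag=STOP)
            active -= 1
    print(results)
else:  # worker: receive, execute, report, repeat
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send(task * 2, dest=0, tag=DONE)  # the "execute" step
```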
### Error Handling
```python
from mpitools import abort_on_error

@abort_on_error()  # Aborts all processes if any process encounters an error
def risky_computation():
    # If this fails on any process, all processes will terminate
    result = 1 / some_calculation()
    return result
```
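Internally, this style of decorator typically catches the exception and calls MPI's `Abort`, so the remaining ranks don't hang forever in a collective call that can never complete. A minimal sketch of the general pattern (our own illustration, not necessarily how mpitools implements it):

```python
import functools
import traceback
from mpi4py import MPI

def abort_on_error_sketch(func):
    """Illustrative stand-in for @abort_on_error()."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            traceback.print_exc()
            # Abort tears down *all* ranks, so survivors don't deadlock
            # waiting on a collective that the failed rank never reaches.
            MPI.COMM_WORLD.Abort(1)
    return wrapper
```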
## Core Decorators
### Error Handling
- `@abort_on_error()` - Abort all processes if any process raises an exception
### Work Distribution
- `@eval_on_main()` - Execute only on rank 0
- `@eval_on_workers()` - Execute only on worker ranks (1, 2, ...)
- `@eval_on_single(rank)` - Execute only on specified rank
- `@eval_on_select([ranks])` - Execute only on specified ranks
### Collective Communication
- `@broadcast_from_main()` - Execute on rank 0, broadcast result to all processes
- `@broadcast_from_process(rank)` - Execute on specified rank, broadcast to all processes
- `@scatter_from_main()` - Execute on rank 0, scatter data to all processes
- `@scatter_from_process(rank)` - Execute on specified rank, scatter data to all processes
- `@gather_to_main()` - Execute on all processes, gather results to rank 0
- `@gather_to_process(rank)` - Execute on all processes, gather results to specified rank
- `@gather_to_all()` - Execute on all processes, gather results to all processes
- `@all_to_all()` - Execute on all processes, exchange data between all processes
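For example, the scatter decorators let rank 0 produce one piece of work per process. A short sketch, assuming `scatter_from_main` follows standard MPI scatter semantics (the decorated function returns one list element per rank) and is importable like the Quick Start decorators:

```python
from mpitools import setup_mpi, scatter_from_main

comm, rank, size = setup_mpi()

@scatter_from_main()
def make_chunks():
    # Runs on rank 0 only; returns one item per process to scatter
    return [list(range(i * 10, (i + 1) * 10)) for i in range(size)]

my_chunk = make_chunks()  # each rank receives its own 10-element chunk
```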
### Reduction Operations
- `@reduce_to_main(op='sum')` - Execute on all processes, reduce to rank 0
- `@reduce_to_process(rank, op='sum')` - Execute on all processes, reduce to specified rank
- `@reduce_to_all(op='sum')` - Execute on all processes, reduce to all processes
Supported reduction operations: `'sum'`, `'prod'`, `'max'`, `'min'`, `'land'`, `'band'`, `'lor'`, `'bor'`, `'lxor'`, `'bxor'`, `'maxloc'`, `'minloc'`
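Put together, a global reduction becomes a one-liner. A short sketch, assuming `reduce_to_main` is importable from the top-level package like the Quick Start decorators:

```python
from mpitools import setup_mpi, reduce_to_main

comm, rank, size = setup_mpi()

@reduce_to_main(op='sum')
def local_value():
    return rank + 1  # each rank contributes its own value

total = local_value()  # e.g. 1 + 2 + 3 + 4 = 10 on rank 0 with mpirun -n 4
```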
### Decorator Variants
- `@buffered_*` - Buffered versions of collective communication and reduction operations for improved performance.
- `@variable_*` - Variable-sized versions of buffered scatter, gather and all_to_all communications for handling dynamic data sizes.
- Currently, only numpy arrays are supported.
## Running MPI Programs
```bash
# Run with 4 processes
mpirun -n 4 python your_script.py
# Run with specific hosts
mpirun -n 4 -H host1,host2 python your_script.py
```
## Documentation
- [Full API Reference](API_DOCS.md) - Complete documentation of all functions and classes
- [MPI4PY Documentation](https://mpi4py.readthedocs.io/) - Underlying MPI library
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Built on [mpi4py](https://github.com/mpi4py/mpi4py)
Raw data
{
    "_id": null,
    "home_page": null,
    "name": "mpi4pytools",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "mpi, parallel, computing, decorators, hpc",
    "author": null,
    "author_email": "\"Erik A. Bensen\" <erik-a-bensen@users.noreply.github.com>",
    "download_url": "https://files.pythonhosted.org/packages/ed/1b/0131db1265edee766bb345813fa6e8555278df844f10d0979c9617c5e209/mpi4pytools-0.2.0.tar.gz",
    "platform": null,
"description": "# mpitools\n\n\n\n\n\n\n> **\u26a0\ufe0f Development Notice**: This package is in active development. The API may change significantly between versions until v1.0.0. Use in production environments is not recommended.\n\n\nA Python package providing simple decorators and utilities for MPI (Message Passing Interface) parallel computing. Built on top of mpi4py, mpitools makes it easy to write parallel code with minimal boilerplate.\n\n## Features\n\n- **Work distribution decorators**: Execute functions on specific ranks or groups of processes\n- **Communication decorators**: Collective communications and reduce operations made simple\n- **Error handling**: Graceful error handling across all MPI processes\n- **Task queue system**: Distributed task processing with the queue submodule\n\n## Installation\n\n```bash\npip install mpi4pytools\n```\n\n**Requirements:**\n- Python 3.7+\n- mpi4py\n- An MPI implementation (OpenMPI, MPICH, etc.)\n\n## Quick Start\n\n### Basic Usage\n\n```python\nfrom mpitools import setup_mpi, broadcast_from_main, gather_to_main, eval_on_main\n\n# Initialize MPI environment\ncomm, rank, size = setup_mpi()\n\n# Execute only on rank 0, broadcast result to all processes\n@broadcast_from_main()\ndef load_config():\n return {\"num_iterations\": 1000, \"tolerance\": 1e-6}\n\n# Execute on all processes, gather results to rank 0\n@gather_to_main()\ndef compute_partial_sum():\n return sum(range(rank * 100, (rank + 1) * 100))\n\n# Execute only on rank 0\n@eval_on_main()\ndef save_results(data):\n with open(\"results.txt\", \"w\") as f:\n f.write(str(data))\n\n# Usage\nconfig = load_config() # Same config on all processes\npartial_sums = compute_partial_sum() # List of sums on rank 0, None elsewhere\nsave_results(partial_sums) # Only saves on rank 0\n```\n\n### Task Queue System\n\n```python\nfrom mpitools import setup_mpi\nfrom mpitools.queue import MPIQueue, Task\n\n# Initialize MPI environment\ncomm, rank, size = setup_mpi()\n\n# Define a task class\nclass MyTask(Task):\n def __init__(self, task_id: str, data: int):\n super().__init__(task_id)\n self.data = data\n\n def execute(self):\n # Perform some computation\n result = self.data * 2 # Example computation\n return result\n\n# Create a distributed task queue\nqueue = MPIQueue()\n\n# Add tasks to the queue\nif rank == 0:\n tasks = [MyTask(f\"task_{i}\", i) for i in range(10)]\n queue.add_tasks(tasks)\n\n# Run the task queue\nresults = queue.run()\n\n```\n\n### Error Handling\n\n```python\nfrom mpitools import abort_on_error\n\n@abort_on_error() # Aborts all processes if any process encounters an error\ndef risky_computation():\n # If this fails on any process, all processes will terminate\n result = 1 / some_calculation()\n return result\n```\n\n## Core Decorators\n\n### Error Handling\n- `@abort_on_error()` - Abort all processes if any process raises an exception\n\n### Work Distribution\n- `@eval_on_main()` - Execute only on rank 0\n- `@eval_on_workers()` - Execute only on worker ranks (1, 2, ...) 
\n- `@eval_on_single(rank)` - Execute only on specified rank\n- `@eval_on_select([ranks])` - Execute only on specified ranks\n\n### Collective Communication\n- `@broadcast_from_main()` - Execute on rank 0, broadcast result to all processes\n- `@broadcast_from_process(rank)` - Execute on specified rank, broadcast to all processes\n- `@scatter_from_main()` - Execute on rank 0, scatter data to all processes\n- `@scatter_from_process(rank)` - Execute on specified rank, scatter data to all processes\n- `@gather_to_main()` - Execute on all processes, gather results to rank 0\n- `@gather_to_process(rank)` - Execute on all processes, gather results to specified rank\n- `@gather_to_all()` - Execute on all processes, gather results to all processes\n- `@all_to_all()` - Execute on all processes, exchange data between all processes\n\n### Reduction Operations\n- `@reduce_to_main(op='sum')` - Execute on all processes, reduce to rank 0\n- `@reduce_to_process(rank, op='sum')` - Execute on all processes, reduce to specified rank\n- `@reduce_to_all(op='sum')` - Execute on all processes, reduce to all processes\n\nSupported reduction operations: `'sum'`, `'prod'`, `'max'`, `'min'`, `'land'`, `'band'`, `'lor'`, `'bor'`, `'lxor'`, `'bxor'`, `'maxloc'`, `'minloc'`\n\n### Decorator Variants\n- `@buffered_*` - Buffered versions of collective communication and reduction operations for improved performance. \n- `@variable_*` - Variable-sized versions of buffered scatter, gather and all_to_all communications for handling dynamic data sizes.\n- Currently, only numpy arrays are supported.\n\n## Running MPI Programs\n\n```bash\n# Run with 4 processes\nmpirun -n 4 python your_script.py\n\n# Run with specific hosts\nmpirun -n 4 -H host1,host2 python your_script.py\n```\n\n## Documentation\n\n- [Full API Reference](API_DOCS.md) - Complete documentation of all functions and classes\n- [MPI4PY Documentation](https://mpi4py.readthedocs.io/) - Underlying MPI library\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.\n\n## License\n\nThis project is licensed under the MIT License - see the LICENSE file for details.\n\n## Acknowledgments\n\n- Built on [mpi4py](https://github.com/mpi4py/mpi4py)\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Simple decorators and utilities for MPI parallel computing",
"version": "0.2.0",
"project_urls": {
"Documentation": "https://github.com/erik-a-bensen/mpitools/blob/main/API_DOCS.md",
"Homepage": "https://github.com/erik-a-bensen/mpitools",
"Issues": "https://github.com/erik-a-bensen/mpitools/issues",
"Repository": "https://github.com/erik-a-bensen/mpitools"
},
"split_keywords": [
"mpi",
" parallel",
" computing",
" decorators",
" hpc"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "e6dd04c217739f0518f63841b214ebcac92534cda7c81b0363f954297a711455",
"md5": "b19f16a739b518f2b92d4c364bebf24d",
"sha256": "88e83166cb16d7dd36fa4efe28d381d6060fceabad7e6407d348f3a074abb8ab"
},
"downloads": -1,
"filename": "mpi4pytools-0.2.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "b19f16a739b518f2b92d4c364bebf24d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 18778,
"upload_time": "2025-08-13T07:11:28",
"upload_time_iso_8601": "2025-08-13T07:11:28.231880Z",
"url": "https://files.pythonhosted.org/packages/e6/dd/04c217739f0518f63841b214ebcac92534cda7c81b0363f954297a711455/mpi4pytools-0.2.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "ed1b0131db1265edee766bb345813fa6e8555278df844f10d0979c9617c5e209",
"md5": "33f1eeae31a95a92b374277f83e32041",
"sha256": "0e85beaf5f0478ae4a5e7b2717765af039a8f1a63050cbbb144fdc52ff8989f8"
},
"downloads": -1,
"filename": "mpi4pytools-0.2.0.tar.gz",
"has_sig": false,
"md5_digest": "33f1eeae31a95a92b374277f83e32041",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 14104,
"upload_time": "2025-08-13T07:11:29",
"upload_time_iso_8601": "2025-08-13T07:11:29.399027Z",
"url": "https://files.pythonhosted.org/packages/ed/1b/0131db1265edee766bb345813fa6e8555278df844f10d0979c9617c5e209/mpi4pytools-0.2.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-13 07:11:29",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "erik-a-bensen",
"github_project": "mpitools",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "mpi4pytools"
}