numba-mpi

Name: numba-mpi
Version: 1.1.2
Summary: Numba @jittable MPI wrappers tested on Linux, macOS and Windows
Home page: https://github.com/numba-mpi/numba-mpi
Author: https://github.com/numba-mpi/numba-mpi/graphs/contributors
Requires Python: >=3.8
License: GPL v3
Upload time: 2024-12-12 12:27:37

# <img src="https://raw.githubusercontent.com/numba-mpi/numba-mpi/main/.github/numba_mpi_logo.png" width=128 height=142 alt="numba-mpi logo"> numba-mpi

[![Python 3](https://img.shields.io/static/v1?label=Python&logo=Python&color=3776AB&message=3)](https://www.python.org/)
[![LLVM](https://img.shields.io/static/v1?label=LLVM&logo=LLVM&color=gold&message=Numba)](https://numba.pydata.org)
[![Linux OK](https://img.shields.io/static/v1?label=Linux&logo=Linux&color=yellow&message=%E2%9C%93)](https://en.wikipedia.org/wiki/Linux)
[![macOS OK](https://img.shields.io/static/v1?label=macOS&logo=Apple&color=silver&message=%E2%9C%93)](https://en.wikipedia.org/wiki/macOS)
[![Windows OK](https://img.shields.io/static/v1?label=Windows&logo=Windows&color=white&message=%E2%9C%93)](https://en.wikipedia.org/wiki/Windows)
[![Github Actions Status](https://github.com/numba-mpi/numba-mpi/workflows/tests+pypi/badge.svg?branch=main)](https://github.com/numba-mpi/numba-mpi/actions/workflows/tests+pypi.yml)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/numba-mpi/numba-mpi/graphs/commit-activity)
[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.html)
[![PyPI version](https://badge.fury.io/py/numba-mpi.svg)](https://pypi.org/project/numba-mpi)
[![Anaconda-Server Badge](https://anaconda.org/conda-forge/numba-mpi/badges/version.svg)](https://anaconda.org/conda-forge/numba-mpi)
[![AUR package](https://repology.org/badge/version-for-repo/aur/python:numba-mpi.svg)](https://aur.archlinux.org/packages/python-numba-mpi)
[![DOI](https://zenodo.org/badge/316911228.svg)](https://zenodo.org/badge/latestdoi/316911228)

### Overview
numba-mpi provides Python wrappers to the C MPI API callable from within [Numba JIT-compiled code](https://numba.readthedocs.io/en/stable/user/jit.html) (@jit mode). For an outline of the project, rationale, architecture, and features, refer to: [numba-mpi arXiv e-print](https://doi.org/10.48550/arXiv.2407.13712) (please cite if numba-mpi is used in your research).

Support is provided for a subset of MPI routines covering: `size`/`rank`, `send`/`recv`, `allreduce`, `reduce`, `bcast`, `scatter`/`gather` & `allgather`, `barrier`, `wtime`,
and basic asynchronous communication with `isend`/`irecv` (contiguous arrays only); requests are handled with `wait`/`waitall`/`waitany` and `test`/`testall`/`testany`.

The API uses NumPy arrays and supports both numeric and character datatypes (e.g., in `bcast`).
Auto-generated docstring-based API docs are published on the web: https://numba-mpi.github.io/numba-mpi
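
For instance, a buffer prepared on the root rank can be broadcast to all other ranks from inside a JIT-compiled function. The snippet below is a minimal sketch assuming a `bcast(data, root)` call signature and an illustrative `share_parameters()` helper; see the API docs linked above for the authoritative interface:

```python
import numba, numba_mpi, numpy as np

@numba.jit
def share_parameters():
    # every rank allocates the buffer; only the root rank fills it
    params = np.empty(3, dtype=np.float64)
    if numba_mpi.rank() == 0:
        params[0], params[1], params[2] = 0.1, 0.2, 0.3
    # broadcast from rank 0 to all ranks (bcast(data, root) signature assumed)
    numba_mpi.bcast(params, 0)
    return params

print(numba_mpi.rank(), share_parameters())
```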

Packages can be obtained from 
  [PyPI](https://pypi.org/project/numba-mpi), 
  [Conda Forge](https://anaconda.org/conda-forge/numba-mpi), 
  [Arch Linux](https://aur.archlinux.org/packages/python-numba-mpi)
  or by invoking `pip install git+https://github.com/numba-mpi/numba-mpi.git`.

numba-mpi is a pure-Python package.
The codebase includes a test suite run via GitHub Actions workflows ([thanks to mpi4py's setup-mpi](https://github.com/mpi4py/setup-mpi)!)
for automated testing on: Linux ([MPICH](https://www.mpich.org/), [OpenMPI](https://www.open-mpi.org/doc/) 
& [Intel MPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html)), 
macOS ([MPICH](https://www.mpich.org/) & [OpenMPI](https://www.open-mpi.org/doc/)) and 
Windows ([MS MPI](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)).

Features that are not implemented yet include (help welcome!):
- support for non-default communicators
- support for `MPI_IN_PLACE` in `[all]gather`/`scatter` and `allreduce`
- support for `MPI_Type_create_struct` (NumPy structured arrays)
- ...

### Hello world send/recv example:
```python
import numba, numba_mpi, numpy

@numba.jit()
def hello():
    src = numpy.array([1., 2., 3., 4., 5.])
    dst_tst = numpy.empty_like(src)

    if numba_mpi.rank() == 0:
        numba_mpi.send(src, dest=1, tag=11)
    elif numba_mpi.rank() == 1:
        numba_mpi.recv(dst_tst, source=0, tag=11)

hello()
```
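
The snippet needs to be launched under an MPI runner with at least two processes, e.g. `mpiexec -n 2 python hello.py` (the file name is illustrative): rank 0 sends the array while rank 1 receives it into `dst_tst`.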

### Example comparing numba-mpi vs. mpi4py performance:

The example below compares `Numba`+`mpi4py` vs. `Numba`+`numba-mpi` performance.
The sample code estimates $\pi$ by numerical integration of $\int_0^1 (4/(1+x^2))dx=\pi$ 
dividing the workload into `n_intervals` handled by separate MPI processes 
and then obtaining a sum using `allreduce` (see, e.g., analogous [Matlab docs example](https://www.mathworks.com/help/parallel-computing/numerical-estimation-of-pi-using-message-passing.html)).
The computation is carried out in a JIT-compiled function `get_pi_part()` and is repeated
`N_TIMES`. The repetitions and the MPI-handled reduction are done outside of the JIT-compiled
block for `mpi4py` and inside of it for `numba-mpi`.
Timing is repeated `N_REPEAT` times and the minimum time is reported.
The generated plot shown below depicts the speedup obtained by replacing `mpi4py`
with `numba_mpi`, plotted as a function of `N_TIMES / n_intervals`, i.e., the number of MPI calls per
interval. The speedup, which stems from avoiding roundtrips between JIT-compiled
and Python code, is significant (150%-300%) in all cases. The more often communication
is needed (smaller `n_intervals`), the larger the measured speedup. Note that nothing
in the actual number crunching (within the `get_pi_part()` function) or in the employed communication logic
(handled by the same MPI library) differs between the `mpi4py` and `numba-mpi` solutions.
The measured differences in execution time stem from the overhead of `mpi4py`'s higher-level
abstractions and from repeatedly entering and leaving the JIT-compiled block when using
`mpi4py`; both are avoided with `numba-mpi`.
```python
import timeit, mpi4py.MPI, numba, numpy as np, numba_mpi

N_TIMES = 10000   # number of get_pi_part() + allreduce iterations per timed call
N_REPEAT = 10     # number of timing repetitions (the minimum is reported)
RTOL = 1e-3

@numba.jit
def get_pi_part(n_intervals=1000000, rank=0, size=1):
    h = 1 / n_intervals
    partial_sum = 0.0
    for i in range(rank + 1, n_intervals, size):
        x = h * (i - 0.5)
        partial_sum += 4 / (1 + x**2)
    return h * partial_sum

@numba.jit
def pi_numba_mpi(n_intervals):
    pi = np.array([0.])
    part = np.empty_like(pi)
    for _ in range(N_TIMES):
        part[0] = get_pi_part(n_intervals, numba_mpi.rank(), numba_mpi.size())
        numba_mpi.allreduce(part, pi, numba_mpi.Operator.SUM)
        assert abs(pi[0] - np.pi) / np.pi < RTOL

def pi_mpi4py(n_intervals):
    pi = np.array([0.])
    part = np.empty_like(pi)
    for _ in range(N_TIMES):
        part[0] = get_pi_part(n_intervals, mpi4py.MPI.COMM_WORLD.rank, mpi4py.MPI.COMM_WORLD.size)
        mpi4py.MPI.COMM_WORLD.Allreduce(part, (pi, mpi4py.MPI.DOUBLE), op=mpi4py.MPI.SUM)
        assert abs(pi[0] - np.pi) / np.pi < RTOL

plot_x = list(range(1, 11))
plot_y = {'numba_mpi': [], 'mpi4py': []}
for x in plot_x:
    for impl in plot_y:
        plot_y[impl].append(min(timeit.repeat(
            f"pi_{impl}(n_intervals={N_TIMES // x})",
            globals=locals(),
            number=1,
            repeat=N_REPEAT
        )))

if numba_mpi.rank() == 0:
    from matplotlib import pyplot
    pyplot.figure(figsize=(8.3, 3.5), tight_layout=True)
    pyplot.plot(plot_x, np.array(plot_y['mpi4py'])/np.array(plot_y['numba_mpi']), marker='o')
    pyplot.xlabel('number of MPI calls per interval')
    pyplot.ylabel('mpi4py/numba-mpi wall-time ratio')
    pyplot.title(f'mpiexec -np {numba_mpi.size()}')
    pyplot.grid()
    pyplot.savefig('readme_plot.svg')
```
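
As with the hello-world snippet above, the benchmark is meant to be launched under an MPI runner, e.g. `mpiexec -n 4 python readme_example.py` (the file name and the number of processes are illustrative); only rank 0 renders and saves the plot.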

![plot](https://github.com/numba-mpi/numba-mpi/releases/download/tip/readme_plot.png)


### MPI resources on the web:

- MPI standard and general information:
    - https://www.mpi-forum.org/docs
    - https://en.wikipedia.org/wiki/Message_Passing_Interface
- MPI implementations:
    - OpenMPI: https://www.open-mpi.org
    - MPICH: https://www.mpich.org
    - MS MPI: https://learn.microsoft.com/en-us/message-passing-interface
    - Intel MPI: https://intel.com/content/www/us/en/developer/tools/oneapi/mpi-library-documentation.html
- MPI bindings:
    - Python: https://mpi4py.readthedocs.io
    - Python/JAX: https://mpi4jax.readthedocs.io
    - Julia: https://juliaparallel.org/MPI.jl
    - Rust: https://docs.rs/mpi
    - C++: https://boost.org/doc/html/mpi.html
    - R: https://cran.r-project.org/web/packages/Rmpi

### Acknowledgements:

We thank [all contributors](https://github.com/numba-mpi/numba-mpi/graphs/contributors) and users who provided feedback to the project 
  through [GitHub issues](https://github.com/numba-mpi/numba-mpi/issues).

Development of numba-mpi has been supported by the [Polish National Science Centre](https://ncn.gov.pl/en) (grant no. 2020/39/D/ST10/01220),
  the [Max Planck Society](https://www.mpg.de/en) and the [European Union](https://erc.europa.eu/) (ERC, EmulSim, 101044662). 
We further acknowledge Poland’s high-performance computing infrastructure [PLGrid](https://plgrid.pl) (HPC Centers: [ACK Cyfronet AGH](https://www.cyfronet.pl/en)) 
  for providing computer facilities and support within computational grant no. PLG/2023/016369.


            
