mpipartition
============

:Name: mpipartition
:Version: 1.2.0
:Summary: MPI volume decomposition and particle distribution tools
:Home page: https://github.com/ArgonneCPAC/MPIPartition
:Author: Michael Buehlmann
:Requires: Python >=3.8
:License: MIT
:Keywords: MPI, mpi4py, scientific computing, parallel computing
:Uploaded: 2023-12-04

MPIPartition
============


.. image:: https://img.shields.io/pypi/v/mpipartition.svg
   :target: https://pypi.python.org/pypi/mpipartition

.. image:: https://github.com/ArgonneCPAC/MPIPartition/actions/workflows/pypi.yml/badge.svg
   :target: https://github.com/ArgonneCPAC/MPIPartition/actions/workflows/pypi.yml

.. image:: https://github.com/ArgonneCPAC/MPIPartition/actions/workflows/sphinx.yml/badge.svg
   :target: https://github.com/ArgonneCPAC/MPIPartition/actions/workflows/sphinx.yml

A Python module for MPI volume decomposition and particle distribution


* Free software: MIT license
* Documentation: https://argonnecpac.github.io/MPIPartition
* Repository: https://github.com/ArgonneCPAC/MPIPartition


Features
--------

* Cartesian partitioning of a cubic volume (arbitrary number of dimensions) among MPI ranks
* Equal-area decomposition of a spherical shell (S2) among MPI ranks
* Distribution of particle data to the ranks owning the corresponding subvolume / surface segment
* Overloading of particle data at rank boundaries with "ghost particles" (see the sketch below)
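
To illustrate what "overloading" means: each rank keeps the particles it owns
and additionally receives copies of particles that lie within a chosen
distance of its subvolume boundaries, so that neighborhood queries near the
edges see the complete (periodic) particle set. Below is a minimal pure-NumPy
sketch of the idea in one dimension; it is independent of this package's API,
and the subvolume bounds are made up for illustration:

.. code-block:: python

   import numpy as np

   box_size = 1.0
   origin, extent = 0.25, 0.25    # hypothetical subvolume [0.25, 0.5) of one rank
   overload_length = 0.05

   # stand-in for the global particle set (held across ranks in the MPI setting)
   positions = np.random.uniform(0, box_size, 1000)

   # particles owned by this rank's subvolume
   owned = positions[(positions >= origin) & (positions < origin + extent)]

   # ghost particles: periodic copies within overload_length of either boundary
   # (mpipartition gathers these from neighboring ranks via MPI exchanges)
   lo = origin - overload_length
   shifted = (positions - lo) % box_size + lo    # wrap into [lo, lo + box_size)
   is_ghost = (shifted < origin) | (
       (shifted >= origin + extent) & (shifted < origin + extent + overload_length)
   )
   local = np.concatenate([owned, shifted[is_ghost]])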



Installation
------------

Installing from the PyPI repository:

.. code-block:: bash

   pip install mpipartition

Installing the development version from the Git repository:

.. code-block:: bash

   git clone https://github.com/ArgonneCPAC/mpipartition.git
   cd mpipartition
   pip install -e .
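
A quick way to verify that the installation works under MPI (the file name
here is arbitrary):

.. code-block:: python

   # save as check_mpipartition.py and run with:
   #   mpirun -n 4 python check_mpipartition.py
   from mpipartition import Partition

   partition = Partition()
   print(f"rank {partition.rank} of {partition.nranks}, decomposition: {partition.decomposition}")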


Requirements
------------

These packages will be installed automatically if they are not already present
(note that ``mpi4py`` additionally requires a working MPI library, such as
MPICH or Open MPI, on the system):

* Python >= 3.8
* `mpi4py <https://mpi4py.readthedocs.io/en/stable/>`_: MPI for Python
* `numpy <https://numpy.org/>`_: Python array library
* `numba <https://numba.pydata.org/>`_: Python JIT compiler

Basic Usage
-----------
See the `documentation <https://argonnecpac.github.io/MPIPartition>`_ for an
in-depth explanation of the API.

.. code-block:: python

   # this code goes into mpipartition_example.py

   from mpipartition import Partition, distribute, overload
   import numpy as np

   # create a partition of the unit cube with available MPI ranks
   box_size = 1.
   partition = Partition()

   if partition.rank == 0:
       print(f"Number of ranks: {partition.nranks}")
       print(f"Volume decomposition: {partition.decomposition}")

   # create random data
   nparticles_local = 1000
   data = {
       "x": np.random.uniform(0, 1, nparticles_local),
       "y": np.random.uniform(0, 1, nparticles_local),
       "z": np.random.uniform(0, 1, nparticles_local)
   }

   # distribute data to ranks assigned to corresponding subvolume
   data = distribute(partition, box_size, data, ('x', 'y', 'z'))

   # overload "edge" of each subvolume by 0.05
   data = overload(partition, box_size, data, 0.05, ('x', 'y', 'z'))
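
A useful sanity check after ``distribute`` is that no particles were lost: the
global particle count can be verified with a plain ``mpi4py`` reduction. A
short continuation of the script above:

.. code-block:: python

   from mpi4py import MPI

   # total particle count across all ranks; after distribute() alone this
   # equals nranks * 1000, and overload() adds ghost copies on top
   n_local = len(data["x"])
   n_total = MPI.COMM_WORLD.allreduce(n_local, op=MPI.SUM)
   if partition.rank == 0:
       print(f"global particle count: {n_total}")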

This code can then be executed with ``mpirun``:

.. code-block:: bash

   mpirun -n 10 python mpipartition_example.py
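
The number of ranks does not have to be a perfect cube: the box is split into
a rectangular grid of subvolumes. Assuming the grid shape follows MPI's
standard dims heuristic (an assumption; this README does not spell out the
algorithm), the decomposition for 10 ranks can be previewed with ``mpi4py``:

.. code-block:: python

   from mpi4py import MPI

   # MPI's built-in balanced factorization of 10 ranks into a 3-d grid
   print(MPI.Compute_dims(10, 3))  # typically [5, 2, 1]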

--------

A more applied example, using halo catalogs from a
`HACC <https://cpac.hep.anl.gov/projects/hacc/>`_ cosmological simulation (in
the `GenericIO <https://git.cels.anl.gov/hacc/genericio>`_ data format):

.. code-block:: python

   from mpipartition import Partition, distribute, overload
   import numpy as np
   import pygio

   # create a partition with available MPI ranks
   box_size = 64.  # box size in Mpc/h
   partition = Partition(3)  # 3 spatial dimensions (3 is also the default)

   # read GenericIO data in parallel
   data = pygio.read_genericio("m000p-499.haloproperties")

   # distribute
   data = distribute(partition, box_size, data, [f"fof_halo_center_{x}" for x in "xyz"])

   # mark "owned" data with rank (allows differentiating owned and overloaded data)
   data["status"] = partition.rank * np.ones(len(data["fof_halo_center_x"]), dtype=np.uint16)

   # overload by 4 Mpc/h
   data = overload(partition, box_size, data, 4., [f"fof_halo_center_{x}" for x in "xyz"])

   # now we can run analyses such as 2-point correlation functions (valid up
   # to the 4 Mpc/h overload length) or neighbor finding, etc.
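
Because the ``status`` column records the owning rank of each halo, separating
owned objects from overloaded (ghost) copies afterwards is straightforward; a
short continuation of the example above:

.. code-block:: python

   # boolean mask: True for halos owned by this rank, False for ghost copies
   is_owned = data["status"] == partition.rank
   n_owned = int(np.sum(is_owned))
   n_ghost = len(is_owned) - n_owned
   print(f"rank {partition.rank}: {n_owned} owned halos, {n_ghost} ghosts")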

            
