==========================================
MPI4All: A script to generate MPI bindings
==========================================
--------
Overview
--------
This package provides a Python script to parse and generate bindings for the *Message Passing
Interface* (`MPI <https://www.mpi-forum.org/>`_) standard. The parser analyzes the MPI headers and generates a specification file with the defined macros, functions, and types. The specification file differs for each MPI version and implementation; it can be stored and reused to generate bindings without running the parser again.
We can currently generate bindings for Java and Go. The Java bindings use the Foreign Linker API and the Foreign Memory Access API, so their performance is significantly better than Java Native Interface (JNI) implementations. The Go bindings use cgo, so the MPI headers are needed at compile time. More languages may be added in the future; feel free to make a pull request.
The objective of the project is to create efficient MPI bindings automatically. The project will never become an object-oriented interface like `mpi4py <https://github.com/mpi4py/mpi4py/>`_, although an equivalent library could be built on top of our bindings.
MPI4All was developed as part of the `IgnisHPC <https://github.com/ignishpc/>`_ project to support MPI usage.
-------
Install
-------
You can install MPI4All using pip::

    $ pip install mpi4all
------------
Dependencies
------------
* `Python <https://www.python.org/>`_ 3.9+

* An MPI implementation; for Java, it must be built as shared/dynamic
  libraries.
Tested with:
* `MPICH <https://www.mpich.org/>`_: 3.1.4, 3.2.1, 3.3.2, 3.4.3, 4.0, 4.1
* `Open MPI <https://www.open-mpi.org/>`_: 4.0.7, 4.1.4, 5.0.0rc12
--------
Examples
--------
MPI4All
^^^^^^^
MPI4All can generate the bindings for **Java** and **Go** using the default MPI library installed on the system::

    $ mpi4all --go --java

or using a specification file::

    $ mpi4all --load mpich-4.0.json --go --java
Specification files can be generated with ``--dump`` or downloaded from the `releases <https://github.com/citiususc/mpi4all/releases>`_ section.
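For example, a specification file can be generated once with ``--dump`` and reused on machines without the parser's compiler requirements (``mpi-spec.json`` is a hypothetical file name)::

    $ mpi4all --dump mpi-spec.json
    $ mpi4all --load mpi-spec.json --go --java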
Java
^^^^
External functions cannot access data inside the Java heap. The example below shows how to use ``ByteBuffer.allocateDirect`` and ``Arena`` to allocate memory outside the Java heap.
.. code-block:: java

    import java.lang.foreign.*;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.IntBuffer;

    import org.mpi.Mpi;

    public class Main {
        public static void main(String[] args) throws Throwable {
            Mpi.MPI_Init(Mpi.C_pointer.NULL.cast(), Mpi.MPI_ARGVS_NULL);

            int rank;
            int size;

            // When the buffer is interpreted on the Java side, the native byte order must be used.
            // If the MPI function only sends or receives the buffer, the order does not matter.
            ByteBuffer buffer = ByteBuffer.allocateDirect(Mpi.C_int.byteSize()).order(ByteOrder.nativeOrder());

            Mpi.MPI_Comm_rank(Mpi.MPI_COMM_WORLD, new Mpi.C_pointer<>(MemorySegment.ofBuffer(buffer)));
            rank = buffer.getInt(0);
            try (Arena arena = Arena.ofConfined()) { // Using a confined arena
                Mpi.C_int c_size = Mpi.C_int.alloc(arena);
                Mpi.MPI_Comm_size(Mpi.MPI_COMM_WORLD, c_size.pointer(arena));
                size = c_size.get();
            }

            buffer = ByteBuffer.allocateDirect(Mpi.C_int.byteSize() * size).order(ByteOrder.nativeOrder());

            Mpi.C_int c_rank = Mpi.C_int.alloc(); // Using the auto GC arena
            c_rank.set(rank);
            Mpi.MPI_Allgather(c_rank.pointer().cast(), 1, Mpi.MPI_INT,
                    new Mpi.C_pointer<>(MemorySegment.ofBuffer(buffer)), 1, Mpi.MPI_INT, Mpi.MPI_COMM_WORLD);

            // Read the gathered values as ints, not raw bytes
            IntBuffer result = buffer.asIntBuffer();
            for (int i = 0; i < size; i++) {
                if (i != result.get(i)) {
                    throw new RuntimeException("Allgather error");
                }
            }

            Mpi.MPI_Finalize();
        }
    }
Go
^^
``C_int`` and ``int`` are usually aliases, but it is preferable to use ``C_int`` to avoid surprises. Functions with ``void *`` arguments take an ``unsafe.Pointer`` instead; the auxiliary functions ``mpi.P`` and ``mpi.PA`` convert variables and arrays, respectively, to ``unsafe.Pointer``. All other pointers are converted to their Go equivalents, so ``&var`` or ``&array[0]`` is sufficient to pass the memory address.
.. code-block:: go

    package main

    import "mpi"

    func main() {
        if err := mpi.MPI_Init(nil, nil); err != nil {
            panic(err)
        }

        var rank mpi.C_int
        var size mpi.C_int

        if err := mpi.MPI_Comm_rank(mpi.MPI_COMM_WORLD, &rank); err != nil {
            panic(err)
        }

        if err := mpi.MPI_Comm_size(mpi.MPI_COMM_WORLD, &size); err != nil {
            panic(err)
        }

        result := make([]mpi.C_int, int(size))

        if err := mpi.MPI_Allgather(mpi.P(&rank), 1, mpi.MPI_INT,
            mpi.PA(&result), mpi.C_int(len(result)), mpi.MPI_INT, mpi.MPI_COMM_WORLD); err != nil {
            panic(err)
        }

        for i := 0; i < int(size); i++ {
            if i != int(result[i]) {
                panic("Allgather error")
            }
        }

        if err := mpi.MPI_Finalize(); err != nil {
            panic(err)
        }
    }
-----
Usage
-----
.. code-block::

    usage: mpi4all [-h] [--out path] [--log lvl] [--cc path] [--cxx path]
                   [--exclude str [str ...]] [--enable-fortran] [--no-arg-names]
                   [--dump path] [--load path] [--cache path] [--go]
                   [--no-generic] [--go-package name] [--go-out name] [--java]
                   [--java-package name] [--java-class name] [--java-out name]
                   [--java-lib-name name] [--java-lib-out name] [--version]

    A script to generate mpi bindings

    options:
      -h, --help            show this help message and exit
      --out path            Output folder, by default is working directory
      --log lvl             Log level, default error
      --version             show program's version number and exit

    Mpi parser arguments:
      --cc path             MPI C compiler, by default uses the 'mpicc' in PATH
      --cxx path            MPI C++ compiler, by default uses the 'mpic++' in PATH
      --exclude str [str ...]
                            Exclude functions and macros that match with any
                            pattern
      --enable-fortran      Parse MPI Fortran functions, which are disabled by
                            default, to avoid linking errors if they are not
                            available
      --no-arg-names        Use xi as the parameter name in MPI functions
      --dump path           Dump parser output as json file, - for stdout
      --load path           Don't use a parser and load info from a JSON file, -
                            for stdin
      --cache path          Make --dump if the file does not exist and --load
                            otherwise

    Go builder arguments:
      --go                  Enable Go generator
      --no-generic          Disable utility functions that require go 1.18+
      --go-package name     Go package name, default mpi
      --go-out name         Go output directory, by default <out>

    Java builder arguments:
      --java                Enable Java 21 generator
      --java-package name   Java package name, default org.mpi
      --java-class name     Java class name, default Mpi
      --java-out name       Java output directory, default <out>
      --java-lib-name name  Java C library name without any extension, default
                            mpi4alljava
      --java-lib-out name   Java output directory for C library, default <java-
                            out>/<java-lib-name>
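As a practical example, the ``--cache`` flag combines ``--dump`` and ``--load``: the first run parses the headers and writes the specification file, and later runs load it instead of re-parsing (``spec.json`` and the ``bindings`` directory are hypothetical names)::

    $ mpi4all --cache spec.json --go --java --out bindings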