# ACME: Asynchronous Computing Made Easy
main: [Build Status](https://travis-ci.com/esi-neuroscience/acme)
dev: [Build Status](https://travis-ci.com/esi-neuroscience/acme)
## Summary
The objective of ACME (pronounced *"ak-mee"*) is to provide easy-to-use
wrappers for calling Python functions in parallel ("embarrassingly parallel workloads").
ACME is developed at the
[Ernst Strüngmann Institute (ESI) gGmbH for Neuroscience in Cooperation with Max Planck Society](https://www.esi-frankfurt.de/)
and released free of charge under the
[BSD 3-Clause "New" or "Revised" License](https://en.wikipedia.org/wiki/BSD_licenses#3-clause_license_(%22BSD_License_2.0%22,_%22Revised_BSD_License%22,_%22New_BSD_License%22,_or_%22Modified_BSD_License%22)).
ACME relies on the concurrent processing library [Dask](https://docs.dask.org/en/latest/)
and was primarily designed to facilitate the use of [SLURM](https://slurm.schedmd.com/documentation.html)
on the ESI HPC cluster. However, local multi-processing hardware (i.e., multi-core CPUs)
is fully supported as well. ACME is based on the parallelization engine used in
[SyNCoPy](http://www.syncopy.org/) and is itself part of the SyNCoPy package.
## Installation
ACME can be installed with pip:
```
pip install esi-acme
```
To get the latest development version, simply clone our GitHub repository:
```
git clone https://github.com/esi-neuroscience/acme.git
```
## Usage
### Basic Examples
In the simplest use case, everything is handled automatically:
```python
from acme import ParallelMap

def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    pmap.compute()
```
### Intermediate Examples
Set the number of function calls via `n_inputs`:
```python
import numpy as np
from acme import ParallelMap

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

pmap = ParallelMap(f, [2, 4, 6, 8], [2, 2], z=np.array([1, 2]), w=np.ones((8, 1)), n_inputs=2)

with pmap as p:
    p.compute()
```
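To illustrate how `n_inputs=2` partitions these arguments, here is a plain-Python sketch of our reading of the dispatch (an assumption for illustration, not ACME's actual implementation): `y` and `z` have one element per call, while `x` and `w` do not match the call count and are therefore passed unchanged to both calls:

```python
import numpy as np

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

x = [2, 4, 6, 8]        # length != n_inputs: broadcast to both calls
w = np.ones((8, 1))     # broadcast to both calls
ys = [2, 2]             # one element per call
zs = np.array([1, 2])   # one element per call

# Serial equivalent of the two parallel calls: results are 22.0 and 44.0
results = [f(x, y, z=z, w=w) for y, z in zip(ys, zs)]
```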
### Advanced Use
Allocate a custom `client` object and recycle it for several computations:
```python
import numpy as np
from acme import ParallelMap, esi_cluster_setup

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

def g(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (max(x) + y) * z * w.sum()

n_jobs = 200
client = esi_cluster_setup(partition="8GBXS", n_jobs=n_jobs)

x = [2, 4, 6, 8]
z = range(n_jobs)
w = np.ones((8, 1))

pmap = ParallelMap(f, x, np.random.rand(n_jobs), z=z, w=w, n_inputs=n_jobs)
with pmap as p:
    p.compute()

pmap = ParallelMap(g, x, np.random.rand(n_jobs), z=z, w=w, n_inputs=n_jobs)
with pmap as p:
    p.compute()
```
## Handling results
### Load results from files
By default, results are saved to disk in HDF5 format and their filenames are returned as a list of strings.
```python
with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    filenames = pmap.compute()
```
Example loading code:
```python
import h5py
import numpy as np

out = np.zeros((4,))
for ii, fname in enumerate(filenames):
    with h5py.File(fname, 'r') as f:
        out[ii] = np.array(f['result_0'])
```
### Collect results in local memory
This is possible but not recommended:
```python
with ParallelMap(f, [2, 4, 6, 8], 4, write_worker_results=False) as pmap:
    results = pmap.compute()

out = np.array([xi[0][0] for xi in results])
```
## Debugging
Use the `debug` keyword to perform all function calls in the local thread of
the active Python interpreter:
```python
with ParallelMap(f, [2, 4, 6, 8], 4, z=None) as pmap:
    results = pmap.compute(debug=True)
```
This way, tools like `pdb` or the `%debug` IPython magic can be used.
## Documentation and Contact
To report bugs or ask questions please use our
[GitHub issue tracker](https://github.com/esi-neuroscience/acme/issues).
More usage details and background information are available in our
[online documentation](https://esi-acme.readthedocs.io/en/latest/).