Name | executorlib |
Version | 0.4.0 |
home_page | None |
Summary | Up-scale python functions for high performance computing (HPC) with executorlib. |
upload_time | 2025-02-15 18:01:55 |
maintainer | None |
docs_url | None |
author | None |
requires_python | <3.14,>=3.9 |
license | BSD 3-Clause License
Copyright (c) 2022, Jan Janssen
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
keywords | high performance computing, hpc, task scheduler, slurm, flux-framework, executor |
requirements | No requirements were recorded. |
# executorlib
[Pipeline](https://github.com/pyiron/executorlib/actions/workflows/pipeline.yml)
[Coverage](https://codecov.io/gh/pyiron/executorlib)
[Binder](https://mybinder.org/v2/gh/pyiron/executorlib/HEAD?labpath=notebooks%2Fexamples.ipynb)
Up-scale Python functions for high performance computing (HPC) with executorlib.
## Key Features
* **Up-scale your Python functions beyond a single computer.** - executorlib extends the [Executor interface](https://docs.python.org/3/library/concurrent.futures.html#executor-objects)
  from the Python standard library and combines it with job schedulers for high performance computing (HPC), including
  the [Simple Linux Utility for Resource Management (SLURM)](https://slurm.schedmd.com) and [flux](http://flux-framework.org).
  With this combination, executorlib allows users to distribute their Python functions over multiple compute nodes.
* **Parallelize your Python program one function at a time** - executorlib allows users to assign dedicated computing
  resources like CPU cores, threads or GPUs to one Python function call at a time, so you can accelerate your Python
  code function by function.
* **Permanent caching of intermediate results to accelerate rapid prototyping** - To accelerate the development of
  machine learning pipelines and simulation workflows, executorlib provides optional caching of intermediate results for
  iterative development in interactive environments like Jupyter notebooks.
## Examples
The Python standard library provides the [Executor interface](https://docs.python.org/3/library/concurrent.futures.html#executor-objects)
with the [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) and the
[ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) for parallel
execution of Python functions on a single computer. executorlib extends this functionality to distribute Python
functions over multiple computers within a high performance computing (HPC) cluster. This can be achieved either by
submitting each function as an individual job to the HPC job scheduler with an [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html),
or by requesting a job from the HPC cluster and then distributing the Python functions within this job with an
[HPC Job Executor](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html). Finally, to accelerate the
development process, executorlib also provides a [Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html)
to use the executorlib functionality on a laptop, workstation or single compute node for testing. Starting with the
[Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html):
```python
from executorlib import SingleNodeExecutor
with SingleNodeExecutor() as exe:
    future_lst = [exe.submit(sum, [i, i]) for i in range(1, 5)]
    print([f.result() for f in future_lst])
```
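Because executorlib extends the standard Executor interface, the identical submit/result pattern also runs with the standard library's own executors. A quick sanity check with `ThreadPoolExecutor`, which needs neither executorlib nor an HPC cluster:

```python
# Same pattern as above, but with the standard library's ThreadPoolExecutor;
# executorlib's SingleNodeExecutor is a drop-in replacement for it.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor() as exe:
    future_lst = [exe.submit(sum, [i, i]) for i in range(1, 5)]
    print([f.result() for f in future_lst])  # prints [2, 4, 6, 8]
```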
In the same way, executorlib can also execute Python functions which use additional computing resources, like multiple
CPU cores, CPU threads or GPUs. For example, if the Python function internally uses the Message Passing Interface (MPI)
via the [mpi4py](https://mpi4py.readthedocs.io) Python library:
```python
from executorlib import SingleNodeExecutor
def calc(i):
    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    return i, size, rank


with SingleNodeExecutor() as exe:
    fs = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(fs.result())
```
The additional `resource_dict` parameter defines the computing resources allocated to the execution of the submitted
Python function. In addition to the number of compute cores (`cores`), the resource dictionary can also define the threads per core
(`threads_per_core`), the GPUs per core (`gpus_per_core`), the working directory (`cwd`), the option to use the
OpenMPI oversubscribe feature (`openmpi_oversubscribe`) and, for the [Simple Linux Utility for Resource
Management (SLURM)](https://slurm.schedmd.com) queuing system, additional command line arguments
via the `slurm_cmd_args` parameter - see the [resource dictionary](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#resource-dictionary) documentation.
This flexibility to assign computing resources on a per-function-call basis simplifies the up-scaling of Python programs.
Only the parts of the Python program which benefit from parallel execution are implemented as MPI-parallel Python
functions, while the rest of the program remains serial.
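Put together, a fuller resource dictionary might look like the sketch below. The keys are the ones listed above; the concrete values (working directory, SLURM partition) are purely illustrative assumptions:

```python
# Illustrative resource dictionary; keys per the executorlib documentation,
# values (cwd, partition name) are hypothetical examples.
resource_dict = {
    "cores": 2,                               # MPI ranks for this function call
    "threads_per_core": 2,                    # e.g. OpenMP threads per core
    "gpus_per_core": 0,                       # no GPUs requested
    "cwd": "/scratch/example_run",            # assumed working directory
    "openmpi_oversubscribe": False,           # OpenMPI oversubscribe feature off
    "slurm_cmd_args": ["--partition=debug"],  # SLURM only; assumed argument
}
```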
The same function can be submitted to the [SLURM](https://slurm.schedmd.com) job scheduler by replacing the
`SingleNodeExecutor` with the `SlurmClusterExecutor`. The rest of the example remains the same, which highlights how
executorlib accelerates the rapid prototyping and up-scaling of HPC Python programs.
```python
from executorlib import SlurmClusterExecutor
def calc(i):
    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    return i, size, rank


with SlurmClusterExecutor() as exe:
    fs = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(fs.result())
```
In this case the [Python simple queuing system adapter (pysqa)](https://pysqa.readthedocs.io) is used to submit the
`calc()` function to the [SLURM](https://slurm.schedmd.com) job scheduler and request an allocation with two CPU cores
for the execution of the function - [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html). In the background the [sbatch](https://slurm.schedmd.com/sbatch.html)
command is used to request the allocation to execute the Python function.
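For orientation, the batch submission performed by pysqa corresponds roughly to a script of the following shape. This is a hand-written sketch, not the script executorlib actually generates; the worker script name is a hypothetical placeholder:

```shell
#!/bin/bash
#SBATCH --ntasks=2
# Sketch only: pysqa renders the real submission script from its scheduler
# templates; "run_submitted_function.py" is a hypothetical placeholder.
srun -n 2 python run_submitted_function.py
```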
Within a given [SLURM](https://slurm.schedmd.com) job, executorlib can also be used to assign a subset of the
available computing resources to execute a given Python function. In terms of the [SLURM](https://slurm.schedmd.com)
commands, this functionality internally uses the [srun](https://slurm.schedmd.com/srun.html) command to request a subset
of the resources of a given queuing system allocation.
```python
from executorlib import SlurmJobExecutor
def calc(i):
    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    return i, size, rank


with SlurmJobExecutor() as exe:
    fs = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(fs.result())
```
In addition to the support for [SLURM](https://slurm.schedmd.com), executorlib also provides support for the hierarchical
[flux](http://flux-framework.org) job scheduler. The [flux](http://flux-framework.org) job scheduler is developed at
[Lawrence Livermore National Laboratory](https://computing.llnl.gov/projects/flux-building-framework-resource-management)
to address the needs of the upcoming generation of exascale computers. Even on traditional HPC clusters, the
hierarchical approach of [flux](http://flux-framework.org) is beneficial for distributing hundreds of tasks within a
given allocation. Even when [SLURM](https://slurm.schedmd.com) is used as the primary job scheduler of your HPC cluster, it is
recommended to use [SLURM with flux](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html#slurm-with-flux)
as the hierarchical job scheduler within the allocations.
## Documentation
* [Installation](https://executorlib.readthedocs.io/en/latest/installation.html)
  * [Minimal](https://executorlib.readthedocs.io/en/latest/installation.html#minimal)
  * [MPI Support](https://executorlib.readthedocs.io/en/latest/installation.html#mpi-support)
  * [Caching](https://executorlib.readthedocs.io/en/latest/installation.html#caching)
  * [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/installation.html#hpc-cluster-executor)
  * [HPC Job Executor](https://executorlib.readthedocs.io/en/latest/installation.html#hpc-job-executor)
  * [Visualisation](https://executorlib.readthedocs.io/en/latest/installation.html#visualisation)
  * [For Developers](https://executorlib.readthedocs.io/en/latest/installation.html#for-developers)
* [Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html)
  * [Basic Functionality](https://executorlib.readthedocs.io/en/latest/1-single-node.html#basic-functionality)
  * [Parallel Functions](https://executorlib.readthedocs.io/en/latest/1-single-node.html#parallel-functions)
  * [Performance Optimization](https://executorlib.readthedocs.io/en/latest/1-single-node.html#performance-optimization)
* [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html)
  * [SLURM](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html#slurm)
  * [Flux](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html#flux)
* [HPC Job Executor](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html)
  * [SLURM](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html#slurm)
  * [SLURM with Flux](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html#slurm-with-flux)
  * [Flux](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html#flux)
* [Trouble Shooting](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html)
  * [Filesystem Usage](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#filesystem-usage)
  * [Firewall Issues](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#firewall-issues)
  * [Message Passing Interface](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#message-passing-interface)
  * [Python Version](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#python-version)
  * [Resource Dictionary](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#resource-dictionary)
  * [SSH Connection](https://executorlib.readthedocs.io/en/latest/trouble_shooting.html#ssh-connection)
* [Developer](https://executorlib.readthedocs.io/en/latest/4-developer.html)
  * [Communication](https://executorlib.readthedocs.io/en/latest/4-developer.html#communication)
  * [External Executables](https://executorlib.readthedocs.io/en/latest/4-developer.html#external-executables)
  * [License](https://executorlib.readthedocs.io/en/latest/4-developer.html#license)
  * [Modules](https://executorlib.readthedocs.io/en/latest/4-developer.html#modules)
* [Interface](https://executorlib.readthedocs.io/en/latest/api.html)
## Raw data
{
    "_id": null,
    "home_page": null,
    "name": "executorlib",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.14,>=3.9",
    "maintainer_email": null,
    "keywords": "high performance computing, hpc, task scheduler, slurm, flux-framework, executor",
    "author": null,
    "author_email": "Jan Janssen <janssen@lanl.gov>",
    "download_url": "https://files.pythonhosted.org/packages/a2/3a/6b50f148eed7b0eeda6b082cb8cafd382d708f3126c196ed4a2e3aa6499a/executorlib-0.4.0.tar.gz",
    "platform": null,
"bugtrack_url": null,
    "summary": "Up-scale python functions for high performance computing (HPC) with executorlib.",
    "version": "0.4.0",
    "project_urls": {
        "Documentation": "https://executorlib.readthedocs.io",
        "Homepage": "https://github.com/pyiron/executorlib",
        "Repository": "https://github.com/pyiron/executorlib"
    },
    "split_keywords": [
        "high performance computing",
        " hpc",
        " task scheduler",
        " slurm",
        " flux-framework",
        " executor"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "077deaeeb97e9d19a9b7308b667f18cdfdfac7d2dc17347fc5fcf663755696cc",
                "md5": "cfca97acd00271abf7aaa9aaf787e08f",
                "sha256": "b8bc12331f46c32ee718d6350171f91abe7334726e90942fee03f010ea40fa26"
            },
            "downloads": -1,
            "filename": "executorlib-0.4.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "cfca97acd00271abf7aaa9aaf787e08f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.14,>=3.9",
            "size": 64157,
            "upload_time": "2025-02-15T18:01:52",
            "upload_time_iso_8601": "2025-02-15T18:01:52.740425Z",
            "url": "https://files.pythonhosted.org/packages/07/7d/eaeeb97e9d19a9b7308b667f18cdfdfac7d2dc17347fc5fcf663755696cc/executorlib-0.4.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "a23a6b50f148eed7b0eeda6b082cb8cafd382d708f3126c196ed4a2e3aa6499a",
                "md5": "0e6f499e6727f3b1681bf55cea009d44",
                "sha256": "0f8a09b0b1d79e1caa370cd0c70748b7b690aead881d5847080070da47121cf3"
            },
            "downloads": -1,
            "filename": "executorlib-0.4.0.tar.gz",
            "has_sig": false,
            "md5_digest": "0e6f499e6727f3b1681bf55cea009d44",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.14,>=3.9",
            "size": 56560,
            "upload_time": "2025-02-15T18:01:55",
            "upload_time_iso_8601": "2025-02-15T18:01:55.193858Z",
            "url": "https://files.pythonhosted.org/packages/a2/3a/6b50f148eed7b0eeda6b082cb8cafd382d708f3126c196ed4a2e3aa6499a/executorlib-0.4.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-02-15 18:01:55",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "pyiron",
    "github_project": "executorlib",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "executorlib"
}