# Scalable
[v0.5.7](https://github.com/JGCRI/scalable/tree/0.5.7)

Scalable is a Python library that aids in running complex workflows on HPC systems by orchestrating multiple containers, requesting appropriate HPC jobs from the scheduler, and providing a Python environment for distributed computing. It is designed primarily for use with JGCRI climate models but can be easily adapted for arbitrary use cases.

## Installation

Use the package manager [pip](https://pip.pypa.io/en/stable/) to install scalable.

```bash
[user@localhost ~]$ pip install scalable
```

Alternatively, the git repo can be cloned directly and installed locally. Clone it into the preferred working directory.

```bash
[user@localhost <local_work_dir>]$ git clone https://github.com/JGCRI/scalable.git
[user@localhost <local_work_dir>]$ pip install ./scalable
```
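
Either way, the install can be confirmed with a standard pip command, which prints the installed version and location:

```bash
[user@localhost <local_work_dir>]$ pip show scalable
```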

## Setup

### Compatibility Requirements

Docker is needed to run the bootstrap script. The script itself is preferably run in a Linux environment.
For Windows users, Git Bash is recommended for bootstrapping. For macOS users, the Terminal app should suffice.

HPC Schedulers Supported: Slurm

Tools required on HPC Host: apptainer\
Tools required on Local Host: docker

### Work Directory Setup

A work directory needs to be set up on the HPC host to provide a structured location for all required dependencies and any outputs. The provided bootstrap script helps set up the work directory and the containers that will be used as workers. **It is highly recommended to use the bootstrap script to use scalable.** Moreover, since the bootstrap script attempts to connect to the HPC host multiple times, **it is also highly recommended to enable password-less SSH login through private keys.** Otherwise, a password may need to be entered up to 15 times in a single run of the script. A guide to setting up key-based authentication can be found [here](https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server).
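
For example, on most Linux/macOS systems (and in Git Bash on Windows), a key pair can be generated and installed on the HPC host with the standard OpenSSH tools; the user and host names below are placeholders.

```bash
[user@localhost ~]$ ssh-keygen -t ed25519                 # generate a key pair (accept the defaults)
[user@localhost ~]$ ssh-copy-id <user>@<hpc_host>         # install the public key on the HPC host
[user@localhost ~]$ ssh <user>@<hpc_host>                 # should now log in without a password prompt
```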

Once scalable is installed through pip, navigate to a directory on your local computer where the bootstrap script can place containers, logs, and any other required dependencies. The bootstrap script downloads and builds files on both your local system and the HPC system.

```bash
[user@localhost ~]$ cd <local_work_dir>
[user@localhost <local_work_dir>]$ scalable_bootstrap
```

Follow and answer the prompts in the bootstrap script. All dependencies will be downloaded automatically. Once everything has been downloaded and built, the script will start an SSH session with the HPC host, logging the user into the work directory on the HPC.

The python3 command is also aliased to start a server. Simply calling python3 will launch an interactive session with all the dependencies. A file or other arguments can also be given to python3, and they will be run as a Python file within a container. **Only files present in the current work directory and its subdirectories on the HPC host can be run this way.** Any files stored above the current work directory need to be copied under it before they can be run.

```bash
[user@hpchost <work_dir>]$ python3
[user@hpchost <work_dir>]$ python3 <filename>.py
```

If the script fails partway through, or if a new session needs to be started, simply run the same command again and the bootstrap script will pick up where it left off. If everything is already installed, the script logs into the HPC SSH session directly. For everything to function properly, it is recommended to use the bootstrap script every time scalable needs to be used. The initial setup takes time, but once everything is installed the script only checks for required dependencies and then connects to the HPC host directly.

### Manual Changes

For most users, the most relevant file to change is the Dockerfile. Users can use the one provided in this repo or write a Dockerfile of their own. The Dockerfile consists of one or more container targets along with the commands for each one. The targets included in the provided Dockerfile build containers for [gcam](https://github.com/JGCRI/gcam-core), [stitches](https://github.com/JGCRI/stitches), and [osiris](https://github.com/JGCRI/osiris), along with other targets representing other models. The [scalable](https://github.com/JGCRI/scalable) and [apptainer](https://github.com/apptainer/apptainer) targets are required by the bootstrap script.

## Usage

Scalable leverages Dask to manage resources and workers on the HPC system. After launching python3, a SlurmCluster object can be created to start the Dask scheduler.

```bash
[user@hpchost <work_dir>]$ python3
```
```python
from scalable import SlurmCluster, ScalableClient

cluster = SlurmCluster(queue='slurm', walltime='02:00:00', account='GCIMS', interface='ib0', silence_logs=False)
```

As with Dask, information about the queue and the account to use on the HPC scheduler is required. `ib0` is likely the correct interface on most HPC systems. The walltime is the expected time within which the assigned jobs can be completed. **If the walltime is less than the time it takes to run any single function given to the cluster, that function will never run to completion.** Instead, the job will get stuck in a cycle of being killed when its time is up and being re-scheduled because it was unable to finish. For this reason, it is recommended to set the walltime to more than the estimated time needed to complete the longest-running function. The walltime can also be changed at any time after the cluster is launched, and any future resource requests will include the new walltime.
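
As a rough guide to choosing the value, the sketch below (standard library only, not part of the scalable API) derives a walltime string from the estimated runtime of the longest function plus a safety margin:

```python
from datetime import timedelta

# Estimate for the slowest single function, plus head-room so the job
# is not killed mid-run.
longest_task = timedelta(hours=1, minutes=30)
margin = timedelta(minutes=30)
walltime = longest_task + margin

# Format as HH:MM:SS for the SlurmCluster walltime argument.
hours, remainder = divmod(int(walltime.total_seconds()), 3600)
minutes, seconds = divmod(remainder, 60)
print(f"{hours:02d}:{minutes:02d}:{seconds:02d}")  # -> 02:00:00, as used above
```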

```python
cluster.add_container(tag="gcam", cpus=10, memory="20G", dirs={"/qfs/people/user/work/gcam-core":"/gcam-core", "/rcfs":"/rcfs"})
cluster.add_container(tag="stitches", cpus=6, memory="50G", dirs={"/qfs/people/user":"/user", "/rcfs":"/rcfs"})
cluster.add_container(tag="osiris", cpus=8, memory="20G", dirs={"/rcfs/projects/gcims/data":"/data", "/qfs/people/user/test":"/scratch"})
```

Before launching the workers, the configuration of the worker (container) targets needs to be specified. The containers to be launched as workers must first be added by specifying their tag, the number of CPU cores they need, the amount of memory they need, and the directories on the HPC host to bind into the container so that those directories are accessible from within it.

```python
cluster.add_worker(n=3, tag="gcam")
cluster.add_worker(n=2, tag="stitches")
cluster.add_worker(n=3, tag="osiris")
```

Workers are launched on the cluster simply by adding them. This call will only succeed if a container with the same tag was added beforehand. Removing workers is just as easy.

```python
cluster.remove_workers(n=2, tag="gcam")
cluster.remove_workers(n=1, tag="stitches")
cluster.remove_workers(n=3, tag="osiris")
```

To compute functions on these workers, a client object needs to be created to interact with the cluster. Functions can then be submitted to be computed on the workers.

```python

def func1(param):
    import gcam
    print(f"{param=} {gcam.__version__}")
    return gcam.__version__

def func2(param):
    import stitches
    print(f"{param=} {stitches.__version__}")
    return stitches.__version__

def func3(param):
    import osiris
    print(f"{param=} {osiris.__version__}")
    return osiris.__version__

client = ScalableClient(cluster)

fut1 = client.submit(func1, "gcam", tag="gcam")
fut2 = client.submit(func2, "stitches", tag="stitches")
fut3 = client.submit(func3, "osiris", tag="osiris")
```

Note how different functions use different libraries. These functions cannot be run by containers that do not have the required libraries installed. **It is therefore recommended to always specify the tag of the desired worker when submitting a function.**

The functions will print to the logs of whichever worker they ran on. The client returns futures.
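
Since Scalable builds on Dask, the futures can presumably be resolved in the usual Dask style; the sketch below assumes `result()` and `client.gather()` behave as they do for a Dask `Client`, which may differ slightly in scalable.

```python
# Block until func1 finishes on its worker and return its value
# (assuming Dask-style futures).
gcam_version = fut1.result()

# Collect several futures at once; assumes ScalableClient exposes the
# same gather() method as a Dask Client.
versions = client.gather([fut1, fut2, fut3])
print(versions)
```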

The cluster can optionally be closed on exit; automatic exit is supported. **It is recommended to check with the job scheduler on the HPC host for any pending or zombie jobs,** although the cluster should cancel any such jobs on exit.
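
On Slurm, leftover jobs can be checked and, if necessary, cancelled with the standard scheduler commands:

```bash
[user@hpchost <work_dir>]$ squeue -u $USER      # list your pending/running jobs
[user@hpchost <work_dir>]$ scancel <job_id>     # cancel a leftover job if needed
```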

### Function Caching

To avoid wasting resources and time in the case of a crash, workers getting disconnected, or simply the walltime running out, function caching is supported so that functions which have already been run are not run again. To make any function cacheable, applying the decorator should suffice.

```python
from scalable import cacheable
import time

@cacheable(return_type=str, param=str)
def func1(param):
    import gcam
    time.sleep(5)
    print(f"{param=} {gcam.__version__}")
    return gcam.__version__

@cacheable(return_type=str, recompute=True, param=str)
def func2(param):
    import stitches
    time.sleep(3)
    print(f"{param=} {stitches.__version__}")
    return stitches.__version__

@cacheable
def func3(param):
    import osiris
    time.sleep(10)
    print(f"{param=} {osiris.__version__}")
    return osiris.__version__

```

In the example above, the functions will wait 5, 3, and 10 seconds the first time they are computed. However, their results will be cached by the decorator, so if the functions are run again with the same arguments, their results are returned from the cache instead and they do not sleep. Arguments can be passed directly to the cacheable decorator. **It is always recommended to specify the return type and the types of the arguments for each use.** This ensures the module behaves as expected and that results are cached correctly. --TODO--
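
As a usage sketch (assuming the cache keys on the function and its arguments, as described above), submitting the same decorated function twice with the same argument should only pay the sleep cost once:

```python
# First submission: func1 runs on a gcam worker, sleeps ~5 seconds,
# and its result is cached.
first = client.submit(func1, "gcam", tag="gcam").result()

# Second submission with the same argument: the cached result is returned
# instead of re-running the function.
second = client.submit(func1, "gcam", tag="gcam").result()
assert first == second

# func2 was decorated with recompute=True, so it is expected to be
# re-computed every time even when a cached result exists.
```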

## Contact

For any contributions, questions, or requests, please feel free to [open an issue](https://github.com/JGCRI/scalable/issues) or contact us directly:\
**Shashank Lamba** [shashank.lamba@pnnl.gov](mailto:shashank.lamba@pnnl.gov)\
**Pralit Patel** [pralit.patel@pnnl.gov](mailto:pralit.patel@pnnl.gov)

## [License](https://github.com/JGCRI/scalable/blob/master/LICENSE.md)

            
