slurmpilot

Name: slurmpilot
Version: 0.1.4.2
Homepage: https://github.com/geoalgo/slurmpilot
Summary: A tool for launching and tracking Slurm jobs across many clusters in Python.
Upload time: 2024-10-30 13:57:51
Author: David Salinas
Requires Python: >=3.10.0
License: MIT
Keywords: ML ops, slurm, experiment management
# Slurmpilot

Slurmpilot is a Python library for launching experiments on Slurm on any cluster from the comfort of your local machine.
The library takes care of things such as sending your code to the remote machine, calling Slurm,
choosing sensible locations for logs, and retrieving the status of your jobs.

The key features are:
* simplifies job creation, improves reproducibility, and lets you launch Slurm jobs from your own machine
* makes it easy to list experiments, show logs and statuses, and stop jobs
* switches between clusters by simply providing different config files

Essentially, we want to make it much easier and faster for users to run experiments on Slurm, approaching the quality of the cloud experience.

Important note: the library is very much a work in progress. It is usable (I am using it for all my experiments), but the documentation is still being written and the API has not been frozen yet.

**What about other tools?**

If you are familiar with such tools, you may know the great [Skypilot](https://github.com/skypilot-org/skypilot), which lets you run experiments seamlessly across different cloud providers.
The goal of this project is ultimately to provide a similarly high-quality user experience for academics who run on Slurm rather than on cloud machines.
Extending Skypilot to support Slurm seems hard given how different Slurm and the cloud are (for instance, not all Slurm clusters can run Docker), hence this library was built rather than contributing to Skypilot directly.

This library is also influenced by the [Sagemaker python API](https://sagemaker.readthedocs.io/en/stable/), and you may find some similarities.

## Installing

To install, run the following:
```bash
pip install "slurmpilot[extra] @ git+https://github.com/geoalgo/slurmpilot.git"
```

## Adding a cluster
Before you can schedule a job, you will need to provide information about a cluster by specifying a configuration.

You can run the following command:
```bash 
slurmpilot --add-cluster
```
which will ask you for the name of the cluster, the hostname, your username, etc. After you provide this information,
an SSH connection is attempted with it to check that the cluster is reachable.

Alternatively, you can specify or edit the configuration directly in `~/slurmpilot/config/clusters/YOUR_CLUSTER.yaml`;
for instance, a configuration could look like this:
```yaml
# connecting to this host via ssh should work, as Slurmpilot relies on ssh
host: your-gpu-cluster.com
# optional, path where slurmpilot writes files on the remote machine, defaults to ~/slurmpilot
remote_path: "/home/username2/foo/slurmpilot/"
# optional, only specify if your user on the cluster differs from your local one
user: username2
# optional, a slurm account if needed
account: "AN_ACCOUNT"
# optional, avoids having to specify the partition for every job
default_partition: "NAME_OF_PARTITION_TO_BE_USED_BY_DEFAULT"
# optional (defaults to false), whether to prompt for an ssh login password
prompt_for_login_password: true
# optional (defaults to false), whether to prompt for an ssh key passphrase
prompt_for_login_passphrase: false
```
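Assuming the flat `key: value` layout above, the configuration can be sanity-checked before use. The sketch below is hypothetical (slurmpilot does not ship a `load_cluster_config` helper) and uses only the standard library rather than a full YAML parser:

```python
def load_cluster_config(text: str) -> dict:
    """Parse a flat `key: value` cluster config and validate its keys.

    Values stay strings (booleans are not converted) -- this is only a
    sanity check, not a real YAML parser.
    """
    required = {"host"}
    known = required | {
        "remote_path", "user", "account", "default_partition",
        "prompt_for_login_password", "prompt_for_login_passphrase",
    }
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip().strip('"')
    if missing := required - config.keys():
        raise ValueError(f"missing required keys: {missing}")
    if unknown := config.keys() - known:
        raise ValueError(f"unknown keys: {unknown}")
    return config

sample = 'host: your-gpu-cluster.com\nuser: username2\naccount: "AN_ACCOUNT"\n'
config = load_cluster_config(sample)
print(config["host"])  # -> your-gpu-cluster.com
```

Catching a typo in a key name here is cheaper than a failed remote scheduling attempt later.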

In addition, you can configure `~/slurmpilot/config/general.yaml` with the following:

```yaml
# default path where slurmpilot job files are generated
local_path: "~/slurmpilot"

# default path where slurmpilot job files are generated on the remote machine, Note: "~" cannot be used
remote_path: "slurmpilot/"

# optional, cluster that is being used by default
default_cluster: "YOUR_CLUSTER"
```

## Scheduling a job
You are now ready to schedule jobs! Let us have a look at `launch_hellocluster.py`; in particular, you can call the following to schedule a job:

```python
config = load_config()
cluster, partition = default_cluster_and_partition()
jobname = unify("examples/hello-cluster", method="coolname")  # make the jobname unique by appending a coolname
slurm = SlurmWrapper(config=config, clusters=[cluster])
max_runtime_minutes = 60
jobinfo = JobCreationInfo(
    cluster=cluster,
    partition=partition,
    jobname=jobname,
    entrypoint="hellocluster_script.sh",
    src_dir="./",
    n_cpus=1,
    max_runtime_minutes=max_runtime_minutes,
    # Shows how to pass an environment variable to the running script
    env={"API_TOKEN": "DUMMY"},
)
jobid = slurm.schedule_job(jobinfo)
```

Here we created a job in the default cluster and partition. A couple of points:
* `cluster`: you can use any cluster `YOURCLUSTER` as long as the file `config/clusters/YOURCLUSTER.yaml` exists, the hostname is reachable through ssh, and Slurm is installed on the host.
* `jobname` must be unique; we use `unify`, which appends a unique suffix to ensure uniqueness even if the script is launched multiple times. Nested folders can be used, in which case files are written under "~/slurmpilot/jobs/examples/hello-cluster..."
* `entrypoint` is the script we want to launch; it should be present at `{src_dir}/{entrypoint}`
* `n_cpus` is the number of CPUs; other Slurm arguments such as the number of GPUs or nodes can be controlled as well
* `env` passes environment variables to the script being executed remotely
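The actual `unify` relies on the `coolname` package for its readable random names; a rough stdlib-only stand-in (the implementation below is illustrative, not slurmpilot's) behaves like this:

```python
import random
import string
import time


def unify(jobname: str, method: str = "date") -> str:
    """Append a unique suffix so launching the same script twice never collides.

    "date" appends a timestamp; any other method falls back to a random tag
    (slurmpilot's "coolname" method draws readable word combinations instead).
    """
    if method == "date":
        suffix = time.strftime("%Y-%m-%d-%H-%M-%S")
    else:
        suffix = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{jobname}-{suffix}"


print(unify("examples/hello-cluster", method="random"))
```

Since the suffix lands after the nested folder path, each launch gets its own folder under `~/slurmpilot/jobs/`.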

### Workflow
When scheduling a job, the files required to run it are first copied to `~/slurmpilot/jobs/YOUR_JOB_NAME` and then
sent to the remote host under the same path (these default paths are configurable).

In particular, the following files are generated locally under `~/slurmpilot/jobs/YOUR_JOB_NAME`:
* `slurm_script.sh`: a slurm script automatically generated from your options that is executed on the remote node with sbatch
* `metadata.json`: contains metadata such as time and the configuration of the job that was scheduled
* `jobid.json`: contains the slurm jobid obtained when scheduling the job, if this step was successful
* `src_dir`: the folder containing the entrypoint
* `{src_dir}/entrypoint`: the entrypoint to be executed
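These local files can be inspected programmatically. The helper below is hypothetical (the `jobid` field name inside `jobid.json` is assumed), demonstrated against a throwaway folder that mimics the layout above:

```python
import json
import tempfile
from pathlib import Path


def read_jobid(job_dir: Path):
    """Return the Slurm job id stored in jobid.json, or None if the file
    is absent (e.g. scheduling failed before an id was obtained)."""
    path = job_dir / "jobid.json"
    if not path.exists():
        return None
    return json.loads(path.read_text()).get("jobid")


# Build a toy job folder mimicking ~/slurmpilot/jobs/<jobname>/
with tempfile.TemporaryDirectory() as tmp:
    job_dir = Path(tmp) / "examples" / "hello-cluster-demo"
    job_dir.mkdir(parents=True)
    (job_dir / "jobid.json").write_text(json.dumps({"jobid": 123456}))
    print(read_jobid(job_dir))  # -> 123456
```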

On the remote host, the logs are written under `logs/stderr` and `logs/stdout`, and the current working directory is also
`~/slurmpilot/jobs/YOUR_JOB_NAME` unless overridden in the `general.yaml` config (see the `Other ways to specify configurations` section).


### Scheduling python jobs

If you want to schedule a Python job directly, you can also do:

```python
jobinfo = JobCreationInfo(
    cluster=cluster,
    partition=partition,
    jobname=jobname,
    entrypoint="main_hello_cluster.py",
    python_args="--argument1 dummy",
    python_binary="~/miniconda3/bin/python",
    n_cpus=1,
    max_runtime_minutes=60,
    # Shows how to pass an environment variable to the running script
    env={"API_TOKEN": "DUMMY"},
)
jobid = slurm.schedule_job(jobinfo)
```

This will create an sbatch script as in the previous example, but this time it will call your Python script directly
with the binary and the arguments provided; see the full example in
[launch_hellocluster_python.py](examples%2Fhellocluster-python%2Flaunch_hellocluster_python.py).
Note that you can also set `bash_setup_command`, which runs a command before
calling your Python script (for instance to set up the environment, activate conda, start a server, ...).
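Putting these options together, the tail of the generated sbatch script plausibly reduces to an optional setup line followed by the Python invocation. The `build_run_command` function below is a hypothetical sketch of that assembly, not slurmpilot's actual template:

```python
def build_run_command(
    entrypoint: str,
    python_binary: str = "python",
    python_args: str = "",
    bash_setup_command: str = "",
) -> str:
    """Assemble the final lines of an sbatch script: the optional setup
    command first, then the python call (illustrative only)."""
    lines = []
    if bash_setup_command:
        lines.append(bash_setup_command)
    lines.append(f"{python_binary} {entrypoint} {python_args}".strip())
    return "\n".join(lines)


script = build_run_command(
    entrypoint="main_hello_cluster.py",
    python_binary="~/miniconda3/bin/python",
    python_args="--argument1 dummy",
    bash_setup_command="source ~/miniconda3/bin/activate myenv",
)
print(script)
```

Keeping the setup command separate from the invocation is what makes swapping conda environments or servers a one-argument change.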

### CLI

Slurmpilot provides a CLI that lets you:
* display the logs of a job
* list information about recent jobs in a table
* stop a job
* download the artifacts of a job locally
* show the status of a particular job
* add a cluster
* test the ssh connection of the configured clusters

After installing slurmpilot, you can run the following to get help on these commands.

```bash
sp --help
```
For instance, running `sp --list-jobs 5` displays information on the past 5 jobs as follows:
```
                                         job           date    cluster                 status                                       full jobname
    v2-loop-judge-option-2024-09-24-16-47-36 24/09/24-16:47   clusterX    Pending ⏳           judge-tuning-v0/v2-loop-judge-option-2024-09-24...
    v2-loop-judge-option-2024-09-24-16-47-30 24/09/24-16:47   clusterX    Pending ⏳           judge-tuning-v0/v2-loop-judge-option-2024-09-24...
job-arboreal-foxhound-of-splendid-domination 24/09/24-12:54   clusterY    Completed ✅         examples/hello-cluster-python/job-arboreal-foxh...
    v2-loop-judge-option-2024-09-23-18-01-36 23/09/24-18:01   clusterX    CANCELLED by 975941  judge-tuning-v0/v2-loop-judge-option-2024-09-23...
    v2-loop-judge-option-2024-09-23-18-00-49 23/09/24-18:00   clusterZ    Slurm job failed ❌  judge-tuning-v0/v2-loop-judge-option-2024-09-23...
```

Note that listing jobs requires a working ssh connection to every cluster, since Slurm is queried for the current
status; if a cluster is unavailable, for instance because the ssh credentials expired, a placeholder status
is shown instead.
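The placeholder behavior can be sketched as follows; the column layout and the `Unknown (ssh failed)` label are illustrative, not slurmpilot's exact output:

```python
def format_jobs(jobs, reachable_clusters):
    """Render a small jobs table; jobs on clusters we could not query
    over ssh get a placeholder instead of a real Slurm status."""
    rows = []
    for job in jobs:
        if job["cluster"] in reachable_clusters:
            status = job["status"]
        else:
            status = "Unknown (ssh failed)"
        rows.append(f'{job["job"]:<12} {job["cluster"]:<10} {status}')
    return "\n".join(rows)


table = format_jobs(
    jobs=[
        {"job": "job-1", "cluster": "clusterX", "status": "Completed"},
        {"job": "job-2", "cluster": "clusterY", "status": "Pending"},
    ],
    reachable_clusters={"clusterX"},
)
print(table)
```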


## FAQ/misc

**Developer setup.**
If you want to develop features, run the following:
```bash
git clone https://github.com/geoalgo/slurmpilot.git
cd slurmpilot
pip install -e ".[dev]"
pre-commit install 
pre-commit autoupdate 
```

**Global configuration.**
You can specify global properties by writing `~/slurmpilot/config/general.yaml`
and editing the following:
```yaml
# where files are written locally on your machine for job status, logs and artifacts
local_path: "~/slurmpilot"  

# default path where slurmpilot job files are generated on the remote machine, Note: "~" cannot be used
remote_path: "slurmpilot/"
```

**Why do you rely on SSH?**
A typical workflow for Slurm users is to send their code to a remote machine and call sbatch there. We instead
work over ssh from a local (typically the developer's) machine because we want to be able to switch between several
clusters without hassle.

**Why don't you rely on docker?** 
Docker is a great option and is used by similar tools built for the cloud, such as Skypilot and SageMaker.
However, running Docker under Slurm is often not an option because of the difficulty of running it without root privileges.

**TODOs**
* high: explain examples in readme
* high: better support to launch series of experiments
* medium: discuss getting out of your way philosophy of the tool
* medium: report runtime in sp --list_jobs
* medium: make script execution independent of cwd and dump variable to enforce reproducibility
* medium: support local execution, see `notes/running_locally.md`
* medium: allow to copy only python files (or as skypilot keep only files .gitignore)
* medium: generates animation of demo in readme.md
* medium: allow to stop all jobs in CLI
* medium: allow to submit list of jobs until all executed
* medium: rename SlurmWrapper to SlurmPilot
* medium: rerun/restart job (useful in case of transient error)
* medium: download in batch
* low: support numerating suffix "-01", "-2" instead of random names
* low: doc for handling python dependencies
* low: allow to share common folders to avoid sending code lots of times, probably do a doc example
* TBD: chain of jobs

**DONE**
* high: support password and passphrase for ssh
* low: remove logging info ssh
* medium: suppress connection print output of fabrik (happens at connection, not when running commands)
* high: add description of CLI in readme.md
* high: add unit test actions
* high: support python wrapper
* medium/high: list jobs
* high: support subfolders for experiment files
* medium: add support to add cluster from CLI
* medium/high: script to install cluster (ask username, hostname etc)
* high: support defining cluster as env variable, would allow to run example and make it easier to explain examples in README.md
* medium: dont make ssh connection to every cluster in cli, requires small refactor to avoid needing SlurmWrapper to get last jobname
* high: handle python code dependencies
* high: add example in main repo
* medium: add option to stop in the CLI 
* high: push in github 
* high: allow to fetch info from local folders in list_jobs
* when creating job, show command that can be copy-pasted to display log, status, sync artifact
* support setting local configs path via local files
* test "jobs/" new folder structure
* tool to display logs/status from terminal
* make sp installed in pip (add it to setup)
* local configurations to allow clean repo, could be located in ~/slurmpilot/config
* set environment variables
* able to create jobs
* able to see logs of latest job
* run ssh command to remote host
* test generation of slurm script/package to be sent
* run sfree on remote cluster
* able to see logs of one job
* local path rather that current dir
* test path logic
* able to see status of jobs
* list all jobs and see their status
* enable multiple configurations
* "integration" tests that runs sinfo and lightweight operations 
* option to wait until complete or failed
* stop job
