# qspool

- **Version:** 0.5.0
- **Summary:** Dependency-free script to spool jobs into SLURM scheduler without exceeding queue capacity limits.
- **Author:** Matthew Andres Moreno <m.more500@gmail.com>
- **Homepage:** https://github.com/mmore500/qspool
- **License:** MIT
- **Requires Python:** >=3.6
- **Keywords:** slurm, high-performance computing, cluster computing
- **Uploaded:** 2024-03-09
# Usage

You need to submit more slurm scripts than fit on the queue at once.
```bash
tree .
.
├── slurmscript0.slurm.sh
├── slurmscript1.slurm.sh
├── slurmscript2.slurm.sh
├── slurmscript3.slurm.sh
├── slurmscript4.slurm.sh
├── slurmscript5.slurm.sh
├── slurmscript6.slurm.sh
├── slurmscript7.slurm.sh
├── slurmscript8.slurm.sh
...
```

The `qspool` script will feed your job scripts onto the queue as space becomes available.
```bash
python3 -m qspool *.slurm.sh
```

You can also provide job script paths via stdin, which is useful for very large job batches.
```bash
find . -maxdepth 1 -name '*.slurm.sh' | python3 -m qspool
```

The `qspool` script creates a slurm job that submits your job scripts.
When queue capacity fills, this `qspool` job will schedule a follow-up job to submit any remaining job scripts.
This process continues until all job scripts have been submitted.
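
Because the spooler itself runs as an ordinary SLURM job, you can watch the whole chain with standard SLURM client tools; for example:
```bash
# count your jobs currently running or pending on the queue
squeue -u "${USER}" -h | wc -l
```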

```
usage: qspool.py [-h] [--payload-job-script-paths-infile PAYLOAD_JOB_SCRIPT_PATHS_INFILE] [--job-log-path JOB_LOG_PATH] [--job-script-cc-path JOB_SCRIPT_CC_PATH]
                 [--queue-capacity QUEUE_CAPACITY] [--qspooler-job-title QSPOOLER_JOB_TITLE]
                 [payload_job_script_paths ...]

positional arguments:
  payload_job_script_paths
                        What scripts to spool onto slurm queue? (default: None)

options:
  -h, --help            show this help message and exit
  --payload-job-script-paths-infile PAYLOAD_JOB_SCRIPT_PATHS_INFILE
                        Where to read script paths to spool onto slurm queue? (default: <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>)
  --job-log-path JOB_LOG_PATH
                        Where should logs for qspool jobs be written? (default: ~/joblog/)
  --job-script-cc-path JOB_SCRIPT_CC_PATH
                        Where should copies of submitted job scripts be kept? (default: ~/jobscript/)
  --queue-capacity QUEUE_CAPACITY
                        How many jobs can be running or waiting at once? (default: 1000)
  --qspooler-job-title QSPOOLER_JOB_TITLE
                        What title should be included in qspooler job names? (default: none)
```
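
For example, to cap the queue at 500 jobs and tag the spooler jobs with a batch title (both values here are arbitrary illustrations):
```bash
python3 -m qspool \
    --queue-capacity 500 \
    --qspooler-job-title mybatch \
    *.slurm.sh
```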

# Installation

no installation:
```bash
python3 "$(tmpfile="$(mktemp)"; curl -s https://raw.githubusercontent.com/mmore500/qspool/v0.5.0/qspool.py > "${tmpfile}"; echo "${tmpfile}")" [ARGS]
```

pip installation:
```bash
python3 -m pip install qspool
python3 -m qspool [ARGS]
```

`qspool` has zero dependencies, so no setup or maintenance is required to use it.
It is compatible all the way back to Python 3.6, so it will work even on your cluster's ancient Python install.

# How it Works

```
qspool
  * read contents of target slurm scripts
  * instantiate qspooler job script w/ target slurm scripts embedded
  * submit qspooler job script to slurm queue
```

⬇️ ⬇️ ⬇️

```
qspooler job 1
  * submit embedded target slurm scripts one by one until queue is almost full
  * instantiate qspooler job script w/ remaining target slurm scripts embedded
  * submit qspooler job script to slurm queue
```

⬇️ ⬇️ ⬇️

```
qspooler job 2
  * submit embedded target slurm scripts one by one until queue is almost full
  * instantiate qspooler job script w/ remaining target slurm scripts embedded
  * submit qspooler job script to slurm queue
```

...

```
qspooler job n
  * submit embedded target slurm scripts one by one
  * no embedded target slurm scripts remain
  * exit
```
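
In shell terms, each spooler generation behaves roughly like the sketch below. This is illustrative only, not `qspool`'s actual code: the real script embeds the job scripts' contents rather than receiving their paths as arguments, and the capacity threshold and job name shown here are assumptions.

```bash
#!/bin/bash
# Illustrative sketch of one spooler generation (not qspool's real code).
# Remaining job script paths are assumed to arrive as arguments.
QUEUE_CAPACITY=1000

remaining=( "$@" )
while [ "${#remaining[@]}" -gt 0 ]; do
    # jobs currently running or pending for this user
    in_queue="$(squeue -u "${USER}" -h | wc -l)"
    if [ "${in_queue}" -ge "${QUEUE_CAPACITY}" ]; then
        # queue is nearly full: hand leftovers to a follow-up spooler and exit
        sbatch --job-name "qspool-followup" "$0" "${remaining[@]}"
        exit 0
    fi
    sbatch "${remaining[0]}"
    remaining=( "${remaining[@]:1}" )
done
```

Handing the leftover scripts to a fresh spooler job, rather than waiting in place, means no spooler process ever has to outlive its own scheduler allocation.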

## Related Software

[`roll_q`](https://github.com/FergusonAJ/roll_q) uses a similar approach to solve this problem, but differs in implementation strategy: `roll_q` tracks submission progress via an index variable in a file associated with a job batch, whereas `qspool` embeds the jobs in the submission worker script itself.

            
