Name: batchspawner
Version: 1.3.0
Summary: Batchspawner: A spawner for Jupyterhub to spawn notebooks using batch resource managers.
Home page: http://jupyter.org
Author: Michael Milligan, Andrea Zonca, Mike Gilbert
License: BSD
Requires Python: >=3.6
Keywords: interactive, interpreter, shell, web, jupyter
Upload time: 2024-03-19 06:32:51
# batchspawner for Jupyterhub

[![Latest PyPI version](https://img.shields.io/pypi/v/batchspawner?logo=pypi)](https://pypi.python.org/pypi/batchspawner)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/batchspawner?logo=conda-forge)](https://anaconda.org/conda-forge/batchspawner)
[![GitHub Workflow Status - Test](https://img.shields.io/github/actions/workflow/status/jupyterhub/batchspawner/test.yaml?logo=github&label=tests)](https://github.com/jupyterhub/batchspawner/actions)
[![Test coverage of code](https://codecov.io/gh/jupyterhub/batchspawner/branch/main/graph/badge.svg)](https://codecov.io/gh/jupyterhub/batchspawner)
[![Issue tracking - GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/batchspawner/issues)
[![Help forum - Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub)
[![Contribute](https://img.shields.io/badge/I_want_to_contribute!-grey?logo=jupyter)](https://github.com/jupyterhub/batchspawner/blob/master/CONTRIBUTING.md)

This is a custom spawner for [Jupyterhub](https://jupyterhub.readthedocs.io/) that is designed for installations on clusters using batch scheduling software.

This began as a generalization of [mkgilbert's batchspawner](https://github.com/mkgilbert/slurmspawner), which in turn was inspired by [Andrea Zonca's blog post](http://zonca.github.io/2015/04/jupyterhub-hpc.html "Run jupyterhub on a Supercomputer") where he explains his implementation of a spawner that uses SSH and Torque. His GitHub repo is [here](http://www.github.com/zonca/remotespawner "RemoteSpawner").

This package formerly included WrapSpawner and ProfilesSpawner, which provide mechanisms for runtime configuration of spawners. These have been split out and moved to the [`wrapspawner`](https://github.com/jupyterhub/wrapspawner) package.

## Installation

1. From the root directory of this repo (where `setup.py` is), run `pip install -e .`

   If you don't need an editable install, you can simply run
   `pip install batchspawner`

2. Add lines to `jupyterhub_config.py` for the spawner you intend to use, e.g.

   ```python
   c = get_config()
   c.JupyterHub.spawner_class = 'batchspawner.TorqueSpawner'
   import batchspawner    # Even though not used, needed to register batchspawner interface
   ```

3. Depending on the spawner, additional configuration will likely be needed.

## Batch Spawners

For information on the specific spawners, see [SPAWNERS.md](SPAWNERS.md).

### Overview

This package provides an abstraction layer for batch job queueing systems (`BatchSpawnerBase`) and implements
Jupyterhub spawners for Torque, Moab, SLURM, SGE, HTCondor, LSF, and eventually others.
Common attributes of batch submission / resource manager environments will include notions of:

- queue names, resource manager addresses
- resource limits including runtime, number of processes, memory
- singleuser child process running on (usually remote) host not known until runtime
- job submission and monitoring via resource manager utilities
- remote execution via submission of templated scripts
- job names instead of PIDs

`BatchSpawnerBase` provides several general mechanisms:

- configurable traits `req_foo` that are exposed as `{foo}` in job template scripts. Templates (submit scripts in particular) may also use the full power of [jinja2](http://jinja.pocoo.org/). A template is treated as jinja2 if it contains `{{` or `{%`; otherwise `str.format()` is used.
- configurable command templates for submitting/querying/cancelling jobs
- a generic concept of job-ID and ID-based job state tracking
- overridable hooks for subclasses to plug in logic at numerous points
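The template-detection rule above can be illustrated with a small standalone sketch. This mimics the documented behavior only; it is not batchspawner's actual implementation, and `render` is a hypothetical helper:

```python
def render(template: str, **values) -> str:
    """Render a job-script template: jinja2 if jinja2 markers are
    present, plain str.format() otherwise (illustration only)."""
    if "{{" in template or "{%" in template:
        from jinja2 import Template  # lazy import; only needed for jinja2 templates
        return Template(template).render(**values)
    return template.format(**values)

# A str.format()-style template, as in the Torque example below:
print(render("#PBS -q {queue}@{host}", queue="mesabi", host="mesabi.xyz.edu"))
# -> #PBS -q mesabi@mesabi.xyz.edu

# jinja2 is auto-detected when {{ or {% appears in the template:
gpu_line = "{% if gpus %}#PBS -l gpus={{ gpus }}{% endif %}"
```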

### Example

Every effort has been made to accommodate highly diverse systems through configuration
only. This example consists of the (lightly edited) configuration used by the author
to run Jupyter notebooks on an academic supercomputer cluster.

```python
# Select the Torque backend and increase the timeout since batch jobs may take time to start
import batchspawner
c.JupyterHub.spawner_class = 'batchspawner.TorqueSpawner'
c.Spawner.http_timeout = 120

#------------------------------------------------------------------------------
# BatchSpawnerBase configuration
#    These are simply setting parameters used in the job script template below
#------------------------------------------------------------------------------
c.BatchSpawnerBase.req_nprocs = '2'
c.BatchSpawnerBase.req_queue = 'mesabi'
c.BatchSpawnerBase.req_host = 'mesabi.xyz.edu'
c.BatchSpawnerBase.req_runtime = '12:00:00'
c.BatchSpawnerBase.req_memory = '4gb'
#------------------------------------------------------------------------------
# TorqueSpawner configuration
#    The script below is nearly identical to the default template, but we needed
#    to add a line for our local environment. For most sites the default templates
#    should be a good starting point.
#------------------------------------------------------------------------------
c.TorqueSpawner.batch_script = '''#!/bin/sh
#PBS -q {queue}@{host}
#PBS -l walltime={runtime}
#PBS -l nodes=1:ppn={nprocs}
#PBS -l mem={memory}
#PBS -N jupyterhub-singleuser
#PBS -v {keepvars}
module load python3
{cmd}
'''
# For our site we need to munge the execution hostname returned by qstat
c.TorqueSpawner.state_exechost_exp = r'int-\1.mesabi.xyz.edu'
```
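`state_exechost_exp` is the replacement side of a regex substitution applied to the execution host the scheduler reports. A rough standalone sketch of this kind of rewrite, where the search pattern and hostnames are assumptions for illustration, not batchspawner's internal defaults:

```python
import re

# Suppose qstat reports the node by a cluster-internal name, but the hub
# must reach it via an "int-" interface (hypothetical names):
exechost = "cn123.localdomain"
rewritten = re.sub(r"^([^.]+)\..*$", r"int-\1.mesabi.xyz.edu", exechost)
print(rewritten)  # -> int-cn123.mesabi.xyz.edu
```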

### Security

Unless otherwise stated for a specific spawner, assume that spawners
_do_ evaluate the user's shell environment and thus do not fulfill the
[security requirements of JupyterHub for untrusted
users](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html):
some (most?) spawners start a user shell, which executes arbitrary
user environment configuration (`.profile`, `.bashrc`, and the like)
unless users have no access to their own cluster account. This is
something we are working on.

## Provide different configurations of BatchSpawner

### Overview

`ProfilesSpawner`, available as part of the [`wrapspawner`](https://github.com/jupyterhub/wrapspawner)
package, allows the Jupyterhub administrator to define a set of different spawning configurations,
including both different spawners and different configurations of the same spawner.
The user is then presented with a dropdown menu for choosing the most suitable configuration for their needs.

This method provides an easy and safe way to offer different configurations of `BatchSpawner` to
users; see the example below.

### Example

The following is based on the author's configuration (at the same site as the example above)
showing how to give users access to multiple job configurations on the batch scheduled
clusters, as well as an option to run a local notebook directly on the jupyterhub server.

```python
# Same initial setup as the previous example
import batchspawner
c.JupyterHub.spawner_class = 'wrapspawner.ProfilesSpawner'
c.Spawner.http_timeout = 120
#------------------------------------------------------------------------------
# BatchSpawnerBase configuration
#   Providing default values that we may omit in the profiles
#------------------------------------------------------------------------------
c.BatchSpawnerBase.req_host = 'mesabi.xyz.edu'
c.BatchSpawnerBase.req_runtime = '12:00:00'
c.TorqueSpawner.state_exechost_exp = r'in-\1.mesabi.xyz.edu'
#------------------------------------------------------------------------------
# ProfilesSpawner configuration
#------------------------------------------------------------------------------
# List of profiles to offer for selection. Signature is:
#   List(Tuple( Unicode, Unicode, Type(Spawner), Dict ))
# corresponding to profile display name, unique key, Spawner class,
# dictionary of spawner config options.
#
# The first three values will be exposed in the input_template as {display},
# {key}, and {type}
#
c.ProfilesSpawner.profiles = [
   ( "Local server", 'local', 'jupyterhub.spawner.LocalProcessSpawner', {'ip':'0.0.0.0'} ),
   ('Mesabi - 2 cores, 4 GB, 8 hours', 'mesabi2c4g12h', 'batchspawner.TorqueSpawner',
      dict(req_nprocs='2', req_queue='mesabi', req_runtime='8:00:00', req_memory='4gb')),
   ('Mesabi - 12 cores, 128 GB, 4 hours', 'mesabi128gb', 'batchspawner.TorqueSpawner',
      dict(req_nprocs='12', req_queue='ram256g', req_runtime='4:00:00', req_memory='125gb')),
   ('Mesabi - 2 cores, 4 GB, 24 hours', 'mesabi2c4gb24h', 'batchspawner.TorqueSpawner',
      dict(req_nprocs='2', req_queue='mesabi', req_runtime='24:00:00', req_memory='4gb')),
   ('Interactive Cluster - 2 cores, 4 GB, 8 hours', 'lab', 'batchspawner.TorqueSpawner',
      dict(req_nprocs='2', req_host='labhost.xyz.edu', req_queue='lab',
          req_runtime='8:00:00', req_memory='4gb', state_exechost_exp='')),
   ]
c.ProfilesSpawner.ip = '0.0.0.0'
```
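Because each profile is just a `(display, key, spawner_class, options)` tuple, repetitive entries can be generated programmatically. A hypothetical sketch using the documented signature (the labels, queue, and resource values are placeholders, not site defaults):

```python
# (label, key suffix, cores, memory, runtime) -- placeholder values
sizes = [
    ("2 cores, 4 GB, 8 hours", "small", "2", "4gb", "8:00:00"),
    ("12 cores, 125 GB, 4 hours", "large", "12", "125gb", "4:00:00"),
]
profiles = [
    (f"Mesabi - {label}", f"mesabi-{key}", "batchspawner.TorqueSpawner",
     dict(req_nprocs=nprocs, req_queue="mesabi",
          req_runtime=runtime, req_memory=memory))
    for label, key, nprocs, memory, runtime in sizes
]
# In jupyterhub_config.py you would then set:
# c.ProfilesSpawner.profiles = profiles
```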

## Debugging batchspawner

Debugging batchspawner can seem hard at first, but it isn't once you
know how the pieces interact. Check the following places for
error messages:

- Check the JupyterHub logs for errors.

- Check the JupyterHub logs for the batch script that got submitted
  and the command used to submit it. Are these correct? (Note that
  there are submission environment variables too, which aren't
  displayed.)

- At this point, it's a matter of checking the batch system. Is the
  job ever scheduled? Does it run? Does it succeed? Check the batch
  system status and output of the job. The most common failure
  patterns are (a) the job never starting due to bad scheduler options,
  and (b) the job waiting in the queue beyond `start_timeout`, causing
  JupyterHub to kill it.

- At this point the job starts. Does it fail immediately, or before
  Jupyter starts? Check the scheduler output files (stdout/stderr of
  the job), wherever they are stored. To debug the job script, you can
  add debugging to the batch script, such as an `env` or `set -x`.

- At this point Jupyter itself starts - check its error messages. Is
  it starting with the right options? Can it communicate with the
  hub? At this point there usually isn't anything
  batchspawner-specific, with the one exception below. The error log
  would be in the batch script output (same file as above). There may
  also be clues in the JupyterHub logfile.
- Are you running on an NFS filesystem? It's possible for Jupyter to
  experience issues due to varying implementations of the fcntl() system
  call. (See also [Jupyterhub-Notes and Tips: SQLite](https://jupyterhub.readthedocs.io/en/latest/reference/database.html?highlight=NFS#sqlite))

Common problems:

- Did you `import batchspawner` in the `jupyterhub_config.py` file?
  This is needed in order to activate the batchspawner API in
  JupyterHub.

## Changelog

See [CHANGELOG.md](CHANGELOG.md).

            
