# jupyterhub_moss: JupyterHub MOdular Slurm Spawner
**jupyterhub_moss** is a Python package that provides:
- A [JupyterHub](https://jupyterhub.readthedocs.io/)
[Slurm](https://slurm.schedmd.com/) Spawner that can be configured by
[setting the available partitions](#partition-settings). It is an extension of
[`batchspawner.SlurmSpawner`](https://github.com/jupyterhub/batchspawner).
- An associated [spawn page](#spawn-page) that changes according to the
partitions set in the Spawner and allows the user to select Slurm resources to
use.
<img style="margin:auto" src=https://user-images.githubusercontent.com/9449698/215526389-2ef5ac32-5d50-49de-aa5f-46972feaccf1.png width="50%">
## Install
`pip install jupyterhub_moss`
## Usage
### Partition settings
To use **jupyterhub_moss**, you first need a working
[JupyterHub](https://jupyterhub.readthedocs.io/) instance. **jupyterhub_moss**
must then be imported in
[your JupyterHub configuration file](https://jupyterhub.readthedocs.io/en/stable/getting-started/config-basics.html)
(usually named `jupyterhub_conf.py`):
```python
import batchspawner
import jupyterhub_moss
c = get_config()
# ...your config
# Init JupyterHub configuration to use this spawner
jupyterhub_moss.set_config(c)
```
Once **jupyterhub_moss** is set up, you can define the partitions available on
Slurm by setting `c.MOSlurmSpawner.partitions` in the same file:
```python
# ...

# Partition descriptions
c.MOSlurmSpawner.partitions = {
    "partition_1": {  # Partition name (see field descriptions below for more info)
        "architecture": "x86_64",      # Nodes architecture
        "description": "Partition 1",  # Displayed description
        "gpu": None,                   # --gres= template to use for requesting GPUs
        "max_ngpus": 0,                # Maximum number of GPUs per node
        "max_nprocs": 28,              # Maximum number of CPUs per node
        "max_runtime": 12 * 3600,      # Maximum time limit in seconds (must be at least 1 hour)
        "simple": True,                # True to show in Simple tab
        "jupyter_environments": {
            "default": {  # Jupyter environment identifier; at least "path" or "modules" is mandatory
                "description": "Default",  # Text displayed for this environment select option
                "path": "/env/path/bin/",  # Path to the Python environment bin/ used to start the Jupyter server on the Slurm nodes
                "modules": "",             # Space-separated list of environment modules to load before starting the Jupyter server
                "add_to_path": True,       # Toggle adding the environment to the shell PATH (optional, default: True)
                "prologue": "",            # Shell commands to execute before starting the Jupyter server (optional, default: "")
            },
        },
    },
    "partition_2": {
        "architecture": "ppc64le",
        "description": "Partition 2",
        "gpu": "gpu:V100-SXM2-32GB:{}",
        "max_ngpus": 2,
        "max_nprocs": 128,
        "max_runtime": 1 * 3600,
        "simple": True,
        "jupyter_environments": {
            "default": {
                "description": "Default",
                "path": "",
                "modules": "JupyterLab/3.6.0",
                "add_to_path": True,
                "prologue": "echo 'Starting default environment'",
            },
        },
    },
    "partition_3": {
        "architecture": "x86_64",
        "description": "Partition 3",
        "gpu": None,
        "max_ngpus": 0,
        "max_nprocs": 28,
        "max_runtime": 12 * 3600,
        "simple": False,
        "jupyter_environments": {
            "default": {
                "description": "Partition 3 default",
                "path": "/path/to/jupyter/env/for/partition_3/bin/",
                "modules": "JupyterLab/3.6.0",
                "add_to_path": True,
                "prologue": "echo 'Starting default environment'",
            },
        },
    },
}
```
For a minimalistic working demo, check the
[`demo/jupyterhub_conf.py`](demo/jupyterhub_conf.py) config file.
### Field descriptions
- `architecture`: The architecture of the partition. This is only cosmetic and
will be used to generate subtitles in the spawn page.
- `description`: The description of the partition. This is only cosmetic and
will be used to generate subtitles in the spawn page.
- `gpu`: [Optional] A template string used to request GPU resources through
`--gres`. The template should therefore include a `{}` that will be replaced by
the number of requested GPUs **and** follow the format expected by `--gres`. If
no GPU is available for this partition, set to `""`. It is retrieved from SLURM
if not provided.
- `max_ngpus`: [Optional] The maximum number of GPUs that can be requested for
this partition. The spawn page will use this to generate appropriate bounds
for the user inputs. If no GPU is available for this partition, set to `0`. It
is retrieved from SLURM if not provided.
- `max_nprocs`: [Optional] The maximum number of processors that can be
requested for this partition. The spawn page will use this to generate
appropriate bounds for the user inputs. It is retrieved from SLURM if not
provided.
- `max_runtime`: [Optional] The maximum job runtime for this partition in
seconds. It must be at least 1 hour, as the _Simple_ tab only displays
buttons for runtimes of 1 hour or more. It is retrieved from SLURM if not
provided.
- `simple`: Whether the partition should be available in the _Simple_ tab. The
generated spawn page is organized in two tabs: a _Simple_ tab with minimal
settings that will be enough for most users, and an _Advanced_ tab where almost
all Slurm job settings can be set. Partitions can be hidden from the _Simple_
tab by setting `simple` to `False`.
- `jupyter_environments`: Mapping of identifier names to information about the
Python environments used to run Jupyter on the Slurm nodes. Either `path` or
`modules` (or both) should be defined. Each entry is a mapping containing:
  - `description`: Text displayed in the selection options.
  - `path`: The path to a Python environment bin/ used to start Jupyter on the
    Slurm nodes. **jupyterhub_moss** requires a virtual (or conda) environment
    to start Jupyter. This path can differ between partitions.
  - `modules`: Space-separated list of environment modules to load before
    starting the Jupyter server. Environment modules will be loaded with the
    `module` command.
- `add_to_path`: Whether or not to prepend the environment `path` to shell
`PATH`.
- `prologue`: Shell commands to execute on the Slurm node before starting the
Jupyter single-user server. By default no command is run.
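Since every field marked `[Optional]` above falls back to a value retrieved from Slurm, a partition entry can be much shorter than the full example. The sketch below is illustrative only: the partition name and environment path are placeholders, and in `jupyterhub_conf.py` the dict would be assigned to `c.MOSlurmSpawner.partitions`:

```python
# Minimal partition entry: gpu, max_ngpus, max_nprocs and max_runtime are
# omitted, so they are retrieved from Slurm. The partition name and the
# environment path are placeholders.
minimal_partitions = {
    "my_partition": {
        "architecture": "x86_64",       # Cosmetic subtitle on the spawn page
        "description": "My partition",  # Cosmetic subtitle on the spawn page
        "simple": True,                 # Show in the Simple tab
        "jupyter_environments": {
            "default": {
                "description": "Default",
                # At least "path" or "modules" is mandatory
                "path": "/env/path/bin/",
            },
        },
    },
}
```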
### Spawn page
The spawn page (available at `/hub/spawn`) will be generated according to the
partition settings. For example, this is the spawn page generated for the
partition settings above:
<img style="margin:1rem auto" src=https://user-images.githubusercontent.com/9449698/215526389-2ef5ac32-5d50-49de-aa5f-46972feaccf1.png width="50%">
This spawn page is separated in two tabs: a _Simple_ and an _Advanced_ tab. On
the _Simple_ tab, the user can choose between the partitions set through
`simple: True` (`partition_1` and `partition_2` in this case), choose to take a
minimum, a half or a maximum number of cores, and choose the job duration. The
available resources are checked using `sinfo` and displayed in the table below.
Clicking the **Start** button will submit the job.
The spawn page adapts to the chosen partition. This is the page when selecting
the `partition_2`:
<img style="margin:1rem auto" src=https://user-images.githubusercontent.com/9449698/215526553-4ba57510-efac-4a28-a576-ef81ff9ec2f5.png width="50%">
As the maximum number of cores is different, the CPUs row changes accordingly.
Also, as `gpu` was set for `partition_2`, a new button row appears to enable GPU
requests.
The _Advanced_ tab allows finer control over the requested resources.
<img style="margin:1rem auto" src=https://user-images.githubusercontent.com/9449698/262627623-91bd63de-6374-47d4-9064-d1a6e3d56411.png width="50%">
The user can select any partition (`partition_3` is added in this case) and the
table of available resources reflects this. The user can also choose the number
of CPUs (max: `max_nprocs`) and of GPUs (max: `max_ngpus`), and has finer
control over the job duration (max: `max_runtime`).
### Spawn through URL
It is also possible to pass the spawning options as query arguments to the spawn
URL: `https://<server:port>/hub/spawn`. For example,
`https://<server:port>/hub/spawn?partition=partition_1&nprocs=4` will directly
spawn a Jupyter server on `partition_1` with 4 cores allocated.
The following query argument is required:
- `partition`: The name of the SLURM partition to use.
The following optional query arguments are available:
- SLURM configuration:
- `memory`: Total amount of memory per node
([`--mem`](https://slurm.schedmd.com/sbatch.html#OPT_mem))
- `ngpus`: Number of GPUs
([`--gres`](https://slurm.schedmd.com/sbatch.html#OPT_gres), using the partition's `gpu` template)
- `nprocs`: Number of CPUs per task
([`--cpus-per-task`](https://slurm.schedmd.com/sbatch.html#OPT_cpus-per-task))
- `options`: Extra SLURM options
- `output`: Set to `true` to save logs to `slurm-*.out` files.
- `reservation`: SLURM reservation name
([`--reservation`](https://slurm.schedmd.com/sbatch.html#OPT_reservation))
- `runtime`: Job duration as hh:mm:ss
([`--time`](https://slurm.schedmd.com/sbatch.html#OPT_time))
- Jupyter(Lab) configuration:
- `default_url`: The URL to open the Jupyter environment with: use `/lab` to
start [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) or use
[JupyterLab URLs](https://jupyterlab.readthedocs.io/en/stable/user/urls.html)
- `environment_id`: Name of the Python environment defined in the
configuration used to start Jupyter
- `environment_path`: Path to the Python environment bin/ used to start
Jupyter
- `environment_modules`: Space-separated list of
[environment module](https://modules.sourceforge.net/) names to load before
starting Jupyter
- `root_dir`: The path of the "root" folder browsable from Jupyter(Lab)
(user's home directory if not provided)
To use a Jupyter environment defined in the configuration, only provide its
`environment_id`, for example:
`https://<server:port>/hub/spawn?partition=partition_1&environment_id=default`.
To use a custom Jupyter environment, instead provide the corresponding
`environment_path` and/or `environment_modules`, for example:
- `https://<server:port>/hub/spawn?partition=partition_1&environment_path=/path/to/jupyter/bin`,
or
- `https://<server:port>/hub/spawn?partition=partition_1&environment_modules=myjupytermodule`.
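Such URLs can also be assembled programmatically. The sketch below (the hub address and parameter values are placeholders, not part of the package) uses Python's standard `urllib.parse.urlencode` to build a spawn URL from the query arguments listed above:

```python
from urllib.parse import urlencode

# Placeholder base URL for the JupyterHub instance
HUB_URL = "https://myhub.example.org"

# Example query arguments from the list above
params = {
    "partition": "partition_1",
    "nprocs": 4,
    "runtime": "02:00:00",
    "environment_id": "default",
}

# urlencode percent-encodes the values (e.g. ":" becomes "%3A")
spawn_url = f"{HUB_URL}/hub/spawn?{urlencode(params)}"
print(spawn_url)
```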
## Development
See [CONTRIBUTING.md](CONTRIBUTING.md).
## Credits
We would like to acknowledge the following resources that served as a basis for
this project and thank their authors:
- This [gist](https://gist.github.com/zonca/aaed55502c4b16535fe947791d02ac32)
for the initial spawner implementation.
- The
[DESY JupyterHub Slurm service](https://confluence.desy.de/display/IS/JupyterHub+on+Maxwell)
for the table of available resources.
- The
[TUDresden JupyterHub Slurm service](https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/JupyterHub)
for the spawn page design.