yucca

Name: yucca
Version: 2.2.2
Summary: The modern framework for sandbox experimentation and out-of-the-box machine learning on medical data.
Upload time: 2024-11-14 13:25:05
Requires Python: >=3.10
Keywords: deep learning, image segmentation, medical image analysis, medical image segmentation
            <div align="center">

<img src="https://github.com/Sllambias/yucca/assets/9844416/dc37d3c0-5181-4bb2-9630-dee9bc67165e" width="368" height="402" />

</div>

# Yucca

Yucca is a modular machine learning framework for end-to-end medical imaging applications, built on PyTorch and PyTorch Lightning, presented in our paper [here](https://arxiv.org/abs/2407.19888), and inspired by Fabian Isensee's [nnUNet](https://github.com/MIC-DKFZ/nnUNet). It covers preprocessing volumetric data, training segmentation and self-supervised models, running inference and evaluation, and managing folder structure and naming conventions.

Yucca supports (1) external projects importing individual Yucca components, (2) standalone Yucca-based projects, e.g. using the preprocessing, training, and inference template scripts, and (3) projects employing the CLI-based end-to-end Yucca implementation, illustrated in the [diagram](#yucca). To cater to these different users, Yucca features a three-tiered architecture: Functional, Modules, and Pipeline.

The Functional tier is inspired by torch.nn.functional and consists solely of stateless functions. This tier forms the foundational building blocks of the framework, providing essential operations without maintaining any internal state. These functions are designed to be simple and reusable, allowing users to build custom implementations from scratch. Because they are pure functions, the components are modular and easy to test and debug.
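To give a flavor of this tier, here is a minimal sketch of a stateless preprocessing helper in the same spirit (the function name and signature are illustrative, not Yucca's actual API):

```python
import numpy as np


def clip_and_rescale(volume: np.ndarray, low: float, high: float) -> np.ndarray:
    """Pure function: no internal state, no side effects.
    Clips intensities to [low, high] and rescales the result to [0, 1]."""
    clipped = np.clip(volume, low, high)
    return (clipped - low) / (high - low)
```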

The Modules tier is responsible for composing the functions established in the Functional tier with logic and conventions. Modules introduce a layer of structure, handling the organization and processing of inputs and outputs. They encapsulate specific functionalities and are designed to be more user-friendly, reducing the complexity involved in building custom models. While modules rely on more assumptions about the data, they still offer significant flexibility for customization and extension.
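Continuing the sketch above, a module-style wrapper adds configuration and conventions on top of a pure function (again hypothetical, not one of Yucca's actual classes):

```python
import numpy as np


class IntensityNormalizer:
    """Illustrative module: stores configuration once, then applies the
    stateless clip-and-rescale operation whenever it is called."""

    def __init__(self, low: float = 0.0, high: float = 1.0):
        self.low = low
        self.high = high

    def __call__(self, volume: np.ndarray) -> np.ndarray:
        clipped = np.clip(volume, self.low, self.high)
        return (clipped - self.low) / (self.high - self.low)
```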

The Pipeline tier represents our interpretation of an end-to-end implementation, built upon the previous two tiers. The Pipeline offers the end-to-end capabilities known from nnU-Net, while also allowing for effortless customization, as supported by the comprehensive documentation found in [Changing Pipeline Parameters](yucca/documentation/guides/changing_pipeline_parameters.md#model--training).

Our Pipeline allows users to quickly train solid baselines or change features to conduct experiments on individual components in a robust and thoroughly tested research environment. For situations where full control is required, or simply desired, the Functional and Modules tiers are better suited. These tiers serve advanced machine learning practitioners who wish to import building blocks with which to build their own house.

![Yucca pipeline diagram](yucca/documentation/illustrations/yucca_diagram.svg?raw=true)

# Table of Contents
- [Guides](#guides)
- [Installation](#installation)
- [Introduction to Yucca](#introduction-to-yucca)
- [Task Conversion](#task-conversion)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Inference](#inference)

# Guides

- [Changing Pipeline Parameters](yucca/documentation/guides/changing_pipeline_parameters.md#model--training)
- [Classification](yucca/documentation/guides/classification.md)
- [Environment Variables](yucca/documentation/guides/environment_variables.md)
- [Ensembles](yucca/documentation/guides/ensembles.md)
- [FAQ](yucca/documentation/guides/FAQ.md)
- [Run Scripts Advanced](yucca/documentation/guides/run_scripts_advanced.md)
- [Task Conversion](yucca/documentation/guides/task_conversion.md)
- [Unsupervised](yucca/documentation/guides/unsupervised.md)

# Installation

## Install an editable version of the project with CUDA support using Conda

Create a python=3.10 or python=3.11 environment exclusively for Yucca to avoid conflicts with other projects. 

IMPORTANT: First install PyTorch with GPU support by following the appropriate instructions at e.g. https://pytorch.org/get-started/locally/.
Then navigate to the cloned Yucca directory and install the package from there.

For an Ubuntu system with CUDA >= 12.1 and python=3.11:
```
> git clone https://github.com/Sllambias/yucca.git
> conda create -n yuccaenv python=3.11
> conda activate yuccaenv
> conda install -c anaconda setuptools
> conda install -c "nvidia/label/cuda-12.1.1" cuda-toolkit
> conda install pytorch==2.1.2 torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
> cd yucca
> pip install -e .
```
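To verify that the installation can see the GPU, a quick check from Python:

```python
import torch

print(torch.__version__)          # e.g. 2.1.2
print(torch.cuda.is_available())  # True on a working CUDA setup
```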

To use other CUDA or PyTorch versions, refer to (1) for the current PyTorch installation, (2) for previous PyTorch versions, and (3) for the appropriate CUDA toolkit. Note that the CUDA versions used in the PyTorch and CUDA toolkit installations must match (in the example above, both use 12.1).

1. https://pytorch.org/get-started/locally/
2. https://pytorch.org/get-started/previous-versions/
3. https://anaconda.org/nvidia/cuda-toolkit

## Install the package as a dependency in another project
If you just want to install Yucca locally on your computer, use
```
pip install git+https://github.com/Sllambias/yucca.git
```
Note that this installs the code from GitHub, not any local clone you may have.
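
Either way, you can confirm which version was installed using only the Python standard library:

```python
from importlib.metadata import version

print(version("yucca"))  # e.g. 2.2.2
```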


# Weights & Biases
Weights & Biases is the main tool for experiment tracking in Yucca. It is extremely useful for understanding how your models are behaving, and often also why. Although it can be disabled, we strongly encourage installing and using it with Yucca.

When W&B is enabled Yucca will automatically generate plots and illustrations and upload these to your personal Yucca project. This happens while your experiments are running, and you'll find pages that look somewhat similar to the example screenshot found [here](yucca/documentation/illustrations/WB_Example.pdf).

Setting up W&B is very simple.
First, navigate to https://wandb.ai/home and log in or sign up for Weights & Biases.
Then activate the appropriate environment, install Weights & Biases, and log in by following the instructions (i.e. paste the key from https://wandb.ai/authorize into the terminal).
```console
> conda activate yuccaenv
> pip install wandb
> wandb login
wandb: Logging into wandb.ai. (Learn how to deploy a W&B server locally: https://wandb.me/wandb-server)
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter, or press ctrl+c to quit:
```
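
To confirm the login works without uploading anything, you can start a short offline run from Python (the project name below is arbitrary):

```python
import wandb

# mode="offline" keeps everything local; drop it to sync to wandb.ai
run = wandb.init(project="yucca-sanity-check", mode="offline")
run.log({"sanity": 1})
run.finish()
```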

# Introduction to Yucca

The Yucca pipeline comprises the four processes illustrated in the [diagram](#yucca). In the first step, the user is expected to prepare the data for Yucca; in the remaining three steps, Yucca takes over all file management.
  1. **The Task Conversion step** requires the user to _convert_ their arbitrarily structured data to the file and folder structure Yucca requires. From this point on, Yucca handles the data. Task conversion involves moving and renaming the data, along with creating a metadata file.
  2. **The Preprocessing step** takes the task-converted data, preprocesses it, and saves it in the format expected by the Yucca training process.
  3. **The Training step** takes the preprocessed data and trains a model, then saves it along with its checkpoints and metadata.
  4. **The Inference step** takes the trained model and applies it to a task-converted (but not preprocessed) test set. During inference, the unseen samples are preprocessed with the same preprocessor used in the preprocessing step. Predictions are then saved. When inference is concluded, the predictions are evaluated against the ground truth, and a .json file containing the results is saved next to the predictions.

## Environment Variables

Before anything else, the environment variables used by Yucca must be defined. To set them, see the [Environment Variables](yucca/documentation/guides/environment_variables.md) guide.
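
They can be exported in your shell profile or set from Python before Yucca is imported; the variable name below is purely hypothetical, so consult the guide for the names Yucca actually expects:

```python
import os

# Hypothetical variable name for illustration only; the Environment
# Variables guide lists the real names and what they should point to.
os.environ["YUCCA_RAW_DATA"] = "/path/to/raw_data"
```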

## Task Conversion

Before preprocessing and training, all datasets must be converted to Yucca-compliant tasks. This is done to ensure reproducibility and eliminate data leakage. For a tutorial see the [Task Conversion Guide](yucca/documentation/guides/task_conversion.md).

## Preprocessing

Preprocessing is carried out using the `yucca_preprocess` command. For advanced usage see: [`run_scripts_advanced.md`](yucca/documentation/guides/run_scripts_advanced.md#preprocessing).

Basic Yucca preprocessing relies on three CLI flags:
  1. **-t**: The target task-converted raw data to be preprocessed.
  2. **-pl**: The Planner class, which is responsible for determining *what* we do in preprocessing and *how* it is done. This includes setting the normalization, resizing, resampling, and transposition operations and any values associated with them. The Planner defaults to the `YuccaPlanner`, but it can also be any custom planner found or created in the [Planner directory](yucca/pipeline/planning) and its subdirectories.
  3. **-pr**: The Preprocessor class. The Preprocessor is a workhorse that receives an instruction manual from the Planner and carries it out. The Preprocessor can be one of `YuccaPreprocessor` (default), `ClassificationPreprocessor`, and `UnsupervisedPreprocessor`. They differ only in how they expect the ground truth to look: the `YuccaPreprocessor` expects images, the `ClassificationPreprocessor` expects .txt files with image-level classes, and the `UnsupervisedPreprocessor` expects no ground truth at all. This Planner/Preprocessor split is sketched schematically after the example below.

An example of preprocessing a task called `Task001_Brains` with the default planner and the `ClassificationPreprocessor`:
```
> yucca_preprocess -t Task001_Brains -pr ClassificationPreprocessor
```
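
To make the Planner/Preprocessor split concrete, here is a schematic sketch of the idea (the classes and fields are hypothetical, not Yucca's actual implementation): the Planner emits a static plan describing *what* to do, and the Preprocessor merely executes it.

```python
from dataclasses import dataclass

import numpy as np


@dataclass(frozen=True)
class Plan:
    """What to do: a static instruction manual produced by a Planner."""
    clip_low: float = 0.0
    clip_high: float = 1.0
    transpose: tuple = (2, 0, 1)


def run_preprocessing(volume: np.ndarray, plan: Plan) -> np.ndarray:
    """How it is done: the Preprocessor carries out the plan verbatim."""
    volume = np.clip(volume, plan.clip_low, plan.clip_high)
    return np.transpose(volume, plan.transpose)


processed = run_preprocessing(np.random.rand(16, 16, 16), Plan())
```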

## Training

Training is carried out using the `yucca_train` command. For advanced usage see: [`run_scripts_advanced.md`](yucca/documentation/guides/run_scripts_advanced.md#training). Before training any models, a preprocessed dataset must be prepared using the `yucca_preprocess` command.

Basic Yucca training relies on five CLI flags:
  1. **-t**: The target preprocessed data on which the model will be trained.
  2. **-d**: The model dimensions. This can be either 2D or 3D (default).
  3. **-m**: The model architecture. This can be any model implemented in the [Model directory](yucca/networks/networks), including, but not limited to, `UNet`, `UNetR`, `MultiResUNet` and `ResNet50`.
  4. **-man**: The Manager to use. This defaults to the `YuccaManager`.
  5. **-pl**: The Planner used to preprocess the training data. This defaults to the `YuccaPlanner`.

An example of training a 2D `MultiResUNet` with the default Manager on a task called `Task001_Brains` that has been preprocessed using the default `YuccaPlanner`:
```
> yucca_train -t Task001_Brains -m MultiResUNet -d 2D
```

## Inference

Inference is carried out using the `yucca_inference` command. For advanced usage see: [`run_scripts_advanced.md`](yucca/documentation/guides/run_scripts_advanced.md#inference). Prior to inference, the model must be trained using the `yucca_train` command, and the target dataset must be task-converted.

Basic Yucca inference relies on six CLI flags:
  1. **-t**: The target task-converted raw data on which to run inference.
  2. **-s**: The source task on which the model was trained.
  3. **-d**: The dimensions of the trained model.
  4. **-m**: The architecture of the trained model.
  5. **-man**: The Manager to use. This defaults to the `YuccaManager`.
  6. **-pl**: The Planner used to preprocess the training data.

An example of running inference on the test set of a task called `Task001_Brains`, using a 3D `MultiResUNet` trained on the train set of the same task:
```
> yucca_inference -t Task001_Brains -s Task001_Brains -m MultiResUNet
```
An example of running inference on the test set of a task called `Task002_Lungs`, using a 2D `UNet` trained on a task called `Task001_Brains`:
```
> yucca_inference -t Task002_Lungs -s Task001_Brains -d 2D -m UNet
```
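
When inference concludes, the evaluation results (the .json file saved next to the predictions, as described in the pipeline introduction) can be inspected directly; the path below is hypothetical and depends on your environment variables:

```python
import json
from pathlib import Path

# Hypothetical location: the results .json is saved next to the predictions.
results = json.loads(Path("/path/to/predictions/results.json").read_text())
print(json.dumps(results, indent=2))
```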
