<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/htc_logo.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/htc_logo.svg" alt="Logo" width="600" /></a>

[![Python](https://img.shields.io/pypi/pyversions/imsy-htc.svg)](https://pypi.org/project/imsy-htc)
[![PyPI version](https://badge.fury.io/py/imsy-htc.svg)](https://pypi.org/project/imsy-htc)
[![Tests](https://github.com/IMSY-DKFZ/htc/actions/workflows/tests.yml/badge.svg)](https://github.com/IMSY-DKFZ/htc/actions/workflows/tests.yml)

</div>

# Hyperspectral Tissue Classification

This package is a framework for automated tissue classification and segmentation on medical hyperspectral imaging (HSI) data. It contains:

-   The implementation of deep learning models to solve supervised classification and segmentation problems for a variety of different input spatial granularities (pixels, superpixels, patches and entire images, cf. figure below) and modalities (RGB data, raw and processed HSI data) from our paper [“Robust deep learning-based semantic organ segmentation in hyperspectral images”](https://doi.org/10.1016/j.media.2022.102488). It is based on [PyTorch](https://pytorch.org/) and [PyTorch Lightning](https://lightning.ai/).
-   Corresponding pretrained models.
-   A pipeline to efficiently load and process HSI data, to aggregate deep learning results and to validate and visualize findings.
-   Presentation of several solutions to speed up the data loading process (see [PyTorch Conference 2023 poster details](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#-dealing-with-io-bottlenecks-in-high-throughput-model-training) below).

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_model_overview.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_model_overview.svg" alt="Overview of deep learning models in the htc framework, here shown for HSI input." /></a>
</div>

This framework is designed to work on HSI data from the [Tivita](https://diaspective-vision.com/en/) cameras, but you can adapt it to different HSI datasets as well. Potential applications include:

-   Use our data loading and processing pipeline to easily access image data and metadata for any work utilizing Tivita datasets.
-   This repository is tightly coupled to work with the public [HeiPorSPECTRAL](https://heiporspectral.org/) dataset. If you have already downloaded the data, you only need to perform the setup steps and then you can directly use the `htc` framework to work on the data (cf. [our tutorials](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#tutorials)).
-   Train your own networks and benefit from a pipeline offering e.g. efficient data loading, correct hierarchical aggregation of results and a set of helpful visualizations.
-   Apply deep learning models for different spatial granularities and modalities on your own semantically annotated dataset.
-   Use our pretrained models to initialize the weights for your own training.
-   Use our pretrained models to generate predictions for your own data.

If you use the `htc` framework, please consider citing the [corresponding papers](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#papers). You can also cite this repository directly via:

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@software{sellner_htc_2023,
  author    = {Sellner, Jan and Seidlitz, Silvia},
  publisher = {Zenodo},
  url       = {https://github.com/IMSY-DKFZ/htc},
  date      = {2024-10-23},
  doi       = {10.5281/zenodo.6577614},
  title     = {Hyperspectral Tissue Classification},
  version   = {v0.0.17},
}
```

</details>

## Setup

### Package Installation

This package can be installed via pip:

```bash
pip install imsy-htc
```

This installs all the required dependencies defined in [`requirements.txt`](https://github.com/IMSY-DKFZ/htc/tree/main/requirements.txt). The requirements include [PyTorch](https://pytorch.org/), so you may want to install it manually before installing the package in case you have specific needs (e.g. CUDA version).

> &#x26a0;&#xfe0f; This framework was developed and tested on Ubuntu 20.04+ Linux distributions. Although we do provide wheels for Windows and macOS as well, they are not tested.

> &#x26a0;&#xfe0f; Network training and inference were conducted using an RTX 3090 GPU with 24 GiB of memory. It should also work with GPUs which have less memory, but you may have to adjust some settings (e.g. the batch size).

<details closed>
<summary>PyTorch Compatibility</summary>

We cannot provide wheels for all PyTorch versions. Hence, a version of `imsy-htc` may not work with all versions of PyTorch due to changes in the ABI. In the following table, we list the PyTorch versions which are compatible with the respective `imsy-htc` version.

| `imsy-htc` | `torch` |
| ---------- | ------- |
| 0.0.9      | 1.13    |
| 0.0.10     | 1.13    |
| 0.0.11     | 2.0     |
| 0.0.12     | 2.0     |
| 0.0.13     | 2.1     |
| 0.0.14     | 2.1     |
| 0.0.15     | 2.2     |
| 0.0.15     | 2.3     |
| 0.0.16     | 2.4     |
| 0.0.17     | 2.5     |

However, we do not set explicit version constraints in the dependencies of the `imsy-htc` package because a future version of PyTorch may still work and we don't want to break the installation unnecessarily.

> 💡 Please note that it is always possible to build the `imsy-htc` package with your installed PyTorch version yourself (cf. Developer Installation).

</details>

<details closed>
<summary>Optional Dependencies (<code>imsy-htc[extra]</code>)</summary>

Some requirements are considered optional (e.g. because they are only needed by certain scripts) and you will get an error message if they are needed but unavailable. You can install them via

```bash
pip install --extra-index-url https://read_package:CnzBrgDfKMWS4cxf-r31@git.dkfz.de/api/v4/projects/15/packages/pypi/simple imsy-htc[extra]
```

or by adding the following lines to your `requirements.txt`

```
--extra-index-url https://read_package:CnzBrgDfKMWS4cxf-r31@git.dkfz.de/api/v4/projects/15/packages/pypi/simple
imsy-htc[extra]
```

This installs the optional dependencies defined in [`requirements-extra.txt`](https://github.com/IMSY-DKFZ/htc/tree/main/requirements-extra.txt), including for example our Python wrapper for the [challengeR toolkit](https://github.com/wiesenfa/challengeR).

</details>

<details closed>
<summary>Docker</summary>

We also provide a Docker setup for testing. As prerequisites:

-   Clone this repository
-   Install [Docker](https://docs.docker.com/get-docker/) and the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-   Install the required dependencies to run the Docker startup script:

```bash
pip install python-dotenv
```

Make sure that your environment variables are available, and then bash into the container:

```bash
export PATH_Tivita_HeiPorSPECTRAL="/path/to/the/dataset"
python run_docker.py bash
```

You can now run any commands you like. All datasets you provided via an environment variable that starts with `PATH_Tivita` will be accessible in your container (you can also check the generated `docker-compose.override.yml` file for details). Please note that the Docker container is meant for small tests only and not for development. This is also reflected by the fact that by default all results are stored inside the container and hence will be deleted after exiting the container. If you want to keep your results, let the environment variable `PATH_HTC_DOCKER_RESULTS` point to the directory where you want to store the results.

</details>

<details closed>
<summary>Developer Installation</summary>

If you want to make changes to the package code (which is highly welcome 😉), we recommend installing the `htc` package in editable mode in a separate conda environment:

```bash
# Set up the conda environment
# Note: By adding conda-forge to your default channels, you will get the latest patch releases for Python:
#   conda config --add channels conda-forge
conda create --yes --name htc python=3.12
conda activate htc

# Install the htc package and its requirements
pip install -r requirements-dev.txt
pip install --no-use-pep517 -e .
```

Before committing any files, please run the static code checks locally:

```bash
git add .
pre-commit run --all-files
```

</details>

### Environment Variables

This framework can be configured via environment variables. Most importantly, we need to know where your data is located (e.g. `PATH_Tivita_HeiPorSPECTRAL`) and where results should be stored (e.g. `PATH_HTC_RESULTS`). For a full list of possible environment variables, please have a look at the documentation of the [`Settings`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/settings.py) class.

> 💡 If you set an environment variable for a dataset path, it is important that the variable name matches the folder name (e.g. the variable name `PATH_Tivita_HeiPorSPECTRAL` matches the dataset path `my/path/HeiPorSPECTRAL` with its folder name `HeiPorSPECTRAL`, whereas the variable name `PATH_Tivita_some_other_name` does not match). Furthermore, the dataset path needs to point to a directory which contains a `data` and an `intermediates` subfolder.

There are several options to set the environment variables. For example:

-   You can specify a variable as part of your bash startup script `~/.bashrc` or before running each command:
    ```bash
    PATH_HTC_RESULTS="~/htc/results" htc training --model image --config "models/image/configs/default"
    ```
    However, this might get cumbersome or might not give you the flexibility you need.
-   Recommended if you cloned this repository (in contrast to simply installing it via pip): You can create a `.env` file in the repository root and add your variables, for example:

    ```bash
    export PATH_Tivita_HeiPorSPECTRAL=/mnt/nvme_4tb/HeiPorSPECTRAL
    export PATH_HTC_RESULTS=~/htc/results

    # You can also add your own datasets (the environment variable name must start with PATH_Tivita), e.g.:
    # export PATH_Tivita_my_dataset=~/htc/Tivita_my_dataset:shortcut=my_shortcut
    # You can then access it via settings.data_dirs.my_shortcut
    ```

-   Recommended if you installed the package via pip: You can create user settings for this application. The location is OS-specific; on Linux it might be `~/.config/htc/variables.env`. Please run `htc info` upon package installation to retrieve the exact location on your system. The file has the same format as the `.env` file above.

After setting your environment variables, it is recommended to run `htc info` to check that your variables are correctly registered in the framework.
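
For a quick programmatic check, you can also inspect the registered dataset paths from Python. This is a minimal sketch which assumes that the `settings` object is importable from the package root (as used in the `.env` comment above); `htc info` remains the authoritative check:

```python
from htc import settings  # assumption: settings is exported at the package root

# Lists the dataset paths registered from your PATH_Tivita_* environment
# variables (similar information to what `htc info` prints on the command line)
print(settings.data_dirs)
```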

## Tutorials

A series of [tutorials](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials) can help you get started with the `htc` framework by guiding you through different usage scenarios.

> 💡 The tutorials make use of our public HSI dataset [HeiPorSPECTRAL](https://heiporspectral.org/). If you want to directly run them, please download the dataset first and make it accessible via the environment variable `PATH_Tivita_HeiPorSPECTRAL` as described above.

-   As a start, we recommend taking a look at this [general notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/General.ipynb) which showcases the basic functionalities of the `htc` framework. Namely, it demonstrates the usage of the `DataPath` class which is the entry point to load and process HSI data. For example, you will learn how to read HSI cubes, segmentation masks and metadata. Among other things, you can use this information to calculate the median spectrum of an organ (see the sketch after this list).
-   If you want to use our framework with your own dataset, it might be necessary to write a custom `DataPath` class so that you can load and process your images and annotations. We [collected some tips](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CustomDataPath.md) on how this can be achieved.
-   You have some HSI data at hand and want to use one of our pretrained models to generate predictions? Then our [prediction notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) has got you covered.
-   You want to use our pretrained models to initialize the weights for your own training? See the section about [pretrained models](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#pretrained-models) below for details.
-   You want to use our framework to train a network? The [network training notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/network_training/NetworkTraining.ipynb) will show you how to achieve this using the example of a heart and lung segmentation network.
-   If you are interested in our technical validation (e.g. because you want to compare your colorchecker images with ours) and need to create a mask to detect the different colorchecker fields, you might find our automatic [colorchecker mask creation pipeline](https://github.com/IMSY-DKFZ/htc/tree/main/htc/utils/ColorcheckerMaskCreation.ipynb) useful.
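
As a hedged teaser of what the general notebook covers, the following sketch loads one image via the `DataPath` class and computes the median spectrum of its most frequent annotated label. The image name is only an example, and the reading methods (`read_cube()`, `read_segmentation()`) are taken from the tutorial, so please consult it for the authoritative API:

```python
import numpy as np
from htc import DataPath

# Example image from the HeiPorSPECTRAL dataset (requires
# PATH_Tivita_HeiPorSPECTRAL to be set as described above)
path = DataPath.from_image_name("P086#2021_04_15_09_22_02")

cube = path.read_cube()  # HSI cube, e.g. of shape (480, 640, 100)
seg = path.read_segmentation()  # label index per pixel, shape (480, 640)

# Median spectrum of the most frequent annotated label in this image
label_index = np.bincount(seg.flatten()).argmax()
median_spectrum = np.median(cube[seg == label_index], axis=0)  # shape (100,)
```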

We do not have a separate documentation website for our framework yet. However, most of the functions and classes are documented so feel free to explore the source code or use your favorite IDE to display the documentation. If something does not become clear from the documentation, feel free to open an issue!

## Pretrained Models

This framework gives you access to a variety of pretrained segmentation and classification models. A model is automatically downloaded once you specify its model type (e.g. `image`) and run folder (e.g. `2022-02-03_22-58-44_generated_default_model_comparison`). It can then be used, for example, to [create predictions](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) on some data or as a baseline for your own training (see example below).

The following table lists all the models you can get:
| model type | modality | class | run folder |
| ----------- | ----------- | ----------- | ----------- |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_rat2pig_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2pig_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2pig_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2pig_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_rat2human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_pig2rat_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2rat_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2rat_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2rat_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_pig2human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_baseline_rat_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_rat_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_rat_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_rat_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_baseline_pig_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_pig_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_pig_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_pig_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_baseline_human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2023-02-08_14-48-02_organ_transplantation_0.8`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2023-02-08_14-48-02_organ_transplantation_0.8.zip) |
| image | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2023-01-29_11-31-04_organ_transplantation_0.8_rgb`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2023-01-29_11-31-04_organ_transplantation_0.8_rgb.zip) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| image | param | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| image | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
| patch | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_64_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_64_model_comparison.zip) |
| patch | param | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_64_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_64_parameters_model_comparison.zip) |
| patch | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_64_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_64_rgb_model_comparison.zip) |
| patch | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| patch | param | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| patch | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
| superpixel_classification | hsi | [`ModelSuperpixelClassification`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/superpixel_classification/ModelSuperpixelClassification.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/superpixel_classification@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| superpixel_classification | param | [`ModelSuperpixelClassification`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/superpixel_classification/ModelSuperpixelClassification.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/superpixel_classification@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| superpixel_classification | rgb | [`ModelSuperpixelClassification`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/superpixel_classification/ModelSuperpixelClassification.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/superpixel_classification@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
| pixel | hsi | [`ModelPixel`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/pixel/ModelPixel.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/pixel@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| pixel | param | [`ModelPixelRGB`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/pixel/ModelPixelRGB.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/pixel@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| pixel | rgb | [`ModelPixelRGB`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/pixel/ModelPixelRGB.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/pixel@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |

> 💡 The modality `param` refers to stacked tissue parameter images (named TPI in our paper [“Robust deep learning-based semantic organ segmentation in hyperspectral images”](https://doi.org/10.1016/j.media.2022.102488)). For the model type `patch`, pretrained models are available for the patch sizes 64 x 64 and 32 x 32 pixels. The modality and patch size are not specified when loading a model as they are already determined by the run folder.

> 💡 A wildcard `*` in the run folder name refers to a collection of models (e.g. from nested cross-validation). You can use the name as noted in the table to retrieve all models from this collection as a list of models, or explicitly set the index to retrieve only one specific model from the collection. If you keep the wildcard for creating predictions (see below), all models will be loaded and the final prediction is an ensemble of the outputs from all individual networks (e.g. 15 networks with 3 outer and 5 inner folds).

After successful installation of the `htc` package, you can use any of the pretrained models listed in the table. There are several ways to use them, but the general principle is that models are always specified via their `model` and `run_folder`.

### Option 1: Use the models in your own training pipeline

Every model class listed in the table has a static method [`pretrained_model()`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/common/HTCModel.py) which you can use to create a model instance and initialize it with the pretrained weights. The model object will be an instance of `torch.nn.Module`. The function has examples for all the different model types, but as a teaser, consider the following example which loads the pretrained image HSI network:

```python
import torch
from htc import ModelImage, Normalization

run_folder = "2022-02-03_22-58-44_generated_default_model_comparison"  # HSI model
model = ModelImage.pretrained_model(model="image", run_folder=run_folder, n_channels=100, n_classes=19)
input_data = torch.randn(1, 100, 480, 640)  # NCHW
input_data = Normalization(channel_dim=1)(input_data)  # Model expects L1 normalized input
model(input_data).shape
# torch.Size([1, 19, 480, 640])
```

> 💡 Please note that when initializing the weights as in this example, the segmentation head is initialized randomly. Meaningful predictions on your own data can thus not be expected out of the box; you will have to train the model on your data first.
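
Building on this example, the following sketch shows how the wildcard notation from the table above could be used. According to the wildcard note, passing the name with `*` retrieves the whole collection as a list of models, while a concrete index selects a single outer fold; the `n_channels` and `n_classes` values are carried over from the example above and may not match these runs:

```python
from htc import ModelImage

# All three outer folds of the nested cross-validation as a list of models
models = ModelImage.pretrained_model(
    model="image",
    run_folder="2024-09-11_00-11-38_baseline_human_nested-*-2",
    n_channels=100,  # assumption: same input/output setup as the example above
    n_classes=19,
)

# Only the model from outer fold 0
model_fold0 = ModelImage.pretrained_model(
    model="image",
    run_folder="2024-09-11_00-11-38_baseline_human_nested-0-2",
    n_channels=100,
    n_classes=19,
)
```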

### Option 2: Use the models to create predictions for your data

The models can be used to predict segmentation masks for your data. The segmentation models automatically sample from your input image according to the spatial granularity of the selected model (e.g. by creating patches), and the output is always a segmentation mask for the entire image. The set of output classes is determined by the training configuration, e.g. 18 organ classes + background for our semantic models. There are two alternatives for creating predictions:

1. The [`CreatingPredictions`](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) notebook shows how to create predictions for all images in a folder (via the `htc inference` command) and how to map the network output to meaningful label names.
2. If you want to compute predictions directly within your code for custom tensors, batches or paths, you can use the [`SinglePredictor`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/model_processing/SinglePredictor.py) class.

### Option 3: Use the models to train a network with the `htc` package

If you are using the `htc` framework to [train your networks](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/network_training/NetworkTraining.ipynb), you only need to define the model in your configuration:

```json
{
    "model": {
        "pretrained_model": {
            "model": "image",
            "run_folder": "2022-02-03_22-58-44_generated_default_model_comparison"
        }
    }
}
```

This is very similar to Option 1 but may be more convenient if you already train with the `htc` framework.

> 💡 We have a [JSON Schema file](https://github.com/IMSY-DKFZ/htc/tree/main/htc/utils/config.schema) which describes the structure of our config files including descriptions of the attributes.

## CLI

There is a common command line interface for many scripts in this repository. More precisely, every script named `run_NAME.py` can also be run via `htc NAME` from any directory. For more details, just type `htc`.

## Papers

This repository contains code to reproduce our publications listed below:

### 📝 Xeno-learning: knowledge transfer across species in deep learning-based spectral image analysis

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/species_motivation.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/species_motivation.png" alt="Logo" width="700" /></a>
</div>

This paper introduces a cross-species knowledge transfer paradigm termed <i>xeno-learning</i> to reuse knowledge gained in one species in other species. Specifically, we showcase how human segmentation performance on malperfused tissues can be improved by leveraging perfusion knowledge obtained from animal data via a <q>physiology-based data augmentation</q> method. All trained networks are available as pretrained models (baseline networks and networks which included the new data augmentation method during training). Compared to previous papers, we switched to a nested cross-validation scheme with 3 outer folds, so each training configuration is composed of three run folders on disk. However, you can still refer to them via the `run_folder` argument by using a wildcard (e.g., `2024-09-11_00-11-38_baseline_human_nested-*-2` to get the baseline networks `0`, `1` and `2` trained on human data). You can find all notebooks which generate the paper figures in [paper/XenoLearning2024](https://github.com/IMSY-DKFZ/htc/tree/main/paper/XenoLearning2024) accompanied by [reproducibility instructions](https://github.com/IMSY-DKFZ/htc/tree/main/paper/XenoLearning2024/reproducibility.md). The code for all experiments is located in the [htc_projects/species](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/species/) folder.

> 📂 The dataset for this paper is not fully publicly available, but a subset of the data is available through the public [HeiPorSPECTRAL](https://heiporspectral.org/) dataset.

### 📝 [Semantic segmentation of surgical hyperspectral images under geometric domain shifts](https://doi.org/10.48550/arXiv.2303.10972)

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MICCAI_abstract.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MICCAI_abstract.png" alt="Logo" width="600" /></a>
</div>

This MICCAI2023 paper is the direct successor of our MIA2022 paper. We analyzed how well our networks perform under geometric domain shifts which commonly occur in real-world open surgeries (e.g. situs occlusions). The effect is drastic (the Dice similarity coefficient drops by 45 %), but the good news is that performance on par with in-distribution data can be achieved with our simple, model-independent solution (an augmentation method). You can find all the code in [htc_projects/context](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/context) and the paper figures as well as [reproducibility instructions](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MICCAI2023/reproducibility.md) in [paper/MICCAI2023](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MICCAI2023). Pretrained models are available for our organ transplantation networks with HSI and RGB modalities.

> 💡 If you are only interested in our data augmentation method, you can also head over to [Kornia](https://github.com/kornia/kornia) where this augmentation is implemented for generic use cases (including 2D and 3D data). You will find it under the name [`RandomTransplantation`](https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomTransplantation).
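
As a quick illustration of the Kornia version, here is a sketch based on the Kornia documentation for `RandomTransplantation`; the shapes are toy values and the exact call conventions may differ between Kornia versions:

```python
import torch
from kornia.augmentation import RandomTransplantation

# Transplants annotated regions between images within the batch
# (a batch size of at least 2 is therefore required)
aug = RandomTransplantation(p=0.8)

images = torch.randn(2, 3, 5, 5)  # BCHW
masks = torch.randint(0, 3, (2, 5, 5))  # per-pixel segmentation labels

images_out, masks_out = aug(images, masks)
```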

> 📂 The dataset for this paper is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@inproceedings{sellner_context_2023,
  author    = {Sellner, Jan and Seidlitz, Silvia and Studier-Fischer, Alexander and Motta, Alessandro and Özdemir, Berkin and Müller-Stich, Beat Peter and Nickel, Felix and Maier-Hein, Lena},
  editor    = {Greenspan, Hayit and Madabhushi, Anant and Mousavi, Parvin and Salcudean, Septimiu and Duncan, James and Syeda-Mahmood, Tanveer and Taylor, Russell},
  location  = {Cham},
  publisher = {Springer Nature Switzerland},
  booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2023},
  date      = {2023},
  doi       = {10.1007/978-3-031-43996-4_59},
  isbn      = {978-3-031-43996-4},
  pages     = {618--627},
  title     = {Semantic Segmentation of Surgical Hyperspectral Images Under Geometric Domain Shifts},
}
```

</details>

### 📝 [Robust deep learning-based semantic organ segmentation in hyperspectral images](https://doi.org/10.1016/j.media.2022.102488)

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_abstract.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_abstract.png" alt="Logo" width="800" /></a>
</div>

In this paper, we tackled fully automatic organ segmentation and compared deep learning models on different spatial granularities (e.g. patch vs. image) and modalities (e.g. HSI vs. RGB). Furthermore, we studied the required amount of training data and the generalization capabilities of our models across subjects. The pretrained networks from the model comparison above originate from this paper. You can find the notebooks to generate the paper figures in [paper/MIA2022](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MIA2022) (the folder also includes a [reproducibility document](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MIA2022/reproducibility.md)) and the models in [htc/models](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models). For each model, there are three configuration files, namely `default`, `default_rgb` and `default_parameters`, which correspond to the HSI, RGB and TPI modality, respectively. You can also download the [NSD thresholds](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/nsd_thresholds_semantic.csv) which we used for the NSD metric (cf. Fig. 12).

> 📂 The dataset for this paper is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{seidlitz_seg_2022,
  author       = {Seidlitz, Silvia and Sellner, Jan and Odenthal, Jan and Özdemir, Berkin and Studier-Fischer, Alexander and Knödler, Samuel and Ayala, Leonardo and Adler, Tim J. and Kenngott, Hannes G. and Tizabi, Minu and Wagner, Martin and Nickel, Felix and Müller-Stich, Beat P. and Maier-Hein, Lena},
  date         = {2022-08},
  doi          = {10.1016/j.media.2022.102488},
  issn         = {1361-8415},
  journaltitle = {Medical Image Analysis},
  keywords     = {Hyperspectral imaging,Surgical data science,Deep learning,Open surgery,Organ segmentation,Semantic scene segmentation},
  pages        = {102488},
  title        = {Robust deep learning-based semantic organ segmentation in hyperspectral images},
  volume       = {80},
}
```

</details>

### 📝 [Dealing with I/O bottlenecks in high-throughput model training](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/PyTorchConference_Poster.pdf)

The poster was presented at the PyTorch Conference 2023 and presents several solutions to improve data loading for faster network training. It originated from our MICCAI2023 paper, where we load huge amounts of data while using a relatively small network, resulting in GPU idle times whenever the GPU has to wait for the CPU to deliver new data. This created the need for fast data loading strategies so that the CPU delivers data in time for the GPU. The solutions include (1) efficient data storage via [Blosc](https://www.blosc.org/) compression, (2) appropriate precision settings, (3) GPU instead of CPU augmentations using the [Kornia](https://kornia.readthedocs.io) library and (4) a fixed shared pinned memory buffer for efficient data transfer to the GPU. For the last part, you will find the relevant code to create the buffer in this repository as part of the [SharedMemoryDatasetMixin](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/common/SharedMemoryDatasetMixin.py) class (`_add_tensor_shared()` method).
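
To illustrate the idea behind solution (4), here is a minimal sketch under illustrative assumptions (shapes, dtype): one fixed pinned (page-locked) buffer is allocated up front and reused as a staging area for all host-to-device copies. The actual implementation in `SharedMemoryDatasetMixin` additionally shares the buffer across worker processes:

```python
import torch

# Allocate the pinned staging buffer once instead of pinning fresh memory
# for every batch (allocation and pinning are expensive operations)
batch_shape = (8, 100, 480, 640)  # NCHW, e.g. one batch of HSI cubes
staging = torch.empty(batch_shape, dtype=torch.float32, pin_memory=True)

def transfer(batch_cpu: torch.Tensor) -> torch.Tensor:
    staging.copy_(batch_cpu)  # CPU tensor -> pinned staging buffer
    # Pinned memory enables an asynchronous DMA transfer which can overlap
    # with GPU compute
    gpu_batch = staging.to("cuda", non_blocking=True)
    # Caveat: the buffer must not be overwritten with the next batch before
    # this transfer has finished (e.g. synchronize or use double buffering)
    return gpu_batch

batch_on_gpu = transfer(torch.rand(batch_shape))  # requires a CUDA GPU
```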

You can find the code to generate the results figures of the poster in [paper/PyTorchConf2023](https://github.com/IMSY-DKFZ/htc/tree/main/paper/PyTorchConf2023) including [reproducibility instructions](https://github.com/IMSY-DKFZ/htc/tree/main/paper/PyTorchConf2023/reproducibility.md). The experiment code can be found in the project folder [htc_projects/benchmarking](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/benchmarking).

> 📂 The dataset for this poster is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@misc{sellner_benchmarking_2023,
  author       = {Sellner, Jan and Seidlitz, Silvia and Maier-Hein, Lena},
  url          = {https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/PyTorchConference_Poster.pdf},
  date         = {2023-10-16},
  howpublished = {Poster presented at the PyTorch Conference 2023, San Francisco, United States of America},
  title        = {Dealing with I/O bottlenecks in high-throughput model training},
}
```

</details>

### 📝 [HeiPorSPECTRAL - the Heidelberg Porcine HyperSPECTRAL Imaging Dataset of 20 Physiological Organs](https://doi.org/10.1038/s41597-023-02315-8)

This paper introduces the [HeiPorSPECTRAL](https://heiporspectral.org/) dataset containing 5756 hyperspectral images from 11 subjects. We are using these images in our tutorials. You can find the visualization notebook for the paper figures in [paper/NatureData2023](https://github.com/IMSY-DKFZ/htc/tree/main/paper/NatureData2023) (the folder also includes a [reproducibility document](https://github.com/IMSY-DKFZ/htc/tree/main/paper/NatureData2023/reproducibility.md)) and the remaining code in [htc_projects/atlas_open](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/atlas_open).

If you want to learn more about the [HeiPorSPECTRAL](https://heiporspectral.org/) dataset (e.g. the underlying data structure) or if you stumbled upon a file and want to know how to read it, you might find this [notebook with low-level details](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/atlas_open/FileReference.ipynb) helpful.

> 📂 The dataset for this paper is publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{studierfischer_open_2023,
  author       = {Studier-Fischer, Alexander and Seidlitz, Silvia and Sellner, Jan and Bressan, Marc and Özdemir, Berkin and Ayala, Leonardo and Odenthal, Jan and Knoedler, Samuel and Kowalewski, Karl-Friedrich and Haney, Caelan Max and Salg, Gabriel and Dietrich, Maximilian and Kenngott, Hannes and Gockel, Ines and Hackert, Thilo and Müller-Stich, Beat Peter and Maier-Hein, Lena and Nickel, Felix},
  url          = {https://heiporspectral.org/},
  date         = {2023-06-24},
  doi          = {10.1038/s41597-023-02315-8},
  issn         = {2052-4463},
  journaltitle = {Scientific Data},
  number       = {1},
  pages        = {414},
  title        = {HeiPorSPECTRAL - the Heidelberg Porcine HyperSPECTRAL Imaging Dataset of 20 Physiological Organs},
  volume       = {10},
}
```

</details>

### 📝 [Spectral organ fingerprints for machine learning-based intraoperative tissue classification with hyperspectral imaging in a porcine model](https://doi.org/10.1038/s41598-022-15040-w)

In this paper, we trained a classification model based on median spectra from HSI data. You can find the model code in [htc_projects/atlas](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/atlas) and the confusion matrix figure of the paper in [paper/NatureReports2022](https://github.com/IMSY-DKFZ/htc/tree/main/paper/NatureReports2022) (including a reproducibility document).

> 📂 The dataset for this paper is not fully publicly available, but a subset of the data is available through the public [HeiPorSPECTRAL](https://heiporspectral.org/) dataset.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{studierfischer_atlas_2022,
  author       = {Studier-Fischer, Alexander and Seidlitz, Silvia and Sellner, Jan and Özdemir, Berkin and Wiesenfarth, Manuel and Ayala, Leonardo and Odenthal, Jan and Knödler, Samuel and Kowalewski, Karl Friedrich and Haney, Caelan Max and Camplisson, Isabella and Dietrich, Maximilian and Schmidt, Karsten and Salg, Gabriel Alexander and Kenngott, Hannes Götz and Adler, Tim Julian and Schreck, Nicholas and Kopp-Schneider, Annette and Maier-Hein, Klaus and Maier-Hein, Lena and Müller-Stich, Beat Peter and Nickel, Felix},
  date         = {2022-06-30},
  doi          = {10.1038/s41598-022-15040-w},
  issn         = {2045-2322},
  journaltitle = {Scientific Reports},
  number       = {1},
  pages        = {11028},
  title        = {Spectral organ fingerprints for machine learning-based intraoperative tissue classification with hyperspectral imaging in a porcine model},
  volume       = {12},
}
```

</details>

### 📝 [Künstliche Intelligenz und hyperspektrale Bildgebung zur bildgestützten Assistenz in der minimal-invasiven Chirurgie](https://doi.org/10.1007/s00104-022-01677-w)

This paper presents several applications of intraoperative HSI, including our organ [segmentation](https://doi.org/10.1016/j.media.2022.102488) and [classification](https://doi.org/10.1038/s41598-022-15040-w) work. You can find the code generating our figure for this paper at [paper/Chirurg2022](https://github.com/IMSY-DKFZ/htc/tree/main/paper/Chirurg2022).

> 📂 The sample image used here is contained in the dataset from our paper [“Robust deep learning-based semantic organ segmentation in hyperspectral images”](https://doi.org/10.1016/j.media.2022.102488) and hence not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{chalopin_chirurgie_2022,
  author       = {Chalopin, Claire and Nickel, Felix and Pfahl, Annekatrin and Köhler, Hannes and Maktabi, Marianne and Thieme, René and Sucher, Robert and Jansen-Winkeln, Boris and Studier-Fischer, Alexander and Seidlitz, Silvia and Maier-Hein, Lena and Neumuth, Thomas and Melzer, Andreas and Müller-Stich, Beat Peter and Gockel, Ines},
  date         = {2022-10-01},
  doi          = {10.1007/s00104-022-01677-w},
  issn         = {2731-698X},
  journaltitle = {Die Chirurgie},
  number       = {10},
  pages        = {940--947},
  title        = {Künstliche Intelligenz und hyperspektrale Bildgebung zur bildgestützten Assistenz in der minimal-invasiven Chirurgie},
  volume       = {93},
}
```

</details>

## Funding

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (NEURAL SPICING, grant agreement No. 101002198) and was supported by the German Cancer Research Center (DKFZ) and the Helmholtz Association under the joint research school HIDSS4Health (Helmholtz Information and Data Science School for Health). It further received funding from the Surgical Oncology Program of the National Center for Tumor Diseases (NCT) Heidelberg.

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/imsy-dkfz/htc",
    "name": "imsy-htc",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "surgical data science, open surgery, hyperspectral imaging, organ segmentation, semantic scene segmentation, deep learning",
    "author": "Division of Intelligent Medical Systems, DKFZ",
    "author_email": null,
    "download_url": null,
    "platform": null,
    "description": "<div align=\"center\">\n<a href=\"https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/htc_logo.pdf\"><img src=\"https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/htc_logo.svg\" alt=\"Logo\" width=\"600\" /></a>\n\n[![Python](https://img.shields.io/pypi/pyversions/imsy-htc.svg)](https://pypi.org/project/imsy-htc)\n[![PyPI version](https://badge.fury.io/py/imsy-htc.svg)](https://pypi.org/project/imsy-htc)\n[![Tests](https://github.com/IMSY-DKFZ/htc/actions/workflows/tests.yml/badge.svg)](https://github.com/IMSY-DKFZ/htc/actions/workflows/tests.yml)\n\n</div>\n\n# Hyperspectral Tissue Classification\n\nThis package is a framework for automated tissue classification and segmentation on medical hyperspectral imaging (HSI) data. It contains:\n\n-   The implementation of deep learning models to solve supervised classification and segmentation problems for a variety of different input spatial granularities (pixels, superpixels, patches and entire images, cf. figure below) and modalities (RGB data, raw and processed HSI data) from our paper [\u201cRobust deep learning-based semantic organ segmentation in hyperspectral images\u201d](https://doi.org/10.1016/j.media.2022.102488). It is based on [PyTorch](https://pytorch.org/) and [PyTorch Lightning](https://lightning.ai/).\n-   Corresponding pretrained models.\n-   A pipeline to efficiently load and process HSI data, to aggregate deep learning results and to validate and visualize findings.\n-   Presentation of several solutions to speed up the data loading process (see [Pytorch Conference 2023 poster details](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#-dealing-with-io-bottlenecks-in-high-throughput-model-training) below).\n\n<div align=\"center\">\n<a href=\"https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_model_overview.pdf\"><img src=\"https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_model_overview.svg\" alt=\"Overview of deep learning models in the htc framework, here shown for HSI input.\" /></a>\n</div>\n\nThis framework is designed to work on HSI data from the [Tivita](https://diaspective-vision.com/en/) cameras but you can adapt it to different HSI datasets as well. Potential applications include:\n\n-   Use our data loading and processing pipeline to easily access image and meta data for any work utilizing Tivita datasets.\n-   This repository is tightly coupled to work with the public [HeiPorSPECTRAL](https://heiporspectral.org/) dataset. If you already downloaded the data, you only need to perform the setup steps and then you can directly use the `htc` framework to work on the data (cf. [our tutorials](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#tutorials)).\n-   Train your own networks and benefit from a pipeline offering e.g. efficient data loading, correct hierarchical aggregation of results and a set of helpful visualizations.\n-   Apply deep learning models for different spatial granularities and modalities on your own semantically annotated dataset.\n-   Use our pretrained models to initialize the weights for your own training.\n-   Use our pretrained models to generate predictions for your own data.\n\nIf you use the `htc` framework, please consider citing the [corresponding papers](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#papers). 
You can also cite this repository directly via:\n\n<details closed>\n<summary>Cite via BibTeX</summary>\n\n```bibtex\n@software{sellner_htc_2023,\n  author    = {Sellner, Jan and Seidlitz, Silvia},\n  publisher = {Zenodo},\n  url       = {https://github.com/IMSY-DKFZ/htc},\n  date      = {2024-10-23},\n  doi       = {10.5281/zenodo.6577614},\n  title     = {Hyperspectral Tissue Classification},\n  version   = {v0.0.17},\n}\n```\n\n</details>\n\n## Setup\n\n### Package Installation\n\nThis package can be installed via pip:\n\n```bash\npip install imsy-htc\n```\n\nThis installs all the required dependencies defined in [`requirements.txt`](https://github.com/IMSY-DKFZ/htc/tree/main/requirements.txt). The requirements include [PyTorch](https://pytorch.org/), so you may want to install it manually before installing the package in case you have specific needs (e.g. CUDA version).\n\n> &#x26a0;&#xfe0f; This framework was developed and tested using the Ubuntu 20.04+ Linux distribution. Despite we do provide wheels for Windows and macOS as well, they are not tested.\n\n> &#x26a0;&#xfe0f; Network training and inference was conducted using an RTX 3090 GPU with 24 GiB of memory. It should also work with GPUs which have less memory but you may have to adjust some settings (e.g. the batch size).\n\n<details close>\n<summary>PyTorch Compatibility</summary>\n\nWe cannot provide wheels for all PyTorch versions. Hence, a version of `imsy-htc` may not work with all versions of PyTorch due to changes in the ABI. In the following table, we list the PyTorch versions which are compatible with the respective `imsy-htc` version.\n\n| `imsy-htc` | `torch` |\n| ---------- | ------- |\n| 0.0.9      | 1.13    |\n| 0.0.10     | 1.13    |\n| 0.0.11     | 2.0     |\n| 0.0.12     | 2.0     |\n| 0.0.13     | 2.1     |\n| 0.0.14     | 2.1     |\n| 0.0.15     | 2.2     |\n| 0.0.15     | 2.3     |\n| 0.0.16     | 2.4     |\n| 0.0.17     | 2.5     |\n\nHowever, we do not make explicit version constraints in the dependencies of the `imsy-htc` package because a future version of PyTorch may still work and we don't want to break the installation if it is not necessary.\n\n> \ud83d\udca1 Please note that it is always possible to build the `imsy-htc` package with your installed PyTorch version yourself (cf. Developer Installation).\n\n</details>\n\n<details close>\n<summary>Optional Dependencies (<code>imsy-htc[extra]</code>)</summary>\n\nSome requirements are considered optional (e.g. if they are only needed by certain scripts) and you will get an error message if they are needed but unavailable. You can install them via\n\n```bash\npip install --extra-index-url https://read_package:CnzBrgDfKMWS4cxf-r31@git.dkfz.de/api/v4/projects/15/packages/pypi/simple imsy-htc[extra]\n```\n\nor by adding the following lines to your `requirements.txt`\n\n```\n--extra-index-url https://read_package:CnzBrgDfKMWS4cxf-r31@git.dkfz.de/api/v4/projects/15/packages/pypi/simple\nimsy-htc[extra]\n```\n\nThis installs the optional dependencies defined in [`requirements-extra.txt`](https://github.com/IMSY-DKFZ/htc/tree/main/requirements-extra.txt), including for example our Python wrapper for the [challengeR toolkit](https://github.com/wiesenfa/challengeR).\n\n</details>\n\n<details close>\n<summary>Docker</summary>\n\nWe also provide a Docker setup for testing. 
As a prerequisite:\n\n-   Clone this repository\n-   Install [Docker](https://docs.docker.com/get-docker/) and the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)\n-   Install the required dependencies to run the Docker startup script:\n\n```bash\npip install python-dotenv\n```\n\nMake sure that your environment variables are available and then bash into the container\n\n```bash\nexport PATH_Tivita_HeiPorSPECTRAL=\"/path/to/the/dataset\"\npython run_docker.py bash\n```\n\nYou can now run any commands you like. All datasets you provided via an environment variable that starts with `PATH_Tivita` will be accessible in your container (you can also check the generated `docker-compose.override.yml` file for details). Please note that the Docker container is meant for small testing only and not for development. This is also reflected by the fact that per default all results are stored inside the container and hence will also be deleted after exiting the container. If you want to keep your results, let the environment variable `PATH_HTC_DOCKER_RESULTS` point to the directory where you want to store the results.\n\n</details>\n\n<details close>\n<summary>Developer Installation</summary>\n\nIf you want to make changes to the package code (which is highly welcome \ud83d\ude09), we recommend to install the `htc` package in editable mode in a separate conda environment:\n\n```bash\n# Set up the conda environment\n# Note: By adding conda-forge to your default channels, you will get the latest patch releases for Python:\n#   conda config --add channels conda-forge\nconda create --yes --name htc python=3.12\nconda activate htc\n\n# Install the htc package and its requirements\npip install -r requirements-dev.txt\npip install --no-use-pep517 -e .\n```\n\nBefore committing any files, please run the static code checks locally:\n\n```bash\ngit add .\npre-commit run --all-files\n```\n\n</details>\n\n### Environment Variables\n\nThis framework can be configured via environment variables. Most importantly, we need to know where your data is located (e.g. `PATH_Tivita_HeiPorSPECTRAL`) and where results should be stored (e.g. `PATH_HTC_RESULTS`). For a full list of possible environment variables, please have a look at the documentation of the [`Settings`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/settings.py) class.\n\n> \ud83d\udca1 If you set an environment variable for a dataset path, it is important that the variable name matches the folder name (e.g. the variable name `PATH_Tivita_HeiPorSPECTRAL` matches the dataset path `my/path/HeiPorSPECTRAL` with its folder name `HeiPorSPECTRAL`, whereas the variable name `PATH_Tivita_some_other_name` does not match). Furthermore, the dataset path needs to point to a directory which contains a `data` and an `intermediates` subfolder.\n\nThere are several options to set the environment variables. 
For example:\n\n-   You can specify a variable as part of your bash startup script `~/.bashrc` or before running each command:\n    ```bash\n    PATH_HTC_RESULTS=\"~/htc/results\" htc training --model image --config \"models/image/configs/default\"\n    ```\n    However, this might get cumbersome or might not give you the flexibility you need.\n-   Recommended if you cloned this repository (in contrast to simply installing it via pip): You can create a `.env` file in the repository root and add your variables, for example:\n\n    ```bash\n    export PATH_Tivita_HeiPorSPECTRAL=/mnt/nvme_4tb/HeiPorSPECTRAL\n    export PATH_HTC_RESULTS=~/htc/results\n\n    # You can also add your own datasets via (the environment variable name must start with PATH_Tivita)\n    # export PATH_Tivita_my_dataset=~/htc/Tivita_my_dataset:shortcut=my_shortcut\n    # You can then access it via settings.data_dirs.my_shortcut\n    ```\n\n-   Recommended if you installed the package via pip: You can create user settings for this application. The location is OS-specific. For Linux the location might be at `~/.config/htc/variables.env`. Please run `htc info` upon package installation to retrieve the exact location on your system. The content of the file is of the same format as of the `.env` above.\n\nAfter setting your environment variables, it is recommended to run `htc info` to check that your variables are correctly registered in the framework.\n\n## Tutorials\n\nA series of [tutorials](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials) can help you get started on the `htc` framework by guiding you through different usage scenarios.\n\n> \ud83d\udca1 The tutorials make use of our public HSI dataset [HeiPorSPECTRAL](https://heiporspectral.org/). If you want to directly run them, please download the dataset first and make it accessible via the environment variable `PATH_Tivita_HeiPorSPECTRAL` as described above.\n\n-   As a start, we recommend to take a look at this [general notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/General.ipynb) which showcases the basic functionalities of the `htc` framework. Namely, it demonstrates the usage of the `DataPath` class which is the entry point to load and process HSI data. For example, you will learn how to read HSI cubes, segmentation masks and meta data. Among others, you can use this information to calculate the median spectrum of an organ.\n-   If you want to use our framework with your own dataset, it might be necessary to write a custom `DataPath` class so that you can load and process your images and annotations. We [collected some tips](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CustomDataPath.md) on how this can be achieved.\n-   You have some HSI data at hand and want to use one of our pretrained models to generate predictions? Then our [prediction notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) has got you covered.\n-   You want to use our pretrained models to initialize the weights for your own training? See the section about [pretrained models](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#pretrained-models) below for details.\n-   You want to use our framework to train a network? The [network training notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/network_training/NetworkTraining.ipynb) will show you how to achieve this on the example of a heart and lung segmentation network.\n-   If you are interested in our technical validation (e.g. 
## Tutorials

A series of [tutorials](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials) can help you get started with the `htc` framework by guiding you through different usage scenarios.

> 💡 The tutorials make use of our public HSI dataset [HeiPorSPECTRAL](https://heiporspectral.org/). If you want to run them directly, please download the dataset first and make it accessible via the environment variable `PATH_Tivita_HeiPorSPECTRAL` as described above.

-   As a start, we recommend taking a look at this [general notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/General.ipynb) which showcases the basic functionalities of the `htc` framework. Namely, it demonstrates the usage of the `DataPath` class which is the entry point to load and process HSI data. For example, you will learn how to read HSI cubes, segmentation masks and metadata. Among others, you can use this information to calculate the median spectrum of an organ (see the short sketch after this list).
-   If you want to use our framework with your own dataset, it might be necessary to write a custom `DataPath` class so that you can load and process your images and annotations. We [collected some tips](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CustomDataPath.md) on how this can be achieved.
-   You have some HSI data at hand and want to use one of our pretrained models to generate predictions? Then our [prediction notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) has got you covered.
-   You want to use our pretrained models to initialize the weights for your own training? See the section about [pretrained models](https://github.com/IMSY-DKFZ/htc/tree/main/README.md#pretrained-models) below for details.
-   You want to use our framework to train a network? The [network training notebook](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/network_training/NetworkTraining.ipynb) will show you how to achieve this on the example of a heart and lung segmentation network.
-   If you are interested in our technical validation (e.g. because you want to compare your colorchecker images with ours) and need to create a mask to detect the different colorchecker fields, you might find our automatic [colorchecker mask creation pipeline](https://github.com/IMSY-DKFZ/htc/tree/main/htc/utils/ColorcheckerMaskCreation.ipynb) useful.
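To give a flavor of the `DataPath` workflow from the general notebook, here is a minimal sketch computing the median spectrum of an annotated organ. It assumes the HeiPorSPECTRAL dataset is set up as described above; the image name and label index are placeholders, and the exact reader methods and available annotations are documented in the `DataPath` class:

```python
import numpy as np
from htc import DataPath

# Example image name (placeholder, adapt to an image from your dataset)
path = DataPath.from_image_name("P086#2021_04_15_09_22_02")

cube = path.read_cube()         # HSI cube, shape (480, 640, 100)
seg = path.read_segmentation()  # per-pixel label indices, shape (480, 640)

# Median spectrum over all pixels of one label (label index 0 as a placeholder)
organ_pixels = cube[seg == 0]   # shape (n_pixels, 100)
median_spectrum = np.median(organ_pixels, axis=0)
print(median_spectrum.shape)    # (100,)
```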
We do not have a separate documentation website for our framework yet. However, most of the functions and classes are documented, so feel free to explore the source code or use your favorite IDE to display the documentation. If something does not become clear from the documentation, feel free to open an issue!

## Pretrained Models

This framework gives you access to a variety of pretrained segmentation and classification models. The models will be downloaded automatically, provided you specify the model type (e.g. `image`) and the run folder (e.g. `2022-02-03_22-58-44_generated_default_model_comparison`). They can then be used, for example, to [create predictions](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) on some data or as a baseline for your own training (see example below).

The following table lists all the models you can get:
| model type | modality | class | run folder |
| ----------- | ----------- | ----------- | ----------- |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_rat2pig_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2pig_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2pig_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2pig_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_rat2human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_rat2human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_pig2rat_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2rat_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2rat_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2rat_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_projected_pig2human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_projected_pig2human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_joint_pig-p+rat-p2human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_baseline_rat_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_rat_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_rat_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_rat_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_baseline_pig_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_pig_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_pig_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_pig_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | `2024-09-11_00-11-38_baseline_human_nested-*-2` (outer folds: [0](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_human_nested-0-2.zip), [1](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_human_nested-1-2.zip), [2](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2024-09-11_00-11-38_baseline_human_nested-2-2.zip)) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2023-02-08_14-48-02_organ_transplantation_0.8`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2023-02-08_14-48-02_organ_transplantation_0.8.zip) |
| image | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2023-01-29_11-31-04_organ_transplantation_0.8_rgb`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2023-01-29_11-31-04_organ_transplantation_0.8_rgb.zip) |
| image | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| image | param | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| image | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/image@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
| patch | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_64_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_64_model_comparison.zip) |
| patch | param | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_64_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_64_parameters_model_comparison.zip) |
| patch | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_64_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_64_rgb_model_comparison.zip) |
| patch | hsi | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| patch | param | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| patch | rgb | [`ModelImage`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/image/ModelImage.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/patch@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
| superpixel_classification | hsi | [`ModelSuperpixelClassification`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/superpixel_classification/ModelSuperpixelClassification.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/superpixel_classification@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| superpixel_classification | param | [`ModelSuperpixelClassification`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/superpixel_classification/ModelSuperpixelClassification.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/superpixel_classification@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| superpixel_classification | rgb | [`ModelSuperpixelClassification`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/superpixel_classification/ModelSuperpixelClassification.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/superpixel_classification@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
| pixel | hsi | [`ModelPixel`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/pixel/ModelPixel.py) | [`2022-02-03_22-58-44_generated_default_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/pixel@2022-02-03_22-58-44_generated_default_model_comparison.zip) |
| pixel | param | [`ModelPixelRGB`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/pixel/ModelPixelRGB.py) | [`2022-02-03_22-58-44_generated_default_parameters_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/pixel@2022-02-03_22-58-44_generated_default_parameters_model_comparison.zip) |
| pixel | rgb | [`ModelPixelRGB`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/pixel/ModelPixelRGB.py) | [`2022-02-03_22-58-44_generated_default_rgb_model_comparison`](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/pixel@2022-02-03_22-58-44_generated_default_rgb_model_comparison.zip) |
> 💡 The modality `param` refers to stacked tissue parameter images (named TPI in our paper [“Robust deep learning-based semantic organ segmentation in hyperspectral images”](https://doi.org/10.1016/j.media.2022.102488)). For the model type `patch`, pretrained models are available for the patch sizes 64 x 64 and 32 x 32 pixels. The modality and patch size are not specified when loading a model as they are already characterized by the run folder.

> 💡 A wildcard `*` in the run folder name refers to a collection of models (e.g. from nested cross-validation). You can use the name as noted in the table to retrieve all models from this collection as a list of models, or explicitly set the index to retrieve only one specific model from the collection. If you keep the wildcard for creating predictions (see below), all models will be loaded and the final prediction is an ensemble of the outputs from all individual networks (e.g. 15 networks with 3 outer and 5 inner folds).
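As a concrete illustration of the wildcard mechanism, the following sketch loads one of the nested cross-validation collections from the table above. The head-related parameters `n_channels` and `n_classes` mirror the Option 1 example below and are assumptions you should adapt to your task:

```python
from htc import ModelImage

# Wildcard run folder: retrieves all outer folds of the collection (a list of models)
models = ModelImage.pretrained_model(
    model="image",
    run_folder="2024-09-11_00-11-38_baseline_human_nested-*-2",
    n_channels=100,
    n_classes=19,
)

# Explicit index: retrieves only the model of outer fold 0
model_fold0 = ModelImage.pretrained_model(
    model="image",
    run_folder="2024-09-11_00-11-38_baseline_human_nested-0-2",
    n_channels=100,
    n_classes=19,
)
```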
After successful installation of the `htc` package, you can use any of the pretrained models listed in the table. There are several ways to use them, but the general principle is that models are always specified via their `model` and `run_folder`.

### Option 1: Use the models in your own training pipeline

Every model class listed in the table has a static method [`pretrained_model()`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/common/HTCModel.py) which you can use to create a model instance and initialize it with the pretrained weights. The model object will be an instance of `torch.nn.Module`.
The function has examples for all the different model types but as a teaser, consider the following example which loads the pretrained image HSI network:

```python
import torch
from htc import ModelImage, Normalization

run_folder = "2022-02-03_22-58-44_generated_default_model_comparison"  # HSI model
model = ModelImage.pretrained_model(model="image", run_folder=run_folder, n_channels=100, n_classes=19)
input_data = torch.randn(1, 100, 480, 640)  # NCHW
input_data = Normalization(channel_dim=1)(input_data)  # Model expects L1 normalized input
model(input_data).shape
# torch.Size([1, 19, 480, 640])
```

> 💡 Please note that when initializing the weights as in this example, the segmentation head is initialized randomly. Meaningful predictions on your own data can thus not be expected out of the box; you will have to train the model on your data first.
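Since only the segmentation head needs to be learned from scratch, a standard fine-tuning loop is usually enough to get started. The following is a minimal, generic PyTorch sketch (not an htc-specific API); it assumes the forward pass returns the logits tensor as in the example above, and the dummy data, optimizer choice and learning rate are placeholders for your own dataset and training setup:

```python
import torch
from htc import ModelImage

# Pretrained backbone with a randomly initialized segmentation head (cf. example above)
model = ModelImage.pretrained_model(
    model="image",
    run_folder="2022-02-03_22-58-44_generated_default_model_comparison",
    n_channels=100,
    n_classes=19,
)

# Dummy batch as a stand-in for your own (L1-normalized) HSI data and per-pixel labels
input_data = torch.randn(2, 100, 480, 640)  # NCHW
labels = torch.randint(0, 19, (2, 480, 640))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # hyperparameters are assumptions
criterion = torch.nn.CrossEntropyLoss()

model.train()
for _ in range(3):  # a few illustrative steps; use a proper training loop in practice
    optimizer.zero_grad()
    logits = model(input_data)  # (2, 19, 480, 640)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```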
### Option 2: Use the models to create predictions for your data

The models can be used to predict segmentation masks for your data. The segmentation models automatically sample from your input image according to the spatial granularity of the selected model (e.g. by creating patches) and the output is always a segmentation mask for the entire image. The set of output classes is determined by the training configuration, e.g. 18 organ classes + background for our semantic models. There are two alternatives for creating predictions:

1. The [`CreatingPredictions`](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/CreatingPredictions.ipynb) notebook shows how to create predictions for all images in a folder (via the `htc inference` command) and how to map the network output to meaningful label names.
2. If you want to compute predictions directly within your code for custom tensors, batches or paths, you can use the [`SinglePredictor`](https://github.com/IMSY-DKFZ/htc/tree/main/htc/model_processing/SinglePredictor.py) class.

### Option 3: Use the models to train a network with the `htc` package

If you are using the `htc` framework to [train your networks](https://github.com/IMSY-DKFZ/htc/tree/main/tutorials/network_training/NetworkTraining.ipynb), you only need to define the model in your configuration:

```json
{
    "model": {
        "pretrained_model": {
            "model": "image",
            "run_folder": "2022-02-03_22-58-44_generated_default_model_comparison"
        }
    }
}
```

This is very similar to option 1 but may be more convenient if you already train with the `htc` framework.

> 💡 We have a [JSON Schema file](https://github.com/IMSY-DKFZ/htc/tree/main/htc/utils/config.schema) which describes the structure of our config files, including descriptions of the attributes.

## CLI

There is a common command line interface for many scripts in this repository. More precisely, every script which is prefixed with `run_NAME.py` can also be run via `htc NAME` from any directory. For more details, just type `htc`.

## Papers

This repository contains code to reproduce our publications listed below:

### 📝 Xeno-learning: knowledge transfer across species in deep learning-based spectral image analysis

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/species_motivation.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/species_motivation.png" alt="Logo" width="700" /></a>
</div>

This paper introduces a cross-species knowledge transfer paradigm termed <i>xeno-learning</i> to make use of what has been learned in one species in other species. Specifically, we showcase how human segmentation performance on malperfused tissues can be improved by leveraging perfusion knowledge obtained from animal data via a <q>physiology-based data augmentation</q> method. All trained networks are available as pretrained models (baseline networks and networks which included the new data augmentation method during training). Compared to previous papers, we switched to a nested cross-validation scheme with 3 outer folds, so each training configuration is composed of three run folders on disk. However, you can still refer to them via the `run_folder` argument by using a wildcard (e.g., `2024-09-11_00-11-38_baseline_human_nested-*-2` to get the baseline networks `0`, `1` and `2` trained on human data). You can find all notebooks which generate the paper figures in [paper/XenoLearning2024](https://github.com/IMSY-DKFZ/htc/tree/main/paper/XenoLearning2024) accompanied by [reproducibility instructions](https://github.com/IMSY-DKFZ/htc/tree/main/paper/XenoLearning2024/reproducibility.md). The code for all experiments is located in the [htc_projects/species](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/species/) folder.

> 📂 The dataset for this paper is not fully publicly available, but a subset of the data is available through the public [HeiPorSPECTRAL](https://heiporspectral.org/) dataset.

### 📝 [Semantic segmentation of surgical hyperspectral images under geometric domain shifts](https://doi.org/10.48550/arXiv.2303.10972)

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MICCAI_abstract.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MICCAI_abstract.png" alt="Logo" width="600" /></a>
</div>

This MICCAI 2023 paper is the direct successor of our MIA 2022 paper. We analyzed how well our networks perform under geometric domain shifts which commonly occur in real-world open surgeries (e.g. situs occlusions). The effect is drastic (drop of the Dice similarity coefficient by 45 %), but the good news is that performance on par with in-distribution data can be achieved with our simple, model-independent solution (an augmentation method). You can find all the code in [htc_projects/context](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/context) and the paper figures as well as [reproducibility instructions](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MICCAI2023/reproducibility.md) in [paper/MICCAI2023](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MICCAI2023).
Pretrained models are available for our organ transplantation networks with HSI and RGB modalities.

> 💡 If you are only interested in our data augmentation method, you can also head over to [Kornia](https://github.com/kornia/kornia) where this augmentation is implemented for generic use cases (including 2D and 3D data). You will find it under the name [`RandomTransplantation`](https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomTransplantation).

> 📂 The dataset for this paper is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@inproceedings{sellner_context_2023,
  author    = {Sellner, Jan and Seidlitz, Silvia and Studier-Fischer, Alexander and Motta, Alessandro and Özdemir, Berkin and Müller-Stich, Beat Peter and Nickel, Felix and Maier-Hein, Lena},
  editor    = {Greenspan, Hayit and Madabhushi, Anant and Mousavi, Parvin and Salcudean, Septimiu and Duncan, James and Syeda-Mahmood, Tanveer and Taylor, Russell},
  location  = {Cham},
  publisher = {Springer Nature Switzerland},
  booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2023},
  date      = {2023},
  doi       = {10.1007/978-3-031-43996-4_59},
  isbn      = {978-3-031-43996-4},
  pages     = {618--627},
  title     = {Semantic Segmentation of Surgical Hyperspectral Images Under Geometric Domain Shifts},
}
```

</details>

### 📝 [Robust deep learning-based semantic organ segmentation in hyperspectral images](https://doi.org/10.1016/j.media.2022.102488)

<div align="center">
<a href="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_abstract.pdf"><img src="https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/MIA_abstract.png" alt="Logo" width="800" /></a>
</div>

In this paper, we tackled fully automatic organ segmentation and compared deep learning models on different spatial granularities (e.g. patch vs. image) and modalities (e.g. HSI vs. RGB). Furthermore, we studied the required amount of training data and the generalization capabilities of our models across subjects. The pretrained networks are related to this paper. You can find the notebooks to generate the paper figures in [paper/MIA2022](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MIA2022) (the folder also includes a [reproducibility document](https://github.com/IMSY-DKFZ/htc/tree/main/paper/MIA2022/reproducibility.md)) and the models in [htc/models](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models). For each model, there are three configuration files, namely `default`, `default_rgb` and `default_parameters`, which correspond to the HSI, RGB and TPI modality, respectively. You can also download the [NSD thresholds](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/models/nsd_thresholds_semantic.csv) which we used for the NSD metric (cf. Fig. 12).

> 📂 The dataset for this paper is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{seidlitz_seg_2022,
  author       = {Seidlitz, Silvia and Sellner, Jan and Odenthal, Jan and Özdemir, Berkin and Studier-Fischer, Alexander and Knödler, Samuel and Ayala, Leonardo and Adler, Tim J. and Kenngott, Hannes G. and Tizabi, Minu and Wagner, Martin and Nickel, Felix and Müller-Stich, Beat P. and Maier-Hein, Lena},
  date         = {2022-08},
  doi          = {10.1016/j.media.2022.102488},
  issn         = {1361-8415},
  journaltitle = {Medical Image Analysis},
  keywords     = {Hyperspectral imaging,Surgical data science,Deep learning,Open surgery,Organ segmentation,Semantic scene segmentation},
  pages        = {102488},
  title        = {Robust deep learning-based semantic organ segmentation in hyperspectral images},
  volume       = {80},
}
```

</details>

### 📝 [Dealing with I/O bottlenecks in high-throughput model training](https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/PyTorchConference_Poster.pdf)

The poster was presented at the PyTorch Conference 2023 and describes several solutions to improve data loading for faster network training. This work originated from our MICCAI 2023 paper, where we load huge amounts of data while using a relatively small network, resulting in GPU idle times whenever the GPU has to wait for the CPU to deliver new data. This created the need for fast data loading strategies so that the CPU delivers data in time for the GPU. The solutions include (1) efficient data storage via [Blosc](https://www.blosc.org/) compression, (2) appropriate precision settings, (3) GPU instead of CPU augmentations using the [Kornia](https://kornia.readthedocs.io) library and (4) a fixed shared pinned memory buffer for efficient data transfer to the GPU. For the last part, you will find the relevant code to create the buffer in this repository as part of the [SharedMemoryDatasetMixin](https://github.com/IMSY-DKFZ/htc/tree/main/htc/models/common/SharedMemoryDatasetMixin.py) class (`_add_tensor_shared()` method).
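To illustrate the idea behind solution (4), here is a minimal, generic PyTorch sketch of a fixed pinned memory buffer. This is not the `SharedMemoryDatasetMixin` implementation (which additionally shares the buffer across data loading processes); the shapes and the CUDA device are assumptions:

```python
import torch

# Allocate the pinned (page-locked) host buffer once with a fixed batch shape (assumption)
buffer = torch.empty(8, 100, 480, 640, pin_memory=True)

device = torch.device("cuda")  # requires a CUDA device
for _ in range(4):  # stand-in for the data loading loop
    batch = torch.randn(8, 100, 480, 640)  # in practice produced by the CPU data workers
    buffer.copy_(batch)  # reuse the same buffer instead of allocating new memory per batch
    gpu_batch = buffer.to(device, non_blocking=True)  # asynchronous host-to-device copy
```

Because pinned memory allows the host-to-device copy to run asynchronously, the CPU can prepare the next batch while the previous one is still being transferred, which helps keep the GPU busy.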
You can find the code to generate the results figures of the poster in [paper/PyTorchConf2023](https://github.com/IMSY-DKFZ/htc/tree/main/paper/PyTorchConf2023) including [reproducibility instructions](https://github.com/IMSY-DKFZ/htc/tree/main/paper/PyTorchConf2023/reproducibility.md). The experiment code can be found in the project folder [htc_projects/benchmarking](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/benchmarking).

> 📂 The dataset for this poster is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@misc{sellner_benchmarking_2023,
  author       = {Sellner, Jan and Seidlitz, Silvia and Maier-Hein, Lena},
  url          = {https://e130-hyperspectal-tissue-classification.s3.dkfz.de/figures/PyTorchConference_Poster.pdf},
  date         = {2023-10-16},
  howpublished = {Poster presented at the PyTorch Conference 2023, San Francisco, United States of America},
  title        = {Dealing with I/O bottlenecks in high-throughput model training},
}
```

</details>

### 📝 [HeiPorSPECTRAL - the Heidelberg Porcine HyperSPECTRAL Imaging Dataset of 20 Physiological Organs](https://doi.org/10.1038/s41597-023-02315-8)

This paper introduces the [HeiPorSPECTRAL](https://heiporspectral.org/) dataset containing 5756 hyperspectral images from 11 subjects. We are using these images in our tutorials. You can find the visualization notebook for the paper figures in [paper/NatureData2023](https://github.com/IMSY-DKFZ/htc/tree/main/paper/NatureData2023) (the folder also includes a [reproducibility document](https://github.com/IMSY-DKFZ/htc/tree/main/paper/NatureData2023/reproducibility.md)) and the remaining code in [htc_projects/atlas_open](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/atlas_open).

If you want to learn more about the [HeiPorSPECTRAL](https://heiporspectral.org/) dataset (e.g. the underlying data structure) or you stumbled upon a file and want to know how to read it, you might find this [notebook with low-level details](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/atlas_open/FileReference.ipynb) helpful.

> 📂 The dataset for this paper is publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{studierfischer_open_2023,
  author       = {Studier-Fischer, Alexander and Seidlitz, Silvia and Sellner, Jan and Bressan, Marc and Özdemir, Berkin and Ayala, Leonardo and Odenthal, Jan and Knoedler, Samuel and Kowalewski, Karl-Friedrich and Haney, Caelan Max and Salg, Gabriel and Dietrich, Maximilian and Kenngott, Hannes and Gockel, Ines and Hackert, Thilo and Müller-Stich, Beat Peter and Maier-Hein, Lena and Nickel, Felix},
  url          = {https://heiporspectral.org/},
  date         = {2023-06-24},
  doi          = {10.1038/s41597-023-02315-8},
  issn         = {2052-4463},
  journaltitle = {Scientific Data},
  number       = {1},
  pages        = {414},
  title        = {HeiPorSPECTRAL - the Heidelberg Porcine HyperSPECTRAL Imaging Dataset of 20 Physiological Organs},
  volume       = {10},
}
```

</details>

### 📝 [Spectral organ fingerprints for machine learning-based intraoperative tissue classification with hyperspectral imaging in a porcine model](https://doi.org/10.1038/s41598-022-15040-w)

In this paper, we trained a classification model based on median spectra from HSI data.
You can find the model code in [htc_projects/atlas](https://github.com/IMSY-DKFZ/htc/tree/main/htc_projects/atlas) and the confusion matrix figure of the paper in [paper/NatureReports2022](https://github.com/IMSY-DKFZ/htc/tree/main/paper/NatureReports2022) (including a reproducibility document).

> 📂 The dataset for this paper is not fully publicly available, but a subset of the data is available through the public [HeiPorSPECTRAL](https://heiporspectral.org/) dataset.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{studierfischer_atlas_2022,
  author       = {Studier-Fischer, Alexander and Seidlitz, Silvia and Sellner, Jan and Özdemir, Berkin and Wiesenfarth, Manuel and Ayala, Leonardo and Odenthal, Jan and Knödler, Samuel and Kowalewski, Karl Friedrich and Haney, Caelan Max and Camplisson, Isabella and Dietrich, Maximilian and Schmidt, Karsten and Salg, Gabriel Alexander and Kenngott, Hannes Götz and Adler, Tim Julian and Schreck, Nicholas and Kopp-Schneider, Annette and Maier-Hein, Klaus and Maier-Hein, Lena and Müller-Stich, Beat Peter and Nickel, Felix},
  date         = {2022-06-30},
  doi          = {10.1038/s41598-022-15040-w},
  issn         = {2045-2322},
  journaltitle = {Scientific Reports},
  number       = {1},
  pages        = {11028},
  title        = {Spectral organ fingerprints for machine learning-based intraoperative tissue classification with hyperspectral imaging in a porcine model},
  volume       = {12},
}
```

</details>

### 📝 [Künstliche Intelligenz und hyperspektrale Bildgebung zur bildgestützten Assistenz in der minimal-invasiven Chirurgie](https://doi.org/10.1007/s00104-022-01677-w)

This paper (in German; the title translates to "Artificial intelligence and hyperspectral imaging for image-guided assistance in minimally invasive surgery") presents several applications of intraoperative HSI, including our organ [segmentation](https://doi.org/10.1016/j.media.2022.102488) and [classification](https://doi.org/10.1038/s41598-022-15040-w) work. You can find the code generating our figure for this paper at [paper/Chirurg2022](https://github.com/IMSY-DKFZ/htc/tree/main/paper/Chirurg2022).

> 📂 The sample image used here is contained in the dataset from our paper [“Robust deep learning-based semantic organ segmentation in hyperspectral images”](https://doi.org/10.1016/j.media.2022.102488) and hence is not publicly available.

<details closed>
<summary>Cite via BibTeX</summary>

```bibtex
@article{chalopin_chirurgie_2022,
  author       = {Chalopin, Claire and Nickel, Felix and Pfahl, Annekatrin and Köhler, Hannes and Maktabi, Marianne and Thieme, René and Sucher, Robert and Jansen-Winkeln, Boris and Studier-Fischer, Alexander and Seidlitz, Silvia and Maier-Hein, Lena and Neumuth, Thomas and Melzer, Andreas and Müller-Stich, Beat Peter and Gockel, Ines},
  date         = {2022-10-01},
  doi          = {10.1007/s00104-022-01677-w},
  issn         = {2731-698X},
  journaltitle = {Die Chirurgie},
  number       = {10},
  pages        = {940--947},
  title        = {Künstliche Intelligenz und hyperspektrale Bildgebung zur bildgestützten Assistenz in der minimal-invasiven Chirurgie},
  volume       = {93},
}
```

</details>

## Funding
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (NEURAL SPICING, grant agreement No. 101002198) and was supported by the German Cancer Research Center (DKFZ) and the Helmholtz Association under the joint research school HIDSS4Health (Helmholtz Information and Data Science School for Health). It further received funding from the Surgical Oncology Program of the National Center for Tumor Diseases (NCT) Heidelberg.
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Framework for automatic classification and segmentation of hyperspectral images.",
    "version": "0.0.17",
    "project_urls": {
        "Homepage": "https://github.com/imsy-dkfz/htc"
    },
    "split_keywords": [
        "surgical data science",
        " open surgery",
        " hyperspectral imaging",
        " organ segmentation",
        " semantic scene segmentation",
        " deep learning"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4fe25fe957a4e9fd469310b6bb1653b0f7f307d91875725162b1ac477c54d537",
                "md5": "38e5e93a840b0b4c4b133052eb707be9",
                "sha256": "d318837568fbde7841537c9db65be82d259d97d22e422694ed3aaef177980cbb"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp310-cp310-macosx_10_14_x86_64.whl",
            "has_sig": false,
            "md5_digest": "38e5e93a840b0b4c4b133052eb707be9",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.10",
            "size": 688419,
            "upload_time": "2024-10-23T15:32:31",
            "upload_time_iso_8601": "2024-10-23T15:32:31.080016Z",
            "url": "https://files.pythonhosted.org/packages/4f/e2/5fe957a4e9fd469310b6bb1653b0f7f307d91875725162b1ac477c54d537/imsy_htc-0.0.17-cp310-cp310-macosx_10_14_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9d17782bae16e2d6a4e1c28a390f7a1767c192a621d579fde796251ba2d0695c",
                "md5": "369ea1fe7f0bd90d1523470ea73e76e1",
                "sha256": "890f9ba1ea25417d74eadd8f6c41310c09154caeb86433a57bb0605a6fe89e4f"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp310-cp310-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "369ea1fe7f0bd90d1523470ea73e76e1",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.10",
            "size": 684725,
            "upload_time": "2024-10-23T15:32:32",
            "upload_time_iso_8601": "2024-10-23T15:32:32.534946Z",
            "url": "https://files.pythonhosted.org/packages/9d/17/782bae16e2d6a4e1c28a390f7a1767c192a621d579fde796251ba2d0695c/imsy_htc-0.0.17-cp310-cp310-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "1f007413d771da4deadbde8a196244be8e3f129c44e50751434af996b3241d60",
                "md5": "bc5d0b8cf1260bc5b6ed94174f62465b",
                "sha256": "f7d85a77e600f0dcec9e8f5b7f063ad0a02206940d63bb5b4419d83824ba961c"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "bc5d0b8cf1260bc5b6ed94174f62465b",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.10",
            "size": 12462530,
            "upload_time": "2024-10-23T15:32:33",
            "upload_time_iso_8601": "2024-10-23T15:32:33.934151Z",
            "url": "https://files.pythonhosted.org/packages/1f/00/7413d771da4deadbde8a196244be8e3f129c44e50751434af996b3241d60/imsy_htc-0.0.17-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "42d3f4f3b9642b85ca9795aecf4804d6a6d2934f73b49e25039d93d53d4645e6",
                "md5": "72c23e54904d6a2ba810b375814d7e24",
                "sha256": "876d2b3ba7a3d9883222b9b6a468eb92213c70efbfa42d2b7ba517f225e59b6a"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp310-cp310-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "72c23e54904d6a2ba810b375814d7e24",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.10",
            "size": 680438,
            "upload_time": "2024-10-23T15:32:35",
            "upload_time_iso_8601": "2024-10-23T15:32:35.953218Z",
            "url": "https://files.pythonhosted.org/packages/42/d3/f4f3b9642b85ca9795aecf4804d6a6d2934f73b49e25039d93d53d4645e6/imsy_htc-0.0.17-cp310-cp310-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f26a3ea5aaa5778be331c34a860c5cc8d783557ea0f446b1f07fe233b920a875",
                "md5": "16dc3b5b94ab950b1a78bc97cb1fa1f9",
                "sha256": "b0495b842fd3f696d7636306074613ad7af033737b94cc391db7ac5e14b4fd4d"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp311-cp311-macosx_10_14_x86_64.whl",
            "has_sig": false,
            "md5_digest": "16dc3b5b94ab950b1a78bc97cb1fa1f9",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.10",
            "size": 690184,
            "upload_time": "2024-10-23T15:32:37",
            "upload_time_iso_8601": "2024-10-23T15:32:37.251230Z",
            "url": "https://files.pythonhosted.org/packages/f2/6a/3ea5aaa5778be331c34a860c5cc8d783557ea0f446b1f07fe233b920a875/imsy_htc-0.0.17-cp311-cp311-macosx_10_14_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5b6da6a43930846ee709a2ce27fc485c8eb82d9cabfdaeacf18c6313fb3fe5e9",
                "md5": "f8a0ff7eb7db04be3ea0ff1f997931c9",
                "sha256": "2fe5507928b90a0088e0832abbfb3df076ff19c5bce128800da16fc4fa2dd169"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp311-cp311-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "f8a0ff7eb7db04be3ea0ff1f997931c9",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.10",
            "size": 686383,
            "upload_time": "2024-10-23T15:32:38",
            "upload_time_iso_8601": "2024-10-23T15:32:38.517189Z",
            "url": "https://files.pythonhosted.org/packages/5b/6d/a6a43930846ee709a2ce27fc485c8eb82d9cabfdaeacf18c6313fb3fe5e9/imsy_htc-0.0.17-cp311-cp311-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f907cfbd5ff9d6dc60dcf3683e13676a4fef3b8b769a17f60d5e0a54947f39c5",
                "md5": "cdb29926f4ea29439c2981c969ac1c06",
                "sha256": "9f4e722dba9eae3066c115da9fe6180b649a6b8e72e39d7daf62352780f79bb3"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "cdb29926f4ea29439c2981c969ac1c06",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.10",
            "size": 12507015,
            "upload_time": "2024-10-23T15:32:40",
            "upload_time_iso_8601": "2024-10-23T15:32:40.860068Z",
            "url": "https://files.pythonhosted.org/packages/f9/07/cfbd5ff9d6dc60dcf3683e13676a4fef3b8b769a17f60d5e0a54947f39c5/imsy_htc-0.0.17-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "25829cb36f673f90a925a8b7fbf018e2b900b66c2758de541c9188284ee9581f",
                "md5": "41853ef97f3117770de8fe8bec5fec68",
                "sha256": "59623a4ebf9841ec77f582fc9057c4b6318701b8ace4f8ae871021e88f69c01f"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp311-cp311-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "41853ef97f3117770de8fe8bec5fec68",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.10",
            "size": 681607,
            "upload_time": "2024-10-23T15:32:42",
            "upload_time_iso_8601": "2024-10-23T15:32:42.955559Z",
            "url": "https://files.pythonhosted.org/packages/25/82/9cb36f673f90a925a8b7fbf018e2b900b66c2758de541c9188284ee9581f/imsy_htc-0.0.17-cp311-cp311-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c6bc4be817d6679ac0b0cb170b549c1b9a95fec48d8c04fa3970e134926fb26d",
                "md5": "a0ed4778b3ff9cdd00deea38e4c83faa",
                "sha256": "a2b8c2143d54578e3c5bfb98a4b3c2489ddfb5e899b169913b82de48c548eb58"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp312-cp312-macosx_10_14_x86_64.whl",
            "has_sig": false,
            "md5_digest": "a0ed4778b3ff9cdd00deea38e4c83faa",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.10",
            "size": 689712,
            "upload_time": "2024-10-23T15:32:44",
            "upload_time_iso_8601": "2024-10-23T15:32:44.597030Z",
            "url": "https://files.pythonhosted.org/packages/c6/bc/4be817d6679ac0b0cb170b549c1b9a95fec48d8c04fa3970e134926fb26d/imsy_htc-0.0.17-cp312-cp312-macosx_10_14_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b89065048abd18046ea026bac3f9131b1cda7038deb474df49a5f1b831848563",
                "md5": "f13196887f0c8c13ad9a40213fb0cd44",
                "sha256": "08a77067d13f7d4873883e8b4344a4b1303b35a4108fa20c7d5c1e0e27eed45b"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp312-cp312-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "f13196887f0c8c13ad9a40213fb0cd44",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.10",
            "size": 685980,
            "upload_time": "2024-10-23T15:32:46",
            "upload_time_iso_8601": "2024-10-23T15:32:46.707508Z",
            "url": "https://files.pythonhosted.org/packages/b8/90/65048abd18046ea026bac3f9131b1cda7038deb474df49a5f1b831848563/imsy_htc-0.0.17-cp312-cp312-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a8889a0a11c50f2fb8ed98bf392c90eeec4fba055fa8e9b4b2c07ee3a5449fcb",
                "md5": "6f08ba0acbe53da5f9a4d1f55f41a0f2",
                "sha256": "3447505e8a7a6d56941daecee6af10c5e15cdc0beecbd8ef627f0656a1cb693d"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "6f08ba0acbe53da5f9a4d1f55f41a0f2",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.10",
            "size": 12552325,
            "upload_time": "2024-10-23T15:32:48",
            "upload_time_iso_8601": "2024-10-23T15:32:48.944965Z",
            "url": "https://files.pythonhosted.org/packages/a8/88/9a0a11c50f2fb8ed98bf392c90eeec4fba055fa8e9b4b2c07ee3a5449fcb/imsy_htc-0.0.17-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "662298bdcb0439b59b64107a05f7b4137b874c1e2b858bd3418428dc419adfdc",
                "md5": "6ff6628bd1f77e20b5c841b42fc6e6ae",
                "sha256": "98d3e8172ed61bc665bd7c562fb1021b915520209ad720d4b145cb2db2ba7b81"
            },
            "downloads": -1,
            "filename": "imsy_htc-0.0.17-cp312-cp312-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "6ff6628bd1f77e20b5c841b42fc6e6ae",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.10",
            "size": 681987,
            "upload_time": "2024-10-23T15:32:51",
            "upload_time_iso_8601": "2024-10-23T15:32:51.365554Z",
            "url": "https://files.pythonhosted.org/packages/66/22/98bdcb0439b59b64107a05f7b4137b874c1e2b858bd3418428dc419adfdc/imsy_htc-0.0.17-cp312-cp312-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-10-23 15:32:31",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "imsy-dkfz",
    "github_project": "htc",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "imsy-htc"
}
        
Elapsed time: 0.86254s