# Bio-Volumentations

`Bio-Volumentations` is an **image augmentation and preprocessing package** for 3D (volumetric), 
4D (time-lapse volumetric or multi-channel volumetric), and 5D (time-lapse multi-channel volumetric) 
biomedical images and their annotations.

The library offers a wide range of efficiently implemented image transformations.
This includes both preprocessing transformations (such as intensity normalisation and padding) 
and augmentation transformations (such as affine transform, noise addition and removal, and contrast manipulation).


# Why use Bio-Volumentations?

`Bio-Volumentations` is a handy tool for image manipulation in machine learning applications. 
The library can transform **3D to 5D images** with **image-based and point-based annotations**, 
gives you **fine-grained control** over the transformation pipelines, 
and can be used with **any major Python deep learning library** 
(including PyTorch, PyTorch Lightning, TensorFlow, and Keras) 
in **a wide range of applications** including classification, object detection, semantic & instance 
segmentation, and object tracking.

`Bio-Volumentations` builds upon widely used libraries such as Albumentations and TorchIO 
(see the _Contributions and Acknowledgements_ section below) and is accompanied by 
[detailed documentation and a user guide](https://biovolumentations.readthedocs.io/1.3.1/). 
It can therefore easily be adopted by developers.


# Installation

Simply install the package from PyPI using pip:
```commandline
pip install bio-volumentations
```

That's it :)

For more details, see [the project's PyPI page](https://pypi.org/project/bio-volumentations/).

### Requirements

- [NumPy](https://numpy.org/)
- [SciPy](https://scipy.org/)
- [Scikit-image](https://scikit-image.org/)
- [SimpleITK](https://simpleitk.org/)


# Usage

### The First Example

To check out our library on test data, you can run the example provided in the `example` folder.

There, you will find a test sample consisting of a 3D image (`image.tif`) with an associated binary mask
(`segmentation_mask.tif`), a runnable Python script, and the transformed sample (`image_transformed.tif` and 
`segmentation_mask_transformed.tif`).

To run the example, please download the `example` folder and 
install the `bio-volumentations`, `tifffile`, and `imagecodecs` packages into your Python environment. 
Then run the following from the command line:

```commandline
cd example
python transformation_example.py
```

The script will generate a new randomly transformed sample and save it into the `image_transformed.tif` and 
`segmentation_mask_transformed.tif` files. These files can be opened using ImageJ.

This example uses data from the _Fluo-N3DH-CE_ dataset [1] from the Cell Tracking Challenge repository [2].

[1] Murray J, Bao Z, Boyle T, et al. Automated analysis of embryonic gene expression with cellular 
resolution in _C. elegans_. _Nat Methods_ 2008;**5**:703–709. https://doi.org/10.1038/nmeth.1228.

[2] Maška M, Ulman V, Delgado-Rodriguez P, et al. The Cell Tracking Challenge: 10 years of objective 
benchmarking. _Nat Methods_ 2023;**20**:1010–1020. https://doi.org/10.1038/s41592-023-01879-y.
Repository: https://celltrackingchallenge.net/3d-datasets/.

### Importing

Import the library into your project using:
```python
import bio_volumentations as biovol
```

### How to Use Bio-Volumentations?

The `Bio-Volumentations` library processes 3D, 4D, and 5D images. Each image must be 
represented as a `numpy.ndarray` and must conform to the following conventions:

- The order of dimensions is [C, Z, Y, X, T], where C is the channel dimension, 
   T is the time dimension, and Z, Y, and X are the spatial dimensions.
- The three spatial dimensions (Z, Y, X) must be present. To transform a 2D image, please create a dummy Z dimension first. 
- The channel (C) dimension is optional. If it is not present, the library will automatically
   create a dummy dimension in its place, so the output image shape will be [1, Z, Y, X].
- The time (T) dimension is optional and can only be present if the channel (C) dimension is 
   also present in the input data. To process single-channel time-lapse images, please create a dummy C dimension.

Thus, an input image is interpreted in the following ways based on its dimensionality:

1. 3D: a single-channel volumetric image [Z, Y, X];
2. 4D: a multi-channel volumetric image [C, Z, Y, X];
3. 5D: a single- or multi-channel volumetric image sequence [C, Z, Y, X, T].

The shape of the output image is either [C, Z, Y, X] (cases 1 & 2) or [C, Z, Y, X, T] (case 3).

The images are type-cast to a floating-point datatype before being transformed, irrespective of their actual datatype.
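
The dummy dimensions mentioned above can be created with plain NumPy indexing. A minimal sketch, assuming only the shape conventions described in this section:

```python
import numpy as np

# A 2D image [Y, X]: add a dummy Z dimension to get [Z, Y, X]
img_2d = np.random.rand(256, 256)
img_3d = img_2d[np.newaxis, ...]           # shape (1, 256, 256)

# A single-channel time-lapse [Z, Y, X, T]: add a dummy C dimension
# to obtain the expected 5D layout [C, Z, Y, X, T]
timelapse = np.random.rand(64, 128, 128, 10)
timelapse_5d = timelapse[np.newaxis, ...]  # shape (1, 64, 128, 128, 10)
```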

For the specification of image annotation conventions, please see below.

The transformations are implemented as callable classes inheriting from an abstract `Transform` class.
Upon instantiating a transformation object, one has to specify the parameters of the transformation.

All transformations work in a fully 3D fashion. Individual channels and time points of a data volume
are usually transformed separately and in the same manner; however, certain transformations can also work
along these dimensions. For instance, `GaussianBlur` can perform the blurring along the temporal dimension
and with a different strength in each channel.
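
For illustration, here is a minimal sketch (reusing transforms from the examples below) of transforming a 5D input; per the conventions above, the output keeps the [C, Z, Y, X, T] layout:

```python
import numpy as np
from bio_volumentations import Compose, RandomRotate90, GaussianBlur

# A 5D time-lapse image with 2 channels and 10 time points: [C, Z, Y, X, T]
img_5d = np.random.rand(2, 64, 128, 128, 10)

aug = Compose([
    RandomRotate90(axes=[1, 2, 3], p=1),
    GaussianBlur(sigma=1.2, p=1),
])

# Channels and time points are transformed consistently in 3D;
# the output shape remains [C, Z, Y, X, T]
transformed = aug(image=img_5d)['image']
```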

The data can be transformed by a call to the transformation object.
**It is strongly recommended to use `Compose` to create and use transformation pipelines.** <br>
An instantiated `Compose` object encapsulates the full transformation pipeline and provides additional support:
it automatically checks and adjusts image format and datatype, outputs the image as a contiguous array, and
can optionally convert the transformed image to a desired format.
If you call transformations outside of `Compose`, we cannot guarantee that all assumptions
are checked and enforced, so you might encounter unexpected behaviour.

Below are several examples of how to use this library. You are also welcome to check 
[our documentation pages](https://biovolumentations.readthedocs.io/1.3.1/).

### Example: Transforming a Single Image

To create the transformation pipeline, you just need to instantiate all desired transformations
(with the desired parameter values)
and then feed a list of these transformation objects into a new `Compose` object. 

Optionally, you can specify a datatype conversion transformation that will be applied after the last transformation
in the list, e.g. from the default `numpy.ndarray` to a `torch.Tensor`. You can also specify the probability
of actually applying the whole pipeline as a number between 0 and 1. 
The default probability is 1 (i.e., the pipeline is applied in each call).
See the [docs](https://biovolumentations.readthedocs.io/1.3.1/examples.html) for more details.
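
For example, a sketch of a pipeline applied with 50% probability, assuming the pipeline-level probability is exposed as a `p` parameter of `Compose` (analogously to the individual transforms):

```python
from bio_volumentations import Compose, RandomGamma

# Assumption: Compose accepts a pipeline-level probability `p`;
# with p=0.5, roughly half of the calls return the input unchanged.
aug = Compose([RandomGamma(gamma_limit=(0.8, 1.2), p=1)], p=0.5)
```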

The `Compose` object is callable. The data is passed as a keyword argument, and the call returns a dictionary 
with the same keyword and the corresponding transformed image. This might look like overkill for a single image, 
but it will come in handy when transforming images with annotations. The default key for an image is `'image'`.


```python
import numpy as np
from bio_volumentations import Compose, RandomGamma, RandomRotate90, GaussianBlur

# Create the transformation pipeline using Compose
aug = Compose([
        RandomGamma(gamma_limit = (0.8, 1.2), p = 0.8),
        RandomRotate90(axes = [1, 2, 3], p = 1),
        GaussianBlur(sigma = 1.2, p = 0.8)
      ])

# Generate an image - shape [C, Z, Y, X]
img = np.random.rand(1, 128, 256, 256)

# Transform the image
# Please note that the image must be passed as a keyword argument to the transformation pipeline
# and extracted from the outputted dictionary.
data = {'image': img}
aug_data = aug(**data)
transformed_img = aug_data['image']
```

### Example: Transforming Images with Annotations

Sometimes, it is necessary to transform an image with some corresponding additional targets.
To that end, `Bio-Volumentations` define several target types:

- `image` for the image data;
- `mask` for integer-valued label images;
- `float_mask` for real-valued label images;
- `keypoints` for a list of key points; and
- `value` for non-transformed values.

You cannot define your own target types; that would require re-implementing all existing transforms.

For more information on the format of individual target types, see the 
[Getting Started guide](https://biovolumentations.readthedocs.io/1.3.1/examples.html#example-transforming-images-with-annotations).

If a `Random...` transform receives multiple targets in a single call,
the same transformation parameters are used to transform all of these targets.
For example, `RandomAffineTransform` applies the same geometric transformation to all target types in a single call.

Some transformations, such as `RandomGaussianNoise` or `RandomGamma`, are only defined for the `image` target 
and leave the other targets unchanged. Please consult the 
[documentation of the individual transforms](https://biovolumentations.readthedocs.io/1.3.1/modules.html) for more details.

The corresponding targets are fed to the `Compose` object call as keyword arguments and extracted from the outputted
dictionary using the same keys. The default key values are `'image'`, `'mask'`, `'float_mask'`, `'keypoints'`, and `'value'`.

```python
import numpy as np
from bio_volumentations import Compose, RandomGamma, RandomRotate90, GaussianBlur

# Create the transformation using Compose
aug = Compose([
        RandomGamma(gamma_limit = (0.8, 1.2), p = 0.8),
        RandomRotate90(axes = [1, 2, 3], p = 1),
        GaussianBlur(sigma = 1.2, p = 0.8)
      ])

# Generate an image and a corresponding labeled image
img = np.random.rand(1, 128, 256, 256)
lbl = np.random.randint(0, 2, size=(128, 256, 256), dtype=np.uint8)

# Transform the images
# Please note that the images must be passed as keyword arguments to the transformation pipeline
# and extracted from the outputted dictionary.
data = {'image': img, 'mask': lbl}
aug_data = aug(**data)
transformed_img, transformed_lbl = aug_data['image'], aug_data['mask']
```


### Example: Transforming Multiple Targets of the Same Type

You can pass an arbitrary number of inputs to any transformation. To achieve this, you have to define the keywords
for the individual inputs when creating the `Compose` object.
The specified keywords will then be used to input the images to the transformation call as well as to extract the
transformed images from the outputted dictionary.

Specifically, you can define `image`-type target keywords using the `img_keywords` parameter: its value
must be a tuple of strings, each representing a single keyword. Similarly, there are `mask_keywords`,
`fmask_keywords`, `value_keywords`, and `keypoints_keywords` parameters for the other target types. 
Setting any of these parameters overwrites its default value.

Please note that there must always be an `image`-type target with the keyword `'image'`.
Apart from that, the keywords can be any valid dictionary keys, and they must be unique.

You do not need to use all specified keywords in a transformation call. However, at least the target with
the `'image'` keyword must be present in each transformation call.
In our example below, we only transform three targets even though we defined four target keywords explicitly 
(and there are some implicit keywords as well for the other target types).

```python
import numpy as np
from bio_volumentations import Compose, RandomGamma, RandomRotate90, GaussianBlur

# Create the transformation using Compose: do not forget to define targets
aug = Compose([
        RandomGamma(gamma_limit = (0.8, 1.2), p = 0.8),
        RandomRotate90(axes = [1, 2, 3], p = 1),
        GaussianBlur(sigma = 1.2, p = 0.8)
    ],
    img_keywords=('image', 'abc'), mask_keywords=('mask',), fmask_keywords=('nothing',))

# Generate the image data: two images and a single int-valued mask
img = np.random.rand(1, 128, 256, 256)
img1 = np.random.rand(1, 128, 256, 256)
lbl = np.random.randint(0, 2, size=(128, 256, 256), dtype=np.uint8)

# Transform the images
# Please note that the images must be passed as keyword arguments to the transformation pipeline
# and extracted from the outputted dictionary.
data = {'image': img, 'abc': img1, 'mask': lbl}
aug_data = aug(**data)
transformed_img = aug_data['image']
transformed_img1 = aug_data['abc']
transformed_lbl = aug_data['mask']
```

### Example: Adding a Custom Transformation

Each transformation inherits from the `Transform` class. You can thus easily implement your own 
transformations and use them with this library. You can check our implementations to see how this can be done.
For example, `Flip` can be implemented as follows:

```python
import numpy as np
from typing import List, Optional
from bio_volumentations import DualTransform

class Flip(DualTransform):
    def __init__(self, axes: Optional[List[int]] = None, always_apply=False, p=1):
        super().__init__(always_apply, p)
        self.axes = axes

    # Transform the image
    def apply(self, img, **params):
        return np.flip(img, params["axes"])

    # Transform the int-valued mask
    def apply_to_mask(self, mask, **params):
        # The mask has no channel dimension, so the spatial axes are shifted by one
        return np.flip(mask, axis=[item - 1 for item in params["axes"]])
    
    # Transform the float-valued mask
    # By default, float_mask uses the implementation of mask, unless it is overridden (see the implementation of DualTransform).
    #def apply_to_float_mask(self, float_mask, **params):
    #    return self.apply_to_mask(float_mask, **params)

    # Set transformation parameters. Useful especially for RandomXXX transforms to ensure consistent transformation of image tuples.
    def get_params(self, **data):
        axes = self.axes if self.axes is not None else [1, 2, 3]
        return {"axes": axes}
```
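
A brief usage sketch, assuming the `Flip` class defined above is in scope: it plugs into a pipeline like any built-in transform.

```python
import numpy as np
from bio_volumentations import Compose

# Wrap the custom transform in a pipeline, as with built-in transforms
aug = Compose([Flip(axes=[1], p=1)])

img = np.random.rand(1, 64, 128, 128)                               # [C, Z, Y, X]
lbl = np.random.randint(0, 2, size=(64, 128, 128), dtype=np.uint8)  # [Z, Y, X]

# The image and the mask are flipped along the same (Z) axis
aug_data = aug(image=img, mask=lbl)
flipped_img, flipped_lbl = aug_data['image'], aug_data['mask']
```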


# Implemented Transforms

### A List of Implemented Transformations

Point transformations:
```python
Normalize
NormalizeMeanStd
HistogramEqualization 
GaussianNoise 
PoissonNoise
RandomBrightnessContrast 
RandomGamma
```

Local transformations:
```python
GaussianBlur 
RandomGaussianBlur
RemoveBackgroundGaussian
```

Geometric transformations:
```python
AffineTransform
Resize 
Scale
Rescale
Flip 
Pad
CenterCrop 
RandomAffineTransform
RandomScale 
RandomRotate90
RandomFlip 
RandomCrop
```

### Runtime

Here, we present the execution times of individual transformations from our library 
with respect to input image size.

The shapes (sizes) of the inputs were [1, 32, 32, 32, 1] (32k voxels), [4, 32, 32, 32, 5] (655k voxels), 
[4, 64, 64, 64, 5] (5M voxels), and [4, 128, 128, 128, 5] (42M voxels). 
The runtimes, presented in milliseconds, were averaged over 100 runs.
All measurements were done on a single workstation with an i7-7700 CPU @ 3.60GHz.

| Transformation           | 32k voxels |  655k voxels |    5M voxels |  42M voxels |
|:-------------------------|-----------:|-------------:|-------------:|------------:|
| AffineTransform          |       3 ms |        26 ms |       113 ms |      845 ms |
| RandomAffineTransform    |       2 ms |        19 ms |       110 ms |      899 ms |
| Scale                    |       2 ms |        19 ms |       103 ms |      854 ms |
| RandomScale              |       2 ms |        22 ms |       132 ms |      937 ms |
| Flip                     |     < 1 ms |         1 ms |        11 ms |       86 ms |
| RandomFlip               |     < 1 ms |         1 ms |         8 ms |       66 ms |
| RandomRotate90           |     < 1 ms |         1 ms |        14 ms |      197 ms |
| GaussianBlur             |       1 ms |         9 ms |        82 ms |      855 ms |
| RandomGaussianBlur       |     < 1 ms |         8 ms |        74 ms |      788 ms |
| GaussianNoise            |       1 ms |        15 ms |       124 ms |      989 ms |
| PoissonNoise             |       1 ms |        21 ms |       176 ms |     1427 ms |
| HistogramEqualization    |       2 ms |        35 ms |       285 ms |     2330 ms |
| Normalize                |     < 1 ms |         2 ms |        17 ms |      158 ms |
| NormalizeMeanStd         |     < 1 ms |         1 ms |         7 ms |       58 ms |
| RandomBrightnessContrast |     < 1 ms |       < 1 ms |         4 ms |       38 ms |
| RandomGamma              |     < 1 ms |         7 ms |        55 ms |      453 ms |


### Runtime: Comparison to Other Libraries

We also present the execution times of eight commonly used transformations, comparing the performance 
of our `Bio-Volumentations` to other libraries capable of processing volumetric image data: 
`TorchIO` [3], `Volumentations` [4, 5], and `Gunpowder` [6].

Asterisks (*) denote transformations that only partially correspond to the desired functionality. 
Dashes (-) denote transformations that are missing from the respective library. 
The fastest implementation of each transformation is highlighted in bold.
The runtimes, presented in milliseconds, were averaged over 100 runs.
All measurements were done with a single-channel volumetric input image of size (256, 256, 256) 
on a single workstation with a Ryzen 7 3700X CPU @ 3.60GHz.

| Transformation                       |      `TorchIO` |     `Volumentations` |  `Gunpowder` | `Bio-Volumentations` |
|:-------------------------------------|---------------:|---------------------:|-------------:|---------------------:|
| Cropping                             |         *26 ms |                20 ms |     **7 ms** |                20 ms |
| Flipping                             |          48 ms |                39 ms |    **31 ms** |                34 ms |
| Affine transform                     |     **931 ms** |             *4177 ms |            - |              2719 ms |
| Affine transform (anisotropic image) |              - |                    - |            - |            **2723 ms** |
| Gaussian blur                        |        4699 ms |                    - |            - |          **3149 ms** |
| Gaussian noise                       |     **182 ms** |               405 ms |      *340 ms |               400 ms |
| Brightness and contrast change       |              - |                75 ms |       183 ms |            **28 ms** |
| Padding                              |          68 ms |            **30 ms** |        54 ms |                43 ms |
| Z-normalization                      |         214 ms |           **119 ms** |            - |               133 ms |

[3] Pérez-García F, Sparks R, Ourselin S. TorchIO: A Python library for efficient loading, 
preprocessing, augmentation and patch-based sampling of medical images in deep learning. 
_Comput Meth Prog Bio_ 2021;**208**:106236. https://www.sciencedirect.com/science/article/pii/S0169260721003102

[4] Volumentations maintainers and contributors. Volumentations 3D. Version 1.0.4 [software]. 
GitHub, 2020 [cited 2024 Dec 16]. https://github.com/ZFTurbo/volumentations

[5] Solovyev R, Kalinin AA, Gabruseva T. 3D convolutional neural networks
for stalled brain capillary detection. _Comput Biol Med_ 2022;**141**:105089.
https://doi.org/10.1016/j.compbiomed.2021.105089

[6] Gunpowder maintainers and contributors. Gunpowder. Version 1.4.0 [software]. 
GitHub, 2024 [cited 2024 Dec 16]. https://github.com/funkelab/gunpowder

# Contributions and Acknowledgements

Authors of `Bio-Volumentations`: Samuel Šuľan, Lucia Hradecká, Filip Lux.
- Lucia Hradecká: lucia.d.hradecka@gmail.com   
- Filip Lux: lux.filip@gmail.com     

The `Bio-Volumentations` library is based on the following image augmentation libraries:
- [Albumentations](https://github.com/albumentations-team/albumentations)  
- [Volumentations](https://github.com/ashawkey/volumentations)                  
- [Volumentations: Continued Development](https://github.com/ZFTurbo/volumentations)                   
- [Volumentations: Enhancements](https://github.com/qubvel/volumentations)        
- [Volumentations: Further Enhancements](https://github.com/muellerdo/volumentations)
- [TorchIO](https://github.com/fepegar/torchio)

We would thus like to thank their authors, namely [the Albumentations team](https://github.com/albumentations-team), 
[Pavel Iakubovskii](https://github.com/qubvel), [ZFTurbo](https://github.com/ZFTurbo), 
[ashawkey](https://github.com/ashawkey), [Dominik Müller](https://github.com/muellerdo), and 
[TorchIO contributors](https://github.com/fepegar/torchio?tab=readme-ov-file#contributors).         


# Citation

TBA




            
