pydeepimagej

Name: pydeepimagej
Version: 2.4
Home page: https://deepimagej.github.io/deepimagej/
Summary: Python package to export TensorFlow models as DeepImageJ bundled models
Upload time: 2022-08-10 16:32:41
Author: E. Gomez-de-Mariscal, C. Garcia-Lopez-de-Haro, W. Ouyang, L. Donati, E. Lundberg, M. Unser, A. Munoz-Barrutia, D. Sage
Requires Python: >=3.0
License: BSD 2-Clause License
Keywords: Fiji, ImageJ, deepImageJ, deep learning, image processing, bioimage.io, BioImage Model Zoo
Requirements: No requirements were recorded.
# PydeepImageJ

[![GitHub](https://img.shields.io/github/license/deepimagej/pydeepimagej)](https://raw.githubusercontent.com/deepimagej/pydeepimagej/master/LICENSE)
[![minimal Python version](https://img.shields.io/badge/Python-3-6666ff.svg)](https://www.anaconda.com/distribution/)

Python code to export trained models into the [BioImage Model Zoo](https://bioimage.io/) format and read them in Fiji & ImageJ using the deepImageJ plugin.
  - Creates a configuration class in Python with all the information about the trained model needed for its correct use in Fiji & ImageJ.
  - Includes the metadata of an example image.
  - Includes all expected results and needed pre / post-processing routines.
  - Creates basic cover images for the model card in the BioImage Model Zoo.
  - Creates version 0.3.2 of the [BioImage Model Zoo specification file](https://bioimage.io/docs/#/contribute_models/README?id=model-contribution-requirements): `model.yaml`
  - See [deepImageJ webpage](https://deepimagej.github.io/deepimagej/) for more information about how to use the model in Fiji & ImageJ. 

### Requirements & Installation

- PyDeepImageJ requires Python 3.
- TensorFlow: pydeepimagej runs on the local installation of TensorFlow, i.e. the one used to train the model. Note, however, that deepImageJ is only compatible with TensorFlow versions <= 2.2.1 (see the version check below).
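
If you are unsure which TensorFlow version your environment provides, a minimal check could look like this (only `tf.__version__` is assumed; adjust the message to your needs):

````python
import tensorflow as tf

# Warn if the local TensorFlow is newer than what deepImageJ can load (<= 2.2.1).
major, minor = (int(x) for x in tf.__version__.split(".")[:2])
if (major, minor) > (2, 2):
    print(f"Warning: TensorFlow {tf.__version__} detected; "
          "deepImageJ only loads models saved with TensorFlow <= 2.2.1.")
````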

To install pydeepimagej, either clone this repository or install it from PyPI via `pip`:

```sh
$ pip install pydeepimagej
```
or
```sh
$ git clone https://github.com/deepimagej/pydeepimagej.git
$ cd pydeepimagej
$ pip install .
```
----

### Reference
* Gómez-de-Mariscal, E., García-López-de-Haro, C., Ouyang, W., Donati, L., Lundberg, E., Unser, M., Muñoz-Barrutia, A. and Sage, D., "DeepImageJ: A user-friendly environment to run deep learning models in ImageJ", Nat Methods 18, 1192–1195 (2021).
https://doi.org/10.1038/s41592-021-01262-9
  * **Read the paper online with this link: [rdcu.be/cyG3K](https://rdcu.be/cyG3K)**

- Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Spain
- Science for Life Laboratory, KTH – Royal Institute of Technology, Stockholm, Sweden
- Biomedical Imaging Group, École polytechnique fédérale de Lausanne (EPFL), Switzerland

Corresponding authors: mamunozb@ing.uc3m.es, daniel.sage@epfl.ch
Copyright © 2019. Universidad Carlos III de Madrid, Spain, and EPFL, Lausanne, Switzerland.
#### How to cite
```bibtex
@article{gomez2021deepimagej,
  title={DeepImageJ: A user-friendly environment to run deep learning models in ImageJ},
  author={G{\'o}mez-de-Mariscal, Estibaliz and Garc{\'i}a-L{\'o}pez-de-Haro, Carlos and Ouyang, Wei and Donati, Laur{\`e}ne and Lundberg, Emma and Unser, Michael and Mu{\~{n}}oz-Barrutia, Arrate and Sage, Daniel},
  journal={Nature Methods},
  year={2021},
  volume={18},
  number={10},
  pages={1192-1195},
  URL = {https://doi.org/10.1038/s41592-021-01262-9},
  doi = {10.1038/s41592-021-01262-9}
}
```
#### License

[BSD 2-Clause License](https://raw.githubusercontent.com/deepimagej/pydeepimagej/master/LICENSE)

----

## Example of how to use it
Try a Jupyter notebook in Google Colaboratory: [![GoogleColab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepimagej/pydeepimagej/blob/master/examples/ExportBioImageModelZoo_deepImageJ.ipynb)

Otherwise, follow these steps:

Let `model` be a trained Keras or TensorFlow model. Initialize the configuration class with it:
````python
from pydeepimagej.yaml import BioImageModelZooConfig
# MinimumSize must be given explicitly because it cannot always be estimated. See 'Additional commands' below for hints.
dij_config = BioImageModelZooConfig(model, MinimumSize)
````
Update the model information:
````python
dij_config.Name = 'My trained model v0.1'
dij_config.Description = 'Brief description of the task to perform by the trained model'
dij_config.Authors.Names = ['First author', 'Second Author', 'Third Author who created the configuration specifications']
dij_config.Authors.Affiliations = ['First author affiliation', 'Second author affiliation', 'Third author affiliation']
dij_config.References = ['Gómez-de-Mariscal, E., García-López-de-Haro, C. et al., bioRxiv 2019', 'Second citation']
dij_config.DOI = ['https://doi.org/10.1101/799270', 'second citation doi']
dij_config.GitHub = 'https://github.com/deepimagej/pydeepimagej'
dij_config.License = 'BSD-3'
dij_config.Documentation = 'https://useful_documentation.pdf'
dij_config.Tags = ['deepimagej', 'segmentation', 'Fiji', 'microscopy']
dij_config.CoverImage =  ['./input.png', './output.png']
dij_config.Framework = 'TensorFlow'
# Parent model in the BioImage Model Zoo whose trained weights were used as pretrained weights.
dij_config.Parent = "https://bioimage.io/#/?id=deepimagej%2FUNet2DPancreaticSegmentation"
````
### 1. Pre- & post-processing specification.
#### 1.1. Specify the pre- and post-processing steps following the BioImage Model Zoo specifications.
If the pre-processing or the post-processing can be expressed with the operations defined in the BioImage Model Zoo specification, it can also be specified directly in code:
```python
dij_config.add_bioimageio_spec('pre-processing', 'scale_range',
                               mode='per_sample', axes='xyzc',
                               min_percentile=0, 
                               max_percentile=100)

dij_config.add_bioimageio_spec('post-processing', 'binarize',
                               threshold=threshold)
```
The `BioImageModelZooConfig` class records one pre- or post-processing step per call to these functions. For example:
```python
# Make sure that there's no pre-processing specified.
dij_config.BioImage_Preprocessing = None
dij_config.add_bioimageio_spec('pre-processing', 'scale_range',
                               mode='per_sample', axes='xyzc',
                               min_percentile=min_percentile, 
                               max_percentile=max_percentile)
dij_config.add_bioimageio_spec('pre-processing', 'scale_linear',
                               gain=255, offset=0, axes='xy')
```
```
dij_config.BioImage_Preprocessing:
[{'scale_range': {'kwargs': {'axes': 'xyzc',
  'max_percentile': 100,
  'min_percentile': 0,
  'mode': 'per_sample'}}},
 {'scale_linear': {'kwargs': {'axes': 'xy', 'gain': 255, 'offset': 0}}}]
```
The same applies to the post-processing:
```python
dij_config.BioImage_Postprocessing = None
dij_config.add_bioimageio_spec('post-processing', 'scale_range',
                               mode='per_sample', axes='xyzc', 
                               min_percentile=0, max_percentile=100)

dij_config.add_bioimageio_spec('post-processing', 'scale_linear',
                               gain=255, offset=0, axes='xy')

dij_config.add_bioimageio_spec('post-processing', 'binarize',
                               threshold=threshold)
```
```
dij_config.BioImage_Postprocessing:
[{'scale_range': {'kwargs': {'axes': 'xyzc',
  'max_percentile': 100,
  'min_percentile': 0,
  'mode': 'per_sample'}}},
 {'scale_linear': {'kwargs': {'axes': 'xy', 'gain': 255, 'offset': 0}}},
 {'binarize': {'kwargs': {'threshold': 0.5}}}]
```
#### 1.2. Prepare an ImageJ pre/post-processing macro.
You may need to pre-process the input image before inference. Some ImageJ macro routines can be downloaded from [here](https://github.com/deepimagej/imagej-macros/) and included in the model specification. Note that ImageJ macros are plain text files, so they are easy to modify from a Python script ([see an example](https://github.com/deepimagej/pydeepimagej/blob/master/README.md#additional-commands)). To add an ImageJ macro, run `add_preprocessing(local_path_to_the_macro_file, 'name_to_store_the_macro_in_the_bundled_model')`:
````python
import urllib.request

path_preprocessing = "PercentileNormalization.ijm"
# Download the macro file
urllib.request.urlretrieve("https://raw.githubusercontent.com/deepimagej/imagej-macros/master/PercentileNormalization.ijm", path_preprocessing)
# Include it in the configuration class
dij_config.add_preprocessing(path_preprocessing, "preprocessing")
````
The same holds for the post-processing:
````python
path_postprocessing = "8bitBinarize.ijm"
urllib.request.urlretrieve("https://raw.githubusercontent.com/deepimagej/imagej-macros/master/8bitBinarize.ijm", path_postprocessing)
# Include the info about the post-processing
dij_config.add_postprocessing(path_postprocessing, "postprocessing")
````
DeepImageJ accepts two pre/post-processing routines. The images are processed in the order in which the routines are added with `add_postprocessing`. Thus, in this example, the output of the model is first binarized with `'8bitBinarize.ijm'` and then processed with `'another_macro.ijm'`:
````python
path_second_postprocessing = './folder/another_macro.ijm'
dij_config.add_postprocessing(path_second_postprocessing, 'postprocessing_2')
````

### 2. Add information about the example image.
Let `test_img` be an example image used to test the model inference and `test_prediction` be the resulting image after post-processing. The trained model can be exported together with these two images so that end users can see a working example.
`PixelSize` should be a list of values matching the dimensions of `test_img`, given in microns (µm).
````python
PixelSize = [0.64, 0.64, 1]  # pixel size of a 3D volume with axes yxz
dij_config.add_test_info(test_img, test_prediction, PixelSize)
````

#### 2.1. Create some covers for the model card in the BioImage Model Zoo.
Let `test_img` and `test_mask` be the input and output example images, and `./input.png` and `./output.png` the names under which they will be stored inside the bundled model. `dij_config` stretches the intensity range of the given images to [0, 255] so they can be exported as 8-bit images and displayed properly on the website.
```python
dij_config.create_covers([test_img, test_mask])
dij_config.Covers =  ['./input.png', './output.png']
```
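
For reference, the stretch applied to build the covers is conceptually a min-max rescale to the 8-bit range; the following is only an illustrative sketch, not the actual implementation inside `create_covers`:

````python
import numpy as np

def stretch_to_8bit(img):
    # Min-max stretch to [0, 255] and cast to uint8 (illustration only).
    img = np.asarray(img, dtype=np.float32)
    img -= img.min()
    if img.max() > 0:
        img /= img.max()
    return (255 * img).astype(np.uint8)
````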

### 3. Store weights using specific formats.
The weights of a trained model can be stored either as a TensorFlow SavedModel bundle (`saved_model.pb` + `variables/`) or as a Keras HDF5 model (`model.h5`). Let `model` be a trained model in TensorFlow. With pydeepimagej, the weights information can be included as follows:
````python
dij_config.add_weights_formats(model, 'KerasHDF5',
                               authors=['Authors', 'who', 'trained it'])
dij_config.add_weights_formats(model, 'TensorFlow', 
                               parent="keras_hdf5",
                               authors=['Authors who', 'converted the model', 'into this new format'])
````
which appear in the `model.yaml` as:
````yaml
weights:
  keras_hdf5:
    source: ./keras_model.h5
    sha256: 9f7512eb28de4c6c4182f976dd8e53e9da7f342e14b2528ef897a970fb26875d
    authors:
    - Authors
    - who
    - trained it
  tensorflow_saved_model_bundle:
    source: ./tensorflow_saved_model_bundle.zip
    sha256: 2c552aa561c3c3c9063f42b78dda380e2b85a8ad04e434604af5cbb50eaaa54d
    parent: keras_hdf5
    authors:
    - Authors who
    - converted the model
    - into this new format
````

### 4. Export the model.
````python
deepimagej_model_path = './my_trained_model_deepimagej'
dij_config.export_model(deepimagej_model_path)
````
When the model is exported, a new folder containing a deepImageJ 2.1.0 bundled model is created. The folder is also packed as a zip file so it can be easily transferred.
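
As a quick sanity check after the export, you can list the bundle contents and look for the zip archive. A minimal sketch, assuming the `deepimagej_model_path` used above (the exact zip location may vary between versions):

````python
from pathlib import Path

bundle = Path(deepimagej_model_path)
# Files deepImageJ will read from the bundled model folder
for f in sorted(bundle.rglob("*")):
    print(f.relative_to(bundle))
# Zipped copy of the bundle (location assumed; adjust if needed)
print(list(bundle.glob("*.zip")) + list(bundle.parent.glob("*.zip")))
````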

## Additional commands
### Change one line in an ImageJ macro
````python
import urllib.request

# Download the macro file
path_postprocessing = "8bitBinarize.ijm"
urllib.request.urlretrieve("https://raw.githubusercontent.com/deepimagej/imagej-macros/master/8bitBinarize.ijm", path_postprocessing)
# Modify the threshold in the macro to the chosen value
with open(path_postprocessing, "r") as ijmacro:
    list_of_lines = ijmacro.readlines()
# The line at index 21 holds the optimal threshold
list_of_lines[21] = "optimalThreshold = {}\n".format(128)
with open(path_postprocessing, "w") as ijmacro:
    ijmacro.writelines(list_of_lines)
````
### Estimation of the step size for the shape of the input image.
If the model is an encoder-decoder with skip connections and the input shape of your trained model is not fixed (i.e. `[None, None, 1]`), the input shape still needs to meet some constraints. You can calculate them from the number of pooling layers in the encoder path of the network:
````python
pooling_steps = 0
for keras_layer in model.layers:
    if keras_layer.name.startswith('max') or "pool" in keras_layer.name:
        pooling_steps += 1
MinimumSize = str(2 ** pooling_steps)
````
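
If you also want to check whether a concrete input size matches this step, a small helper (hypothetical, not part of pydeepimagej) can round a desired size up to the next valid multiple of `2**pooling_steps`:

````python
import math

def next_valid_size(desired, pooling_steps):
    # The encoder halves the spatial dimensions `pooling_steps` times,
    # so valid input sizes are multiples of 2**pooling_steps.
    step = 2 ** pooling_steps
    return int(math.ceil(desired / step)) * step

print(next_valid_size(500, 4))  # -> 512
````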
## Exceptions
pydeepimagej is meant to connect Python with deepImageJ so that images can be processed in the Fiji & ImageJ ecosystem. Hence, images (tensors) are expected to have at least three dimensions: height, width and channels. For this reason, models with input shapes of fewer than 4 dimensions (`model.input_shape = [batch, height, width, channels]`) are not supported. For example, if you have the following situation:
```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)])
```
please, modify it to
```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)])
```
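
Before building the configuration, a quick guard on the input rank can catch this early (an illustrative check, not part of the pydeepimagej API):

````python
# Supported models expose model.input_shape = (batch, height, width, channels)
assert len(model.input_shape) == 4, (
    "pydeepimagej expects a 4D input: (batch, height, width, channels)")
````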
## Code references used in this package
This package uses functions similar to those in the [StarDist](https://github.com/stardist/stardist) package to calculate a pixel's receptive field in a network. Citations:
- Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers.
  Cell Detection with Star-convex Polygons.
  International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, September 2018.
  DOI: [10.1007/978-3-030-00934-2_30](https://doi.org/10.1007/978-3-030-00934-2_30)

- Martin Weigert, Uwe Schmidt, Robert Haase, Ko Sugawara, and Gene Myers.
  Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy.
  The IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, Colorado, March 2020.
  DOI: [10.1109/WACV45572.2020.9093435](https://doi.org/10.1109/WACV45572.2020.9093435)
  
## TODO list

 - Adapt pydeepimagej to PyTorch models so that it can export trained models in the TorchScript format.
 - Consider multiple inputs and outputs.


            
