# czmodel

- Version: 5.1.0
- Summary: A conversion tool for TensorFlow or ONNX ANNs to CZANN
- Author: Sebastian Soyer
- Requires Python: >=3.7,<3.12
- Uploaded: 2023-06-29 19:26:22
This project provides easy-to-use conversion tools to generate a CZANN file from a
[TensorFlow](https://www.tensorflow.org/) or [ONNX](https://onnx.ai/) model that resides in memory or on disk, so that it can be used in the
[ZEN Intellesis](https://www.zeiss.com/microscopy/int/products/microscope-software/zen-intellesis-image-segmentation-by-deep-learning.html) module starting with ZEN Blue >= 3.2 and ZEN Core > 3.0.

Please check the following compatibility matrix for ZEN Blue/Core and the respective version (`self.version`) of the CZANN Model Specification JSON metadata file (see _CZANN Model Specification_ below). Version compatibility is defined via the [Semantic Versioning Specification (SemVer)](https://semver.org/lang/de/).

| Model (legacy)/JSON | ZEN Blue | ZEN Core |
|---------------------|:--------:|---------:|
| 1.1.0               | \>= 3.5  |  \>= 3.4 |
| 1.0.0               | \>= 3.5  |  \>= 3.4 |
| 3.1.0 (legacy)      | \>= 3.4  |  \>= 3.3 |
| 3.0.0 (legacy)      | \>= 3.2  |  \>= 3.1 |

If you encounter a version mismatch when importing a model into ZEN, please check for the correct version of this package.

## Structure of repo
This repo is divided into three separate packages: core, tensorflow, and pytorch.

- Core - Provides the base functionality; no dependency on TensorFlow or PyTorch required.
- TensorFlow - Provides TensorFlow-specific functionality and converters built on TensorFlow logic.
- PyTorch - Provides PyTorch-specific functionality and converters built on PyTorch logic.

## Installation
The library provides a base package and extras for export functionality that requires specific dependencies:

- ```pip install czmodel``` - installs only the base dependencies, without TensorFlow- or PyTorch-specific packages.
- ```pip install czmodel[tensorflow]``` - installs the base and TensorFlow-specific packages (see the shell note below).
- ```pip install czmodel[pytorch]``` - installs the base and PyTorch-specific packages.
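
Note that some shells (zsh, for example) interpret the square brackets in the extras syntax, so the argument may need quoting:

```console
pip install 'czmodel[tensorflow]'
```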


## Samples
### For czmodel[pytorch]:
- For single-class semantic segmentation:&nbsp;
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zeiss-microscopy/OAD/blob/master/Machine_Learning/notebooks/czmodel/SingleClassSemanticSegmentation_PyTorch_5_0_0.ipynb)

- For regression:&nbsp;
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zeiss-microscopy/OAD/blob/master/Machine_Learning/notebooks/czmodel/Regresssion_PyTorch_5_0_0.ipynb)

### For czmodel[tensorflow]:
- For single-class semantic segmentation:&nbsp;
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zeiss-microscopy/OAD/blob/master/Machine_Learning/notebooks/czmodel/SingleClassSemanticSegmentation_Tensorflow_5_0_0.ipynb)

- For regression:&nbsp;
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zeiss-microscopy/OAD/blob/master/Machine_Learning/notebooks/czmodel/Regresssion_Tensorflow_5_0_0.ipynb)

## System setup
The current version of this toolbox only requires a fresh Python 3.x installation. 
It was tested with Python 3.7 on Windows.

## Model conversion
The toolbox provides a `convert` module that features all supported conversion strategies. It currently supports 
converting Keras / PyTorch models in memory or stored on disk with a corresponding metadata JSON file (see _CZANN Model Specification_ below).

### Keras / PyTorch models in memory
The toolbox also provides functionality that can be imported e.g. in the training script used to fit a Keras / PyTorch model. 
It provides different converters to target specific versions of the export format. Currently, there are two converters available:
- DefaultConverter: Exports a .czann file complying with the specification below.
- LegacyConverter (Only for segmentation): Exports a .czmodel file (version 3.1.0 of the legacy ANN-based segmentation models in ZEN).

The converters are accessible by running:

For Keras model:
```python
from czmodel.tensorflow.convert import DefaultConverter, LegacyConverter
```
For PyTorch model:
```python
from czmodel.pytorch.convert import DefaultConverter, LegacyConverter
```

Every converter provides a `convert_from_model_spec` function that uses a model specification object to convert a model
to the corresponding export format. It accepts a `tensorflow.keras.Model` / `torch.nn.Module` that will be exported to [ONNX](https://onnx.ai/) format (for Keras models, it falls back to [SavedModel](https://www.tensorflow.org/guide/saved_model) format if the ONNX export fails)
and at the same time wrapped into a .czann/.czmodel file that can be imported and used by Intellesis.
To provide the metadata, the toolbox offers a ModelSpec class that must be filled with the model, a ModelMetadata
instance containing the information required by the specification (see _Model Metadata_ below), and optionally a license file.

A CZANN/CZMODEL can be created from a Keras / PyTorch model with the following three steps.

#### 1. Create a model metadata class
To export a CZANN, metadata is needed that must be provided through a `ModelMetadata` instance.

For segmentation:

```python
from czmodel.core.model_metadata import ModelMetadata, ModelType

model_metadata = ModelMetadata(
    input_shape=[1024, 1024, 3],
    output_shape=[1024, 1024, 5],
    model_type=ModelType.SINGLE_CLASS_SEMANTIC_SEGMENTATION,
    classes=["class1", "class2", "class3", "class4", "class5"],
    model_name="ModelName",
    min_overlap=[90, 90]
)
```
For regression:

```python
from czmodel.core.model_metadata import ModelMetadata, ModelType

model_metadata = ModelMetadata(
    input_shape=[1024, 1024, 3],
    output_shape=[1024, 1024, 3],
    model_type=ModelType.REGRESSION,
    model_name="ModelName",
    min_overlap=[90, 90]
)
```
For legacy CZMODEL models the legacy `ModelMetadata` must be used:

```python
from czmodel.core.legacy_model_metadata import ModelMetadata as LegacyModelMetadata

model_metadata_legacy = LegacyModelMetadata(
    name="Simple_Nuclei_SegmentationModel_Legacy",
    classes=["class1", "class2"],
    pixel_types="Bgr24",
    color_handling="ConvertToMonochrome",
    border_size=90,
)
``` 

#### 2. Create a model specification
The model and its corresponding metadata are now wrapped into a ModelSpec object.

```python
from czmodel.tensorflow.model_spec import ModelSpec # for czmodel[tensorflow]
#from czmodel.pytorch.model_spec import ModelSpec   # for czmodel[pytorch]

model_spec = ModelSpec(
    model=model,
    model_metadata=model_metadata,
    license_file="C:\\some\\path\\to\\a\\LICENSE.txt"
)
```
The corresponding model spec for legacy models is instantiated analogously.

```python
from czmodel.tensorflow.legacy_model_spec import ModelSpec as LegacyModelSpec  # for czmodel[tensorflow]
#from czmodel.pytorch.legacy_model_spec import ModelSpec as LegacyModelSpec    # for czmodel[pytorch]

legacy_model_spec = LegacyModelSpec(
    model=model,
    model_metadata=model_metadata_legacy,
    license_file="C:\\some\\path\\to\\a\\LICENSE.txt"
)
```

#### 3. Convert the model
The actual conversion is performed with the ModelSpec object together with the output path and file name of the CZANN.

```python
from czmodel.tensorflow.convert import DefaultConverter as TensorflowDefaultConverter  # for czmodel[tensorflow]

TensorflowDefaultConverter().convert_from_model_spec(model_spec=model_spec, output_path='some/path', output_name='some_file_name')


from czmodel.pytorch.convert import DefaultConverter as PytorchDefaultConverter  # for czmodel[pytorch]

PytorchDefaultConverter().convert_from_model_spec(model_spec=model_spec, output_path='some/path', output_name='some_file_name', input_shape=(3, 1024, 1024))

```
For legacy models the interface is similar.

```python
from czmodel.tensorflow.convert import LegacyConverter as TensorflowLegacyConverter  # for czmodel[tensorflow]

TensorflowLegacyConverter().convert_from_model_spec(model_spec=legacy_model_spec, output_path='some/path',
                                                    output_name='some_file_name')


from czmodel.pytorch.convert import LegacyConverter as PytorchLegacyConverter  # for czmodel[pytorch]

PytorchLegacyConverter().convert_from_model_spec(model_spec=legacy_model_spec, output_path='some/path',
                                                 output_name='some_file_name', input_shape=(3, 1024, 1024))
```
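
Putting the three steps together, a minimal end-to-end sketch for the PyTorch path could look as follows (assumes `czmodel[pytorch]` is installed; `ToyRegressor` is a purely illustrative stand-in for a trained network):

```python
import torch

from czmodel.core.model_metadata import ModelMetadata, ModelType
from czmodel.pytorch.model_spec import ModelSpec
from czmodel.pytorch.convert import DefaultConverter


# Toy stand-in for a trained regression network (purely illustrative).
class ToyRegressor(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps outputs in the [0..1] range required by the specification.
        return torch.sigmoid(self.conv(x))


metadata = ModelMetadata(
    input_shape=[1024, 1024, 3],
    output_shape=[1024, 1024, 3],
    model_type=ModelType.REGRESSION,
    model_name="ToyRegressor",
    min_overlap=[90, 90],
)
# license_file is omitted here; it is optional per the ModelSpec description above.
spec = ModelSpec(model=ToyRegressor(), model_metadata=metadata)
DefaultConverter().convert_from_model_spec(
    model_spec=spec,
    output_path="some/path",
    output_name="toy_regressor",
    input_shape=(3, 1024, 1024),  # channels-first, as in the PyTorch examples above
)
```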

### Exported TensorFlow / PyTorch models
Not all TensorFlow / PyTorch models can be converted. A model exported from TensorFlow / PyTorch can be converted if the model and the
provided metadata comply with the _CZANN Model Specification_ below.

The actual conversion is triggered by either calling:

```python
from czmodel.tensorflow.convert import DefaultConverter as TensorflowDefaultConverter  # for czmodel[tensorflow]

TensorflowDefaultConverter().convert_from_json_spec('Path to JSON file', 'Output path', 'Model Name')


from czmodel.pytorch.convert import DefaultConverter as PytorchDefaultConverter  # for czmodel[pytorch]

PytorchDefaultConverter().convert_from_json_spec('Path to JSON file', 'Output path', (3, 1024, 1024), 'Model Name')
```
or by using the command line interface of the `savedmodel2czann` script (Keras models only):
```console
savedmodel2czann path/to/model_spec.json output/path/ output_name --license_file path/to/license_file.txt
```

### Adding pre- and post-processing layers (Keras models only)
Both `convert_from_json_spec` and `convert_from_model_spec` in the converter classes accept the
following optional parameters:
- `spatial_dims`: Sets new spatial dimensions for the new input node of the model. This parameter is expected to contain the new height
and width in that order. **Note:** The spatial input dimensions can only be changed in ANN architectures that are invariant to the
spatial dimensions of the input, e.g. FCNs.
- `preprocessing`: One or more pre-processing layers that will be prepended to the deployed model. A pre-processing
layer must be derived from the `tensorflow.keras.layers.Layer` class (see the sketch below for a minimal custom layer).
- `postprocessing`: One or more post-processing layers that will be appended to the deployed model. A post-processing
layer must be derived from the `tensorflow.keras.layers.Layer` class.
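
For illustration, a minimal custom pre-processing layer might look like the following sketch (the `UnitRangeToSymmetric` layer is hypothetical, not part of czmodel); it maps the [0..1] inputs the CZANN receives to the [-1, 1] range some trained models expect:

```python
import tensorflow as tf


class UnitRangeToSymmetric(tf.keras.layers.Layer):
    """Hypothetical example layer: maps inputs from [0, 1] to [-1, 1]."""

    def call(self, inputs):
        # Any layer passed via `preprocessing`/`postprocessing` must subclass
        # tensorflow.keras.layers.Layer, as this one does.
        return inputs * 2.0 - 1.0


preprocessing = [UnitRangeToSymmetric()]
```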

While ANN models are often trained on images in RGB(A) space, the ZEN infrastructure requires models inside a CZANN to
expect inputs in BGR(A) color space. This toolbox offers pre-processing layers to convert the color space before
passing the input to the actual model being deployed. The following code shows how to add a BGR-to-RGB conversion layer
to a model and set its spatial input dimensions to 512x512.

```python
from czmodel.tensorflow.util.transforms import TransposeChannels
from czmodel.tensorflow.convert import DefaultConverter

# Define dimensions and pre-processing
spatial_dims = 512, 512  # Optional: target spatial dimensions of the model

# Optional: pre-processing layers to be prepended to the model.
# Can be a single layer, a list of layers, or None.
preprocessing = [TransposeChannels(order=(2, 1, 0))]

# Optional: post-processing layers to be appended to the model.
# Can be a single layer, a list of layers, or None.
postprocessing = None

# Perform conversion
DefaultConverter().convert_from_model_spec(
    model_spec=model_spec,
    output_path='some/path',
    output_name='some_file_name',
    spatial_dims=spatial_dims,
    preprocessing=preprocessing,
    postprocessing=postprocessing
)
```

Additionally, the toolbox offers a `SigmoidToSoftmaxScores` layer that can be appended through the `postprocessing` parameter to convert 
the output of a model with sigmoid output activation to the output that would be produced by an equivalent model with softmax activation.
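
A usage sketch, assuming `SigmoidToSoftmaxScores` is exposed alongside `TransposeChannels` in `czmodel.tensorflow.util.transforms`:

```python
from czmodel.tensorflow.util.transforms import SigmoidToSoftmaxScores
from czmodel.tensorflow.convert import DefaultConverter

# Appends the score-conversion layer after the model's sigmoid output;
# presumably each sigmoid score s becomes the softmax-like pair (1 - s, s).
DefaultConverter().convert_from_model_spec(
    model_spec=model_spec,
    output_path='some/path',
    output_name='some_file_name',
    postprocessing=SigmoidToSoftmaxScores()
)
```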


### Unpacking CZANN/CZSEG files
The czmodel library offers functionality to unpack existing CZANN/CZSEG models. For a given .czann or .czseg model it is possible to extract the underlying ANN model to a specified folder and retrieve the corresponding metadata as instances of the metadata classes defined in the czmodel library.

For CZANN files:

```python
from czmodel.tensorflow.convert import DefaultConverter  # for czmodel[tensorflow]
#from czmodel.pytorch.convert import DefaultConverter    # for czmodel[pytorch]
from pathlib import Path

model_metadata, model_path = DefaultConverter().unpack_model(model_file='Path of the .czann file',
                                                             target_dir=Path('Output Path'))
```

For CZSEG/CZMODEL files:

```python
from czmodel.tensorflow.convert import LegacyConverter  # for czmodel[tensorflow]
#from czmodel.pytorch.convert import LegacyConverter    # for czmodel[pytorch]
from pathlib import Path

model_metadata, model_path = LegacyConverter().unpack_model(model_file='Path of the .czseg/.czmodel file',
                                                            target_dir=Path('Output Path'))
```
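
Once unpacked, the extracted model can be executed outside ZEN for a quick sanity check. A sketch using onnxruntime (assumptions: the unpacked file is in ONNX format and the onnxruntime package is installed):

```python
import numpy as np
import onnxruntime as ort

# model_path is the path returned by unpack_model above.
session = ort.InferenceSession(str(model_path))
input_name = session.get_inputs()[0].name

# Dummy float32 input in [0..1], matching InputShape plus a leading batch dimension.
dummy = np.random.rand(1, 1024, 1024, 3).astype(np.float32)
prediction = session.run(None, {input_name: dummy})[0]
print(prediction.shape)
```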


## CZANN Model Specification
This section specifies the requirements for an artificial neural network (ANN) model and the additionally required metadata that enable execution of the model inside the ZEN Intellesis infrastructure starting with ZEN Blue >= 3.2 and ZEN Core > 3.0.

The model format currently allows bundling models for semantic segmentation, instance segmentation, object detection, classification, and regression. It is defined as a ZIP archive with the file extension .czann containing the following files with the respective filenames:
- JSON metadata file. (filename: model.json)
- Model in ONNX/TensorFlow SavedModel format. In case of the SavedModel format, the folder representing the model must be zipped into a single file. (filename: model.model)
- Optionally: a license file for the contained model. (filename: license.txt)
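
Since a .czann file is a plain ZIP archive, its contents can be inspected with the Python standard library (a sketch; `my_model.czann` is a placeholder path):

```python
import json
import zipfile

with zipfile.ZipFile("my_model.czann") as archive:
    print(archive.namelist())  # expected: model.json, model.model, optionally license.txt
    metadata = json.loads(archive.read("model.json"))
    print(metadata["Type"], metadata["InputShape"])
```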

The metadata file must comply with the following specification:

```json
{
    "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
    "$id": "http://127.0.0.1/model_format.schema.json",
    "title": "Exchange format for ANN models",
    "description": "A format that defines the meta information for exchanging ANN models. Any future versions of this specification should be evaluated through https://docs.snowplowanalytics.com/docs/pipeline-components-and-applications/iglu/igluctl-0-7-2/#lint-1 with --skip-checks numericMinMax,stringLength,optionalNull and https://www.json-buddy.com/json-schema-analyzer.htm.",
    "type": "object",
    "self": {
        "vendor": "com.zeiss",
        "name": "model-format",
        "format": "jsonschema",
        "version": "1-1-0"
    },
    "properties": {
        "Id": {
            "description": "Universally unique identifier of 128 bits for the model.",
            "type": "string"
        },
        "Type": {
            "description": "The type of problem addressed by the model.",
            "type": "string",
            "enum": ["SingleClassInstanceSegmentation", "MultiClassInstanceSegmentation", "SingleClassSemanticSegmentation", "MultiClassSemanticSegmentation", "SingleClassClassification", "MultiClassClassification", "ObjectDetection", "Regression"]
        },
        "MinOverlap": {
            "description": "The minimum overlap of tiles for each dimension in pixels. Must be divisible by two. In tiling strategies that consider tile borders instead of overlaps the minimum overlap is twice the border size.",
            "type": "array",
            "items": {
                "description": "The overlap of a single spatial dimension",
                "type": "integer",
                "minimum": 0
            },
            "minItems": 1
        },
        "Classes": {
            "description": "The class names corresponding to the last output dimension of the prediction. If the last dimension of the prediction has shape n the provided list must be of length n",
            "type": "array",
            "items": {
                "description": "A name describing a class for segmentation and classification tasks",
                "type": "string"
            },
            "minItems": 2
        },
        "ModelName": {
            "description": "The name of exported neural network model in ONNX (file) or TensorFlow SavedModel (folder) format in the same ZIP archive as the meta data file. In the case of ONNX the model must use ONNX opset version 12. In the case of TensorFlow SavedModel all operations in the model must be supported by TensorFlow 2.0.0. The model must contain exactly one input node which must comply with the input shape defined in the InputShape parameter and must have a batch dimension as its first dimension that is either 1 or undefined.",
            "type": "string"
        },
        "InputShape": {
            "description": "The shape of an input image. A typical 2D model has an input of shape [h, w, c] where h and w are the spatial dimensions and c is the number of channels. A 3D model is expected to have an input shape of [z, h, w, c] that contains an additional dimension z which represents the third spatial dimension. The batch dimension is not specified here. The input of the model must be of type float32 in the range [0..1].",
            "type": "array",
            "items": {
                "description": "The size of a single dimension",
                "type": "integer",
                "minimum": 1
            },
            "minItems": 3,
            "maxItems": 4
        },
        "OutputShape": {
            "description": "The shape of the output image. A typical 2D model has an input of shape [h, w, c] where h and w are the spatial dimensions and c is the number of classes. A 3D model is expected to have an input shape of [z, h, w, c] that contains an additional dimension z which represents the third spatial dimension. The batch dimension is not specified here. If the output of the model represents an image, it must be of type float32 in the range [0..1].",
            "type": "array",
            "items": {
                "description": "The size of a single dimension",
                "type": "integer",
                "minimum": 1
            },
            "minItems": 3,
            "maxItems": 4
        },
        "Scaling": {
            "description": "The extent of a pixel in x- and y-direction (in that order) in units of m.",
            "type": "array",
            "items": {
                "description": "The extent of a pixel in a single dimension in units of m",
                "type": "number"
            },
            "minItems": 2,
            "maxItems": 2
        }
    },
    "required": ["Id", "Type", "InputShape", "OutputShape"]
}
```
JSON files can contain escape sequences, and `\`-characters in paths must be escaped as `\\`.
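
For example, a Windows path inside a JSON spec file would be written with doubled backslashes (the key name here is purely illustrative):

```json
{
  "LicenseFile": "C:\\some\\path\\to\\a\\LICENSE.txt"
}
```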

The following snippets show examples of valid metadata files:

For single-class semantic segmentation:
```json
{
  "Id": "b511d295-91ff-46ca-bb60-b2e26c393809",
  "Type": "SingleClassSemanticSegmentation",
  "Classes": ["class1", "class2", "class3", "class4", "class5"],
  "InputShape": [1024, 1024, 3],
  "OutputShape": [1024, 1024, 5]
}
```

For regression:
```json
{
  "Id": "064587eb-d5a1-4434-82fc-2fbc9f5871f9",
  "Type": "Regression",
  "InputShape": [1024, 1024, 3],
  "OutputShape": [1024, 1024, 3]
}
```
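
A metadata file can also be checked programmatically against the schema above, e.g. with the third-party `jsonschema` package (a sketch; assumes the schema was saved to `model_format.schema.json`):

```python
import json
import uuid

from jsonschema import validate  # pip install jsonschema

with open("model_format.schema.json") as schema_file:
    schema = json.load(schema_file)

metadata = {
    "Id": str(uuid.uuid4()),  # 128-bit universally unique identifier, as required
    "Type": "Regression",
    "InputShape": [1024, 1024, 3],
    "OutputShape": [1024, 1024, 3],
}
validate(instance=metadata, schema=schema)  # raises ValidationError on violations
```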

            
