openvino2tensorflow 1.15.1

- Home page: https://github.com/PINTO0309/openvino2tensorflow
- Summary: Converts an OpenVINO IR model (NCHW) to Tensorflow's saved_model, tflite, h5 and pb formats.
- Upload time: 2021-07-27 14:39:19
- Author: Katsuya Hyodo
- Requires Python: >3.6
- License: MIT License
# openvino2tensorflow

<p align="center">
  <img src="https://user-images.githubusercontent.com/33194443/104584047-4e688f80-56a5-11eb-8dc2-5816487239d0.png" />
</p>

This script converts an OpenVINO IR model to Tensorflow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and pb formats: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> Tensorflow/Keras (NHWC) -> TFLite (NHWC). It also converts from .pb to saved_model, from saved_model to .pb, from .pb to .tflite, from saved_model to .tflite, and from saved_model to ONNX. Building the environment with Docker is supported, and the container can directly access the host PC's GUI and camera to verify operation. NVIDIA GPUs (dGPU) are supported.
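
For example, the front half of this pipeline can be driven from the shell as follows (a minimal sketch; `model.onnx` and the output directories are placeholder names, and the Model Optimizer path assumes OpenVINO 2021.x):

```bash
# ONNX (NCHW) -> OpenVINO IR (NCHW) via the Model Optimizer
$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py \
  --input_model model.onnx \
  --data_type FP32 \
  --output_dir openvino/FP32

# OpenVINO IR (NCHW) -> saved_model / TFLite (NHWC)
$ openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model \
  --output_no_quant_float32_tflite
```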

[Special custom TensorFlow binaries](https://github.com/PINTO0309/Tensorflow-bin) and [special custom TensorFlow Lite binaries](https://github.com/PINTO0309/TensorflowLite-bin) are used.

This is a work in progress.

**I'm continuing to add layer support and bug fixes on a daily basis. If you have a model that you are having trouble converting, please attach the `.bin` and `.xml` to an issue. I will try to convert it as far as possible.**

[![PyPI - Downloads](https://img.shields.io/pypi/dm/openvino2tensorflow?color=2BAF2B&label=Downloads%EF%BC%8FInstalled)](https://pypistats.org/packages/openvino2tensorflow) ![GitHub](https://img.shields.io/github/license/PINTO0309/openvino2tensorflow?color=2BAF2B) [![PyPI](https://img.shields.io/pypi/v/openvino2tensorflow?color=2BAF2B)](https://pypi.org/project/openvino2tensorflow/)

![ezgif com-gif-maker (4)](https://user-images.githubusercontent.com/33194443/103457894-4ffd9380-4d46-11eb-86dd-f753f2fca093.gif)

![ezgif com-gif-maker (3)](https://user-images.githubusercontent.com/33194443/103456348-8d5b2480-4d38-11eb-8a58-9b7c7203b18c.gif)

## 1. Environment
- TensorFlow v2.6.0+
- OpenVINO 2021.4.582+
- Python 3.6+
- tensorflowjs **`pip3 install --upgrade tensorflowjs`**
- **[tensorrt](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)**
- coremltools **`pip3 install --upgrade coremltools`**
- onnx **`pip3 install --upgrade onnx`**
- tf2onnx **`pip3 install --upgrade tf2onnx`**
- tensorflow-datasets **`pip3 install --upgrade tensorflow-datasets`**
- **[edgetpu_compiler](https://coral.ai/docs/edgetpu/compiler/#system-requirements)**
- Docker

## 2. Use case

- PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) ->
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)
  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)

- Caffe (NCHW) -> OpenVINO (NCHW) ->
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)
  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)

- MXNet (NCHW) -> OpenVINO (NCHW) ->
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)
  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)

- Keras (NHWC) -> OpenVINO (NCHW・Optimized) ->
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)
  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)
  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)

- saved_model -> **`saved_model_to_pb`** -> pb

- saved_model ->
  - -> **`saved_model_to_tflite`** -> TFLite
  - -> **`saved_model_to_tflite`** -> TFJS
  - -> **`saved_model_to_tflite`** -> TF-TRT
  - -> **`saved_model_to_tflite`** -> EdgeTPU
  - -> **`saved_model_to_tflite`** -> CoreML
  - -> **`saved_model_to_tflite`** -> ONNX

- pb -> **`pb_to_tflite`** -> TFLite

- pb -> **`pb_to_saved_model`** -> saved_model

## 3. Supported Layers
- Currently, there are known problems with the Reshape operation on 5D tensors.

|No.|OpenVINO Layer|TF Layer|Remarks|
|:--:|:--|:--|:--|
|1|Parameter|Input||
|2|Const|Constant, Bias||
|3|Convolution|Conv2D||
|4|Add|Add||
|5|ReLU|ReLU||
|6|PReLU|PReLU|Maximum(0.0,x)+alpha\*Minimum(0.0,x)|
|7|MaxPool|MaxPool2D||
|8|AvgPool|AveragePooling2D||
|9|GroupConvolution|DepthwiseConv2D, Conv2D/Split/Concat||
|10|ConvolutionBackpropData|Conv2DTranspose||
|11|Concat|Concat||
|12|Multiply|Multiply||
|13|Tan|Tan||
|14|Tanh|Tanh||
|15|Elu|Elu||
|16|Sigmoid|Sigmoid||
|17|HardSigmoid|hard_sigmoid||
|18|SoftPlus|SoftPlus||
|19|Swish|Swish|You can replace swish and hard-swish with each other by using the "--replace_swish_and_hardswish" option|
|20|Interpolate|ResizeNearestNeighbor, ResizeBilinear||
|21|ShapeOf|Shape||
|22|Convert|Cast||
|23|StridedSlice|Strided_Slice||
|24|Pad|Pad, MirrorPad||
|25|Clamp|ReLU6, Clip||
|26|TopK|ArgMax, top_k||
|27|Transpose|Transpose||
|28|Squeeze|Squeeze||
|29|Unsqueeze|Identity, expand_dims|WIP|
|30|ReduceMean|reduce_mean||
|31|ReduceMax|reduce_max||
|32|ReduceMin|reduce_min||
|33|ReduceSum|reduce_sum||
|34|ReduceProd|reduce_prod||
|35|Subtract|Subtract||
|36|MatMul|MatMul||
|37|Reshape|Reshape||
|38|Range|Range|WIP|
|39|Exp|Exp||
|40|Abs|Abs||
|41|SoftMax|SoftMax||
|42|Negative|Negative||
|43|Maximum|Maximum|No broadcast|
|44|Minimum|Minimum|No broadcast|
|45|Acos|Acos||
|46|Acosh|Acosh||
|47|Asin|Asin||
|48|Asinh|Asinh||
|49|Atan|Atan||
|50|Atanh|Atanh||
|51|Ceiling|Ceil||
|52|Cos|Cos||
|53|Cosh|Cosh||
|54|Sin|Sin||
|55|Sinh|Sinh||
|56|Gather|Gather||
|57|Divide|Divide, FloorDiv||
|58|Erf|Erf||
|59|Floor|Floor||
|60|FloorMod|FloorMod||
|61|HSwish|HardSwish|x\*ReLU6(x+3)\*0.16666667, You can replace swish and hard-swish with each other by using the "--replace_swish_and_hardswish" option|
|62|Log|Log||
|63|Power|Pow|No broadcast|
|64|Mish|Mish|x\*Tanh(softplus(x))|
|65|Selu|Selu||
|66|Equal|equal||
|67|NotEqual|not_equal||
|68|Greater|greater||
|69|GreaterEqual|greater_equal||
|70|Less|less||
|71|LessEqual|less_equal||
|72|Select|Select|No broadcast|
|73|LogicalAnd|logical_and||
|74|LogicalNot|logical_not||
|75|LogicalOr|logical_or||
|76|LogicalXor|logical_xor||
|77|Broadcast|broadcast_to, ones, Multiply|numpy / bidirectional mode, WIP|
|78|Split|Split||
|79|VariadicSplit|Split, Slice, SplitV||
|80|MVN|reduce_mean, sqrt, reduce_variance|(x-reduce_mean(x))/sqrt(reduce_variance(x)+eps)|
|81|NonZero|not_equal, boolean_mask||
|82|ReduceL2|Multiply, reduce_sum, rsqrt||
|83|SpaceToDepth|SpaceToDepth||
|84|DepthToSpace|DepthToSpace||
|85|Sqrt|sqrt||
|86|SquaredDifference|squared_difference||
|87|FakeQuantize|subtract, multiply, round, greater, where, less_equal, add||
|88|Tile|tile||
|89|GatherND|gather_nd||
|90|NonMaxSuppression|non_max_suppression|WIP. Only available for batch size 1. To simplify post-processing ignore all OPs after non_max_suppression.|
|91|Gelu|gelu||
|92|Result|Identity|Output|
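
The decomposition formulas in the Remarks column can be sanity-checked numerically. A minimal sketch for the HSwish and Mish rows, assuming only a working TensorFlow installation:

```bash
$ python3 -c "
import tensorflow as tf
x = tf.constant([-4.0, -1.0, 0.0, 1.0, 4.0])
hswish = x * tf.nn.relu6(x + 3.0) * 0.16666667        # HSwish remark
mish   = x * tf.math.tanh(tf.math.softplus(x))        # Mish remark
print(hswish.numpy())
print(mish.numpy())
"
```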

## 4. Setup
### 4-1. **[Environment construction pattern 1]** Execution by Docker (`strongly recommended`)
You do not need to install any packages other than Docker.
```bash
$ docker pull pinto0309/openvino2tensorflow
or
$ docker build -t pinto0309/openvino2tensorflow:latest .

# If you don't need to access the GUI of the HostPC and the USB camera.
$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  pinto0309/openvino2tensorflow:latest

# If conversion to TF-TRT is not required. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  pinto0309/openvino2tensorflow:latest
$ cd workdir

# If you need to convert to TF-TRT. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run --gpus all -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  pinto0309/openvino2tensorflow:latest
$ cd workdir
```
### 4-2. **[Environment construction pattern 2]** Execution by Host machine
To install using the **[Python Package Index (PyPI)](https://pypi.org/project/openvino2tensorflow/)**, use the following command.

```bash
$ pip3 install --user --upgrade openvino2tensorflow
```

To install with the latest source code of the main branch, use the following command.

```bash
$ pip3 install --user --upgrade git+https://github.com/PINTO0309/openvino2tensorflow
```
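
After either installation method, the console scripts used in the following sections should be on your PATH. A quick check (assuming a standard pip user install):

```bash
$ pip3 show openvino2tensorflow
$ openvino2tensorflow -h
```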

## 5. Usage
### 5-1. openvino to tensorflow convert
```bash
usage: openvino2tensorflow
  [-h]
  --model_path MODEL_PATH
  [--model_output_path MODEL_OUTPUT_PATH]
  [--output_saved_model]
  [--output_h5]
  [--output_weight_and_json]
  [--output_pb]
  [--output_no_quant_float32_tflite]
  [--output_weight_quant_tflite]
  [--output_float16_quant_tflite]
  [--output_integer_quant_tflite]
  [--output_full_integer_quant_tflite]
  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
  [--calib_ds_type CALIB_DS_TYPE]
  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
  [--tfds_download_flg]
  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
  [--output_tfjs]
  [--output_tftrt]
  [--output_coreml]
  [--output_edgetpu]
  [--output_onnx]
  [--onnx_opset ONNX_OPSET]
  [--output_myriad]
  [--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES]
  [--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES]
  [--replace_swish_and_hardswish]
  [--optimizing_hardswish_for_edgetpu]
  [--replace_prelu_and_minmax]
  [--yolact]
  [--restricted_resize_image_mode]
  [--weight_replacement_config WEIGHT_REPLACEMENT_CONFIG]
  [--debug]
  [--debug_layer_number DEBUG_LAYER_NUMBER]


optional arguments:
  -h, --help
                        show this help message and exit
  --model_path MODEL_PATH
                        input IR model path (.xml)
  --model_output_path MODEL_OUTPUT_PATH
                        The output folder path of the converted model file
  --output_saved_model
                        saved_model output switch
  --output_h5
                        .h5 output switch
  --output_weight_and_json
                        weight of h5 and json output switch
  --output_pb
                        .pb output switch
  --output_no_quant_float32_tflite
                        float32 tflite output switch
  --output_weight_quant_tflite
                        weight quant tflite output switch
  --output_float16_quant_tflite
                        float16 quant tflite output switch
  --output_integer_quant_tflite
                        integer quant tflite output switch
  --output_full_integer_quant_tflite
                        full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
                        Input and output types when doing Integer Quantization
                        ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
                        String formulas for normalization. It is evaluated by
                        Pythons eval() function.
                        Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
                        Types of data sets for calibration. tfds or numpy
                        Default: numpy
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
                        Dataset name for TensorFlow Datasets for calibration.
                        https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
                        Split name for TensorFlow Datasets for calibration.
                        https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
                        Download destination folder path for the calibration
                        dataset. Default: $HOME/TFDS
  --tfds_download_flg
                        True to automatically download datasets from
                        TensorFlow Datasets. True or False
  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
                        The path from which to load the .npy file containing
                        the numpy binary version of the calibration data.
                        Default: sample_npy/calibration_data_img_sample.npy
  --output_tfjs
                        tfjs model output switch
  --output_tftrt
                        tftrt model output switch
  --output_coreml
                        coreml model output switch
  --output_edgetpu
                        edgetpu model output switch
  --output_onnx
                        onnx model output switch
  --onnx_opset ONNX_OPSET
                        onnx opset version number
  --output_myriad
                        myriad inference engine blob output switch
  --vpu_number_of_shaves VPU_NUMBER_OF_SHAVES
                        vpu number of shaves. Default: 4
  --vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES
                        vpu number of cmx slices. Default: 4
  --replace_swish_and_hardswish
                        Replace swish and hard-swish with each other
  --optimizing_hardswish_for_edgetpu
                        Optimizing hardswish for edgetpu
  --replace_prelu_and_minmax
                        Replace prelu and minimum/maximum with each other
  --yolact
                        Specify when converting the Yolact model
  --restricted_resize_image_mode
                        Specify this if the upsampling contains OPs that are
                        not scaled by integer multiples. Optimization for
                        EdgeTPU will be disabled.
  --weight_replacement_config WEIGHT_REPLACEMENT_CONFIG
                        Replaces the value of Const for each layer_id defined
                        in json. Specify the path to the json file.
                        'weight_replacement_config.json'
  --debug
                        debug mode switch
  --debug_layer_number DEBUG_LAYER_NUMBER
                        The last layer number to output when debugging. Used
                        only when --debug=True
```
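As one concrete combination of these options, the following sketch emits a full-integer-quantized tflite calibrated from TensorFlow Datasets; the model path, dataset name, split, and normalization formula are placeholder choices, not fixed values:

```bash
$ openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model \
  --output_full_integer_quant_tflite \
  --output_integer_quant_type 'uint8' \
  --string_formulas_for_normalization 'data / 255.0' \
  --calib_ds_type tfds \
  --ds_name_for_tfds_for_calibration coco/2017 \
  --split_name_for_tfds_for_calibration validation \
  --tfds_download_flg
```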
### 5-2. saved_model to tflite convert
```bash
usage: saved_model_to_tflite
  [-h]
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
  [--signature_def SIGNATURE_DEF]
  [--input_shapes INPUT_SHAPES]
  [--model_output_dir_path MODEL_OUTPUT_DIR_PATH]
  [--output_no_quant_float32_tflite]
  [--output_weight_quant_tflite]
  [--output_float16_quant_tflite]
  [--output_integer_quant_tflite]
  [--output_full_integer_quant_tflite]
  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
  [--calib_ds_type CALIB_DS_TYPE]
  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
  [--tfds_download_flg]
  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
  [--output_tfjs]
  [--output_tftrt]
  [--output_coreml]
  [--output_edgetpu]
  [--output_onnx]
  [--onnx_opset ONNX_OPSET]

optional arguments:
  -h, --help
                        show this help message and exit
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
                        Input saved_model dir path
  --signature_def SIGNATURE_DEF
                        Specifies the signature name to load from saved_model
  --input_shapes INPUT_SHAPES
                        Overwrites an undefined input dimension (None or -1).
                        Specify the input shape in [n,h,w,c] format.
                        For non-4D tensors, specify [a,b,c,d,e], [a,b], etc.
                        A comma-separated list if there are multiple inputs.
                        (e.g.) --input_shapes [1,256,256,3],[1,64,64,3],[1,2,16,16,3]
  --model_output_dir_path MODEL_OUTPUT_DIR_PATH
                        The output folder path of the converted model file
  --output_no_quant_float32_tflite
                        float32 tflite output switch
  --output_weight_quant_tflite
                        weight quant tflite output switch
  --output_float16_quant_tflite
                        float16 quant tflite output switch
  --output_integer_quant_tflite
                        integer quant tflite output switch
  --output_full_integer_quant_tflite
                        full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
                        Input and output types when doing Integer Quantization
                        ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
                        String formulas for normalization. It is evaluated by
                        Pythons eval() function.
                        Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
                        Types of data sets for calibration. tfds or numpy
                        Default: numpy
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
                        Dataset name for TensorFlow Datasets for calibration.
                        https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
                        Split name for TensorFlow Datasets for calibration.
                        https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
                        Download destination folder path for the calibration
                        dataset. Default: $HOME/TFDS
  --tfds_download_flg
                        True to automatically download datasets from
                        TensorFlow Datasets. True or False
  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
                        The path from which to load the .npy file containing
                        the numpy binary version of the calibration data.
                        Default: sample_npy/calibration_data_img_sample.npy
  --output_tfjs
                        tfjs model output switch
  --output_tftrt
                        tftrt model output switch
  --output_coreml
                        coreml model output switch
  --output_edgetpu
                        edgetpu model output switch
  --output_onnx
                        onnx model output switch
  --onnx_opset ONNX_OPSET
                        onnx opset version number
```
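For example, to overwrite an undefined batch dimension and emit float32 and float16 tflite files (the paths and shape are placeholders):

```bash
$ saved_model_to_tflite \
  --saved_model_dir_path saved_model \
  --input_shapes [1,256,256,3] \
  --output_no_quant_float32_tflite \
  --output_float16_quant_tflite
```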
### 5-3. pb to saved_model convert
```bash
usage: pb_to_saved_model
  [-h]
  --pb_file_path PB_FILE_PATH
  --inputs INPUTS
  --outputs OUTPUTS
  [--model_output_path MODEL_OUTPUT_PATH]

optional arguments:
  -h, --help
                        show this help message and exit
  --pb_file_path PB_FILE_PATH
                        Input .pb file path (.pb)
  --inputs INPUTS
                        (e.g.1) input:0,input:1,input:2
                        (e.g.2) images:0,input:0,param:0
  --outputs OUTPUTS
                        (e.g.1) output:0,output:1,output:2
                        (e.g.2) Identity:0,Identity:1,output:0
  --model_output_path MODEL_OUTPUT_PATH
                        The output folder path of the converted model file
```
### 5-4. pb to tflite convert
```bash
usage: pb_to_tflite
  [-h]
  --pb_file_path PB_FILE_PATH
  --inputs INPUTS
  --outputs OUTPUTS
  [--model_output_path MODEL_OUTPUT_PATH]

optional arguments:
  -h, --help
                        show this help message and exit
  --pb_file_path PB_FILE_PATH
                        Input .pb file path (.pb)
  --inputs INPUTS
                        (e.g.1) input,input_1,input_2
                        (e.g.2) images,input,param
  --outputs OUTPUTS
                        (e.g.1) output,output_1,output_2
                        (e.g.2) Identity,Identity_1,output
  --model_output_path MODEL_OUTPUT_PATH
                        The output folder path of the converted model file
```
### 5-5. saved_model to pb convert
```bash
usage: saved_model_to_pb
  [-h]
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
  [--model_output_dir_path MODEL_OUTPUT_DIR_PATH]
  [--signature_name SIGNATURE_NAME]

optional arguments:
  -h, --help
                        show this help message and exit
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
                        Input saved_model dir path
  --model_output_dir_path MODEL_OUTPUT_DIR_PATH
                        The output folder path of the converted model file (.pb)
  --signature_name SIGNATURE_NAME
                        Signature name to be extracted from saved_model
```
### 5-6. Extraction of IR weights
```bash
usage: ir_weight_extractor
  [-h]
  -m MODEL
  -o OUTPUT_PATH

optional arguments:
  -h, --help
                        show this help message and exit
  -m MODEL, --model MODEL
                        input IR model path
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        weights output folder path
```
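
For example, to extract the weights of the IR model used in section 6-1 into a folder (the output path is a placeholder):

```bash
$ ir_weight_extractor \
  -m openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml \
  -o weights/
```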

## 6. Execution sample
### 6-1. Conversion of OpenVINO IR to Tensorflow models
An OutOfMemory error may occur when converting to saved_model or h5 if the original model file is large. In that case, try converting to a .pb file alone.
```bash
$ openvino2tensorflow \
  --model_path openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml \
  --output_saved_model \
  --output_pb \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_no_quant_float32_tflite
```
### 6-2. Convert Protocol Buffer (.pb) to saved_model
**[Netron](https://lutzroeder.github.io/netron/)** is useful if you want to check the internal structure of pb, tflite, .h5, CoreML and IR (.xml) files.
```bash
$ pb_to_saved_model \
  --pb_file_path model_float32.pb \
  --inputs inputs:0 \
  --outputs Identity:0
```
### 6-3. Convert Protocol Buffer (.pb) to tflite
```bash
$ pb_to_tflite \
  --pb_file_path model_float32.pb \
  --inputs inputs \
  --outputs Identity,Identity_1,Identity_2
```
### 6-4. Convert saved_model to Protocol Buffer (.pb)
```bash
$ saved_model_to_pb \
  --saved_model_dir_path saved_model \
  --model_output_dir_path pb_from_saved_model \
  --signature_name serving_default
```

### 6-5. Convert saved_model to OpenVINO IR
```bash
$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo_tf.py \
  --saved_model_dir saved_model \
  --output_dir openvino/reverse
```
### 6-6. Checking the structure of saved_model
```bash
$ saved_model_cli show \
  --dir saved_model \
  --tag_set serve \
  --signature_def serving_default
```

### 6-7. Replace weights or constant values in **`Const`** OP
If the transformation behavior of **`Reshape`**, **`Transpose`**, etc. does not go as expected, you can force the contents of a **`Const`** OP to change by defining the weights and constant values in a JSON file and passing it to the converter.
```bash
$ openvino2tensorflow \
  --model_path xxx.xml \
  --output_saved_model \
  --output_pb \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_no_quant_float32_tflite \
  --weight_replacement_config weight_replacement_config_sample.json
```
Sample JSON structure:
```json
{
    "format_version": 1,
    "layers": [
        {
            "layer_id": "1123",
            "replace_mode": "direct",
            "values": [
                1,
                2,
                513,
                513
            ]
        },
        {
            "layer_id": "1125",
            "replace_mode": "npy",
            "values": "weights_sample/1125.npy"
        }
    ]
}
```

|No.|Elements|Description|
|:--|:--|:--|
|1|format_version|Format version of weight_replacement_config. Only 1 so far.|
|2|layers|A list of layers. Enclose the child elements in "[ ]" to define multiple layers.|
|2-1|layer_id|ID of the Const layer whose weight/constant parameter is to be swapped. For example, specify "1123" for layer id="1123" for type="Const" in .xml.<br>![Screenshot 2021-02-08 01:06:30](https://user-images.githubusercontent.com/33194443/107152221-068a0f00-69aa-11eb-9d9e-f48bb1c3f781.png)|
|2-2|replace_mode|"direct" or "npy".<br>"direct": Specify the values of the Numpy matrix directly in the "values" attribute. Ignores the values recorded in the .bin file and replaces them with the values specified in "values".<br>![Screenshot 2021-02-08 01:12:06](https://user-images.githubusercontent.com/33194443/107152361-cc6d3d00-69aa-11eb-8302-5e18a723ec34.png)<br>"npy": Load a Numpy binary file with the matrix output by np.save('xyz', a). The "values" attribute specifies the path to the Numpy binary file.<br>![Screenshot 2021-02-08 01:12:23](https://user-images.githubusercontent.com/33194443/107152376-dc851c80-69aa-11eb-9b3f-469b91af1d19.png)|
|2-3|values|The value, or the path to the Numpy binary file, that replaces the weight/constant value recorded in .bin. Specify it as described for 'replace_mode'.|
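
When `"replace_mode"` is `"npy"`, the replacement file is an ordinary Numpy binary written with `np.save()`. A minimal sketch of producing one (the shape and values are illustrative only and must match what the target Const layer expects):

```bash
$ mkdir -p weights_sample
$ python3 -c "
import numpy as np
a = np.array([1, 2, 513, 513], dtype=np.int64)  # illustrative values
np.save('weights_sample/1125', a)               # writes weights_sample/1125.npy
"
```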

### 6-8. Check the contents of the .npy file, which is a binary version of the image file
```bash
$ view_npy --npy_file_path sample_npy/calibration_data_img_sample.npy
```
Press the **`Q`** key to display the next image. **`calibration_data_img_sample.npy`** contains 20 images extracted from the MS-COCO dataset.
![ezgif com-gif-maker](https://user-images.githubusercontent.com/33194443/109318923-aba15480-7891-11eb-84aa-034f77125f34.gif)

### 6-9. Sample image of a conversion error message
Since it is very difficult to mechanically predict the correct behavior of **`Transpose`** and **`Reshape`**, errors like the one shown below may occur. Using the information in the figure, try forcing the replacement of constants and weights several times with the **`--weight_replacement_config`** option (see [6-7. Replace weights or constant values in Const OP](#6-7-replace-weights-or-constant-values-in-const-op)). This requires patience, but if you take the time, you should be able to convert the model correctly.
![error_sample2](https://user-images.githubusercontent.com/33194443/124498169-e181b700-ddf6-11eb-9200-83ba44c62410.png)

## 7. Output sample
![Screenshot 2020-10-16 00:08:40](https://user-images.githubusercontent.com/33194443/96149093-e38fa700-0f43-11eb-8101-65fc20b2cc8f.png)


## 8. Model Structure
**[https://digital-standard.com/threedpose/models/Resnet34_3inputs_448x448_20200609.onnx](https://github.com/digital-standard/ThreeDPoseUnityBarracuda#download-and-put-files)**

|ONNX (NCHW)|OpenVINO (NCHW)|TFLite (NHWC)|
|:--:|:--:|:--:|
|![Resnet34_3inputs_448x448_20200609 onnx_](https://user-images.githubusercontent.com/33194443/96398683-62683680-1207-11eb-928d-e4cb6c8cc188.png)|![Resnet34_3inputs_448x448_20200609 xml](https://user-images.githubusercontent.com/33194443/96153010-23f12400-0f48-11eb-8186-4bbad73b517a.png)|![model_float32 tflite](https://user-images.githubusercontent.com/33194443/96153019-26ec1480-0f48-11eb-96be-0c405ee2cbf7.png)|

## 9. My article
- **[[English] Converting PyTorch, ONNX, Caffe, and OpenVINO (NCHW) models to Tensorflow / TensorflowLite (NHWC) in a snap](https://qiita.com/PINTO/items/ed06e03eb5c007c2e102)**

- **[[Japanese] Easily convert PyTorch, ONNX, Caffe and OpenVINO (NCHW) models to Tensorflow / TensorflowLite (NHWC)](https://qiita.com/PINTO/items/7a0bcaacc77bb5d6abb1)**

- **[[Japanese] How to avoid the "main.ERROR - Only float32 and uint8 are supported currently, got -xxx. Node number n (op name) failed to invoke" error that occurs at inference time after converting a Full Integer Quantization (.tflite) model containing tf.image.resize to an EdgeTPU model](https://qiita.com/PINTO/items/6ff62da1d02089442c8c)**

## 10. Conversion Confirmed Models
1. u-2-net
2. mobilenet-v2-pytorch
3. midasnet
4. footprints
5. efficientnet-b0-pytorch
6. efficientdet-d0
7. dense_depth
8. deeplabv3
9. colorization-v2-norebal
10. age-gender-recognition-retail-0013
11. resnet
12. arcface
13. emotion-ferplus
14. mosaic
15. retinanet
16. shufflenet-v2
17. squeezenet
18. version-RFB-320
19. yolov4
20. yolov4x-mish
21. ThreeDPoseUnityBarracuda - Resnet34_3inputs_448x448
22. efficientnet-lite4
23. nanodet
24. yolov4-tiny
25. yolov5s
26. yolact
27. MiDaS v2
28. MODNet
29. Person Reidentification
30. DeepSort
31. DINO (Transformer)



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/PINTO0309/openvino2tensorflow",
    "name": "openvino2tensorflow",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">3.6",
    "maintainer_email": "",
    "keywords": "",
    "author": "Katsuya Hyodo",
    "author_email": "rmsdh122@yahoo.co.jp",
    "download_url": "https://files.pythonhosted.org/packages/09/75/e11777080b0e8783958da7c66ffc8e2a5d066e43f0a6ce91f603fb8efa25/openvino2tensorflow-1.15.1.tar.gz",
    "platform": "linux",
    "description": "# openvino2tensorflow\n\n<p align=\"center\">\n  <img src=\"https://user-images.githubusercontent.com/33194443/104584047-4e688f80-56a5-11eb-8dc2-5816487239d0.png\" />\n</p>\n\nThis script converts the OpenVINO IR model to Tensorflow's saved_model, tflite, h5, tfjs, tftrt(TensorRT), CoreML, EdgeTPU, ONNX and pb. PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> Tensorflow/Keras (NHWC) -> TFLite (NHWC). And the conversion from .pb to saved_model and from saved_model to .pb and from .pb to .tflite and saved_model to .tflite and saved_model to onnx. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation. NVIDIA GPU (dGPU) support.\n\n[Special custom TensorFlow binaries](https://github.com/PINTO0309/Tensorflow-bin) and [special custom TensorFLow Lite binaries](https://github.com/PINTO0309/TensorflowLite-bin) are used.\n\nWork in progress now.\n\n**I'm continuing to add more layers of support and bug fixes on a daily basis. If you have a model that you are having trouble converting, please share the `.bin` and `.xml` with the issue. I will try to convert as much as possible.**\n\n[![PyPI - Downloads](https://img.shields.io/pypi/dm/openvino2tensorflow?color=2BAF2B&label=Downloads%EF%BC%8FInstalled)](https://pypistats.org/packages/openvino2tensorflow) ![GitHub](https://img.shields.io/github/license/PINTO0309/openvino2tensorflow?color=2BAF2B) [![PyPI](https://img.shields.io/pypi/v/openvino2tensorflow?color=2BAF2B)](https://pypi.org/project/openvino2tensorflow/)\n\n![ezgif com-gif-maker (4)](https://user-images.githubusercontent.com/33194443/103457894-4ffd9380-4d46-11eb-86dd-f753f2fca093.gif)\n\n![ezgif com-gif-maker (3)](https://user-images.githubusercontent.com/33194443/103456348-8d5b2480-4d38-11eb-8a58-9b7c7203b18c.gif)\n\n## 1. Environment\n- TensorFlow v2.6.0+\n- OpenVINO 2021.4.582+\n- Python 3.6+\n- tensorflowjs **`pip3 install --upgrade tensorflowjs`**\n- **[tensorrt](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)**\n- coremltools **`pip3 install --upgrade coremltools`**\n- onnx **`pip3 install --upgrade onnx`**\n- tf2onnx **`pip3 install --upgrade tf2onnx`**\n- tensorflow-datasets **`pip3 install --upgrade tensorflow-datasets`**\n- **[edgetpu_compiler](https://coral.ai/docs/edgetpu/compiler/#system-requirements)**\n- Docker\n\n## 2. 
Use case\n\n- PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) ->\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)\n  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)\n\n- Caffe (NCHW) -> OpenVINO (NCHW) ->\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)\n  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)\n\n- MXNet (NCHW) -> OpenVINO (NCHW) ->\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)\n  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)\n\n- Keras (NHWC) -> OpenVINO (NCHW\u30fbOptimized) ->\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFLite (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TFJS (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> TF-TRT (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> CoreML (NHWC)\n  - -> **`openvino2tensorflow`** -> Tensorflow/Keras (NHWC) -> ONNX (NHWC)\n  - -> **`openvino2tensorflow`** -> Myriad Inference Engine Blob (NCHW)\n\n- saved_model -> **`saved_model_to_pb`** -> pb\n\n- saved_model ->\n  - -> **`saved_model_to_tflite`** -> TFLite\n  - -> **`saved_model_to_tflite`** -> TFJS\n  - -> **`saved_model_to_tflite`** -> TF-TRT\n  - -> **`saved_model_to_tflite`** -> EdgeTPU\n  - -> **`saved_model_to_tflite`** -> CoreML\n  - -> **`saved_model_to_tflite`** -> ONNX\n\n- pb -> **`pb_to_tflite`** -> TFLite\n\n- pb -> **`pb_to_saved_model`** -> saved_model\n\n## 3. 
Supported Layers\n- Currently, there are problems with the Reshape operation of 5D Tensor.\n\n|No.|OpenVINO Layer|TF Layer|Remarks|\n|:--:|:--|:--|:--|\n|1|Parameter|Input||\n|2|Const|Constant, Bias||\n|3|Convolution|Conv2D||\n|4|Add|Add||\n|5|ReLU|ReLU||\n|6|PReLU|PReLU|Maximum(0.0,x)+alpha\\*Minimum(0.0,x)|\n|7|MaxPool|MaxPool2D||\n|8|AvgPool|AveragePooling2D||\n|9|GroupConvolution|DepthwiseConv2D, Conv2D/Split/Concat||\n|10|ConvolutionBackpropData|Conv2DTranspose||\n|11|Concat|Concat||\n|12|Multiply|Multiply||\n|13|Tan|Tan||\n|14|Tanh|Tanh||\n|15|Elu|Elu||\n|16|Sigmoid|Sigmoid||\n|17|HardSigmoid|hard_sigmoid||\n|18|SoftPlus|SoftPlus||\n|19|Swish|Swish|You can replace swish and hard-swish with each other by using the \"--replace_swish_and_hardswish\" option|\n|20|Interpolate|ResizeNearestNeighbor, ResizeBilinear||\n|21|ShapeOf|Shape||\n|22|Convert|Cast||\n|23|StridedSlice|Strided_Slice||\n|24|Pad|Pad, MirrorPad||\n|25|Clamp|ReLU6, Clip||\n|26|TopK|ArgMax, top_k||\n|27|Transpose|Transpose||\n|28|Squeeze|Squeeze||\n|29|Unsqueeze|Identity, expand_dims|WIP|\n|30|ReduceMean|reduce_mean||\n|31|ReduceMax|reduce_max||\n|32|ReduceMin|reduce_min||\n|33|ReduceSum|reduce_sum||\n|34|ReduceProd|reduce_prod||\n|35|Subtract|Subtract||\n|36|MatMul|MatMul||\n|37|Reshape|Reshape||\n|38|Range|Range|WIP|\n|39|Exp|Exp||\n|40|Abs|Abs||\n|41|SoftMax|SoftMax||\n|42|Negative|Negative||\n|43|Maximum|Maximum|No broadcast|\n|44|Minimum|Minimum|No broadcast|\n|45|Acos|Acos||\n|46|Acosh|Acosh||\n|47|Asin|Asin||\n|48|Asinh|Asinh||\n|49|Atan|Atan||\n|50|Atanh|Atanh||\n|51|Ceiling|Ceil||\n|52|Cos|Cos||\n|53|Cosh|Cosh||\n|54|Sin|Sin||\n|55|Sinh|Sinh||\n|56|Gather|Gather||\n|57|Divide|Divide, FloorDiv||\n|58|Erf|Erf||\n|59|Floor|Floor||\n|60|FloorMod|FloorMod||\n|61|HSwish|HardSwish|x\\*ReLU6(x+3)\\*0.16666667, You can replace swish and hard-swish with each other by using the \"--replace_swish_and_hardswish\" option|\n|62|Log|Log||\n|63|Power|Pow|No broadcast|\n|64|Mish|Mish|x\\*Tanh(softplus(x))|\n|65|Selu|Selu||\n|66|Equal|equal||\n|67|NotEqual|not_equal||\n|68|Greater|greater||\n|69|GreaterEqual|greater_equal||\n|70|Less|less||\n|71|LessEqual|less_equal||\n|72|Select|Select|No broadcast|\n|73|LogicalAnd|logical_and||\n|74|LogicalNot|logical_not||\n|75|LogicalOr|logical_or||\n|76|LogicalXor|logical_xor||\n|77|Broadcast|broadcast_to, ones, Multiply|numpy / bidirectional mode, WIP|\n|78|Split|Split||\n|79|VariadicSplit|Split, Slice, SplitV||\n|80|MVN|reduce_mean, sqrt, reduce_variance|(x-reduce_mean(x))/sqrt(reduce_variance(x)+eps)|\n|81|NonZero|not_equal, boolean_mask||\n|82|ReduceL2|Multiply, reduce_sum, rsqrt||\n|83|SpaceToDepth|SpaceToDepth||\n|84|DepthToSpace|DepthToSpace||\n|85|Sqrt|sqrt||\n|86|SquaredDifference|squared_difference||\n|87|FakeQuantize|subtract, multiply, round, greater, where, less_equal, add||\n|88|Tile|tile||\n|89|GatherND|gather_nd||\n|90|NonMaxSuppression|non_max_suppression|WIP. Only available for batch size 1. To simplify post-processing ignore all OPs after non_max_suppression.|\n|91|Gelu|gelu||\n|92|Result|Identity|Output|\n\n## 4. Setup\n### 4-1. 
**[Environment construction pattern 1]** Execution by Docker (`strongly recommended`)\nYou do not need to install any packages other than Docker.\n```bash\n$ docker pull pinto0309/openvino2tensorflow\nor\n$ docker build -t pinto0309/openvino2tensorflow:latest .\n\n# If you don't need to access the GUI of the HostPC and the USB camera.\n$ docker run -it --rm \\\n  -v `pwd`:/home/user/workdir \\\n  pinto0309/openvino2tensorflow:latest\n\n# If conversion to TF-TRT is not required. And if you need to access the HostPC GUI and USB camera.\n$ xhost +local: && \\\n  docker run -it --rm \\\n  -v `pwd`:/home/user/workdir \\\n  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \\\n  --device /dev/video0:/dev/video0:mwr \\\n  --net=host \\\n  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \\\n  -e DISPLAY=$DISPLAY \\\n  --privileged \\\n  pinto0309/openvino2tensorflow:latest\n$ cd workdir\n\n# If you need to convert to TF-TRT. And if you need to access the HostPC GUI and USB camera.\n$ xhost +local: && \\\n  docker run --gpus all -it --rm \\\n  -v `pwd`:/home/user/workdir \\\n  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \\\n  --device /dev/video0:/dev/video0:mwr \\\n  --net=host \\\n  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \\\n  -e DISPLAY=$DISPLAY \\\n  --privileged \\\n  pinto0309/openvino2tensorflow:latest\n$ cd workdir\n```\n### 4-2. **[Environment construction pattern 2]** Execution by Host machine\nTo install using the **[Python Package Index (PyPI)](https://pypi.org/project/openvino2tensorflow/)**, use the following command.\n\n```bash\n$ pip3 install --user --upgrade openvino2tensorflow\n```\n\nTo install with the latest source code of the main branch, use the following command.\n\n```bash\n$ pip3 install --user --upgrade git+https://github.com/PINTO0309/openvino2tensorflow\n```\n\n## 5. Usage\n### 5-1. 
openvino to tensorflow convert\n```bash\nusage: openvino2tensorflow\n  [-h]\n  --model_path MODEL_PATH\n  [--model_output_path MODEL_OUTPUT_PATH]\n  [--output_saved_model]\n  [--output_h5]\n  [--output_weight_and_json]\n  [--output_pb]\n  [--output_no_quant_float32_tflite]\n  [--output_weight_quant_tflite]\n  [--output_float16_quant_tflite]\n  [--output_integer_quant_tflite]\n  [--output_full_integer_quant_tflite]\n  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]\n  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]\n  [--calib_ds_type CALIB_DS_TYPE]\n  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]\n  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]\n  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]\n  [--tfds_download_flg]\n  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]\n  [--output_tfjs]\n  [--output_tftrt]\n  [--output_coreml]\n  [--output_edgetpu]\n  [--output_onnx]\n  [--onnx_opset ONNX_OPSET]\n  [--output_myriad]\n  [--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES]\n  [--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES]\n  [--replace_swish_and_hardswish]\n  [--optimizing_hardswish_for_edgetpu]\n  [--replace_prelu_and_minmax]\n  [--yolact]\n  [--restricted_resize_image_mode]\n  [--weight_replacement_config WEIGHT_REPLACEMENT_CONFIG]\n  [--debug]\n  [--debug_layer_number DEBUG_LAYER_NUMBER]\n\n\noptional arguments:\n  -h, --help\n                        show this help message and exit\n  --model_path MODEL_PATH\n                        input IR model path (.xml)\n  --model_output_path MODEL_OUTPUT_PATH\n                        The output folder path of the converted model file\n  --output_saved_model\n                        saved_model output switch\n  --output_h5\n                        .h5 output switch\n  --output_weight_and_json\n                        weight of h5 and json output switch\n  --output_pb\n                        .pb output switch\n  --output_no_quant_float32_tflite\n                        float32 tflite output switch\n  --output_weight_quant_tflite\n                        weight quant tflite output switch\n  --output_float16_quant_tflite\n                        float16 quant tflite output switch\n  --output_integer_quant_tflite\n                        integer quant tflite output switch\n  --output_full_integer_quant_tflite\n                        full integer quant tflite output switch\n  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE\n                        Input and output types when doing Integer Quantization\n                        ('int8 (default)' or 'uint8')\n  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION\n                        String formulas for normalization. It is evaluated by\n                        Pythons eval() function.\n                        Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'\n  --calib_ds_type CALIB_DS_TYPE\n                        Types of data sets for calibration. 
tfds or numpy\n                        Default: numpy\n  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION\n                        Dataset name for TensorFlow Datasets for calibration.\n                        https://www.tensorflow.org/datasets/catalog/overview\n  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION\n                        Split name for TensorFlow Datasets for calibration.\n                        https://www.tensorflow.org/datasets/catalog/overview\n  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS\n                        Download destination folder path for the calibration\n                        dataset. Default: $HOME/TFDS\n  --tfds_download_flg\n                        True to automatically download datasets from\n                        TensorFlow Datasets. True or False\n  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY\n                        The path from which to load the .npy file containing\n                        the numpy binary version of the calibration data.\n                        Default: sample_npy/calibration_data_img_sample.npy\n  --output_tfjs\n                        tfjs model output switch\n  --output_tftrt\n                        tftrt model output switch\n  --output_coreml\n                        coreml model output switch\n  --output_edgetpu\n                        edgetpu model output switch\n  --output_onnx\n                        onnx model output switch\n  --onnx_opset ONNX_OPSET\n                        onnx opset version number\n  --output_myriad\n                        myriad inference engine blob output switch\n  --vpu_number_of_shaves VPU_NUMBER_OF_SHAVES\n                        vpu number of shaves. Default: 4\n  --vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES\n                        vpu number of cmx slices. Default: 4\n  --replace_swish_and_hardswish\n                        Replace swish and hard-swish with each other\n  --optimizing_hardswish_for_edgetpu\n                        Optimizing hardswish for edgetpu\n  --replace_prelu_and_minmax\n                        Replace prelu and minimum/maximum with each other\n  --yolact\n                        Specify when converting the Yolact model\n  --restricted_resize_image_mode\n                        Specify this if the upsampling contains OPs that are\n                        not scaled by integer multiples. Optimization for\n                        EdgeTPU will be disabled.\n  --weight_replacement_config WEIGHT_REPLACEMENT_CONFIG\n                        Replaces the value of Const for each layer_id defined\n                        in json. Specify the path to the json file.\n                        'weight_replacement_config.json'\n  --debug\n                        debug mode switch\n  --debug_layer_number DEBUG_LAYER_NUMBER\n                        The last layer number to output when debugging. Used\n                        only when --debug=True\n```\n### 5-2. 
saved_model to tflite convert\n```bash\nusage: saved_model_to_tflite\n  [-h]\n  --saved_model_dir_path SAVED_MODEL_DIR_PATH\n  [--signature_def SIGNATURE_DEF]\n  [--input_shapes INPUT_SHAPES]\n  [--model_output_dir_path MODEL_OUTPUT_DIR_PATH]\n  [--output_no_quant_float32_tflite]\n  [--output_weight_quant_tflite]\n  [--output_float16_quant_tflite]\n  [--output_integer_quant_tflite]\n  [--output_full_integer_quant_tflite]\n  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]\n  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]\n  [--calib_ds_type CALIB_DS_TYPE]\n  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]\n  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]\n  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]\n  [--tfds_download_flg]\n  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]\n  [--output_tfjs]\n  [--output_tftrt]\n  [--output_coreml]\n  [--output_edgetpu]\n  [--output_onnx]\n  [--onnx_opset ONNX_OPSET]\n\noptional arguments:\n  -h, --help\n                        show this help message and exit\n  --saved_model_dir_path SAVED_MODEL_DIR_PATH\n                        Input saved_model dir path\n  --signature_def SIGNATURE_DEF\n                        Specifies the signature name to load from saved_model\n  --input_shapes INPUT_SHAPES\n                        Overwrites an undefined input dimension (None or -1).\n                        Specify the input shape in [n,h,w,c] format.\n                        For non-4D tensors, specify [a,b,c,d,e], [a,b], etc.\n                        A comma-separated list if there are multiple inputs.\n                        (e.g.) --input_shapes [1,256,256,3],[1,64,64,3],[1,2,16,16,3]\n  --model_output_dir_path MODEL_OUTPUT_DIR_PATH\n                        The output folder path of the converted model file\n  --output_no_quant_float32_tflite\n                        float32 tflite output switch\n  --output_weight_quant_tflite\n                        weight quant tflite output switch\n  --output_float16_quant_tflite\n                        float16 quant tflite output switch\n  --output_integer_quant_tflite\n                        integer quant tflite output switch\n  --output_full_integer_quant_tflite\n                        full integer quant tflite output switch\n  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE\n                        Input and output types when doing Integer Quantization\n                        ('int8 (default)' or 'uint8')\n  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION\n                        String formulas for normalization. It is evaluated by\n                        Pythons eval() function.\n                        Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'\n  --calib_ds_type CALIB_DS_TYPE\n                        Types of data sets for calibration. 
tfds or numpy\n                        Default: numpy\n  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION\n                        Dataset name for TensorFlow Datasets for calibration.\n                        https://www.tensorflow.org/datasets/catalog/overview\n  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION\n                        Split name for TensorFlow Datasets for calibration.\n                        https://www.tensorflow.org/datasets/catalog/overview\n  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS\n                        Download destination folder path for the calibration\n                        dataset. Default: $HOME/TFDS\n  --tfds_download_flg\n                        True to automatically download datasets from\n                        TensorFlow Datasets. True or False\n  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY\n                        The path from which to load the .npy file containing\n                        the numpy binary version of the calibration data.\n                        Default: sample_npy/calibration_data_img_sample.npy\n  --output_tfjs\n                        tfjs model output switch\n  --output_tftrt\n                        tftrt model output switch\n  --output_coreml\n                        coreml model output switch\n  --output_edgetpu\n                        edgetpu model output switch\n  --output_onnx\n                        onnx model output switch\n  --onnx_opset ONNX_OPSET\n                        onnx opset version number\n```\n### 5-3. pb to saved_model convert\n```bash\nusage: pb_to_saved_model\n  [-h]\n  --pb_file_path PB_FILE_PATH\n  --inputs INPUTS\n  --outputs OUTPUTS\n  [--model_output_path MODEL_OUTPUT_PATH]\n\noptional arguments:\n  -h, --help\n                        show this help message and exit\n  --pb_file_path PB_FILE_PATH\n                        Input .pb file path (.pb)\n  --inputs INPUTS\n                        (e.g.1) input:0,input:1,input:2\n                        (e.g.2) images:0,input:0,param:0\n  --outputs OUTPUTS\n                        (e.g.1) output:0,output:1,output:2\n                        (e.g.2) Identity:0,Identity:1,output:0\n  --model_output_path MODEL_OUTPUT_PATH\n                        The output folder path of the converted model file\n```\n### 5-4. pb to tflite convert\n```bash\nusage: pb_to_tflite\n  [-h]\n  --pb_file_path PB_FILE_PATH\n  --inputs INPUTS\n  --outputs OUTPUTS\n  [--model_output_path MODEL_OUTPUT_PATH]\n\noptional arguments:\n  -h, --help\n                        show this help message and exit\n  --pb_file_path PB_FILE_PATH\n                        Input .pb file path (.pb)\n  --inputs INPUTS\n                        (e.g.1) input,input_1,input_2\n                        (e.g.2) images,input,param\n  --outputs OUTPUTS\n                        (e.g.1) output,output_1,output_2\n                        (e.g.2) Identity,Identity_1,output\n  --model_output_path MODEL_OUTPUT_PATH\n                        The output folder path of the converted model file\n```\n### 5-5. 
saved_model to pb convert\n```bash\nusage: saved_model_to_pb\n  [-h]\n  --saved_model_dir_path SAVED_MODEL_DIR_PATH\n  [--model_output_dir_path MODEL_OUTPUT_DIR_PATH]\n  [--signature_name SIGNATURE_NAME]\n\noptional arguments:\n  -h, --help\n                        show this help message and exit\n  --saved_model_dir_path SAVED_MODEL_DIR_PATH\n                        Input saved_model dir path\n  --model_output_dir_path MODEL_OUTPUT_DIR_PATH\n                        The output folder path of the converted model file (.pb)\n  --signature_name SIGNATURE_NAME\n                        Signature name to be extracted from saved_model\n```\n### 5-6. Extraction of IR weight\n```bash\nusage: ir_weight_extractor\n  [-h]\n  -m MODEL\n  -o OUTPUT_PATH\n\noptional arguments:\n  -h, --help\n                        show this help message and exit\n  -m MODEL, --model MODEL\n                        input IR model path\n  -o OUTPUT_PATH, --output_path OUTPUT_PATH\n                        weights output folder path\n```\n\n## 6. Execution sample\n### 6-1. Conversion of OpenVINO IR to Tensorflow models\nOutOfMemory may occur when converting to saved_model or h5 when the file size of the original model is large, please try the conversion to a pb file alone.\n```\n$ openvino2tensorflow \\\n  --model_path openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml \\\n  --output_saved_model \\\n  --output_pb \\\n  --output_weight_quant_tflite \\\n  --output_float16_quant_tflite \\\n  --output_no_quant_float32_tflite\n```\n### 6-2. Convert Protocol Buffer (.pb) to saved_model\nThis tool is useful if you want to check the internal structure of pb files, tflite files, .h5 files, coreml files and IR (.xml) files. **https://lutzroeder.github.io/netron/**\n```\n$ pb_to_saved_model \\\n  --pb_file_path model_float32.pb \\\n  --inputs inputs:0 \\\n  --outputs Identity:0\n```\n### 6-3. Convert Protocol Buffer (.pb) to tflite\n```\n$ pb_to_tflite \\\n  --pb_file_path model_float32.pb \\\n  --inputs inputs \\\n  --outputs Identity,Identity_1,Identity_2\n```\n### 6-4. Convert saved_model to Protocol Buffer (.pb)\n```\n$ saved_model_to_pb \\\n  --saved_model_dir_path saved_model \\\n  --model_output_dir_path pb_from_saved_model \\\n  --signature_name serving_default\n```\n\n### 6-5. Converts saved_model to OpenVINO IR\n```\n$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo_tf.py \\\n  --saved_model_dir saved_model \\\n  --output_dir openvino/reverse\n```\n### 6-6. Checking the structure of saved_model\n```\n$ saved_model_cli show \\\n  --dir saved_model \\\n  --tag_set serve \\\n  --signature_def serving_default\n```\n\n### 6-7. Replace weights or constant values in **`Const`** OP\nIf the transformation behavior of **`Reshape`**, **`Transpose`**, etc. 
does not go as expected, you can force the **`Const`** content to change by defining weights and constant values in a JSON file and having it read in.\n```\n$ openvino2tensorflow \\\n  --model_path xxx.xml \\\n  --output_saved_model \\\n  --output_pb \\\n  --output_weight_quant_tflite \\\n  --output_float16_quant_tflite \\\n  --output_no_quant_float32_tflite \\\n  --weight_replacement_config weight_replacement_config_sample.json\n```\nStructure of JSON sample\n```json\n{\n    \"format_version\": 1,\n    \"layers\": [\n        {\n            \"layer_id\": \"1123\",\n            \"replace_mode\": \"direct\",\n            \"values\": [\n                1,\n                2,\n                513,\n                513\n            ]\n        },\n        {\n            \"layer_id\": \"1125\",\n            \"replace_mode\": \"npy\",\n            \"values\": \"weights_sample/1125.npy\"\n        }\n    ]\n}\n```\n\n|No.|Elements|Description|\n|:--|:--|:--|\n|1|format_version|Format version of weight_replacement_config. Only 1 so far.|\n|2|layers|A list of layers. Enclose it with \"[ ]\" to define multiple layers to child elements.|\n|2-1|layer_id|ID of the Const layer whose weight/constant parameter is to be swapped. For example, specify \"1123\" for layer id=\"1123\" for type=\"Const\" in .xml.<br>![Screenshot 2021-02-08 01:06:30](https://user-images.githubusercontent.com/33194443/107152221-068a0f00-69aa-11eb-9d9e-f48bb1c3f781.png)|\n|2-2|replace_mode|\"direct\" or \"npy\".<br>\"direct\": Specify the values of the Numpy matrix directly in the \"values\" attribute. Ignores the values recorded in the .bin file and replaces them with the values specified in \"values\".<br>![Screenshot 2021-02-08 01:12:06](https://user-images.githubusercontent.com/33194443/107152361-cc6d3d00-69aa-11eb-8302-5e18a723ec34.png)<br>\"npy\": Load a Numpy binary file with the matrix output by np.save('xyz', a). The \"values\" attribute specifies the path to the Numpy binary file.<br>![Screenshot 2021-02-08 01:12:23](https://user-images.githubusercontent.com/33194443/107152376-dc851c80-69aa-11eb-9b3f-469b91af1d19.png)|\n|2-3|values|Specify the value or the path to the Numpy binary file to replace the weight/constant value recorded in .bin. The way to specify is as described in the description of 'replace_mode'.|\n\n### 6-8. Check the contents of the .npy file, which is a binary version of the image file\n```\n$ view_npy --npy_file_path sample_npy/calibration_data_img_sample.npy\n```\nPress the **`Q`** button to display the next image. **`calibration_data_img_sample.npy`** contains 20 images extracted from the MS-COCO data set.\n![ezgif com-gif-maker](https://user-images.githubusercontent.com/33194443/109318923-aba15480-7891-11eb-84aa-034f77125f34.gif)\n\n### 6-9. Sample image of a conversion error message\nSince it is very difficult to mechanically predict the correct behavior of **`Transpose`** and **`Reshape`**, errors like the one shown below may occur. Using the information in the figure below, try several times to force the replacement of constants and weights using the **`--weight_replacement_config`** option [#6-7-replace-weights-or-constant-values-in-const-op](#6-7-replace-weights-or-constant-values-in-const-op). This is a very patient process, but if you take the time, you should be able to convert it correctly.\n![error_sample2](https://user-images.githubusercontent.com/33194443/124498169-e181b700-ddf6-11eb-9200-83ba44c62410.png)\n\n## 7. 
### 6-8. Check the contents of a .npy file, which is a binary version of an image file
```
$ view_npy --npy_file_path sample_npy/calibration_data_img_sample.npy
```
Press the **`Q`** key to display the next image. **`calibration_data_img_sample.npy`** contains 20 images extracted from the MS-COCO dataset.
![ezgif com-gif-maker](https://user-images.githubusercontent.com/33194443/109318923-aba15480-7891-11eb-84aa-034f77125f34.gif)

### 6-9. Sample image of a conversion error message
Since it is very difficult to mechanically predict the correct behavior of **`Transpose`** and **`Reshape`**, errors like the one shown below may occur. Using the information in the figure, repeatedly force the replacement of constants and weights with the **`--weight_replacement_config`** option (see [6-7. Replace weights or constant values in Const OP](#6-7-replace-weights-or-constant-values-in-const-op)). This requires patience, but if you take the time, you should be able to convert the model correctly.
![error_sample2](https://user-images.githubusercontent.com/33194443/124498169-e181b700-ddf6-11eb-9200-83ba44c62410.png)

## 7. Output sample
![Screenshot 2020-10-16 00:08:40](https://user-images.githubusercontent.com/33194443/96149093-e38fa700-0f43-11eb-8101-65fc20b2cc8f.png)

## 8. Model Structure
**[https://digital-standard.com/threedpose/models/Resnet34_3inputs_448x448_20200609.onnx](https://github.com/digital-standard/ThreeDPoseUnityBarracuda#download-and-put-files)**

|ONNX (NCHW)|OpenVINO (NCHW)|TFLite (NHWC)|
|:--:|:--:|:--:|
|![Resnet34_3inputs_448x448_20200609 onnx_](https://user-images.githubusercontent.com/33194443/96398683-62683680-1207-11eb-928d-e4cb6c8cc188.png)|![Resnet34_3inputs_448x448_20200609 xml](https://user-images.githubusercontent.com/33194443/96153010-23f12400-0f48-11eb-8186-4bbad73b517a.png)|![model_float32 tflite](https://user-images.githubusercontent.com/33194443/96153019-26ec1480-0f48-11eb-96be-0c405ee2cbf7.png)|

## 9. My articles
- **[[English] Converting PyTorch, ONNX, Caffe, and OpenVINO (NCHW) models to Tensorflow / TensorflowLite (NHWC) in a snap](https://qiita.com/PINTO/items/ed06e03eb5c007c2e102)**

- **[[Japanese] Easily converting PyTorch, ONNX, Caffe, and OpenVINO (NCHW) models to Tensorflow / TensorflowLite (NHWC)](https://qiita.com/PINTO/items/7a0bcaacc77bb5d6abb1)**

- **[[Japanese] How to avoid the "main.ERROR - Only float32 and uint8 are supported currently, got -xxx. Node number n (op name) failed to invoke" error that occurs at inference time after converting a Full Integer Quantization (.tflite) model containing tf.image.resize to an EdgeTPU model](https://qiita.com/PINTO/items/6ff62da1d02089442c8c)**

## 10. Conversion Confirmed Models
1. u-2-net
2. mobilenet-v2-pytorch
3. midasnet
4. footprints
5. efficientnet-b0-pytorch
6. efficientdet-d0
7. dense_depth
8. deeplabv3
9. colorization-v2-norebal
10. age-gender-recognition-retail-0013
11. resnet
12. arcface
13. emotion-ferplus
14. mosaic
15. retinanet
16. shufflenet-v2
17. squeezenet
18. version-RFB-320
19. yolov4
20. yolov4x-mish
21. ThreeDPoseUnityBarracuda - Resnet34_3inputs_448x448
22. efficientnet-lite4
23. nanodet
24. yolov4-tiny
25. yolov5s
26. yolact
27. MiDaS v2
28. MODNet
29. Person Reidentification
30. DeepSort
31. DINO (Transformer)
    "bugtrack_url": null,
    "license": "MIT License",
    "summary": "This script converts the OpenVINO IR model to Tensorflow's saved_model, tflite, h5 and pb. in (NCHW) format",
    "version": "1.15.1",
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "md5": "f9881ecab79d9bd08c9794c19c35035e",
                "sha256": "5a6972ecdafaf593d5089647bc0ebe28066ddd0326e6325bcdb1ae3f65962f6f"
            },
            "downloads": -1,
            "filename": "openvino2tensorflow-1.15.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f9881ecab79d9bd08c9794c19c35035e",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">3.6",
            "size": 41542,
            "upload_time": "2021-07-27T14:39:17",
            "upload_time_iso_8601": "2021-07-27T14:39:17.661951Z",
            "url": "https://files.pythonhosted.org/packages/4e/23/05531b8e4d6fd5746c2d542a802ea9a0528daa92e30989b6929798de6dd5/openvino2tensorflow-1.15.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "md5": "00113f57f1a83241d60fe1add642f097",
                "sha256": "b58717beae81af83b386fe0792db923093a48ed981cadc9daaf68ec2f3f9014d"
            },
            "downloads": -1,
            "filename": "openvino2tensorflow-1.15.1.tar.gz",
            "has_sig": false,
            "md5_digest": "00113f57f1a83241d60fe1add642f097",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">3.6",
            "size": 49266,
            "upload_time": "2021-07-27T14:39:19",
            "upload_time_iso_8601": "2021-07-27T14:39:19.761988Z",
            "url": "https://files.pythonhosted.org/packages/09/75/e11777080b0e8783958da7c66ffc8e2a5d066e43f0a6ce91f603fb8efa25/openvino2tensorflow-1.15.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2021-07-27 14:39:19",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "PINTO0309",
    "github_project": "openvino2tensorflow",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "openvino2tensorflow"
}
        