| Field | Value |
| --- | --- |
| Name | modelconv |
| Version | 0.3.1 |
| Summary | Converter for neural models into various formats. |
| Upload time | 2024-11-12 15:11:50 |
| Requires Python | >=3.8 |
| Keywords | ml, onnx, openvino, nn, ai, embedded |
# ModelConverter - Compilation Library
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI](https://img.shields.io/pypi/v/modelconv?label=pypi%20package)](https://pypi.org/project/modelconv/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/modelconv)](https://pypi.org/project/modelconv/)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Docformatter](https://img.shields.io/badge/%20formatter-docformatter-fedcba.svg)](https://github.com/PyCQA/docformatter)
[![Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
Convert your **ONNX** models to a format compatible with any generation of Luxonis camera using the **Model Compilation Library**.
## Status
| Package | Test | Deploy |
| --------- | ----------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| **RVC2** | ![RVC2 Tests](https://github.com/luxonis/modelconverter/actions/workflows/rvc2_test.yaml/badge.svg) | ![RVC2 Push](https://github.com/luxonis/modelconverter/actions/workflows/rvc2_publish.yaml/badge.svg) |
| **RVC3** | ![RVC3 Tests](https://github.com/luxonis/modelconverter/actions/workflows/rvc3_test.yaml/badge.svg) | ![RVC3 Push](https://github.com/luxonis/modelconverter/actions/workflows/rvc3_publish.yaml/badge.svg) |
| **RVC4** | ![RVC4 Tests](https://github.com/luxonis/modelconverter/actions/workflows/rvc4_test.yaml/badge.svg) | ![RVC4 Push](https://github.com/luxonis/modelconverter/actions/workflows/rvc4_publish.yaml/badge.svg) |
| **Hailo** | ![Hailo Tests](https://github.com/luxonis/modelconverter/actions/workflows/hailo_test.yaml/badge.svg) | ![Hailo Push](https://github.com/luxonis/modelconverter/actions/workflows/hailo_publish.yaml/badge.svg) |
## Table of Contents
- [ModelConverter - Compilation Library](#modelconverter---compilation-library)
- [Status](#status)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [System Requirements](#system-requirements)
- [Before You Begin](#before-you-begin)
- [Instructions](#instructions)
- [GPU Support](#gpu-support)
- [Running ModelConverter](#running-modelconverter)
- [Encoding Configuration Flags](#encoding-configuration-flags)
- [YAML Configuration File](#yaml-configuration-file)
- [NN Archive Configuration File](#nn-archive-configuration-file)
- [Sharing Files](#sharing-files)
- [Usage](#usage)
- [Examples](#examples)
- [Multi-Stage Conversion](#multi-stage-conversion)
- [Interactive Mode](#interactive-mode)
- [Calibration Data](#calibration-data)
- [Inference](#inference)
- [Inference Example](#inference-example)
- [Benchmarking](#benchmarking)
## Installation
### System Requirements
`ModelConverter` requires `docker` to be installed on your system.
Ubuntu is recommended for the best compatibility.
On Windows or macOS, it is recommended to install `docker` via [Docker Desktop](https://www.docker.com/products/docker-desktop).
Otherwise, follow the installation instructions for your OS from the [official website](https://docs.docker.com/engine/install/).
### Before You Begin
`ModelConverter` is in an experimental public beta stage. Some parts might change in the future.
To build the images, you need to download additional packages depending on the selected target and the desired version of the underlying conversion tools.
**RVC2**
Requires `openvino-<version>.tar.gz` to be present in `docker/extra_packages/`.
- Version `2023.2.0` archive can be downloaded from [here](https://drive.google.com/file/d/1IXtYi1Mwpsg3pr5cDXlEHdSUZlwJRTVP/view?usp=share_link).
- Version `2021.4.0` archive can be downloaded from [here](https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.4/l_openvino_toolkit_dev_ubuntu20_p_2021.4.582.tgz).
You only need to rename the archive to either `openvino-2023.2.0.tar.gz` or `openvino-2021.4.0.tar.gz` and place it in the `docker/extra_packages` directory.
**RVC3**
Only version `2023.2.0` of `OpenVINO` is supported for `RVC3`. Follow the instructions for `RVC2` to use the correct archive.
**RVC4**
Requires `snpe-<version>.zip` archive to be present in `docker/extra_packages`. You can download version `2.23.0` from [here](https://softwarecenter.qualcomm.com/api/download/software/qualcomm_neural_processing_sdk/v2.23.0.24.06.24.zip). You only need to rename it to `snpe-2.23.0.zip` and place it in the `docker/extra_packages` directory.
**HAILO**
Requires `hailo_ai_sw_suite_<version>:1` docker image to be present on the system. You can obtain the image by following the instructions on [Hailo website](https://developer.hailo.ai/developer-zone/sw-downloads/).
After you obtain the image, tag it as `hailo_ai_sw_suite_<version>:1` using `docker tag <old_name> hailo_ai_sw_suite_<version>:1`.
Furthermore, you need to use the `docker/hailo/Dockerfile.public` file to build the image. The `docker/hailo/Dockerfile` is for internal use only.
### Instructions
1. Build the docker image:

   ```bash
   docker build -f docker/<package>/Dockerfile -t luxonis/modelconverter-<package>:latest .
   ```

1. For easier use, install the ModelConverter CLI from PyPI:

   ```bash
   pip install modelconv
   ```

   For usage instructions, see `modelconverter --help`.
### GPU Support
To enable GPU acceleration for `hailo` conversion, install the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker).
## Running ModelConverter
There are two main ways to configure the conversion process:
1. **YAML Config File (Primary Method)**:
The primary way to configure the conversion is through a YAML configuration file. For reference, you can check [defaults.yaml](shared_with_container/configs/defaults.yaml) and other examples located in the [shared_with_container/configs](shared_with_container/configs) directory.
1. **NN Archive**:
Alternatively, you can use an [NN Archive](https://rvc4.docs.luxonis.com/software/ai-inference/nn-archive/#NN%20Archive) as input. An NN Archive includes a model in one of the supported formats—ONNX (.onnx), OpenVINO IR (.xml and .bin), or TensorFlow Lite (.tflite)—alongside a `config.json` file. The config.json file follows a specific configuration format as described in the [NN Archive Configuration Guide](https://rvc4.docs.luxonis.com/software/ai-inference/nn-archive/#NN%20Archive-Configuration).
**Modifying Settings with Command-Line Arguments**:
In addition to these two configuration methods, you have the flexibility to override specific settings directly via command-line arguments. By supplying `key-value` pairs in the CLI, you can adjust particular settings without explicitly altering the config files (YAML or NN Archive). For further details, refer to the [Examples](#examples) section.
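For orientation, a minimal single-stage YAML config might look like the sketch below. The field names mirror the CLI examples later in this document; the model and calibration paths are placeholders, and [defaults.yaml](shared_with_container/configs/defaults.yaml) remains the authoritative reference for the full schema.

```yaml
# Illustrative sketch only -- consult defaults.yaml for the full schema.
input_model: models/resnet18.onnx
scale_values: [255, 255, 255]
inputs:
  - name: input_1
    shape: [1, 3, 256, 256]
    encoding:
      from: RGB
      to: BGR
outputs:
  - name: output_0
calibration:
  path: calibration_data/resnet18   # placeholder path
```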
### Encoding Configuration Flags
In the conversion process, you have options to control the color encoding format in both the YAML configuration file and the NN Archive configuration. Here’s a breakdown of each available flag:
#### YAML Configuration File
The `encoding` flag in the YAML configuration file allows you to specify color encoding as follows:
- **Single-Value `encoding`**:
Setting encoding to a single value, such as *"RGB"*, *"BGR"*, *"GRAY"*, or *"NONE"*, will automatically apply this setting to both `encoding.from` and `encoding.to`. For example, `encoding: RGB` sets both `encoding.from` and `encoding.to` to *"RGB"* internally.
- **Multi-Value `encoding.from` and `encoding.to`**:
Alternatively, you can explicitly set `encoding.from` and `encoding.to` to different values. For example:
```yaml
encoding:
from: RGB
to: BGR
```
This configuration specifies that the input data is in RGB format and will be converted to BGR format during processing.
> [!NOTE]
> If the encoding is not specified in the YAML configuration, the default values are set to `encoding.from=RGB` and `encoding.to=BGR`.
> [!NOTE]
> Certain options can be set **globally**, applying to all inputs of the model, or **per input**. If specified per input, these settings will override the global configuration for that input alone. The options that support this flexibility include `scale_values`, `mean_values`, `encoding`, `data_type`, `shape`, and `layout`.
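As a sketch of this global/per-input behavior (the input names are illustrative), a config might set a global `encoding` and override it for a single input:

```yaml
# Illustrative: the global encoding applies to every input except
# input_2, which overrides it.
encoding: RGB
inputs:
  - name: input_1          # inherits the global encoding (RGB)
  - name: input_2
    encoding:              # per-input override for input_2 only
      from: BGR
      to: BGR
```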
#### NN Archive Configuration File
In the NN Archive configuration, there are two flags related to color encoding control:
- **`dai_type`**:
Provides more comprehensive control over the input type compatible with the DAI backend. It is read by DepthAI to automatically configure the processing pipeline, including any necessary modifications to the input image format.
- **`reverse_channels` (Deprecated)**:
Determines the input color format of the model: when set to *True*, the input is considered to be *"RGB"*, and when set to *False*, it is treated as *"BGR"*. This flag is deprecated and will be replaced by the `dai_type` flag in future versions.
> [!NOTE]
> If neither `dai_type` nor `reverse_channels` is provided, the input to the model is considered to be *"RGB"*.
> [!NOTE]
> If both `dai_type` and `reverse_channels` are provided, the converter will give priority to `dai_type`.
> [!IMPORTANT]
> Provide mean/scale values in the original color format used during model training (e.g., RGB or BGR). Any necessary channel permutation is handled internally—do not reorder values manually.
### Sharing Files
When using the supplied `docker-compose.yaml`, the `shared_with_container` directory facilitates file sharing between the host and container. This directory is mounted as `/app/shared_with_container/` inside the container. You can place your models, calibration data, and config files here. The directory structure is:
```txt
shared_with_container/
│
├── calibration_data/
│   └── <calibration data will be downloaded here>
│
├── configs/
│   ├── resnet18.yaml
│   └── <configs will be downloaded here>
│
├── models/
│   ├── resnet18.onnx
│   └── <models will be downloaded here>
│
└── outputs/
    └── <output_dir>
        ├── resnet18.onnx
        ├── resnet18.dlc
        ├── logs.txt
        ├── config.yaml
        └── intermediate_outputs/
            └── <intermediate files generated during the conversion>
```
While adhering to this structure is not mandatory as long as the files are visible inside the container, it is advised to keep the files organized.
The converter first searches for files exactly at the provided path. If not found, it searches relative to `/app/shared_with_container/`.
The output directory can be specified using the `--output-dir` CLI argument. If a directory with that name already exists, the current date and time are appended to its name. If not specified, the name is autogenerated in the format `<model_name>_to_<target>_<date>_<time>`.
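The autogenerated name can be sketched in Python; note that the exact `<date>_<time>` formatting below is an assumption, as the document does not specify the timestamp format used internally.

```python
from datetime import datetime
from typing import Optional


def default_output_dir(model_name: str, target: str,
                       now: Optional[datetime] = None) -> str:
    # Sketch of the documented <model_name>_to_<target>_<date>_<time>
    # scheme; the exact timestamp format used by the tool is assumed.
    now = now or datetime.now()
    return f"{model_name}_to_{target}_{now:%Y_%m_%d_%H_%M_%S}"


print(default_output_dir("resnet18", "rvc4", datetime(2024, 11, 12, 15, 11, 50)))
# resnet18_to_rvc4_2024_11_12_15_11_50
```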
### Usage
You can run the built image either manually using the `docker run` command or using the `modelconverter` CLI.
1. Set your credentials as environment variables (if required):

   ```bash
   export AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
   export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
   export AWS_S3_ENDPOINT_URL=<your_aws_s3_endpoint_url>
   ```

1. If the `shared_with_container` directory doesn't exist on your host, create it.

1. If you are not using remote files, place the model, config, and calibration data in the respective directories (refer to [Sharing Files](#sharing-files)).

1. Execute the conversion:

   - If using the `docker run` command:

     ```bash
     docker run --rm -it \
       -v $(pwd)/shared_with_container:/app/shared_with_container/ \
       -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
       -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
       -e AWS_S3_ENDPOINT_URL=$AWS_S3_ENDPOINT_URL \
       luxonis/modelconverter-<package>:latest \
       convert <target> \
       --path <s3_url_or_path> [ config overrides ]
     ```

   - If using the `modelconverter` CLI:

     ```bash
     modelconverter convert <target> --path <s3_url_or_path> [ config overrides ]
     ```

   - If using `docker-compose`:

     ```bash
     docker compose run <target> convert <target> ...
     ```
### Examples
Use `resnet18.yaml` config, but override `calibration.path`:
```bash
modelconverter convert rvc4 --path configs/resnet18.yaml \
  calibration.path s3://path/to/calibration_data
```
Override inputs and outputs with command line arguments:
```bash
modelconverter convert rvc3 --path configs/resnet18.yaml \
  inputs.0.name input_1 \
  inputs.0.shape "[1,3,256,256]" \
  outputs.0.name output_0
```
Specify all options via the command line without a config file:
```bash
modelconverter convert rvc2 input_model models/yolov6n.onnx \
  scale_values "[255,255,255]" \
  inputs.0.encoding.from RGB \
  inputs.0.encoding.to BGR \
  shape "[1,3,256,256]" \
  outputs.0.name out_0 \
  outputs.1.name out_1 \
  outputs.2.name out_2
```
> [!WARNING]
> If you modify the default stage names (`stages.stage_name`) in the configuration file (`config.yaml`), you need to provide the full path to each stage in the command-line arguments. For instance, if a stage name is changed to `stage1`, use `stages.stage1.inputs.0.name` instead of `inputs.0.name`.
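The dotted override syntax used in these examples can be sketched in a few lines of Python. This mirrors only the documented syntax, not the converter's actual implementation, and the config contents are illustrative:

```python
from typing import Any


def apply_override(config: dict, dotted_key: str, value: Any) -> None:
    # Sketch of the dotted `key value` override syntax: integer path
    # segments index into lists, everything else is a dict key.
    *path, last = dotted_key.split(".")
    node: Any = config
    for part in path:
        node = node[int(part)] if part.isdigit() else node.setdefault(part, {})
    node[int(last) if last.isdigit() else last] = value


cfg = {"inputs": [{"name": "images", "shape": [1, 3, 224, 224]}]}
apply_override(cfg, "inputs.0.name", "input_1")
apply_override(cfg, "calibration.path", "s3://bucket/calib")
print(cfg["inputs"][0]["name"])      # input_1
print(cfg["calibration"]["path"])    # s3://bucket/calib
```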
## Multi-Stage Conversion
The converter supports multi-stage conversion, i.e. converting multiple models where the output of one model is the input to another. For multi-stage conversion you must specify the `stages` section in the config file; see [defaults.yaml](shared_with_container/configs/defaults.yaml) and [multistage.yaml](shared_with_container/configs/multistage.yaml) for reference.
The output directory structure would be (assuming RVC4 conversion):
```txt
output_path/
├── config.yaml
├── modelconverter.log
├── stage_name1
│   ├── config.yaml
│   ├── intermediate_outputs/
│   ├── model1.onnx
│   └── model1.dlc
└── stage_name2
    ├── config.yaml
    ├── intermediate_outputs/
    ├── model2.onnx
    └── model2.dlc
```
## Interactive Mode
Run the container interactively without any post-target arguments:
```bash
modelconverter shell rvc4
```
Inside, you'll find all the necessary tools for manual conversion.
The `modelconverter` CLI is available inside the container as well.
## Calibration Data
Calibration data can be a mix of images (`.jpg`, `.png`, `.jpeg`) and `.npy` or `.raw` files.
Image files will be loaded and converted to the format specified in the config.
No conversion is performed for `.npy` or `.raw` files; they are used as provided.
**NOTE for RVC4**: `RVC4` expects images in `NHWC` layout. If you provide calibration data as `.npy` or `.raw` files, make sure they already have the correct layout.
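As a sketch of preparing an `NHWC` `.raw` calibration file from an `NCHW` array (the file name and shapes are illustrative, not required by the tool):

```python
import numpy as np

# Hypothetical example: an image batch in NCHW layout, e.g. exported
# from a training pipeline.
img_nchw = np.zeros((1, 3, 256, 256), dtype=np.float32)

# Permute to the NHWC layout expected by RVC4 before saving.
img_nhwc = np.transpose(img_nchw, (0, 2, 3, 1))
assert img_nhwc.shape == (1, 256, 256, 3)

# .raw files are used as provided, so the layout must already be
# correct at this point.
img_nhwc.tofile("calib_0.raw")  # illustrative file name
```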
## Inference
Basic inference support is available. To run inference, use `modelconverter infer <target> <args>`.
For usage instructions, see `modelconverter infer --help`.
The input files must be provided in a specific directory structure.
```txt
input_path/
├── <name of first input node>
│   ├── 0.npy
│   ├── 1.npy
│   └── ...
├── <name of second input node>
│   ├── 0.npy
│   ├── 1.npy
│   └── ...
├── ...
└── <name of last input node>
    ├── 0.npy
    ├── 1.npy
    └── ...
```
**Note**: The numpy files are sent to the model with no preprocessing, so they must be provided in the correct format and shape.
The output files are then saved in a similar structure.
### Inference Example
For `yolov6n` model, the input directory structure would be:
```txt
input_path/
└── images
    ├── 0.npy
    ├── 1.npy
    └── ...
```
To run the inference, use:
```bash
modelconverter infer rvc4 \
  --model_path <path_to_model.dlc> \
  --output-dir <output_dir_name> \
  --input_path <input_path> \
  --path <path_to_config.yaml>
```
The output directory structure would be:
```txt
output_path/
├── output1_yolov6r2
│   ├── 0.npy
│   ├── 1.npy
│   └── ...
├── output2_yolov6r2
│   └── <outputs>
└── output3_yolov6r2
    └── <outputs>
```
## Benchmarking
ModelConverter also supports benchmarking of converted models.
To install the package with the benchmarking dependencies, use:
```bash
pip install "modelconv[bench]"
```
To run the benchmark, use `modelconverter benchmark <target> <args>`.
For usage instructions, see `modelconverter benchmark --help`.
**Example:**
```bash
modelconverter benchmark rvc3 --model-path <path_to_model.xml>
```
The command prints a table with the benchmark results to the console and
optionally saves the results to a `.csv` file.
Raw data
{
"_id": null,
"home_page": null,
"name": "modelconv",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": "Luxonis <support@luxonis.com>",
"keywords": "ml, onnx, openvino, nn, ai, embedded",
"author": null,
"author_email": "Luxonis <support@luxonis.com>",
"download_url": "https://files.pythonhosted.org/packages/ea/0b/97f6c6a4f81580b2a9d34e629e6eb357fce595ea85407c5577aab54d34b1/modelconv-0.3.1.tar.gz",
"platform": null,
"description": "# ModelConverter - Compilation Library\n\n[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![PyPI](https://img.shields.io/pypi/v/modelconv?label=pypi%20package)](https://pypi.org/project/modelconv/)\n[![PyPI - Downloads](https://img.shields.io/pypi/dm/modelconv)](https://pypi.org/project/modelconv/)\n\n[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)\n[![Docformatter](https://img.shields.io/badge/%20formatter-docformatter-fedcba.svg)](https://github.com/PyCQA/docformatter)\n[![Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n\nConvert your **ONNX** models to a format compatible with any generation of Luxonis camera using the **Model Compilation Library**.\n\n## Status\n\n| Package | Test | Deploy |\n| --------- | ----------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- |\n| **RVC2** | ![RVC2 Tests](https://github.com/luxonis/modelconverter/actions/workflows/rvc2_test.yaml/badge.svg) | ![RVC2 Push](https://github.com/luxonis/modelconverter/actions/workflows/rvc2_publish.yaml/badge.svg) |\n| **RVC3** | ![RVC3 Tests](https://github.com/luxonis/modelconverter/actions/workflows/rvc3_test.yaml/badge.svg) | ![RVC3 Push](https://github.com/luxonis/modelconverter/actions/workflows/rvc3_publish.yaml/badge.svg) |\n| **RVC4** | ![RVC4 Tests](https://github.com/luxonis/modelconverter/actions/workflows/rvc4_test.yaml/badge.svg) | ![RVC4 Push](https://github.com/luxonis/modelconverter/actions/workflows/rvc4_publish.yaml/badge.svg) |\n| **Hailo** | ![Hailo Tests](https://github.com/luxonis/modelconverter/actions/workflows/hailo_test.yaml/badge.svg) | ![Hailo 
Push](https://github.com/luxonis/modelconverter/actions/workflows/hailo_publish.yaml/badge.svg) |\n\n## Table of Contents\n\n- [ModelConverter - Compilation Library](#modelconverter---compilation-library)\n - [Status](#status)\n - [Table of Contents](#table-of-contents)\n - [Installation](#installation)\n - [System Requirements](#system-requirements)\n - [Before You Begin](#before-you-begin)\n - [Instructions](#instructions)\n - [GPU Support](#gpu-support)\n - [Running ModelConverter](#running-modelconverter)\n - [Encoding Configuration Flags](#encoding-configuration-flags)\n - [YAML Configuration File](#yaml-configuration-file)\n - [NN Archive Configuration File](#nn-archive-configuration-file)\n - [Sharing Files](#sharing-files)\n - [Usage](#usage)\n - [Examples](#examples)\n - [Multi-Stage Conversion](#multi-stage-conversion)\n - [Interactive Mode](#interactive-mode)\n - [Calibration Data](#calibration-data)\n - [Inference](#inference)\n - [Inference Example](#inference-example)\n - [Benchmarking](#benchmarking)\n\n## Installation\n\n### System Requirements\n\n`ModelConverter` requires `docker` to be installed on your system.\nIt is recommended to use Ubuntu OS for the best compatibility.\nOn Windows or MacOS, it is recommended to install `docker` using the [Docker Desktop](https://www.docker.com/products/docker-desktop).\nOtherwise follow the installation instructions for your OS from the [official website](https://docs.docker.com/engine/install/).\n\n### Before You Begin\n\n`ModelConverter` is in an experimental public beta stage. 
Some parts might change in the future.\n\nTo build the images, you need to download additional packages depending on the selected target and the desired version of the underlying conversion tools.\n\n**RVC2**\n\nRequires `openvino-<version>.tar.gz` to be present in `docker/extra_packages/`.\n\n- Version `2023.2.0` archive can be downloaded from [here](https://drive.google.com/file/d/1IXtYi1Mwpsg3pr5cDXlEHdSUZlwJRTVP/view?usp=share_link).\n\n- Version `2021.4.0` archive can be downloaded from [here](https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.4/l_openvino_toolkit_dev_ubuntu20_p_2021.4.582.tgz)\n\nYou only need to rename the archive to either `openvino-2023.2.0.tar.gz` or `openvino-2021.4.0.tar.gz` and place it in the `docker/extra_packages` directory.\n\n**RVC3**\n\nOnly the version `2023.2.0` of `OpenVino` is supported for `RVC3`. Follow the instructions for `RVC2` to use the correct archive.\n\n**RVC4**\n\nRequires `snpe-<version>.zip` archive to be present in `docker/extra_packages`. You can download version `2.23.0` from [here](https://softwarecenter.qualcomm.com/api/download/software/qualcomm_neural_processing_sdk/v2.23.0.24.06.24.zip). You only need to rename it to `snpe-2.23.0.zip` and place it in the `docker/extra_packages` directory.\n\n**HAILO**\n\nRequires `hailo_ai_sw_suite_<version>:1` docker image to be present on the system. You can obtain the image by following the instructions on [Hailo website](https://developer.hailo.ai/developer-zone/sw-downloads/).\n\nAfter you obtain the image, you need to rename it to `hailo_ai_sw_suite_<version>:1` using `docker tag <old_name> hailo_ai_sw_suite_<version>:1`.\n\nFurthermore, you need to use the `docker/hailo/Dockerfile.public` file to build the image. The `docker/hailo/Dockerfile` is for internal use only.\n\n### Instructions\n\n1. Build the docker image:\n\n ```bash\n docker build -f docker/<package>/Dockerfile -t luxonis/modelconverter-<package>:latest .\n ```\n\n1. 
For easier use, you can install the ModelConverter CLI. You can install it from PyPI using the following command:\n\n ```bash\n pip install modelconv\n ```\n\n For usage instructions, see `modelconverter --help`.\n\n### GPU Support\n\nTo enable GPU acceleration for `hailo` conversion, install the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker).\n\n## Running ModelConverter\n\nThere are two main ways to execute configure the conversion process:\n\n1. **YAML Config File (Primary Method)**:\n The primary way to configure the conversion is through a YAML configuration file. For reference, you can check [defaults.yaml](shared_with_container/configs/defaults.yaml) and other examples located in the [shared_with_container/configs](shared_with_container/configs) directory.\n1. **NN Archive**:\n Alternatively, you can use an [NN Archive](https://rvc4.docs.luxonis.com/software/ai-inference/nn-archive/#NN%20Archive) as input. An NN Archive includes a model in one of the supported formats\u2014ONNX (.onnx), OpenVINO IR (.xml and .bin), or TensorFlow Lite (.tflite)\u2014alongside a `config.json` file. The config.json file follows a specific configuration format as described in the [NN Archive Configuration Guide](https://rvc4.docs.luxonis.com/software/ai-inference/nn-archive/#NN%20Archive-Configuration).\n\n**Modifying Settings with Command-Line Arguments**:\nIn addition to these two configuration methods, you have the flexibility to override specific settings directly via command-line arguments. By supplying `key-value` pairs in the CLI, you can adjust particular settings without explicitly altering the config files (YAML or NN Archive). For further details, refer to the [Examples](#examples) section.\n\n### Encoding Configuration Flags\n\nIn the conversion process, you have options to control the color encoding format in both the YAML configuration file and the NN Archive configuration. 
Here\u2019s a breakdown of each available flag:\n\n#### YAML Configuration File\n\nThe `encoding` flag in the YAML configuration file allows you to specify color encoding as follows:\n\n- **Single-Value `encoding`**:\n Setting encoding to a single value, such as *\"RGB\"*, *\"BGR\"*, *\"GRAY\"*, or *\"NONE\"*, will automatically apply this setting to both `encoding.from` and `encoding.to`. For example, `encoding: RGB` sets both `encoding.from` and `encoding.to` to *\"RGB\"* internally.\n- **Multi-Value `encoding.from` and `encoding.to`**:\n Alternatively, you can explicitly set `encoding.from` and `encoding.to` to different values. For example:\n ```yaml\n encoding:\n from: RGB\n to: BGR\n ```\n This configuration specifies that the input data is in RGB format and will be converted to BGR format during processing.\n\n> [!NOTE]\n> If the encoding is not specified in the YAML configuration, the default values are set to `encoding.from=RGB` and `encoding.to=BGR`.\n\n> [!NOTE]\n> Certain options can be set **globally**, applying to all inputs of the model, or **per input**. If specified per input, these settings will override the global configuration for that input alone. The options that support this flexibility include `scale_values`, `mean_values`, `encoding`, `data_type`, `shape`, and `layout`.\n\n#### NN Archive Configuration File\n\nIn the NN Archive configuration, there are two flags related to color encoding control:\n\n- **`dai_type`**:\n Provides a more comprehensive control over the input type compatible with the DAI backend. It is read by DepthAI to automatically configure the processing pipeline, including any necessary modifications to the input image format.\n- **`reverse_channels` (Deprecated)**:\n Determines the input color format of the model: when set to *True*, the input is considered to be *\"RGB\"*, and when set to *False*, it is treated as *\"BGR\"*. 
This flag is deprecated and will be replaced by the `dai_type` flag in future versions.\n\n> [!NOTE]\n> If neither `dai_type` nor `reverse_channels` the input to the model is considered to be *\"RGB\"*.\n\n> [!NOTE]\n> If both `dai_type` and `reverse_channels` are provided, the converter will give priority to `dai_type`.\n\n> [!IMPORTANT]\n> Provide mean/scale values in the original color format used during model training (e.g., RGB or BGR). Any necessary channel permutation is handled internally\u2014do not reorder values manually.\n\n### Sharing Files\n\nWhen using the supplied `docker-compose.yaml`, the `shared_with_container` directory facilitates file sharing between the host and container. This directory is mounted as `/app/shared_with_container/` inside the container. You can place your models, calibration data, and config files here. The directory structure is:\n\n```txt\nshared_with_container/\n\u2502\n\u251c\u2500\u2500 calibration_data/\n\u2502 \u2514\u2500\u2500 <calibration data will be downloaded here>\n\u2502\n\u251c\u2500\u2500 configs/\n\u2502 \u251c\u2500\u2500 resnet18.yaml\n\u2502 \u2514\u2500\u2500 <configs will be downloaded here>\n\u2502\n\u251c\u2500\u2500 models/\n\u2502 \u251c\u2500\u2500 resnet18.onnx\n\u2502 \u2514\u2500\u2500 <models will be downloaded here>\n\u2502\n\u2514\u2500\u2500 outputs/\n \u2514\u2500\u2500 <output_dir>\n \u251c\u2500\u2500 resnet18.onnx\n \u251c\u2500\u2500 resnet18.dlc\n \u251c\u2500\u2500 logs.txt\n \u251c\u2500\u2500 config.yaml\n \u2514\u2500\u2500 intermediate_outputs/\n \u2514\u2500\u2500 <intermediate files generated during the conversion>\n```\n\nWhile adhering to this structure is not mandatory as long as the files are visible inside the container, it is advised to keep the files organized.\n\nThe converter first searches for files exactly at the provided path. 
If not found, it searches relative to `/app/shared_with_container/`.

The `output_dir` can be specified using the `--output-dir` CLI argument. If such a directory already exists, the `output_dir_name` will be appended with the current date and time. If not specified, the `output_dir_name` will be autogenerated in the following format: `<model_name>_to_<target>_<date>_<time>`.

### Usage

You can run the built image either manually using the `docker run` command or using the `modelconverter` CLI.

1. Set your credentials as environment variables (if required):

   ```bash
   export AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
   export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
   export AWS_S3_ENDPOINT_URL=<your_aws_s3_endpoint_url>
   ```

1. If the `shared_with_container` directory doesn't exist on your host, create it.

1. If you are not using remote files, place the model, config, and calibration data in the respective directories (refer to [Sharing Files](#sharing-files)).

1. Execute the conversion:

- If using the `docker run` command:

  ```bash
  docker run --rm -it \
    -v $(pwd)/shared_with_container:/app/shared_with_container/ \
    -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
    -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
    -e AWS_S3_ENDPOINT_URL=$AWS_S3_ENDPOINT_URL \
    luxonis/modelconverter-<package>:latest \
    convert <target> \
    --path <s3_url_or_path> [ config overrides ]
  ```

- If using the `modelconverter` CLI:

  ```bash
  modelconverter convert <target> --path <s3_url_or_path> [ config overrides ]
  ```

- If using `docker-compose`:

  ```bash
  docker compose run <target> convert <target> ...
  ```

### Examples

Use the `resnet18.yaml` config, but override `calibration.path`:

```bash
modelconverter convert rvc4 --path configs/resnet18.yaml \
  calibration.path s3://path/to/calibration_data
```

Override inputs and outputs with command-line arguments:

```bash
modelconverter convert rvc3 --path configs/resnet18.yaml \
  inputs.0.name input_1 \
  inputs.0.shape "[1,3,256,256]" \
  outputs.0.name output_0
```

Specify all options via the command line without a config file:

```bash
modelconverter convert rvc2 input_model models/yolov6n.onnx \
  scale_values "[255,255,255]" \
  inputs.0.encoding.from RGB \
  inputs.0.encoding.to BGR \
  shape "[1,3,256,256]" \
  outputs.0.name out_0 \
  outputs.1.name out_1 \
  outputs.2.name out_2
```

> [!WARNING]
> If you modify the default stage names (`stages.stage_name`) in the configuration file (`config.yaml`), you need to provide the full path to each stage in the command-line arguments. For instance, if a stage name is changed to `stage1`, use `stages.stage1.inputs.0.name` instead of `inputs.0.name`.

## Multi-Stage Conversion

The converter supports multi-stage conversion: converting multiple models where the output of one model is the input to another. For multi-stage conversion, you must specify the `stages` section in the config file; see [defaults.yaml](shared_with_container/configs/defaults.yaml) and [multistage.yaml](shared_with_container/configs/multistage.yaml) for reference.

The output directory structure would be (assuming RVC4 conversion):

```txt
output_path/
├── config.yaml
├── modelconverter.log
├── stage_name1
│   ├── config.yaml
│   ├── intermediate_outputs/
│   ├── model1.onnx
│   └── model1.dlc
└── stage_name2
    ├── config.yaml
    ├── intermediate_outputs/
    ├── model2.onnx
    └── model2.dlc
```

## Interactive Mode

Run the container interactively without any post-target arguments:

```bash
modelconverter shell rvc4
```

Inside, you'll find all the necessary tools for manual conversion. The `modelconverter` CLI is available inside the container as well.

## Calibration Data

Calibration data can be a mix of images (`.jpg`, `.png`, `.jpeg`) and `.npy` or `.raw` files. Image files are loaded and converted to the format specified in the config. No conversion is performed for `.npy` or `.raw` files; they are used as provided.

**NOTE for RVC4**: `RVC4` expects images to be provided in `NHWC` layout. If you provide the calibration data as `.npy` or `.raw` files, you need to make sure they have the correct layout.

## Inference

Basic inference support is provided. To run inference, use `modelconverter infer <target> <args>`. For usage instructions, see `modelconverter infer --help`.

The input files must be provided in a specific directory structure:

```txt
input_path/
├── <name of first input node>
│   ├── 0.npy
│   ├── 1.npy
│   └── ...
├── <name of second input node>
│   ├── 0.npy
│   ├── 1.npy
│   └── ...
├── ...
└── <name of last input node>
    ├── 0.npy
    ├── 1.npy
    └── ...
```

**Note**: The numpy files are sent to the model with no preprocessing, so they must be provided in the correct format and shape.

The output files are then saved in a similar structure.

### Inference Example

For the `yolov6n` model, the input directory structure would be:

```txt
input_path/
└── images
    ├── 0.npy
    ├── 1.npy
    └── ...
```

To run the inference, use:

```bash
modelconverter infer rvc4 \
  --model_path <path_to_model.dlc> \
  --output-dir <output_dir_name> \
  --input_path <input_path> \
  --path <path_to_config.yaml>
```

The output directory structure would be:

```txt
output_path/
├── output1_yolov6r2
│   ├── 0.npy
│   ├── 1.npy
│   └── ...
├── output2_yolov6r2
│   └── <outputs>
└── output3_yolov6r2
    └── <outputs>
```

## Benchmarking

ModelConverter also supports benchmarking of converted models.

To install the package with the benchmarking dependencies, use:

```bash
pip install modelconv[bench]
```

To run the benchmark, use `modelconverter benchmark <target> <args>`. For usage instructions, see `modelconverter benchmark --help`.

**Example:**

```bash
modelconverter benchmark rvc3 --model-path <path_to_model.xml>
```

The command prints a table with the benchmark results to the console and optionally saves the results to a `.csv` file.
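For multi-stage conversion, a two-stage config might be sketched as follows, using only the option names that appear in the override examples (`input_model`, `calibration.path`, `inputs.0.name`, `inputs.0.shape`, `outputs.0.name`). The model filenames are hypothetical and the exact nesting is an assumption; the authoritative schema is [defaults.yaml](shared_with_container/configs/defaults.yaml) and [multistage.yaml](shared_with_container/configs/multistage.yaml):

```yaml
# Hypothetical two-stage config sketch; verify the field nesting against
# shared_with_container/configs/multistage.yaml before use.
stages:
  stage1:
    input_model: shared_with_container/models/detector.onnx
    calibration:
      path: shared_with_container/calibration_data/
    inputs:
      - name: input_1
        shape: [1, 3, 256, 256]
  stage2:
    input_model: shared_with_container/models/classifier.onnx
    outputs:
      - name: output_0
```

With custom stage names like these, CLI overrides must use the full path, e.g. `stages.stage1.inputs.0.shape "[1,3,256,256]"`, as noted in the warning above.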
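## Appendix: Example Snippets

The `docker run` invocation from the Usage section can be wrapped in a small script. The sketch below is not part of the project; `TARGET` and `MODEL_PATH` are made-up example values, and the script only prints the composed command (a dry run) so it can be inspected before executing:

```shell
#!/usr/bin/env sh
# Sketch: compose the documented `docker run` conversion command.
# TARGET and MODEL_PATH are hypothetical example values; adjust for your setup.
set -eu

TARGET="rvc4"
MODEL_PATH="shared_with_container/configs/resnet18.yaml"

# Step 2 of the usage instructions: ensure the shared directory exists.
mkdir -p shared_with_container

# Passing `-e VAR` without a value forwards the variable from the host
# environment, so unset credentials do not trip `set -u`.
CMD="docker run --rm -it \
  -v $(pwd)/shared_with_container:/app/shared_with_container/ \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_S3_ENDPOINT_URL \
  luxonis/modelconverter-${TARGET}:latest \
  convert ${TARGET} --path ${MODEL_PATH}"

# Dry run: print the command instead of executing it.
echo "$CMD"
```

Replace the final `echo "$CMD"` with `eval "$CMD"` (or run the printed command directly) once it looks right.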
"bugtrack_url": null,
"license": null,
"summary": "Converter for neural models into various formats.",
"version": "0.3.1",
"project_urls": {
"issues": "https://github.com/luxonis/modelconverter/issues",
"repository": "https://github.com/luxonis/modelconverter"
},
"split_keywords": [
"ml",
" onnx",
" openvino",
" nn",
" ai",
" embedded"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "4d48656b9a1dd27352b4552dea9b1319382a231dc5e8b7fe6bc15b73254d40eb",
"md5": "a53f7f74a1723be3e995df69114eee02",
"sha256": "556fe947fab13502ccb741523ca8baa07882c914a6d4305e2c4149de9441121c"
},
"downloads": -1,
"filename": "modelconv-0.3.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "a53f7f74a1723be3e995df69114eee02",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 86884,
"upload_time": "2024-11-12T15:11:49",
"upload_time_iso_8601": "2024-11-12T15:11:49.451378Z",
"url": "https://files.pythonhosted.org/packages/4d/48/656b9a1dd27352b4552dea9b1319382a231dc5e8b7fe6bc15b73254d40eb/modelconv-0.3.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "ea0b97f6c6a4f81580b2a9d34e629e6eb357fce595ea85407c5577aab54d34b1",
"md5": "eea9594ed5e8482a3752ac9416da3dbf",
"sha256": "3137abd278e2425485bff376b4d0af1a419a3ceb770d9cfff6a86ed2f51a3093"
},
"downloads": -1,
"filename": "modelconv-0.3.1.tar.gz",
"has_sig": false,
"md5_digest": "eea9594ed5e8482a3752ac9416da3dbf",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 68345,
"upload_time": "2024-11-12T15:11:50",
"upload_time_iso_8601": "2024-11-12T15:11:50.617732Z",
"url": "https://files.pythonhosted.org/packages/ea/0b/97f6c6a4f81580b2a9d34e629e6eb357fce595ea85407c5577aab54d34b1/modelconv-0.3.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-12 15:11:50",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "luxonis",
"github_project": "modelconverter",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "modelconv"
}