# Exporting YOLO models to ONNX with embedded pre and post processing
This repository contains the code to export YOLO models to ONNX format, using ONNX Runtime Extensions to **add pre- and post-processing** to the exported model.
Supported models (usage examples in the `examples` folder):
* [X] YOLOv8 Classification
* [X] YOLOv8 Object Detection
* [X] YOLOv8 Segmentation
  * [ ] Only processing of the resulting box coordinates is covered; segmentation polygons are not supported yet
## Python Installation
### YOLO2ONNX Extended package
Create a Python environment and install from the wheel package file:
```
pip install yolo2onnx_extended-0.0.1-py3-none-any.whl
```
### Build from source
Clone this repo and install the main requirements:
* [PyTorch](https://pytorch.org/get-started/locally/): `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`
* [Ultralytics](https://docs.ultralytics.com/quickstart/): `pip install ultralytics`
* [ONNX Runtime](https://onnxruntime.ai/docs/install/):
* CPU: `pip install onnxruntime`
* GPU (CUDA 11.8): `pip install onnxruntime-gpu`
* [ONNX Runtime Extensions](https://onnxruntime.ai/docs/extensions/): `pip install onnxruntime-extensions`
## Using the exported model on other platforms (C/C#/C++/JavaScript/Android/iOS)
The ONNX Runtime packages must be installed; check the supported versions for the platform you are using.
* ONNX Runtime installations for other platforms can be found in the [documentation](https://onnxruntime.ai/docs/install/).
* ONNX Runtime Extensions installations can be found in the [documentation](https://onnxruntime.ai/docs/extensions/).
**The [inference install table for all languages](https://onnxruntime.ai/docs/install/) lists the available packages. Be aware of the supported versions of the extensions.**
## Useful resources and Ideas
* [API - Python API documentation (onnxruntime.ai)](https://onnxruntime.ai/docs/api/python/api_summary.html)
* CUDA Optimization: [NVIDIA - CUDA | onnxruntime](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html)
* Model Quantization with ONNX
* ONNX Model Visualizer: [Netron](https://netron.app/)
* Segmentation post-processing: [ONNX-YOLOv8-Instance-Segmentation](https://github.com/ibaiGorordo/ONNX-YOLOv8-Instance-Segmentation/)
* [Python Packaging](https://packaging.python.org/en/latest/tutorials/packaging-projects/)
## Inference Benchmarks
* **CPU** (Intel(R) Core(TM) i7-10850H @ 2.70 GHz, 6 cores / 12 threads):
  * Object Detection: `0.35 secs` per image
* **GPU** (NVIDIA Quadro T2000 with Max-Q Design):
  * Object Detection: `4 - 5 secs` for the first image, `0.068 secs` for subsequent images
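The large gap between the first and subsequent GPU inferences comes from one-time CUDA initialization and warm-up, so benchmarks should report the first run separately. A small timing harness (pure Python, with a stand-in workload in place of the real `session.run` call) might look like:

```python
import time

def benchmark(infer, n_runs=10):
    """Time n_runs calls to `infer`; return (first-run latency, steady-state average)."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer()
        times.append(time.perf_counter() - start)
    warm = times[1:]  # discard the first (warm-up) run
    return times[0], sum(warm) / len(warm)

# Stand-in workload; replace the lambda with the actual inference call.
first, steady = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"first: {first:.4f}s, steady-state: {steady:.4f}s")
```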