| Field | Value |
|---|---|
| Name | openvino-model-api |
| Version | 0.2.5 |
| Summary | Model API: model wrappers and pipelines for inference with OpenVINO |
| Home page | None |
| Author | Intel(R) Corporation |
| Maintainer | Intel(R) Corporation |
| Requires Python | >=3.9 |
| License | None |
| Upload time | 2024-10-22 09:55:07 |
| Requirements | No requirements were recorded. |
# Python* Model API package
The Model API package is a set of wrapper classes for particular tasks and model architectures. It simplifies data preprocessing and postprocessing as well as routine procedures (model loading, asynchronous execution, and so on).
An application feeds the model class with input data, and the model returns postprocessed output data in a user-friendly format.
## Package structure
The Model API consists of 3 libraries (a minimal import sketch follows this list):
* _adapters_ implements a common interface that allows Model API wrappers to be used with different executors. See the [Model API Adapters](#model-api-adapters) section
* _models_ implements wrappers for Open Model Zoo models. See the [Model API Wrappers](#model-api-wrappers) section
* _pipelines_ implements pipelines for model inference and manages synchronous/asynchronous execution. See the [Model API Pipelines](#model-api-pipelines) section
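As a rough orientation, the imports below show where each piece lives. This is a sketch, not part of the original README: the `model_api.adapters` and `model_api.models` imports match the example at the end of this document, while the `model_api.pipelines` import is an assumption based on the package structure described above.
```python
# Sketch only: where the three sub-packages live (the pipelines import is assumed).
from model_api.adapters import OpenvinoAdapter, create_core  # executor adapters
from model_api.models import SSD                             # task/model wrappers
from model_api.pipelines import AsyncPipeline                # execution pipelines
```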
### Prerequisites
The package requires:
- one of the Python versions supported by OpenVINO (see the OpenVINO documentation for details)
- the OpenVINO™ toolkit
If you build the Model API package from source, you should install the OpenVINO™ toolkit. There are two options:
Use the installation package for the [Intel® Distribution of OpenVINO™ toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
Alternatively, you can install the OpenVINO Python\* package via pip (a quick import check follows the command):
```sh
pip install openvino
```
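A quick way to confirm that the runtime is importable (a minimal sketch; `get_version()` has been part of `openvino.runtime` since the 2022.1 release):
```python
# Print the installed OpenVINO runtime version to confirm the installation.
from openvino.runtime import get_version

print(get_version())
```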
## Installing Python* Model API package
Use the following command to install Model API from source:
```sh
pip install <omz_dir>/demos/common/python
```
Alternatively, you can build and install the package as a wheel. Follow the steps below:
1. Build the wheel.
```sh
python <omz_dir>/demos/common/python/setup.py bdist_wheel
```
The wheel should appear in the `dist` folder.
Name example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`
2. Install the package in a clean environment with the `--force-reinstall` flag.
```sh
pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
```
To verify that the package is installed, you can use the following command:
```sh
python -c "from openvino.model_zoo import model_api"
```
## Model API Wrappers
The Model API package provides model wrappers, which implement standardized preprocessing/postprocessing functions per "task type" and encapsulate model-specific logic, so that different models can be used in a unified manner inside the application.
The following tasks can be solved with the provided wrappers (a construction sketch follows the table):
| Task type | Model API wrappers |
|----------------------------|--------------------|
| Background Matting | <ul><li>`VideoBackgroundMatting`</li><li>`ImageMattingWithBackground`</li><li>`PortraitBackgroundMatting`</li></ul> |
| Classification | <ul><li>`ClassificationModel`</li></ul> |
| Deblurring | <ul><li>`Deblurring`</li></ul> |
| Human Pose Estimation | <ul><li>`HpeAssociativeEmbedding`</li><li>`OpenPose`</li></ul> |
| Instance Segmentation | <ul><li>`MaskRCNNModel`</li><li>`YolactModel`</li></ul> |
| Monocular Depth Estimation | <ul><li> `MonoDepthModel`</li></ul> |
| Named Entity Recognition | <ul><li>`BertNamedEntityRecognition`</li></ul> |
| Object Detection | <ul><li>`CenterNet`</li><li>`DETR`</li><li>`CTPN`</li><li>`FaceBoxes`</li><li>`NanoDet`</li><li>`NanoDetPlus`</li><li>`RetinaFace`</li><li>`RetinaFacePyTorch`</li><li>`SSD`</li><li>`UltraLightweightFaceDetection`</li><li>`YOLO`</li><li>`YoloV3ONNX`</li><li>`YoloV4`</li><li>`YOLOF`</li><li>`YOLOX`</li></ul> |
| Question Answering | <ul><li>`BertQuestionAnswering`</li></ul> |
| Salient Object Detection | <ul><li>`SalientObjectDetectionModel`</li></ul> |
| Semantic Segmentation | <ul><li>`SegmentationModel`</li></ul> |
| Action Classification | <ul><li>`ActionClassificationModel`</li></ul> |
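All wrappers are constructed in the same way as the `SSD` wrapper in the example at the end of this README: an inference adapter, optional configuration, and an optional `preload` flag. The snippet below is a sketch under that assumption, using `ClassificationModel` with a hypothetical IR file path.
```python
# Sketch: assumes ClassificationModel follows the same constructor pattern
# as the SSD example at the end of this README; the model path is hypothetical.
import cv2
from model_api.adapters import OpenvinoAdapter, create_core
from model_api.models import ClassificationModel

adapter = OpenvinoAdapter(create_core(), "my_classifier.xml", device="CPU")
classifier = ClassificationModel(adapter, preload=True)

image = cv2.imread("sample.png")
predictions = classifier(image)  # preprocessing, inference, and postprocessing in one call
```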
## Model API Adapters
Model API wrappers are executor-agnostic: they do not implement model loading or inference themselves. Instead, they can be used with different executors, each of which implements the common interface methods in its adapter class.
Currently, `OpenvinoAdapter`, `OVMSAdapter`, and `ONNXRuntimeAdapter` are available.
#### OpenVINO Adapter
`OpenvinoAdapter` hides the OpenVINO™ toolkit API and allows Model API wrappers to run models represented in the Intermediate Representation (IR) format.
It accepts a path to either an `xml` or an `onnx` model file.
#### OpenVINO Model Server Adapter
`OVMSAdapter` hides the OpenVINO Model Server Python client API and allows Model API wrappers to run models served by OVMS.
Refer to __[`OVMSAdapter`](adapters/ovms_adapter.md)__ to learn about running demos with OVMS.
To use the OpenVINO Model Server adapter, install the package with the extra module (a usage sketch follows the command):
```sh
pip install <omz_dir>/demos/common/python[ovms]
```
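A minimal sketch of plugging a served model into a wrapper. The `<host>:<port>/models/<model_name>` address format and the single-argument `OVMSAdapter` constructor shown here follow the linked OVMSAdapter documentation and should be treated as assumptions for this exact release.
```python
# Sketch: assumes OVMS is already serving a detection model named "ssd"
# on localhost:9000; the target address format is an assumption.
from model_api.adapters import OVMSAdapter
from model_api.models import SSD

adapter = OVMSAdapter("localhost:9000/models/ssd")
ssd_model = SSD(adapter, preload=True)
```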
#### ONNXRuntime Adapter
`ONNXRuntimeAdapter` hides ONNX Runtime and allows Model API wrappers to run models represented in the ONNX format.
It accepts a path to an `onnx` file. This adapter's functionality is limited: it does not support model reshaping or asynchronous inference and has been tested only on a limited set of models. Supported model wrappers: `SSD`, `MaskRCNNModel`, `SegmentationModel`, and `ClassificationModel`.
To use this adapter, install the extra dependencies (a usage sketch follows the command):
```sh
pip install onnx onnxruntime
```
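A sketch in the same spirit; apart from taking a path to an `onnx` file, the exact `ONNXRuntimeAdapter` constructor arguments are an assumption here, and the model file name is hypothetical.
```python
# Sketch: run an ONNX model through one of the supported wrappers.
# "detector.onnx" is a hypothetical model file.
import cv2
from model_api.adapters import ONNXRuntimeAdapter
from model_api.models import SSD

adapter = ONNXRuntimeAdapter("detector.onnx")
model = SSD(adapter, preload=True)
detections = model(cv2.imread("sample.png"))
```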
## Model API Pipelines
Model API pipelines are high-level wrappers that manage input data submission and access to model results.
They submit data for inference, check whether the inference result is ready, and retrieve the results once it is.
The `AsyncPipeline` is available; it handles the asynchronous execution of a single model (a rough usage sketch follows).
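A rough sketch of the asynchronous flow. The `submit_data`, `await_all`, and `get_result` method names follow the Open Model Zoo demo usage of `AsyncPipeline` and are assumptions with respect to this exact release; the model path is hypothetical.
```python
# Sketch: asynchronous inference over a list of frames with AsyncPipeline.
# Method names follow Open Model Zoo demo usage and are assumptions here.
import cv2
from model_api.adapters import OpenvinoAdapter, create_core
from model_api.models import SSD
from model_api.pipelines import AsyncPipeline

model = SSD(OpenvinoAdapter(create_core(), "model.xml", device="CPU"), preload=True)
pipeline = AsyncPipeline(model)

frames = [cv2.imread("sample.png")]
for frame_id, frame in enumerate(frames):
    pipeline.submit_data(frame, frame_id, {"frame": frame})

pipeline.await_all()
for frame_id in range(len(frames)):
    results, meta = pipeline.get_result(frame_id)
    print(results)
```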
## Ready-to-use Model API solutions
To apply Model API wrappers in custom applications, study the example below, which shows a common usage scenario.
In the example, the SSD architecture is used to predict bounding boxes on the input image `"sample.png"`. Model execution is handled by `OpenvinoAdapter`, so we pass the path to the model's `xml` file.
Once the SSD model wrapper instance is created, predictions are obtained in a single call, `ssd_model(input_data)`: the wrapper runs preprocessing, synchronous inference on the OpenVINO™ toolkit side, and postprocessing.
```python
import cv2
# import model wrapper class
from model_api.models import SSD
# import inference adapter and helper for runtime setup
from model_api.adapters import OpenvinoAdapter, create_core
# read input image using opencv
input_data = cv2.imread("sample.png")
# define the path to mobilenet-ssd model in IR format
model_path = "public/mobilenet-ssd/FP32/mobilenet-ssd.xml"
# create adapter for OpenVINO™ runtime, pass the model path
inference_adapter = OpenvinoAdapter(create_core(), model_path, device="CPU")
# create model API wrapper for SSD architecture
# preload=True loads the model on CPU inside the adapter
ssd_model = SSD(inference_adapter, preload=True)
# apply input preprocessing, sync inference, model output postprocessing
results = ssd_model(input_data)
```
For more complex scenarios, refer to the Open Model Zoo Python* demos, where asynchronous inference is applied.
Raw data
```json
{
"_id": null,
"home_page": null,
"name": "openvino-model-api",
"maintainer": "Intel(R) Corporation",
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": null,
"author": "Intel(R) Corporation",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/77/33/2825699fcce3197b9d3136612a58f65503faa90955c9ac5e41992422b50f/openvino_model_api-0.2.5.tar.gz",
"platform": null,
"description": "# Python* Model API package\n\nModel API package is a set of wrapper classes for particular tasks and model architectures, simplifying data preprocess and postprocess as well as routine procedures (model loading, asynchronous execution, etc...)\nAn application feeds model class with input data, then the model returns postprocessed output data in user-friendly format.\n\n## Package structure\n\nThe Model API consists of 3 libraries:\n* _adapters_ implements a common interface to allow Model API wrappers usage with different executors. See [Model API adapters](#model-api-adapters) section\n* _models_ implements wrappers for Open Model Zoo models. See [Model API Wrappers](#model-api-wrappers) section\n* _pipelines_ implements pipelines for model inference and manage the synchronous/asynchronous execution. See [Model API Pipelines](#model-api-pipelines) section\n\n### Prerequisites\n\nThe package requires\n- one of OpenVINO supported Python version (see OpenVINO documentation for the details)\n- OpenVINO\u2122 toolkit\n\nIf you build Model API package from source, you should install the OpenVINO\u2122 toolkit. See the options:\n\nUse installation package for [Intel\u00ae Distribution of OpenVINO\u2122 toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html) or build the open-source version available in the [OpenVINO GitHub repository](https://github.com/openvinotoolkit/openvino) using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).\n\nAlso, you can install the OpenVINO Python\\* package via the command:\n ```sh\npip install openvino\n ```\n\n## Installing Python* Model API package\n\nUse the following command to install Model API from source:\n```sh\npip install <omz_dir>/demos/common/python\n```\n\nAlternatively, you can generate the package using a wheel. Follow the steps below:\n1. Build the wheel.\n\n```sh\npython <omz_dir>/demos/common/python/setup.py bdist_wheel\n```\nThe wheel should appear in the dist folder.\nName example: `openmodelzoo_modelapi-0.0.0-py3-none-any.whl`\n\n2. 
Install the package in the clean environment with `--force-reinstall` key.\n```sh\npip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall\n```\n\nTo verify the package is installed, you might use the following command:\n```sh\npython -c \"from openvino.model_zoo import model_api\"\n```\n\n## Model API Wrappers\n\nThe Model API package provides model wrappers, which implement standardized preprocessing/postprocessing functions per \"task type\" and incapsulate model-specific logic for usage of different models in a unified manner inside the application.\n\nThe following tasks can be solved with wrappers usage:\n\n| Task type | Model API wrappers |\n|----------------------------|--------------------|\n| Background Matting | <ul><li>`VideoBackgroundMatting`</li><li>`ImageMattingWithBackground`</li><li>`PortraitBackgroundMatting`</li></ul> |\n| Classification | <ul><li>`ClassificationModel`</li></ul> |\n| Deblurring | <ul><li>`Deblurring`</li></ul> |\n| Human Pose Estimation | <ul><li>`HpeAssociativeEmbedding`</li><li>`OpenPose`</li></ul> |\n| Instance Segmentation | <ul><li>`MaskRCNNModel`</li><li>`YolactModel`</li></ul> |\n| Monocular Depth Estimation | <ul><li> `MonoDepthModel`</li></ul> |\n| Named Entity Recognition | <ul><li>`BertNamedEntityRecognition`</li></ul> |\n| Object Detection | <ul><li>`CenterNet`</li><li>`DETR`</li><li>`CTPN`</li><li>`FaceBoxes`</li><li>`NanoDet`</li><li>`NanoDetPlus`</li><li>`RetinaFace`</li><li>`RetinaFacePyTorch`</li><li>`SSD`</li><li>`UltraLightweightFaceDetection`</li><li>`YOLO`</li><li>`YoloV3ONNX`</li><li>`YoloV4`</li><li>`YOLOF`</li><li>`YOLOX`</li></ul> |\n| Question Answering | <ul><li>`BertQuestionAnswering`</li></ul> |\n| Salient Object Detection | <ul><li>`SalientObjectDetectionModel`</li></ul> |\n| Semantic Segmentation | <ul><li>`SegmentationModel`</li></ul> |\n| Action Classification | <ul><li>`ActionClassificationModel`</li></ul> |\n\n## Model API Adapters\n\nModel API wrappers are executor-agnostic, meaning it does not implement the specific model inference or model loading, instead it can be used with different executors having the implementation of common interface methods in adapter class respectively.\n\nCurrently, `OpenvinoAdapter` and `OVMSAdapter` are supported.\n\n#### OpenVINO Adapter\n\n`OpenvinoAdapter` hides the OpenVINO\u2122 toolkit API, which allows Model API wrappers launching with models represented in Intermediate Representation (IR) format.\nIt accepts a path to either `xml` model file or `onnx` model file.\n\n#### OpenVINO Model Server Adapter\n\n`OVMSAdapter` hides the OpenVINO Model Server python client API, which allows Model API wrappers launching with models served by OVMS.\n\nRefer to __[`OVMSAdapter`](adapters/ovms_adapter.md)__ to learn about running demos with OVMS.\n\nFor using OpenVINO Model Server Adapter you need to install the package with extra module:\n```sh\npip install <omz_dir>/demos/common/python[ovms]\n```\n\n#### ONNXRuntime Adapter\n\n`ONNXRuntimeAdapter` hides the ONNXRuntime, which Model API wrappers launching with models represented in ONNX format.\nIt accepts a path to `onnx` file. This adapter's functionality is limited: it doesn't support model reshaping, asynchronous inference and\nwas tested only on limited scope of models. 
Supported model wrappers: `SSD`, `MaskRCNNModel`, `SegmentationModel`, and `ClassificationModel`.\n\nTo use this adapter, install extra dependencies:\n```sh\npip install onnx onnxruntime\n```\n\n## Model API Pipelines\n\nModel API Pipelines represent the high-level wrappers upon the input data and accessing model results management.\nThey perform the data submission for model inference, verification of inference status, whether the result is ready or not, and results accessing.\n\nThe `AsyncPipeline` is available, which handles the asynchronous execution of a single model.\n\n## Ready-to-use Model API solutions\n\nTo apply Model API wrappers in custom applications, learn the provided example of common scenario of how to use Model API.\n\n In the example, the SSD architecture is used to predict bounding boxes on input image `\"sample.png\"`. The model execution is produced by `OpenvinoAdapter`, therefore we submit the path to the model's `xml` file.\n\nOnce the SSD model wrapper instance is created, we get the predictions by the model in one line: `ssd_model(input_data)` - the wrapper performs the preprocess method, synchronous inference on OpenVINO\u2122 toolkit side and postprocess method.\n\n```python\nimport cv2\n# import model wrapper class\nfrom model_api.models import SSD\n# import inference adapter and helper for runtime setup\nfrom model_api.adapters import OpenvinoAdapter, create_core\n\n\n# read input image using opencv\ninput_data = cv2.imread(\"sample.png\")\n\n# define the path to mobilenet-ssd model in IR format\nmodel_path = \"public/mobilenet-ssd/FP32/mobilenet-ssd.xml\"\n\n# create adapter for OpenVINO\u2122 runtime, pass the model path\ninference_adapter = OpenvinoAdapter(create_core(), model_path, device=\"CPU\")\n\n# create model API wrapper for SSD architecture\n# preload=True loads the model on CPU inside the adapter\nssd_model = SSD(inference_adapter, preload=True)\n\n# apply input preprocessing, sync inference, model output postprocessing\nresults = ssd_model(input_data)\n```\n\nTo study the complex scenarios, refer to Open Model Zoo Python* demos, where the asynchronous inference is applied.\n",
"bugtrack_url": null,
"license": null,
"summary": "Model API: model wrappers and pipelines for inference with OpenVINO",
"version": "0.2.5",
"project_urls": {
"Documentation": "https://github.com/openvinotoolkit/model_api/blob/master/README.md",
"Homepage": "https://github.com/openvinotoolkit/model_api",
"Repository": "https://github.com/openvinotoolkit/model_api.git"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "0fe07217125a4c0dac00bc8a33b433fa6cdbe3d7bf1b0c8b56766d05d86fda52",
"md5": "4507aafc3e03105da5ba08286cedf63d",
"sha256": "758f51059aecdc26ba8b903079f2fcafa208441bbebd8b82483eb045dc04dd6f"
},
"downloads": -1,
"filename": "openvino_model_api-0.2.5-py3-none-any.whl",
"has_sig": false,
"md5_digest": "4507aafc3e03105da5ba08286cedf63d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 132248,
"upload_time": "2024-10-22T09:55:05",
"upload_time_iso_8601": "2024-10-22T09:55:05.345203Z",
"url": "https://files.pythonhosted.org/packages/0f/e0/7217125a4c0dac00bc8a33b433fa6cdbe3d7bf1b0c8b56766d05d86fda52/openvino_model_api-0.2.5-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "77332825699fcce3197b9d3136612a58f65503faa90955c9ac5e41992422b50f",
"md5": "97e4cfc1409d5cc42d1ac1d4e1182dbc",
"sha256": "26a12d73d83f1b23c9640e4ccf2d2ccfdc866a3b2dce612e0c99a81bff2d06a7"
},
"downloads": -1,
"filename": "openvino_model_api-0.2.5.tar.gz",
"has_sig": false,
"md5_digest": "97e4cfc1409d5cc42d1ac1d4e1182dbc",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 97814,
"upload_time": "2024-10-22T09:55:07",
"upload_time_iso_8601": "2024-10-22T09:55:07.528750Z",
"url": "https://files.pythonhosted.org/packages/77/33/2825699fcce3197b9d3136612a58f65503faa90955c9ac5e41992422b50f/openvino_model_api-0.2.5.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-10-22 09:55:07",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "openvinotoolkit",
"github_project": "model_api",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "openvino-model-api"
}