open-image-models


Name: open-image-models
Version: 0.5.1
Summary: Pre-trained image models using ONNX for fast, out-of-the-box inference.
Upload time: 2025-09-15 01:15:42
Requires Python: >=3.10
Keywords: computer-vision, deep-learning, image-classification, image-processing, object-detection, onnx, open-source-models
            # Open Image Models

[![Actions status](https://github.com/ankandrew/open-image-models/actions/workflows/test.yaml/badge.svg)](https://github.com/ankandrew/open-image-models/actions)
[![Actions status](https://github.com/ankandrew/open-image-models/actions/workflows/release.yaml/badge.svg)](https://github.com/ankandrew/open-image-models/actions)
[![GitHub version](https://img.shields.io/github/v/release/ankandrew/open-image-models)](https://github.com/ankandrew/open-image-models/releases)
[![image](https://img.shields.io/pypi/pyversions/open-image-models.svg)](https://pypi.python.org/pypi/open-image-models)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://ankandrew.github.io/open-image-models/)
[![Pylint](https://img.shields.io/badge/linting-pylint-yellowgreen)](https://github.com/pylint-dev/pylint)
[![Checked with mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/)
[![ONNX Model](https://img.shields.io/badge/model-ONNX-blue?logo=onnx&logoColor=white)](https://onnx.ai/)
[![Hugging Face Spaces](https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-orange)](https://huggingface.co/spaces/ankandrew/open-image-models)
![License](https://img.shields.io/github/license/ankandrew/open-image-models)

<!-- TOC -->
* [Open Image Models](#open-image-models)
  * [Introduction](#introduction)
  * [Features](#features)
  * [Installation](#installation)
  * [Available Models](#available-models)
    * [Object Detection](#object-detection)
      * [Plate Detection](#plate-detection)
  * [Contributing](#contributing)
  * [Citation](#citation)
<!-- TOC -->

---

## Introduction

**Ready-to-use** models for a range of **computer vision** tasks like **detection**, **classification**, and
**more**. With **ONNX** support, you get **fast** and **accurate** results right out of the box.

Easily integrate these models into your apps for **real-time** processing—ideal for edge devices, cloud setups, or
production environments. In **one line of code**, you can have **powerful** model **inference** running!

```python
from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-256-license-plate-end2end")
lp_detector.predict("path/to/license_plate_image.jpg")
```

✨ That's it! Powerful license plate detection with just a few lines of code.

## Features

- 🚀 Pre-trained: Models are **ready** for immediate use, no additional training required.
- 🌟 ONNX: Cross-platform support for **fast inference** on both CPU and GPU environments.
- ⚡ Performance: Optimized for both speed and accuracy, ensuring efficient **real-time** applications.
- 💻 Simple API: Power up your applications with robust model inference in just one line of code.

## Installation

To install open-image-models via pip, use the following command:

```shell
pip install open-image-models[onnx]
```

> [!NOTE]
> For hardware acceleration, you can use one of the following extras instead: `onnx-gpu`, `onnx-openvino`,
> `onnx-directml`, or `onnx-qnn`. Example: `pip install open-image-models[onnx-gpu]`

## Available Models

### Object Detection

#### Plate Detection

![](https://raw.githubusercontent.com/ankandrew/LocalizadorPatentes/2e765012f69c4fbd8decf998e61ed136004ced24/extra/demo_localizador.gif)

|                 Model                 | Image Size | Precision (P) | Recall (R) | mAP50 | mAP50-95 |
|:-------------------------------------:|------------|---------------|------------|-------|----------|
| `yolo-v9-s-608-license-plate-end2end` | 608        | 0.957         | 0.917      | 0.966 | 0.772    |
| `yolo-v9-t-640-license-plate-end2end` | 640        | 0.966         | 0.896      | 0.958 | 0.758    |
| `yolo-v9-t-512-license-plate-end2end` | 512        | 0.955         | 0.901      | 0.948 | 0.724    |
| `yolo-v9-t-416-license-plate-end2end` | 416        | 0.94          | 0.894      | 0.94  | 0.702    |
| `yolo-v9-t-384-license-plate-end2end` | 384        | 0.942         | 0.863      | 0.92  | 0.687    |
| `yolo-v9-t-256-license-plate-end2end` | 256        | 0.937         | 0.797      | 0.858 | 0.606    |
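
mAP50 is the mean average precision at an IoU (intersection-over-union) threshold of 0.5: a predicted box counts as a true positive when it overlaps a ground-truth box by at least 50%, while mAP50-95 averages over thresholds from 0.5 to 0.95. A minimal IoU sketch for `(x1, y1, x2, y2)` boxes (illustrative only; not part of this library's API):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of both areas minus the overlap
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```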

<details>
  <summary>Usage</summary>

  ```python
import cv2
from rich import print

from open_image_models import LicensePlateDetector

# Initialize the License Plate Detector with the pre-trained YOLOv9 model
lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Load an image
image_path = "path/to/license_plate_image.jpg"
image = cv2.imread(image_path)

# Perform license plate detection
detections = lp_detector.predict(image)
print(detections)

# Benchmark the model performance
lp_detector.show_benchmark(num_runs=1000)

# Display predictions on the image
annotated_image = lp_detector.display_predictions(image)

# Show the annotated image
cv2.imshow("Annotated Image", annotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
  ```

</details>
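
When cropping detected plates out of the frame, it helps to clamp box coordinates to the image bounds before slicing, since boxes near the border can extend slightly outside the image. A small helper sketch (the `(x1, y1, x2, y2)` box layout is an assumption here; check the actual detection objects returned by `predict` in the docs):

```python
def clamp_box(box, width, height):
    """Clamp an (x1, y1, x2, y2) box to image bounds so image[y1:y2, x1:x2] is always valid."""
    x1, y1, x2, y2 = (int(v) for v in box)
    x1, x2 = max(0, min(x1, width)), max(0, min(x2, width))
    y1, y2 = max(0, min(y1, height)), max(0, min(y2, height))
    return x1, y1, x2, y2
```

After clamping, a plate crop is just `image[y1:y2, x1:x2]` on the loaded OpenCV image.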

> [!TIP]
> Check out the [docs](https://ankandrew.github.io/open-image-models)!

## Contributing

Contributions to the repo are greatly appreciated. Bug fixes, feature enhancements, and new models are all
warmly welcomed.

To get started with development, follow these steps:

1. Clone the repo:
    ```shell
    git clone https://github.com/ankandrew/open-image-models.git
    ```
2. Install all dependencies using [Poetry](https://python-poetry.org/docs/#installation):
    ```shell
    make install
    ```
3. To ensure your changes pass linting and tests before submitting a PR:
    ```shell
    make checks
    ```

## Citation

```bibtex
@article{wang2024yolov9,
  title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
  author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2402.13616},
  year={2024}
}
```

            
