# Open Image Models
[![Actions status](https://github.com/ankandrew/open-image-models/actions/workflows/main.yaml/badge.svg)](https://github.com/ankandrew/open-image-models/actions)
[![GitHub version](https://img.shields.io/github/v/release/ankandrew/fast-alpr)](https://github.com/ankandrew/fast-alpr/releases)
[![image](https://img.shields.io/pypi/pyversions/open-image-models.svg)](https://pypi.python.org/pypi/open-image-models)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://ankandrew.github.io/open-image-models/)
[![Pylint](https://img.shields.io/badge/linting-pylint-yellowgreen)](https://github.com/pylint-dev/pylint)
[![Checked with mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/)
[![ONNX Model](https://img.shields.io/badge/model-ONNX-blue?logo=onnx&logoColor=white)](https://onnx.ai/)
[![Hugging Face Spaces](https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-orange)](https://huggingface.co/spaces/ankandrew/open-image-models)
![License](https://img.shields.io/github/license/ankandrew/fast-alpr)
<!-- TOC -->
* [Open Image Models](#open-image-models)
  * [Introduction](#introduction)
  * [Features](#features)
  * [Available Models](#available-models)
    * [Object Detection](#object-detection)
      * [Plate Detection](#plate-detection)
  * [Installation](#installation)
  * [Contributing](#contributing)
  * [Citation](#citation)
<!-- TOC -->
---
## Introduction
**Ready-to-use** models for a range of **computer vision** tasks like **detection**, **classification**, and
**more**. With **ONNX** support, you get **fast** and **accurate** results right out of the box.
Easily integrate these models into your apps for **real-time** processing—ideal for edge devices, cloud setups, or
production environments. In **one line of code**, you can have **powerful** model **inference** running!
```python
from open_image_models import LicensePlateDetector
lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-256-license-plate-end2end")
lp_detector.predict("path/to/license_plate_image.jpg")
```
✨ That's it! Powerful license plate detection with just a few lines of code.
## Features
- 🚀 Pre-trained: Models are **ready** for immediate use, no additional training required.
- 🌟 ONNX: Cross-platform support for **fast inference** on both CPU and GPU environments.
- ⚡ Performance: Optimized for both speed and accuracy, ensuring efficient **real-time** applications.
- 💻 Simple API: Power up your applications with robust model inference in just one line of code.
## Available Models
### Object Detection
#### Plate Detection
![](https://raw.githubusercontent.com/ankandrew/LocalizadorPatentes/2e765012f69c4fbd8decf998e61ed136004ced24/extra/demo_localizador.gif)
| Model | Image Size | Precision (P) | Recall (R) | mAP50 | mAP50-95 |
|:-------------------------------------:|------------|---------------|------------|-------|----------|
| `yolo-v9-s-608-license-plate-end2end` | 608 | 0.957 | 0.917 | 0.966 | 0.772 |
| `yolo-v9-t-640-license-plate-end2end` | 640 | 0.966 | 0.896 | 0.958 | 0.758 |
| `yolo-v9-t-512-license-plate-end2end` | 512 | 0.955 | 0.901 | 0.948 | 0.724 |
| `yolo-v9-t-416-license-plate-end2end` | 416 | 0.940 | 0.894 | 0.940 | 0.702 |
| `yolo-v9-t-384-license-plate-end2end` | 384 | 0.942 | 0.863 | 0.920 | 0.687 |
| `yolo-v9-t-256-license-plate-end2end` | 256 | 0.937 | 0.797 | 0.858 | 0.606 |
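Smaller input sizes trade accuracy for speed. As a rough guide to that trade-off, a small helper like the one below (hypothetical, not part of the library) can pick the model with the smallest input size that still meets a target mAP50, using the figures from the table above:

```python
# mAP50 figures copied from the benchmark table above.
MAP50 = {
    "yolo-v9-s-608-license-plate-end2end": 0.966,
    "yolo-v9-t-640-license-plate-end2end": 0.958,
    "yolo-v9-t-512-license-plate-end2end": 0.948,
    "yolo-v9-t-416-license-plate-end2end": 0.940,
    "yolo-v9-t-384-license-plate-end2end": 0.920,
    "yolo-v9-t-256-license-plate-end2end": 0.858,
}


def smallest_model(min_map50: float) -> str:
    """Return the model with the smallest input size whose mAP50 meets the threshold."""

    def image_size(name: str) -> int:
        # The input size is encoded in the model name, e.g. "yolo-v9-s-608-...".
        return int(name.split("-")[3])

    candidates = [m for m, score in MAP50.items() if score >= min_map50]
    if not candidates:
        raise ValueError(f"No model reaches mAP50 >= {min_map50}")
    return min(candidates, key=image_size)
```

For example, requiring mAP50 of at least 0.95 selects the 608 variant, while relaxing the threshold to 0.90 lets the much faster 384 model through.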
<details>
<summary>Usage</summary>
```python
import cv2
from rich import print
from open_image_models import LicensePlateDetector
# Initialize the License Plate Detector with the pre-trained YOLOv9 model
lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
# Load an image
image_path = "path/to/license_plate_image.jpg"
image = cv2.imread(image_path)
# Perform license plate detection
detections = lp_detector.predict(image)
print(detections)
# Benchmark the model performance
lp_detector.show_benchmark(num_runs=1000)
# Display predictions on the image
annotated_image = lp_detector.display_predictions(image)
# Show the annotated image
cv2.imshow("Annotated Image", annotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
</details>
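A common next step in an ALPR pipeline is cropping each detected plate out of the frame for OCR. The sketch below shows the idea with plain slicing, assuming each detection provides `(x1, y1, x2, y2)` pixel coordinates; the attribute layout of the objects actually returned by `predict()` may differ, so check the docs for the real API.

```python
def crop_box(image, box):
    """Crop a region from an image stored as rows of pixels (list of lists).

    `box` is assumed to be (x1, y1, x2, y2) in pixel coordinates, with the
    usual half-open convention: columns x1..x2-1 and rows y1..y2-1.
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]


# Tiny 4x6 "image" where each pixel is its (row, col) pair, for illustration.
image = [[(r, c) for c in range(6)] for r in range(4)]

# Crop a hypothetical plate detection spanning columns 1..3 of rows 0..1.
plate = crop_box(image, (1, 0, 4, 2))
```

With a real frame loaded via `cv2.imread`, the same slicing works on the NumPy array (`image[y1:y2, x1:x2]`).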
> [!TIP]
> Check out the [docs](https://ankandrew.github.io/open-image-models)!
## Installation
To install open-image-models via pip, use the following command:
```shell
pip install open-image-models
```
## Contributing
Contributions to the repo are greatly appreciated, whether they are bug fixes, feature enhancements, or new models.
To start contributing or to begin development, you can follow these steps:
1. Clone the repository:
```shell
git clone https://github.com/ankandrew/open-image-models.git
```
2. Install all dependencies using [Poetry](https://python-poetry.org/docs/#installation):
```shell
poetry install --all-extras
```
3. To ensure your changes pass linting and tests before submitting a PR:
```shell
make checks
```
## Citation
```bibtex
@article{wang2024yolov9,
  title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
  author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2402.13616},
  year={2024}
}
```