open-image-models


Name: open-image-models
Version: 0.1.0
Home page: https://github.com/ankandrew/open-image-models
Summary: Pre-trained image models using ONNX for fast, out-of-the-box inference.
Upload time: 2024-09-30 03:14:32
Maintainer: None
Docs URL: None
Author: ankandrew
Requires Python: <4.0,>=3.10
License: None
Keywords: image-processing, computer-vision, deep-learning, image-classification, object-detection, open-source-models, onnx
# Open Image Models

[![Actions status](https://github.com/ankandrew/open-image-models/actions/workflows/main.yaml/badge.svg)](https://github.com/ankandrew/open-image-models/actions)
[![image](https://img.shields.io/pypi/v/open-image-models.svg)](https://pypi.python.org/pypi/open-image-models)
[![image](https://img.shields.io/pypi/pyversions/open-image-models.svg)](https://pypi.python.org/pypi/open-image-models)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Pylint](https://img.shields.io/badge/linting-pylint-yellowgreen)](https://github.com/pylint-dev/pylint)
[![Checked with mypy](http://www.mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/)
[![image](https://img.shields.io/pypi/l/open-image-models.svg)](https://pypi.python.org/pypi/open-image-models)

<p>
  <img src="./assets/open-image-models-logo.png" alt="Open Image Models Logo" width="650"/>
</p>

<!-- TOC -->
* [Open Image Models](#open-image-models)
  * [Introduction](#introduction)
  * [Features](#features)
  * [Available Models](#available-models)
    * [Object Detection](#object-detection)
      * [Plate Detection](#plate-detection)
  * [Installation](#installation)
  * [Contributing](#contributing)
<!-- TOC -->

---

## Introduction

We offer **ready-to-use** models for a range of **computer vision** tasks like **detection**, **classification**, and
**more**. With **ONNX** support, you get **fast** and **accurate** results right out of the box.

Easily integrate these models into your apps for **real-time** processing—ideal for edge devices, cloud setups, or
production environments. In **one line of code**, you can have **powerful** model **inference** running!
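
For example, once the package is installed, a detector can be created in a single line. This is a minimal sketch based on the usage example further below:

```python
from open_image_models import LicensePlateDetector

# One line loads the pre-trained ONNX license plate detector;
# detector.predict(image) then runs inference on an OpenCV/NumPy image.
detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
```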

## Features

- 🚀 Pre-trained Models: Models are **ready** for immediate use, no additional training required.
- 🌟 ONNX Format: Cross-platform support for **fast inference** on both CPU and GPU environments (see the provider sketch after this list).
- ⚡ High Performance: Optimized for both speed and accuracy, enabling efficient **real-time** applications.
- 📏 Variety of Image Sizes: Models **available** with different input sizes, so you can trade accuracy for speed depending on the task.
- 💻 Simple API: Achieve license plate detection with just **one line of code**, enabling rapid integration and
  deployment.
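
As an illustration of the ONNX point above: with ONNX Runtime directly, the execution provider is chosen when the session is created. This is a generic ONNX Runtime sketch, not this library's API (how open-image-models selects providers internally is not documented here), and `model.onnx` is a placeholder path:

```python
import onnxruntime as ort

# Prefer GPU (CUDA) when available, otherwise fall back to CPU.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path, not a file shipped by this package
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers were actually loaded
```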

## Available Models

### Object Detection

#### Plate Detection

| Model    | Image Size | Precision (P) | Recall (R) | mAP50 | mAP50-95 | Speed (ms)<sup>1</sup> |
|----------|------------|---------------|------------|-------|----------|------------------------|
| yolov9-t | 640        | 0.955         | 0.91       | 0.959 | 0.75     | XXX        |
| yolov9-t | 512        | 0.948         | 0.901      | 0.95  | 0.718    | XXX        |
| yolov9-t | 384        | 0.943         | 0.863      | 0.921 | 0.688    | XXX        |
| yolov9-t | 256        | 0.937         | 0.797      | 0.858 | 0.606    | XXX        |

_<sup>1</sup> Inference measured on an Apple M1 using ONNX Runtime's CPUExecutionProvider; using CoreMLExecutionProvider is roughly 5x faster._
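
Since speed depends heavily on hardware and execution provider, it is best measured locally. The `show_benchmark` helper from the usage example below can be used for that, for instance:

```python
from open_image_models import LicensePlateDetector

# Measure average inference time for the 384-input model on this machine.
lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")
lp_detector.show_benchmark(num_runs=1000)
```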

<details>
  <summary>Usage</summary>

  ```python
import cv2
from rich import print

from open_image_models import LicensePlateDetector

# Initialize the License Plate Detector with the pre-trained YOLOv9 model
lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Load an image
image_path = "path/to/license_plate_image.jpg"
image = cv2.imread(image_path)

# Perform license plate detection
detections = lp_detector.predict(image)
print(detections)

# Benchmark the model performance
lp_detector.show_benchmark(num_runs=1000)

# Display predictions on the image
annotated_image = lp_detector.display_predictions(image)

# Show the annotated image
cv2.imshow("Annotated Image", annotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
  ```

</details>

## Installation

To install `open-image-models` via pip, run the following command:

```shell
pip install open-image-models
```
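
To confirm the installation, a quick smoke test can be run from the command line (assumes a Python 3.10+ environment, matching the package's `requires_python`):

```shell
python -c "from open_image_models import LicensePlateDetector; print('open-image-models is ready')"
```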

## Contributing

Contributions to the repo are greatly appreciated, whether they are bug fixes, feature enhancements, or new models.

To start contributing or to set up a development environment, follow these steps:

1. Clone the repo:
    ```shell
    git clone https://github.com/ankandrew/open-image-models.git
    ```
2. Install all dependencies using [Poetry](https://python-poetry.org/docs/#installation):
    ```shell
    poetry install --all-extras
    ```
3. To ensure your changes pass linting and tests before submitting a PR:
    ```shell
    make checks
    ```


            
