qrdet

Name: qrdet
Version: 2.4
Summary: Robust QR Detector based on YOLOv8
Home page: https://github.com/Eric-Canas/qrdet
Author: Eric Canas
License: MIT
Upload time: 2023-10-11 17:10:58
            # QRDet
**QRDet** is a robust **QR Detector** based on <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a>.

**QRDet** will detect & segment **QR** codes even in **difficult** positions or **tricky** images. If you are looking for a complete **QR Detection** + **Decoding** pipeline, take a look at <a href="https://github.com/Eric-Canas/qreader" target="_blank">QReader</a>.  

## Installation

To install **QRDet**, simply run:

```bash
pip install qrdet
```

## Usage

You only need to call one function to use **QRDet**: ``detect``.

```python
from qrdet import QRDetector
import cv2

detector = QRDetector(model_size='s')
image = cv2.imread(filename='resources/qreader_test_image.jpeg')
detections = detector.detect(image=image, is_bgr=True)

# Draw the detections
for detection in detections:
    # Cast the float32 bbox coordinates to int for OpenCV drawing
    x1, y1, x2, y2 = detection['bbox_xyxy'].astype(int)
    confidence = detection['confidence']
    segmentation_xy = detection['quad_xy']
    cv2.rectangle(image, (x1, y1), (x2, y2), color=(0, 255, 0), thickness=2)
    cv2.putText(image, f'{confidence:.2f}', (x1, y1 - 10), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                fontScale=1, color=(0, 255, 0), thickness=2)
# Save the results
cv2.imwrite(filename='resources/qreader_test_image_detections.jpeg', img=image)
```
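A common next step is cropping each detected code before decoding it. Here is a minimal sketch (NumPy only; the detection values are invented for illustration, but the keys follow the API reference below) that crops the axis-aligned region around `padded_quad_xy`, clipping to the image bounds in case the padded quad extends past them:

```python
import numpy as np

# Invented stand-ins for illustration; real values come from detector.detect(...)
image = np.zeros((480, 640, 3), dtype=np.uint8)
detection = {
    'padded_quad_xy': np.array([[100., 90.], [300., 110.], [310., 320.], [95., 300.]],
                               dtype=np.float32),
    'image_shape': (480, 640),  # (h, w)
}

quad = detection['padded_quad_xy']
# Axis-aligned bounds of the quadrilateral
x1, y1 = np.floor(quad.min(axis=0)).astype(int)
x2, y2 = np.ceil(quad.max(axis=0)).astype(int)
# Clip to the image, since the padded quad may fall partially outside it
h, w = detection['image_shape']
x1, y1 = max(x1, 0), max(y1, 0)
x2, y2 = min(x2, w), min(y2, h)

crop = image[y1:y2, x1:x2]
print(crop.shape)  # (230, 215, 3)
```

The crop can then be handed to any decoder; for a ready-made detect + decode pipeline, see QReader linked above.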

<img alt="detections_output" title="detections_output" src="https://raw.githubusercontent.com/Eric-Canas/qrdet/main/resources/qreader_test_image_detections.jpeg" width="100%">

## API Reference

### QRDetector.detect(image, is_bgr = False, **kwargs)

- ``image``: **np.ndarray|'PIL.Image'|'torch.Tensor'|str**. `np.ndarray` of shape **(H, W, 3)**, `PIL.Image`, `Tensor` of shape **(1, 3, H, W)**, or a `path`/`url` to the image to predict on. Pass `'screen'` to grab a screenshot.
- ``is_bgr``: **bool**. If `True`, the image is expected to be in **BGR** order; otherwise, **RGB**. Only used when `image` is an `np.ndarray` or `torch.Tensor`. Default: `False`.
- ``legacy``: **bool**. If passed as a **kwarg**, the output is parsed to be identical to that of the 1.x versions. Not recommended. Default: `False`.

- **Returns**: **tuple[dict[str, np.ndarray|float|tuple[float|int, float|int]]]**. A tuple of dictionaries, one per detection, each containing the following keys.

| Key              | Value Desc.                                 | Value Type                 | Value Form                  |
|------------------|---------------------------------------------|----------------------------|-----------------------------|
| `confidence`     | Detection confidence                        | `float`                    | `conf.`                     |
| `bbox_xyxy`      | Bounding box                                | np.ndarray (**4**)         | `[x1, y1, x2, y2]`          |
| `cxcy`           | Center of bounding box                      | tuple[`float`, `float`]    | `(x, y)`                    |
| `wh`             | Bounding box width and height               | tuple[`float`, `float`]    | `(w, h)`                    |
| `polygon_xy`     | Precise polygon that segments the _QR_      | np.ndarray (**N**, **2**)  | `[[x1, y1], [x2, y2], ...]` |
| `quad_xy`        | Four corners polygon that segments the _QR_ | np.ndarray (**4**, **2**)  | `[[x1, y1], ..., [x4, y4]]` |
| `padded_quad_xy` |`quad_xy` padded to fully cover `polygon_xy` | np.ndarray (**4**, **2**)  | `[[x1, y1], ..., [x4, y4]]` |
| `image_shape`    | Shape of the input image                    | tuple[`float`, `float`]    | `(h, w)`                    |  

> **NOTE:**
> - All `np.ndarray` values are of type `np.float32`.
> - Every key (except `confidence` and `image_shape`) also has a normalized (`n`-suffixed) version. For example, `bbox_xyxy` gives the bbox of the QR in image coordinates (`x` in `[0., im_w]`, `y` in `[0., im_h]`), while `bbox_xyxyn` gives the same bounding box in normalized coordinates `[0., 1.]`.
> - `bbox_xyxy[n]` and `polygon_xy[n]` are clipped to `image_shape`, so you can use them for indexing without further handling.
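
The normalized keys make it easy to map detections back onto resized copies of the image. A short sketch (values invented for illustration; key names follow the table above) that denormalizes `bbox_xyxyn` and uses the result to index the image:

```python
import numpy as np

image_shape = (480, 640)  # (h, w), as in the `image_shape` key
h, w = image_shape
image = np.zeros((h, w, 3), dtype=np.uint8)

# Invented normalized bbox; a real one would come from a detection dict
bbox_xyxyn = np.array([0.25, 0.25, 0.75, 0.75], dtype=np.float32)

# Denormalize: scale x coordinates by the width, y coordinates by the height
bbox_xyxy = bbox_xyxyn * np.array([w, h, w, h], dtype=np.float32)

# Since bbox_xyxy is clipped to image_shape, integer casts index safely
x1, y1, x2, y2 = bbox_xyxy.astype(int)
crop = image[y1:y2, x1:x2]
print(crop.shape)  # (240, 320, 3)
```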

## Acknowledgements

This library is based on the following projects:

- <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a> model for **Object Segmentation**.
- <a href="https://github.com/Eric-Canas/quadrilateral-fitter" target="_blank">QuadrilateralFitter</a> for fitting 4-corner polygons to noisy **segmentation outputs**.
