qreader

Name: qreader
Version: 3.12
Home page: https://github.com/Eric-Canas/qreader
Summary: Robust and Straight-Forward solution for reading difficult and tricky QR codes within images in Python. Supported by a YOLOv8 QR Segmentation model.
Upload time: 2023-10-13 09:21:38
Author: Eric Canas
License: MIT
# QReader

<img alt="QReader" title="QReader" src="https://raw.githubusercontent.com/Eric-Canas/QReader/main/documentation/resources/logo.png" width="20%" align="left"> **QReader** is a **Robust** and **Straight-Forward** solution for reading **difficult** and **tricky** **QR** codes within images in **Python**. Powered by a <a href="https://github.com/Eric-Canas/qrdet" target="_blank">YOLOv8</a> model.

Behind the scenes, the library is composed of two main building blocks: a <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a> **QR Detector** model trained to **detect** and **segment** QR codes (also offered as a <a href="https://github.com/Eric-Canas/qrdet" target="_blank">stand-alone package</a>), and the <a href="https://github.com/NaturalHistoryMuseum/pyzbar" target="_blank">Pyzbar</a> **QR Decoder**. Using the information extracted by this **QR Detector**, **QReader** transparently applies, on top of <a href="https://github.com/NaturalHistoryMuseum/pyzbar" target="_blank">Pyzbar</a>, different image preprocessing techniques that maximize the **decoding** rate on difficult images.


## Installation

To install **QReader**, simply run:

```bash
pip install qreader
```

You may need to install some additional **pyzbar** dependencies:

On **Windows**:  

Rarely, you may see an ugly `ImportError` related to `libzbar-64.dll`. If it happens, install [vcredist_x64.exe](https://www.microsoft.com/en-gb/download/details.aspx?id=40784) from the _Visual C++ Redistributable Packages for Visual Studio 2013_.

On **Linux**:  
```bash
sudo apt-get install libzbar0
```

On **Mac OS X**: 
```bash
brew install zbar
```

**NOTE:** If you're running **QReader** on a server with very limited resources, you may want to install the **CPU** version of [**PyTorch**](https://pytorch.org/get-started/locally/) before installing **QReader**. To do so, run: ``pip install torch --no-cache-dir`` (thanks to [**@cjwalther**](https://github.com/Eric-Canas/QReader/issues/5) for the advice).

## Usage
<a href="https://colab.research.google.com/github/Eric-Canas/QReader/blob/main/example.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg" style="max-width: 100%;"></a>

**QReader** is a very simple and straight-forward library. For most use cases, you'll only need to call ``detect_and_decode``:

```python
from qreader import QReader
import cv2


# Create a QReader instance
qreader = QReader()

# Get the image that contains the QR code
image = cv2.cvtColor(cv2.imread("path/to/image.png"), cv2.COLOR_BGR2RGB)

# Use the detect_and_decode function to get the decoded QR data
decoded_text = qreader.detect_and_decode(image=image)
```

``detect_and_decode`` will return a `tuple` containing the decoded _string_ of every **QR** found in the image. 

> **NOTE**: Some entries can be `None`. This happens when a **QR** code has been **detected** but **couldn't be decoded**.
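
The `None` filtering can be sketched as follows. This is a minimal, hypothetical example: the `decoded_text` tuple below simulates a `detect_and_decode` output rather than being produced by the library.

```python
# Simulated detect_and_decode output: two decoded QRs, plus one QR that
# was detected but could not be decoded (None).
decoded_text = ("https://example.com", None, "hello world")

# Keep only the successfully decoded strings
readable = tuple(text for text in decoded_text if text is not None)
print(readable)  # ('https://example.com', 'hello world')
```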


## API Reference

### QReader(model_size = 's', min_confidence = 0.5, reencode_to = 'shift-jis')

This is the main class of the library. Please, try to instantiate it just once to avoid loading the model every time you need to detect a **QR** code.
- ``model_size``: **str**. The size of the model to use. It can be **'n'** (nano), **'s'** (small), **'m'** (medium) or **'l'** (large). Larger models are more accurate but slower. Default: 's'.
- ``min_confidence``: **float**. The minimum confidence of the QR detection to be considered valid. Values closer to 0.0 can get more _False Positives_, while values closer to 1.0 can lose difficult QRs. Default (and recommended): 0.5.
- ``reencode_to``: **str** | **None**. The encoding to reencode the `utf-8` decoded QR string. If None, it won't re-encode. If you find some characters being decoded incorrectly, try to set a [Code Page](https://learn.microsoft.com/en-us/windows/win32/intl/code-page-identifiers) that matches your specific charset. Recommendations that have been found useful:
  - 'shift-jis' for Germanic languages
  - 'cp65001' for Asian languages (Thanks to @nguyen-viet-hung for the suggestion)

### QReader.detect_and_decode(image, return_detections = False, is_bgr = False)

This method decodes the **QR** codes in the given image and returns the decoded _strings_ (or _None_ for those that were detected but couldn't be decoded).

- ``image``: **np.ndarray**. The image to be read. It is expected to be _RGB_ or _BGR_ (_uint8_). Format (_HxWx3_).
- ``return_detections``: **bool**. If `True`, it will return the full detection results together with the decoded QRs. If False, it will return only the decoded content of the QR codes.
- ``is_bgr``: **boolean**. If `True`, the received image is expected to be _BGR_ instead of _RGB_.

  
- **Returns**: **tuple[str | None] | tuple[tuple[str | None, dict[str, np.ndarray | float | tuple[float | int, float | int]]], ...]**: A tuple with the decoded content of every detected **QR** code. If ``return_detections`` is `False`, the output will look like: `('Decoded QR 1', 'Decoded QR 2', None, 'Decoded QR 4', ...)`. If ``return_detections`` is `True`, it will look like: `(('Decoded QR 1', {'bbox_xyxy': (x1_1, y1_1, x2_1, y2_1), 'confidence': conf_1, ...}), ('Decoded QR 2', {'bbox_xyxy': (x1_2, y1_2, x2_2, y2_2), 'confidence': conf_2, ...}), ...)`. See [QReader.detect()](#QReader_detect_table) for more information about the detections format.
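
The `return_detections=True` output can be unpacked as shown below. The `results` tuple is simulated to match the documented format; it is not real library output.

```python
# Simulated output of detect_and_decode(image, return_detections=True)
results = (
    ("https://example.com", {"bbox_xyxy": (10.0, 12.0, 120.0, 118.0), "confidence": 0.93}),
    (None, {"bbox_xyxy": (200.0, 40.0, 260.0, 100.0), "confidence": 0.61}),
)

for text, detection in results:
    x1, y1, x2, y2 = detection["bbox_xyxy"]
    # A None entry means the QR was detected but could not be decoded
    label = text if text is not None else "<detected, not decoded>"
    print(f"{label}: bbox=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}), "
          f"conf={detection['confidence']:.2f}")
```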

<a name="QReader_detect"></a>

### QReader.detect(image, is_bgr = False)

This method detects the **QR** codes in the image and returns a _tuple of dictionaries_ with all the detection information.

- ``image``: **np.ndarray**. The image to be read. It is expected to be _RGB_ or _BGR_ (_uint8_). Format (_HxWx3_).
- ``is_bgr``: **boolean**. If `True`, the received image is expected to be _BGR_ instead of _RGB_.
<a name="QReader_detect_table"></a>

- **Returns**: **tuple[dict[str, np.ndarray | float | tuple[float | int, float | int]]]**. A tuple of dictionaries, one per detection. Each dictionary contains the following keys:

| Key              | Value Desc.                                 | Value Type                 | Value Form                  |
|------------------|---------------------------------------------|----------------------------|-----------------------------|
| `confidence`     | Detection confidence                        | `float`                    | `conf.`                     |
| `bbox_xyxy`      | Bounding box                                | np.ndarray (**4**)         | `[x1, y1, x2, y2]`          |
| `cxcy`           | Center of bounding box                      | tuple[`float`, `float`]    | `(x, y)`                    |
| `wh`             | Bounding box width and height               | tuple[`float`, `float`]    | `(w, h)`                    |
| `polygon_xy`     | Precise polygon that segments the _QR_      | np.ndarray (**N**, **2**)  | `[[x1, y1], [x2, y2], ...]` |
| `quad_xy`        | Four corners polygon that segments the _QR_ | np.ndarray (**4**, **2**)  | `[[x1, y1], ..., [x4, y4]]` |
| `padded_quad_xy` |`quad_xy` padded to fully cover `polygon_xy` | np.ndarray (**4**, **2**)  | `[[x1, y1], ..., [x4, y4]]` |
| `image_shape`    | Shape of the input image                    | tuple[`int`, `int`]    | `(h, w)`                    |  

> **NOTE:**
> - All `np.ndarray` values are of type `np.float32` 
> - All keys (except `confidence` and `image_shape`) also have a normalized ('n') version. For example, `bbox_xyxy` represents the bbox of the QR in image coordinates ([0., im_w] x [0., im_h]), while `bbox_xyxyn` contains the same bounding box in normalized coordinates ([0., 1.]).
> - `bbox_xyxy[n]` and `polygon_xy[n]` are clipped to `image_shape`, so you can use them for indexing without further processing.
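
Because `bbox_xyxy` is clipped to the image, it can be used to crop the detected QR region directly. A minimal sketch with a simulated detection dict (only two of the documented keys are shown, and the values are made up):

```python
import numpy as np

# Simulated detection dict (subset of the keys documented above)
detection = {
    "bbox_xyxy": np.array([12.3, 8.7, 95.2, 90.1], dtype=np.float32),
    "image_shape": (128, 128),
}

image = np.zeros((128, 128, 3), dtype=np.uint8)  # placeholder image

# bbox_xyxy is clipped to image_shape, so it can index the image directly
x1, y1, x2, y2 = detection["bbox_xyxy"].round().astype(int)
crop = image[y1:y2, x1:x2]
print(crop.shape)  # (81, 83, 3)
```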


**NOTE**: Is this the only method you will need? Take a look at <a href="https://github.com/Eric-Canas/qrdet" target="_blank">QRDet</a>.

### QReader.decode(image, detection_result)

This method decodes a single **QR** code on the given image, described by a detection result. 

Internally, this method runs the <a href="https://github.com/NaturalHistoryMuseum/pyzbar" target="_blank">pyzbar</a> decoder, using the information in the `detection_result` to apply different image preprocessing techniques that heavily increase the decoding rate.

- ``image``: **np.ndarray**. NumPy Array with the ``image`` that contains the _QR_ to decode. The image is expected to be in ``uint8`` format [_HxWxC_], RGB.
- ``detection_result``: dict[str, np.ndarray|float|tuple[float|int, float|int]]. One of the **detection dicts** returned by the **detect** method. Note that [QReader.detect()](#QReader_detect) returns a `tuple` of these `dict`. This method expects just one of them.


- Returns: **str | None**. The decoded content of the _QR_ code or `None` if it couldn't be read.

## Usage Tests
<div><img alt="test_on_mobile" title="test_on_mobile" src="https://raw.githubusercontent.com/Eric-Canas/QReader/main/documentation/resources/test_mobile.jpeg" width="60%"><img alt="" title="QReader" src="https://raw.githubusercontent.com/Eric-Canas/QReader/main/documentation/resources/test_draw_64x64.jpeg" width="32%" align="right"></div>
<div>Two sample images. On the left, an image taken with a mobile phone. On the right, a 64x64 <b>QR</b> pasted over a drawing.</div>
<br>

The following code will try to decode these images containing <b>QR</b>s with **QReader**, <a href="https://github.com/NaturalHistoryMuseum/pyzbar" target="_blank">pyzbar</a> and <a href="https://opencv.org/" target="_blank">OpenCV</a>.
```python
from qreader import QReader
from cv2 import QRCodeDetector, imread
from pyzbar.pyzbar import decode

# Initialize the three tested readers (QReader, OpenCV and pyzbar)
qreader_reader, cv2_reader, pyzbar_reader = QReader(), QRCodeDetector(), decode

for img_path in ('test_mobile.jpeg', 'test_draw_64x64.jpeg'):
    # Read the image
    img = imread(img_path)

    # Try to decode the QR code with the three readers
    qreader_out = qreader_reader.detect_and_decode(image=img)
    cv2_out = cv2_reader.detectAndDecode(img=img)[0]
    pyzbar_out = pyzbar_reader(image=img)
    # Read the content of the pyzbar output (double decoding will save you from a lot of wrongly decoded characters)
    pyzbar_out = tuple(out.data.data.decode('utf-8').encode('shift-jis').decode('utf-8') for out in pyzbar_out)

    # Print the results
    print(f"Image: {img_path} -> QReader: {qreader_out}. OpenCV: {cv2_out}. pyzbar: {pyzbar_out}.")
```

The output of the previous code is:

```txt
Image: test_mobile.jpeg -> QReader: ('https://github.com/Eric-Canas/QReader'). OpenCV: . pyzbar: ().
Image: test_draw_64x64.jpeg -> QReader: ('https://github.com/Eric-Canas/QReader'). OpenCV: . pyzbar: ().
```

Note that **QReader** internally uses <a href="https://github.com/NaturalHistoryMuseum/pyzbar" target="_blank">pyzbar</a> as **decoder**. The improved **detection-decoding rate** that **QReader** achieves comes from the combination of different image pre-processing techniques and the <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a> based <a href="https://github.com/Eric-Canas/qrdet" target="_blank">**QR** detector</a> that is able to detect **QR** codes in harder conditions than classical _Computer Vision_ methods.

## Benchmark

### Rotation Test
<div>
<img alt="Rotation Test" title="Rotation Test" src="https://raw.githubusercontent.com/Eric-Canas/QReader/main/documentation/benchmark/rotation_benchmark.gif" width="40%" align="left">

&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;  
<div align="center">
  
| Method  | Max Rotation Degrees  |
|---------|-----------------------|
| Pyzbar  | 17º                   |
| OpenCV  | 46º                   |
| QReader | 79º                   |
  
</div>
</div>

            
