face-detector-plus

Name: face-detector-plus
Version: 1.0.1 (PyPI)
Home page: https://github.com/huseyindas/face-detector-plus
Summary: Lightweight, high-level face detector client with multiple detection techniques.
Upload time: 2024-08-06 12:23:06
Author: Huseyin Das
Requires Python: >=3.9
License: Apache
Keywords: machine learning, face detector, face detection, CNN, dlib, ultrafast, HOG, caffemodel
# Face Detector Plus

"A comprehensive Python package that integrates multiple face detection algorithms, offering flexible and efficient solutions for various face recognition applications."

**Key features:**

- Easy to understand and set up
- Easy to manage
- Requires little or no tuning for images of any resolution
- No need to download models; they are fetched and managed automatically
- Uses ultra-light face detection models that are very fast on CPU alone
- Delivers good speed and accuracy on CPU alone
- All detectors share the same parameters and methods, making it easy to switch between them

**Detectors:**

- Hog detector
- CNN detector
- Caffemodel detector
- UltraLight 320 detector
- UltraLight 640 detector


## Installation

Use the package manager [pip](https://pip.pypa.io/en/stable/) to install [face-detector-plus](https://pypi.org/project/face-detector-plus/) with the following command:

```bash
pip install face-detector-plus
```

If you would like the latest code from the GitHub master branch, you can also run:

```bash
pip install git+https://github.com/huseyindas/face-detector-plus
```

Or even select a specific revision _(branch/tag/commit)_:

```bash
pip install git+https://github.com/huseyindas/face-detector-plus@master
```

Similarly, to install a specific [tag](https://github.com/huseyindas/face-detector-plus/tags), append `@v0.x.x`. For example, to install tag v0.1.0 from Git:

```bash
pip install git+https://github.com/huseyindas/face-detector-plus@v0.1.0
```

## Quick usage

As noted, setup and usage are simple:

- Import the detector you want
- Initialize it
- Get predictions

**_Example_**

```python
import cv2

from face_detector_plus import Ultralight320Detector
from face_detector_plus.utils import annotate_image

detector = Ultralight320Detector()

image = cv2.imread("image.png")

faces = detector.detect_faces(image)
image = annotate_image(image, faces, width=3)

cv2.imshow("view", image)
cv2.waitKey(0)  # wait for a key press
```
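All the detectors below share a `confidence` option, which is a simple threshold on detection scores. A minimal sketch of the idea, assuming a hypothetical `(box, score)` payload (the library's actual return format may differ):

```python
# Hypothetical illustration of how a `confidence` threshold filters
# detections. The (box, score) payload shape here is an assumption,
# not the library's documented return format.

def filter_by_confidence(detections, confidence=0.5):
    """Keep only detections whose score meets the threshold."""
    return [(box, score) for box, score in detections if score >= confidence]

detections = [((10, 20, 50, 50), 0.91), ((200, 40, 48, 48), 0.32)]
kept = filter_by_confidence(detections, confidence=0.5)
print(kept)  # only the 0.91 detection survives
```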

### CaffeModel Detector

Caffemodel is a very lightweight model, created with Caffe (Convolutional Architecture for Fast Feature Embedding), that uses few resources to perform detections.

```python
import cv2
from face_detector_plus import CaffemodelDetector
from face_detector_plus.utils import annotate_image

vid = cv2.VideoCapture(0)
detector = CaffemodelDetector()

while True:
    ret, frame = vid.read()  # ret is False when no frame is available
    if not ret:
        break

    bbox = detector.detect_faces(frame)
    frame = annotate_image(frame, bbox)

    cv2.imshow("Caffe Model Detection", frame)

    cv2.waitKey(1)
```

**Configurable options for the CaffeModel detector:**

Syntax: `CaffemodelDetector(**options)`

| Options | Description |
| --- | --- |
| `convert_color` | OpenCV COLOR code used to convert images. Defaults to `cv2.COLOR_BGR2RGB`. |
| `confidence` | Minimum confidence score required for a detection to be reported. Defaults to 0.5. |
| `scale` | Scales the image for faster output (no need to set this manually; it is determined automatically if no value is given). |
| `mean` | Scalar mean values subtracted from the channels, in (mean-R, mean-G, mean-B) order if the image has BGR ordering and `swapRB` is true. Defaults to (104.0, 177.0, 123.0). |
| `scalefactor` | Multiplier for image values. Defaults to 1.0. |
| `crop` | Whether the image is cropped after resizing. Defaults to False. |
| `swapRB` | Whether to swap the first and last channels of a 3-channel image. Defaults to False. |
| `transpose` | Whether to transpose the image. Defaults to False. |
| `resize` | Spatial size of the output image. Defaults to (300, 300). |
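The `mean`, `scalefactor`, and `swapRB` options mirror OpenCV's `blobFromImage`-style preprocessing. A self-contained sketch of that arithmetic (illustrative only, not the package's internal code):

```python
import numpy as np

# Sketch of the preprocessing the `mean`, `scalefactor`, and `swapRB`
# options describe, mirroring cv2.dnn.blobFromImage semantics:
# optionally swap R/B, subtract the per-channel mean, then scale.

def preprocess(image, mean=(104.0, 177.0, 123.0), scalefactor=1.0, swapRB=False):
    blob = image.astype(np.float32)
    if swapRB:
        blob = blob[..., ::-1]                    # swap first and last channels
    blob -= np.array(mean, dtype=np.float32)      # subtract per-channel mean
    blob *= scalefactor                           # scale pixel values
    return blob

frame = np.full((300, 300, 3), 128, dtype=np.uint8)  # dummy 300x300 frame
blob = preprocess(frame)
print(blob.shape, blob[0, 0])  # per-channel: 128-104, 128-177, 128-123
```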

**Useful methods for this detector:**

- **`detect_faces(image)`**

  This method will return coordinates for all the detected faces of the given image

  | Options | Description                 |
  | ------- | --------------------------- |
  | `image` | image in numpy array format |

- **`detect_faces_keypoints(image, get_all=False)`**

  This method returns coordinates for all detected faces in the given image, along with their facial keypoints. Keypoints are detected using dlib's `shape_predictor_68_face_landmarks_GTX.dat` model.

  _Note: Generating keypoints may take more time than the `detect_faces` method_

  | Options   | Description                                                               |
  | --------- | ------------------------------------------------------------------------- |
  | `image`   | Image in numpy array format                                               |
  | `get_all` | Whether to return all facial keypoints or only the main ones (chin, nose, eyes, mouth) |
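For context, the standard 68-point dlib landmark convention groups indices by facial region, which is presumably what the main (chin, nose, eyes, mouth) subset refers to. A sketch of selecting those regions; the exact subset the package returns is an assumption:

```python
# Index ranges follow the standard 68-point dlib landmark layout:
# jawline 0-16, eyebrows 17-26, nose 27-35, eyes 36-47, mouth 48-67.

LANDMARK_REGIONS = {
    "chin":  range(0, 17),    # jawline
    "nose":  range(27, 36),
    "eyes":  range(36, 48),   # both eyes
    "mouth": range(48, 68),
}

def main_keypoints(all_68_points):
    """Select the main-region points from a full 68-point list."""
    return {name: [all_68_points[i] for i in idx]
            for name, idx in LANDMARK_REGIONS.items()}

points = [(i, i) for i in range(68)]          # dummy landmark coordinates
main = main_keypoints(points)
print({k: len(v) for k, v in main.items()})   # {'chin': 17, 'nose': 9, 'eyes': 12, 'mouth': 20}
```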

### CNN Detector

A CNN (Convolutional Neural Network) is not a lightweight model, but it is good at detecting faces from all angles. This detector is a high-level wrapper around `dlib::cnn_face_detection_model_v1`, fine-tuned to improve overall performance and accuracy.

```python
import cv2
from face_detector_plus import CNNDetector
from face_detector_plus.utils import annotate_image

vid = cv2.VideoCapture(0)
detector = CNNDetector()

while True:
    ret, frame = vid.read()  # ret is False when no frame is available
    if not ret:
        break

    bbox = detector.detect_faces(frame)
    frame = annotate_image(frame, bbox)

    cv2.imshow("CNN Detection", frame)

    cv2.waitKey(1)
```

**Configurable options for `CNNDetector`:**

Syntax: `CNNDetector(**options)`

| Options | Description |
| --- | --- |
| `convert_color` | OpenCV COLOR code used to convert images. Defaults to `cv2.COLOR_BGR2RGB`. |
| `number_of_times_to_upsample` | Upsamples the image this many times before running the base detector. Defaults to 1. |
| `confidence` | Minimum confidence score required for a detection to be reported. Defaults to 0.5. |
| `scale` | Scales the image for faster output (no need to set this manually; it is determined automatically if no value is given). |

- **`detect_faces(image)`**

  This method will return coordinates for all the detected faces of the given image

  | Options | Description                 |
  | ------- | --------------------------- |
  | `image` | image in numpy array format |

- **`detect_faces_keypoints(image, get_all=False)`**

  This method returns coordinates for all detected faces in the given image, along with their facial keypoints. Keypoints are detected using dlib's `shape_predictor_68_face_landmarks_GTX.dat` model.

  _Note: Generating keypoints may take more time than the `detect_faces` method_

  | Options   | Description                                                               |
  | --------- | ------------------------------------------------------------------------- |
  | `image`   | Image in numpy array format                                               |
  | `get_all` | Whether to return all facial keypoints or only the main ones (chin, nose, eyes, mouth) |

### Hog Detector

This detector uses a Histogram of Oriented Gradients (HOG) feature descriptor with a linear SVM classifier for face detection, combined with an image pyramid and a sliding-window detection scheme. `HogDetector` is a high-level client over dlib's HOG face detector, fine-tuned for both speed and accuracy.

If you want faster detection with `HogDetector` and don't care about the number of detections, set `number_of_times_to_upsample=1` in the options; it will detect fewer faces in less time, which is mainly useful for real-time single-face detection.
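The reason upsampling matters: dlib's HOG detector scans with a fixed window (roughly 80x80 pixels, an assumption about dlib's default here), so each upsampling step roughly halves the smallest detectable face while increasing the number of pixels scanned. A back-of-the-envelope sketch:

```python
# Rough trade-off behind `number_of_times_to_upsample`: each doubling
# of the image halves the smallest face the fixed HOG window can see.
# The ~80px window size is an assumption about dlib's default detector.

def min_detectable_face(upsample_times, window=80):
    """Approximate smallest face size (px, in the original image)."""
    return window / (2 ** upsample_times)

for n in (0, 1, 2):
    print(n, min_detectable_face(n))  # 80.0, 40.0, 20.0
```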

```python
import cv2
from face_detector_plus import HogDetector
from face_detector_plus.utils import annotate_image

vid = cv2.VideoCapture(0)
detector = HogDetector()

while True:
    ret, frame = vid.read()  # ret is False when no frame is available
    if not ret:
        break

    bbox = detector.detect_faces(frame)
    frame = annotate_image(frame, bbox)

    cv2.imshow("Hog Detection", frame)

    cv2.waitKey(1)
```

**Configurable options for `HogDetector`:**

Syntax: `HogDetector(**options)`

| Options | Description |
| --- | --- |
| `convert_color` | OpenCV COLOR code used to convert images. Defaults to `cv2.COLOR_BGR2RGB`. |
| `number_of_times_to_upsample` | Upsamples the image this many times before running the base detector. Defaults to 2. |
| `confidence` | Minimum confidence score required for a detection to be reported. Defaults to 0.5. |
| `scale` | Scales the image for faster output (no need to set this manually; it is determined automatically if no value is given). |

- **`detect_faces(image)`**

  This method will return coordinates for all the detected faces of the given image

  | Options | Description                 |
  | ------- | --------------------------- |
  | `image` | image in numpy array format |

- **`detect_faces_keypoints(image, get_all=False)`**

  This method returns coordinates for all detected faces in the given image, along with their facial keypoints. Keypoints are detected using dlib's `shape_predictor_68_face_landmarks_GTX.dat` model.

  _Note: Generating keypoints may take more time than the `detect_faces` method_

  | Options   | Description                                                               |
  | --------- | ------------------------------------------------------------------------- |
  | `image`   | Image in numpy array format                                               |
  | `get_all` | Whether to return all facial keypoints or only the main ones (chin, nose, eyes, mouth) |
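For intuition about the `scale` option shared by these detectors: detection runs on a shrunken frame, and the resulting boxes must be mapped back to the original resolution. A sketch of that mapping (illustrative only; the library handles this internally):

```python
# Boxes found on a frame shrunk by `scale` must be divided by `scale`
# to land on the original image. Illustrative helper, not library code.

def rescale_box(box, scale):
    """Map an (x, y, w, h) box from a scaled-down frame to full size."""
    x, y, w, h = box
    inv = 1.0 / scale
    return (round(x * inv), round(y * inv), round(w * inv), round(h * inv))

# A box found on a frame shrunk to 50% maps back doubled:
print(rescale_box((60, 40, 30, 30), scale=0.5))  # (120, 80, 60, 60)
```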

### Ultra Light Detection (320px)

The Ultra Light detection model is exactly what the name says: a very lightweight model with good accuracy and impressive speed. It is pre-trained on 320x240 images and only accepts 320x240 input, but `Ultralight320Detector` handles the resizing for you.

```python
import cv2
from face_detector_plus import Ultralight320Detector
from face_detector_plus.utils import annotate_image

vid = cv2.VideoCapture(0)
detector = Ultralight320Detector()

while True:
    ret, frame = vid.read()  # ret is False when no frame is available
    if not ret:
        break

    bbox = detector.detect_faces(frame)
    frame = annotate_image(frame, bbox)

    cv2.imshow("Ultra 320 Detection", frame)

    cv2.waitKey(1)
```

**Configurable options for `Ultralight320Detector`:**

Syntax: `Ultralight320Detector(**options)`

| Options | Description |
| --- | --- |
| `convert_color` | OpenCV COLOR code used to convert images. Defaults to `cv2.COLOR_BGR2RGB`. |
| `mean` | Mean values subtracted from the image channels. Defaults to [127, 127, 127]. |
| `confidence` | Minimum confidence score required for a detection to be reported. Defaults to 0.5. |
| `scale` | Scales the image for faster output (no need to set this manually; it is determined automatically if no value is given). |
| `cache` | Reuse the same model across all created sessions. Defaults to True. |

- **`detect_faces(image)`**

  This method will return coordinates for all the detected faces of the given image

  | Options | Description                 |
  | ------- | --------------------------- |
  | `image` | image in numpy array format |

- **`detect_faces_keypoints(image, get_all=False)`**

  This method returns coordinates for all detected faces in the given image, along with their facial keypoints. Keypoints are detected using dlib's `shape_predictor_68_face_landmarks_GTX.dat` model.

  _Note: Generating keypoints may take more time than the `detect_faces` method_

  | Options   | Description                                                               |
  | --------- | ------------------------------------------------------------------------- |
  | `image`   | Image in numpy array format                                               |
  | `get_all` | Whether to return all facial keypoints or only the main ones (chin, nose, eyes, mouth) |

### Ultra Light Detection (640px)

The Ultra Light detection model is exactly what the name says: a very lightweight model with good accuracy and impressive speed. It is pre-trained on 640x480 images and only accepts 640x480 input, but `Ultralight640Detector` handles the resizing for you.

This detector is more accurate than the 320-size ultra-light model (`Ultralight320Detector`) but may take a little more time.

```python
import cv2
from face_detector_plus import Ultralight640Detector
from face_detector_plus.utils import annotate_image

vid = cv2.VideoCapture(0)
detector = Ultralight640Detector()

while True:
    ret, frame = vid.read()  # ret is False when no frame is available
    if not ret:
        break

    bbox = detector.detect_faces(frame)
    frame = annotate_image(frame, bbox)

    cv2.imshow("Ultra 640 Detection", frame)

    cv2.waitKey(1)
```

**Configurable options for `Ultralight640Detector`:**

Syntax: `Ultralight640Detector(**options)`

| Options | Description |
| --- | --- |
| `convert_color` | OpenCV COLOR code used to convert images. Defaults to `cv2.COLOR_BGR2RGB`. |
| `mean` | Mean values subtracted from the image channels. Defaults to [127, 127, 127]. |
| `confidence` | Minimum confidence score required for a detection to be reported. Defaults to 0.5. |
| `scale` | Scales the image for faster output (no need to set this manually; it is determined automatically if no value is given). |
| `cache` | Reuse the same model across all created sessions. Defaults to True. |

- **`detect_faces(image)`**

  This method will return coordinates for all the detected faces of the given image

  | Options | Description                 |
  | ------- | --------------------------- |
  | `image` | image in numpy array format |

- **`detect_faces_keypoints(image, get_all=False)`**

  This method returns coordinates for all detected faces in the given image, along with their facial keypoints. Keypoints are detected using dlib's `shape_predictor_68_face_landmarks_GTX.dat` model.

  _Note: Generating keypoints may take more time than the `detect_faces` method_

  | Options   | Description                                                               |
  | --------- | ------------------------------------------------------------------------- |
  | `image`   | Image in numpy array format                                               |
  | `get_all` | Whether to return all facial keypoints or only the main ones (chin, nose, eyes, mouth) |

### Annotate Image Function

Annotates the given image with the payload returned by any of the detectors, returning an annotated image with boxes and keypoints drawn on the faces.

**Configurable options for the `annotate_image` function:**

Syntax: `annotate_image(**options)`

| Options | Description |
| --- | --- |
| `image` | Image to annotate, in numpy array format |
| `faces` | Payload returned by `detector.detect_faces` or `detector.detect_faces_keypoints` |
| `box_rgb` | RGB color of the bounding-box rectangle. Defaults to (100, 0, 255). |
| `keypoints_rgb` | RGB color of the keypoints. Defaults to (150, 0, 255). |
| `width` | Width of the annotations. Defaults to 2 |
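A dependency-free sketch of the box-drawing part of what `annotate_image` does: paint a `width`-pixel border in `box_rgb` around each face. The real function also draws keypoints; the helper name here is hypothetical.

```python
import numpy as np

# Illustrative stand-in for the box-drawing portion of annotate_image:
# paint a `width`-pixel border around an (x, y, w, h) box via slicing.

def draw_box(image, box, rgb=(100, 0, 255), width=2):
    x, y, w, h = box
    image[y:y + width, x:x + w] = rgb           # top edge
    image[y + h - width:y + h, x:x + w] = rgb   # bottom edge
    image[y:y + h, x:x + width] = rgb           # left edge
    image[y:y + h, x + w - width:x + w] = rgb   # right edge
    return image

canvas = np.zeros((100, 100, 3), dtype=np.uint8)
canvas = draw_box(canvas, (10, 10, 40, 40))
print(canvas[10, 10], canvas[50, 50])  # border colored, outside untouched
```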

            

Defaults to cv2.COLOR_BGR2RGB                                                  |\n| `mean`          | Metric used to measure the performance of models doing detection tasks. Defaults to [127, 127, 127].                           |\n| `confidence`    | Confidence score is used to refrain from making predictions when it is not above a sufficient threshold. Defaults to 0.5       |\n| `scale`         | Scales the image for faster output (No need to set this manually, scale will be determined automatically if no value is given) |\n| `cache`         | It uses same model for all the created sessions. Default is True                                                               |\n\n- **`detect_faces(image)`**\n\n  This method will return coordinates for all the detected faces of the given image\n\n  | Options | Description                 |\n  | ------- | --------------------------- |\n  | `image` | image in numpy array format |\n\n- **`detect_faces_keypoints(image, get_all=false)`**\n\n  This method will return coordinates for all the detected faces along with their facial keypoints of the given image. 
Keypoints are detected using dlib's new `shape_predictor_68_face_landmarks_GTX.dat` model.\n\n  _Note: Generating keypoints might take more time if compared with `detect_faces` method_\n\n  | Options   | Description                                                               |\n  | --------- | ------------------------------------------------------------------------- |\n  | `image`   | Image in numpy array format                                               |\n  | `get_all` | Weather to get all facial keypoints or the main (chin, nose, eyes, mouth) |\n\n### Annotate Image Function\n\nAnnotates the given image with the payload returned by any of the detectors and returns a well annotated image with boxes and keypoints on the faces.\n\n**Configurable options for annotate_image function.**\n\nSyntax: `annotate_image(**options)`\n\n| Options         | Description                                                                  |\n| --------------- | ---------------------------------------------------------------------------- |\n| `image`         | Give image for annotation in numpy.Array format                              |\n| `faces`         | Payload returned by detector.detect_faces or detector.detect_faces_keypoints |\n| `box_rgb`       | RGB color for rectangle to be of. Defaults to (100, 0, 255).                 |\n| `keypoints_rgb` | RGB color for keypoints to be of. Defaults to (150, 0, 255).                 |\n| `width`         | Width of annotations. Defaults to 2                                          |\n",
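All of the detectors above expose a `scale` option that shrinks the frame before inference to speed things up. As a rough illustration of the idea (not the library's actual implementation), the hypothetical helper below shows how `(x1, y1, x2, y2)` boxes detected on a downscaled frame map back to the original frame's coordinates:

```python
def rescale_boxes(boxes, scale):
    """Map (x1, y1, x2, y2) boxes detected on a frame that was
    resized by `scale` back to the original frame's coordinates."""
    return [tuple(int(round(coord / scale)) for coord in box) for box in boxes]

# A box found on a frame scaled down to 50% maps back to 2x coordinates:
print(rescale_boxes([(50, 40, 110, 120)], scale=0.5))  # [(100, 80, 220, 240)]
```

Since the library determines `scale` automatically when no value is given, this mapping happens internally; the sketch is only meant to clarify why detections come back in the original image's coordinate space.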
    "bugtrack_url": null,
    "license": "Apache",
    "summary": "Light weight face detector high-level client with multiple detection techniques.",
    "version": "1.0.1",
    "project_urls": {
        "Bug Reports": "https://github.com/huseyindas/face-detector-plus/issues",
        "Documentation": "https://github.com/huseyindas/face-detector-plus#documentation",
        "Homepage": "https://github.com/huseyindas/face-detector-plus",
        "Source": "https://github.com/huseyindas/face-detector-plus"
    },
    "split_keywords": [
        "machine learning",
        " face",
        " detector",
        " face detection",
        " cnn",
        " dlib",
        " ultrafast",
        " hog",
        " caffemodel"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "7f9cafecbf42a8dfaf6ff5015ebfb6854c924fab73fda85abeba86cc350fa6a1",
                "md5": "8996c655e874224d9d61a610c3521db1",
                "sha256": "057579745f50ff2bd0869646e77b7e1ac8937a14aa5fd3e5cc4c363ff16d8d39"
            },
            "downloads": -1,
            "filename": "face_detector_plus-1.0.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "8996c655e874224d9d61a610c3521db1",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 17609,
            "upload_time": "2024-08-06T12:23:05",
            "upload_time_iso_8601": "2024-08-06T12:23:05.219374Z",
            "url": "https://files.pythonhosted.org/packages/7f/9c/afecbf42a8dfaf6ff5015ebfb6854c924fab73fda85abeba86cc350fa6a1/face_detector_plus-1.0.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "da5858133662605736006619198371659e279c11efc54ced7f28c09eb94e12c5",
                "md5": "e080ceef0ac9aeeea58ea95f1fa9c791",
                "sha256": "03639d13dd5ecc7026c775da4fb58714185fccd6fc7bfcd0980ae33d47814d83"
            },
            "downloads": -1,
            "filename": "face_detector_plus-1.0.1.tar.gz",
            "has_sig": false,
            "md5_digest": "e080ceef0ac9aeeea58ea95f1fa9c791",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 16416,
            "upload_time": "2024-08-06T12:23:06",
            "upload_time_iso_8601": "2024-08-06T12:23:06.349194Z",
            "url": "https://files.pythonhosted.org/packages/da/58/58133662605736006619198371659e279c11efc54ced7f28c09eb94e12c5/face_detector_plus-1.0.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-08-06 12:23:06",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "huseyindas",
    "github_project": "face-detector-plus",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "face-detector-plus"
}
        