# UniFace: All-in-One Face Analysis Library
[License: MIT](https://opensource.org/licenses/MIT)
[PyPI](https://pypi.org/project/uniface/)
[Build Status](https://github.com/yakhyo/uniface/actions)
[GitHub](https://github.com/yakhyo/uniface)
[Downloads](https://pepy.tech/project/uniface)
[Code Style: PEP 8](https://www.python.org/dev/peps/pep-0008/)
[Releases](https://github.com/yakhyo/uniface/releases)
**uniface** is a lightweight face detection library designed for high-performance face localization, landmark detection, and face alignment. The library supports ONNX models and provides utilities for drawing bounding boxes and plotting landmarks. To train the RetinaFace model yourself, see https://github.com/yakhyo/retinaface-pytorch.
---
## Features
| Date | Feature Description |
| ---------- | --------------------------------------------------------------------------------------------------------------- |
| Planned | 🎭 **Age and Gender Detection**: Planned feature for predicting age and gender from facial images. |
| Planned | 🧩 **Face Recognition**: Upcoming capability to identify and verify faces. |
| 2024-11-21 | 🔄 **Face Alignment**: Added precise face alignment for better downstream tasks. |
| 2024-11-20 | ⚡ **High-Speed Face Detection**: ONNX model integration for fast, efficient face detection. |
| 2024-11-20 | 🎯 **Facial Landmark Localization**: Accurate detection of key facial features like eyes, nose, and mouth. |
| 2024-11-20 | 🛠️ **API for Inference and Visualization**: Simplified API for seamless inference and visual results generation. |
---
## Installation
The easiest way to install **UniFace** is via [PyPI](https://pypi.org/project/uniface/). This will automatically install the library along with its prerequisites.
```bash
pip install uniface
```
To work with the latest version of **UniFace**, which may not yet be released on PyPI, you can install it directly from the repository:
```bash
git clone https://github.com/yakhyo/uniface.git
cd uniface
pip install .
```
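After either method, a quick import confirms the installation. This snippet assumes the package exposes a `__version__` attribute, which is common for PyPI packages but not guaranteed:
```python
# Sanity check: import the library and print its version.
# Assumption: uniface exposes __version__.
import uniface

print(uniface.__version__)
```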
---
## Quick Start
To get started with face detection using **UniFace**, check out the [example notebook](examples/face_detection.ipynb).
It demonstrates how to initialize the model, run inference, and visualize the results.
---
## Examples
<div align="center">
<img src="assets/alignment_result.png">
</div>
Explore the following example notebooks to learn how to use **UniFace** effectively:
- [Face Detection](examples/face_detection.ipynb): Demonstrates face detection, drawing bounding boxes, and plotting landmarks on an image.
- [Face Alignment](examples/face_alignment.ipynb): Shows how to align faces using detected landmarks.
- [Age and Gender Detection](examples/age_gender.ipynb): Example for detecting age and gender from faces. (under development)
### 🚀 Initialize the RetinaFace Model
To use the RetinaFace model for face detection, initialize it with either custom or default configuration parameters.
#### Full Initialization (with custom parameters)
```python
from uniface import RetinaFace
from uniface.constants import RetinaFaceWeights
# Initialize RetinaFace with custom configuration
uniface_inference = RetinaFace(
    model_name=RetinaFaceWeights.MNET_V2,  # Model name from enum
    conf_thresh=0.5,                       # Confidence threshold for detections
    pre_nms_topk=5000,                     # Number of top detections before NMS
    nms_thresh=0.4,                        # IoU threshold for NMS
    post_nms_topk=750,                     # Number of top detections after NMS
    dynamic_size=False,                    # Whether to allow arbitrary input sizes
    input_size=(640, 640)                  # Input image size
)
```
#### Minimal Initialization (uses default parameters)
```python
from uniface import RetinaFace
# Initialize with default settings
uniface_inference = RetinaFace()
```
**Default Parameters:**
```python
model_name = RetinaFaceWeights.MNET_V2
conf_thresh = 0.5
pre_nms_topk = 5000
nms_thresh = 0.4
post_nms_topk = 750
dynamic_size = False
input_size = (640, 640)
```
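Model names come from the `RetinaFaceWeights` enum. Assuming it is a standard Python `Enum` (as the import path suggests), the available weight identifiers can be listed directly:
```python
from uniface.constants import RetinaFaceWeights

# List the available pretrained-weight identifiers.
# Assumption: RetinaFaceWeights is a standard Python Enum.
for weights in RetinaFaceWeights:
    print(weights.name)
```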
### Run Inference
Inference on image:
```python
import cv2
from uniface.visualization import draw_detections
# Load an image
image_path = "assets/test.jpg"
original_image = cv2.imread(image_path)
# Perform inference
boxes, landmarks = uniface_inference.detect(original_image)
# boxes: [x_min, y_min, x_max, y_max, confidence]
# Visualize results
draw_detections(original_image, (boxes, landmarks), vis_threshold=0.6)
# Save the output image
output_path = "output.jpg"
cv2.imwrite(output_path, original_image)
print(f"Saved output image to {output_path}")
```
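The documented box format `[x_min, y_min, x_max, y_max, confidence]` makes it straightforward to crop detected faces. A minimal sketch, assuming that format:
```python
import cv2
from uniface import RetinaFace

detector = RetinaFace()
image = cv2.imread("assets/test.jpg")
boxes, _ = detector.detect(image)

h, w = image.shape[:2]
for i, (x_min, y_min, x_max, y_max, conf) in enumerate(boxes):
    if conf < 0.6:
        continue
    # Clamp coordinates to the image bounds before slicing
    x0, y0 = max(int(x_min), 0), max(int(y_min), 0)
    x1, y1 = min(int(x_max), w), min(int(y_max), h)
    cv2.imwrite(f"face_{i}.jpg", image[y0:y1, x0:x1])
```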
Inference on video:
```python
import cv2
from uniface.visualization import draw_detections
# Initialize the webcam
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error: Unable to access the webcam.")
    exit()

while True:
    # Capture a frame from the webcam
    ret, frame = cap.read()
    if not ret:
        print("Error: Failed to read frame.")
        break

    # Perform inference
    boxes, landmarks = uniface_inference.detect(frame)
    # 'boxes' contains bounding box coordinates and confidence scores:
    # Format: [x_min, y_min, x_max, y_max, confidence]

    # Draw detections on the frame
    draw_detections(frame, (boxes, landmarks), vis_threshold=0.6)

    # Display the output
    cv2.imshow("Webcam Inference", frame)

    # Exit if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the webcam and close all OpenCV windows
cap.release()
cv2.destroyAllWindows()
```
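To save the annotated stream instead of displaying it, OpenCV's `VideoWriter` can be wired into the same loop. A sketch under the assumption that a fixed frame budget is acceptable (codec availability varies by platform; `mp4v` is a common choice):
```python
import cv2
from uniface import RetinaFace
from uniface.visualization import draw_detections

detector = RetinaFace()
cap = cv2.VideoCapture(0)

# Fall back to 30 FPS when the camera does not report a frame rate
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(
    "annotated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
)

for _ in range(300):  # record roughly 10 seconds at 30 FPS
    ret, frame = cap.read()
    if not ret:
        break
    boxes, landmarks = detector.detect(frame)
    draw_detections(frame, (boxes, landmarks), vis_threshold=0.6)
    writer.write(frame)

cap.release()
writer.release()
```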
---
### Evaluation Results of Available Models on WiderFace
| RetinaFace Models | Easy | Medium | Hard |
| ------------------ | ---------- | ---------- | ---------- |
| retinaface_mnet025 | 88.48% | 87.02% | 80.61% |
| retinaface_mnet050 | 89.42% | 87.97% | 82.40% |
| retinaface_mnet_v1 | 90.59% | 89.14% | 84.13% |
| retinaface_mnet_v2 | 91.70% | 91.03% | 86.60% |
| retinaface_r18 | 92.50% | 91.02% | 86.63% |
| retinaface_r34 | **94.16%** | **93.12%** | **88.90%** |
<div align="center">
<img src="assets/test_result.png">
</div>
## API Reference
### `RetinaFace` Class
#### Initialization
```python
from typing import Tuple
from uniface import RetinaFace
from uniface.constants import RetinaFaceWeights

RetinaFace(
    model_name: RetinaFaceWeights,
    conf_thresh: float = 0.5,
    pre_nms_topk: int = 5000,
    nms_thresh: float = 0.4,
    post_nms_topk: int = 750,
    dynamic_size: bool = False,
    input_size: Tuple[int, int] = (640, 640)
)
```
**Parameters**:
- `model_name` _(RetinaFaceWeights)_: Enum value for model to use. Supported values:
- `MNET_025`, `MNET_050`, `MNET_V1`, `MNET_V2`, `RESNET18`, `RESNET34`
- `conf_thresh` _(float, default=0.5)_: Minimum confidence score for detections.
- `pre_nms_topk` _(int, default=5000)_: Max detections to keep before NMS.
- `nms_thresh` _(float, default=0.4)_: IoU threshold for Non-Maximum Suppression.
- `post_nms_topk` _(int, default=750)_: Max detections to keep after NMS.
- `dynamic_size` _(bool, default=False)_: Use a dynamic input size instead of the static `input_size` (see the sketch after this list).
- `input_size` _(Tuple[int, int], default=(640, 640))_: Static input size for the model (width, height).
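A minimal sketch of the `dynamic_size` option. The assumption here is that enabling it makes the detector accept frames at their native resolution, with `input_size` left unused:
```python
from uniface import RetinaFace
from uniface.constants import RetinaFaceWeights

# Assumption: with dynamic_size=True the model accepts arbitrary
# input resolutions and input_size is ignored.
detector = RetinaFace(
    model_name=RetinaFaceWeights.MNET_V2,
    dynamic_size=True,
)
```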
---
### `detect` Method
```python
detect(
    image: np.ndarray,
    max_num: int = 0,
    metric: str = "default",
    center_weight: float = 2.0
) -> Tuple[np.ndarray, np.ndarray]
```
**Description**:
Detects faces in the given image and returns bounding boxes and landmarks.
**Parameters**:
- `image` _(np.ndarray)_: Input image in BGR format.
- `max_num` _(int, default=0)_: Maximum number of faces to return. `0` means return all.
- `metric` _(str, default="default")_: Metric for prioritizing detections:
- `"default"`: Prioritize detections closer to the image center.
- `"max"`: Prioritize larger bounding box areas.
- `center_weight` _(float, default=2.0)_: Weight for prioritizing center-aligned faces.
**Returns**:
- `bounding_boxes` _(np.ndarray)_: Array of detections as `[x_min, y_min, x_max, y_max, confidence]`.
- `landmarks` _(np.ndarray)_: Array of landmarks as `[(x1, y1), ..., (x5, y5)]`.
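For example, `max_num` and `metric` can be combined to keep only the largest detected face. The printed shapes below are assumptions inferred from the documented return formats (five box values per face, five 2-D landmark points per face):
```python
import cv2
from uniface import RetinaFace

detector = RetinaFace()
image = cv2.imread("assets/test.jpg")

# Keep at most one face, preferring the largest bounding box
boxes, landmarks = detector.detect(image, max_num=1, metric="max")
print(boxes.shape)      # expected (1, 5): [x_min, y_min, x_max, y_max, confidence]
print(landmarks.shape)  # expected (1, 5, 2): five (x, y) landmark points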
---
### Visualization Utilities
#### `draw_detections`
```python
draw_detections(
    image: np.ndarray,
    detections: Tuple[np.ndarray, np.ndarray],
    vis_threshold: float = 0.6
) -> None
```
**Description**:
Draws bounding boxes and landmarks on the given image.
**Parameters**:
- `image` _(np.ndarray)_: The input image in BGR format.
- `detections` _(Tuple[np.ndarray, np.ndarray])_: A tuple of bounding boxes and landmarks.
- `vis_threshold` _(float, default=0.6)_: Minimum confidence score for visualization.
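Since the signature returns `None` and the Quick Start example saves the same array it passed in, the function annotates the image in place. Continuing from the image-inference example above, work on a copy if the original is needed later:
```python
# draw_detections annotates the array in place, so keep a clean copy
annotated = original_image.copy()
draw_detections(annotated, (boxes, landmarks), vis_threshold=0.6)
```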
---
## Contributing
We welcome contributions to enhance the library! Feel free to:
- Submit bug reports or feature requests.
- Fork the repository and create a pull request.
---
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
---
## Acknowledgments
- Based on the RetinaFace model for face detection ([https://github.com/yakhyo/retinaface-pytorch](https://github.com/yakhyo/retinaface-pytorch)).
- Inspired by InsightFace and other face detection projects.
---