# face-parser
## 1. Face Segmentation
### 1.1. BiSeNet
![](images/obama_bisenet.jpg)
```python
import matplotlib.pyplot as plt

from visage.bisenet import BiSeNetFaceParser
from visage.visualize import apply_colormap

img = load_img()  # torch.Tensor [3, H, W] in range [-1, 1]

face_parser = BiSeNetFaceParser()
segmentation_mask = face_parser.parse(img)

# Plotting: colorize each class with a distinct color for easier viewing
segmentation_mask_colored = apply_colormap(segmentation_mask)
plt.imshow(segmentation_mask_colored)
plt.show()
```
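`load_img()` is left as a placeholder in these snippets. A minimal sketch of what it could look like for this section (hypothetical helper, not part of `visage`; the path `images/obama.jpg` is only an example):

```python
import numpy as np
import torch
from PIL import Image

def load_img(path="images/obama.jpg"):
    """Hypothetical helper (not part of visage): load an RGB image as a
    torch.Tensor [3, H, W] with values in [-1, 1], as expected above."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)  # [H, W, 3], values in [0, 255]
    rgb = rgb / 127.5 - 1.0                                              # rescale to [-1, 1]
    return torch.from_numpy(rgb).permute(2, 0, 1)                        # [3, H, W]
```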
## 2. Face Bounding Boxes
### 2.1. FaceBoxesV2
![](images/obama_face_boxes_v2.jpg)
```python
import cv2
import matplotlib.pyplot as plt

from visage.bounding_boxes.face_boxes_v2 import FaceBoxesV2

img = load_img()  # np.ndarray [H, W, 3] in range [0, 255]

detector = FaceBoxesV2()
detected_bboxes = detector.detect(img)

# Plotting: draw the first detected box onto the image
cv2.rectangle(img, detected_bboxes[0].get_point1(), detected_bboxes[0].get_point2(), (255, 0, 0), 10)
plt.imshow(img)
plt.show()
```
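`detect()` returns a list of boxes, so if more than one face may be present you can draw them all. A small extension of the snippet above, reusing the same `get_point1()`/`get_point2()` accessors:

```python
# Draw every detected face, not just the first one
# (same box accessors as in the snippet above).
for bbox in detected_bboxes:
    cv2.rectangle(img, bbox.get_point1(), bbox.get_point2(), (255, 0, 0), 10)
plt.imshow(img)
plt.show()
```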
## 3. Facial Landmarks
### 3.1. PIPNet
![](images/obama_pipnet.jpg)
```python
import cv2
import matplotlib.pyplot as plt

from visage.landmark_detection.pipnet import PIPNet

img = load_img()  # np.ndarray [H, W, 3] in range [0, 255]
detected_bboxes = ...  # <- from step 2.

pip_net = PIPNet()
landmarks = pip_net.forward(img, detected_bboxes[0])

# Plotting: draw one dot per predicted landmark
for x, y in landmarks:
    cv2.circle(img, (int(x), int(y)), 5, (255, 0, 0), -1)
plt.imshow(img)
plt.show()
```
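Since `PIPNet` takes a bounding box from step 2, detection and landmark regression chain together naturally. A minimal end-to-end sketch using only the calls shown above (it assumes `detect()` returns an empty list when no face is found):

```python
from visage.bounding_boxes.face_boxes_v2 import FaceBoxesV2
from visage.landmark_detection.pipnet import PIPNet

def detect_landmarks(img):
    """Run face detection, then landmark regression on the first detected face.
    Returns None when no face is found. `img` is assumed to be a
    np.ndarray [H, W, 3] in range [0, 255], as in the snippets above."""
    detected_bboxes = FaceBoxesV2().detect(img)
    if not detected_bboxes:
        return None
    return PIPNet().forward(img, detected_bboxes[0])
```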
## 4. Background Matting
### 4.1. BackgroundMattingV2
```python
import matplotlib.pyplot as plt

from visage.matting.background_matting_v2 import BackgroundMattingV2

img = load_img(...)     # np.ndarray [H, W, 3] in range [0, 255]
bg_img = load_img(...)  # np.ndarray [H, W, 3] in range [0, 255]; same camera viewpoint, but without the foreground subject

background_matter = BackgroundMattingV2()
alpha_images = background_matter.parse([img], [bg_img])

plt.imshow(alpha_images[0])
plt.show()
```
| Image | Background | Foreground Mask |
|------------------------------------|-----------------------------------|--------------------------------------------|
| ![](images/tobi_cam_222200038.jpg) | ![](images/tobi_bg_222200038.jpg) | ![](images/tobi_background_matting_v2.png) |
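The predicted alpha mattes can be used for standard alpha compositing onto a new background. A minimal sketch (hypothetical post-processing, not part of `visage`; it assumes `alpha_images[0]` is a single-channel matte in `[0, 1]` with the same spatial size as `img`; check the actual dtype and value range returned by `parse`):

```python
import numpy as np

# Standard alpha compositing: blend the matted foreground onto a new background.
alpha = np.asarray(alpha_images[0], dtype=np.float32)
if alpha.ndim == 2:
    alpha = alpha[..., None]                       # [H, W, 1] so it broadcasts over RGB
new_bg = np.zeros_like(img, dtype=np.float32)      # e.g. a solid black background
composite = alpha * img.astype(np.float32) + (1.0 - alpha) * new_bg
plt.imshow(composite.astype(np.uint8))
plt.show()
```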