pyfacer

Name: pyfacer
Version: 0.0.4
Home page: https://github.com/FacePerceiver/facer
Summary: Face related toolkit
Author: FacePerceiver
License: MIT
Upload time: 2023-05-14 12:55:35
Keywords: face-detection, pytorch, retinaface, face-parsing, farl, face-alignment
Requirements: torch (>=1.9.1), torchvision, pillow, numpy, ipywidgets, scikit-image, matplotlib, validators, requests, opencv-python
# FACER

Face-related toolkit. This repo is still under construction; more models will be added.

## Updates
- [14/05/2023] A face attribute recognition model trained on CelebA is available; check it out [here](./samples/face_attribute.ipynb).
- [04/05/2023] Face alignment models trained on the IBUG300W, AFLW19, and WFLW datasets are available; check them out [here](./samples/face_alignment.ipynb).
- [27/04/2023] A face parsing model trained on the CelebM dataset is available; check it out [here](./samples/face_parsing.ipynb).

## Install

The easiest way to install it is with pip:

```bash
pip install git+https://github.com/FacePerceiver/facer.git@main
```
No extra setup is needed; pretrained weights are downloaded automatically.

If you have trouble installing from source, you can try installing from PyPI:
```bash
pip install pyfacer
```
The PyPI version is not guaranteed to be the latest, but we will try to keep it up to date.
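Note that while the PyPI distribution is named `pyfacer`, the import name is `facer` either way; a quick sanity check after installing:

```python
# The PyPI distribution is pyfacer, but the module imports as facer.
import facer
print(facer.__file__)  # shows where the package was installed
```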


## Face Detection

We wrap a RetinaFace detector for easy use.
```python
import torch
import facer

device = 'cuda' if torch.cuda.is_available() else 'cpu'

image = facer.hwc2bchw(facer.read_hwc('data/twogirls.jpg')).to(device=device)  # image: 1 x 3 x h x w

face_detector = facer.face_detector('retinaface/mobilenet', device=device)
with torch.inference_mode():
    faces = face_detector(image)

facer.show_bchw(facer.draw_bchw(image, faces))
```
![](./samples/example_output/detect.png)

Check [this notebook](./samples/face_detect.ipynb) for the full example.
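The returned `faces` is a dict of batched tensors, with one row per detected face. A minimal sketch of inspecting it; the exact key names (`rects`, `scores`) are an assumption here, so verify them against the notebook:

```python
# Hedged sketch: the key names below ('rects', 'scores') are assumptions
# about the detector's output dict -- check the detection notebook.
for key, value in faces.items():
    print(key, tuple(value.shape) if hasattr(value, 'shape') else type(value))

# Assuming each row of faces['rects'] is one box in (x1, y1, x2, y2) order:
for rect, score in zip(faces['rects'], faces['scores']):
    x1, y1, x2, y2 = rect.tolist()
    print(f'face at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), score {score.item():.2f}')
```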

Please consider citing
```bibtex
@inproceedings{deng2020retinaface,
  title={Retinaface: Single-shot multi-level face localisation in the wild},
  author={Deng, Jiankang and Guo, Jia and Ververas, Evangelos and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5203--5212},
  year={2020}
}
```

## Face Parsing

We wrap the [FaRL](https://github.com/faceperceiver/farl) models for face parsing.
```python
import torch
import facer

device = 'cuda' if torch.cuda.is_available() else 'cpu'

image = facer.hwc2bchw(facer.read_hwc('data/twogirls.jpg')).to(device=device)  # image: 1 x 3 x h x w

face_detector = facer.face_detector('retinaface/mobilenet', device=device)
with torch.inference_mode():
    faces = face_detector(image)

face_parser = facer.face_parser('farl/lapa/448', device=device)  # optional: "farl/celebm/448"

with torch.inference_mode():
    faces = face_parser(image, faces)

seg_logits = faces['seg']['logits']
seg_probs = seg_logits.softmax(dim=1)  # nfaces x nclasses x h x w
n_classes = seg_probs.size(1)
# Map each pixel's argmax class to a gray level for visualization.
vis_seg_probs = seg_probs.argmax(dim=1).float() / n_classes * 255
vis_img = vis_seg_probs.sum(0, keepdim=True)
facer.show_bhw(vis_img)
facer.show_bchw(facer.draw_bchw(image, faces))
```
![](./samples/example_output/parsing.png)

Check [this notebook](./samples/face_parsing.ipynb) for the full example.
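For downstream use you usually want a per-pixel label map rather than the gray visualization above. A short sketch built on the `seg_probs` tensor from the example; the class-index-to-name mapping depends on the training dataset (LaPa vs. CelebM), so treating index 1 as "skin" below is an assumption to verify:

```python
# Collapse class probabilities to integer labels: nfaces x h x w.
seg_labels = seg_probs.argmax(dim=1)

# Binary mask for one class of the first face. SKIN_CLASS = 1 is an
# assumption about the label order -- verify against the notebook.
SKIN_CLASS = 1
skin_mask = (seg_labels[0] == SKIN_CLASS).cpu().numpy()
print('skin pixels:', int(skin_mask.sum()))
```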

Please consider citing
```bibtex
@inproceedings{zheng2022farl,
  title={General facial representation learning in a visual-linguistic manner},
  author={Zheng, Yinglin and Yang, Hao and Zhang, Ting and Bao, Jianmin and Chen, Dongdong and Huang, Yangyu and Yuan, Lu and Chen, Dong and Zeng, Ming and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18697--18709},
  year={2022}
}
``` 


## Face Alignment

We wrap the [FaRL](https://github.com/faceperceiver/farl) models for face alignment.
```python
import torch
import cv2
from matplotlib import pyplot as plt

import facer

device = 'cuda' if torch.cuda.is_available() else 'cpu'

img_file = 'data/twogirls.jpg'
# image: 1 x 3 x h x w
image = facer.hwc2bchw(facer.read_hwc(img_file)).to(device=device)

face_detector = facer.face_detector('retinaface/mobilenet', device=device)
with torch.inference_mode():
    faces = face_detector(image)

face_aligner = facer.face_aligner('farl/ibug300w/448', device=device) # optional: "farl/wflw/448", "farl/aflw19/448"

with torch.inference_mode():
    faces = face_aligner(image, faces)

img = cv2.imread(img_file)[..., ::-1]  # BGR -> RGB
vis_img = img.copy()
for pts in faces['alignment']:
    vis_img = facer.draw_landmarks(vis_img, None, pts.cpu().numpy())
plt.imshow(vis_img)
plt.show()
```
![](./samples/example_output/alignment.png)

Check [this notebook](./samples/face_alignment.ipynb) for the full example.
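Each entry of `faces['alignment']` is a per-face landmark tensor of shape `npoints x 2` in `(x, y)` image coordinates (68 points for the ibug300w model). A small sketch that collects the raw points into a NumPy array, e.g. for saving; the output path is just illustrative:

```python
import numpy as np

# Stack all faces' landmarks into one array: nfaces x npoints x 2.
landmarks = np.stack([pts.cpu().numpy() for pts in faces['alignment']])
print('landmarks shape:', landmarks.shape)  # e.g. (2, 68, 2) for two faces, ibug300w

np.save('landmarks.npy', landmarks)  # illustrative output path
```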

Please consider citing
```bibtex
@inproceedings{zheng2022farl,
  title={General facial representation learning in a visual-linguistic manner},
  author={Zheng, Yinglin and Yang, Hao and Zhang, Ting and Bao, Jianmin and Chen, Dongdong and Huang, Yangyu and Yuan, Lu and Chen, Dong and Zeng, Ming and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18697--18709},
  year={2022}
}
``` 

## Face Attribute Recognition
We wrap the [FaRL](https://github.com/faceperceiver/farl) models for face attribute recognition; the model achieves 92.06% accuracy on the [CelebA](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset.

```python
import torch
import facer

device = "cuda" if torch.cuda.is_available() else "cpu"

# image: 1 x 3 x h x w
image = facer.hwc2bchw(facer.read_hwc("data/girl.jpg")).to(device=device)

face_detector = facer.face_detector("retinaface/mobilenet", device=device)
with torch.inference_mode():
    faces = face_detector(image)

face_attr = facer.face_attr("farl/celeba/224", device=device)
with torch.inference_mode():
    faces = face_attr(image, faces)

labels = face_attr.labels
face1_attrs = faces["attrs"][0] # get the first face's attributes

print(labels)

# Print the attributes predicted positive (probability > 0.5).
for prob, label in zip(face1_attrs, labels):
    if prob > 0.5:
        print(label, prob.item())
```

Check [this notebook](./samples/face_attribute.ipynb) for the full example.
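To see the full ranking instead of only the thresholded positives, pair every label with its probability; a small sketch reusing the `labels` and `face1_attrs` values from the example above:

```python
# Rank all CelebA attributes for the first face, most probable first.
ranked = sorted(zip(labels, face1_attrs.tolist()),
                key=lambda kv: kv[1], reverse=True)
for label, prob in ranked[:10]:  # top 10
    print(f'{label:20s} {prob:.3f}')
```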

Please consider citing
```bibtex
@inproceedings{zheng2022farl,
  title={General facial representation learning in a visual-linguistic manner},
  author={Zheng, Yinglin and Yang, Hao and Zhang, Ting and Bao, Jianmin and Chen, Dongdong and Huang, Yangyu and Yuan, Lu and Chen, Dong and Zeng, Ming and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18697--18709},
  year={2022}
}
``` 
