Work in progress...
# Fast 6DoF Face Alignment and Tracking
This project implements ultra-lightweight 6 DoF face alignment and tracking, capable of real-time face tracking on mobile devices.
## Installation
### Requirements
- torch >= 2.0
- autoalbument >= 1.3.1
### Install
[![PyPI version](https://badge.fury.io/py/fdfat.svg)](https://badge.fury.io/py/fdfat)
```bash
pip install -U fdfat
```
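To quickly confirm the installation, a minimal sanity check (on Python 3.8+, assuming `python` points to the same environment `pip` installed into) could look like this:

```python
# Minimal install check: confirms the package is importable and reports
# the installed version via the standard library metadata API.
from importlib.metadata import version

import fdfat  # raises ImportError if the install failed

print("fdfat version:", version("fdfat"))
```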
## Model Zoo
TODO: add best model
## Training
### Prepare the dataset
This project uses 68 3D facial landmarks (different from the original 300W dataset). Please go to [FaceSynthetics](https://github.com/microsoft/FaceSynthetics) to download the dataset (the 100K version) and extract it to your disk.
Create your dataset YAML file with the following info:
```yaml
base_path: <path-to-face-synthesis-dataset>/dataset_100000
train: <path-to-list-train-text-file.txt>
val: <path-to-list-val-text-file.txt>
test: <path-to-list-test-text-file.txt>
```
Note: you can use the train list files in `datasets/FaceSynthetics` for reference.
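Before training, a short script like the sketch below can verify that every path in the dataset YAML resolves. The file name `dataset.yaml` and the assumption that the list paths may be absolute or relative to `base_path` are illustrative, not part of the project's documented behavior:

```python
# Sketch: verify that the paths referenced in the dataset YAML exist.
# Assumes the YAML layout shown above; "dataset.yaml" is just an example name.
import os
import yaml  # PyYAML

with open("dataset.yaml") as f:
    cfg = yaml.safe_load(f)

base = cfg["base_path"]
print("base_path exists:", os.path.isdir(base))

for split in ("train", "val", "test"):
    path = cfg[split]
    # The list files may be given as absolute paths or relative to base_path.
    candidates = [path, os.path.join(base, path)]
    print(split, "list found:", any(os.path.isfile(p) for p in candidates))
```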
### Start training
```bash
fdfat --data <path-to-your-dataset-yaml> --model LightWeightModel
```
For the complete list of parameters, please refer to the sample config file: [fdfat/cfg/default.yaml](fdfat/cfg/default.yaml)
## Validation
```bash
fdfat --task val --data <path-to-your-dataset-yaml> --model LightWeightModel
```
## Predict
```bash
fdfat --task predict --model LightWeightModel --checkpoint <path-to-checkpoint> --input <path-to-test-img>
```
## Export
```bash
fdfat --task export --model LightWeightModel --checkpoint <path-to-checkpoint> --export_format tflite
```
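After exporting, the TFLite model can be loaded with the standard TensorFlow Lite interpreter. The snippet below is a generic sketch: the file names, preprocessing, and meaning of the outputs are assumptions, so consult the exported model's input/output details for the exact layout.

```python
# Sketch: run an exported TFLite model with the stock TFLite interpreter.
# "model.tflite" and "face.jpg" are placeholder names; input normalization
# and output interpretation depend on the exported model.
import numpy as np
import cv2
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# Resize the (already cropped) face image to the model's expected input size.
h, w = int(inp["shape"][1]), int(inp["shape"][2])
img = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (w, h)).astype(np.float32) / 255.0  # assumed float model, 0-1 scaling

interpreter.set_tensor(inp["index"], img[None, ...])
interpreter.invoke()

# Print the shape of each output tensor to inspect the landmark/pose layout.
for o in outs:
    print(o["name"], interpreter.get_tensor(o["index"]).shape)
```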
## Credit
- [YOLOv8](https://github.com/ultralytics/ultralytics): Thanks to Ultralytics for their awesome project; some code is borrowed from it.
- [Ultra-Light-Fast-Generic-Face-Detector-1MB](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB): Thanks for the lightweight face detector.
- [FaceSynthetics](https://github.com/microsoft/FaceSynthetics): Thanks for the expressive face landmark dataset; it's a good starting point.
- [head-pose-estimation](https://github.com/yinguobing/head-pose-estimation): Thanks for the head pose estimation code.