# FLAME: Articulated Expressive 3D Head Model (PyTorch)
This is an implementation of the [FLAME](http://flame.is.tue.mpg.de/) 3D head model in PyTorch.
We also provide [Tensorflow FLAME](https://github.com/TimoBolkart/TF_FLAME), a [Chumpy](https://github.com/mattloper/chumpy)-based [FLAME-fitting repository](https://github.com/Rubikplayer/flame-fitting), and code to [convert from Basel Face Model to FLAME](https://github.com/TimoBolkart/BFM_to_FLAME).
<p align="center">
<img src="gifs/model_variations.gif">
</p>
FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the following [scientific publication](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/400/paper.pdf)
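At its core, the shape and expression components described above form a linear model: a template mesh plus offsets spanned by learned identity and expression bases. Below is a minimal, self-contained sketch of that formulation. The dimensions (5023 vertices, 300 shape and 100 expression coefficients) match FLAME, but the bases here are random stand-ins, not the learned model:

```python
import numpy as np

# Toy stand-ins for the learned FLAME components (the real bases come
# from the downloaded model file; the random values here are illustrative).
n_vertices = 5023           # FLAME's vertex count
n_shape, n_expr = 300, 100  # identity / expression coefficient counts

rng = np.random.default_rng(0)
template = rng.standard_normal((n_vertices, 3))             # mean head mesh
shape_dirs = rng.standard_normal((n_vertices, 3, n_shape))  # identity basis
expr_dirs = rng.standard_normal((n_vertices, 3, n_expr))    # expression basis

def flame_linear(betas, psi):
    """Apply the linear identity + expression blendshapes to the template."""
    v = template.copy()
    v += np.einsum("vcs,s->vc", shape_dirs, betas)  # identity offsets
    v += np.einsum("vce,e->vc", expr_dirs, psi)     # expression offsets
    return v

# With all coefficients at zero, the template mesh is recovered exactly.
v = flame_linear(np.zeros(n_shape), np.zeros(n_expr))
```

The full model additionally applies linear blend skinning for the articulated neck, jaw, and eyeballs, plus the pose-dependent corrective blendshapes, which this sketch omits.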
```bibtex
Learning a model of facial shape and expression from 4D scans
Tianye Li*, Timo Bolkart*, Michael J. Black, Hao Li, and Javier Romero
ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 2017
```
and the [supplementary video](https://youtu.be/36rPTkhiJTM).
## Installation
The code uses **Python 3.7** and is tested with PyTorch 1.4.
### Setup FLAME PyTorch Virtual Environment
```shell
python3.7 -m venv <your_home_dir>/.virtualenvs/FLAME_PyTorch
source <your_home_dir>/.virtualenvs/FLAME_PyTorch/bin/activate
```
### Clone the project and install requirements
```shell
git clone https://github.com/soubhiksanyal/FLAME_PyTorch
cd FLAME_PyTorch
python setup.py install
mkdir model
```
## Download models
* Download the FLAME model from [here](http://flame.is.tue.mpg.de/). You need to sign up and agree to the model license to access the model. Copy the downloaded model into the **model** folder.
* Download the landmark embeddings from the [RingNet Project](https://github.com/soubhiksanyal/RingNet/tree/master/flame_model). Copy them into the **model** folder.
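A landmark embedding ties each facial landmark to the mesh surface by storing a triangle index and barycentric weights per landmark, so the landmarks move with the mesh. A self-contained sketch of the idea on a toy mesh (the actual embedding file from RingNet stores this per-landmark data for the FLAME topology; the names and values below are illustrative):

```python
import numpy as np

# Toy mesh: 4 vertices, 2 triangular faces.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2],
                  [1, 3, 2]])

# Embedding: for each landmark, one face index plus barycentric weights.
lmk_face_idx = np.array([0, 1])
lmk_bary = np.array([[1/3, 1/3, 1/3],   # centroid of face 0
                     [0.5, 0.5, 0.0]])  # edge midpoint on face 1

def landmarks_from_embedding(vertices, faces, face_idx, bary):
    """Interpolate landmark positions from the mesh surface."""
    tri = vertices[faces[face_idx]]       # (L, 3, 3) triangle corners
    return np.einsum("lk,lkc->lc", bary, tri)

lmk = landmarks_from_embedding(vertices, faces, lmk_face_idx, lmk_bary)
```

Because the weights are fixed, the same embedding yields consistent landmarks for any identity, expression, or pose of the deformed mesh.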
## Demo
### Loading FLAME and visualising the 3D landmarks on the face
Please note that we use the pose-dependent contour for the face, as introduced by the [RingNet Project](https://github.com/soubhiksanyal/RingNet/tree/master/flame_model).
Run the following command from the terminal:
```shell
python main.py
```
## License
FLAME is available under a [Creative Commons Attribution license](https://creativecommons.org/licenses/by/4.0/). By using the model or the code, you acknowledge that you have read the [license terms](https://flame.is.tue.mpg.de/modellicense.html), understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code.
## Referencing FLAME
When using this code in a scientific publication, please cite:
```bibtex
@article{FLAME:SiggraphAsia2017,
title = {Learning a model of facial shape and expression from {4D} scans},
author = {Li, Tianye and Bolkart, Timo and Black, Michael J. and Li, Hao and Romero, Javier},
journal = {ACM Transactions on Graphics (Proc. SIGGRAPH Asia)},
volume = {36},
number = {6},
year = {2017},
url = {https://doi.org/10.1145/3130800.3130813}
}
```
Additionally, if you use the pose-dependent dynamic landmarks from this codebase, please also cite:
```bibtex
@inproceedings{RingNet:CVPR:2019,
title = {Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision},
author = {Sanyal, Soubhik and Bolkart, Timo and Feng, Haiwen and Black, Michael},
booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
month = jun,
year = {2019},
month_numeric = {6}
}
```
## Supported Projects
FLAME is used in several projects, including:
* [CoMA: Convolutional Mesh Autoencoders](https://github.com/anuragranj/coma)
* [RingNet: 3D Face Shape and Expression Reconstruction from an Image without 3D Supervision](https://github.com/soubhiksanyal/RingNet)
* [VOCA: Voice Operated Character Animation](https://github.com/TimoBolkart/voca)
* [Expressive Body Capture: 3D Hands, Face, and Body from a Single Image](https://github.com/vchoutas/smplify-x)
* [ExPose: Monocular Expressive Body Regression through Body-Driven Attention](https://github.com/vchoutas/expose)
* [GIF: Generative Interpretable Faces](https://github.com/ParthaEth/GIF)
* [DECA: Detailed Expression Capture and Animation](https://github.com/YadiraF/DECA)
FLAME is part of [SMPL-X: A new joint 3D model of the human body, face and hands together](https://github.com/vchoutas/smplx).
## Contact
If you have any questions regarding the PyTorch implementation, please contact us at soubhik.sanyal@tuebingen.mpg.de or timo.bolkart@tuebingen.mpg.de.
## Acknowledgements
This repository is built with modifications from [SMPL-X](https://github.com/vchoutas/smplx).