# new-gfpgan

- **Version**: 1.0.1
- **Home page**: https://github.com/TencentARC/GFPGAN
- **Summary**: GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration
- **Upload time**: 2024-05-26 05:28:02
- **Author**: Xintao Wang
- **License**: Apache License Version 2.0
- **Keywords**: computer vision, pytorch, image restoration, super-resolution, face restoration, gan, gfpgan
- **Requirements**: basicsr>=1.4.2, facexlib>=0.2.5, lmdb, numpy, opencv-python, pyyaml, scipy, tb-nightly, torch>=1.7, torchvision, tqdm, yapf
<p align="center">
  <img src="assets/gfpgan_logo.png" height=130>
</p>

## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>

<div align="center">
<!-- <a href="https://twitter.com/_Xintao_" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/17445847/187162058-c764ced6-952f-404b-ac85-ba95cce18e7b.png" width="4%" alt="" />
</a> -->

[![download](https://img.shields.io/github/downloads/TencentARC/GFPGAN/total.svg)](https://github.com/TencentARC/GFPGAN/releases)
[![PyPI](https://img.shields.io/pypi/v/gfpgan)](https://pypi.org/project/gfpgan/)
[![Open issue](https://img.shields.io/github/issues/TencentARC/GFPGAN)](https://github.com/TencentARC/GFPGAN/issues)
[![Closed issue](https://img.shields.io/github/issues-closed/TencentARC/GFPGAN)](https://github.com/TencentARC/GFPGAN/issues)
[![LICENSE](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/TencentARC/GFPGAN/blob/master/LICENSE)
[![python lint](https://github.com/TencentARC/GFPGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/TencentARC/GFPGAN/blob/master/.github/workflows/pylint.yml)
[![Publish-pip](https://github.com/TencentARC/GFPGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/TencentARC/GFPGAN/blob/master/.github/workflows/publish-pip.yml)
</div>

1. :boom: **Updated** online demo: [![Replicate](https://img.shields.io/static/v1?label=Demo&message=Replicate&color=blue)](https://replicate.com/tencentarc/gfpgan). Here is the [backup](https://replicate.com/xinntao/gfpgan).
1. :boom: **Updated** online demo: [![Huggingface Gradio](https://img.shields.io/static/v1?label=Demo&message=Huggingface%20Gradio&color=orange)](https://huggingface.co/spaces/Xintao/GFPGAN)
1. [Colab Demo](https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo) for GFPGAN <a href="https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>; (Another [Colab Demo](https://colab.research.google.com/drive/1Oa1WwKB4M4l1GmR7CtswDVgOCOeSLChA?usp=sharing) for the original paper model)

<!-- 3. Online demo: [Replicate.ai](https://replicate.com/xinntao/gfpgan) (may need to sign in, return the whole image)
4. Online demo: [Baseten.co](https://app.baseten.co/applications/Q04Lz0d/operator_views/8qZG6Bg) (backed by GPU, returns the whole image)
5. We provide a *clean* version of GFPGAN, which can run without CUDA extensions. So that it can run in **Windows** or on **CPU mode**. -->

> :rocket: **Thanks for your interest in our work. You may also want to check out our new updates on the *tiny models* for *anime images and videos* in [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/blob/master/docs/anime_video_model.md)** :blush:

GFPGAN aims at developing a **Practical Algorithm for Real-world Face Restoration**.<br>
It leverages rich and diverse priors encapsulated in a pretrained face GAN (*e.g.*, StyleGAN2) for blind face restoration.

:question: Frequently Asked Questions can be found in [FAQ.md](FAQ.md).

:triangular_flag_on_post: **Updates**

- :white_check_mark: Add [RestoreFormer](https://github.com/wzhouxiff/RestoreFormer) inference code.
- :white_check_mark: Add the [V1.4 model](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth), which produces slightly more details and better identity than V1.3.
- :white_check_mark: Add the **[V1.3 model](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth)**, which produces **more natural** restoration results and better results on *very low-quality* / *high-quality* inputs. See more in the [Model zoo](#european_castle-model-zoo) and [Comparisons.md](Comparisons.md).
- :white_check_mark: Integrated into [Hugging Face Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the [Gradio web demo](https://huggingface.co/spaces/akhaliq/GFPGAN).
- :white_check_mark: Support enhancing non-face regions (background) with [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN).
- :white_check_mark: We provide a *clean* version of GFPGAN, which does not require CUDA extensions.
- :white_check_mark: We provide an updated model that does not colorize faces.

---

If GFPGAN is helpful for your photos/projects, please help by :star:-ing this repo or recommending it to your friends. Thanks! :blush:
Other recommended projects:<br>
:arrow_forward: [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN): A practical algorithm for general image restoration<br>
:arrow_forward: [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox<br>
:arrow_forward: [facexlib](https://github.com/xinntao/facexlib): A collection of useful face-related functions<br>
:arrow_forward: [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer, handy for viewing and comparison<br>

---

### :book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior

> [[Paper](https://arxiv.org/abs/2101.04061)] &emsp; [[Project Page](https://xinntao.github.io/projects/gfpgan)] &emsp; [Demo] <br>
> [Xintao Wang](https://xinntao.github.io/), [Yu Li](https://yu-li.github.io/), [Honglun Zhang](https://scholar.google.com/citations?hl=en&user=KjQLROoAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> Applied Research Center (ARC), Tencent PCG

<p align="center">
  <img src="https://xinntao.github.io/projects/GFPGAN_src/gfpgan_teaser.jpg">
</p>

---

## :wrench: Dependencies and Installation

- Python >= 3.7 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html); see the environment sketch below)
- [PyTorch >= 1.7](https://pytorch.org/)
- Optional: NVIDIA GPU + [CUDA](https://developer.nvidia.com/cuda-downloads)
- Optional: Linux
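
If you use Anaconda or Miniconda, one convenient way to start is with an isolated environment (a minimal sketch; the environment name and Python version are arbitrary choices):

```bash
# Create and activate a fresh environment for GFPGAN
conda create -n gfpgan python=3.8 -y
conda activate gfpgan
```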

### Installation

We now provide a *clean* version of GFPGAN, which does not require customized CUDA extensions. <br>
If you want to use the original model in our paper, please see [PaperModel.md](PaperModel.md) for installation.

1. Clone repo

    ```bash
    git clone https://github.com/TencentARC/GFPGAN.git
    cd GFPGAN
    ```

1. Install dependent packages

    ```bash
    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr

    # Install facexlib - https://github.com/xinntao/facexlib
    # We use face detection and face restoration helper in the facexlib package
    pip install facexlib

    pip install -r requirements.txt
    python setup.py develop

    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
    ```
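
After installation, a quick import check can confirm the package is visible (a simple sanity test, not part of the original instructions):

```bash
python -c "import gfpgan; print(gfpgan.__version__)"
```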

## :zap: Quick Inference

We take the v1.3 model as an example. More models can be found [here](#european_castle-model-zoo).

Download the pre-trained model: [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth)

```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
```

**Inference!**

```bash
python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
```

```console
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...

  -h                   Show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           GFPGAN model version. Options: 1 | 1.2 | 1.3. Default: 1.3
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler, 0 for no tiling during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png; auto uses the same extension as the input. Default: auto
```
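
Besides the CLI, the `gfpgan` package exposes the `GFPGANer` helper that `inference_gfpgan.py` wraps. Below is a minimal sketch of calling it directly (the input/output paths are illustrative; `arch='clean'` and `channel_multiplier=2` are the settings the inference script uses for the clean v1.3 model):

```python
import cv2
from gfpgan import GFPGANer

# Load the v1.3 weights downloaded above.
restorer = GFPGANer(
    model_path='experiments/pretrained_models/GFPGANv1.3.pth',
    upscale=2,            # final upsampling scale, like -s 2
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=None)    # or plug in a Real-ESRGAN upsampler for backgrounds

img = cv2.imread('inputs/whole_imgs/00.jpg', cv2.IMREAD_COLOR)  # illustrative path
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True)
cv2.imwrite('results/restored.jpg', restored_img)
```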

If you want to use the original model in our paper, please see [PaperModel.md](PaperModel.md) for installation and inference.

## :european_castle: Model Zoo

| Version | Model Name  | Description |
| :---: | :---:        |     :---:      |
| V1.3 | [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) | Based on V1.2; **more natural** restoration results; better results on very low-quality / high-quality inputs. |
| V1.2 | [GFPGANCleanv1-NoCE-C2.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth) | No colorization; no CUDA extensions required. Trained with more data and additional pre-processing. |
| V1 | [GFPGANv1.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth) | The paper model, with colorization. |

The comparisons are in [Comparisons.md](Comparisons.md).

Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.
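
For instance, to try the V1.2 model instead, download its weights into `experiments/pretrained_models` and pass `-v 1.2`:

```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P experiments/pretrained_models
python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.2 -s 2
```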

| Version | Strengths  | Weaknesses |
| :---: | :---:        |     :---:      |
| V1.3 | ✓ natural outputs<br> ✓ better results on very low-quality inputs<br> ✓ works on relatively high-quality inputs<br> ✓ allows repeated (twice) restoration | ✗ not very sharp<br> ✗ slight change in identity |
| V1.2 | ✓ sharper outputs<br> ✓ with beauty makeup | ✗ some outputs are unnatural |

You can find **more models (such as the discriminators)** here: [[Google Drive](https://drive.google.com/drive/folders/17rLiFzcUMoQuhLnptDsKolegHWwJOnHu?usp=sharing)] or [[Tencent Cloud 腾讯微云](https://share.weiyun.com/ShYoCCoc)]

## :computer: Training

We provide the training code for GFPGAN (used in our paper). <br>
You can adapt it to your own needs.

**Tips**

1. More high-quality faces can improve the restoration quality.
2. You may need to perform some pre-processing, such as beauty makeup.

**Procedures**

(You can try a simple version (`options/train_gfpgan_v1_simple.yml`) that does not require face component landmarks.)

1. Dataset preparation: [FFHQ](https://github.com/NVlabs/ffhq-dataset)

1. Download pre-trained models and other data. Put them in the `experiments/pretrained_models` folder.
    1. [Pre-trained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth)
    1. [Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/FFHQ_eye_mouth_landmarks_512.pth)
    1. [A simple ArcFace model: arcface_resnet18.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/arcface_resnet18.pth)

1. Modify the configuration file `options/train_gfpgan_v1.yml` accordingly.

1. Training

    ```bash
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1.yml --launcher pytorch
    ```
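
    For a quick single-GPU debug run, it may also work to skip the distributed launcher (an assumption based on BasicSR's training conventions, which GFPGAN builds on):

    ```bash
    # Hedged sketch: non-distributed, single-GPU training
    python gfpgan/train.py -opt options/train_gfpgan_v1.yml
    ```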

## :scroll: License and Acknowledgement

GFPGAN is released under Apache License Version 2.0.

## BibTeX

    @InProceedings{wang2021gfpgan,
        author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
        title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
        booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
        year = {2021}
    }

## :e-mail: Contact

If you have any questions, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.

            
