<p align="center">
<img src="assets/realesrgan_logo.png" height=120>
</p>
## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>
<div align="center">
👀[**Demos**](#-demos-videos) **|** 🚩[**Updates**](#-updates) **|** ⚡[**Usage**](#-quick-inference) **|** 🏰[**Model Zoo**](docs/model_zoo.md) **|** 🔧[Install](#-dependencies-and-installation) **|** 💻[Train](docs/Training.md) **|** ❓[FAQ](docs/FAQ.md) **|** 🎨[Contribution](docs/CONTRIBUTING.md)
[![download](https://img.shields.io/github/downloads/xinntao/Real-ESRGAN/total.svg)](https://github.com/xinntao/Real-ESRGAN/releases)
[![PyPI](https://img.shields.io/pypi/v/realesrgan)](https://pypi.org/project/realesrgan/)
[![Open issue](https://img.shields.io/github/issues/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
[![Closed issue](https://img.shields.io/github/issues-closed/xinntao/Real-ESRGAN)](https://github.com/xinntao/Real-ESRGAN/issues)
[![LICENSE](https://img.shields.io/github/license/xinntao/Real-ESRGAN.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE)
[![python lint](https://github.com/xinntao/Real-ESRGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/pylint.yml)
[![Publish-pip](https://github.com/xinntao/Real-ESRGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/xinntao/Real-ESRGAN/blob/master/.github/workflows/publish-pip.yml)
</div>
🔥 **AnimeVideo-v3 model (small model for anime videos)**. Please see [[*anime video models*](docs/anime_video_model.md)] and [[*comparisons*](docs/anime_comparisons.md)]<br>
🔥 **RealESRGAN_x4plus_anime_6B** for anime images **(anime illustration model)**. Please see [[*anime_model*](docs/anime_model.md)]
<!-- 1. You can try in our website: [ARC Demo](https://arc.tencent.com/en/ai-demos/imgRestore) (now only support RealESRGAN_x4plus_anime_6B) -->
1. :boom: **Update** online Replicate demo: [![Replicate](https://img.shields.io/static/v1?label=Demo&message=Replicate&color=blue)](https://replicate.com/xinntao/realesrgan)
1. Online Colab demo for Real-ESRGAN: [![Colab](https://img.shields.io/static/v1?label=Demo&message=Colab&color=orange)](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) **|** Online Colab demo for Real-ESRGAN (**anime videos**): [![Colab](https://img.shields.io/static/v1?label=Demo&message=Colab&color=orange)](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing)
1. Portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**. You can find more information [here](#portable-executable-files-ncnn). The ncnn implementation is in [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
<!-- 1. You can watch enhanced animations in [Tencent Video](https://v.qq.com/s/topic/v_child/render/fC4iyCAM.html). -->
Real-ESRGAN aims at developing **Practical Algorithms for General Image/Video Restoration**.<br>
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.
🌌 Thanks for your valuable feedback and suggestions. All feedback is collected in [feedback.md](docs/feedback.md).
---
If Real-ESRGAN is helpful, please help to ⭐ this repo or recommend it to your friends 😊 <br>
Other recommended projects:<br>
▶️ [GFPGAN](https://github.com/TencentARC/GFPGAN): A practical algorithm for real-world face restoration <br>
▶️ [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox<br>
▶️ [facexlib](https://github.com/xinntao/facexlib): A collection of useful face-related functions<br>
▶️ [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer that is handy for viewing and comparison <br>
▶️ [HandyFigure](https://github.com/xinntao/HandyFigure): Open-sourced paper figures <br>
---
### 📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
> [[Paper](https://arxiv.org/abs/2107.10833)] [[YouTube Video](https://www.youtube.com/watch?v=fxHWoDSSvSc)] [[Bilibili explanation](https://www.bilibili.com/video/BV1H34y1m7sS/)] [[Poster](https://xinntao.github.io/projects/RealESRGAN_src/RealESRGAN_poster.pdf)] [[PPT slides](https://docs.google.com/presentation/d/1QtW6Iy8rm8rGLsJ0Ldti6kP-7Qyzy6XL/edit?usp=sharing&ouid=109799856763657548160&rtpof=true&sd=true)]<br>
> [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> [Tencent ARC Lab](https://arc.tencent.com/en/ai-demos/imgRestore); Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
<p align="center">
<img src="assets/teaser.jpg">
</p>
---
<!---------------------------------- Updates --------------------------->
## 🚩 Updates
- ✅ Add the **realesr-general-x4v3** model - a tiny model for general scenes. It also supports the **--dn** option to balance the noise (avoiding over-smoothed results). **--dn** is short for denoising strength.
- ✅ Update the **RealESRGAN AnimeVideo-v3** model. Please see [anime video models](docs/anime_video_model.md) and [comparisons](docs/anime_comparisons.md) for more details.
- ✅ Add small models for anime videos. More details are in [anime video models](docs/anime_video_model.md).
- ✅ Add the ncnn implementation [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan).
- ✅ Add [*RealESRGAN_x4plus_anime_6B.pth*](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth), which is optimized for **anime** images and has a much smaller model size. More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md).
- ✅ Support finetuning on your own data or paired data (*i.e.*, finetuning ESRGAN). See [here](docs/Training.md#Finetune-Real-ESRGAN-on-your-own-dataset).
- ✅ Integrate [GFPGAN](https://github.com/TencentARC/GFPGAN) to support **face enhancement**.
- ✅ Integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN). Thanks [@AK391](https://github.com/AK391).
- ✅ Support arbitrary scale with `--outscale` (it further resizes outputs with `LANCZOS4`). Add the *RealESRGAN_x2plus.pth* model.
- ✅ [The inference code](inference_realesrgan.py) supports: 1) **tile** options; 2) images with an **alpha channel**; 3) **gray** images; 4) **16-bit** images.
- ✅ The training code has been released. A detailed guide can be found in [Training.md](docs/Training.md).
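A denoising-strength knob like **--dn** can be implemented by interpolating the parameters of a denoising model with those of a non-denoising one (deep network interpolation). A toy sketch of that blending, with plain dicts standing in for real state dicts of tensors; the key names are illustrative only:

```python
def dni(net_a, net_b, alpha):
    # Deep Network Interpolation: blend two models parameter-by-parameter.
    # alpha weights net_a, (1 - alpha) weights net_b; alpha then behaves as
    # a denoising-strength knob when net_a is the denoising model.
    return {k: alpha * net_a[k] + (1 - alpha) * net_b[k] for k in net_a}

# Toy one-parameter "networks" (real state dicts hold tensors, not floats).
blended = dni({'conv.weight': 1.0}, {'conv.weight': 3.0}, 0.25)
print(blended)  # {'conv.weight': 2.5}
```

With `alpha = 0` you get the plain model, with `alpha = 1` the full-strength denoiser, and anything in between trades detail against noise removal.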
---
<!---------------------------------- Demo videos --------------------------->
## 👀 Demos Videos
#### Bilibili
- [*Havoc in Heaven* clip](https://www.bilibili.com/video/BV1ja41117zb)
- [Anime dance cut](https://www.bilibili.com/video/BV1wY4y1L7hT/)
- [*One Piece* clip](https://www.bilibili.com/video/BV1i3411L7Gy/)
#### YouTube
## 🔧 Dependencies and Installation
- Python >= 3.7 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
### Installation
1. Clone repo
```bash
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
```
1. Install dependent packages
```bash
# Install basicsr - https://github.com/xinntao/BasicSR
# We use BasicSR for both training and inference
pip install basicsr
# facexlib and gfpgan are for face enhancement
pip install facexlib
pip install gfpgan
pip install -r requirements.txt
python setup.py develop
```
---
## ⚡ Quick Inference
There are usually three ways to run inference with Real-ESRGAN.
1. [Online inference](#online-inference)
1. [Portable executable files (NCNN)](#portable-executable-files-ncnn)
1. [Python script](#python-script)
### Online inference
1. You can try it on our website: [ARC Demo](https://arc.tencent.com/en/ai-demos/imgRestore) (currently only supports RealESRGAN_x4plus_anime_6B)
1. [Colab Demo](https://colab.research.google.com/drive/1k2Zod6kSHEvraybHl50Lys0LerhyTMCo?usp=sharing) for Real-ESRGAN **|** [Colab Demo](https://colab.research.google.com/drive/1yNl9ORUxxlL4N0keJa2SEPB61imPQd1B?usp=sharing) for Real-ESRGAN (**anime videos**).
### Portable executable files (NCNN)
You can download [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**.
This executable file is **portable** and includes all the binaries and models required. No CUDA or PyTorch environment is needed.<br>
You can simply run the following command (this is the Windows example; more information is in the README.md shipped with each executable):
```bash
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name
```
We have provided four models:
1. realesrgan-x4plus (default)
2. realesrnet-x4plus
3. realesrgan-x4plus-anime (optimized for anime images, small model size)
4. realesr-animevideov3 (animation video)
You can use the `-n` argument to select other models, for example, `./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus`.
#### Usage of portable executable files
1. Please refer to [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan#computer-usages) for more details.
1. Note that it does not support all the features (such as `outscale`) of the Python script `inference_realesrgan.py`.
```console
Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...

  -h                   show this help
  -i input-path        input image path (jpg/png/webp) or directory
  -o output-path       output image path (jpg/png/webp) or directory
  -s scale             upscale ratio (can be 2, 3, 4. default=4)
  -t tile-size         tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
  -m model-path        folder path to the pre-trained models. default=models
  -n model-name        model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
  -g gpu-id            gpu device to use (default=auto) can be 0,1,2 for multi-gpu
  -j load:proc:save    thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
  -x                   enable tta mode
  -f format            output image format (jpg/png/webp, default=ext/png)
  -v                   verbose output
```
Note that it may introduce block inconsistency (and also generate results slightly different from the PyTorch implementation), because the executable first crops the input image into several tiles, processes them separately, and finally stitches them together.
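The crop-process-stitch strategy just described can be sketched in pure Python. The helper below only enumerates tile boxes; the `pad` context margin and tile size are assumptions of this sketch, not the executable's exact parameters:

```python
def tile_grid(h, w, tile=256, pad=10):
    """Enumerate (core, padded) boxes covering an h x w image.

    Each tile is expanded by `pad` pixels of surrounding context before
    processing, and only the un-padded core is written back, which reduces
    (but cannot fully remove) seams between independently processed tiles.
    Boxes are (top, left, bottom, right), clamped to the image bounds.
    """
    boxes = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            core = (y, x, min(y + tile, h), min(x + tile, w))
            padded = (max(y - pad, 0), max(x - pad, 0),
                      min(core[2] + pad, h), min(core[3] + pad, w))
            boxes.append((core, padded))
    return boxes

boxes = tile_grid(500, 500)  # a 500x500 image splits into a 2x2 grid of tiles
```

Because each padded tile is upscaled in isolation, pixels near a tile border see less true context than in a whole-image pass, which is where the block inconsistency comes from.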
### Python script
#### Usage of the Python script
1. You can use the X4 model for **arbitrary output size** with the `outscale` argument. The program performs a further, cheap resize operation after the Real-ESRGAN output.
```console
Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...

A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance

  -h                   show this help
  -i --input           Input image or folder. Default: inputs
  -o --output          Output folder. Default: results
  -n --model_name      Model name. Default: RealESRGAN_x4plus
  -s, --outscale       The final upsampling scale of the image. Default: 4
  --suffix             Suffix of the restored image. Default: out
  -t, --tile           Tile size, 0 for no tile during testing. Default: 0
  --face_enhance       Whether to use GFPGAN to enhance face. Default: False
  --fp32               Use fp32 precision during inference. Default: fp16 (half precision).
  --ext                Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
```
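`--outscale` does not change the network itself: the x4 model always produces a 4x output, and the script then resizes that result to the requested scale (with Lanczos interpolation in the real implementation). A small sketch of the size arithmetic; the helper name is ours, not part of the package:

```python
def output_sizes(h, w, outscale, net_scale=4):
    # The network output is always h*net_scale x w*net_scale;
    # --outscale then cheaply resizes it to about h*outscale x w*outscale.
    net_out = (h * net_scale, w * net_scale)
    final = (int(h * outscale), int(w * outscale))
    return net_out, final

net_out, final = output_sizes(480, 640, 3.5)
# a 480x640 input: the network yields 1920x2560, then it is resized to 1680x2240
```

So fractional scales below 4 throw away some of the generated resolution, and scales above 4 interpolate beyond what the network produced.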
#### Inference general images
Download pre-trained models: [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth)
```bash
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P weights
```
Inference!
```bash
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
```
Results are in the `results` folder.
#### Inference anime images
<p align="center">
<img src="https://raw.githubusercontent.com/xinntao/public-figures/master/Real-ESRGAN/cmp_realesrgan_anime_1.png">
</p>
Pre-trained models: [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth)<br>
More details and comparisons with [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan) are in [**anime_model.md**](docs/anime_model.md).
```bash
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights
# inference
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
```
Results are in the `results` folder.
---
## BibTeX
```
@InProceedings{wang2021realesrgan,
    author    = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    title     = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
    date      = {2021}
}
```
## 📧 Contact
If you have any questions, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.
<!---------------------------------- Projects that use Real-ESRGAN --------------------------->
## 🧩 Projects that use Real-ESRGAN
If you develop or use Real-ESRGAN in your projects, you are welcome to let me know.
- NCNN-Android: [RealSR-NCNN-Android](https://github.com/tumuyan/RealSR-NCNN-Android) by [tumuyan](https://github.com/tumuyan)
- VapourSynth: [vs-realesrgan](https://github.com/HolyWu/vs-realesrgan) by [HolyWu](https://github.com/HolyWu)
- NCNN: [Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
**GUI**
- [Waifu2x-Extension-GUI](https://github.com/AaronFeng753/Waifu2x-Extension-GUI) by [AaronFeng753](https://github.com/AaronFeng753)
- [Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) by [Justin62628](https://github.com/Justin62628)
- [Real-GUI](https://github.com/scifx/Real-GUI) by [scifx](https://github.com/scifx)
- [Real-ESRGAN_GUI](https://github.com/net2cn/Real-ESRGAN_GUI) by [net2cn](https://github.com/net2cn)
- [Real-ESRGAN-EGUI](https://github.com/WGzeyu/Real-ESRGAN-EGUI) by [WGzeyu](https://github.com/WGzeyu)
- [anime_upscaler](https://github.com/shangar21/anime_upscaler) by [shangar21](https://github.com/shangar21)
- [Upscayl](https://github.com/upscayl/upscayl) by [Nayam Amarshe](https://github.com/NayamAmarshe) and [TGS963](https://github.com/TGS963)
## 🤗 Acknowledgement
Thanks to all the contributors.
- [AK391](https://github.com/AK391): Integrate RealESRGAN to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/Real-ESRGAN).
- [Asiimoviet](https://github.com/Asiimoviet): Translate the README.md to Chinese (中文).
- [2ji3150](https://github.com/2ji3150): Thanks for the [detailed and valuable feedbacks/suggestions](https://github.com/xinntao/Real-ESRGAN/issues/131).
- [Jared-02](https://github.com/Jared-02): Translate the Training.md to Chinese (中文).