# v-pyiqa

- **Name**: v-pyiqa
- **Version**: 0.1.5
- **Home page**: https://github.com/chaofengc/IQA-PyTorch
- **Summary**: PyTorch Toolbox for Image Quality Assessment
- **Author**: Chaofeng Chen
- **Upload time**: 2022-12-03 17:37:20
- **Requires Python**: >=3.6
- **Keywords**: image quality assessment, pytorch
- **Requirements**: addict, future, lmdb, numpy, opencv-python, pandas, Pillow, pyyaml, requests, scikit-image, scipy, tb-nightly, timm, torch (>=1.8.1), torchvision (>=0.9), tqdm, yapf, einops, imgaug
# PyTorch Toolbox for Image Quality Assessment

An IQA toolbox written in pure Python and PyTorch. Please refer to [Awesome-Image-Quality-Assessment](https://github.com/chaofengc/Awesome-Image-Quality-Assessment) for a comprehensive survey of IQA methods, as well as download links for IQA datasets.

<a href="https://colab.research.google.com/drive/14J3KoyrjJ6R531DsdOy5Bza5xfeMODi6?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a> 
[![PyPI](https://img.shields.io/pypi/v/pyiqa)](https://pypi.org/project/pyiqa/)
![visitors](https://visitor-badge.laobi.icu/badge?page_id=chaofengc/IQA-PyTorch) 
[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/chaofengc/Awesome-Image-Quality-Assessment)
[![Citation](https://img.shields.io/badge/Citation-bibtex-green)](https://github.com/chaofengc/IQA-PyTorch/blob/main/README.md#bookmark_tabs-citation)

![demo](demo.gif)

- [:open_book: Introduction](#open_book-introduction)
- [:zap: Quick Start](#zap-quick-start)
  - [Dependencies and Installation](#dependencies-and-installation)
  - [Basic Usage](#basic-usage)
- [:hammer_and_wrench: Train](#hammer_and_wrench-train)
  - [Dataset Preparation](#dataset-preparation)
  - [Example Train Script](#example-train-script)
- [:1st_place_medal: Benchmark Performances and Model Zoo](#1st_place_medal-benchmark-performances-and-model-zoo)
  - [Results Calibration](#results-calibration)
  - [Performance Evaluation Protocol](#performance-evaluation-protocol)
  - [Benchmark Performance with Provided Script](#benchmark-performance-with-provided-script)

## :open_book: Introduction

This is an image quality assessment toolbox written in **pure Python and PyTorch**. We provide reimplementations of many mainstream full-reference (FR) and no-reference (NR) metrics (results are calibrated against the official MATLAB scripts where they exist). **With GPU acceleration, most of our implementations are much faster than MATLAB.** Details of the supported methods and datasets are given below.

<details open>
<summary>Supported methods and datasets:</summary>

<table>
<tr><td>

| FR Method                | Backward           |
| ------------------------ | ------------------ |
| AHIQ                     | :white_check_mark: |
| PieAPP                   | :white_check_mark: |
| LPIPS                    | :white_check_mark: |
| DISTS                    | :white_check_mark: |
| WaDIQaM                  | :white_check_mark: |
| CKDN<sup>[1](#fn1)</sup> | :white_check_mark: |
| FSIM                     | :white_check_mark: |
| SSIM                     | :white_check_mark: |
| MS-SSIM                  | :white_check_mark: |
| CW-SSIM                  | :white_check_mark: |
| PSNR                     | :white_check_mark: |
| VIF                      | :white_check_mark: |
| GMSD                     | :white_check_mark: |
| NLPD                     | :white_check_mark: |
| VSI                      | :white_check_mark: |
| MAD                      | :white_check_mark: |

</td><td>

| NR Method                    | Backward                 |
| ---------------------------- | ------------------------ |
| FID                          | :heavy_multiplication_x: |
| MANIQA                       | :white_check_mark:       |
| MUSIQ                        | :white_check_mark:       |
| DBCNN                        | :white_check_mark:       |
| PaQ-2-PiQ                    | :white_check_mark:       |
| HyperIQA                     | :white_check_mark:       |
| NIMA                         | :white_check_mark:       |
| WaDIQaM                      | :white_check_mark:       |
| CNNIQA                       | :white_check_mark:       |
| NRQM(Ma)<sup>[2](#fn2)</sup> | :heavy_multiplication_x: |
| PI(Perceptual Index)         | :heavy_multiplication_x: |
| BRISQUE                      | :white_check_mark:       |
| ILNIQE                       | :white_check_mark:       |
| NIQE                         | :white_check_mark:       |

<!-- | HOSA                         | :hourglass_flowing_sand: | -->

</td><td>

| Dataset          | Type         |
| ---------------- | ------------ |
| FLIVE(PaQ-2-PiQ) | NR           |
| SPAQ             | NR/mobile    |
| AVA              | NR/Aesthetic |
| PIPAL            | FR           |
| BAPPS            | FR           |
| PieAPP           | FR           |
| KADID-10k        | FR           |
| KonIQ-10k(++)    | NR           |
| LIVEChallenge    | NR           |
| LIVEM            | FR           |
| LIVE             | FR           |
| TID2013          | FR           |
| TID2008          | FR           |
| CSIQ             | FR           |

</td></tr>
</table>

<a name="fn1">[1]</a> This method use distorted image as reference. Please refer to the paper for details.<br>
<a name="fn2">[2]</a> Currently, only naive random forest regression is implemented and **does not** support backward.

</details>
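
To check the claimed GPU speedup on your own machine, you can time a metric directly. A minimal sketch (the image size and iteration count are arbitrary):

```
import time

import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
metric = pyiqa.create_metric('ssim', device=device)

# random RGB batches in [0, 1] with shape (N, 3, H, W)
x = torch.rand(4, 3, 256, 256, device=device)
y = torch.rand(4, 3, 256, 256, device=device)

metric(x, y)  # warm-up run so one-time setup cost is not measured
if device.type == 'cuda':
    torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(10):
    metric(x, y)
if device.type == 'cuda':
    torch.cuda.synchronize()
print(f'avg time per call: {(time.perf_counter() - start) / 10:.4f} s')
```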

---

### :triangular_flag_on_post: Updates/Changelog

- **Sep 1, 2022**. 1) Add pretrained models for MANIQA and AHIQ. 2) Add dataset interfaces for PieAPP and PIPAL.
- **June 3, 2022**. Add FID metric. See [clean-fid](https://github.com/GaParmar/clean-fid) for more details.
- **March 11, 2022**. Add pretrained DBCNN and NIMA, and the official models of PieAPP and PaQ-2-PiQ.
- [**More**](docs/history_changelog.md)

---

### :hourglass_flowing_sand: TODO List

- :white_large_square: Add pretrained models on different datasets.

---

## :zap: Quick Start

### Dependencies and Installation
- Ubuntu >= 18.04
- Python >= 3.8
- PyTorch >= 1.10
- CUDA >= 10.2 (if using GPU)
```
# Install with pip
pip install pyiqa

# Install latest github version
pip uninstall pyiqa # uninstall first if an older version is already installed
pip install git+https://github.com/chaofengc/IQA-PyTorch.git

# Install with git clone
git clone https://github.com/chaofengc/IQA-PyTorch.git
cd IQA-PyTorch
pip install -r requirements.txt
python setup.py develop
```

### Basic Usage 

```
import pyiqa
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# list all available metrics
print(pyiqa.list_models())

# create metric with default settings
iqa_metric = pyiqa.create_metric('lpips', device=device)
# Note that gradient propagation is disabled by default. Set as_loss=True to enable it as a loss function.
iqa_loss = pyiqa.create_metric('lpips', device=device, as_loss=True)

# create metric with custom settings
iqa_metric = pyiqa.create_metric('psnr', test_y_channel=True, color_space='ycbcr').to(device)

# check whether lower is better or higher is better
print(iqa_metric.lower_better)

# example of IQA score inference
# Tensor inputs, img_tensor_x/y: (N, 3, H, W), RGB, values in [0, 1]
score_fr = iqa_metric(img_tensor_x, img_tensor_y)
score_nr = iqa_metric(img_tensor_x)

# image paths as inputs
score_fr = iqa_metric('./ResultsCalibra/dist_dir/I03.bmp', './ResultsCalibra/ref_dir/I03.bmp')

# For the FID metric, use directories or precomputed statistics as inputs
# refer to clean-fid for more details: https://github.com/GaParmar/clean-fid
fid_metric = pyiqa.create_metric('fid')
score = fid_metric('./ResultsCalibra/dist_dir/', './ResultsCalibra/ref_dir')
score = fid_metric('./ResultsCalibra/dist_dir/', dataset_name="FFHQ", dataset_res=1024, dataset_split="trainval70k")
```
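
Since most metrics support backward (see the tables above), a metric created with `as_loss=True` can be used directly as an optimization objective. A minimal sketch that nudges a random image toward a target under LPIPS (the target and learning rate are arbitrary):

```
import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
lpips_loss = pyiqa.create_metric('lpips', device=device, as_loss=True)

target = torch.rand(1, 3, 256, 256, device=device)
pred = torch.rand(1, 3, 256, 256, device=device, requires_grad=True)
optimizer = torch.optim.Adam([pred], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    loss = lpips_loss(pred, target)  # lower LPIPS means more similar
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        pred.clamp_(0, 1)  # keep values in the [0, 1] range the metrics expect
```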


#### Example Test Script

An example test script that takes an input directory/image and a reference directory/image:
```
# example for FR metric with dirs
python inference_iqa.py -m LPIPS[or lpips] -i ./ResultsCalibra/dist_dir[dist_img] -r ./ResultsCalibra/ref_dir[ref_img]

# example for NR metric with single image
python inference_iqa.py -m brisque -i ./ResultsCalibra/dist_dir/I03.bmp
```


## :hammer_and_wrench: Train

### Dataset Preparation

- You only need to unzip the datasets downloaded from their official websites; no extra processing is required. Then create soft links to these dataset folders under the `datasets/` folder. Download links are provided in [Awesome-Image-Quality-Assessment](https://github.com/chaofengc/Awesome-Image-Quality-Assessment).
- We provide a common interface to load these datasets with prepared meta-information files and train/val/test split files, which can be downloaded from [download_link](https://github.com/chaofengc/IQA-PyTorch/releases/download/v0.1-weights/data_info_files.tgz) and extracted to the `datasets/` folder.

You may also use the following commands:

```
mkdir datasets && cd datasets

# make soft links of your dataset
ln -sf your/dataset/path datasetname

# download meta info files and train split files
wget https://github.com/chaofengc/IQA-PyTorch/releases/download/v0.1-weights/data_info_files.tgz
tar -xvf data_info_files.tgz
```
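
With the meta-information files in place, a dataset can then be built from an options dict. The sketch below assumes the BasicSR-style `build_dataset` interface that this codebase borrows; the option keys, class name, and file names are illustrative, so check `./options/default_dataset_opt.yml` for the actual configuration:

```
# a minimal sketch, assuming the BasicSR-style dataset builder;
# keys and paths below are illustrative, not verified against this repo
from pyiqa.data import build_dataset

dataset_opt = {
    'name': 'livechallenge',
    'type': 'GeneralNRDataset',             # dataset class name (assumed)
    'dataroot_target': './datasets/LIVEC',  # soft link created above
    'meta_info_file': './datasets/meta_info/meta_info_LIVEChallengeDataset.csv',  # assumed name
    'phase': 'test',
}
dataset = build_dataset(dataset_opt)
print(len(dataset))
```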

Examples of specific dataset options can be found in `./options/default_dataset_opt.yml`. Details of the dataloader interface and meta-information files can be found in [Dataset Preparation](docs/Dataset_Preparation.md).

### Example Train Script

An example of training DBCNN on the LIVEChallenge dataset:
```
# train for single experiment
python pyiqa/train.py -opt options/train/DBCNN/train_DBCNN.yml

# train N splits for small datasets
python pyiqa/train_nsplits.py -opt options/train/DBCNN/train_DBCNN.yml
```

## :1st_place_medal: Benchmark Performances and Model Zoo

### Results Calibration

Please refer to the [results calibration](./ResultsCalibra/ResultsCalibra.md) to verify the correctness of our Python implementations against the official scripts in MATLAB or Python.
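
As a quick spot check, you can score one of the calibration pairs shipped with the repository and compare the numbers against that document (a minimal sketch, run from the repo root):

```
import pyiqa

# distorted/reference pair from the bundled calibration set
dist = './ResultsCalibra/dist_dir/I03.bmp'
ref = './ResultsCalibra/ref_dir/I03.bmp'

for name in ['psnr', 'ssim', 'lpips']:
    metric = pyiqa.create_metric(name)
    print(name, float(metric(dist, ref)))
```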

### Performance Evaluation Protocol

**We use official models for evaluation if available.** Otherwise, we use the following settings to train and evaluate different models for simplicity and consistency:

| Metric Type | Train | Test | Results | 
| --- | --- | --- | --- |
| FR | KADID-10k | CSIQ, LIVE, TID2008, TID2013 | [FR benchmark results](tests/FR_benchmark_results.csv) |
| NR | KonIQ-10k | LIVEC, KonIQ-10k (official split), TID2013 | [NR benchmark results](tests/NR_benchmark_results.csv) |
| Aesthetic IQA | AVA | AVA (official split)| [IAA benchmark results](tests/IAA_benchmark_results.csv) |

In short, we train on the largest available datasets and report cross-dataset evaluation performance for fair comparison. The following models do not have official weights and were retrained with our scripts:

| Metric Type | Model Names |
| --- | --- | 
| FR |  |
| NR | `dbcnn` |
| Aesthetic IQA | `nima`, `nima-vgg16-ava` |

Notes:
- Due to the optimized training process, the performance of some retrained approaches may be higher than reported in the original papers.
- Results on KonIQ-10k and AVA are both tested with the official splits.
- NIMA is only applicable to the AVA dataset for now. The default `nima` uses `inception_resnet_v2`.
- MUSIQ is not included in the IAA benchmark because we do not have the train/test split information of the official model.
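
For reference, IQA benchmarks such as these are conventionally scored by the correlation between predicted scores and human opinion scores (SRCC/PLCC). A minimal sketch with scipy and toy data:

```
import numpy as np
from scipy import stats

# predicted quality scores and ground-truth MOS for the same images (toy data)
pred = np.array([0.81, 0.42, 0.65, 0.90, 0.33])
mos = np.array([72.0, 40.5, 58.3, 85.1, 30.2])

srcc = stats.spearmanr(pred, mos).correlation  # rank correlation (SRCC)
plcc = stats.pearsonr(pred, mos)[0]            # linear correlation (PLCC)
print(f'SRCC: {srcc:.4f}, PLCC: {plcc:.4f}')
```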

### Benchmark Performance with Provided Script

Here is an example script to benchmark performance on different datasets:
```
# NOTE: this script will test ALL specified metrics on ALL specified datasets
# Test default metrics on default datasets
python benchmark_results.py -m psnr ssim -d csiq tid2013 tid2008

# Test with your own options
python benchmark_results.py -m psnr --data_opt options/example_benchmark_data_opts.yml

python benchmark_results.py --metric_opt options/example_benchmark_metric_opts.yml -d tid2013 tid2008

python benchmark_results.py --metric_opt options/example_benchmark_metric_opts.yml --data_opt options/example_benchmark_data_opts.yml
```
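
The benchmark results linked in the protocol table above are plain CSV files, so they can be inspected directly; a minimal sketch, assuming the repository layout:

```
import pandas as pd

# FR benchmark results shipped with the repository
df = pd.read_csv('tests/FR_benchmark_results.csv')
print(df.head())
```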

## :beers: Contribution

Any contributions to this repository are greatly appreciated. Please follow the [contribution instructions](docs/Instruction.md) for guidance.

## :scroll: License

This work is licensed under a [NTU S-Lab License](https://github.com/chaofengc/IQA-PyTorch/blob/main/LICENSE_NTU-S-Lab) and <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a>

## :bookmark_tabs: Citation

If you find our code helpful to your research, please consider citing it with the following BibTeX entry:

```
@misc{pyiqa,
  title={{IQA-PyTorch}: PyTorch Toolbox for Image Quality Assessment},
  author={Chaofeng Chen and Jiadi Mo},
  year={2022},
  howpublished = "[Online]. Available: \url{https://github.com/chaofengc/IQA-PyTorch}"
}
```

## :heart: Acknowledgement

The code architecture is borrowed from [BasicSR](https://github.com/xinntao/BasicSR). Several implementations are taken from: [IQA-optimization](https://github.com/dingkeyan93/IQA-optimization), [Image-Quality-Assessment-Toolbox](https://github.com/RyanXingQL/Image-Quality-Assessment-Toolbox), [piq](https://github.com/photosynthesis-team/piq), [piqa](https://github.com/francois-rozet/piqa), [clean-fid](https://github.com/GaParmar/clean-fid)

We also thank the following public repositories: [MUSIQ](https://github.com/google-research/google-research/tree/master/musiq), [DBCNN](https://github.com/zwx8981/DBCNN-PyTorch), [NIMA](https://github.com/kentsyx/Neural-IMage-Assessment), [HyperIQA](https://github.com/SSL92/hyperIQA), [CNNIQA](https://github.com/lidq92/CNNIQA), [WaDIQaM](https://github.com/lidq92/WaDIQaM), [PieAPP](https://github.com/prashnani/PerceptualImageError), [paq2piq](https://github.com/baidut/paq2piq), [MANIQA](https://github.com/IIGROUP/MANIQA)

## :e-mail: Contact

If you have any questions, please email `chaofenghust@gmail.com`.



            
