<p align="center"><img src="https://raw.githubusercontent.com/francois-rozet/piqa/master/docs/images/banner.svg" width="80%"></p>
# PyTorch Image Quality Assessment
PIQA is a collection of PyTorch metrics for image quality assessment in various image processing tasks such as generation, denoising, super-resolution, interpolation, etc. It focuses on the efficiency, conciseness and understandability of its (sub-)modules, such that anyone can easily reuse and/or adapt them to their needs.
> PIQA should be pronounced *pika* (like Pikachu ⚡️)
## Installation
The `piqa` package is available on [PyPI](https://pypi.org/project/piqa), which means it is installable via `pip`.
```
pip install piqa
```
Alternatively, if you need the latest features, you can install it from the repository.
```
pip install git+https://github.com/francois-rozet/piqa
```
## Getting started
In `piqa`, each metric is associated with a class, a subclass of `torch.nn.Module`, which has to be instantiated to evaluate the metric. All metrics are differentiable and support both CPU and GPU (CUDA).
```python
import torch
import piqa

# PSNR
x = torch.rand(5, 3, 256, 256)  # batches of images with values in [0, 1]
y = torch.rand(5, 3, 256, 256)

psnr = piqa.PSNR()
l = psnr(x, y)

# SSIM
x = torch.rand(5, 3, 256, 256, requires_grad=True).cuda()
y = torch.rand(5, 3, 256, 256).cuda()

ssim = piqa.SSIM().cuda()
l = 1 - ssim(x, y)  # SSIM is a similarity, so 1 - SSIM acts as a loss
l.backward()
```
Like the built-in components of `torch.nn`, these classes are based on functional definitions of the metrics, which are less user-friendly but more versatile.
```python
from piqa.ssim import ssim
from piqa.utils.functional import gaussian_kernel

# Gaussian kernel (window size 11, sigma 1.5), repeated for the 3 channels
# and moved to the same device as the inputs
kernel = gaussian_kernel(11, sigma=1.5).repeat(3, 1, 1).to(x.device)

ss, cs = ssim(x, y, kernel=kernel)
```
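Whatever the exact shape of the values returned by the functional interface, they can be reduced to a scalar loss manually. The snippet below is a minimal sketch of this, reusing `x`, `y` and `ss` from the examples above; averaging mirrors the default `'mean'` reduction of the class interface.

```python
# Average the unreduced SSIM values into a single scalar loss.
l = 1 - ss.mean()
l.backward()
```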
For more information, check out the documentation at [piqa.readthedocs.io](https://piqa.readthedocs.io).
### Available metrics
| Class | Range | Objective | Year | Metric |
|:---------:|:------:|:---------:|:----:|------------------------------------------------------------------------------------------------------|
| `TV` | [0, ∞] | / | 1937 | [Total Variation](https://en.wikipedia.org/wiki/Total_variation) |
| `PSNR` | [0, ∞] | max | / | [Peak Signal-to-Noise Ratio](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio) |
| `SSIM` | [0, 1] | max | 2004 | [Structural Similarity](https://en.wikipedia.org/wiki/Structural_similarity) |
| `MS_SSIM` | [0, 1] | max | 2004 | [Multi-Scale Structural Similarity](https://ieeexplore.ieee.org/document/1292216/) |
| `LPIPS` | [0, ∞] | min | 2018 | [Learned Perceptual Image Patch Similarity](https://arxiv.org/abs/1801.03924) |
| `GMSD` | [0, ∞] | min | 2013 | [Gradient Magnitude Similarity Deviation](https://arxiv.org/abs/1308.3052) |
| `MS_GMSD` | [0, ∞] | min | 2017 | [Multi-Scale Gradient Magnitude Similarity Deviation](https://ieeexplore.ieee.org/document/7952357) |
| `MDSI` | [0, ∞] | min | 2016 | [Mean Deviation Similarity Index](https://arxiv.org/abs/1608.07433) |
| `HaarPSI` | [0, 1] | max | 2018 | [Haar Perceptual Similarity Index](https://arxiv.org/abs/1607.06140) |
| `VSI` | [0, 1] | max | 2014 | [Visual Saliency-based Index](https://ieeexplore.ieee.org/document/6873260) |
| `FSIM` | [0, 1] | max | 2011 | [Feature Similarity](https://ieeexplore.ieee.org/document/5705575) |
| `FID` | [0, ∞] | min | 2017 | [Fréchet Inception Distance](https://arxiv.org/abs/1706.08500) |
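The *Objective* column indicates whether a higher or lower value means better quality. As a rule of thumb (a minimal sketch, not taken from the documentation), metrics bounded in [0, 1] whose objective is *max* can be turned into losses as `1 - metric`, while *min* metrics can be used as losses directly.

```python
import torch
import piqa

x = torch.rand(5, 3, 256, 256, requires_grad=True)
y = torch.rand(5, 3, 256, 256)

# "max" metrics in [0, 1] (e.g. SSIM, HaarPSI): higher is better, so use 1 - metric.
ssim_loss = 1 - piqa.SSIM()(x, y)

# "min" metrics (e.g. GMSD, MDSI): lower is better, so use the value directly.
gmsd_loss = piqa.GMSD()(x, y)

(ssim_loss + gmsd_loss).backward()
```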
### Tracing
All metrics of `piqa` support [PyTorch's tracing](https://pytorch.org/docs/stable/generated/torch.jit.trace.html), which optimizes their execution, especially on GPU.
```python
ssim = piqa.SSIM().cuda()
ssim_traced = torch.jit.trace(ssim, (x, y))
l = 1 - ssim_traced(x, y) # should be faster ¯\_(ツ)_/¯
```
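To verify the claim on your own hardware, a quick timing comparison can be sketched as follows. This assumes `x`, `y` and a CUDA device as in the examples above; the `timeit` helper is ad hoc, not part of PIQA.

```python
import time
import torch

def timeit(f, n=100):
    # Time n calls, synchronizing the GPU around the measurement.
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(n):
            f(x, y)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n

print('eager :', timeit(ssim))
print('traced:', timeit(ssim_traced))
```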
### Assert
PIQA uses type assertions to raise meaningful messages when a metric doesn't receive an input of the expected type. This feature greatly eases early prototyping and debugging, but it can slightly hurt performance. If you need the best possible performance, the assertions can be disabled with the Python flag [`-O`](https://docs.python.org/3/using/cmdline.html#cmdoption-o). For example,
```
python -O your_awesome_code_using_piqa.py
```
Alternatively, you can disable PIQA's type assertions within your code with
```python
piqa.utils.set_debug(False)
```
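One possible pattern (a sketch, not an official recommendation) is to keep the assertions for a first sanity-check evaluation and disable them before the performance-critical part.

```python
import torch
import piqa

ssim = piqa.SSIM()

x = torch.rand(5, 3, 256, 256)
y = torch.rand(5, 3, 256, 256)

# First call with assertions enabled, as a sanity check of shapes and ranges.
ssim(x, y)

# Then disable the assertions for repeated, performance-critical evaluations.
piqa.utils.set_debug(False)

for _ in range(100):
    ssim(x, y)
```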
## Contributing
If you have a question, an issue or would like to contribute, please read our [contributing guidelines](https://github.com/francois-rozet/piqa/blob/master/CONTRIBUTING.md).