# Introduction
`objective-metrics` is a command-line tool for conveniently computing objective quality metrics. You can use it to run calculations over whole datasets on GPU or CPU and track the results.

It supports No-Reference (NR) and Full-Reference (FR) image and video quality assessment (I/VQA) metrics; image metrics can also be applied to videos frame by frame, with the per-frame scores averaged (see the sketch at the end of this section).

Written in **Python** and **PyTorch**. **52** methods are implemented.

Most implementations are based on [IQA-PyTorch](https://github.com/chaofengc/IQA-PyTorch) and [PIQ](https://github.com/photosynthesis-team/piq). Some are taken from the original authors' repositories (see [List of available models](#list-of-available-models)). The VMAF implementation comes from FFmpeg (libvmaf).
See [Homepage](https://github.com/bikingSolo/objective-metrics/tree/main?tab=readme-ov-file#license) for more information.
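For illustration, here is a minimal sketch of the framewise scheme described above, built on [IQA-PyTorch](https://github.com/chaofengc/IQA-PyTorch) (`pyiqa`) and decord, which this package builds on. The metric name and video path are placeholders, and the tool's internal pipeline may differ:

```python
# Hypothetical sketch: score a video with an NR image metric frame by frame,
# then average. Not the tool's internal code; plain pyiqa and decord usage.
import torch
import pyiqa
from decord import VideoReader, cpu

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
metric = pyiqa.create_metric("niqe", device=device)  # any NR IQA metric name

vr = VideoReader("video.mp4", ctx=cpu(0))
scores = []
for i in range(len(vr)):
    frame = vr[i].asnumpy()                          # HWC, uint8, RGB
    t = torch.from_numpy(frame).permute(2, 0, 1)     # -> CHW
    t = t.unsqueeze(0).float().div(255).to(device)   # -> NCHW in [0, 1]
    scores.append(metric(t).item())

video_score = sum(scores) / len(scores)              # framewise average
```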
# Dependencies
* Python: >=3.10,<3.11
* [ffmpeg](https://ffmpeg.org/) (a build with libvmaf is required for VMAF)
* [decord](https://github.com/dmlc/decord) (build decord with GPU support to use NVDEC)
* [CUDA](https://developer.nvidia.com/cuda-toolkit): >= 10.2 (OPTIONAL; required only for GPU use)
* [CuPy](https://docs.cupy.dev/en/stable/index.html) (OPTIONAL; required only to compute SI, CF, and TI on GPU)
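A quick way to sanity-check these dependencies from a Python session (a minimal sketch; nothing here is specific to this package):

```python
# Check that the dependencies listed above are present in the environment.
import shutil
import torch

assert shutil.which("ffmpeg") is not None, "ffmpeg not found on PATH"
print("CUDA available:", torch.cuda.is_available())

try:
    import decord
    print("decord:", decord.__version__)  # a GPU build is needed for NVDEC
except ImportError:
    print("decord is not installed")

try:
    import cupy
    print("cupy:", cupy.__version__)      # only needed for SI/CF/TI on GPU
except ImportError:
    print("cupy is not installed (only needed for SI/CF/TI on GPU)")
```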
# List of available models
## Image models
### NR IQA
| Paper Link | Method | Code |
| ----------- | ---------- | ------------|
| [pdf](https://openaccess.thecvf.com/content_CVPR_2020/papers/Fang_Perceptual_Quality_Assessment_of_Smartphone_Photography_CVPR_2020_paper.pdf)| SPAQ Baseline (spaq-bl) | [PyTorch](https://github.com/h4nwei/SPAQ) |
| [pdf](https://openaccess.thecvf.com/content_CVPR_2020/papers/Fang_Perceptual_Quality_Assessment_of_Smartphone_Photography_CVPR_2020_paper.pdf)| SPAQ MT-A (spaq-mta) | [PyTorch](https://github.com/h4nwei/SPAQ) |
| [pdf](https://openaccess.thecvf.com/content_CVPR_2020/papers/Fang_Perceptual_Quality_Assessment_of_Smartphone_Photography_CVPR_2020_paper.pdf)| SPAQ MT-S (spaq-mts) | [PyTorch](https://github.com/h4nwei/SPAQ) |
| [pdf](https://openaccess.thecvf.com/content_CVPR_2020/papers/Su_Blindly_Assess_Image_Quality_in_the_Wild_Guided_by_a_CVPR_2020_paper.pdf) | HyperIQA (hyperiqa) | [PyTorch](https://github.com/SSL92/hyperIQA) |
| [pdf](https://openaccess.thecvf.com/content_cvpr_2014/papers/Kang_Convolutional_Neural_Networks_2014_CVPR_paper.pdf) | CNNIQA (cnniqa) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2008.03889) | Linearity (linearity) | [PyTorch](https://github.com/lidq92/LinearityIQA) |
| [arXiv](https://arxiv.org/abs/1912.10088) | PaQ2PiQ (paq2piq) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2207.12396) | CLIPIQA (clipiqa) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2207.12396) | CLIPIQA+ (clipiqa+) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/1910.06180) | KonCept512 (koncept512) | [PyTorch](https://github.com/ZhengyuZhao/koniq-PyTorch) |
| [arXiv](https://arxiv.org/abs/2204.08958) | MANIQA (maniqa) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2108.06858) | TReS (tres) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2108.05997) | MUSIQ (musiq) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/1809.07517) | PI (pi) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/1907.02665) | DBCNN (dbcnn) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/1709.05424) | NIMA (nima) | [PyTorch](https://github.com/titu1994/neural-image-assessment) |
| [arXiv](https://arxiv.org/abs/1612.05890) | NRQM (nrqm) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [pdf](https://live.ece.utexas.edu/publications/2015/zhang2015feature.pdf) | ILNIQE (ilniqe) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [pdf](https://live.ece.utexas.edu/publications/2012/TIP%20BRISQUE.pdf) | BRISQUE (brisque) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [pdf](https://live.ece.utexas.edu/publications/2013/mittal2013.pdf) | NIQE (niqe) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2005.13983) | UNIQUE (unique) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [arXiv](https://arxiv.org/abs/2308.03060) | TOPIQ (topiq_nr) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch) |
| [ITU](https://www.itu.int/rec/T-REC-P.910) | Spatial Information (si) | self-made (see the sketch below) |
| [ResearchGate](https://www.researchgate.net/publication/243135534_Measuring_Colourfulness_in_Natural_Images) | Colourfulness (cf) | self-made (see the sketch below) |
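The `si` and `cf` entries above are in-house ("self-made") implementations of two well-known formulas: Spatial Information from ITU-T P.910 and the Hasler and Süsstrunk colourfulness measure. A minimal NumPy/SciPy sketch of those reference formulas; the package's own implementation may differ in kernel normalization and pooling:

```python
# Reference formulas behind the self-made SI and CF metrics (sketch only).
import numpy as np
from scipy.ndimage import sobel

def spatial_information(luma: np.ndarray) -> float:
    """ITU-T P.910 SI for one frame: spatial std-dev of the Sobel-filtered
    luma. For a video, P.910 takes the maximum of per-frame SI over time.
    Note: scipy's sobel kernel is unnormalized, so absolute values depend
    on this convention."""
    f = luma.astype(np.float64)
    grad = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
    return float(grad.std())

def colourfulness(rgb: np.ndarray) -> float:
    """Hasler & Suesstrunk colourfulness for an RGB image (uint8 or float)."""
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return float(sigma + 0.3 * mu)
```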
### FR IQA
> PSNR, SSIM, MS-SSIM, and CW-SSIM are computed on the Y channel in the YUV (YCbCr) color space (see the sketch after this table).
| Paper Link | Method | Code |
| ----------- | ---------- | ------------|
| [arXiv](https://arxiv.org/abs/2308.03060) | TOPIQ (topiq_fr) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [arXiv](https://arxiv.org/abs/2204.10485) | AHIQ (ahiq) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [arXiv](https://arxiv.org/abs/1806.02067) | PieAPP (pieapp) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [arXiv](https://arxiv.org/abs/1801.03924) | LPIPS (lpips) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [arXiv](https://arxiv.org/abs/2004.07728) | DISTS (dists) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [arXiv](https://arxiv.org/abs/2108.07948) | CKDN<sup>[1](#fn1)</sup> (ckdn) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [pdf](https://www4.comp.polyu.edu.hk/~cslzhang/IQA/TIP_IQA_FSIM.pdf) | FSIM (fsim) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [wiki](https://en.wikipedia.org/wiki/Structural_similarity) | SSIM (ssim) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [pdf](https://www.researchgate.net/publication/2931584_Multi-Scale_Structural_Similarity_for_Image_Quality_Assessment) | MS-SSIM (ms_ssim) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [pdf](https://live.ece.utexas.edu/publications/2009/sampat_tip_nov09.pdf) | CW-SSIM (cw_ssim) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [wiki](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio)| PSNR (psnr) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [pdf](https://live.ece.utexas.edu/publications/2004/hrs_ieeetip_2004_imginfo.pdf)| VIF (vif) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [arXiv](https://arxiv.org/abs/1308.3052) | GMSD (gmsd) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [pdf](https://www.uv.es/lapeva/papers/2016_HVEI.pdf) | NLPD (nlpd) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [IEEE Xplore](https://ieeexplore.ieee.org/document/6873260)| VSI (vsi) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [pdf](https://www.researchgate.net/publication/220050520_Most_apparent_distortion_Full-reference_image_quality_assessment_and_the_role_of_strategy) | MAD (mad) | [PyTorch](https://github.com/chaofengc/IQA-PyTorch/tree/main) |
| [IEEE Xplore](https://ieeexplore.ieee.org/document/6467149) | SR-SIM (srsim) | [PyTorch](https://github.com/photosynthesis-team/piq?tab=readme-ov-file) |
| [IEEE Xplore](https://ieeexplore.ieee.org/document/7351172)| DSS (dss) | [PyTorch](https://github.com/photosynthesis-team/piq?tab=readme-ov-file) |
| [arXiv](https://arxiv.org/abs/1607.06140)| HaarPSI (haarpsi) | [PyTorch](https://github.com/photosynthesis-team/piq?tab=readme-ov-file) |
| [arXiv](https://arxiv.org/abs/1608.07433) | MDSI (mdsi) | [PyTorch](https://github.com/photosynthesis-team/piq?tab=readme-ov-file) |
| [pdf](https://www.researchgate.net/publication/317724142_Gradient_magnitude_similarity_deviation_on_multiple_scales_for_color_image_quality_assessment) | MS-GMSD (msgmsd) | [PyTorch](https://github.com/photosynthesis-team/piq?tab=readme-ov-file) |
<a name="fn1">[1]</a> This method use distorted image as reference. Please refer to the paper for details.<br>
### Feature Extractors
| Paper Link | Method | Code |
| ----------- | ---------- | ------------|
| [arXiv](https://arxiv.org/abs/1512.00567) | InceptionV3 (inception_v3) | [PyTorch](https://pytorch.org/vision/stable/index.html) |
## Video models
### NR VQA
| Paper Link | Method | Code |
| ----------- | ---------- | ------------|
| [arXiv](https://arxiv.org/abs/2011.04263) | MDTVSFA (mdtvsfa) | [PyTorch](https://github.com/lidq92/MDTVSFA) |
| [arXiv](https://arxiv.org/abs/2207.02595) | FAST-VQA (FAST-VQA) | [PyTorch](https://github.com/teowu/FAST-VQA-and-FasterVQA) |
| [arXiv](https://arxiv.org/abs/2207.02595) | FasterVQA (FasterVQA) | [PyTorch](https://github.com/teowu/FAST-VQA-and-FasterVQA) |
| [arXiv](https://arxiv.org/abs/2211.04894) | DOVER (dover) | [PyTorch](https://github.com/VQAssessment/DOVER) |
| [ITU](https://www.itu.int/rec/T-REC-P.910) | Temporal Information (ti) | self-made (see the sketch below) |
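As with SI, the `ti` entry is an in-house implementation of ITU-T P.910 Temporal Information: the maximum over time of the spatial standard deviation of the luma difference between successive frames. A minimal NumPy sketch; the package's implementation may differ:

```python
# Reference formula behind the self-made TI metric (sketch only).
import numpy as np

def temporal_information(luma_frames: np.ndarray) -> float:
    """luma_frames: (T, H, W) luma stack.
    TI = max over n of std_space(F_n - F_{n-1})   (ITU-T P.910)."""
    diffs = np.diff(luma_frames.astype(np.float64), axis=0)
    return float(diffs.std(axis=(1, 2)).max())
```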
### FR VQA
| Paper Link | Method | Code |
| ----------- | ---------- | ------------|
| [wiki](https://en.wikipedia.org/wiki/Video_Multimethod_Assessment_Fusion) | VMAF (vmaf) | [FFmpeg libvmaf](https://github.com/Netflix/vmaf) (see the sketch below) |
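VMAF is computed through FFmpeg's `libvmaf` filter. An illustrative standalone invocation via `subprocess`; the exact command the tool issues internally is not documented here, and the input order (distorted first, reference second) follows recent FFmpeg documentation and may differ on older FFmpeg versions:

```python
# Illustrative VMAF run through FFmpeg's libvmaf filter (assumed file names).
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "distorted.mp4",    # main input: distorted video
        "-i", "reference.mp4",    # second input: reference video
        "-lavfi", "libvmaf=log_fmt=json:log_path=vmaf.json",
        "-f", "null", "-",
    ],
    check=True,
)
# Per-frame and pooled VMAF scores end up in vmaf.json.
```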
# License
This project is licensed under the MIT License. However, it also includes code distributed under the BSD+Patent license.