vision-evaluation

Name: vision-evaluation
Version: 0.2.13
Home page: https://github.com/microsoft/vision-evaluation
Summary: Evaluation metric codes for various vision tasks.
Upload time: 2023-02-08 23:07:12
Author: Ping Jin, Shohei Ono
Requires Python: >=3.7
License: MIT
Keywords: vision, metric, evaluation, classification, detection
Requirements: none recorded

# Vision Evaluation

## Introduction

This repo contains the evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification, object detection, image captioning, and image matting.

If you only need the image classification or object detection evaluation pipelines, the Java Runtime Environment (JRE) is not required (see Additional Requirements below).

This repo

- contains the evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification and object detection, and
- defines the contract for metric calculation code in the `Evaluator` class, so that custom evaluators can be brought under the same interface (a usage sketch follows below).

This repo isn't trying to reinvent the wheel; rather, it provides centralized defaults for most metrics across different vision tasks so that dev and research teams can compare model performance on the same footing. As expected, many of the implementations are backed by the well-known scikit-learn and pycocotools libraries.
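
To make the `Evaluator` contract concrete, here is a minimal usage sketch. It assumes the built-in evaluators expose an `add_predictions(predictions, targets)` / `get_report()` pair and that `TopKAccuracyEvaluator` lives under `vision_evaluation.evaluators`; the exact import path, argument layout, and report keys may differ, so treat the names below as illustrative rather than authoritative.

```python
import numpy as np

# Assumed import path and constructor signature; adjust to the installed package layout.
from vision_evaluation.evaluators import TopKAccuracyEvaluator

# Per-class confidences (one row per sample) and ground-truth class indices.
predictions = np.array([[0.1, 0.7, 0.2],
                        [0.6, 0.3, 0.1]])
targets = np.array([1, 2])

evaluator = TopKAccuracyEvaluator(1)             # top-1 accuracy
evaluator.add_predictions(predictions, targets)  # accumulate a batch; can be called repeatedly
print(evaluator.get_report())                    # e.g. {'accuracy_top1': 0.5}
```

A custom evaluator that implements the same pair of methods can then be dropped into the same reporting pipeline alongside the built-in ones.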

## Functionalities

This repo currently offers evaluation metrics for the following vision tasks:

- **Image classification**:
  - `TopKAccuracyEvaluator`: computes the top-k accuracy for multi-class classification problems. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidences (a reference computation is sketched after this list).
  - `ThresholdAccuracyEvaluator`: computes the threshold-based accuracy (mainly for multi-label classification problems), i.e., the accuracy of predictions whose confidence exceeds a given threshold.
  - `AveragePrecisionEvaluator`: computes the average precision, i.e., precision averaged across different confidence thresholds.
  - `PrecisionEvaluator`: computes precision.
  - `RecallEvaluator`: computes recall.
  - `BalancedAccuracyScoreEvaluator`: computes balanced accuracy, i.e., average recall across classes, for multiclass classification.
  - `RocAucEvaluator`: computes Area under the Receiver Operating Characteristic Curve.
  - `F1ScoreEvaluator`: computes the F1 score (precision and recall are reported as well).
  - `EceLossEvaluator`: computes the [ECE loss](https://arxiv.org/pdf/1706.04599.pdf), i.e., the expected calibration error, given the model confidence and true labels for a set of data points.
- **Object detection**:
  - `CocoMeanAveragePrecisionEvaluator`: computes the COCO mean average precision (mAP) across classes, under multiple [IoU](https://en.wikipedia.org/wiki/Jaccard_index) thresholds.
- **Image caption**:
  - `BleuScoreEvaluator`: computes the Bleu score. For more details, refer to [BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf).
  - `METEORScoreEvaluator`: computes the Meteor score. For more details, refer to [Project page](http://www.cs.cmu.edu/~alavie/METEOR/). We use the latest version (1.5) of the [Code](https://github.com/mjdenkowski/meteor).
  - `ROUGELScoreEvaluator`: computes the Rouge-L score. Refer to [ROUGE: A Package for Automatic Evaluation of Summaries](http://anthology.aclweb.org/W/W04/W04-1013.pdf) for more details.
  - `CIDErScoreEvaluator`:  computes the CIDEr score. Refer to [CIDEr: Consensus-based Image Description Evaluation](http://arxiv.org/pdf/1411.5726.pdf) for more details.
  - `SPICEScoreEvaluator`:  computes the SPICE score. Refer to [SPICE: Semantic Propositional Image Caption Evaluation](https://arxiv.org/abs/1607.08822) for more details.
- **Image matting**:
  - `MeanIOUEvaluator`: computes the mean intersection-over-union score.
  - `ForegroundIOUEvaluator`: computes the foreground intersection-over-union score.
  - `BoundaryMeanIOUEvaluator`: computes the boundary mean intersection-over-union score.
  - `BoundaryForegroundIOUEvaluator`: computes the boundary foreground intersection-over-union score.
  - `L1ErrorEvaluator`: computes the L1 error.
- **Image regression**:
  - `MeanLpErrorEvaluator`: computes the mean Lp error (e.g. L1 error for p=1, L2 error for p=2, etc.).
- **Image retrieval**:
  - `RecallAtKEvaluator(k)`: computes Recall@k, i.e., the share of all relevant items that appear in the top-k results (a reference computation is sketched after this list).
  - `PrecisionAtKEvaluator(k)`: computes Precision@k, i.e., the share of the top-k results that are true positives.
  - `MeanAveragePrecisionAtK(k)`: computes [Mean Average Precision@k](https://stackoverflow.com/questions/54966320/mapk-computation), an information retrieval metric.
  - `PrecisionRecallCurveNPointsEvaluator(k)`: computes a Precision-Recall Curve, interpolated at k points and averaged over all samples. 
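
To make the top-k accuracy definition above concrete, here is a small, self-contained reference computation in plain NumPy. It is an illustrative sketch of the metric's definition, not the package's implementation.

```python
import numpy as np

def top_k_accuracy(confidences: np.ndarray, targets: np.ndarray, k: int) -> float:
    """Fraction of samples whose ground-truth label is among the k highest-confidence classes."""
    # Column indices of the k largest confidences in each row (descending order).
    top_k = np.argsort(-confidences, axis=1)[:, :k]
    hits = (top_k == targets[:, None]).any(axis=1)
    return float(hits.mean())

confidences = np.array([[0.1, 0.7, 0.2],   # sample 0: class 1 ranks first
                        [0.6, 0.3, 0.1]])  # sample 1: class 2 ranks last
targets = np.array([1, 2])
print(top_k_accuracy(confidences, targets, k=1))  # 0.5 (only sample 0 is correct)
print(top_k_accuracy(confidences, targets, k=3))  # 1.0 (every label is within the top 3)
```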
  
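Similarly, for the retrieval metrics, the following sketch spells out Recall@k and Precision@k for a single query (plain Python, illustrative only, not the package's implementation):

```python
def recall_at_k(ranked_items, relevant_items, k):
    """Share of all relevant items that appear in the top-k results."""
    return len(set(ranked_items[:k]) & set(relevant_items)) / len(relevant_items)

def precision_at_k(ranked_items, relevant_items, k):
    """Share of the top-k results that are relevant (true positives / k)."""
    return sum(item in set(relevant_items) for item in ranked_items[:k]) / k

ranked = ["a", "b", "c", "d", "e"]  # results ordered by descending score
relevant = {"b", "d", "f"}          # ground-truth relevant items for this query
print(recall_at_k(ranked, relevant, 3))     # 1/3: only "b" of the 3 relevant items is in the top 3
print(precision_at_k(ranked, relevant, 3))  # 1/3: one of the top-3 results is relevant
```
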
While different machine learning problems/applications prefer different metrics, below are some general recommendations:

- **Multiclass classification**: Top-1 Accuracy and Top-5 Accuracy
- **Multilabel classification**: Average Precision, plus Precision/Recall at a chosen k or confidence threshold, where k and the threshold can be very problem-specific
- **Object detection**: mAP@IoU=30 and mAP@IoU=50
- **Image caption**: Bleu, METEOR, ROUGE-L, CIDEr, SPICE
- **Image matting**: Mean IOU, Foreground IOU, Boundary mean IOU, Boundary Foreground IOU, L1 Error
- **Image regression**: Mean L1 Error, Mean L2 Error
- **Image retrieval**: Recall@k, Precision@k, Mean Average Precision@k, Precision-Recall Curve

## Additional Requirements

The image caption evaluators require the Java Runtime Environment (JRE) (Java 1.8.0). The other evaluators do not require it.

            
