visionmetrics


Name: visionmetrics
Version: 0.0.12
Home page: https://github.com/microsoft/visionmetrics
Summary: Evaluation metric codes for various vision tasks.
Upload time: 2024-04-30 21:52:17
Maintainer: None
Docs URL: None
Author: Microsoft
Requires Python: >=3.8
License: MIT
# visionmetrics

This repo contains evaluation metrics for vision tasks such as classification, object detection, image captioning, and image matting. It uses [torchmetrics](https://github.com/Lightning-AI/torchmetrics) as a base library and extends it to support custom vision tasks as necessary.

## Available Metrics

### Image Classification:
  - `Accuracy`: computes the top-k accuracy for a classification problem. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidence.
  - `PrecisionEvaluator`: computes precision.
  - `RecallEvaluator`: computes recall.
  - `AveragePrecisionEvaluator`: computes the average precision, i.e., precision averaged across different confidence thresholds. 
  - `AUCROC`: computes Area under the Receiver Operating Characteristic Curve.
  - `F1Score`: computes the F1 score.
  - `CalibrationLoss`<sup>**</sup>: computes the [ECE loss](https://arxiv.org/pdf/1706.04599.pdf), i.e., the expected calibration error, given the model confidence and true labels for a set of data points.
  - `ConfusionMatrix`: computes the confusion matrix of a classification. By definition, a confusion matrix C is such that C_ij equals the number of observations known to be in group i and predicted to be in group j ([Wikipedia](https://en.wikipedia.org/wiki/Confusion_matrix)).
  - `ExactMatch`: computes the exact match score, i.e., the percentage of samples where the predicted label is exactly the same as the ground truth label.

The above metrics are available for Binary, Multiclass, and Multilabel classification tasks. For example, `BinaryAccuracy` is the binary version of `Accuracy` and `MultilabelAccuracy` is the multilabel version of `Accuracy`. Please refer to the example usage below for more details.

<sup>**</sup> The `CalibrationLoss` metric is only for binary and multiclass classification tasks.
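
For illustration, here is a minimal sketch of the binary and multilabel variants. It assumes these classes accept the same constructor arguments as their torchmetrics counterparts and live under the `visionmetrics.classification` module used in the example below; treat the exact signatures as assumptions rather than confirmed API.

```python
import torch
from visionmetrics.classification import BinaryAccuracy, MultilabelAccuracy  # assumed to mirror torchmetrics

# Binary task: probabilities for the positive class and 0/1 targets.
binary_acc = BinaryAccuracy()
binary_acc.update(torch.rand(8), torch.randint(0, 2, (8,)))
print(binary_acc.compute())

# Multilabel task: one probability and one 0/1 target per label and sample.
multilabel_acc = MultilabelAccuracy(num_labels=5)
multilabel_acc.update(torch.rand(8, 5), torch.randint(0, 2, (8, 5)))
print(multilabel_acc.compute())
```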

### Object Detection:
- `MeanAveragePrecision`: COCO mean average precision (mAP) computed across classes at multiple [IoU](https://en.wikipedia.org/wiki/Jaccard_index) thresholds.
- `ClassAgnosticAveragePrecision`: COCO mean average precision (mAP) computed in a class-agnostic manner, i.e., all classes are treated as a single class.
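
A minimal detection sketch follows. Both the `visionmetrics.detection` module path and the input format (one dict per image, borrowed from the torchmetrics detection API) are assumptions here, not confirmed visionmetrics API.

```python
import torch
from visionmetrics.detection import MeanAveragePrecision  # assumed import path

# Assumed torchmetrics-style format: one dict per image with 'boxes' (xyxy),
# 'scores', and 'labels' for predictions; 'boxes' and 'labels' for targets.
preds = [{'boxes': torch.tensor([[10., 20., 50., 60.]]),
          'scores': torch.tensor([0.9]),
          'labels': torch.tensor([0])}]
target = [{'boxes': torch.tensor([[12., 22., 48., 58.]]),
           'labels': torch.tensor([0])}]

metric = MeanAveragePrecision()
metric.update(preds, target)
print(metric.compute())
```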

### Image Caption:
  - `BleuScore`: computes the BLEU score. For more details, refer to [BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf).
  - `METEORScore`: computes the METEOR score. For more details, refer to the [project page](http://www.cs.cmu.edu/~alavie/METEOR/). We use the latest version (1.5) of the [code](https://github.com/mjdenkowski/meteor).
  - `ROUGELScore`: computes the ROUGE-L score. Refer to [ROUGE: A Package for Automatic Evaluation of Summaries](http://anthology.aclweb.org/W/W04/W04-1013.pdf) for more details.
  - `CIDErScore`: computes the CIDEr score. Refer to [CIDEr: Consensus-based Image Description Evaluation](http://arxiv.org/pdf/1411.5726.pdf) for more details.
  - `SPICEScore`: computes the SPICE score. Refer to [SPICE: Semantic Propositional Image Caption Evaluation](https://arxiv.org/abs/1607.08822) for more details.
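
A hypothetical caption-scoring sketch: the `visionmetrics.caption` module path and the prediction/reference format shown here (one predicted caption per image, a list of reference captions per image) are illustrative assumptions only.

```python
from visionmetrics.caption import BleuScore  # assumed import path

# Hypothetical format: one predicted caption per image and a list of
# reference captions per image.
predictions = ['a dog running on the beach']
references = [['a dog runs along the beach', 'a brown dog on the sand']]

metric = BleuScore()
metric.update(predictions, references)
print(metric.compute())
```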

### Image Matting:
  - `MeanIOU`: computes the mean intersection-over-union score.
  - `ForegroundIOU`: computes the foreground intersection-over-union score.
  - `BoundaryMeanIOU`: computes the boundary mean intersection-over-union score.
  - `BoundaryForegroundIOU`: computes the boundary foreground intersection-over-union score.
  - `L1Error`: computes the L1 error.
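
A hypothetical matting sketch: the `visionmetrics.matting` module path and the single-channel alpha-matte input format are illustrative assumptions only.

```python
import torch
from visionmetrics.matting import MeanIOU  # assumed import path

# Hypothetical format: predicted and ground-truth alpha mattes as
# single-channel tensors with values in [0, 1].
pred_matte = torch.rand(1, 256, 256)
gt_matte = torch.rand(1, 256, 256)

metric = MeanIOU()
metric.update(pred_matte, gt_matte)
print(metric.compute())
```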

### Regression:
  - `MeanSquaredError`: computes the mean squared error. 
  - `MeanAbsoluteError`: computes the mean absolute error.
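
A minimal regression sketch, assuming a `visionmetrics.regression` module whose metrics mirror the torchmetrics classes of the same names:

```python
import torch
from visionmetrics.regression import MeanSquaredError  # assumed import path

preds = torch.tensor([2.5, 0.0, 2.1])
target = torch.tensor([3.0, -0.5, 2.0])

metric = MeanSquaredError()
metric.update(preds, target)
print(metric.compute())  # mean of the squared differences
```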

### Retrieval:
  - `RetrievalRecall`: computes Recall@k, the percentage of all relevant items that appear among the top-k results.
  - `RetrievalPrecision`: computes Precision@k, the percentage of the top-k results that are relevant (true positives).
  - `RetrievalMAP`: computes [Mean Average Precision@k](https://stackoverflow.com/questions/54966320/mapk-computation), an information retrieval metric.
  - `RetrievalPrecisionRecallCurveNPoints`: computes a Precision-Recall Curve, interpolated at k points and averaged over all samples. 
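
A minimal retrieval sketch, assuming a `visionmetrics.retrieval` module that follows the torchmetrics retrieval API, in which `indexes` groups the candidate items by query:

```python
import torch
from visionmetrics.retrieval import RetrievalRecall  # assumed import path

# Two queries (index 0 and 1), each with three candidates. `preds` are
# relevance scores and `target` marks which candidates are actually relevant.
indexes = torch.tensor([0, 0, 0, 1, 1, 1])
preds = torch.tensor([0.9, 0.2, 0.4, 0.1, 0.8, 0.6])
target = torch.tensor([True, False, True, False, True, True])

metric = RetrievalRecall(top_k=2)
metric.update(preds, target, indexes=indexes)
print(metric.compute())
```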

### Grounding
  - `Recall`: computes Recall@k, the percentage of relevant items that are correctly grounded within the top-k predictions.

## Example Usage

```python
import torch
from visionmetrics.classification import MulticlassAccuracy

# Random per-class confidence scores for 10 samples and their integer class labels
preds = torch.rand(10, 10)
target = torch.randint(0, 10, (10,))

# Initialize metric
metric = MulticlassAccuracy(num_classes=10, top_k=1, average='macro')

# Add batch of predictions and targets
metric.update(preds, target)

# Compute metric
result = metric.compute()
```
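
Because the metrics inherit the stateful torchmetrics API, `update` can be called once per batch and `compute` aggregates everything seen since the last `reset`. Continuing the example above:

```python
import torch
from visionmetrics.classification import MulticlassAccuracy

metric = MulticlassAccuracy(num_classes=10, top_k=1, average='macro')

# Feed the metric batch by batch; state accumulates across update() calls.
for _ in range(5):
    batch_preds = torch.rand(16, 10)
    batch_target = torch.randint(0, 10, (16,))
    metric.update(batch_preds, batch_target)

result = metric.compute()  # aggregated over all five batches
metric.reset()             # clear accumulated state before the next run
```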

## Implementing Custom Metrics
Please refer to [torchmetrics](https://github.com/Lightning-AI/torchmetrics#implementing-your-own-module-metric) for more details on how to implement custom metrics.
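
As a rough illustration of that pattern, a custom metric subclasses `torchmetrics.Metric`, registers its state with `add_state`, and implements `update` and `compute`. The class below is a toy example, not part of visionmetrics:

```python
import torch
from torchmetrics import Metric


class MyAccuracy(Metric):
    """Toy custom metric following the torchmetrics module-metric pattern."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Registered states are reset by reset() and synced across processes.
        self.add_state('correct', default=torch.tensor(0), dist_reduce_fx='sum')
        self.add_state('total', default=torch.tensor(0), dist_reduce_fx='sum')

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        preds = preds.argmax(dim=-1)
        self.correct += (preds == target).sum()
        self.total += target.numel()

    def compute(self) -> torch.Tensor:
        return self.correct.float() / self.total
```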


## Additional Requirements

The image caption metrics require a Java Runtime Environment (JRE) (Java 1.8.0) and some extra dependencies, which can be installed with `pip install visionmetrics[caption]`. Neither the JRE nor these extras are needed for the other evaluators.

            
