# OOD Detection Metrics
Functions for computing metrics commonly used in the field of out-of-distribution (OOD) detection.
<div style="overflow: hidden; display: flex; justify-content:flex-start; gap:10px;">
<a href="https://github.com/tayden/ood-metrics/actions/workflows/tests.yml">
<img height="19px" alt="Tests" src="https://github.com/tayden/ood-metrics/actions/workflows/tests.yml/badge.svg" />
</a>
<a href="https://github.com/tayden/ood-metrics/blob/main/LICENSE">
<img alt="License" src="https://anaconda.org/conda-forge/ood-metrics/badges/license.svg" height="20px" />
</a>
<a href="https://anaconda.org/conda-forge/ood-metrics">
<img alt="Version" src="https://anaconda.org/conda-forge/ood-metrics/badges/version.svg" height="20px" />
</a>
</div>
## Installation
### With pip
`pip install ood-metrics`
### With Conda
`conda install -c conda-forge ood-metrics`
## Metrics functions
### AUROC
Calculate and return the area under the ROC curve using unthresholded predictions on the data and a binary true label.
```python
from ood_metrics import auroc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert auroc(scores, labels) == 0.75
```
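If scikit-learn is available, the same value can be cross-checked against `sklearn.metrics.roc_auc_score`. Note that scikit-learn takes the true labels first, the reverse of the argument order used above:

```python
from sklearn.metrics import roc_auc_score

labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]

# scikit-learn's signature is (y_true, y_score)
print(roc_auc_score(labels, scores))  # → 0.75
```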
### AUPR
Calculate and return the area under the Precision-Recall curve using unthresholded predictions on the data and a binary true label.
```python
from ood_metrics import aupr
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert aupr(scores, labels) == 0.25
```
### FPR @ 95% TPR
Return the FPR when TPR is at least 95%.
```python
from ood_metrics import fpr_at_95_tpr
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert fpr_at_95_tpr(scores, labels) == 0.25
```
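Conceptually, this metric sweeps every score as a candidate threshold and keeps the smallest FPR among thresholds that still achieve the required TPR. A minimal pure-Python sketch of that idea (`fpr_at_tpr` here is a hypothetical helper written for illustration, not part of this package):

```python
def fpr_at_tpr(scores, labels, tpr_level=0.95):
    """Smallest FPR over thresholds whose TPR is at least `tpr_level`.

    A sample is predicted positive when its score >= threshold.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best = 1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        if tp / pos >= tpr_level:
            best = min(best, fp / neg)
    return best

labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
print(fpr_at_tpr(scores, labels))  # → 0.25
```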
### Detection Error
Return the misclassification probability when TPR is 95%.
```python
from ood_metrics import detection_error
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert detection_error(scores, labels) == 0.05
```
### Calculate all stats
Using predictions and labels, return a dictionary containing all novelty detection performance statistics.
```python
from ood_metrics import calc_metrics
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert calc_metrics(scores, labels) == {
'fpr_at_95_tpr': 0.25,
'detection_error': 0.05,
'auroc': 0.75,
'aupr_in': 0.25,
'aupr_out': 0.94375
}
```
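`aupr_in` and `aupr_out` are the two directions of the Precision-Recall AUC: one treats the labelled class as positive, the other treats the complementary class as positive (by flipping the labels and negating the scores). The sketch below reproduces both numbers using scikit-learn's PR-curve utilities; this is an illustration of the convention, not necessarily this package's exact implementation:

```python
from sklearn.metrics import auc, precision_recall_curve

labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]

def pr_auc(y_true, y_score):
    # Trapezoidal area under the Precision-Recall curve
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return auc(recall, precision)

aupr_in = pr_auc(labels, scores)
# Score the complementary class: flip the labels, negate the scores
aupr_out = pr_auc([1 - y for y in labels], [-s for s in scores])

print(aupr_in, aupr_out)  # matches aupr_in=0.25, aupr_out=0.94375 above
```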
## Plotting functions
### Plot ROC
Plot an ROC curve based on unthresholded predictions and true binary labels.
```python
from ood_metrics import plot_roc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_roc(scores, labels)
# Generates a Matplotlib ROC plot
```
### Plot PR
Plot a Precision-Recall curve based on unthresholded predictions and true binary labels.
```python
from ood_metrics import plot_pr
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_pr(scores, labels)
# Generates a Matplotlib Precision-Recall plot
```
### Plot Barcode
Plot a visualization showing inliers and outliers sorted by their predicted novelty score.
```python
from ood_metrics import plot_barcode
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_barcode(scores, labels)
# Shows a visualization of the labels sorted according to their scores.
```
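As a rough illustration of what the barcode conveys, the sketch below (plain Matplotlib, not part of this package) renders the labels as stripes ordered by ascending score, with dark stripes marking outliers. A well-separated detector pushes the dark stripes toward one end:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]

# Sort the labels by ascending score and draw each as a vertical stripe
order = np.argsort(scores)
sorted_labels = np.array(labels)[order]

fig, ax = plt.subplots(figsize=(6, 1))
ax.imshow(sorted_labels[np.newaxis, :], cmap="binary", aspect="auto")
ax.set_yticks([])
ax.set_xlabel("samples ordered by score")
fig.savefig("barcode.png")
```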