# TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models
[TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models](https://arxiv.org/abs/2306.08013)
Pumjun Kim, Yoojin Jang, [Jisu Kim](https://jkim82133.github.io/), [Jaejun Yoo](https://scholar.google.co.kr/citations?hl=en&user=7NBlQw4AAAAJ)
[Paper](https://arxiv.org/abs/2306.08013) | Project Page (Coming Soon) | Quick Start: [TopPR](https://github.com/LAIT-CVLab/TopPR#quick-start) | [Colab]()
### [News]
* Our **TopP&R** has been accepted to **NeurIPS 2023** 🎉!
## Abstract
> We propose a robust and reliable evaluation metric for generative models
called Topological Precision and Recall (TopP&R, pronounced “topper”), which
systematically estimates supports by retaining only topologically and statistically
significant features with a certain level of confidence. Existing metrics, such as
Inception Score (IS), Fréchet Inception Distance (FID), and various Precision
and Recall (P&R) variants, rely heavily on support estimates derived from sample
features. However, the reliability of these estimates has been overlooked, even
though the quality of the evaluation hinges entirely on their accuracy. In this
paper, we demonstrate that current methods not only fail to accurately assess
sample quality when support estimation is unreliable, but also yield inconsistent
results. In contrast, TopP&R reliably evaluates the sample quality and ensures
statistical consistency in its results. Our theoretical and experimental findings
reveal that TopP&R provides a robust evaluation, accurately capturing the true
trend of change in samples, even in the presence of outliers and non-independent
and identically distributed (Non-IID) perturbations where other methods result in
inaccurate support estimation. To our knowledge, TopP&R is the first evaluation
metric specifically focused on the robust estimation of supports, offering statistical
consistency under noisy conditions.
## Overview of topological precision and recall (TopP&R)
![toppr_overview](https://user-images.githubusercontent.com/102020840/203247514-3f64b9e6-bf74-434e-8c40-c6dfdfec7e59.png)
The proposed metric TopP&R is defined in three steps: (a) confidence band estimation with bootstrapping (Section 2 of our paper),
(b) robust support estimation, and (c) evaluation via TopP&R (Section 3).
## How is TopP&R defined?
We define the precision and recall of data points as
$$\mathrm{precision}_P(\mathcal{Y}) := \frac{\sum_{j=1}^m 1\left(Y_j \in \mathrm{supp}(P) \cap \mathrm{supp}(Q)\right)}{\sum_{j=1}^m 1\left(Y_j \in \mathrm{supp}(Q)\right)}$$
$$\mathrm{recall}_Q(\mathcal{X}) := \frac{\sum_{i=1}^n 1\left(X_i \in \mathrm{supp}(Q) \cap \mathrm{supp}(P)\right)}{\sum_{i=1}^n 1\left(X_i \in \mathrm{supp}(P)\right)}$$
In practice, $supp(P)$ and $supp(Q)$ are not known a priori and must be estimated, and because we allow the data to be noisy,
these estimates should be robust to noise. To this end, we use a kernel density estimator (KDE) together with
the bootstrap bandwidth to robustly estimate the support.
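To make the bootstrap idea concrete, here is a minimal sketch of how a superlevel-set threshold could be obtained: resample the data, refit the KDE, record the sup-norm deviation from the original estimate, and take a quantile. The helper name `bootstrap_threshold` and the scikit-learn `KernelDensity` backend are our assumptions for illustration, not the `top_pr` implementation.
```python
# Hypothetical sketch of bootstrap confidence-band thresholding -- not the top_pr implementation.
import numpy as np
from sklearn.neighbors import KernelDensity

def bootstrap_threshold(X, h, alpha=0.1, n_boot=100, seed=None):
    """Return a superlevel-set threshold c as the (1 - alpha) bootstrap
    quantile of the sup-norm deviation between the KDE and its resampled fits."""
    rng = np.random.default_rng(seed)
    kde = KernelDensity(bandwidth=h).fit(X)
    p_hat = np.exp(kde.score_samples(X))          # KDE evaluated at the data points
    deviations = []
    for _ in range(n_boot):
        Xb = X[rng.integers(0, len(X), len(X))]   # bootstrap resample with replacement
        p_b = np.exp(KernelDensity(bandwidth=h).fit(Xb).score_samples(X))
        deviations.append(np.max(np.abs(p_b - p_hat)))
    return np.quantile(deviations, 1 - alpha)
```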
Using the estimated supports (the superlevel sets at $c_{\mathcal{X}}$ and $c_{\mathcal{Y}}$), we define
the topological precision (TopP) and recall (TopR) as below:
$$\mathrm{TopP}_{\mathcal{X}}(\mathcal{Y}) := \frac{\sum_{j=1}^m 1\left(\hat{p}_{h_n}(Y_j) > c_{\mathcal{X}},\ \hat{q}_{h_m}(Y_j) > c_{\mathcal{Y}}\right)}{\sum_{j=1}^m 1\left(\hat{q}_{h_m}(Y_j) > c_{\mathcal{Y}}\right)}$$
$$\mathrm{TopR}_{\mathcal{Y}}(\mathcal{X}) := \frac{\sum_{i=1}^n 1\left(\hat{q}_{h_m}(X_i) > c_{\mathcal{Y}},\ \hat{p}_{h_n}(X_i) > c_{\mathcal{X}}\right)}{\sum_{i=1}^n 1\left(\hat{p}_{h_n}(X_i) > c_{\mathcal{X}}\right)}$$
The kernel bandwidths $h_n$ and $h_m$ are hyperparameters that users need to choose. We also provide a practical guideline for
selecting them (see Appendix G.4 of our paper).
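Given the estimated densities and thresholds, the formulas above reduce to counting indicator functions. A minimal sketch (the helper name and precomputed-density inputs are our assumptions, not part of the package API):
```python
import numpy as np

def top_precision_recall(p_at_Y, q_at_Y, p_at_X, q_at_X, c_x, c_y):
    """Evaluate TopP and TopR from precomputed KDE values.

    p_at_Y / p_at_X: real-data KDE (p hat, bandwidth h_n) at fake (Y) / real (X) samples.
    q_at_Y / q_at_X: fake-data KDE (q hat, bandwidth h_m) at the same samples.
    c_x, c_y: superlevel-set thresholds from the bootstrap confidence bands.
    """
    # TopP: among fake samples that are topologically significant under q hat,
    # count the fraction that also lie in the estimated real support.
    keep_fake = q_at_Y > c_y
    top_p = np.sum(keep_fake & (p_at_Y > c_x)) / np.sum(keep_fake)
    # TopR: among significant real samples, count the fraction covered by the fake support.
    keep_real = p_at_X > c_x
    top_r = np.sum(keep_real & (q_at_X > c_y)) / np.sum(keep_real)
    return top_p, top_r
```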
# Quick Start
Our package can be installed with a single `pip` command:
```
pip install top-pr
```
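For a quick smoke test, `compute_top_pr` can be called directly on feature arrays. The toy Gaussian features below are placeholders; the argument names and returned keys (`fidelity`, `diversity`, `Top_F1`) mirror the examples in the next section:
```python
import numpy as np
from top_pr import compute_top_pr

# Placeholder features; in practice these would be embeddings of real/generated samples.
real_features = np.random.normal(0, 1, size=(1000, 64))
fake_features = np.random.normal(0, 1, size=(1000, 64))

metrics = compute_top_pr(
    real_features=real_features,
    fake_features=fake_features,
    alpha=0.1,          # confidence level for the bootstrap band
    kernel="cosine",    # kernel used for density estimation
    random_proj=True,   # random linear projection of the features
    f1_score=True,      # also report the combined Top_F1 score
)
print(metrics["fidelity"], metrics["diversity"], metrics["Top_F1"])
```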
## How to use
In this example, we evaluate a mode drop case. Note that the random seed for the random projection (a linear layer) is fixed in `top_pr/top_pr.py`. To also evaluate with [PRDC](https://github.com/clovaai/generative-evaluation-prdc), install the prdc package first (`pip install prdc`).
```python
# Import packages
import matplotlib.pyplot as plot
import numpy as np

# Toy mode-drop example data
from top_pr import mode_drop

# Metrics
from top_pr import compute_top_pr as TopPR
# For comparison with PRDC ('pip install prdc')
from prdc import compute_prdc
```
### 1. Sequential mode drop experiment
```python
# Evaluation loop
start = 0
for Ratio in [0, 1, 2, 3, 4, 5, 6]:

    # Define the real and fake datasets (ratio controls how many modes are dropped)
    REAL = mode_drop.gaussian_mode_drop(method='sequential', ratio=0)
    FAKE = mode_drop.gaussian_mode_drop(method='sequential', ratio=Ratio)

    # Evaluation with TopP&R
    Top_PR = TopPR(real_features=REAL, fake_features=FAKE, alpha=0.1,
                   kernel="cosine", random_proj=True, f1_score=True)

    # Evaluation with P&R (k=3) and D&C (k=5)
    PR = compute_prdc(REAL, FAKE, 3)
    DC = compute_prdc(REAL, FAKE, 5)

    if start == 0:
        pr = [PR.get('precision'), PR.get('recall')]
        dc = [DC.get('density'), DC.get('coverage')]
        Top_pr = [Top_PR.get('fidelity'), Top_PR.get('diversity'), Top_PR.get('Top_F1')]
        start = 1
    else:
        pr = np.vstack((pr, [PR.get('precision'), PR.get('recall')]))
        dc = np.vstack((dc, [DC.get('density'), DC.get('coverage')]))
        Top_pr = np.vstack((Top_pr, [Top_PR.get('fidelity'), Top_PR.get('diversity'), Top_PR.get('Top_F1')]))

# Visualization of the results
x = [0, 0.17, 0.34, 0.50, 0.67, 0.85, 1]
fig = plot.figure(figsize=(12, 3))
for i in range(1, 3):
    axes = fig.add_subplot(1, 2, i)

    # Fidelity
    if i == 1:
        axes.set_title("Fidelity", fontsize=15)
        plot.ylim([0.5, 1.5])
        plot.plot(x, Top_pr[:, 0], color=[255/255, 110/255, 97/255], linestyle='-', linewidth=3, marker='o', label="TopP")
        plot.plot(x, pr[:, 0], color=[77/255, 110/255, 111/255], linestyle=':', linewidth=3, marker='o', label="precision (k=3)")
        plot.plot(x, dc[:, 0], color=[15/255, 76/255, 130/255], linestyle='-.', linewidth=3, marker='o', label="density (k=5)")
        plot.plot(x, np.linspace(1.0, 1.0, 7), color='black', linestyle=':', linewidth=2)  # ground truth (7 points to match x)
        plot.legend(fontsize=9)

    # Diversity
    elif i == 2:
        axes.set_title("Diversity", fontsize=15)
        plot.plot(x, Top_pr[:, 1], color=[255/255, 110/255, 97/255], linestyle='-', linewidth=3, marker='o', label="TopR")
        plot.plot(x, pr[:, 1], color=[77/255, 110/255, 111/255], linestyle=':', linewidth=3, marker='o', label="recall (k=3)")
        plot.plot(x, dc[:, 1], color=[15/255, 76/255, 130/255], linestyle='-.', linewidth=3, marker='o', label="coverage (k=5)")
        plot.plot(x, np.linspace(1.0, 0.14, 7), color='black', linestyle=':', linewidth=2)  # ground truth decays to ~1/7
        plot.legend(fontsize=9)
```
Running the test code above produces the following figure.
![seq](https://user-images.githubusercontent.com/102020840/214468838-28557fdb-fb0f-49a4-8242-541afd3b7013.png)
### 2. Simultaneous mode drop experiment
```python
# Evaluation loop
start = 0
for Ratio in [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:

    # Define the real and fake datasets (ratio controls how much of each mode is dropped)
    REAL = mode_drop.gaussian_mode_drop(method='simultaneous', ratio=0)
    FAKE = mode_drop.gaussian_mode_drop(method='simultaneous', ratio=Ratio)

    # Evaluation with TopP&R
    Top_PR = TopPR(real_features=REAL, fake_features=FAKE, alpha=0.1,
                   kernel="cosine", random_proj=True, f1_score=True)

    # Evaluation with P&R (k=3) and D&C (k=5)
    PR = compute_prdc(REAL, FAKE, 3)
    DC = compute_prdc(REAL, FAKE, 5)

    if start == 0:
        pr = [PR.get('precision'), PR.get('recall')]
        dc = [DC.get('density'), DC.get('coverage')]
        Top_pr = [Top_PR.get('fidelity'), Top_PR.get('diversity'), Top_PR.get('Top_F1')]
        start = 1
    else:
        pr = np.vstack((pr, [PR.get('precision'), PR.get('recall')]))
        dc = np.vstack((dc, [DC.get('density'), DC.get('coverage')]))
        Top_pr = np.vstack((Top_pr, [Top_PR.get('fidelity'), Top_PR.get('diversity'), Top_PR.get('Top_F1')]))

# Visualization of the results
x = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
fig = plot.figure(figsize=(12, 3))
for i in range(1, 3):
    axes = fig.add_subplot(1, 2, i)

    # Fidelity
    if i == 1:
        axes.set_title("Fidelity", fontsize=15)
        plot.ylim([0.5, 1.5])
        plot.plot(x, Top_pr[:, 0], color=[255/255, 110/255, 97/255], linestyle='-', linewidth=3, marker='o', label="TopP")
        plot.plot(x, pr[:, 0], color=[77/255, 110/255, 111/255], linestyle=':', linewidth=3, marker='o', label="precision (k=3)")
        plot.plot(x, dc[:, 0], color=[15/255, 76/255, 130/255], linestyle='-.', linewidth=3, marker='o', label="density (k=5)")
        plot.plot(x, np.linspace(1.0, 1.0, 11), color='black', linestyle=':', linewidth=2)  # ground truth (11 points to match x)
        plot.legend(fontsize=9)

    # Diversity
    elif i == 2:
        axes.set_title("Diversity", fontsize=15)
        plot.plot(x, Top_pr[:, 1], color=[255/255, 110/255, 97/255], linestyle='-', linewidth=3, marker='o', label="TopR")
        plot.plot(x, pr[:, 1], color=[77/255, 110/255, 111/255], linestyle=':', linewidth=3, marker='o', label="recall (k=3)")
        plot.plot(x, dc[:, 1], color=[15/255, 76/255, 130/255], linestyle='-.', linewidth=3, marker='o', label="coverage (k=5)")
        plot.plot(x, np.linspace(1.0, 0.14, 11), color='black', linestyle=':', linewidth=2)  # ground-truth diversity decay
        plot.legend(fontsize=9)
```
Running the test code above produces the following figure.
![sim](https://user-images.githubusercontent.com/102020840/214467800-e12678d1-96a5-4b92-939b-c2772f1c8023.png)
## Citation
If you find this repository useful for your research, please cite the following work.
```
@article{kim2023topp,
title={TopP\&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models},
author={Kim, Pum Jun and Jang, Yoojin and Kim, Jisu and Yoo, Jaejun},
journal={arXiv preprint arXiv:2306.08013},
year={2023}
}
```