| Field | Value |
| --- | --- |
| Name | jurity |
| Version | 2.1.0 |
| Home page | https://github.com/fidelity/jurity |
| Summary | fairness and evaluation library |
| Upload time | 2024-09-13 21:53:21 |
| Maintainer | None |
| Author | FMR LLC |
| Requires Python | >=3.8 |
| License | None |
| Requirements | none recorded |
[![ci](https://github.com/fidelity/jurity/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/fidelity/jurity/actions/workflows/ci.yml) [![PyPI version fury.io](https://badge.fury.io/py/jurity.svg)](https://pypi.python.org/pypi/jurity/) [![PyPI license](https://img.shields.io/pypi/l/jurity.svg)](https://pypi.python.org/pypi/jurity/) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) [![Downloads](https://static.pepy.tech/personalized-badge/jurity?period=total&units=international_system&left_color=grey&right_color=orange&left_text=Downloads)](https://pepy.tech/project/jurity)
# Jurity: Fairness & Evaluation Library
Jurity ([LION'23](https://link.springer.com/chapter/10.1007/978-3-031-44505-7_29), [ICMLA'21](https://ieeexplore.ieee.org/document/9680169)) is a research library
that provides fairness metrics, recommender system evaluations, classification metrics, and bias mitigation techniques.
The library adheres to PEP-8 standards and is heavily tested.
Jurity is developed by the Artificial Intelligence Center of Excellence at Fidelity Investments.
Documentation is available at [fidelity.github.io/jurity](https://fidelity.github.io/jurity).
## Fairness Metrics
* [Average Odds](https://fidelity.github.io/jurity/about_fairness.html#average-odds)
* [Disparate Impact](https://fidelity.github.io/jurity/about_fairness.html#disparate-impact)
* [Equal Opportunity](https://fidelity.github.io/jurity/about_fairness.html#equal-opportunity)
* [False Negative Rate (FNR) Difference](https://fidelity.github.io/jurity/about_fairness.html#fnr-difference)
* [False Omission Rate (FOR) Difference](https://fidelity.github.io/jurity/about_fairness.html#for-difference)
* [Generalized Entropy Index](https://fidelity.github.io/jurity/about_fairness.html#generalized-entropy-index)
* [Predictive Equality](https://fidelity.github.io/jurity/about_fairness.html#predictive-equality)
* [Statistical Parity](https://fidelity.github.io/jurity/about_fairness.html#statistical-parity)
* [Theil Index](https://fidelity.github.io/jurity/about_fairness.html#theil-index)
## Binary Bias Mitigation Techniques
* [Equalized Odds](https://fidelity.github.io/jurity/about_fairness.html#equalized-odds)
## Recommenders Metrics
* [AUC: Area Under the Curve](https://fidelity.github.io/jurity/about_reco.html#auc-area-under-the-curve)
* [CTR: Click-through rate](https://fidelity.github.io/jurity/about_reco.html#ctr-click-through-rate)
* [DR: Doubly robust estimation](https://fidelity.github.io/jurity/about_reco.html#ctr-click-through-rate)
* [IPS: Inverse propensity scoring](https://fidelity.github.io/jurity/about_reco.html#ctr-click-through-rate)
* [MAP@K: Mean Average Precision](https://fidelity.github.io/jurity/about_reco.html#map-mean-average-precision)
* [NDCG: Normalized discounted cumulative gain](https://fidelity.github.io/jurity/about_reco.html#ndcg-normalized-discounted-cumulative-gain)
* [Precision@K](https://fidelity.github.io/jurity/about_reco.html#precision)
* [Recall@K](https://fidelity.github.io/jurity/about_reco.html#recall)
* [Inter-List Diversity@K](https://fidelity.github.io/jurity/about_reco.html#inter-list-diversity)
* [Intra-List Diversity@K](https://fidelity.github.io/jurity/about_reco.html#intra-list-diversity)
## Classification Metrics
* [Accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html)
* [AUC](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score)
* [F1 Score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html)
* [Precision](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html)
* [Recall](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html)
## Quick Start: Fairness Evaluation
```python
# Import binary and multi-class fairness metrics
from jurity.fairness import BinaryFairnessMetrics, MultiClassFairnessMetrics
# Data
binary_predictions = [1, 1, 0, 1, 0, 0]
multi_class_predictions = ["a", "b", "c", "b", "a", "a"]
multi_class_multi_label_predictions = [["a", "b"], ["b", "c"], ["b"], ["a", "b"], ["c", "a"], ["c"]]
memberships = [0, 0, 0, 1, 1, 1]
classes = ["a", "b", "c"]
# Metrics (see also other available metrics)
metric = BinaryFairnessMetrics.StatisticalParity()
multi_metric = MultiClassFairnessMetrics.StatisticalParity(classes)
# Scores
print("Metric:", metric.description)
print("Lower Bound: ", metric.lower_bound)
print("Upper Bound: ", metric.upper_bound)
print("Ideal Value: ", metric.ideal_value)
print("Binary Fairness score: ", metric.get_score(binary_predictions, memberships))
print("Multi-class Fairness scores: ", multi_metric.get_scores(multi_class_predictions, memberships))
print("Multi-class multi-label Fairness scores: ", multi_metric.get_scores(multi_class_multi_label_predictions, memberships))
```
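To make the score interpretable, statistical parity for the quick-start data above can be computed by hand. This is a plain-Python sketch, not a jurity API, and it assumes the standard definition: the difference in positive prediction rates between the protected group (membership 1) and the rest.

```python
binary_predictions = [1, 1, 0, 1, 0, 0]
memberships = [0, 0, 0, 1, 1, 1]

def positive_rate(predictions, members, group):
    # Fraction of samples in `group` that received a positive prediction
    preds = [p for p, m in zip(predictions, members) if m == group]
    return sum(preds) / len(preds)

parity = (positive_rate(binary_predictions, memberships, 1)
          - positive_rate(binary_predictions, memberships, 0))
print(parity)  # 1/3 - 2/3 = -1/3
```

A score of zero would mean both groups receive positive predictions at the same rate.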
## Quick Start: Probabilistic Fairness Evaluation
What if we do not know the protected membership attribute of each sample? We refer to this practical scenario as _probabilistic_ fairness evaluation.
At a high level, instead of a strict 0/1 deterministic membership at the individual level, consider the probability of membership to each protected class for each sample.
An easy baseline is to convert these probabilities back to the deterministic setting by taking the maximum-likelihood class as the protected membership. This is problematic, as the goal is not to predict membership but to evaluate fairness.
Taking this a step further, while we do not have membership information at the individual level, consider access to _surrogate membership_ at the _group level_. We can then infer the fairness metrics directly.
Jurity offers both options to address the case where membership data is missing. We provide an in-depth study and formal treatment in [Surrogate Membership for Inferred Metrics in Fairness Evaluation (LION 2023)](https://link.springer.com/chapter/10.1007/978-3-031-44505-7_29).
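The maximum-likelihood baseline mentioned above can be sketched in plain Python (this is an illustration, not a jurity API): collapse each probability vector to a single deterministic label via argmax, which discards the uncertainty information.

```python
# Membership likelihoods for four samples over two protected classes
memberships = [[0.2, 0.8], [0.4, 0.6], [0.2, 0.8], [0.9, 0.1]]

# Argmax over each likelihood vector yields a deterministic label
deterministic = [max(range(len(p)), key=p.__getitem__) for p in memberships]
print(deterministic)  # [1, 1, 1, 0]
```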
```python
from jurity.fairness import BinaryFairnessMetrics
# Instead of 0/1 deterministic membership at individual level
# consider likelihoods of membership to protected classes for each sample
binary_predictions = [1, 1, 0, 1]
memberships = [[0.2, 0.8], [0.4, 0.6], [0.2, 0.8], [0.9, 0.1]]
# Metric
metric = BinaryFairnessMetrics.StatisticalParity()
print("Binary Fairness score: ", metric.get_score(binary_predictions, memberships))
# Surrogate membership: consider access to surrogate membership at the group level.
surrogates = [0, 2, 0, 1]
print("Binary Fairness score: ", metric.get_score(binary_predictions, memberships, surrogates))
```
## Quick Start: Bias Mitigation
```python
# Import binary fairness and binary bias mitigation
from jurity.mitigation import BinaryMitigation
from jurity.fairness import BinaryFairnessMetrics
# Data
labels = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]
likelihoods = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.1]
is_member = [0, 0, 0, 0, 1, 1, 1, 1]
# Bias Mitigation
mitigation = BinaryMitigation.EqualizedOdds()
# Training: Learn mixing rates from the labeled data
mitigation.fit(labels, predictions, likelihoods, is_member)
# Testing: Mitigate bias in predictions
fair_predictions, fair_likelihoods = mitigation.transform(predictions, likelihoods, is_member)
# Scores: Fairness before and after
print("Fairness Metrics Before:", BinaryFairnessMetrics().get_all_scores(labels, predictions, is_member), '\n'+30*'-')
print("Fairness Metrics After:", BinaryFairnessMetrics().get_all_scores(labels, fair_predictions, is_member))
```
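Equalized odds requires the true-positive and false-positive rates to match across groups. A quick per-group check on the quick-start data (a plain-Python sketch, not a jurity API) shows why mitigation is needed here:

```python
labels      = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]
is_member   = [0, 0, 0, 0, 1, 1, 1, 1]

def group_rates(group):
    # True-positive and false-positive rates restricted to one group
    idx = [i for i, m in enumerate(is_member) if m == group]
    tp = sum(1 for i in idx if labels[i] == 1 and predictions[i] == 1)
    pos = sum(1 for i in idx if labels[i] == 1)
    fp = sum(1 for i in idx if labels[i] == 0 and predictions[i] == 1)
    neg = sum(1 for i in idx if labels[i] == 0)
    return tp / pos, fp / neg

print(group_rates(0))  # TPR=1/3, FPR=0.0
print(group_rates(1))  # TPR=1.0, FPR=2/3 -- far from equalized
```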
## Quick Start: Recommenders Evaluation
```python
# Import recommenders metrics
from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics, DiversityRecoMetrics
import pandas as pd
# Data
actual = pd.DataFrame({"user_id": [1, 2, 3, 4], "item_id": [1, 2, 0, 3], "clicks": [0, 1, 0, 0]})
predicted = pd.DataFrame({"user_id": [1, 2, 3, 4], "item_id": [1, 2, 2, 3], "clicks": [0.8, 0.7, 0.8, 0.7]})
item_features = pd.DataFrame({"item_id": [0, 1, 2, 3], "feature1": [1, 2, 2, 1], "feature2": [0.8, 0.7, 0.8, 0.7]})
# Metrics
auc = BinaryRecoMetrics.AUC(click_column="clicks")
ctr = BinaryRecoMetrics.CTR(click_column="clicks")
dr = BinaryRecoMetrics.CTR(click_column="clicks", estimation='dr')
ips = BinaryRecoMetrics.CTR(click_column="clicks", estimation='ips')
map_k = RankingRecoMetrics.MAP(click_column="clicks", k=2)
ndcg_k = RankingRecoMetrics.NDCG(click_column="clicks", k=3)
precision_k = RankingRecoMetrics.Precision(click_column="clicks", k=2)
recall_k = RankingRecoMetrics.Recall(click_column="clicks", k=2)
interlist_diversity_k = DiversityRecoMetrics.InterListDiversity(click_column="clicks", k=2)
intralist_diversity_k = DiversityRecoMetrics.IntraListDiversity(item_features, click_column="clicks", k=2)
# Scores
print("AUC:", auc.get_score(actual, predicted))
print("CTR:", ctr.get_score(actual, predicted))
print("Doubly Robust:", dr.get_score(actual, predicted))
print("IPS:", ips.get_score(actual, predicted))
print("MAP@K:", map_k.get_score(actual, predicted))
print("NDCG@K:", ndcg_k.get_score(actual, predicted))
print("Precision@K:", precision_k.get_score(actual, predicted))
print("Recall@K:", recall_k.get_score(actual, predicted))
print("Inter-List Diversity@K:", interlist_diversity_k.get_score(actual, predicted))
print("Intra-List Diversity@K:", intralist_diversity_k.get_score(actual, predicted))
```
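As a reference point for the ranking metrics above, Precision@K can be computed by hand with its textbook definition: the fraction of the top-K recommended items that are relevant. The data below is hypothetical and the identifiers are illustrative, not part of the jurity API.

```python
recommended = ["a", "b", "c", "d"]   # ranked recommendations for one user
relevant = {"b", "d", "e"}           # items the user actually clicked
k = 3

# Count relevant items among the top-k recommendations, divide by k
precision_at_k = sum(1 for item in recommended[:k] if item in relevant) / k
print(precision_at_k)  # 1/3: only "b" appears in the top 3
```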
## Quick Start: Classification Evaluation
```python
# Import classification metrics
from jurity.classification import BinaryClassificationMetrics
# Data
labels = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]
# Available: Accuracy, F1, Precision, Recall, and AUC
f1_score = BinaryClassificationMetrics.F1()
print('F1 score is', f1_score.get_score(predictions, labels))
```
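To verify the result, the F1 score for the quick-start data above can be computed by hand from its definition, F1 = 2PR / (P + R):

```python
labels      = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]

# Confusion-matrix counts
tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)  # 2
fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)  # 2
fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)  # 2

precision = tp / (tp + fp)  # 0.5
recall = tp / (tp + fn)     # 0.5
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.5
```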
## Installation
Jurity requires **Python 3.8+** and can be installed from PyPI using ``pip install jurity`` or by building from source as shown in [installation instructions](https://fidelity.github.io/jurity/install.html).
## Citation
If you use Jurity in a publication, please cite it as:
```bibtex
@inproceedings{DBLP:conf/lion/Melinda23,
author = {Melinda Thielbar and Serdar Kadioglu and Chenhui Zhang and Rick Pack and Lukas Dannull},
title = {Surrogate Membership for Inferred Metrics in Fairness Evaluation},
booktitle = {The 17th Learning and Intelligent Optimization Conference (LION)},
publisher = {{LION}},
year = {2023}
}
@inproceedings{DBLP:conf/icmla/MichalskyK21,
author = {Filip Michalsk{\'{y}} and Serdar Kadioglu},
title = {Surrogate Ground Truth Generation to Enhance Binary Fairness Evaluation in Uplift Modeling},
booktitle = {20th {IEEE} International Conference on Machine Learning and Applications,
{ICMLA} 2021, Pasadena, CA, USA, December 13-16, 2021},
pages = {1654--1659},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/ICMLA52953.2021.00264},
doi = {10.1109/ICMLA52953.2021.00264},
}
```
## Support
Please submit bug reports and feature requests as [Issues](https://github.com/fidelity/jurity/issues).
## License
Jurity is licensed under the [Apache License 2.0.](https://github.com/fidelity/jurity/blob/master/LICENSE)