error-parity — PyPI package metadata

- Name: error-parity
- Version: 0.3.10
- Summary: Achieve error-rate parity between protected groups for any predictor
- Home page: https://github.com/socialfoundations/error-parity
- Author: AndreFCruz
- Requires Python: >=3.8
- License: MIT
- Keywords: ml, optimization, fairness, error-parity, equal-odds
- Upload time: 2024-04-02 13:51:22
# error-parity

![Tests status](https://github.com/socialfoundations/error-parity/actions/workflows/python-tests.yml/badge.svg)
![PyPI status](https://github.com/socialfoundations/error-parity/actions/workflows/python-publish.yml/badge.svg)
![Documentation status](https://github.com/socialfoundations/error-parity/actions/workflows/python-docs.yml/badge.svg)
![PyPI version](https://badgen.net/pypi/v/error-parity)
![OSI license](https://badgen.net/pypi/license/error-parity)
![Python compatibility](https://badgen.net/pypi/python/error-parity)
<!-- ![PyPI version](https://img.shields.io/pypi/v/error-parity) -->
<!-- ![OSI license](https://img.shields.io/pypi/l/error-parity) -->
<!-- ![Compatible python versions](https://img.shields.io/pypi/pyversions/error-parity) -->

> Work presented as an _oral at ICLR 2024_, titled ["Unprocessing Seven Years of Algorithmic Fairness"](https://openreview.net/forum?id=jr03SfWsBS).


Fast postprocessing of any score-based predictor to meet fairness criteria.

The `error-parity` package can enforce fairness constraints either strictly or up to a chosen tolerance,
which is useful for comparing ML models at equal fairness levels.

Package documentation is available [here](https://socialfoundations.github.io/error-parity/).


## Installing

Install package from [PyPI](https://pypi.org/project/error-parity/):
```shell
pip install error-parity
```

Or, for development, you can clone the repo and install from local sources:
```shell
git clone https://github.com/socialfoundations/error-parity.git
pip install ./error-parity
```


## Getting started

> See detailed example notebooks under the [**examples folder**](./examples/).

```py
from error_parity import RelaxedThresholdOptimizer

# Given any trained model that outputs real-valued scores
fair_clf = RelaxedThresholdOptimizer(
    predictor=lambda X: model.predict_proba(X)[:, -1],   # for sklearn API
    # predictor=model,            # use this for a callable model
    constraint="equalized_odds",  # other constraints are available
    tolerance=0.05,               # fairness constraint tolerance
)

# Fit the fairness adjustment on some data
# This will find the optimal _fair classifier_
fair_clf.fit(X=X, y=y, group=group)

# Now you can use `fair_clf` like any other classifier;
# note that group information is required to compute fair predictions
y_pred_test = fair_clf(X=X_test, group=group_test)
```


## How it works

Given a callable score-based predictor (i.e., `y_pred = predictor(X)`), and some `(X, Y, S)` data to fit, `RelaxedThresholdOptimizer` will:
1. Compute group-specific ROC curves and their convex hulls;
2. Compute the `r`-relaxed optimal solution for the chosen fairness criterion (using [cvxpy](https://www.cvxpy.org));
3. Find the set of group-specific binary classifiers that match the optimal solution found.
    - each group-specific classifier is made up of (possibly randomized) group-specific thresholds over the given predictor;
    - if a group's ROC point is in the interior of its ROC curve, partial randomization of its predictions may be necessary.
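Step 1 above can be illustrated with a short sketch. The helper below is *not* the package's implementation: it only computes per-group (FPR, TPR) points by sweeping score thresholds, and omits the convex-hull step that `RelaxedThresholdOptimizer` also performs.

```py
import numpy as np

def group_roc_points(y_true, scores, group):
    """Per-group (FPR, TPR) points obtained by sweeping all score thresholds.

    Illustrative sketch of step 1 only; the actual package additionally
    takes the convex hull of each group's ROC curve.
    """
    points = {}
    for g in np.unique(group):
        mask = group == g
        y_g, s_g = y_true[mask], scores[mask]
        n_pos = max(np.sum(y_g == 1), 1)  # guard against empty classes
        n_neg = max(np.sum(y_g == 0), 1)
        fpr, tpr = [], []
        for t in np.unique(s_g):
            pred = s_g >= t
            tpr.append(np.sum(pred & (y_g == 1)) / n_pos)
            fpr.append(np.sum(pred & (y_g == 0)) / n_neg)
        points[g] = (np.array(fpr), np.array(tpr))
    return points

# Toy data: binary labels, noisy scores, two groups
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
s = np.clip(y * 0.3 + rng.normal(0.4, 0.2, size=200), 0, 1)
g = rng.integers(0, 2, size=200)
rocs = group_roc_points(y, s, g)
```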


## Features and implementation roadmap

We welcome community contributions for [cvxpy](https://www.cvxpy.org) implementations of other fairness constraints.

Currently implemented fairness constraints:
- [x] equalized odds (Hardt et al., 2016);
  - i.e., equal group-specific TPR and FPR;
  - use `constraint="equalized_odds"`;
- [x] equal opportunity;
  - i.e., equal group-specific TPR;
  - use `constraint="true_positive_rate_parity"`;
- [x] predictive equality;
  - i.e., equal group-specific FPR;
  - use `constraint="false_positive_rate_parity"`;
- [x] demographic parity;
  - i.e., equal group-specific predicted prevalence;
  - use `constraint="demographic_parity"`;


## Citing

```bibtex
@inproceedings{
  cruz2024unprocessing,
  title={Unprocessing Seven Years of Algorithmic Fairness},
  author={Andr{\'e} Cruz and Moritz Hardt},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=jr03SfWsBS}
}
```

            
