inFairness 0.2.3

- Summary: inFairness is a Python package to train and audit individually fair PyTorch models
- Home page: https://github.com/IBM/inFairness
- Author: IBM Research
- Requires Python: >=3.7
- Keywords: individual fairness, ai fairness, trustworthy ai, machine learning
- Upload time: 2023-04-10 19:16:12
            <p align="center">
  <a href="https://ibm.github.io/inFairness">
     <img width="350" height="350" src="https://ibm.github.io/inFairness/_static/infairness-logo.png">
   </a>
</p>

<p align="center">
   <a href="https://pypi.org/project/infairness"><img src="https://img.shields.io/pypi/v/infairness?color=important&label=pypi%20package&logo=PyPy"></a>
   <a href="./examples"><img src="https://img.shields.io/badge/example-notebooks-red?logo=jupyter"></a>
   <a href="https://ibm.github.io/inFairness"><img src="https://img.shields.io/badge/documentation-up-green?logo=GitBook"></a>
   <a href="https://fairbert.vizhub.ai"><img src="https://img.shields.io/badge/fairness-demonstration-yellow?logo=ibm-watson"></a>
   <br/>
   <a href="https://app.travis-ci.com/IBM/inFairness"><img src="https://app.travis-ci.com/IBM/inFairness.svg?branch=main"></a>
   <a href="https://pypistats.org/packages/infairness"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/inFairness?color=blue"></a>
   <a href="https://www.python.org/"><img src="https://img.shields.io/badge/python-3.7+-blue?logo=python"></a>
   <a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/license-Apache-yellow"></a>
   <a href="https://github.com/psf/black"><img src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
</p>


## Individual Fairness and inFairness

Intuitively, an individually fair Machine Learning (ML) model treats similar inputs similarly. Formally, the leading notion of individual fairness is metric fairness [(Dwork et al., 2011)](https://dl.acm.org/doi/abs/10.1145/2090236.2090255); it requires:

$$ d_y (h(x_1), h(x_2)) \leq L d_x(x_1, x_2) \quad \forall \quad x_1, x_2 \in X $$

Here, $h: X \rightarrow Y$ is an ML model, where $X$ and $Y$ are the input and output spaces; $d_x$ and $d_y$ are metrics on the input and output spaces, and $L \geq 0$ is a Lipschitz constant. The inequality states that the distance between the model outputs for inputs $x_1$ and $x_2$ is upper-bounded by $L$ times the fair distance between the inputs $x_1$ and $x_2$. The fair metric $d_x$ encodes our intuition of which samples should be treated similarly by the ML model, and by designing it this way we ensure that, for input samples the fair metric $d_x$ considers similar, the model outputs will be similar as well.
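
To make the inequality concrete, here is a small, self-contained numeric sketch (plain NumPy, not the inFairness API): a toy linear model $h$ with Euclidean $d_x$ and $d_y$, for which the spectral norm of the weight matrix is a valid Lipschitz constant $L$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model h: X -> Y given by a fixed linear map, with Euclidean metrics d_x and d_y
W = rng.normal(size=(2, 5))

def h(x):
    return W @ x

def dist(a, b):
    return np.linalg.norm(a - b)

# Sample input pairs and compute the ratio d_y(h(x1), h(x2)) / d_x(x1, x2)
ratios = []
for _ in range(1000):
    x1, x2 = rng.normal(size=5), rng.normal(size=5)
    ratios.append(dist(h(x1), h(x2)) / dist(x1, x2))

# For a linear map, every such ratio is bounded by the spectral norm of W,
# so that norm serves as a Lipschitz constant L in the inequality above
print(max(ratios), np.linalg.norm(W, 2))
```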

inFairness is a PyTorch package that supports auditing, training, and post-processing ML models for individual fairness. At its core, the library implements the key components of an individual fairness pipeline: $d_x$, the distance in the input space; $d_y$, the distance in the output space; and the learning algorithms that optimize for the inequality above.

For an in-depth tutorial on individual fairness and the inFairness package, please watch the tutorial video linked below. Also, take a look at the [examples](./examples/) folder for illustrative use cases and try the [Fairness Playground demo](https://fairbert.vizhub.ai). For group fairness metrics and algorithms, see [AIF360](https://aif360.mybluemix.net/).

<p align="center">
  <a href="https://video.ibm.com/recorded/131932983" target="_blank"><img width="700" alt="Watch the tutorial" src="https://user-images.githubusercontent.com/991913/178768336-2bfa5958-487f-4f14-a156-03dacfd68263.png"></a>
</p>

## Installation

inFairness can be installed using `pip`:

```
pip install inFairness
```


Alternatively, if you wish to use the latest development version, you can clone this repository and install the package in editable mode:

```
git clone https://github.com/IBM/inFairness.git
cd inFairness
pip install -e .
```
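
With either method, a quick sanity check (not part of the official instructions) is to confirm that pip sees the package and that it imports cleanly:

```
pip show inFairness
python -c "import inFairness"
```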



## Features

inFairness currently supports:

1. Learning individually fair metrics : [[Docs]](https://ibm.github.io/inFairness/reference/distances.html)
2. Training of individually fair models : [[Docs]](https://ibm.github.io/inFairness/reference/algorithms.html)
3. Auditing pre-trained ML models for individual fairness : [[Docs]](https://ibm.github.io/inFairness/reference/auditors.html)
4. Post-processing for Individual Fairness : [[Docs]](https://ibm.github.io/inFairness/reference/postprocessing.html)
5. Individually fair ranking : [[Docs]](https://ibm.github.io/inFairness/reference/algorithms.html)


## Contributing

We welcome contributions from the community in any form, whether it is a new fair algorithm, a fair metric, a new use case, or simply a report of an issue or a suggested enhancement. To contribute code to the package, please follow these steps (a consolidated command sketch follows the list):

1. Clone this git repository to your local system
2. Set up your environment by installing the dependencies: `pip3 install -r requirements.txt` and `pip3 install -r build_requirements.txt`
3. Add your code contribution to the package. Please refer to the [`inFairness`](./inFairness) folder for an overview of the directory structure
4. Add appropriate unit tests in the [`tests`](./tests) folder
5. Once you are ready to commit code, check for the following:
   1. Coding style compliance using: `flake8 inFairness/`. This command lists all stylistic violations found in the code. Please fix as many of them as you can
   2. Ensure all the test cases pass using: `coverage run --source inFairness -m pytest tests/`. All unit tests must pass before code can be merged into the package.
6. Finally, commit your code and raise a Pull Request.
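
The shell sketch below consolidates the commands referenced in the steps above (assuming you run it from the directory where you want the clone to live):

```
git clone https://github.com/IBM/inFairness.git
cd inFairness
pip3 install -r requirements.txt
pip3 install -r build_requirements.txt

# before committing: style check and full test suite
flake8 inFairness/
coverage run --source inFairness -m pytest tests/
```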


## Tutorials

The [`examples`](./examples) folder contains tutorials from different fields illustrating how to use the package.

### Minimal example

First, import the relevant modules:

```
from inFairness import distances
from inFairness.fairalgo import SenSeI
```

The `inFairness.distances` module implements various distance metrics on the input and the output spaces, and the `inFairness.fairalgo` module implements various individually fair learning algorithms, with `SenSeI` being one particular algorithm.

Next, we instantiate the distance metrics, fit the input-space metric on the training data, and instantiate the fair algorithm:


```python
# Distance metrics: d_x on the input space, d_y on the output space
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_y = distances.EuclideanDistance()

# Fit the input-space metric on the training data
distance_x.fit(X_train=data, n_components=50)

# Finally, instantiate the fair algorithm
fairalgo = SenSeI(network, distance_x, distance_y, lossfn, rho=1.0, eps=1e-3, lr=0.01, auditor_nsteps=100, auditor_lr=0.1)
```

Finally, you can train the `fairalgo` as you would train a standard PyTorch deep neural network (here `network`, `lossfn`, `optimizer`, `train_dl`, and `EPOCHS` are the usual PyTorch objects that you define yourself):

```
fairalgo.train()  # put the module in training mode

for epoch in range(EPOCHS):
    for x, y in train_dl:
        optimizer.zero_grad()
        result = fairalgo(x, y)   # forward pass; the returned object carries the fair loss
        result.loss.backward()
        optimizer.step()
```
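
After training, the `network` you passed to `SenSeI` is an ordinary PyTorch module, so it can be evaluated in the usual way. A minimal sketch, assuming a hypothetical `test_dl` DataLoader and an evaluation metric of your choosing:

```
import torch  # assumed to be installed alongside inFairness

network.eval()  # switch the trained network to evaluation mode

with torch.no_grad():
    for x, y in test_dl:
        y_pred = network(x)
        # compare y_pred against y using your evaluation metric of choice
```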


## Authors

<table align="center">
  <tr>
    <td align="center"><a href="http://moonfolk.github.io/"><img src="https://avatars.githubusercontent.com/u/24443134?v=4?s=100" width="120px;" alt=""/><br /><b>Mikhail Yurochkin</b></a></td>
    <td align="center"><a href="http://mayankagarwal.github.io/"><img src="https://avatars.githubusercontent.com/u/991913?v=4?s=100" width="120px;" alt=""/><br /><b>Mayank Agarwal</b></a></td>
    <td align="center"><a href="https://github.com/aldopareja"><img src="https://avatars.githubusercontent.com/u/7622817?v=4?s=100" width="120px;" alt=""/><br /><b>Aldo Pareja</b></a></td>
    <td align="center"><a href="https://github.com/onkarbhardwaj"><img src="https://avatars.githubusercontent.com/u/13560220?v=4?s=100" width="120px;" alt=""/><br /><b>Onkar Bhardwaj</b></a></td>
  </tr>
</table>

            
