fairret

Name: fairret
Version: 0.1.3
Summary: A fairness library in PyTorch.
Upload time: 2024-04-26 16:32:17
Requires Python: >=3.8
License: MIT License, Copyright (c) 2024 Ghent University Artificial Intelligence & Data Analytics Group
Keywords: fairness, machine-learning, ai, artificial-intelligence, pytorch, deep-learning, python, bias, fairness-ai, fairness-ml
# fairret - a fairness library in PyTorch

[![Licence](https://img.shields.io/github/license/aida-ugent/fairret)](https://github.com/aida-ugent/fairret/blob/main/LICENSE)
[![PyPI - Version](https://img.shields.io/pypi/v/fairret)](https://pypi.org/project/fairret/)
![Static Badge](https://img.shields.io/badge/PyTorch-ee4c2c)
[![Static Badge](https://img.shields.io/badge/Original%20Paper-00a0ff)](https://openreview.net/pdf?id=NnyD0Rjx2B)
<img src="./docs/source/_static/fairret.png" height="300" align="right">

The goal of fairret is to serve as an open-source library for measuring and mitigating statistical unfairness in PyTorch models.

The library is designed to be 
1. *flexible* in how fairness is defined and pursued.
2. *easy* to integrate into existing PyTorch pipelines.
3. *clear* in what its tools can and cannot do.

Central to the library is the paradigm of the _fairness regularization term_ (fairret), which quantifies unfairness as a differentiable PyTorch loss function.

Fairrets can be minimized jointly with other losses, like the binary cross-entropy loss, by simply adding them together!

## Quickstart

To get started, choose a _statistic_ that should be equalized across groups and a _fairret_ that quantifies the gap.

The model can then be trained as follows:

```python
import torch.nn.functional as F
from fairret.statistic import PositiveRate
from fairret.loss import NormLoss

# Equalize the positive rate across groups, penalizing the gap with a norm.
statistic = PositiveRate()
norm_fairret = NormLoss(statistic)

def train(model, optimizer, train_loader):
    for feat, sens, target in train_loader:
        optimizer.zero_grad()

        logit = model(feat)
        bce_loss = F.binary_cross_entropy_with_logits(logit, target)
        fairret_loss = norm_fairret(logit, sens)
        loss = bce_loss + fairret_loss  # jointly minimize both terms
        loss.backward()

        optimizer.step()
```
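The snippet above defines `train` but does not call it. As a minimal sketch of how it might be wired up, the toy data, placeholder model, and optimizer below are illustrative assumptions, not part of fairret:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy data: 8 samples, 5 input features, one binary sensitive attribute
# (one-hot encoded, so d_s = 2), and binary classification targets.
feat = torch.randn(8, 5)
sens = torch.nn.functional.one_hot(torch.randint(0, 2, (8,)), num_classes=2).float()
target = torch.randint(0, 2, (8, 1)).float()

train_loader = DataLoader(TensorDataset(feat, sens, target), batch_size=4)
model = torch.nn.Linear(5, 1)  # placeholder model producing one logit per sample
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# train(model, optimizer, train_loader)  # with train() defined as above
```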

No special data structure is required for the sensitive features. If the training batch contains $N$ elements, then `sens` should be a tensor of floats with shape $(N, d_s)$, with $d_s$ the number of sensitive features. **As with any categorical feature, categorical sensitive features are expected to be one-hot encoded.**
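For example, a sensitive attribute with three categories yields $d_s = 3$ columns after one-hot encoding (the category labels below are purely illustrative):

```python
import torch

# N = 5 samples of a hypothetical three-category sensitive attribute,
# stored as integer labels 0, 1, or 2.
group = torch.tensor([0, 2, 1, 1, 0])

# One-hot encode into the (N, d_s) float tensor that fairret expects.
sens = torch.nn.functional.one_hot(group, num_classes=3).float()

print(sens.shape)  # torch.Size([5, 3])
```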

A notebook with a full example pipeline is provided here: [simple_pipeline.ipynb](/examples/simple_pipeline.ipynb).

We also host [documentation](https://aida-ugent.github.io/fairret/).

## Installation
The fairret library can be installed from PyPI:

```
pip install fairret
```

A minimal list of dependencies is provided in [pyproject.toml](https://github.com/aida-ugent/fairret/blob/main/pyproject.toml). 

To install from a local clone of the repository, run `pip install .` from the project root.

## Warning: AI fairness != fairness
There are many ways in which technical approaches to AI fairness, such as this library, are simplistic and limited in actually achieving fairness in real-world decision processes.

More information on these limitations can be found [here](https://dl.acm.org/doi/full/10.1145/3624700) or [here](https://ojs.aaai.org/index.php/AAAI/article/view/26798).

## Future plans
For now, the library maintains a core focus on fairrets, but we plan to add more fairness tools that align with its design principles. These additions may involve breaking changes. At the same time, we'll keep reviewing the role of this library within the wider ecosystem of fairness toolkits.

Want to help? Please don't hesitate to open an issue, draft a pull request, or shoot an email to [maarten.buyl@ugent.be](mailto:maarten.buyl@ugent.be).

## Citation
This framework will be presented as a paper at ICLR 2024. If you found this library useful in your work, please consider citing it as follows:

```bibtex
@inproceedings{buyl2024fairret,
    title={fairret: a Framework for Differentiable Fairness Regularization Terms},
    author={Buyl, Maarten and Defrance, Marybeth and De Bie, Tijl},
    booktitle={International Conference on Learning Representations},
    year={2024}
}
```
