fair-scoring

Name: fair-scoring
Version: 0.2.1
Home page: https://github.com/schufa-innovationlab/fair-scoring
Summary: Fairness metrics for continuous risk scores
Upload time: 2024-08-19 09:22:00
Author: SCHUFA Holding AG
Requires Python: >=3.9
License: Apache 2.0
# Fair-Scoring
Fairness metrics for continuous risk scores.

The implemented algorithms are described in the paper [[1]](#References).

#### Project Links
[Documentation](https://fair-scoring.readthedocs.io/en/stable/) | [PyPI](https://pypi.org/project/fair-scoring/) | [Paper](https://proceedings.mlr.press/v235/becker24a.html)

## Quickstart
### Installation

Install with `pip` directly:
```shell
pip install fair-scoring
```

### Example Usage
The following example shows how to compute the equal opportunity bias on the COMPAS dataset:

```python
import pandas as pd
from fairscoring.metrics import bias_metric_eo

# Load the COMPAS data
data_url = 'https://raw.githubusercontent.com/propublica/compas-analysis/master/compas-scores-two-years.csv'
df = pd.read_csv(data_url)

# Extract the relevant columns
scores = df['decile_score']
target = df['two_year_recid']
attribute = df['race']

# Compute the bias between the two groups; here the favorable outcome is
# no recidivism (target == 0) and lower scores are preferable
bias = bias_metric_eo(scores, target, attribute, groups=['African-American', 'Caucasian'],
                      favorable_target=0, prefer_high_scores=False)
```
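The equal opportunity notion underlying `bias_metric_eo` compares the score distributions of the two groups among individuals with the favorable outcome. The following is a minimal pandas sketch of that idea on small synthetic stand-in data (hypothetical values, and a simple mean gap rather than the standardized metric the library actually computes):

```python
import pandas as pd

# Synthetic stand-in for the three COMPAS columns used above
df = pd.DataFrame({
    "decile_score": [1, 3, 8, 9, 2, 7, 4, 10],
    "two_year_recid": [0, 0, 1, 1, 0, 1, 0, 1],
    "race": ["Caucasian", "African-American"] * 4,
})

# Restrict to individuals with the favorable outcome (no recidivism)
favorable = df[df["two_year_recid"] == 0]

# Compare the group score distributions, here simply via their means
group_means = favorable.groupby("race")["decile_score"].mean()
gap = group_means["African-American"] - group_means["Caucasian"]
print(group_means.to_dict(), gap)  # gap ≈ 0.67 on this toy data
```

This only illustrates the conditioning step; the library's metric additionally standardizes the distributional difference so that values are comparable across datasets.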

#### Further examples
Further examples, in particular the experiments conducted for the publication, can be found
[in the documentation](docs/source/examples).

## Development
### Setup
Clone the repository and install it from source via

```shell
pip install -e ".[dev]"
```

### Tests
To execute the tests, install the package in development mode (see above) and run
```shell
pytest
```

Following the pytest conventions, the tests for each package are located in a subpackage named `test`.

### Docs
To build the docs, change into the `./docs` subfolder and run
```shell
make clean
make html
```

## References
[1] Becker, A.-K., Dumitrasc, O. and Broelemann, K.;
Standardized Interpretable Fairness Measures for Continuous Risk Scores;
Proceedings of the 41st International Conference on Machine Learning, 2024; [pdf](https://openreview.net/pdf?id=CvRu2inbGV)
<details><summary>Bibtex</summary>
<p>

```
@inproceedings{Becker2024FairScoring,
    author    = {Ann{-}Kristin Becker and Oana Dumitrasc and Klaus Broelemann},
    title     = {Standardized Interpretable Fairness Measures for Continuous Risk Scores},
    booktitle = {Proceedings of the 41st International Conference on Machine Learning},
    year = {2024}
}
```

</p>
</details>

            
