biaslyze


Name: biaslyze
Version: 0.1.0
Home page: https://biaslyze.org
Summary: The NLP Bias Identification Toolkit
Upload time: 2023-08-25 10:04:06
Maintainer: Tobias Sterbak
Author: Tobias Sterbak & Stina Lohmüller
Requires Python: >=3.10,<3.11
License: BSD-3-Clause
Keywords: nlp, bias, ethics, fairness
            
<div align="center">
  <img src="resources/biaslyze_logo_farbe_rgb.svg" alt="Biaslyze" width="40%">
  <h1>The NLP Bias Identification Toolkit</h1>
</div>

<div align="center">
    <a href="https://github.com/biaslyze-dev/biaslyze/blob/main/LICENSE">
        <img alt="licence" src="https://img.shields.io/github/license/biaslyze-dev/biaslyze">
    </a>
    <a href="https://pypi.org/project/biaslyze/">
        <img alt="pypi" src="https://img.shields.io/pypi/v/biaslyze">
    </a>
    <a href="https://pypi.org/project/biaslyze/">
        <img alt="pypi" src="https://img.shields.io/pypi/pyversions/biaslyze">
    </a>
    <a href="#">
        <img alt="tests" src="https://github.com/biaslyze-dev/biaslyze/actions/workflows/python-app.yml/badge.svg">
    </a>
</div>


Bias is often subtle and difficult to detect in NLP models, as protected attributes are less obvious and can take many forms in language (e.g. proxies, double meanings, ambiguities). Technical bias testing is therefore a key step in avoiding algorithmically mediated discrimination. However, it is currently conducted too rarely because of the effort involved, a lack of resources, or a lack of awareness of the problem.

Biaslyze helps you get started with analyzing bias in NLP models and offers a concrete entry point for further impact assessments and mitigation measures. Especially for developers, researchers, and teams with limited resources, our toolkit offers a low-effort approach to bias testing in NLP use cases.

## Supported Models

All text classification models with probability output are supported. This includes models from scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers, and custom models. Bias detection requires you to pass a `predict_proba` function similar to the one exposed by scikit-learn classifiers. You can find a tutorial on how to do that for pre-trained transformer models [here](https://biaslyze.org/tutorials/tutorial-hugging-hatexplain/).
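
As an illustrative sketch (not taken from the linked tutorial), a pre-trained Hugging Face sequence classifier can be wrapped into such a `predict_proba`-style function; the model name and batching below are assumptions, not requirements of biaslyze:

```python
# Illustrative sketch: wrap a Hugging Face classifier into a scikit-learn-style
# predict_proba function. The checkpoint name below is an arbitrary example.
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def predict_proba(texts: list[str]) -> np.ndarray:
    """Return an (n_samples, n_classes) array of class probabilities."""
    with torch.no_grad():
        inputs = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1).numpy()
```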

## Installation

Installation can be done using PyPI:
```bash
pip install biaslyze
```

Then you need to download the required spaCy model:
```bash
python -m spacy download en_core_web_sm
```
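
To verify the download, you can optionally check that the model loads; this check is an addition here, not part of the official setup:

```python
# Optional sanity check: the model should load without errors after the download.
import spacy

nlp = spacy.load("en_core_web_sm")
print(nlp("Biaslyze is ready.")[0].text)  # prints "Biaslyze"
```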

## Quickstart

```python
from biaslyze.bias_detectors import CounterfactualBiasDetector

bias_detector = CounterfactualBiasDetector()

# detect bias in the model based on the given texts
# here, clf is a scikit-learn text classification pipeline trained for a binary classification task
detection_res = bias_detector.process(
    texts=texts,
    predict_func=clf.predict_proba
)

# see a summary of the detection
detection_res.report()

# launch the dashboard to visualize the counterfactual scores
detection_res.dashboard(num_keywords=10)
```
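
The quickstart assumes you already have a list of raw strings `texts` and a trained scikit-learn pipeline `clf` exposing `predict_proba`. A minimal, purely illustrative setup with toy data (not part of biaslyze) might look like this:

```python
# Purely illustrative setup for the quickstart: toy data and a simple
# scikit-learn text classification pipeline with predict_proba.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "You are great!",
    "What a wonderful day.",
    "I hate this so much.",
    "This is absolutely terrible.",
]
labels = [0, 0, 1, 1]  # e.g. 0 = non-toxic, 1 = toxic

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)  # clf.predict_proba(texts) now returns class probabilities
```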

Among other outputs, you will get box plots indicating the impact of keywords and concepts on your model's predictions.
Example output:
![](resources/biaslyze-demo-box-plot.gif)


See more detailed examples in the [tutorial](https://biaslyze.org/tutorials/tutorial-toxic-comments/).


## Development setup

- First you need to install Poetry to manage your Python environment: https://python-poetry.org/docs/#installation
- Run `make install` to install the dependencies and download the spaCy base models.
- Now you can use `biaslyze` in your Jupyter notebooks.


### Adding concepts and keywords

You can add concepts and new keywords for existing concepts by editing the concept files in [concepts](https://github.com/biaslyze-dev/biaslyze/blob/main/biaslyze/concepts/).
A tutorial on how to create custom concepts and work with them can be found [here](https://biaslyze.org/tutorials/tutorial-working-with-custom-concepts/).

## Preview/build the documentation with mkdocs

To preview the documentation, run `make doc-preview`. This will serve a preview of the documentation at `http://127.0.0.1:8000/`.
To build the documentation HTML, run `make doc`.


## Run the automated tests

`make test`


## Style guide

We use isort and black for formatting: `make style`
For linting we run ruff: `make lint`

## Contributing

Follow the Google style guide for Python: https://google.github.io/styleguide/pyguide.html

This project uses black, isort and ruff to enforce style. Apply them by running `make style` and `make lint`.

## Acknowledgements

* Funded from March 2023 until August 2023 by ![logos of the "Bundesministerium für Bildung und Forschung", Prototype Fund and OKFN-Deutschland](resources/pf_funding_logos.svg)

            
