fairlearn
=========

:Name: fairlearn
:Version: 0.11.0
:Summary: A Python package to assess and improve fairness of machine learning models.
:Home page: https://github.com/fairlearn/fairlearn
:Author: Miroslav Dudik, Richard Edgar, Adrin Jalali, Roman Lutz, Michael Madaio, Hilde Weerts, Allie Saizan
:Requires Python: >=3.8
:Upload time: 2024-10-31 16:12:16

|MIT license| |PyPI| |Discord| |StackOverflow|

Fairlearn
=========

Fairlearn is a Python package that empowers developers of artificial
intelligence (AI) systems to assess their system's fairness and mitigate
any observed unfairness issues. Fairlearn contains mitigation algorithms
as well as metrics for model assessment. Besides the source code, this
repository also contains Jupyter notebooks with examples of Fairlearn
usage.

Website: https://fairlearn.org/

-  `Current release <#current-release>`__
-  `What we mean by *fairness* <#what-we-mean-by-fairness>`__
-  `Overview of Fairlearn <#overview-of-fairlearn>`__
-  `Fairlearn metrics <#fairlearn-metrics>`__
-  `Fairlearn algorithms <#fairlearn-algorithms>`__
-  `Install Fairlearn <#install-fairlearn>`__
-  `Usage <#usage>`__
-  `Contributing <#contributing>`__
-  `Maintainers <#maintainers>`__
-  `Issues <#issues>`__

Current release
---------------

-  The current stable release is available on
   `PyPI <https://pypi.org/project/fairlearn/>`__.

-  Our current version may differ substantially from earlier versions.
   Users of earlier versions should visit our
   `version guide <https://fairlearn.org/main/user_guide/installation_and_version_guide/version_guide.html>`_
   to navigate significant changes and find information on how to migrate.

What we mean by *fairness*
--------------------------

An AI system can behave unfairly for a variety of reasons. In Fairlearn,
we define whether an AI system is behaving unfairly in terms of its
impact on people – i.e., in terms of harms. We focus on two kinds of
harms:

-  *Allocation harms.* These harms can occur when AI systems extend or
   withhold opportunities, resources, or information. Some of the key
   applications are in hiring, school admissions, and lending.

-  *Quality-of-service harms.* Quality of service refers to whether a
   system works as well for one person as it does for another, even if
   no opportunities, resources, or information are extended or withheld.

We follow the approach known as **group fairness**, which asks: *Which
groups of individuals are at risk for experiencing harms?* The relevant
groups need to be specified by the data scientist and are application
specific.

Group fairness is formalized by a set of constraints, which require that
some aspect (or aspects) of the AI system's behavior be comparable
across the groups. The Fairlearn package enables assessment and
mitigation of unfairness under several common definitions. To learn more
about our definitions of fairness, please visit our
`user guide on Fairness of AI Systems <https://fairlearn.org/main/user_guide/fairness_in_machine_learning.html#fairness-of-ai-systems>`__.

    *Note*: Fairness is fundamentally a sociotechnical challenge. Many
    aspects of fairness, such as justice and due process, are not
    captured by quantitative fairness metrics. Furthermore, there are
    many quantitative fairness metrics which cannot all be satisfied
    simultaneously. Our goal is to enable humans to assess different
    mitigation strategies and then make trade-offs appropriate to their
    scenario.
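
As a concrete illustration, the minimal sketch below (on synthetic data)
checks one common group-fairness definition, *demographic parity*, which
compares selection rates, i.e. the fraction of positive predictions,
across groups:

.. code-block:: python

    from fairlearn.metrics import demographic_parity_difference

    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
    y_pred = [0, 1, 1, 0, 1, 1, 0, 1]
    group = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive feature

    # 0.0 means equal selection rates across groups; larger values mean
    # larger disparity (here group "b" selects 0.75 vs 0.5 for "a").
    print(demographic_parity_difference(y_true, y_pred,
                                        sensitive_features=group))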

Overview of Fairlearn
---------------------

The Fairlearn Python package has two components:

-  *Metrics* for assessing which groups are negatively impacted by a
   model, and for comparing multiple models in terms of various fairness
   and accuracy metrics.

-  *Algorithms* for mitigating unfairness in a variety of AI tasks and
   along a variety of fairness definitions.

Fairlearn metrics
~~~~~~~~~~~~~~~~~

Check out our in-depth `guide on the Fairlearn
metrics <https://fairlearn.org/main/user_guide/assessment>`__.
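
A central tool in ``fairlearn.metrics`` is ``MetricFrame``, which
evaluates a metric both overall and per group defined by a sensitive
feature. Below is a minimal sketch (not taken from the guide) with
made-up labels and predictions:

.. code-block:: python

    from fairlearn.metrics import MetricFrame
    from sklearn.metrics import accuracy_score

    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
    y_pred = [0, 1, 0, 0, 1, 1, 0, 1]
    sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

    mf = MetricFrame(metrics=accuracy_score,
                     y_true=y_true,
                     y_pred=y_pred,
                     sensitive_features=sex)
    print(mf.overall)       # accuracy on the whole dataset
    print(mf.by_group)      # accuracy within each group
    print(mf.difference())  # largest gap between groups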

Fairlearn algorithms
~~~~~~~~~~~~~~~~~~~~

For an overview of our algorithms, please refer to our
`website <https://fairlearn.org/main/user_guide/mitigation/index.html>`__.
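
As a taste of one mitigation approach (a sketch, not an official
example), ``ExponentiatedGradient`` from ``fairlearn.reductions`` wraps
a standard scikit-learn estimator and trains it subject to a fairness
constraint; all data below is synthetic:

.. code-block:: python

    import numpy as np
    from fairlearn.reductions import DemographicParity, ExponentiatedGradient
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))            # synthetic features
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
    group = rng.integers(0, 2, size=200)     # synthetic sensitive feature

    # Fit a classifier subject to a demographic-parity constraint.
    mitigator = ExponentiatedGradient(LogisticRegression(),
                                      constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=group)
    y_pred = mitigator.predict(X)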

Install Fairlearn
-----------------

For instructions on how to install Fairlearn, check out our `Quickstart
guide <https://fairlearn.org/main/quickstart.html>`__.
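
The stable release can be installed from PyPI; per the package
metadata, Python >= 3.8 is required:

.. code-block:: bash

    pip install fairlearn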

Usage
-----

For common usage, refer to the `Jupyter notebooks <https://fairlearn.org/main/auto_examples/index.html>`__ and
our `user guide <https://fairlearn.org/main/user_guide/index.html>`__.
Please note that our APIs are subject to change, so notebooks downloaded
from ``main`` may not be compatible with Fairlearn installed with
``pip``. In this case, please navigate the tags in the repository (e.g.
`v0.7.0 <https://github.com/fairlearn/fairlearn/tree/v0.7.0>`__) to
locate the appropriate version of the notebook.
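
To check which version you have installed, and hence which repository
tag to browse, you can print the package version:

.. code-block:: python

    import fairlearn

    print(fairlearn.__version__)  # e.g. "0.11.0"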

Contributing
------------

To contribute, please check our `contributor
guide <https://fairlearn.org/main/contributor_guide/index.html>`__.

Maintainers
-----------

A list of current maintainers is
`on our website <https://fairlearn.org/main/about/index.html>`__.

Issues
------

Usage questions
~~~~~~~~~~~~~~~

Pose questions and help answer them on `Stack
Overflow <https://stackoverflow.com/questions/tagged/fairlearn>`__ with
the tag ``fairlearn`` or on
`Discord <https://discord.gg/R22yCfgsRn>`__.

Regular (non-security) issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Issues are meant for bugs, feature requests, and documentation
improvements. Please submit a report through
`GitHub issues <https://github.com/fairlearn/fairlearn/issues>`__.
A maintainer will respond promptly as appropriate.

Maintainers will try to link duplicate issues when possible.

Reporting security issues
~~~~~~~~~~~~~~~~~~~~~~~~~

To report security issues, please send an email to
``fairlearn-internal@python.org``.

.. |MIT license| image:: https://img.shields.io/badge/License-MIT-blue.svg
   :target: https://github.com/fairlearn/fairlearn/blob/main/LICENSE
.. |PyPI| image:: https://img.shields.io/pypi/v/fairlearn?color=blue
   :target: https://pypi.org/project/fairlearn/
.. |Discord| image:: https://img.shields.io/discord/840099830160031744
   :target: https://discord.gg/R22yCfgsRn
.. |StackOverflow| image:: https://img.shields.io/badge/StackOverflow-questions-blueviolet
   :target: https://stackoverflow.com/questions/tagged/fairlearn
