.. -*- mode: rst -*-

|License|_ |GithubActions|_ |ReadTheDocs|_ |Downloads|_ |Pypy|_ |CondaVersion|_

.. |License| image:: https://img.shields.io/github/license/dccuchile/wefe
.. _License: https://github.com/dccuchile/wefe/blob/master/LICENSE

.. |ReadTheDocs| image:: https://readthedocs.org/projects/wefe/badge/?version=latest
.. _ReadTheDocs: https://wefe.readthedocs.io/en/latest/?badge=latest

.. |GithubActions| image:: https://github.com/dccuchile/wefe/actions/workflows/ci.yaml/badge.svg?branch=master
.. _GithubActions: https://github.com/dccuchile/wefe/actions

.. |Downloads| image:: https://pepy.tech/badge/wefe
.. _Downloads: https://pepy.tech/project/wefe

.. |Pypy| image:: https://badge.fury.io/py/wefe.svg
.. _Pypy: https://pypi.org/project/wefe/

.. |CondaVersion| image:: https://anaconda.org/pbadilla/wefe/badges/version.svg
.. _CondaVersion: https://anaconda.org/pbadilla/wefe


WEFE: The Word Embedding Fairness Evaluation Framework
======================================================

.. image:: ./docs/logos/WEFE_2.png
  :width: 300
  :alt: WEFE Logo
  :align: center

*Word Embedding Fairness Evaluation* (WEFE) is an open source library for
measuring and mitigating bias in word embedding models.
It generalizes many existing fairness metrics into a unified framework and
provides a standard interface for:

- Encapsulating existing fairness metrics from previous work and designing
  new ones.
- Encapsulating the test words used by fairness metrics into standard
  objects called queries.
- Computing a fairness metric on a given pre-trained word embedding model
  using user-provided queries, as illustrated in the sketch below.
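
The sketch below illustrates this interface. ``Query``, ``WEAT`` and
``WordEmbeddingModel`` are WEFE classes; the GloVe model and the word sets are
placeholders chosen only for illustration, so substitute your own:

.. code-block:: python

    import gensim.downloader as api

    from wefe.metrics import WEAT
    from wefe.query import Query
    from wefe.word_embedding_model import WordEmbeddingModel

    # Wrap any gensim KeyedVectors object so that WEFE metrics can consume it.
    model = WordEmbeddingModel(api.load("glove-wiki-gigaword-100"), "glove-100")

    # A query groups target and attribute word sets under descriptive names.
    query = Query(
        target_sets=[["she", "woman", "girl"], ["he", "man", "boy"]],
        attribute_sets=[
            ["math", "algebra", "geometry", "calculus"],
            ["poetry", "literature", "dance", "novel"],
        ],
        target_sets_names=["Female terms", "Male terms"],
        attribute_sets_names=["Science", "Arts"],
    )

    # Run a fairness metric (here, WEAT) on the model using the query.
    result = WEAT().run_query(query, model)
    print(result)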

WEFE also standardizes the process of mitigating bias through an interface similar
to the ``scikit-learn`` ``fit-transform`` pattern (a short sketch follows the list below).
This standardization separates the mitigation process into two stages:

- The logic of calculating the transformation to be performed on the model (``fit``).
- The execution of the mitigation transformation on the model (``transform``).
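
A rough sketch of this two-stage flow is shown next, using the Hard Debias method
listed in the changelog and continuing from the ``model`` object above. The
``definitional_pairs`` argument name and the word pairs themselves are illustrative
assumptions; check the API reference before relying on them:

.. code-block:: python

    from wefe.debias.hard_debias import HardDebias

    # Word pairs that define the bias direction (illustrative only).
    definitional_pairs = [
        ["she", "he"], ["woman", "man"], ["her", "his"], ["girl", "boy"],
    ]

    hd = HardDebias()
    hd.fit(model, definitional_pairs=definitional_pairs)  # compute the transformation
    debiased_model = hd.transform(model)                  # apply it to the embeddings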


The official documentation can be found at this `link <https://wefe.readthedocs.io/>`_.


Installation
============

WEFE requires Python 3.10 or higher. It can be installed in any of the following ways:

**Install with pip** (recommended)::

    pip install wefe

**Install with conda**::

    conda install -c pbadilla wefe

**Install development version**::

    pip install git+https://github.com/dccuchile/wefe.git

**Install with development dependencies**::

    pip install "wefe[dev]"

**Install with PyTorch support**::

    pip install "wefe[pytorch]"


Requirements
------------

WEFE automatically installs the following dependencies:

- gensim (>=3.8.3)
- numpy (<=1.26.4)
- pandas (>=2.0.0)
- plotly (>=6.0.0)
- requests (>=2.22.0)
- scikit-learn (>=1.5.0)
- scipy (<1.13)
- semantic_version (>=2.8.0)
- tqdm (>=4.0.0)

Contributing
------------

To contribute to WEFE development:

1. **Clone the repository**::

    git clone https://github.com/dccuchile/wefe
    cd wefe

2. **Install in development mode with all dependencies**::

    pip install -e ".[dev]"

3. **Run tests to ensure everything works**::

    pytest tests

4. **Make your changes and run tests again**

5. **Follow our coding standards**:

   - Use ``ruff`` for code formatting: ``ruff format .``
   - Check code quality: ``ruff check .``
   - Run type checking: ``mypy wefe``

For detailed contributing guidelines, visit the `Contributing <https://wefe.readthedocs.io/en/latest/user_guide/contribute.html>`_ section in the documentation.

Development Requirements
------------------------

To install WEFE with all development dependencies for testing, documentation building, and code quality tools::

    pip install "wefe[dev]"

This installs additional packages including:

- pytest and pytest-cov for testing
- sphinx and related packages for documentation
- ruff for code formatting and linting
- mypy for type checking
- ipython for interactive development


Testing
-------

All unit tests are in the ``tests/`` folder. WEFE uses ``pytest`` as the testing framework.

To run all tests::

    pytest tests

To run tests with coverage reporting::

    pytest tests --cov=wefe --cov-report=html

To run a specific test file::

    pytest tests/test_datasets.py

Coverage reports will be generated in ``htmlcov/`` directory.


Build the documentation
-----------------------

The documentation is built using Sphinx and can be found in the ``docs/`` folder.

To build the documentation::

    cd docs
    make html

Or using the development environment::

    pip install "wefe[dev]"
    cd docs
    make html

The built documentation will be available at ``docs/_build/html/index.html``.

Changelog
=========

Version 1.0.0
-------------------

**Major Release - Breaking Changes**

- **Python 3.10+ Required**: Dropped support for Python 3.6-3.9
- **Modern Packaging**: Migrated from ``setup.py`` to ``pyproject.toml``
- **Updated Dependencies**: All packages updated for modern Python ecosystem

**New Features**:

- Robust dataset fetching with retry mechanism and exponential backoff
- HTTP 429 (rate limiting) and timeout error handling
- Optional dependencies: ``pip install "wefe[dev]"`` and ``"wefe[pytorch]"``
- Dynamic version loading from ``wefe.__version__``

**Core Improvements**:

- **WordEmbeddingModel**: Enhanced type safety, better gensim compatibility, improved error handling
- **BaseMetric**: Refactored input validation, standardized ``run_query`` methods across all metrics
- **Testing**: Converted to pytest patterns with monkeypatch, comprehensive test coverage
- **Code Quality**: Migration from flake8 to Ruff, enhanced documentation with detailed docstrings

**Development Workflow**:

- GitHub Actions upgraded with Python 3.10-3.13 matrix testing
- Pre-commit hooks enhanced with JSON/TOML validation and security checks
- Modernized Sphinx documentation configuration
- Updated benchmark documentation and metrics comparison tables

Version 0.4.1
-------------------

- Fixed a bug where the last pair of target words in RIPA was not included.
- Added a benchmark to the documentation that compares WEFE with other bias
  measurement and mitigation libraries.
- Added a documentation page describing the library changes since the original
  paper release.

Version 0.4.0
-------------------
- Implemented 3 new bias mitigation (debias) methods: Double Hard Debias, Half
  Sibling Regression and Repulsion Attraction Neutralization.
- Restructured the library documentation. It is now divided into a user guide and a
  theoretical framework: the user guide contains no theoretical material, which is
  instead covered in the conceptual guides.
- Improved the API documentation, together with the metrics and debias examples.
  Added multilingual examples contributed by the community.
- The user guides are now provided as notebooks, so they are fully executable.
- Improved the library testing mechanisms for metrics and debias methods.
- Fixed a wrong repr of Query; the sets are now shown in the correct order.
- Implemented repr for WordEmbeddingModel.
- Testing CI moved from CircleCI to GithubActions.
- License changed to MIT.

Version 0.3.2
-------------
- Fixed an RNSB bug where the classification labels were interchanged and could
  produce erroneous results when the attribute sets are of different sizes.
- Fixed the RNSB replication notebook.
- Updated the WEFE case study scores.
- Improved the documentation examples for WEAT, RNSB and RIPA.
- Added a holdout parameter to RNSB that indicates whether a holdout set is used
  when training the classifier.
- Improved the printing of the RNSB evaluation.

Version 0.3.1
-------------
- Updated the WEFE original case study.
- Hotfix: several bug fixes for executing the WEFE original case study.
- Set the ``fetch_eds`` ``top_n_race_occupations`` argument to 10.
- Preprocessing: ``get_embeddings_from_set`` now returns a list with the lost
  preprocessed words instead of the original ones.

Version 0.3.0
-------------
- Implemented Bolukbasi et al. 2016 Hard Debias.
- Implemented Manzini et al. 2019 Multiclass Hard Debias.
- Implemented a fetch function to retrieve the gn-glove female-male word sets.
- Moved the logic that transforms words, sets and queries into embeddings to its own
  module: ``preprocessing``.
- Replaced the ``preprocessor_args`` and ``secondary_preprocessor_args`` metric
  preprocessing parameters with a list of preprocessors (``preprocessors``) plus a
  ``strategy`` parameter indicating whether to consider all the transformed words
  (``'all'``) or only the first one found (``'first'``).
- Renamed the WordEmbeddingModel attributes ``model`` and ``model_name`` to
  ``wv`` and ``name``, respectively.
- Renamed the ``word_embedding`` argument of ``run_query`` to ``model`` in every metric.


Version 0.2.2
-------------

- Added RIPA metrics (thanks @stolenpyjak for your contribution!).
- Fixed a Literal typing bug to make WEFE compatible with Python 3.7.

Version 0.2.1
-------------

- Compatibility fixes.

Version 0.2.0
--------------

- Renamed the optional ``run_query`` parameter ``warn_filtered_words`` to
  ``warn_not_found_words``.
- Added a ``word_preprocessor_args`` parameter to ``run_query`` that allows specifying
  transformations to apply before searching for words in the word embeddings.
- Added a ``secondary_preprocessor_args`` parameter to ``run_query`` that allows
  specifying a second preprocessing transformation applied to words before searching
  for them in the word embeddings. The first preprocessor does not need to be
  specified in order to use this one.
- Implemented ``__getitem__`` in ``WordEmbeddingModel``. This method allows obtaining
  the embedding of a word from the model stored in the instance using indexers.
- Removed the underscore from class and instance variable names.
- Improved the type and verification exception messages raised when creating objects
  and executing methods.
- Fixed an error that appeared when calculating rankings with two aggregation columns
  of the same name.
- Ranking correlations are now calculated using the pandas ``corr`` method.
- Changed the metric template, name and short_names to class variables.
- Implemented ``random_state`` in RNSB to allow replication of experiments.
- ``run_query`` now returns the default metric requested in the parameters as its
  result, plus any other calculated values that may be useful in the remaining keys
  of the dictionary.
- Fixed a problem with the API documentation: it now shows the methods of the classes.
- Implemented the p-value for WEAT.


Citation
=========


Please cite the following paper if using this package in an academic publication:

P. Badilla, F. Bravo-Marquez, and J. Pérez.
`WEFE: The Word Embeddings Fairness Evaluation Framework. In Proceedings of the
29th International Joint Conference on Artificial Intelligence and the 17th
Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Yokohama, Japan. <https://www.ijcai.org/Proceedings/2020/60>`_

Bibtex:

.. code-block:: latex

    @InProceedings{wefe2020,
        title     = {WEFE: The Word Embeddings Fairness Evaluation Framework},
        author    = {Badilla, Pablo and Bravo-Marquez, Felipe and Pérez, Jorge},
        booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on
                   Artificial Intelligence, {IJCAI-20}},
        publisher = {International Joint Conferences on Artificial Intelligence Organization},
        pages     = {430--436},
        year      = {2020},
        month     = {7},
        doi       = {10.24963/ijcai.2020/60},
        url       = {https://doi.org/10.24963/ijcai.2020/60},
        }


Team
====

- `Pablo Badilla <https://github.com/pbadillatorrealba/>`_.
- `Felipe Bravo-Marquez <https://felipebravom.com/>`_.
- `Jorge Pérez <https://users.dcc.uchile.cl/~jperez/>`_.
- `María José Zambrano  <https://github.com/mzambrano1/>`_.

Contributors
------------


We thank all our contributors who have allowed WEFE to grow, especially
`stolenpyjak <https://github.com/stolenpyjak/>`_ and
`mspl13 <https://github.com/mspl13/>`_ for implementing new metrics.

We also thank `alan-cueva <https://github.com/alan-cueva/>`_ for initiating the development
of metrics for contextualized embedding models and
`harshvr15 <https://github.com/harshvr15/>`_ for the examples of multi-language bias measurement.

Thank you very much 😊!

            
