.. image:: https://github.com/Valires/er-evaluation/actions/workflows/python-package.yaml/badge.svg
        :target: https://github.com/Valires/er-evaluation/actions/workflows/python-package.yaml
        :alt: GitHub Actions workflow status and link.

.. image:: https://badge.fury.io/py/er-evaluation.svg
        :target: https://badge.fury.io/py/er-evaluation
        :alt: PyPI release badge and link.

.. image:: https://readthedocs.org/projects/er-evaluation/badge/?version=latest
        :target: https://er-evaluation.readthedocs.io/en/latest/?version=latest
        :alt: Documentation status badge and link.

.. image:: https://joss.theoj.org/papers/10.21105/joss.05619/status.svg
        :target: https://doi.org/10.21105/joss.05619
        :alt: Journal of Open Source Software publication badge and link.

🔍 ER-Evaluation: An End-to-End Evaluation Framework for Entity Resolution Systems
===================================================================================

`ER-Evaluation <https://er-evaluation.readthedocs.io/en/latest>`_ is a Python package for the evaluation of entity resolution (ER) systems.

It provides an **entity-centric** approach to evaluation. Given a sample of resolved entities, it provides: 

* **summary statistics**, such as average cluster size, matching rate, homonymy rate, and name variation rate.
* **comparison statistics** between entity resolutions, such as the proportion of links in one resolution that are also present in the other, and vice versa (one formalization is sketched after this list).
* **performance estimates** with uncertainty quantification, such as precision, recall, and F1 score estimates, as well as B-cubed and cluster metric estimates.
* **error analysis**, such as cluster-level error metrics and analysis tools to identify the root causes of errors.
* convenience **visualization tools**.
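
One natural formalization of the comparison statistic above (a sketch for intuition; see the documentation for the package's exact definitions): letting :math:`L(\cdot)` denote the set of within-cluster record pairs of a resolution, the proportion of links of a resolution :math:`\hat{C}` that are also present in another resolution :math:`C'` is

.. math::

    \frac{\lvert L(\hat{C}) \cap L(C') \rvert}{\lvert L(\hat{C}) \rvert}.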

For more information on how to resolve a sample of entities for evaluation and model training, please refer to our `data labeling guide <https://er-evaluation.readthedocs.io/en/latest/06-data-labeling.html>`_.

Installation
---------------

Install the released version from PyPI using:

.. code:: bash

    pip install er-evaluation

Or install the development version using:

.. code:: bash

    pip install git+https://github.com/Valires/er-evaluation.git


Documentation
----------------

Please refer to the documentation website `er-evaluation.readthedocs.io <https://er-evaluation.readthedocs.io/en/latest>`_.

Usage Examples
-----------------

Please refer to the `User Guide <https://er-evaluation.readthedocs.io/en/latest/userguide.html>`_ or our `Visualization Examples <https://er-evaluation.readthedocs.io/en/latest/visualizations.html>`_ for a complete usage guide.

In summary, here's how you might use the package.

1. Import your predicted disambiguations and reference benchmark dataset. The benchmark dataset should contain a sample of disambiguated entities.

.. code:: python

        import er_evaluation as ee

        predictions, reference = ee.load_pv_disambiguations()
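
If you are working with your own data rather than the demo above, the inputs are membership vectors: pandas Series indexed by record ID whose values are cluster IDs. A minimal sketch, with record and cluster IDs made up for illustration:

.. code:: python

        import pandas as pd

        # Predicted disambiguation: r1 and r2 resolve to the same entity.
        prediction = pd.Series(["c1", "c1", "c2"], index=["r1", "r2", "r3"])

        # Reference (benchmark) disambiguation over the sampled records.
        reference = pd.Series(["e1", "e1", "e2"], index=["r1", "r2", "r3"])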

2. Plot `summary statistics <https://er-evaluation.readthedocs.io/en/latest/02-summary_statistics.html>`_ and compare disambiguations.

.. code:: python

        ee.plot_summaries(predictions)

.. image:: media/plot_summaries.png
   :width: 400

.. code:: python

        ee.plot_comparison(predictions)

.. image:: media/plot_comparison.png
   :width: 400

3. Define sampling weights and `estimate performance metrics <https://er-evaluation.readthedocs.io/en/latest/03-estimating_performance.html>`_.

.. code:: python

        ee.plot_estimates(predictions, {"sample": reference, "weights": "cluster_size"})

.. image:: media/plot_estimates.png
   :width: 400
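
Instead of the "cluster_size" keyword, explicit weights can presumably be supplied; a minimal sketch, assuming a pandas Series indexed by sampled cluster ID is accepted (the cluster IDs and sizes below are made up):

.. code:: python

        import pandas as pd

        # Hypothetical weights for two sampled clusters, proportional to size.
        weights = pd.Series({"e1": 3, "e2": 2})

        ee.plot_estimates(predictions, {"sample": reference, "weights": weights})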

4. Perform `error analysis <https://er-evaluation.readthedocs.io/en/latest/04-error_analysis.html>`_ using cluster-level explanatory features and cluster error metrics.

.. code:: python

        ee.make_dt_regressor_plot(
                y,                     # cluster-level error metric values
                weights,               # sampling weights for the benchmark clusters
                features_df,           # DataFrame of cluster-level explanatory features
                numerical_features,    # names of numerical feature columns
                categorical_features,  # names of categorical feature columns
                max_depth=3,
                type="sunburst"
        )

.. image:: media/plot_decisiontree.png
   :width: 400
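
For concreteness, here is a hypothetical construction of the inputs used above; the feature names, cluster IDs, and metric values are invented for illustration:

.. code:: python

        import pandas as pd

        # Cluster-level explanatory features for two sampled clusters.
        features_df = pd.DataFrame(
                {"cluster_size": [3, 2], "country": ["US", "FR"]},
                index=["e1", "e2"],
        )
        numerical_features = ["cluster_size"]
        categorical_features = ["country"]

        # Cluster-level error metric (e.g., an error rate) and sampling weights.
        y = pd.Series([0.1, 0.4], index=["e1", "e2"])
        weights = pd.Series([1.0, 1.0], index=["e1", "e2"])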

Development Philosophy
-------------------------

**ER-Evaluation** is designed to be a unified source of evaluation tools for entity resolution systems, adhering to the Unix philosophy of simplicity, modularity, and composability. Its functions take standard data structures such as pandas Series and DataFrames as input, making them easy to integrate into existing workflows: import the functions you need and call them on your data, with no custom data structures or complex architectures to learn.
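
As a toy illustration of this design (record and cluster IDs are invented, and this assumes ``plot_summaries`` accepts a dictionary of membership Series, as in the usage example above):

.. code:: python

        import pandas as pd
        import er_evaluation as ee

        # Two alternative disambiguations of the same five records.
        pred_a = pd.Series(["c1", "c1", "c2", "c2", "c3"],
                           index=["r1", "r2", "r3", "r4", "r5"])
        pred_b = pd.Series(["c1", "c1", "c1", "c2", "c2"],
                           index=["r1", "r2", "r3", "r4", "r5"])

        # No custom containers needed: pass the pandas objects directly.
        ee.plot_summaries({"baseline": pred_a, "improved": pred_b})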

Citation
-----------

Please acknowledge the publications below if you use ER-Evaluation:

- Binette, Olivier. (2022). ER-Evaluation: An End-to-End Evaluation Framework for Entity Resolution Systems. Available online at `github.com/Valires/ER-Evaluation <https://github.com/Valires/ER-Evaluation>`_
- Binette, Olivier, Sokhna A. York, Emma Hickerson, Youngsoo Baek, Sarvo Madhavan, and Christina Jones. (2022). Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org. arXiv e-prints: `arxiv:2210.01230 <https://arxiv.org/abs/2210.01230>`_
- Upcoming: "An End-to-End Framework for the Evaluation of Entity Resolution Systems With Application to Inventor Name Disambiguation"

Public License
--------------

* `GNU Affero General Public License v3 <https://www.gnu.org/licenses/agpl-3.0.en.html>`_


=========
Changelog
=========

2.3.0 (November 29, 2023)
-------------------------

* Fix handling of NaN values in ``compress_memberships()``

2.2.1 (November 8, 2023)
------------------------

* Small fixes to paper and documentation.

2.2.0 (October 26, 2023)
------------------------

* Streamline package structure
* Additional tests
* Improved documentation

2.1.0 (June 02, 2023)
----------------------

* Add sunburst visualization for decision tree regressors
* Add decision tree regression pipeline for subgroup discovery
* Add search utilities
* Prepare submission to JOSS

2.0.0 (March 27, 2023)
----------------------

* Improve documentation
* Add handling of NA values
* Bug fixes
* Add datasets module
* Add visualization functions
* Performance improvements
* BREAKING: error_analysis functions have been renamed.
* BREAKING: estimators have been renamed.
* Add support for sensitivity analyses to estimators
* Add fairness plots
* Add ``compress_memberships()`` function for performance improvements.

1.2.0 (January 11, 2023)
------------------------

- Refactoring and documentation overhaul.

1.1.0 (January 10, 2023)
------------------------

- Added additional error metrics, performance evaluation metrics, and performance estimators.
- Added record-level error metrics and error analysis tools.

1.0.2 (December 5, 2022)
------------------------

- Update setup.py with find_packages()

1.0.1 (November 30, 2022)
-------------------------

- Add "normalize" option to plot_cluster_sizes_distribution.
- Fix bugs in homonymy_rate and name_variation_rate.
- Fix bug in estimators.

1.0.0
-----

- Initial release

            
