<div align="center">
    <img src="https://raw.githubusercontent.com/emunaran/xai-compare/main/docs/images/xai-compare_logo.png" alt="Logo" width="200"/>
</div>

---
[![PyPI](https://img.shields.io/badge/pypi-v0.1.2-orange)](https://pypi.org/project/xai-compare/)
![License](https://img.shields.io/badge/license-MIT-green)
[![Python](https://img.shields.io/badge/python-%3E%3D3.9-blue)](https://pypi.org/project/xai-compare/)


## Description
`xai-compare` is an open-source library that provides a suite of tools to systematically compare and evaluate the quality of explanations generated by different Explainable AI (XAI) methods. This package facilitates the development of new XAI methods and promotes transparent evaluations of such methods.

`xai-compare` includes a variety of XAI techniques like SHAP, LIME, and Permutation Feature Importance, and introduces advanced comparison techniques such as consistency measurement and feature selection analysis. It is designed to be flexible, easy to integrate, and ideal for enhancing model transparency and interpretability across various applications.


## Installation

The package can be installed from [PyPI](https://pypi.org/project/xai-compare/):

Using pip:
```bash
pip install xai-compare
```

## Explainers

`xai-compare` supports three popular model-agnostic XAI methods:

### SHAP
- SHAP values attribute each feature's contribution to an individual prediction; aggregated across many samples, they also provide global interpretations of a model's output.
- Depending on the model type, the library initializes an appropriate explainer: `shap.TreeExplainer` for tree-based models, `shap.LinearExplainer` for linear models, or `shap.KernelExplainer` for the general case. It then uses SHAP to analyze and explain the model's behavior.
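
For illustration, here is a minimal sketch of that explainer choice using the `shap` library directly on the California housing data (this uses the public `shap` API, not `xai-compare`'s internal code):

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Tree-based model -> TreeExplainer; a linear model would use
# shap.LinearExplainer, and shap.KernelExplainer covers the general case.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # one attribution per feature per row

# Aggregating the local attributions across samples gives the global view.
shap.summary_plot(shap_values, X.iloc[:100])
```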

### LIME
- LIME provides local interpretations of individual predictions by approximating the model's behavior around specific data points.
- The library initializes a `LimeTabularExplainer` and uses it to explain the model's local predictions.
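
A minimal `LimeTabularExplainer` sketch for a single local explanation (again using the underlying `lime` package directly; the breast cancer dataset here is purely for illustration):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME fits a simple local surrogate model around this one instance.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top features with their local weights
```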

### Permutation Feature Importance
- Permutation Feature Importance assesses each feature's impact on a model's predictions by measuring how much performance drops when that feature's values are randomly shuffled.
- The library computes this drop for each feature and averages it over multiple permutations to produce the importance score.
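
The same idea in a short sketch using scikit-learn's `permutation_importance` (how `xai-compare` computes it internally may differ):

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times and record the average drop in R^2.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {mean:.4f}")
```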



## Comparison techniques

### Feature selection

The `FeatureSelection` class in `xai-compare` helps optimize machine learning models by identifying and prioritizing the most influential features. It leverages a variety of explainers, including SHAP, LIME, and Permutation Importance, to evaluate feature relevance systematically, and it iteratively removes the least significant features so users can see how each feature affects model performance. This improves model efficiency and enhances interpretability, making model decisions easier to understand and justify. A from-scratch sketch of the workflow follows the figure below.


<div align="center">
    <img src="https://github.com/emunaran/xai-compare/raw/main/docs/images/Feature_selection_wf.png" alt="Feature Selection Workflow" width="700"/>
    <p style="color: #808080;">Feature Selection Workflow</p>
</div>
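
As a rough illustration of the workflow above, here is a from-scratch sketch of iterative feature removal driven by permutation importance. It is not the `FeatureSelection` class's actual API; SHAP or LIME rankings could be substituted for the importance step, which is exactly the comparison `xai-compare` enables:

```python
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

features = list(X.columns)
while len(features) > 3:  # arbitrary stopping point for illustration
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X_train[features], y_train)
    score = model.score(X_test[features], y_test)

    # Rank the remaining features and drop the weakest one.
    result = permutation_importance(model, X_test[features], y_test,
                                    n_repeats=5, random_state=0)
    weakest = features[int(np.argmin(result.importances_mean))]
    print(f"{len(features)} features, R^2 = {score:.3f}, dropping {weakest!r}")
    features.remove(weakest)
```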


### Consistency
The `Consistency` class assesses the stability and reliability of explanations produced by different explainers across multiple splits of the data. It helps determine whether an explainer's insights hold up regardless of which sample of the data it sees. A minimal sketch of the idea follows the figure below.

<div align="center">
    <img src="https://github.com/emunaran/xai-compare/raw/main/docs/images/Consistency_wf.png" alt="Consistency Measurement Workflow" width="700"/>
    <p style="color: #808080;">Consistency Measurement Workflow</p>
</div>
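
Here is a minimal sketch of the consistency idea (not the `Consistency` class's actual API): compute an importance ranking on several data splits and measure how strongly the rankings agree:

```python
from scipy.stats import spearmanr
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import KFold

X, y = fetch_california_housing(return_X_y=True, as_frame=True)

importances = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    result = permutation_importance(model, X.iloc[test_idx], y.iloc[test_idx],
                                    n_repeats=5, random_state=0)
    importances.append(result.importances_mean)

# Pairwise rank agreement between splits; values near 1 mean stable explanations.
for i in range(len(importances)):
    for j in range(i + 1, len(importances)):
        rho, _ = spearmanr(importances[i], importances[j])
        print(f"splits {i} vs {j}: Spearman rho = {rho:.3f}")
```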


## Sample notebooks
The notebooks below demonstrate different use cases for the `xai-compare` package. For hands-on experience, explore them in the repository's notebooks directory.

[Feature Selection Comparison Notebook](xai_compare/demo_notebooks/comparison_feature_selection.ipynb)

[Consistency Comparison Notebook](xai_compare/demo_notebooks/comparison_consistency.ipynb)

[Main Demo Notebook](xai_compare/demo_notebooks/main_demo.ipynb)


## Call for Contributors
We're seeking contributors with expertise in machine learning, ideally explainable AI (XAI), and proficiency in Python. If you have a background in these areas and are passionate about making machine learning models more transparent, we welcome your contributions. Join us in shaping the future of interpretable AI.


## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.


## Acknowledgments
- The California housing dataset is sourced from scikit-learn.
- SHAP and LIME libraries are used for model interpretability.


            
