| Field | Value |
| --- | --- |
| Name | explainy |
| Version | 0.2.12 |
| home_page | https://github.com/MauroLuzzatto/explainy |
| Summary | explainy is a library for generating explanations for machine learning models in Python. It uses methods from Machine Learning Explainability and provides a standardized API to create feature importance explanations for samples. The explanations are generated in the form of plots and text. |
| upload_time | 2024-10-27 17:46:08 |
| maintainer | None |
| docs_url | None |
| author | Mauro Luzzatto |
| requires_python | >=3.9 |
| license | MIT license |
| keywords | explainy |
| requirements | No requirements were recorded. |
<!-- <img src="https://github.com/MauroLuzzatto/explainy/raw/main/docs/_static/logo.png" width="180" height="180" align="right"/> -->
<p align="center">
<img src="https://github.com/MauroLuzzatto/explainy/raw/main/docs/_static/logo.png" width="200" height="200"/>
</p>
<!-- # explainy - machine learning model explanations for humans -->
<!-- # explainy - black-box model explanations for humans -->
<h1 align="center">explainy - black-box model explanations for humans</h1>
[PyPI](https://pypi.python.org/pypi/explainy)
[codecov](https://codecov.io/gh/MauroLuzzatto/explainy)
[Documentation](https://explainy.readthedocs.io/en/latest/?version=latest)
[PyPI project](https://pypi.org/project/explainy)
[Code style: black](https://github.com/ambv/black)
[Imports: isort](https://pycqa.github.io/isort/)
[Downloads](https://pepy.tech/project/explainy)
**explainy** is a library for generating machine learning model explanations in Python. It uses methods from **Machine Learning Explainability** and provides a standardized API to create feature importance explanations for samples.
The API is inspired by `scikit-learn` and has three core methods: `explain()`, `plot()`, and `importance()`. The explanations are generated in the form of texts and plots.
**explainy** comes with four different algorithms for creating either *global* or *local* and *contrastive* or *non-contrastive* model explanations.
| Method | Contrastivity | Scope | Classification | Regression |
| --- | --- | :---: | :---: | :---: |
| [Permutation Feature Importance](https://explainy.readthedocs.io/en/latest/explainy.explanations.html#module-explainy.explanation.permutation_explanation) | non-contrastive | global | :star: | :star: |
| [Shap Values](https://explainy.readthedocs.io/en/latest/explainy.explanations.html?highlight=shap#module-explainy.explanations.shap_explanation) | non-contrastive | local | :star: | :star: |
| [Surrogate Model](https://explainy.readthedocs.io/en/latest/explainy.explanations.html#module-explainy.explanation.surrogate_model_explanation) | contrastive | global | :star: | :star: |
| [Counterfactual Example](https://explainy.readthedocs.io/en/latest/explainy.explanations.html#module-explainy.explanation.counterfactual_explanation) | contrastive | local | :star: | :star: |
Description:
- **global**: explanation of system functionality (all samples share the same explanation)
- **local**: explanation of the decision rationale (each sample has its own explanation)
- **contrastive**: tracing of the decision path (differences to other outcomes are described)
- **non-contrastive**: parameter weighting (the feature importance is reported)
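Only `PermutationExplanation` is demonstrated in this README; the sketch below shows how the four explainers map onto this taxonomy. The other three class names are inferred from the module names linked in the table and should be checked against the API docs:

```python
# hypothetical imports: ShapExplanation, SurrogateModelExplanation, and
# CounterfactualExplanation are inferred from the module names above and
# are not confirmed by this README
from explainy.explanations import (
    PermutationExplanation,     # global, non-contrastive
    ShapExplanation,            # local, non-contrastive
    SurrogateModelExplanation,  # global, contrastive
    CounterfactualExplanation,  # local, contrastive
)
```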
## Documentation
https://explainy.readthedocs.io
## Install explainy
```
pip install explainy
```
---
Further, install `graphviz` (version 9.0.0 or later) for plotting tree surrogate models:
#### Windows
```
choco install graphviz
```
#### Mac
```
brew install graphviz
```
#### Linux: Ubuntu packages
```
sudo apt install graphviz
```
Further details on how to install `graphviz` can be found in the official [graphviz docs](https://graphviz.org/download/).
Also, make sure that the folder containing the `dot` executable is added to your system's `PATH`. You can find further details [here](https://github.com/xflr6/graphviz?tab=readme-ov-file#installation).
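As an optional sanity check (not part of the `explainy` API), you can verify from Python that `dot` is discoverable before plotting a surrogate tree:

```python
import shutil

# graphviz's `dot` executable must be on PATH for tree plots to render
if shutil.which("dot") is None:
    raise RuntimeError(
        "graphviz `dot` not found on PATH; "
        "see https://graphviz.org/download/ for installation instructions"
    )
```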
## Usage
📚 A comprehensive example of the `explainy` API can be found in the [example section](https://explainy.readthedocs.io/en/latest/examples/01-explainy-intro.html) of the documentation.
Initialize and train a `scikit-learn` model:
```python
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# load the diabetes regression dataset and create a train/test split
diabetes = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, diabetes.target, random_state=0
)

# wrap the test data in DataFrames so the feature names are available
X_test = pd.DataFrame(X_test, columns=diabetes.feature_names)
y_test = pd.DataFrame(y_test)

# fit the model that will be explained
model = RandomForestRegressor(random_state=0)
model.fit(X_train, y_train)
```
Initialize the `PermutationExplanation` (or any other explanation) object and pass in the trained model and the dataset to be explained.
Additionally, define the number of features to be used in the explanation. This allows you to configure the verbosity of your explanation.
Set the index of the sample that should be explained.
```python
from explainy.explanations import PermutationExplanation

# number of features to include in the explanation (controls its verbosity)
number_of_features = 4

explainer = PermutationExplanation(
    X_test, y_test, model, number_of_features
)
```
Call the `explain()` method and print the explanation for the sample (in the case of a local explanation, every sample has a different explanation).
```python
explanation = explainer.explain(sample_index=1)
print(explanation)
```
> The RandomForestRegressor used 10 features to produce the predictions. The prediction of this sample was 251.8.
> The feature importance was calculated using the Permutation Feature Importance method.
> The four features which were most important for the predictions were (from highest to lowest): 'bmi' (0.15), 's5' (0.12), 'bp' (0.03), and 'age' (0.02).
Use the `plot()` method to create a feature importance plot of that sample.
```python
explainer.plot()
```

If you prefer, you can also create another type of plot, for example a box plot.
```python
explainer.plot(kind='box')
```

Finally, you can also look at the importance values of the features (in the form of a `pd.DataFrame`).
```python
feature_importance = explainer.importance()
print(feature_importance)
```
```
   Feature  Importance
0      bmi        0.15
1       s5        0.12
2       bp        0.03
3      age        0.02
4       s2       -0.00
5      sex       -0.00
6       s3       -0.00
7       s1       -0.01
8       s6       -0.01
9       s4       -0.01
```
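Because `importance()` returns a plain `pd.DataFrame`, the usual pandas operations apply. A minimal sketch, assuming the columns are named `Feature` and `Importance` as in the printout above:

```python
# keep only the features with a positive permutation importance
top_features = feature_importance[feature_importance["Importance"].astype(float) > 0]

# persist the full table, e.g. for reporting
feature_importance.to_csv("feature_importance.csv", index=False)
```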
<!-- Finally the result can be saved
```python
explainer.save(sample_index)
``` -->
<!--
## Model Explanations
-->
## Features
- Algorithms for inspecting black-box machine learning models
- Support for the machine learning frameworks `scikit-learn` and `xgboost`
- **explainy** offers a standardized API with three core methods: `explain()`, `plot()`, and `importance()` (see the sketch below)
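In practice, the standardized API means every explainer is driven by the same three calls, reusing the `explainer` object from the usage section above:

```python
# the same three-call pattern applies to any explainy explainer
explanation = explainer.explain(sample_index=1)  # textual explanation
explainer.plot()                                 # feature importance plot
feature_importance = explainer.importance()      # importance values as a DataFrame
```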
## Other Machine Learning Explainability libraries to watch
- [shap](https://github.com/slundberg/shap): A game theoretic approach to explain the output of any machine learning model
- [eli5](https://github.com/TeamHG-Memex/eli5): A library for debugging/inspecting machine learning classifiers and explaining their predictions
- [alibi](https://github.com/SeldonIO/alibi): Algorithms for explaining machine learning models
- [interpret](https://github.com/interpretml/interpret): Fit interpretable models. Explain blackbox machine learning
## Source
Molnar, Christoph. *Interpretable Machine Learning: A Guide for Making Black Box Models Explainable*, 2019. https://christophm.github.io/interpretable-ml-book/
## Author
**Mauro Luzzatto** - [Maurol](https://github.com/MauroLuzzatto)
## Raw data
```json
{
    "_id": null,
    "home_page": "https://github.com/MauroLuzzatto/explainy",
    "name": "explainy",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": "explainy",
    "author": "Mauro Luzzatto",
    "author_email": "mauroluzzatto@hotmail.com",
    "download_url": "https://files.pythonhosted.org/packages/ff/aa/10800a79c5cb00d9927862566e37ed467bbbe1179976f36d0a73c204af1f/explainy-0.2.12.tar.gz",
    "platform": null,
"description": "\n<!-- <img src=\"https://github.com/MauroLuzzatto/explainy/raw/main/docs/_static/logo.png\" width=\"180\" height=\"180\" align=\"right\"/> -->\n<p align=\"center\">\n<img src=\"https://github.com/MauroLuzzatto/explainy/raw/main/docs/_static/logo.png\" width=\"200\" height=\"200\"/>\n</p>\n<!-- # explainy - machine learning model explanations for humans -->\n<!-- # explainy - black-box model explanations for humans -->\n\n<h1 align=\"center\">explainy - black-box model explanations for humans</h1>\n\n[](https://pypi.python.org/pypi/explainy)\n[](https://codecov.io/gh/MauroLuzzatto/explainy)\n[](https://explainy.readthedocs.io/en/latest/?version=latest)\n[](https://pypi.org/project/explainy)\n[](https://github.com/ambv/black)\n[](https://pycqa.github.io/isort/)\n[](https://pepy.tech/project/explainy)\n\n<!-- [](https://app.travis-ci.com/github/MauroLuzzatto/explainy?branch=master) -->\n\n\n**explainy** is a library for generating machine learning models explanations in Python. It uses methods from **Machine Learning Explainability** and provides a standardized API to create feature importance explanations for samples. \n\nThe API is inspired by `scikit-learn` and has three core methods `explain()`, `plot()` and, `importance()`. The explanations are generated in the form of texts and plots.\n\n**explainy** comes with four different algorithms to create either *global* or *local* and *contrastive* or *non-contrastive* model explanations.\n\n\n| Method\t\t\t\t|Type | Explanations | Classification | Regression | \n| --- \t\t\t\t| --- | :---: | :---: | :---: | \n|[Permutation Feature Importance](https://explainy.readthedocs.io/en/latest/explainy.explanations.html#module-explainy.explanation.permutation_explanation)\t| non-contrastive | global | :star: | :star:|\n|[Shap Values](https://explainy.readthedocs.io/en/latest/explainy.explanations.html?highlight=shap#module-explainy.explanations.shap_explanation)\t\t| non-contrastive | local | \t:star: | :star:|\n|[Surrogate Model](https://explainy.readthedocs.io/en/latest/explainy.explanations.html#module-explainy.explanation.surrogate_model_explanation)|contrastive | global | :star: | :star: | \n|[Counterfactual Example](https://explainy.readthedocs.io/en/latest/explainy.explanations.html#module-explainy.explanation.counterfactual_explanation)| contrastive | local |:star:| :star:|\n\n\nDescription:\n- **global**: explanation of system functionality (all samples have the same explanation)\n- **local**: explanation of decision rationale (each sample has its own explanation)\n- **contrastive**: tracing of decision path (differences to other outcomes are described)\n- **non-contrastive**: parameter weighting (the feature importance is reported)\n\n\n## Documentation\nhttps://explainy.readthedocs.io\n\n\n## Install explainy\n\n\n\n```\npip install explainy\n```\n\n---\n\nFurther, install `graphviz` (version 9.0.0 or later) for plotting tree surrogate models:\n\n#### Windows\n```\nchoco install graphviz\n```\n\n#### Mac\n```\nbrew install graphviz\n```\n#### Linux: Ubuntu packages\n```\nsudo apt install graphviz\n```\n\nFurther details on how to install `graphviz` can be found in the official [graphviz docs](https://graphviz.org/download/).\n\nAlso, make sure that the folder with the `dot` executable is added to your systems `PATH`. 
You can find further details [here](https://github.com/xflr6/graphviz?tab=readme-ov-file#installation).\n\n## Usage\n\n\ud83d\udcda A comprehensive example of the `explainy` API can be found in this  \n \n\ud83d\udcd6 Or in the [example section](https://explainy.readthedocs.io/en/latest/examples/01-explainy-intro.html) of the documentation\n\n\nInitialize and train a `scikit-learn` model:\n```python\nimport pandas as pd\nfrom sklearn.datasets import load_diabetes\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import train_test_split\n\ndiabetes = load_diabetes()\nX_train, X_test, y_train, y_test = train_test_split(\n diabetes.data, diabetes.target, random_state=0\n)\nX_test = pd.DataFrame(X_test, columns=diabetes.feature_names)\ny_test = pd.DataFrame(y_test)\n\nmodel = RandomForestRegressor(random_state=0)\nmodel.fit(X_train, y_train)\n```\n\nInitialize the `PermutationExplanation` (or any other explanation) object and pass in the trained model and the to be explained dataset. \n\nAddtionally, define the number of features used in the explanation. This allows you to configure the verbosity of your exaplanation.\n\n Set the index of the sample that should be explained.\n\n```python\nfrom explainy.explanations import PermutationExplanation\n\nnumber_of_features = 4\n\nexplainer = PermutationExplanation(\n X_test, y_test, model, number_of_features\n)\n```\nCall the `explain()` method and print the explanation for the sample (in case of a local explanation every sample has a different explanation).\n\n```python\nexplanation = explainer.explain(sample_index=1)\nprint(explanation)\n```\n> The RandomForestRegressor used 10 features to produce the predictions. The prediction of this sample was 251.8.\n\n> The feature importance was calculated using the Permutation Feature Importance method.\n\n> The four features which were most important for the predictions were (from highest to lowest): 'bmi' (0.15), 's5' (0.12), 'bp' (0.03), and 'age' (0.02).\n\nUse the `plot()` method to create a feature importance plot of that sample.\n\n```python\nexplainer.plot()\n```\n\n\nIf your prefer, you can also create another type of plot, as for example a boxplot.\n```python\nexplainer.plot(kind='box')\n```\n\n\n\nFinally, you can also look at the importance values of the features (in form of a `pd.DataFrame`).\n\n```python\nfeature_importance = explainer.importance()\nprint(feature_importance)\n```\n\n```python\n Feature Importance\n0 bmi 0.15\n1 s5 0.12\n2 bp 0.03\n3 age 0.02\n4 s2 -0.00\n5 sex -0.00\n6 s3 -0.00\n7 s1 -0.01\n8 s6 -0.01\n9 s4 -0.01\n```\n\n<!-- Finally the result can be saved\n\n```python\nexplainer.save(sample_index)\n``` -->\n\n<!-- \n## Model Explanations\n-->\n\n\n## Features\n- Algorithms for inspecting black-box machine learning models \n- Support for the machine learning frameworks `scikit-learn` and `xgboost`\n- **explainy** offers a standardized API with three core methods `explain()`, `plot()`, `importance()`\n\n## Other Machine Learning Explainability libraries to watch\n- [shap](https://github.com/slundberg/shap): A game theoretic approach to explain the output of any machine learning model\n- [eli5](https://github.com/TeamHG-Memex/eli5): A library for debugging/inspecting machine learning classifiers and explaining their predictions \n- [alibi](https://github.com/SeldonIO/alibi): Algorithms for explaining machine learning models \n- [interpret](https://github.com/interpretml/interpret): Fit interpretable models. 
Explain blackbox machine learning\n\n\n## Source\n\nMolnar, Christoph. \"Interpretable machine learning. A Guide for Making Black Box Models Explainable\", 2019. https://christophm.github.io/interpretable-ml-book/\n\n## Author\n**Mauro Luzzatto** - [Maurol](https://github.com/MauroLuzzatto)\n\n",
"bugtrack_url": null,
"license": "MIT license",
"summary": "explainy is a library for generating explanations for machine learning models in Python. It uses methods from Machine Learning Explainability and provides a standardized API to create feature importance explanations for samples. The explanations are generated in the form of plots and text.",
"version": "0.2.12",
"project_urls": {
"Homepage": "https://github.com/MauroLuzzatto/explainy"
},
"split_keywords": [
"explainy"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "e84fea46fa92bbdc4e3b66b4f99af7b6de122e48a9bc1ea07539906c81e8fea5",
"md5": "42c919374db6c2531d15a66b1907b799",
"sha256": "04357526c810330f3f215fb5f530cd86898e1d9f6fd26400fdbcc7fef5bd3f44"
},
"downloads": -1,
"filename": "explainy-0.2.12-py2.py3-none-any.whl",
"has_sig": false,
"md5_digest": "42c919374db6c2531d15a66b1907b799",
"packagetype": "bdist_wheel",
"python_version": "py2.py3",
"requires_python": ">=3.9",
"size": 28266,
"upload_time": "2024-10-27T17:46:05",
"upload_time_iso_8601": "2024-10-27T17:46:05.979509Z",
"url": "https://files.pythonhosted.org/packages/e8/4f/ea46fa92bbdc4e3b66b4f99af7b6de122e48a9bc1ea07539906c81e8fea5/explainy-0.2.12-py2.py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "ffaa10800a79c5cb00d9927862566e37ed467bbbe1179976f36d0a73c204af1f",
"md5": "0dfa129a8e21dd10fd4eda9d6b95513e",
"sha256": "49ceaf243ebe0cd8502088b657937ff520464d9797dcab39123df377b69a2176"
},
"downloads": -1,
"filename": "explainy-0.2.12.tar.gz",
"has_sig": false,
"md5_digest": "0dfa129a8e21dd10fd4eda9d6b95513e",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 654874,
"upload_time": "2024-10-27T17:46:08",
"upload_time_iso_8601": "2024-10-27T17:46:08.534998Z",
"url": "https://files.pythonhosted.org/packages/ff/aa/10800a79c5cb00d9927862566e37ed467bbbe1179976f36d0a73c204af1f/explainy-0.2.12.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-10-27 17:46:08",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "MauroLuzzatto",
"github_project": "explainy",
"travis_ci": false,
"coveralls": true,
"github_actions": true,
"requirements": [],
"tox": true,
"lcname": "explainy"
}