<p align="center">
<img src="https://raw.githubusercontent.com/SeldonIO/alibi/master/doc/source/_static/Alibi_Explain_Logo_rgb.png" alt="Alibi Logo" width="50%">
</p>
---
[Alibi](https://docs.seldon.io/projects/alibi) is a Python library aimed at machine learning model inspection and interpretation.
The focus of the library is to provide high-quality implementations of black-box, white-box, local and global
explanation methods for classification and regression models.
* [Documentation](https://docs.seldon.io/projects/alibi/en/stable/)
If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project [alibi-detect](https://github.com/SeldonIO/alibi-detect).
<table>
<tr valign="top">
<td width="50%" >
<a href="https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html">
<br>
<b>Anchor explanations for images</b>
<br>
<br>
<img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/anchor_image.png">
</a>
</td>
<td width="50%">
<a href="https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html">
<br>
<b>Integrated Gradients for text</b>
<br>
<br>
<img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ig_text.png">
</a>
</td>
</tr>
<tr valign="top">
<td width="50%">
<a href="https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html">
<br>
<b>Counterfactual examples</b>
<br>
<br>
<img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/cf.png">
</a>
</td>
<td width="50%">
<a href="https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html">
<br>
<b>Accumulated Local Effects</b>
<br>
<br>
<img src="https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ale.png">
</a>
</td>
</tr>
</table>
## Table of Contents
* [Installation and Usage](#installation-and-usage)
* [Supported Methods](#supported-methods)
  * [Model Explanations](#model-explanations)
  * [Model Confidence](#model-confidence)
  * [Prototypes](#prototypes)
* [References and Examples](#references-and-examples)
* [Citations](#citations)
## Installation and Usage
Alibi can be installed from:
- PyPI or GitHub source (with `pip`)
- Anaconda (with `conda`/`mamba`)
### With pip
- Alibi can be installed from [PyPI](https://pypi.org/project/alibi):
```bash
pip install alibi
```
- Alternatively, the development version can be installed:
```bash
pip install git+https://github.com/SeldonIO/alibi.git
```
- To take advantage of distributed computation of explanations, install `alibi` with `ray`:
```bash
pip install "alibi[ray]"
```
- For SHAP support, install `alibi` as follows:
```bash
pip install "alibi[shap]"
```
### With conda
To install from [conda-forge](https://conda-forge.org/) it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/),
which can be installed into the *base* conda environment with:
```bash
conda install mamba -n base -c conda-forge
```
- For the standard Alibi install:
```bash
mamba install -c conda-forge alibi
```
- For distributed computing support:
```bash
mamba install -c conda-forge alibi ray
```
- For SHAP support:
```bash
mamba install -c conda-forge alibi shap
```
### Usage
The Alibi explanation API takes inspiration from `scikit-learn`, consisting of distinct initialize,
fit, and explain steps. We will use the [AnchorTabular](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)
explainer to illustrate the API:
```python
from alibi.explainers import AnchorTabular
# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)
# explain an instance
explanation = explainer.explain(x)
```
The explanation returned is an `Explanation` object with attributes `meta` and `data`. `meta` is a dictionary
containing the explainer metadata and any hyperparameters and `data` is a dictionary containing everything
related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed
via `explanation.data['anchor']` (or `explanation.anchor`). The exact fields available vary
from method to method, so we encourage the reader to become familiar with the
[types of methods supported](https://docs.seldon.io/projects/alibi/en/stable/overview/algorithms.html).
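The `meta`/`data` access pattern can be illustrated with a minimal stand-in class (the values below are hypothetical, not output from a real explainer; Alibi's actual `Explanation` object carries additional method-specific fields):

```python
# Minimal stand-in mimicking the Explanation object's meta/data layout.
# All values here are illustrative only.

class Explanation:
    def __init__(self, meta, data):
        self.meta = meta
        self.data = data

    def __getattr__(self, name):
        # convenience: explanation.anchor falls back to explanation.data['anchor']
        try:
            return self.data[name]
        except KeyError:
            raise AttributeError(name)

explanation = Explanation(
    meta={"name": "AnchorTabular", "params": {"seed": 0}},
    data={"anchor": ["Age > 37", "Sex = Male"], "precision": 0.97},
)

print(explanation.meta["name"])    # explainer metadata
print(explanation.data["anchor"])  # explicit data dictionary access
print(explanation.anchor)          # equivalent convenience attribute access
```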
## Supported Methods
The following tables summarize the possible use cases for each method.
### Model Explanations
| Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |
|:-------------------------------------------------------------------------------------------------------------|:------------:|:---------------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|
| [ALE](https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html) | BB | global | ✔ | ✔ | ✔ | | | | | |
| [Partial Dependence](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependence.html) | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| [PD Variance](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependenceVariance.html) | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| [Permutation Importance](https://docs.seldon.io/projects/alibi/en/stable/methods/PermutationImportance.html) | BB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| [Anchors](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html) | BB | local | ✔ | | ✔ | ✔ | ✔ | ✔ | For Tabular | |
| [CEM](https://docs.seldon.io/projects/alibi/en/stable/methods/CEM.html) | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | Optional | |
| [Counterfactuals](https://docs.seldon.io/projects/alibi/en/stable/methods/CF.html) | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | No | |
| [Prototype Counterfactuals](https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html) | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | ✔ | Optional | |
| [Counterfactuals with RL](https://docs.seldon.io/projects/alibi/en/stable/methods/CFRL.html) | BB | local | ✔ | | ✔ | | ✔ | ✔ | ✔ | |
| [Integrated Gradients](https://docs.seldon.io/projects/alibi/en/stable/methods/IntegratedGradients.html) | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | Optional | |
| [Kernel SHAP](https://docs.seldon.io/projects/alibi/en/stable/methods/KernelSHAP.html) | BB | local <br></br>global | ✔ | ✔ | ✔ | | | ✔ | ✔ | ✔ |
| [Tree SHAP](https://docs.seldon.io/projects/alibi/en/stable/methods/TreeSHAP.html) | WB | local <br></br>global | ✔ | ✔ | ✔ | | | ✔ | Optional | |
| [Similarity explanations](https://docs.seldon.io/projects/alibi/en/stable/methods/Similarity.html) | WB | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
### Model Confidence
These algorithms provide **instance-specific** scores measuring the model's confidence in a
particular prediction.
|Method|Models|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set required|
|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|
|[Trust Scores](https://docs.seldon.io/projects/alibi/en/stable/methods/TrustScores.html)|BB|✔| |✔|✔(1)|✔(2)| |Yes|
|[Linearity Measure](https://docs.seldon.io/projects/alibi/en/stable/methods/LinearityMeasure.html)|BB|✔|✔|✔| |✔| |Optional|
Key:
- **BB** - black-box (only require a prediction function)
- **BB\*** - black-box but assume model is differentiable
- **WB** - requires white-box model access. There may be limitations on models supported
- **TF/Keras** - TensorFlow models via the Keras API
- **Local** - instance specific explanation, why was this prediction made?
- **Global** - explains the model with respect to a set of instances
- **(1)** - depending on model
- **(2)** - may require dimensionality reduction
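The trust-score idea can be sketched in a few lines of NumPy: compare the distance from a test instance to the nearest training point of the predicted class against the distance to the nearest point of any other class. This is a simplified illustration only; Alibi's `TrustScore` additionally filters outliers and uses efficient nearest-neighbour search.

```python
import numpy as np

def trust_score(X_train, y_train, x, predicted_class):
    """Simplified trust score: ratio of the distance to the closest
    training point of any *other* class over the distance to the
    closest point of the *predicted* class. Higher means the
    prediction agrees better with the training data."""
    dists = np.linalg.norm(X_train - x, axis=1)
    d_same = dists[y_train == predicted_class].min()
    d_other = dists[y_train != predicted_class].min()
    return d_other / d_same

# toy two-class dataset
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])

x_test = np.array([0.05, 0.0])
print(trust_score(X, y, x_test, predicted_class=0))  # > 1: close to class 0
print(trust_score(X, y, x_test, predicted_class=1))  # < 1: far from class 1
```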
### Prototypes
These algorithms provide a **distilled** view of the dataset and help construct an **interpretable** 1-KNN classifier.
|Method|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set labels|
|:-----|:-------------|:---------|:------|:---|:-----|:-------------------|:---------------|
|[ProtoSelect](https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html)|✔| |✔|✔|✔|✔| Optional |
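The prototype-based classifier can be sketched with plain NumPy: each instance receives the label of its nearest prototype. Here the two prototypes are hand-picked purely for illustration; ProtoSelect's contribution is choosing a small, representative prototype set automatically.

```python
import numpy as np

def knn1_predict(prototypes, proto_labels, X):
    """Assign each row of X the label of its nearest prototype (1-NN)."""
    # pairwise Euclidean distances, shape (n_samples, n_prototypes)
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=-1)
    return proto_labels[d.argmin(axis=1)]

# two hand-picked prototypes, one per class (illustrative only)
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
proto_labels = np.array([0, 1])

X = np.array([[0.2, 0.1], [0.9, 1.2], [0.4, 0.4]])
print(knn1_predict(prototypes, proto_labels, X))  # → [0 1 0]
```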
## References and Examples
- Accumulated Local Effects (ALE, [Apley and Zhu, 2016](https://arxiv.org/abs/1612.08468))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html)
- Examples:
[California housing dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/ale_regression_california.html),
[Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/ale_classification.html)
- Partial Dependence ([J.H. Friedman, 2001](https://projecteuclid.org/journals/annals-of-statistics/volume-29/issue-5/Greedy-function-approximation-A-gradient-boostingmachine/10.1214/aos/1013203451.full))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependence.html)
- Examples:
[Bike rental](https://docs.seldon.io/projects/alibi/en/stable/examples/pdp_regression_bike.html)
- Partial Dependence Variance ([Greenwell et al., 2018](https://arxiv.org/abs/1805.04755))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependenceVariance.html)
- Examples:
[Friedman’s regression problem](https://docs.seldon.io/projects/alibi/en/stable/examples/pd_variance_regression_friedman.html)
- Permutation Importance ([Breiman, 2001](https://link.springer.com/article/10.1023/A:1010933404324); [Fisher et al., 2018](https://arxiv.org/abs/1801.01489))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PermutationImportance.html)
- Examples:
[Who's Going to Leave Next?](https://docs.seldon.io/projects/alibi/en/stable/examples/permutation_importance_classification_leave.html)
- Anchor explanations ([Ribeiro et al., 2018](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)
- Examples:
[income prediction](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_tabular_adult.html),
[Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_tabular_iris.html),
[movie sentiment classification](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_text_movie.html),
[ImageNet](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html),
[fashion MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_fashion_mnist.html)
- Contrastive Explanation Method (CEM, [Dhurandhar et al., 2018](https://papers.nips.cc/paper/7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CEM.html)
- Examples: [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cem_mnist.html),
[Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/cem_iris.html)
- Counterfactual Explanations (extension of
[Wachter et al., 2017](https://arxiv.org/abs/1711.00399))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CF.html)
- Examples:
[MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cf_mnist.html)
- Counterfactual Explanations Guided by Prototypes ([Van Looveren and Klaise, 2019](https://arxiv.org/abs/1907.02584))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html)
- Examples:
[MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_mnist.html),
[California housing dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_housing.html),
[Adult income (one-hot)](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_cat_adult_ohe.html),
[Adult income (ordinal)](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_cat_adult_ord.html)
- Model-agnostic Counterfactual Explanations via RL ([Samoilescu et al., 2021](https://arxiv.org/abs/2106.02597))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CFRL.html)
- Examples:
[MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cfrl_mnist.html),
[Adult income](https://docs.seldon.io/projects/alibi/en/stable/examples/cfrl_adult.html)
- Integrated Gradients ([Sundararajan et al., 2017](https://arxiv.org/abs/1703.01365))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/IntegratedGradients.html)
- Examples:
[MNIST example](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_mnist.html),
[Imagenet example](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imagenet.html),
[IMDB example](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html)
- Kernel Shapley Additive Explanations ([Lundberg et al., 2017](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/KernelSHAP.html)
- Examples:
[SVM with continuous data](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_wine_intro.html),
[multinomial logistic regression with continuous data](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_wine_lr.html),
[handling categorical variables](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_adult_lr.html)
- Tree Shapley Additive Explanations ([Lundberg et al., 2020](https://www.nature.com/articles/s42256-019-0138-9))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/TreeSHAP.html)
- Examples:
[Interventional (adult income, xgboost)](https://docs.seldon.io/projects/alibi/en/stable/examples/interventional_tree_shap_adult_xgb.html),
[Path-dependent (adult income, xgboost)](https://docs.seldon.io/projects/alibi/en/stable/examples/path_dependent_tree_shap_adult_xgb.html)
- Trust Scores ([Jiang et al., 2018](https://arxiv.org/abs/1805.11783))
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/TrustScores.html)
- Examples:
[MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/trustscore_mnist.html),
[Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/trustscore_iris.html)
- Linearity Measure
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/LinearityMeasure.html)
- Examples:
[Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/linearity_measure_iris.html),
[fashion MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/linearity_measure_fashion_mnist.html)
- ProtoSelect
- [Documentation](https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html)
- Examples:
[Adult Census & CIFAR10](https://docs.seldon.io/projects/alibi/en/latest/examples/protoselect_adult_cifar10.html)
- Similarity explanations
- [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/Similarity.html)
- Examples:
[20 news groups dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_20ng.html),
[ImageNet dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_imagenet.html),
[MNIST dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_mnist.html)
## Citations
If you use Alibi in your research, please consider citing it.
BibTeX entry:
```bibtex
@article{JMLR:v22:21-0017,
author = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
title = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
journal = {Journal of Machine Learning Research},
year = {2021},
volume = {22},
number = {181},
pages = {1-7},
url = {http://jmlr.org/papers/v22/21-0017.html}
}
```
Raw data
{
"_id": null,
"home_page": "https://github.com/SeldonIO/alibi",
"name": "alibi",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": null,
"author": "Seldon Technologies Ltd.",
"author_email": "hello@seldon.io",
"download_url": "https://files.pythonhosted.org/packages/f6/12/5281c82e19aa2b2d705c45cee415860e108dd0e7f21bfaaab4df045ba3e0/alibi-0.9.6.tar.gz",
"platform": null,
"description": "<p align=\"center\">\n <img src=\"https://raw.githubusercontent.com/SeldonIO/alibi/master/doc/source/_static/Alibi_Explain_Logo_rgb.png\" alt=\"Alibi Logo\" width=\"50%\">\n</p>\n\n<!--- BADGES: START --->\n\n[][#build-status]\n[][#docs-package]\n[](https://codecov.io/gh/SeldonIO/alibi)\n[][#pypi-package]\n[][#pypi-package]\n[][#conda-forge-package]\n[][#github-license]\n[][#slack-channel]\n\n<!--- Hide platform for now as platform agnostic --->\n<!--- [][#conda-forge-package]--->\n\n[#github-license]: https://github.com/SeldonIO/alibi/blob/master/LICENSE\n[#pypi-package]: https://pypi.org/project/alibi/\n[#conda-forge-package]: https://anaconda.org/conda-forge/alibi\n[#docs-package]: https://docs.seldon.io/projects/alibi/en/stable/\n[#build-status]: https://github.com/SeldonIO/alibi/actions?query=workflow%3A%22CI%22\n[#slack-channel]: https://join.slack.com/t/seldondev/shared_invite/zt-vejg6ttd-ksZiQs3O_HOtPQsen_labg\n<!--- BADGES: END --->\n---\n\n[Alibi](https://docs.seldon.io/projects/alibi) is a Python library aimed at machine learning model inspection and interpretation.\nThe focus of the library is to provide high-quality implementations of black-box, white-box, local and global\nexplanation methods for classification and regression models.\n* [Documentation](https://docs.seldon.io/projects/alibi/en/stable/)\n\nIf you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project [alibi-detect](https://github.com/SeldonIO/alibi-detect).\n\n<table>\n <tr valign=\"top\">\n <td width=\"50%\" >\n <a href=\"https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html\">\n <br>\n <b>Anchor explanations for images</b>\n <br>\n <br>\n <img src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/anchor_image.png\">\n </a>\n </td>\n <td width=\"50%\">\n <a href=\"https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html\">\n <br>\n 
<b>Integrated Gradients for text</b>\n <br>\n <br>\n <img src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ig_text.png\">\n </a>\n </td>\n </tr>\n <tr valign=\"top\">\n <td width=\"50%\">\n <a href=\"https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html\">\n <br>\n <b>Counterfactual examples</b>\n <br>\n <br>\n <img src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/cf.png\">\n </a>\n </td>\n <td width=\"50%\">\n <a href=\"https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html\">\n <br>\n <b>Accumulated Local Effects</b>\n <br>\n <br>\n <img src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ale.png\">\n </a>\n </td>\n </tr>\n</table>\n\n## Table of Contents\n\n* [Installation and Usage](#installation-and-usage)\n* [Supported Methods](#supported-methods)\n * [Model Explanations](#model-explanations)\n * [Model Confidence](#model-confidence)\n * [Prototypes](#prototypes)\n * [References and Examples](#references-and-examples)\n* [Citations](#citations)\n\n## Installation and Usage\nAlibi can be installed from:\n\n- PyPI or GitHub source (with `pip`)\n- Anaconda (with `conda`/`mamba`)\n\n### With pip\n\n- Alibi can be installed from [PyPI](https://pypi.org/project/alibi):\n\n ```bash\n pip install alibi\n ```\n \n- Alternatively, the development version can be installed:\n ```bash\n pip install git+https://github.com/SeldonIO/alibi.git \n ```\n\n- To take advantage of distributed computation of explanations, install `alibi` with `ray`:\n ```bash\n pip install alibi[ray]\n ```\n\n- For SHAP support, install `alibi` as follows:\n ```bash\n pip install alibi[shap]\n ```\n\n### With conda \n\nTo install from [conda-forge](https://conda-forge.org/) it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/), \nwhich can be installed to the *base* conda enviroment with:\n\n```bash\nconda install mamba -n base -c conda-forge\n```\n\n- For the standard Alibi install:\n ```bash\n 
mamba install -c conda-forge alibi\n ```\n\n- For distributed computing support:\n ```bash\n mamba install -c conda-forge alibi ray\n ```\n\n- For SHAP support:\n ```bash\n mamba install -c conda-forge alibi shap\n ```\n\n### Usage\nThe alibi explanation API takes inspiration from `scikit-learn`, consisting of distinct initialize,\nfit and explain steps. We will use the [AnchorTabular](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)\nexplainer to illustrate the API:\n\n```python\nfrom alibi.explainers import AnchorTabular\n\n# initialize and fit explainer by passing a prediction function and any other required arguments\nexplainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)\nexplainer.fit(X_train)\n\n# explain an instance\nexplanation = explainer.explain(x)\n```\n\nThe explanation returned is an `Explanation` object with attributes `meta` and `data`. `meta` is a dictionary\ncontaining the explainer metadata and any hyperparameters and `data` is a dictionary containing everything\nrelated to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed\nvia `explanation.data['anchor']` (or `explanation.anchor`). 
The exact details of available fields varies\nfrom method to method so we encourage the reader to become familiar with the\n[types of methods supported](https://docs.seldon.io/projects/alibi/en/stable/overview/algorithms.html).\n \n\n## Supported Methods\nThe following tables summarize the possible use cases for each method.\n\n### Model Explanations\n| Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |\n|:-------------------------------------------------------------------------------------------------------------|:------------:|:---------------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|\n| [ALE](https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html) | BB | global | \u2714 | \u2714 | \u2714 | | | | | |\n| [Partial Dependence](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependence.html) | BB WB | global | \u2714 | \u2714 | \u2714 | | | \u2714 | | |\n| [PD Variance](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependenceVariance.html) | BB WB | global | \u2714 | \u2714 | \u2714 | | | \u2714 | | |\n| [Permutation Importance](https://docs.seldon.io/projects/alibi/en/stable/methods/PermutationImportance.html) | BB | global | \u2714 | \u2714 | \u2714 | | | \u2714 | | |\n| [Anchors](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html) | BB | local | \u2714 | | \u2714 | \u2714 | \u2714 | \u2714 | For Tabular | |\n| [CEM](https://docs.seldon.io/projects/alibi/en/stable/methods/CEM.html) | BB* TF/Keras | local | \u2714 | | \u2714 | | \u2714 | | Optional | |\n| [Counterfactuals](https://docs.seldon.io/projects/alibi/en/stable/methods/CF.html) | BB* TF/Keras | local | \u2714 | | \u2714 | | \u2714 | | No | |\n| [Prototype Counterfactuals](https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html) | BB* TF/Keras | local | \u2714 | | 
\u2714 | | \u2714 | \u2714 | Optional | |\n| [Counterfactuals with RL](https://docs.seldon.io/projects/alibi/en/stable/methods/CFRL.html) | BB | local | \u2714 | | \u2714 | | \u2714 | \u2714 | \u2714 | |\n| [Integrated Gradients](https://docs.seldon.io/projects/alibi/en/stable/methods/IntegratedGradients.html) | TF/Keras | local | \u2714 | \u2714 | \u2714 | \u2714 | \u2714 | \u2714 | Optional | |\n| [Kernel SHAP](https://docs.seldon.io/projects/alibi/en/stable/methods/KernelSHAP.html) | BB | local <br></br>global | \u2714 | \u2714 | \u2714 | | | \u2714 | \u2714 | \u2714 |\n| [Tree SHAP](https://docs.seldon.io/projects/alibi/en/stable/methods/TreeSHAP.html) | WB | local <br></br>global | \u2714 | \u2714 | \u2714 | | | \u2714 | Optional | |\n| [Similarity explanations](https://docs.seldon.io/projects/alibi/en/stable/methods/Similarity.html) | WB | local | \u2714 | \u2714 | \u2714 | \u2714 | \u2714 | \u2714 | \u2714 | |\n\n### Model Confidence\nThese algorithms provide **instance-specific** scores measuring the model confidence for making a\nparticular prediction.\n\n|Method|Models|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set required|\n|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|\n|[Trust Scores](https://docs.seldon.io/projects/alibi/en/stable/methods/TrustScores.html)|BB|\u2714| |\u2714|\u2714(1)|\u2714(2)| |Yes|\n|[Linearity Measure](https://docs.seldon.io/projects/alibi/en/stable/methods/LinearityMeasure.html)|BB|\u2714|\u2714|\u2714| |\u2714| |Optional|\n\nKey:\n - **BB** - black-box (only require a prediction function)\n - **BB\\*** - black-box but assume model is differentiable\n - **WB** - requires white-box model access. 
There may be limitations on models supported\n - **TF/Keras** - TensorFlow models via the Keras API\n - **Local** - instance specific explanation, why was this prediction made?\n - **Global** - explains the model with respect to a set of instances\n - **(1)** - depending on model\n - **(2)** - may require dimensionality reduction\n\n### Prototypes\nThese algorithms provide a **distilled** view of the dataset and help construct a 1-KNN **interpretable** classifier.\n\n|Method|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set labels|\n|:-----|:-------------|:---------|:------|:---|:-----|:-------------------|:---------------|\n|[ProtoSelect](https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html)|\u2714| |\u2714|\u2714|\u2714|\u2714| Optional |\n\n\n## References and Examples\n- Accumulated Local Effects (ALE, [Apley and Zhu, 2016](https://arxiv.org/abs/1612.08468))\n - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html)\n - Examples:\n [California housing dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/ale_regression_california.html),\n [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/ale_classification.html)\n\n- Partial Dependence ([J.H. 
Friedman, 2001](https://projecteuclid.org/journals/annals-of-statistics/volume-29/issue-5/Greedy-function-approximation-A-gradient-boostingmachine/10.1214/aos/1013203451.full))\n - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependence.html)\n - Examples:\n [Bike rental](https://docs.seldon.io/projects/alibi/en/stable/examples/pdp_regression_bike.html)\n\n- Partial Dependence Variance([Greenwell et al., 2018](https://arxiv.org/abs/1805.04755))\n - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependenceVariance.html)\n - Examples:\n [Friedman\u2019s regression problem](https://docs.seldon.io/projects/alibi/en/stable/examples/pd_variance_regression_friedman.html)\n\n- Permutation Importance([Breiman, 2001](https://link.springer.com/article/10.1023/A:1010933404324); [Fisher et al., 2018](https://arxiv.org/abs/1801.01489))\n - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PermutationImportance.html)\n - Examples:\n [Who's Going to Leave Next?](https://docs.seldon.io/projects/alibi/en/stable/examples/permutation_importance_classification_leave.html)\n\n- Anchor explanations ([Ribeiro et al., 2018](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf))\n - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)\n - Examples:\n [income prediction](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_tabular_adult.html),\n [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_tabular_iris.html),\n [movie sentiment classification](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_text_movie.html),\n [ImageNet](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html),\n [fashion MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_fashion_mnist.html)\n\n- Contrastive Explanation Method (CEM, [Dhurandhar et al., 
2018](https://papers.nips.cc/paper/7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CEM.html)
  - Examples:
    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cem_mnist.html),
    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/cem_iris.html)

- Counterfactual Explanations (extension of
  [Wachter et al., 2017](https://arxiv.org/abs/1711.00399))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CF.html)
  - Examples:
    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cf_mnist.html)

- Counterfactual Explanations Guided by Prototypes ([Van Looveren and Klaise, 2019](https://arxiv.org/abs/1907.02584))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html)
  - Examples:
    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_mnist.html),
    [California housing dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_housing.html),
    [Adult income (one-hot)](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_cat_adult_ohe.html),
    [Adult income (ordinal)](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_cat_adult_ord.html)

- Model-agnostic Counterfactual Explanations via RL ([Samoilescu et al., 2021](https://arxiv.org/abs/2106.02597))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CFRL.html)
  - Examples:
    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cfrl_mnist.html),
    [Adult income](https://docs.seldon.io/projects/alibi/en/stable/examples/cfrl_adult.html)

- Integrated Gradients ([Sundararajan et al., 2017](https://arxiv.org/abs/1703.01365))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/IntegratedGradients.html)
  - Examples:
    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_mnist.html),
    [ImageNet](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imagenet.html),
    [IMDB](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html)

- Kernel Shapley Additive Explanations ([Lundberg et al., 2017](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/KernelSHAP.html)
  - Examples:
    [SVM with continuous data](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_wine_intro.html),
    [multinomial logistic regression with continuous data](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_wine_lr.html),
    [handling categorical variables](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_adult_lr.html)

- Tree Shapley Additive Explanations ([Lundberg et al., 2020](https://www.nature.com/articles/s42256-019-0138-9))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/TreeSHAP.html)
  - Examples:
    [Interventional (adult income, xgboost)](https://docs.seldon.io/projects/alibi/en/stable/examples/interventional_tree_shap_adult_xgb.html),
    [Path-dependent (adult income, xgboost)](https://docs.seldon.io/projects/alibi/en/stable/examples/path_dependent_tree_shap_adult_xgb.html)

- Trust Scores ([Jiang et al., 2018](https://arxiv.org/abs/1805.11783))
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/TrustScores.html)
  - Examples:
    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/trustscore_mnist.html),
    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/trustscore_iris.html)

- Linearity Measure
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/LinearityMeasure.html)
  - Examples:
    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/linearity_measure_iris.html),
    [Fashion MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/linearity_measure_fashion_mnist.html)

- ProtoSelect
  - [Documentation](https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html)
  - Examples:
    [Adult Census & CIFAR10](https://docs.seldon.io/projects/alibi/en/latest/examples/protoselect_adult_cifar10.html)

- Similarity explanations
  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/Similarity.html)
  - Examples:
    [20 news groups dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_20ng.html),
    [ImageNet dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_imagenet.html),
    [MNIST dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_mnist.html)

## Citations
If you use alibi in your research, please consider citing it.

BibTeX entry:

```
@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/21-0017.html}
}
```
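As a library-independent illustration of one of the methods listed above, the core idea of Integrated Gradients can be sketched in plain NumPy: attribute each input feature by integrating the model's gradient along a straight path from a baseline to the input. The function names and the toy quadratic model below are purely illustrative and are not part of the alibi API; see the IntegratedGradients documentation linked above for actual usage.

```python
import numpy as np

# Integrated Gradients (Sundararajan et al., 2017), concept sketch:
#   IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a*(x - x')) da
# approximated with a midpoint Riemann sum over `steps` interpolation points.
def integrated_gradients(grad_fn, x, baseline, steps=200):
    alphas = (np.arange(steps) + 0.5) / steps            # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)   # (steps, n_features)
    grads = np.stack([grad_fn(p) for p in path])         # gradient at each point
    return (x - baseline) * grads.mean(axis=0)

# Toy model F(x) = sum(x**2), with analytic gradient 2*x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to F(x) - F(baseline).
print(attr, attr.sum(), f(x) - f(baseline))  # attr = [1. 4. 9.], both sums 14.0
```

The printed check shows the completeness property that makes the method attractive: the per-feature attributions add up exactly to the model's output change relative to the baseline.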
"bugtrack_url": null,
"license": "Business Source License 1.1",
"summary": "Algorithms for monitoring and explaining machine learning models",
"version": "0.9.6",
"project_urls": {
"Homepage": "https://github.com/SeldonIO/alibi"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "bc00bc8caafbabf675a15f08e8cd6d86eedb5949e2cec5ec73157d71f62abd79",
"md5": "836d8e87775fe18df9620747cc492181",
"sha256": "206416e297f927f6028f3ffcc286b16227a11b26ce99581f533ecf9efdc86a14"
},
"downloads": -1,
"filename": "alibi-0.9.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "836d8e87775fe18df9620747cc492181",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 522135,
"upload_time": "2024-04-18T15:29:05",
"upload_time_iso_8601": "2024-04-18T15:29:05.326364Z",
"url": "https://files.pythonhosted.org/packages/bc/00/bc8caafbabf675a15f08e8cd6d86eedb5949e2cec5ec73157d71f62abd79/alibi-0.9.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f6125281c82e19aa2b2d705c45cee415860e108dd0e7f21bfaaab4df045ba3e0",
"md5": "300ff69b54d68220b5e338c2d68f2fd1",
"sha256": "7a5075baf62b693c4489752281c58814efca1dcd08cfe353242da9a9da9ed6c1"
},
"downloads": -1,
"filename": "alibi-0.9.6.tar.gz",
"has_sig": false,
"md5_digest": "300ff69b54d68220b5e338c2d68f2fd1",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 469089,
"upload_time": "2024-04-18T15:29:15",
"upload_time_iso_8601": "2024-04-18T15:29:15.403150Z",
"url": "https://files.pythonhosted.org/packages/f6/12/5281c82e19aa2b2d705c45cee415860e108dd0e7f21bfaaab4df045ba3e0/alibi-0.9.6.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-04-18 15:29:15",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "SeldonIO",
"github_project": "alibi",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "alibi"
}