shap

Name: shap
Version: 0.46.0
Summary: A unified approach to explain the output of any machine learning model.
Upload time: 2024-06-27 10:17:22
Home page: None
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.9
License: MIT License
Requirements: No requirements were recorded.
            

<p align="center">
  <img src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/shap_header.svg" width="800" />
</p>

---
[![PyPI](https://img.shields.io/pypi/v/shap)](https://pypi.org/project/shap/)
[![Conda](https://img.shields.io/conda/vn/conda-forge/shap)](https://anaconda.org/conda-forge/shap)
![License](https://img.shields.io/github/license/shap/shap)
![Tests](https://github.com/shap/shap/actions/workflows/run_tests.yml/badge.svg)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/shap/shap/master)
[![Documentation Status](https://readthedocs.org/projects/shap/badge/?version=latest)](https://shap.readthedocs.io/en/latest/?badge=latest)
![Downloads](https://img.shields.io/pypi/dm/shap)
[![PyPI pyversions](https://img.shields.io/pypi/pyversions/shap)](https://pypi.org/pypi/shap/)


**SHAP (SHapley Additive exPlanations)** is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see [papers](#citations) for details and citations).


## Install

SHAP can be installed from either [PyPI](https://pypi.org/project/shap) or [conda-forge](https://anaconda.org/conda-forge/shap):

<pre>
pip install shap
<i>or</i>
conda install -c conda-forge shap
</pre>

## Tree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models)

While SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods (see our [Nature MI paper](https://rdcu.be/b0z70)). Fast C++ implementations are supported for *XGBoost*, *LightGBM*, *CatBoost*, *scikit-learn* and *pyspark* tree models:

```python
import xgboost
import shap

# train an XGBoost model
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)

# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = shap.Explainer(model)
shap_values = explainer(X)

# visualize the first prediction's explanation
shap.plots.waterfall(shap_values[0])
```

<p align="center">
  <img width="616" src="./docs/artwork/california_waterfall.png" />
</p>

The explanation above shows the features that each contribute to pushing the model output from the base value (the average model output over the training dataset we passed) to the final prediction. Features pushing the prediction higher are shown in red, and those pushing it lower are shown in blue. Another way to visualize the same explanation is with a force plot (force plots were introduced in our [Nature BME paper](https://rdcu.be/baVbR)):

```python
# visualize the first prediction's explanation with a force plot
shap.plots.force(shap_values[0])
```

<p align="center">
  <img width="811" src="./docs/artwork/california_instance.png" />
</p>

If we take many force plot explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset (in the notebook this plot is interactive):

```python
# visualize all the training set predictions
shap.plots.force(shap_values[:500])
```

<p align="center">
  <img width="811" src="./docs/artwork/california_dataset.png" />
</p>

To understand how a single feature affects the output of the model, we can plot the SHAP value of that feature against the value of the feature for all the examples in a dataset. Since SHAP values represent a feature's responsibility for a change in the model output, the plot below represents the change in predicted house price as the latitude changes. Vertical dispersion at a single value of latitude represents interaction effects with other features. To help reveal these interactions we can color by another feature. If we pass the whole explanation tensor to the `color` argument, the scatter plot picks the best feature to color by; in this case it picks longitude.

```python
# create a dependence scatter plot to show the effect of a single feature across the whole dataset
shap.plots.scatter(shap_values[:, "Latitude"], color=shap_values)
```

<p align="center">
  <img width="544" src="./docs/artwork/california_scatter.png" />
</p>


To get an overview of which features are most important for a model, we can plot the SHAP values of every feature for every sample. The plot below sorts features by the sum of SHAP value magnitudes over all samples and uses SHAP values to show the distribution of the impacts each feature has on the model output. The color represents the feature value (red high, blue low). This reveals, for example, that a higher median income improves the predicted home price.

```python
# summarize the effects of all the features
shap.plots.beeswarm(shap_values)
```

<p align="center">
  <img width="583" src="./docs/artwork/california_beeswarm.png" />
</p>

We can also just take the mean absolute value of the SHAP values for each feature to get a standard bar plot (produces stacked bars for multi-class outputs):

```python
shap.plots.bar(shap_values)
```

<p align="center">
  <img width="570" src="./docs/artwork/california_global_bar.png" />
</p>

## Natural language example (transformers)

SHAP has specific support for natural language models like those in the Hugging Face transformers library. By adding coalitional rules to traditional Shapley values, we can form games that explain large modern NLP models using very few function evaluations. Using this functionality is as simple as passing a supported transformers pipeline to SHAP:

```python
import transformers
import shap

# load a transformers pipeline model
model = transformers.pipeline('sentiment-analysis', return_all_scores=True)

# explain the model on a sample input
explainer = shap.Explainer(model)
shap_values = explainer(["What a great movie! ...if you have no taste."])

# visualize the first prediction's explanation for the POSITIVE output class
shap.plots.text(shap_values[0, :, "POSITIVE"])
```

<p align="center">
  <img width="811" src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/sentiment_analysis_plot.png" />
</p>

## Deep learning example with DeepExplainer (TensorFlow/Keras models)

Deep SHAP is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with [DeepLIFT](https://arxiv.org/abs/1704.02685) described in the SHAP NIPS paper. The implementation here differs from the original DeepLIFT by using a distribution of background samples instead of a single reference value, and by using Shapley equations to linearize components such as max, softmax, products, divisions, etc. Note that some of these enhancements have since been integrated into DeepLIFT as well. TensorFlow models and Keras models using the TensorFlow backend are supported (there is also preliminary support for PyTorch):

```python
# ...include code from https://github.com/keras-team/keras/blob/master/examples/demo_mnist_convnet.py

import shap
import numpy as np

# select a set of background examples to take an expectation over
background = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]

# explain predictions of the model on four images
e = shap.DeepExplainer(model, background)
# ...or pass tensors directly
# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)
shap_values = e.shap_values(x_test[1:5])

# plot the feature attributions
shap.image_plot(shap_values, -x_test[1:5])
```

<p align="center">
  <img width="820" src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/mnist_image_plot.png" />
</p>

The plot above explains ten outputs (digits 0-9) for four different images. Red pixels increase the model's output while blue pixels decrease the output. The input images are shown on the left, and as nearly transparent grayscale backings behind each of the explanations. The sum of the SHAP values equals the difference between the expected model output (averaged over the background dataset) and the current model output. Note that for the 'zero' image the blank middle is important, while for the 'four' image the lack of a connection on top makes it a four instead of a nine.


## Deep learning example with GradientExplainer (TensorFlow/Keras/PyTorch models)

Expected gradients combines ideas from [Integrated Gradients](https://arxiv.org/abs/1703.01365), SHAP, and [SmoothGrad](https://arxiv.org/abs/1706.03825) into a single expected value equation. This allows an entire dataset to be used as the background distribution (as opposed to a single reference value) and allows local smoothing. If we approximate the model with a linear function between each background data sample and the current input to be explained, and we assume the input features are independent, then expected gradients will compute approximate SHAP values. In the example below we explain how the 7th intermediate layer of the VGG16 ImageNet model impacts the output probabilities.

```python
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
import keras.backend as K
import numpy as np
import json
import shap

# load pre-trained model and choose two images to explain
model = VGG16(weights='imagenet', include_top=True)
X,y = shap.datasets.imagenet50()
to_explain = X[[39,41]]

# load the ImageNet class names
url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
fname = shap.datasets.cache(url)
with open(fname) as f:
    class_names = json.load(f)

# explain how the input to the 7th layer of the model explains the top two classes
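# note: map2layer below relies on the TF1-style session API (K.get_session), so this
# example assumes a TensorFlow 1.x / graph-mode Keras backend rather than TF 2.x eager mode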
def map2layer(x, layer):
    feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))
    return K.get_session().run(model.layers[layer].input, feed_dict)
e = shap.GradientExplainer(
    (model.layers[7].input, model.layers[-1].output),
    map2layer(X, 7),
    local_smoothing=0 # std dev of smoothing noise
)
shap_values,indexes = e.shap_values(map2layer(to_explain, 7), ranked_outputs=2)

# get the names for the classes
index_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)

# plot the explanations
shap.image_plot(shap_values, to_explain, index_names)
```

<p align="center">
  <img width="500" src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/gradient_imagenet_plot.png" />
</p>

Predictions for two input images are explained in the plot above. Red pixels represent positive SHAP values that increase the probability of the class, while blue pixels represent negative SHAP values that reduce the probability of the class. By using `ranked_outputs=2` we explain only the two most likely classes for each input (this spares us from explaining all 1,000 classes).

## Model agnostic example with KernelExplainer (explains any function)

Kernel SHAP uses a specially-weighted local linear regression to estimate SHAP values for any model. Below is a simple example for explaining a multi-class SVM on the classic iris dataset.

```python
import sklearn
import shap
from sklearn.model_selection import train_test_split

# print the JS visualization code to the notebook
shap.initjs()

# train a SVM classifier
X_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)
svm = sklearn.svm.SVC(kernel='rbf', probability=True)
svm.fit(X_train, Y_train)

# use Kernel SHAP to explain test set predictions
explainer = shap.KernelExplainer(svm.predict_proba, X_train, link="logit")
shap_values = explainer.shap_values(X_test, nsamples=100)

# plot the SHAP values for the Setosa output of the first instance
shap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link="logit")
```
<p align="center">
  <img width="810" src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/iris_instance.png" />
</p>

The explanation above shows four features, each contributing to push the model output from the base value (the average model output over the training dataset we passed) towards zero. If any features were pushing the class label higher, they would be shown in red.

If we take many explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset. This is exactly what we do below for all the examples in the iris test set:

```python
# plot the SHAP values for the Setosa output of all instances
shap.force_plot(explainer.expected_value[0], shap_values[0], X_test, link="logit")
```
<p align="center">
  <img width="813" src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/iris_dataset.png" />
</p>

## SHAP Interaction Values

SHAP interaction values are a generalization of SHAP values to higher-order interactions. Fast exact computation of pairwise interactions is implemented for tree models with `shap.TreeExplainer(model).shap_interaction_values(X)`. This returns a matrix for every prediction, where the main effects are on the diagonal and the interaction effects are off-diagonal. These values often reveal interesting hidden relationships, such as how the increased risk of death peaks for men at age 60 (see the NHANES notebook for details):

<p align="center">
  <img width="483" src="https://raw.githubusercontent.com/shap/shap/master/docs/artwork/nhanes_age_sex_interaction.png" />
</p>
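
As a minimal sketch of this API (reusing the California housing model from the tree ensemble example above; the 1,000-row subset is just to keep the computation fast, and passing the interaction array to `shap.summary_plot` follows the pattern used in the NHANES notebook):

```python
import xgboost
import shap

# train a tree model (same California housing setup as in the tree ensemble example)
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)

# compute pairwise SHAP interaction values on a subset;
# the result has shape (n_samples, n_features, n_features) with main effects on the diagonal
explainer = shap.TreeExplainer(model)
shap_interaction_values = explainer.shap_interaction_values(X.iloc[:1000])

# summarize the strongest pairwise interactions across the subset
shap.summary_plot(shap_interaction_values, X.iloc[:1000])
```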

## Sample notebooks

The notebooks below demonstrate different use cases for SHAP. Look inside the notebooks directory of the repository if you want to try playing with the original notebooks yourself.

### TreeExplainer

An implementation of Tree SHAP, a fast and exact algorithm to compute SHAP values for trees and ensembles of trees.

- [**NHANES survival model with XGBoost and SHAP interaction values**](https://shap.github.io/shap/notebooks/NHANES%20I%20Survival%20Model.html) - Using mortality data from 20 years of follow-up, this notebook demonstrates how to use XGBoost and `shap` to uncover complex risk factor relationships.

- [**Census income classification with LightGBM**](https://shap.github.io/shap/notebooks/tree_explainer/Census%20income%20classification%20with%20LightGBM.html) - Using the standard adult census income dataset, this notebook trains a gradient boosting tree model with LightGBM and then explains predictions using `shap`.

- [**League of Legends Win Prediction with XGBoost**](https://shap.github.io/shap/notebooks/League%20of%20Legends%20Win%20Prediction%20with%20XGBoost.html) - Using a Kaggle dataset of 180,000 ranked matches from League of Legends, we train and explain a gradient boosting tree model with XGBoost to predict whether a player will win their match.

### DeepExplainer

An implementation of Deep SHAP, a faster (but only approximate) algorithm to compute SHAP values for deep learning models that is based on connections between SHAP and the DeepLIFT algorithm.

- [**MNIST Digit classification with Keras**](https://shap.github.io/shap/notebooks/deep_explainer/Front%20Page%20DeepExplainer%20MNIST%20Example.html) - Using the MNIST handwriting recognition dataset, this notebook trains a neural network with Keras and then explains predictions using `shap`.

- [**Keras LSTM for IMDB Sentiment Classification**](https://shap.github.io/shap/notebooks/deep_explainer/Keras%20LSTM%20for%20IMDB%20Sentiment%20Classification.html) - This notebook trains an LSTM with Keras on the IMDB text sentiment analysis dataset and then explains predictions using `shap`.

### GradientExplainer

An implementation of expected gradients to approximate SHAP values for deep learning models. It is based on connections between SHAP and the Integrated Gradients algorithm. GradientExplainer is slower than DeepExplainer and makes different approximation assumptions.

- [**Explain an Intermediate Layer of VGG16 on ImageNet**](https://shap.github.io/shap/notebooks/gradient_explainer/Explain%20an%20Intermediate%20Layer%20of%20VGG16%20on%20ImageNet.html) - This notebook demonstrates how to explain the output of a pre-trained VGG16 ImageNet model using an internal convolutional layer.

### LinearExplainer

For a linear model with independent features we can analytically compute the exact SHAP values. We can also account for feature correlation if we are willing to estimate the feature covariance matrix. LinearExplainer supports both of these options.

- [**Sentiment Analysis with Logistic Regression**](https://shap.github.io/shap/notebooks/linear_explainer/Sentiment%20Analysis%20with%20Logistic%20Regression.html) - This notebook demonstrates how to explain a linear logistic regression sentiment analysis model.
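
A minimal sketch of both options is shown below (the `feature_perturbation` value is the documented LinearExplainer option name; treat this as an illustrative example rather than a canonical recipe):

```python
import sklearn.linear_model
import shap

# fit a plain linear regression on the California housing data
X, y = shap.datasets.california()
model = sklearn.linear_model.LinearRegression().fit(X, y)

# exact SHAP values under the independent-features assumption (the default)
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)

# alternatively, account for feature correlations by estimating the feature covariance matrix
explainer_corr = shap.LinearExplainer(model, X, feature_perturbation="correlation_dependent")
shap_values_corr = explainer_corr.shap_values(X)
```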

### KernelExplainer

An implementation of Kernel SHAP, a model agnostic method to estimate SHAP values for any model. Because it makes no assumptions about the model type, KernelExplainer is slower than the other, model-type-specific algorithms.

- [**Census income classification with scikit-learn**](https://shap.github.io/shap/notebooks/Census%20income%20classification%20with%20scikit-learn.html) - Using the standard adult census income dataset, this notebook trains a k-nearest neighbors classifier using scikit-learn and then explains predictions using `shap`.

- [**ImageNet VGG16 Model with Keras**](https://shap.github.io/shap/notebooks/ImageNet%20VGG16%20Model%20with%20Keras.html) - Explain the classic VGG16 convolutional neural network's predictions for an image. This works by applying the model agnostic Kernel SHAP method to a super-pixel segmented image.

- [**Iris classification**](https://shap.github.io/shap/notebooks/Iris%20classification%20with%20scikit-learn.html) - A basic demonstration using the popular iris species dataset. It explains predictions from six different models in scikit-learn using `shap`.

## Documentation notebooks

These notebooks comprehensively demonstrate how to use specific functions and objects; a minimal `shap.decision_plot` sketch follows the list.

- [`shap.decision_plot` and `shap.multioutput_decision_plot`](https://shap.github.io/shap/notebooks/plots/decision_plot.html)

- [`shap.dependence_plot`](https://shap.github.io/shap/notebooks/plots/dependence_plot.html)
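
For instance, here is a hedged `shap.decision_plot` sketch (reusing the California housing model from the examples above; the 20-row subset is only to keep the plot readable):

```python
import xgboost
import shap

# train a model and compute SHAP values for a handful of predictions
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:20])

# each line shows how the features cumulatively move one prediction away from the base value
shap.decision_plot(explainer.expected_value, shap_values, X.iloc[:20])
```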

## Methods Unified by SHAP

1. *LIME:* Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should i trust you?: Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.

2. *Shapley sampling values:* Strumbelj, Erik, and Igor Kononenko. "Explaining prediction models and individual predictions with feature contributions." Knowledge and information systems 41.3 (2014): 647-665.

3. *DeepLIFT:* Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. "Learning important features through propagating activation differences." arXiv preprint arXiv:1704.02685 (2017).

4. *QII:* Datta, Anupam, Shayak Sen, and Yair Zick. "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems." Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.

5. *Layer-wise relevance propagation:* Bach, Sebastian, et al. "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation." PloS one 10.7 (2015): e0130140.

6. *Shapley regression values:* Lipovetsky, Stan, and Michael Conklin. "Analysis of regression in game theory approach." Applied Stochastic Models in Business and Industry 17.4 (2001): 319-330.

7. *Tree interpreter:* Saabas, Ando. Interpreting random forests. http://blog.datadive.net/interpreting-random-forests/

## Citations

The algorithms and visualizations used in this package came primarily out of research in [Su-In Lee's lab](https://suinlee.cs.washington.edu) at the University of Washington, and Microsoft Research. If you use SHAP in your research we would appreciate a citation to the appropriate paper(s):

- For general use of SHAP you can read/cite our [NeurIPS paper](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions) ([bibtex](https://raw.githubusercontent.com/shap/shap/master/docs/references/shap_nips.bib)).
- For TreeExplainer you can read/cite our [Nature Machine Intelligence paper](https://www.nature.com/articles/s42256-019-0138-9) ([bibtex](https://raw.githubusercontent.com/shap/shap/master/docs/references/tree_explainer.bib); [free access](https://rdcu.be/b0z70)).
- For GPUTreeExplainer you can read/cite [this article](https://arxiv.org/abs/2010.13972).
- For `force_plot` visualizations and medical applications you can read/cite our [Nature Biomedical Engineering paper](https://www.nature.com/articles/s41551-018-0304-0) ([bibtex](https://raw.githubusercontent.com/shap/shap/master/docs/references/nature_bme.bib); [free access](https://rdcu.be/baVbR)).

<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=189147091855991&ev=PageView&noscript=1" />

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "shap",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "Scott Lundberg <slund1@cs.washington.edu>",
    "download_url": "https://files.pythonhosted.org/packages/47/46/1b497452be642e19af56044814dfe32ee795805b443378821136729017a0/shap-0.46.0.tar.gz",
    "platform": null,
    "description": "\n\n<p align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/shap_header.svg\" width=\"800\" />\n</p>\n\n---\n[![PyPI](https://img.shields.io/pypi/v/shap)](https://pypi.org/project/shap/)\n[![Conda](https://img.shields.io/conda/vn/conda-forge/shap)](https://anaconda.org/conda-forge/shap)\n![License](https://img.shields.io/github/license/shap/shap)\n![Tests](https://github.com/shap/shap/actions/workflows/run_tests.yml/badge.svg)\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/shap/shap/master)\n[![Documentation Status](https://readthedocs.org/projects/shap/badge/?version=latest)](https://shap.readthedocs.io/en/latest/?badge=latest)\n![Downloads](https://img.shields.io/pypi/dm/shap)\n[![PyPI pyversions](https://img.shields.io/pypi/pyversions/shap)](https://pypi.org/pypi/shap/)\n\n\n**SHAP (SHapley Additive exPlanations)** is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see [papers](#citations) for details and citations).\n\n<!--**SHAP (SHapley Additive exPlanations)** is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see our [papers](#citations) for details and citations).-->\n\n\n\n## Install\n\nSHAP can be installed from either [PyPI](https://pypi.org/project/shap) or [conda-forge](https://anaconda.org/conda-forge/shap):\n\n<pre>\npip install shap\n<i>or</i>\nconda install -c conda-forge shap\n</pre>\n\n## Tree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models)\n\nWhile SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods (see our [Nature MI paper](https://rdcu.be/b0z70)). Fast C++ implementations are supported for *XGBoost*, *LightGBM*, *CatBoost*, *scikit-learn* and *pyspark* tree models:\n\n```python\nimport xgboost\nimport shap\n\n# train an XGBoost model\nX, y = shap.datasets.california()\nmodel = xgboost.XGBRegressor().fit(X, y)\n\n# explain the model's predictions using SHAP\n# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)\nexplainer = shap.Explainer(model)\nshap_values = explainer(X)\n\n# visualize the first prediction's explanation\nshap.plots.waterfall(shap_values[0])\n```\n\n<p align=\"center\">\n  <img width=\"616\" src=\"./docs/artwork/california_waterfall.png\" />\n</p>\n\nThe above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue. 
Another way to visualize the same explanation is to use a force plot (these are introduced in our [Nature BME paper](https://rdcu.be/baVbR)):\n\n```python\n# visualize the first prediction's explanation with a force plot\nshap.plots.force(shap_values[0])\n```\n\n<p align=\"center\">\n  <img width=\"811\" src=\"./docs/artwork/california_instance.png\" />\n</p>\n\nIf we take many force plot explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset (in the notebook this plot is interactive):\n\n```python\n# visualize all the training set predictions\nshap.plots.force(shap_values[:500])\n```\n\n<p align=\"center\">\n  <img width=\"811\" src=\"./docs/artwork/california_dataset.png\" />\n</p>\n\nTo understand how a single feature effects the output of the model we can plot the SHAP value of that feature vs. the value of the feature for all the examples in a dataset. Since SHAP values represent a feature's responsibility for a change in the model output, the plot below represents the change in predicted house price as the latitude changes. Vertical dispersion at a single value of latitude represents interaction effects with other features. To help reveal these interactions we can color by another feature. If we pass the whole explanation tensor to the `color` argument the scatter plot will pick the best feature to color by. In this case it picks longitude.\n\n```python\n# create a dependence scatter plot to show the effect of a single feature across the whole dataset\nshap.plots.scatter(shap_values[:, \"Latitude\"], color=shap_values)\n```\n\n<p align=\"center\">\n  <img width=\"544\" src=\"./docs/artwork/california_scatter.png\" />\n</p>\n\n\nTo get an overview of which features are most important for a model we can plot the SHAP values of every feature for every sample. The plot below sorts features by the sum of SHAP value magnitudes over all samples, and uses SHAP values to show the distribution of the impacts each feature has on the model output. The color represents the feature value (red high, blue low). This reveals for example that higher median incomes improves the predicted home price.\n\n```python\n# summarize the effects of all the features\nshap.plots.beeswarm(shap_values)\n```\n\n<p align=\"center\">\n  <img width=\"583\" src=\"./docs/artwork/california_beeswarm.png\" />\n</p>\n\nWe can also just take the mean absolute value of the SHAP values for each feature to get a standard bar plot (produces stacked bars for multi-class outputs):\n\n```python\nshap.plots.bar(shap_values)\n```\n\n<p align=\"center\">\n  <img width=\"570\" src=\"./docs/artwork/california_global_bar.png\" />\n</p>\n\n## Natural language example (transformers)\n\nSHAP has specific support for natural language models like those in the Hugging Face transformers library. By adding coalitional rules to traditional Shapley values we can form games that explain large modern NLP model using very few function evaluations. Using this functionality is as simple as passing a supported transformers pipeline to SHAP:\n\n```python\nimport transformers\nimport shap\n\n# load a transformers pipeline model\nmodel = transformers.pipeline('sentiment-analysis', return_all_scores=True)\n\n# explain the model on two sample inputs\nexplainer = shap.Explainer(model)\nshap_values = explainer([\"What a great movie! 
...if you have no taste.\"])\n\n# visualize the first prediction's explanation for the POSITIVE output class\nshap.plots.text(shap_values[0, :, \"POSITIVE\"])\n```\n\n<p align=\"center\">\n  <img width=\"811\" src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/sentiment_analysis_plot.png\" />\n</p>\n\n## Deep learning example with DeepExplainer (TensorFlow/Keras models)\n\nDeep SHAP is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with [DeepLIFT](https://arxiv.org/abs/1704.02685) described in the SHAP NIPS paper. The implementation here differs from the original DeepLIFT by using a distribution of background samples instead of a single reference value, and using Shapley equations to linearize components such as max, softmax, products, divisions, etc. Note that some of these enhancements have also been since integrated into DeepLIFT. TensorFlow models and Keras models using the TensorFlow backend are supported (there is also preliminary support for PyTorch):\n\n```python\n# ...include code from https://github.com/keras-team/keras/blob/master/examples/demo_mnist_convnet.py\n\nimport shap\nimport numpy as np\n\n# select a set of background examples to take an expectation over\nbackground = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]\n\n# explain predictions of the model on four images\ne = shap.DeepExplainer(model, background)\n# ...or pass tensors directly\n# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)\nshap_values = e.shap_values(x_test[1:5])\n\n# plot the feature attributions\nshap.image_plot(shap_values, -x_test[1:5])\n```\n\n<p align=\"center\">\n  <img width=\"820\" src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/mnist_image_plot.png\" />\n</p>\n\nThe plot above explains ten outputs (digits 0-9) for four different images. Red pixels increase the model's output while blue pixels decrease the output. The input images are shown on the left, and as nearly transparent grayscale backings behind each of the explanations. The sum of the SHAP values equals the difference between the expected model output (averaged over the background dataset) and the current model output. Note that for the 'zero' image the blank middle is important, while for the 'four' image the lack of a connection on top makes it a four instead of a nine.\n\n\n## Deep learning example with GradientExplainer (TensorFlow/Keras/PyTorch models)\n\nExpected gradients combines ideas from [Integrated Gradients](https://arxiv.org/abs/1703.01365), SHAP, and [SmoothGrad](https://arxiv.org/abs/1706.03825) into a single expected value equation. This allows an entire dataset to be used as the background distribution (as opposed to a single reference value) and allows local smoothing. If we approximate the model with a linear function between each background data sample and the current input to be explained, and we assume the input features are independent then expected gradients will compute approximate SHAP values. 
In the example below we have explained how the 7th intermediate layer of the VGG16 ImageNet model impacts the output probabilities.\n\n```python\nfrom keras.applications.vgg16 import VGG16\nfrom keras.applications.vgg16 import preprocess_input\nimport keras.backend as K\nimport numpy as np\nimport json\nimport shap\n\n# load pre-trained model and choose two images to explain\nmodel = VGG16(weights='imagenet', include_top=True)\nX,y = shap.datasets.imagenet50()\nto_explain = X[[39,41]]\n\n# load the ImageNet class names\nurl = \"https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json\"\nfname = shap.datasets.cache(url)\nwith open(fname) as f:\n    class_names = json.load(f)\n\n# explain how the input to the 7th layer of the model explains the top two classes\ndef map2layer(x, layer):\n    feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))\n    return K.get_session().run(model.layers[layer].input, feed_dict)\ne = shap.GradientExplainer(\n    (model.layers[7].input, model.layers[-1].output),\n    map2layer(X, 7),\n    local_smoothing=0 # std dev of smoothing noise\n)\nshap_values,indexes = e.shap_values(map2layer(to_explain, 7), ranked_outputs=2)\n\n# get the names for the classes\nindex_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)\n\n# plot the explanations\nshap.image_plot(shap_values, to_explain, index_names)\n```\n\n<p align=\"center\">\n  <img width=\"500\" src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/gradient_imagenet_plot.png\" />\n</p>\n\nPredictions for two input images are explained in the plot above. Red pixels represent positive SHAP values that increase the probability of the class, while blue pixels represent negative SHAP values the reduce the probability of the class. By using `ranked_outputs=2` we explain only the two most likely classes for each input (this spares us from explaining all 1,000 classes).\n\n## Model agnostic example with KernelExplainer (explains any function)\n\nKernel SHAP uses a specially-weighted local linear regression to estimate SHAP values for any model. Below is a simple example for explaining a multi-class SVM on the classic iris dataset.\n\n```python\nimport sklearn\nimport shap\nfrom sklearn.model_selection import train_test_split\n\n# print the JS visualization code to the notebook\nshap.initjs()\n\n# train a SVM classifier\nX_train,X_test,Y_train,Y_test = train_test_split(*shap.datasets.iris(), test_size=0.2, random_state=0)\nsvm = sklearn.svm.SVC(kernel='rbf', probability=True)\nsvm.fit(X_train, Y_train)\n\n# use Kernel SHAP to explain test set predictions\nexplainer = shap.KernelExplainer(svm.predict_proba, X_train, link=\"logit\")\nshap_values = explainer.shap_values(X_test, nsamples=100)\n\n# plot the SHAP values for the Setosa output of the first instance\nshap.force_plot(explainer.expected_value[0], shap_values[0][0,:], X_test.iloc[0,:], link=\"logit\")\n```\n<p align=\"center\">\n  <img width=\"810\" src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/iris_instance.png\" />\n</p>\n\nThe above explanation shows four features each contributing to push the model output from the base value (the average model output over the training dataset we passed) towards zero. If there were any features pushing the class label higher they would be shown in red.\n\nIf we take many explanations such as the one shown above, rotate them 90 degrees, and then stack them horizontally, we can see explanations for an entire dataset. 
This is exactly what we do below for all the examples in the iris test set:\n\n```python\n# plot the SHAP values for the Setosa output of all instances\nshap.force_plot(explainer.expected_value[0], shap_values[0], X_test, link=\"logit\")\n```\n<p align=\"center\">\n  <img width=\"813\" src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/iris_dataset.png\" />\n</p>\n\n## SHAP Interaction Values\n\nSHAP interaction values are a generalization of SHAP values to higher order interactions. Fast exact computation of pairwise interactions are implemented for tree models with `shap.TreeExplainer(model).shap_interaction_values(X)`. This returns a matrix for every prediction, where the main effects are on the diagonal and the interaction effects are off-diagonal. These values often reveal interesting hidden relationships, such as how the increased risk of death peaks for men at age 60 (see the NHANES notebook for details):\n\n<p align=\"center\">\n  <img width=\"483\" src=\"https://raw.githubusercontent.com/shap/shap/master/docs/artwork/nhanes_age_sex_interaction.png\" />\n</p>\n\n## Sample notebooks\n\nThe notebooks below demonstrate different use cases for SHAP. Look inside the notebooks directory of the repository if you want to try playing with the original notebooks yourself.\n\n### TreeExplainer\n\nAn implementation of Tree SHAP, a fast and exact algorithm to compute SHAP values for trees and ensembles of trees.\n\n- [**NHANES survival model with XGBoost and SHAP interaction values**](https://shap.github.io/shap/notebooks/NHANES%20I%20Survival%20Model.html) - Using mortality data from 20 years of followup this notebook demonstrates how to use XGBoost and `shap` to uncover complex risk factor relationships.\n\n- [**Census income classification with LightGBM**](https://shap.github.io/shap/notebooks/tree_explainer/Census%20income%20classification%20with%20LightGBM.html) - Using the standard adult census income dataset, this notebook trains a gradient boosting tree model with LightGBM and then explains predictions using `shap`.\n\n- [**League of Legends Win Prediction with XGBoost**](https://shap.github.io/shap/notebooks/League%20of%20Legends%20Win%20Prediction%20with%20XGBoost.html) - Using a Kaggle dataset of 180,000 ranked matches from League of Legends we train and explain a gradient boosting tree model with XGBoost to predict if a player will win their match.\n\n### DeepExplainer\n\nAn implementation of Deep SHAP, a faster (but only approximate) algorithm to compute SHAP values for deep learning models that is based on connections between SHAP and the DeepLIFT algorithm.\n\n- [**MNIST Digit classification with Keras**](https://shap.github.io/shap/notebooks/deep_explainer/Front%20Page%20DeepExplainer%20MNIST%20Example.html) - Using the MNIST handwriting recognition dataset, this notebook trains a neural network with Keras and then explains predictions using `shap`.\n\n- [**Keras LSTM for IMDB Sentiment Classification**](https://shap.github.io/shap/notebooks/deep_explainer/Keras%20LSTM%20for%20IMDB%20Sentiment%20Classification.html) - This notebook trains an LSTM with Keras on the IMDB text sentiment analysis dataset and then explains predictions using `shap`.\n\n### GradientExplainer\n\nAn implementation of expected gradients to approximate SHAP values for deep learning models. It is based on connections between SHAP and the Integrated Gradients algorithm. 
GradientExplainer is slower than DeepExplainer and makes different approximation assumptions.\n\n- [**Explain an Intermediate Layer of VGG16 on ImageNet**](https://shap.github.io/shap/notebooks/gradient_explainer/Explain%20an%20Intermediate%20Layer%20of%20VGG16%20on%20ImageNet.html) - This notebook demonstrates how to explain the output of a pre-trained VGG16 ImageNet model using an internal convolutional layer.\n\n### LinearExplainer\n\nFor a linear model with independent features we can analytically compute the exact SHAP values. We can also account for feature correlation if we are willing to estimate the feature covariance matrix. LinearExplainer supports both of these options.\n\n- [**Sentiment Analysis with Logistic Regression**](https://shap.github.io/shap/notebooks/linear_explainer/Sentiment%20Analysis%20with%20Logistic%20Regression.html) - This notebook demonstrates how to explain a linear logistic regression sentiment analysis model.\n\n### KernelExplainer\n\nAn implementation of Kernel SHAP, a model agnostic method to estimate SHAP values for any model. Because it makes no assumptions about the model type, KernelExplainer is slower than the other model type specific algorithms.\n\n- [**Census income classification with scikit-learn**](https://shap.github.io/shap/notebooks/Census%20income%20classification%20with%20scikit-learn.html) - Using the standard adult census income dataset, this notebook trains a k-nearest neighbors classifier using scikit-learn and then explains predictions using `shap`.\n\n- [**ImageNet VGG16 Model with Keras**](https://shap.github.io/shap/notebooks/ImageNet%20VGG16%20Model%20with%20Keras.html) - Explain the classic VGG16 convolutional neural network's predictions for an image. This works by applying the model agnostic Kernel SHAP method to a super-pixel segmented image.\n\n- [**Iris classification**](https://shap.github.io/shap/notebooks/Iris%20classification%20with%20scikit-learn.html) - A basic demonstration using the popular iris species dataset. It explains predictions from six different models in scikit-learn using `shap`.\n\n## Documentation notebooks\n\nThese notebooks comprehensively demonstrate how to use specific functions and objects.\n\n- [`shap.decision_plot` and `shap.multioutput_decision_plot`](https://shap.github.io/shap/notebooks/plots/decision_plot.html)\n\n- [`shap.dependence_plot`](https://shap.github.io/shap/notebooks/plots/dependence_plot.html)\n\n## Methods Unified by SHAP\n\n1. *LIME:* Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \"Why should i trust you?: Explaining the predictions of any classifier.\" Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.\n\n2. *Shapley sampling values:* Strumbelj, Erik, and Igor Kononenko. \"Explaining prediction models and individual predictions with feature contributions.\" Knowledge and information systems 41.3 (2014): 647-665.\n\n3. *DeepLIFT:* Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. \"Learning important features through propagating activation differences.\" arXiv preprint arXiv:1704.02685 (2017).\n\n4. *QII:* Datta, Anupam, Shayak Sen, and Yair Zick. \"Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems.\" Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.\n\n5. *Layer-wise relevance propagation:* Bach, Sebastian, et al. 
\"On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.\" PloS one 10.7 (2015): e0130140.\n\n6. *Shapley regression values:* Lipovetsky, Stan, and Michael Conklin. \"Analysis of regression in game theory approach.\" Applied Stochastic Models in Business and Industry 17.4 (2001): 319-330.\n\n7. *Tree interpreter:* Saabas, Ando. Interpreting random forests. http://blog.datadive.net/interpreting-random-forests/\n\n## Citations\n\nThe algorithms and visualizations used in this package came primarily out of research in [Su-In Lee's lab](https://suinlee.cs.washington.edu) at the University of Washington, and Microsoft Research. If you use SHAP in your research we would appreciate a citation to the appropriate paper(s):\n\n- For general use of SHAP you can read/cite our [NeurIPS paper](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions) ([bibtex](https://raw.githubusercontent.com/shap/shap/master/docs/references/shap_nips.bib)).\n- For TreeExplainer you can read/cite our [Nature Machine Intelligence paper](https://www.nature.com/articles/s42256-019-0138-9) ([bibtex](https://raw.githubusercontent.com/shap/shap/master/docs/references/tree_explainer.bib); [free access](https://rdcu.be/b0z70)).\n- For GPUTreeExplainer you can read/cite [this article](https://arxiv.org/abs/2010.13972).\n- For `force_plot` visualizations and medical applications you can read/cite our [Nature Biomedical Engineering paper](https://www.nature.com/articles/s41551-018-0304-0) ([bibtex](https://raw.githubusercontent.com/shap/shap/master/docs/references/nature_bme.bib); [free access](https://rdcu.be/baVbR)).\n\n<img height=\"1\" width=\"1\" style=\"display:none\" src=\"https://www.facebook.com/tr?id=189147091855991&ev=PageView&noscript=1\" />\n",
    "bugtrack_url": null,
    "license": "MIT License",
    "summary": "A unified approach to explain the output of any machine learning model.",
    "version": "0.46.0",
    "project_urls": {
        "Documentation": "https://shap.readthedocs.io/en/latest/index.html",
        "Release Notes": "https://shap.readthedocs.io/en/latest/release_notes.html",
        "Repository": "http://github.com/shap/shap"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "13a897442ec8e7aaad01d860768232b3b7051adb0560a9c79e52ce5e1222cbf1",
                "md5": "15ebf4eb8b5e226a11ac254286e06f3a",
                "sha256": "905b2d7a0262ef820785a7c0e3c7f24c9d281e6f934edb65cbe811fe0e971187"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp310-cp310-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "15ebf4eb8b5e226a11ac254286e06f3a",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.9",
            "size": 459332,
            "upload_time": "2024-06-27T10:16:34",
            "upload_time_iso_8601": "2024-06-27T10:16:34.710950Z",
            "url": "https://files.pythonhosted.org/packages/13/a8/97442ec8e7aaad01d860768232b3b7051adb0560a9c79e52ce5e1222cbf1/shap-0.46.0-cp310-cp310-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "00b32795a586a4446c8cbf04b6e8f15c19b4a6fb867e5c6cf9fcbca97d56a20b",
                "md5": "8dc887aee46c041c90d3b9e62530773f",
                "sha256": "bccbb30ffbf8b9ed53e476d0c1319fdfcbeac455fe9df277fb0d570d92790e80"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp310-cp310-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "8dc887aee46c041c90d3b9e62530773f",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.9",
            "size": 455839,
            "upload_time": "2024-06-27T10:16:37",
            "upload_time_iso_8601": "2024-06-27T10:16:37.654709Z",
            "url": "https://files.pythonhosted.org/packages/00/b3/2795a586a4446c8cbf04b6e8f15c19b4a6fb867e5c6cf9fcbca97d56a20b/shap-0.46.0-cp310-cp310-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "13a6b75760a52664dd82d530f9e232918bb74d1d6c39abcf34523c4f75cd4264",
                "md5": "d9dcecc3c0b4083986ba8fa210d04ba1",
                "sha256": "9633d3d7174acc01455538169ca6e6344f570530384548631aeadcf7bfdaaaea"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "d9dcecc3c0b4083986ba8fa210d04ba1",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.9",
            "size": 540067,
            "upload_time": "2024-06-27T10:16:39",
            "upload_time_iso_8601": "2024-06-27T10:16:39.713983Z",
            "url": "https://files.pythonhosted.org/packages/13/a6/b75760a52664dd82d530f9e232918bb74d1d6c39abcf34523c4f75cd4264/shap-0.46.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "351370e07364855b05d8aa628ec5aec4f038444ede0e26eee2be00c38077ee72",
                "md5": "0a55bdd7095b3ccb92408eb562db7683",
                "sha256": "c6097eb2ab7e8c194254bac3e462266490fbdd43bfe35a1014e9ee21c4ef10ee"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "has_sig": false,
            "md5_digest": "0a55bdd7095b3ccb92408eb562db7683",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.9",
            "size": 537808,
            "upload_time": "2024-06-27T10:16:41",
            "upload_time_iso_8601": "2024-06-27T10:16:41.955108Z",
            "url": "https://files.pythonhosted.org/packages/35/13/70e07364855b05d8aa628ec5aec4f038444ede0e26eee2be00c38077ee72/shap-0.46.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b4fcdd28e6838630cd436914116aa07a019753a40b956a05831b71bd3f7ce914",
                "md5": "60f39b69160fda78dd0430838258cecb",
                "sha256": "0cf7c6e3f056cf3bfd16bcfd5744d0cc25b851555b1e750a3ab889b3077d2d05"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp310-cp310-musllinux_1_2_x86_64.whl",
            "has_sig": false,
            "md5_digest": "60f39b69160fda78dd0430838258cecb",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.9",
            "size": 1538235,
            "upload_time": "2024-06-27T10:16:43",
            "upload_time_iso_8601": "2024-06-27T10:16:43.681859Z",
            "url": "https://files.pythonhosted.org/packages/b4/fc/dd28e6838630cd436914116aa07a019753a40b956a05831b71bd3f7ce914/shap-0.46.0-cp310-cp310-musllinux_1_2_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "aefef9e4d5e002bb58047c81edb6448579c179925c3807c98589ee70953587ab",
                "md5": "2b52892a39eaff9578a234bc23cf4a21",
                "sha256": "949bd7fa40371c3f1885a30ae0611dd481bf4ac90066ff726c73cb5bb393032b"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp310-cp310-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "2b52892a39eaff9578a234bc23cf4a21",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.9",
            "size": 456103,
            "upload_time": "2024-06-27T10:16:46",
            "upload_time_iso_8601": "2024-06-27T10:16:46.764434Z",
            "url": "https://files.pythonhosted.org/packages/ae/fe/f9e4d5e002bb58047c81edb6448579c179925c3807c98589ee70953587ab/shap-0.46.0-cp310-cp310-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e5a143bd69f32ddf381a09de18ea94d4b215d5ced3a24ff1a7b7d1a9401b5b85",
                "md5": "d338c979877d7b2a87b09be9d19b9100",
                "sha256": "f18217c98f39fd485d541f6aab0b860b3be74b69b21d4faf11959e3fcba765c5"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp311-cp311-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "d338c979877d7b2a87b09be9d19b9100",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.9",
            "size": 459333,
            "upload_time": "2024-06-27T10:16:48",
            "upload_time_iso_8601": "2024-06-27T10:16:48.872882Z",
            "url": "https://files.pythonhosted.org/packages/e5/a1/43bd69f32ddf381a09de18ea94d4b215d5ced3a24ff1a7b7d1a9401b5b85/shap-0.46.0-cp311-cp311-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5f9edce41d5ec9e79add65faf4381d8d4492247b29daaa6cc7d7fd0298abc1e2",
                "md5": "2279e6f2776f9cbc382f29e742a66ee1",
                "sha256": "5bbdae4489577c6fce1cfe2d9d8f3d5b96d69284d29645fe651f78f6e965aeb4"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp311-cp311-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "2279e6f2776f9cbc382f29e742a66ee1",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.9",
            "size": 455835,
            "upload_time": "2024-06-27T10:16:51",
            "upload_time_iso_8601": "2024-06-27T10:16:51.074151Z",
            "url": "https://files.pythonhosted.org/packages/5f/9e/dce41d5ec9e79add65faf4381d8d4492247b29daaa6cc7d7fd0298abc1e2/shap-0.46.0-cp311-cp311-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "066a09e3cb9864118337c0f3c2a0dc5add6b642e9f672665062e186d67ba992d",
                "md5": "338ed2068921a5efcd6180608e4c56d2",
                "sha256": "13d36dc58d1e8c010feb4e7da71c77d23626a52d12d16b02869e793b11be4695"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp311-cp311-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "338ed2068921a5efcd6180608e4c56d2",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.9",
            "size": 540163,
            "upload_time": "2024-06-27T10:16:53",
            "upload_time_iso_8601": "2024-06-27T10:16:53.179073Z",
            "url": "https://files.pythonhosted.org/packages/06/6a/09e3cb9864118337c0f3c2a0dc5add6b642e9f672665062e186d67ba992d/shap-0.46.0-cp311-cp311-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c374440eacbdf21c1b2e0a5b6962b79d4435e56a88588043d144a16c7785a596",
                "md5": "1753e12aec0aa4b0ca8a3e8f9ab1ab83",
                "sha256": "70e06fdfdf53d5fb932c82f4529397552b262e0ccce734f5226fb1e1eab2bc3e"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "has_sig": false,
            "md5_digest": "1753e12aec0aa4b0ca8a3e8f9ab1ab83",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.9",
            "size": 537765,
            "upload_time": "2024-06-27T10:16:54",
            "upload_time_iso_8601": "2024-06-27T10:16:54.763030Z",
            "url": "https://files.pythonhosted.org/packages/c3/74/440eacbdf21c1b2e0a5b6962b79d4435e56a88588043d144a16c7785a596/shap-0.46.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "08e6027ca36efcc8871eda4084bde5e4658a90e84006086186e39588fd03b396",
                "md5": "959ff775055cf142ba6aa18743ede9f3",
                "sha256": "943f0806fa00b4fafb174f172a73d88de2d8600e6d69c2e2bff833f00e6c4c21"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp311-cp311-musllinux_1_2_x86_64.whl",
            "has_sig": false,
            "md5_digest": "959ff775055cf142ba6aa18743ede9f3",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.9",
            "size": 1538290,
            "upload_time": "2024-06-27T10:16:56",
            "upload_time_iso_8601": "2024-06-27T10:16:56.819496Z",
            "url": "https://files.pythonhosted.org/packages/08/e6/027ca36efcc8871eda4084bde5e4658a90e84006086186e39588fd03b396/shap-0.46.0-cp311-cp311-musllinux_1_2_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8229923869e92c74bf07ec2b9a52ad5ac67d4184c873ba33ada7d4584356463a",
                "md5": "4ddbf0188efb66605a7f9b7ffd12c4e2",
                "sha256": "c972a2efdc9fc00d543efaa55805eca947b8c418d065962d967824c2d5d295d0"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp311-cp311-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "4ddbf0188efb66605a7f9b7ffd12c4e2",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.9",
            "size": 456103,
            "upload_time": "2024-06-27T10:16:58",
            "upload_time_iso_8601": "2024-06-27T10:16:58.433365Z",
            "url": "https://files.pythonhosted.org/packages/82/29/923869e92c74bf07ec2b9a52ad5ac67d4184c873ba33ada7d4584356463a/shap-0.46.0-cp311-cp311-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "05c53c4fe600dd71fd2785d21f86a3e7f1f13de60c9b434052e05ba17598f81e",
                "md5": "28d02a9b985e0a769f5c3d011cfcb9cc",
                "sha256": "a9cc9be191562bea1a782baff912854d267c6f4831bbf454d8d7bb7df7ddb214"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp312-cp312-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "28d02a9b985e0a769f5c3d011cfcb9cc",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.9",
            "size": 459316,
            "upload_time": "2024-06-27T10:17:00",
            "upload_time_iso_8601": "2024-06-27T10:17:00.313092Z",
            "url": "https://files.pythonhosted.org/packages/05/c5/3c4fe600dd71fd2785d21f86a3e7f1f13de60c9b434052e05ba17598f81e/shap-0.46.0-cp312-cp312-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4d1ac00a1e7a68a4af29f2b40c8a8740dd241cba6cc58cd6ac266956a954a41d",
                "md5": "2a2fd5d3453e54a8f88a10d672b8f392",
                "sha256": "ab1fecfb43604605be17e26ae12bde4406c451c46b54b980d9570cec03fbc239"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp312-cp312-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "2a2fd5d3453e54a8f88a10d672b8f392",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.9",
            "size": 455333,
            "upload_time": "2024-06-27T10:17:02",
            "upload_time_iso_8601": "2024-06-27T10:17:02.719810Z",
            "url": "https://files.pythonhosted.org/packages/4d/1a/c00a1e7a68a4af29f2b40c8a8740dd241cba6cc58cd6ac266956a954a41d/shap-0.46.0-cp312-cp312-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "7f0ae3ab0dcddf4db1158fbf0d6c96348ba5f3031275f59088e0e3b7630cdcde",
                "md5": "08fc2ca74ab0f25496fedff00495a0ec",
                "sha256": "b216adf2a17b0e0694f17965ac29354ca8c4f27ac3c66f68bf6fc4cb2aa28207"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp312-cp312-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "08fc2ca74ab0f25496fedff00495a0ec",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.9",
            "size": 543894,
            "upload_time": "2024-06-27T10:17:04",
            "upload_time_iso_8601": "2024-06-27T10:17:04.941374Z",
            "url": "https://files.pythonhosted.org/packages/7f/0a/e3ab0dcddf4db1158fbf0d6c96348ba5f3031275f59088e0e3b7630cdcde/shap-0.46.0-cp312-cp312-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8f8fca077689b76161b51b420031b88948ef92ade55730e85490215222734729",
                "md5": "cc3d66c5efe5752e7f77d1213147640c",
                "sha256": "b6e5dc5257b747a784f7a9b3acb64216a9011f01734f3c96b27fe5e15ae5f99f"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "has_sig": false,
            "md5_digest": "cc3d66c5efe5752e7f77d1213147640c",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.9",
            "size": 540735,
            "upload_time": "2024-06-27T10:17:06",
            "upload_time_iso_8601": "2024-06-27T10:17:06.610000Z",
            "url": "https://files.pythonhosted.org/packages/8f/8f/ca077689b76161b51b420031b88948ef92ade55730e85490215222734729/shap-0.46.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6eb6169de0d8971c91decd3dacfd63edeeedfc1bba30bfc6abf8480142aafd48",
                "md5": "08cff62e0c8a95132d1a707b4b8db22b",
                "sha256": "1230bf973463041dfa15734f290fbf3ab9c6e4e8222339c76f68fc355b940d80"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp312-cp312-musllinux_1_2_x86_64.whl",
            "has_sig": false,
            "md5_digest": "08cff62e0c8a95132d1a707b4b8db22b",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.9",
            "size": 1537953,
            "upload_time": "2024-06-27T10:17:08",
            "upload_time_iso_8601": "2024-06-27T10:17:08.225084Z",
            "url": "https://files.pythonhosted.org/packages/6e/b6/169de0d8971c91decd3dacfd63edeeedfc1bba30bfc6abf8480142aafd48/shap-0.46.0-cp312-cp312-musllinux_1_2_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0458b2ea558ec8d9ed3728e83dfacb1b920c54a1a1f6feee2632c04676c3c1e9",
                "md5": "fdbc81515f6402ef484b4476dd9a36bb",
                "sha256": "0cbbf996537b2a42d3bc7f2a13492988822ee1bfd7220700989408dfb9e1c5ad"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp312-cp312-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "fdbc81515f6402ef484b4476dd9a36bb",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.9",
            "size": 456226,
            "upload_time": "2024-06-27T10:17:10",
            "upload_time_iso_8601": "2024-06-27T10:17:10.589525Z",
            "url": "https://files.pythonhosted.org/packages/04/58/b2ea558ec8d9ed3728e83dfacb1b920c54a1a1f6feee2632c04676c3c1e9/shap-0.46.0-cp312-cp312-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "64f9ddd643b2336139d9dab3a5facb356f5fd39aeefcf3333716bf7fc772319b",
                "md5": "f8c2a592a486632509c3e1d581c40218",
                "sha256": "3c7d0c53a8cbefb2260ce28a98fa866c1a287770981f95c40a54f9d1082cbb31"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp39-cp39-macosx_10_9_x86_64.whl",
            "has_sig": false,
            "md5_digest": "f8c2a592a486632509c3e1d581c40218",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.9",
            "size": 459327,
            "upload_time": "2024-06-27T10:17:12",
            "upload_time_iso_8601": "2024-06-27T10:17:12.120967Z",
            "url": "https://files.pythonhosted.org/packages/64/f9/ddd643b2336139d9dab3a5facb356f5fd39aeefcf3333716bf7fc772319b/shap-0.46.0-cp39-cp39-macosx_10_9_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "66ffb8aaa11f1111a44c9e11bbb8302641106ca9a6531b3e3badb006b9dac5ad",
                "md5": "a3151cf3e1b1805d89727029298bb709",
                "sha256": "0726f8c63f09dde586c9859ad315641f5a080e9aecf123a0cabc336b61703d66"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp39-cp39-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "a3151cf3e1b1805d89727029298bb709",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.9",
            "size": 455837,
            "upload_time": "2024-06-27T10:17:13",
            "upload_time_iso_8601": "2024-06-27T10:17:13.671332Z",
            "url": "https://files.pythonhosted.org/packages/66/ff/b8aaa11f1111a44c9e11bbb8302641106ca9a6531b3e3badb006b9dac5ad/shap-0.46.0-cp39-cp39-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c65317cd5c57123f67b03c02d7a86ac5b9af76395f9e85e2e22960b259b2531a",
                "md5": "083fe5641e1ec135bbf31fc7b6eff5eb",
                "sha256": "99edc28daac4cbb98cd9f02febf4e9fbc6b9e3d24519c22ed59a98c68c47336c"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "083fe5641e1ec135bbf31fc7b6eff5eb",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.9",
            "size": 539936,
            "upload_time": "2024-06-27T10:17:15",
            "upload_time_iso_8601": "2024-06-27T10:17:15.169267Z",
            "url": "https://files.pythonhosted.org/packages/c6/53/17cd5c57123f67b03c02d7a86ac5b9af76395f9e85e2e22960b259b2531a/shap-0.46.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "de90ffb554377d8b7ccecd0d40e56f9ecc9df0741eaf1f81a83ebb491f374307",
                "md5": "dbddad35f751a4979950a69d87391762",
                "sha256": "85a6ff9c9e15abd9a332360cff8d105165a600466167d6274dab468a050d005a"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "has_sig": false,
            "md5_digest": "dbddad35f751a4979950a69d87391762",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.9",
            "size": 537688,
            "upload_time": "2024-06-27T10:17:16",
            "upload_time_iso_8601": "2024-06-27T10:17:16.905455Z",
            "url": "https://files.pythonhosted.org/packages/de/90/ffb554377d8b7ccecd0d40e56f9ecc9df0741eaf1f81a83ebb491f374307/shap-0.46.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "fa76d408da66d606e15d2ff6b6564d5023144679eebf4d6b298eb527ab56b043",
                "md5": "cf8eb7cceb23fd97055f9fd20c7c266e",
                "sha256": "9f9f9727839e2459dfa4b4fbc190224e87f7b4b2a29f0e2a438500215921192b"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp39-cp39-musllinux_1_2_x86_64.whl",
            "has_sig": false,
            "md5_digest": "cf8eb7cceb23fd97055f9fd20c7c266e",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.9",
            "size": 1538120,
            "upload_time": "2024-06-27T10:17:18",
            "upload_time_iso_8601": "2024-06-27T10:17:18.392214Z",
            "url": "https://files.pythonhosted.org/packages/fa/76/d408da66d606e15d2ff6b6564d5023144679eebf4d6b298eb527ab56b043/shap-0.46.0-cp39-cp39-musllinux_1_2_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0248033ab9a2dee26d3de7e57cf532ab1d8408a608544c85ff98e6ea65775bdf",
                "md5": "37bc7b7e6521cfd5f1397a98283a1e71",
                "sha256": "b169b485a69f7d32e32fa64ad77be00129436c4455b9d0997b21b553f0becc8c"
            },
            "downloads": -1,
            "filename": "shap-0.46.0-cp39-cp39-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "37bc7b7e6521cfd5f1397a98283a1e71",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.9",
            "size": 456106,
            "upload_time": "2024-06-27T10:17:20",
            "upload_time_iso_8601": "2024-06-27T10:17:20.147699Z",
            "url": "https://files.pythonhosted.org/packages/02/48/033ab9a2dee26d3de7e57cf532ab1d8408a608544c85ff98e6ea65775bdf/shap-0.46.0-cp39-cp39-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "47461b497452be642e19af56044814dfe32ee795805b443378821136729017a0",
                "md5": "587f796fc2976830f52ec9e146df3369",
                "sha256": "bdaa5b098be5a958348015e940f6fd264339b5db1e651f9898a3117be95b05a0"
            },
            "downloads": -1,
            "filename": "shap-0.46.0.tar.gz",
            "has_sig": false,
            "md5_digest": "587f796fc2976830f52ec9e146df3369",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 1214102,
            "upload_time": "2024-06-27T10:17:22",
            "upload_time_iso_8601": "2024-06-27T10:17:22.263905Z",
            "url": "https://files.pythonhosted.org/packages/47/46/1b497452be642e19af56044814dfe32ee795805b443378821136729017a0/shap-0.46.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-06-27 10:17:22",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "shap",
    "github_project": "shap",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "shap"
}
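
The listing above mirrors the per-file records returned by the standard PyPI JSON API (`https://pypi.org/pypi/<project>/<version>/json`), where each entry under `"urls"` carries the wheel or sdist filename, size, digests, upload time, and yanked status. As a minimal sketch (assuming that public endpoint and using only the Python standard library), the same data can be retrieved and summarized like this:

```python
import json
from urllib.request import urlopen

# Fetch the per-file metadata for shap 0.46.0 from the PyPI JSON API.
# The response's "urls" list matches the records shown above.
with urlopen("https://pypi.org/pypi/shap/0.46.0/json") as resp:
    release = json.load(resp)

# Print each distribution file with its size and a prefix of its sha256 digest.
for f in release["urls"]:
    print(f'{f["filename"]:<95} {f["size"]:>9} bytes  sha256={f["digests"]["sha256"][:12]}...')
```

The `sha256` values printed this way can be compared against the digests recorded above to verify a downloaded artifact before installing it.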
        