# shapiq: Shapley Interactions for Machine Learning <img src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/logo/logo_shapiq_light.svg" alt="shapiq_logo" align="right" height="250px"/>
> An interaction may speak more than a thousand main effects.
Shapley Interaction Quantification (`shapiq`) is a Python package for (1) approximating any-order Shapley interactions, (2) benchmarking game-theoretical algorithms for machine learning, and (3) explaining feature interactions of model predictions. `shapiq` extends the well-known [shap](https://github.com/shap/shap) package both for researchers working on game theory in machine learning and for end-users explaining models. SHAP-IQ extends individual Shapley values by quantifying the **synergy** effects between entities (called **players** in game-theoretic jargon) such as explanatory features, data points, or weak learners in ensemble models. Synergies between players give a more comprehensive view of machine learning models.
## 🛠️ Install
`shapiq` is intended to work with **Python 3.9 and above**. Installation can be done via `pip`:
```sh
pip install shapiq
```
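To verify the installation, you can import the package and print its version (a minimal sanity check, assuming the usual `__version__` attribute is present):

```sh
python -c "import shapiq; print(shapiq.__version__)"
```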
## ⭐ Quickstart
You can explain your model with `shapiq.explainer` and visualize Shapley interactions with `shapiq.plot`.
If you are interested in the underlying game theoretic algorithms, then check out the `shapiq.approximator` and `shapiq.games` modules.
### Compute any-order feature interactions
Explain your models with Shapley interactions:
```python
import shapiq
from sklearn.ensemble import RandomForestRegressor

# load data
X, y = shapiq.load_california_housing(to_numpy=True)
# train a model
model = RandomForestRegressor()
model.fit(X, y)
# set up an explainer with k-SII interaction values up to order 4
explainer = shapiq.TabularExplainer(
    model=model,
    data=X,
    index="k-SII",
    max_order=4,
)
# explain the model's prediction for the first sample
interaction_values = explainer.explain(X[0], budget=256)
# analyse interaction values
print(interaction_values)
>> InteractionValues(
>> index=k-SII, max_order=4, min_order=0, estimated=False,
>> estimation_budget=256, n_players=8, baseline_value=2.07282292,
>> Top 10 interactions:
>> (0,): 1.696969079 # attribution of feature 0
>> (0, 5): 0.4847876
>> (0, 1): 0.4494288 # interaction between features 0 & 1
>> (0, 6): 0.4477677
>> (1, 5): 0.3750034
>> (4, 5): 0.3468325
>> (0, 3, 6): -0.320 # interaction between features 0 & 3 & 6
>> (2, 3, 6): -0.329
>> (0, 1, 5): -0.363
>> (6,): -0.56358890
>> )
```
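Beyond printing, the returned `InteractionValues` object can be inspected programmatically. A minimal sketch, assuming tuple-based indexing on `InteractionValues` and using `get_n_order_values` as in the plotting example further below:

```python
import numpy as np

# look up single interaction scores (assumes tuple-based indexing)
main_effect = interaction_values[(0,)]    # attribution of feature 0
pair_effect = interaction_values[(0, 1)]  # interaction between features 0 & 1

# collect all second-order interactions into an (n_players, n_players) array
second_order = interaction_values.get_n_order_values(2)
i, j = np.unravel_index(np.abs(second_order).argmax(), second_order.shape)
print(f"strongest pairwise interaction: features {i} & {j} -> {second_order[i, j]:.4f}")
```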
### Compute Shapley values like you are used to with SHAP
If you are used to working with SHAP, you can compute Shapley values with `shapiq` in much the same way:
load your data and model, then use `shapiq.Explainer` to compute the Shapley values.
If you set the index to ``'SV'``, you get the Shapley values as you know them from SHAP.
```python
import shapiq
data, model = ... # get your data and model
explainer = shapiq.Explainer(
    model=model,
    data=data,
    index="SV",  # Shapley values
)
shapley_values = explainer.explain(data[0])
shapley_values.plot_force(feature_names=...)
```
Once you have the Shapley values, you can just as easily compute interaction values:
```python
explainer = shapiq.Explainer(
    model=model,
    data=data,
    index="k-SII",  # k-SII interaction values
    max_order=2,    # specify any order you want
)
interaction_values = explainer.explain(data[0])
interaction_values.plot_force(feature_names=...)
```
<p align="center">
<img width="800px" src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/images/motivation_sv_and_si.png" alt="An example Force Plot for the California Housing Dataset with Shapley Interactions">
</p>
### Visualize feature interactions
A handy way of visualizing interaction scores up to order 2 is the network plot.
You can see an example of such a plot below.
The nodes represent feature **attributions** and the edges represent the **interactions** between features.
The size of the nodes and the strength of the edges are proportional to the absolute values of the attributions and interactions, respectively.
```python
shapiq.network_plot(
    first_order_values=interaction_values.get_n_order_values(1),
    second_order_values=interaction_values.get_n_order_values(2),
)
# or use
interaction_values.plot_network()
```
The code above produces a plot like the one shown below:
<p align="center">
<img width="500px" src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/network_example2.png" alt="network_plot_example">
</p>
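If you want to save the figure instead of displaying it, here is a sketch assuming the plot functions follow the usual matplotlib pattern and accept a `show` flag:

```python
import matplotlib.pyplot as plt

# assumption: show=False leaves the current matplotlib figure open for saving
interaction_values.plot_network(show=False)
plt.savefig("network_plot.png", dpi=300, bbox_inches="tight")
plt.close()
```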
### Explain TabPFN
With ``shapiq`` you can also explain [``TabPFN``](https://github.com/PriorLabs/TabPFN) by making use of the _remove-and-recontextualize_ explanation paradigm implemented in ``shapiq.TabPFNExplainer``.
```python
import tabpfn
import shapiq

data, labels = ...  # load your data
model = tabpfn.TabPFNClassifier()  # get TabPFN
model.fit(data, labels)  # "fit" TabPFN (optional)
explainer = shapiq.TabPFNExplainer(  # set up the explainer
    model=model,
    data=data,
    labels=labels,
    index="FSII",
)
fsii_values = explainer.explain(data[0])  # explain with Faithful Shapley Interaction values
fsii_values.plot_force()  # plot the force plot
```
<p align="center">
<img width="800px" src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/images/fsii_tabpfn_force_plot_example.png" alt="Force Plot of FSII values as derived from the example tabpfn notebook">
</p>
## 📖 Documentation with tutorials
The documentation of ``shapiq`` can be found at https://shapiq.readthedocs.io.
If you are new to Shapley values or Shapley interactions, we recommend starting with the [introduction](https://shapiq.readthedocs.io/en/latest/introduction/) and the [basic tutorials](https://shapiq.readthedocs.io/en/latest/notebooks/basics.html).
There are many great resources available to get you started with Shapley values and interactions.
## 💬 Citation
If you use ``shapiq`` and enjoy it, please consider citing our [NeurIPS paper](https://arxiv.org/abs/2410.01649) or consider starring this repository.
```bibtex
@inproceedings{muschalik2024shapiq,
  title     = {shapiq: Shapley Interactions for Machine Learning},
  author    = {Maximilian Muschalik and Hubert Baniecki and Fabian Fumagalli and
               Patrick Kolpaczki and Barbara Hammer and Eyke H\"{u}llermeier},
  booktitle = {The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year      = {2024},
  url       = {https://openreview.net/forum?id=knxGmi6SJi}
}
```
## 📦 Contributing
We welcome any kind of contributions to `shapiq`!
If you are interested in contributing, please check out our [contributing guidelines](https://github.com/mmschlk/shapiq/blob/main/.github/CONTRIBUTING.md).
If you have any questions, feel free to reach out to us.
We are tracking our progress via a [project board](https://github.com/users/mmschlk/projects/4) and the [issues](https://github.com/mmschlk/shapiq/issues) section.
If you find a bug or have a feature request, please open an issue or help us fix it by opening a pull request.
## 📜 License
This project is licensed under the [MIT License](https://github.com/mmschlk/shapiq/blob/main/LICENSE).
## 💰 Funding
This work is openly available under the MIT license.
Some authors acknowledge financial support from the German Research Foundation (DFG) under grant number TRR 318/1 2021 – 438445824.
---
Built with ❤️ by the shapiq team.
## Changelog
### Development
...
### v1.2.0 (2025-01-15)
- adds ``shapiq.TabPFNExplainer`` as a specialized version of the ``shapiq.TabularExplainer`` which offers a streamlined variant of the explainer for the TabPFN model [#301](https://github.com/mmschlk/shapiq/issues/301)
- ``explainer.explain()`` is now handled through a common interface for all explainer classes, which now need to implement an ``explain_function()`` method
- adds the ``baseline_value`` to the ``InteractionValues`` object's value storage for the ``()`` interaction if ``min_order=0`` (the usual default) for all indices other than ``SII`` (which has a different baseline value), such that the values are efficient (sum up to the model prediction) without the awkward handling of the ``baseline_value`` attribute; see the efficiency sketch after this list
- renames ``game_fun`` parameter in ``shapiq.ExactComputer`` to ``game`` [#297](https://github.com/mmschlk/shapiq/issues/297)
- adds a TabPFN example notebook to the documentation
- removes warning when class_index is not provided in explainers [#298](https://github.com/mmschlk/shapiq/issues/298)
- adds the `sentence_plot` function to the `plot` module to visualize the contributions of words to a language model prediction in a sentence-like format
- makes abbreviations in the `plot` module optional [#281](https://github.com/mmschlk/shapiq/issues/281)
- adds the `upset_plot` function to the `plot` module to visualize higher-order interactions [#290](https://github.com/mmschlk/shapiq/issues/290)
- adds support for isolation forest (IsoForest) models to the explainer and the tree explainer [#278](https://github.com/mmschlk/shapiq/issues/278)
- adds support for sub-selection of players in the interaction values data class [#276](https://github.com/mmschlk/shapiq/issues/276) which allows retrieving interaction values for a subset of players
- refactors game theory computations such as `ExactComputer`, `MoebiusConverter`, and `core` into the `game_theory` module to be more modular and flexible [#258](https://github.com/mmschlk/shapiq/issues/258)
- improves quality of the tests by adding many more semantic tests to the different interaction indices and computations [#285](https://github.com/mmschlk/shapiq/pull/285)
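To illustrate the efficiency property from the ``baseline_value`` change above, a minimal sketch reusing `model`, `X`, and `interaction_values` from the quickstart example (the sum is only approximate when the values are estimated under a budget):

```python
import numpy as np

# with the empty coalition () stored, the values (approximately) sum to the prediction
total = np.sum(interaction_values.values)
prediction = model.predict(X[0].reshape(1, -1))[0]
print(f"sum of interaction values: {total:.4f}, model prediction: {prediction:.4f}")
```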
### v1.1.1 (2024-11-13)
#### Improvements and Ease of Use
- adds a `class_index` parameter to `TabularExplainer` and `Explainer` to specify the class index to be explained for classification models [#271](https://github.com/mmschlk/shapiq/issues/271) (renames `class_label` parameter in TreeExplainer to `class_index`)
- adds support for `PyTorch` models to `Explainer` [#272](https://github.com/mmschlk/shapiq/issues/272)
- adds new tests comparing `shapiq` outputs for SVs with values computed with `shap`
- adds new tests for checking `shapiq` explainers with different types of models
#### Bug Fixes
- fixes a bug that `RandomForestClassifier` models were not working with the `TreeExplainer` [#273](https://github.com/mmschlk/shapiq/issues/273)
### v1.1.0 (2024-11-07)
#### New Features and Improvements
- adds computation of the Egalitarian Core (`EC`) and Egalitarian Least-Core (`ELC`) to the `ExactComputer` [#182](https://github.com/mmschlk/shapiq/issues/182)
- adds `waterfall_plot` [#34](https://github.com/mmschlk/shapiq/issues/34) that visualizes the contributions of features to the model prediction
- adds `BaselineImputer` [#107](https://github.com/mmschlk/shapiq/issues/107) which is now responsible for handling the `sample_replacements` parameter. Added a DeprecationWarning for the parameter in `MarginalImputer`, which will be removed in the next release.
- adds `joint_marginal_distribution` parameter to `MarginalImputer` with default value `True` [#261](https://github.com/mmschlk/shapiq/issues/261)
- renames explanation graph to `si_graph`
- `get_n_order` now has optional lower/upper limits for the order (see the sketch after this list)
- computing metrics for benchmarking now tries to resolve non-matching interaction indices and emits a warning instead of raising a ValueError [#179](https://github.com/mmschlk/shapiq/issues/179)
- adds a legend to benchmark plots [#170](https://github.com/mmschlk/shapiq/issues/170)
- refactors the `shapiq.games.benchmark` module into a separate `shapiq.benchmark` module by moving all but the benchmark games into the new module. This closes [#169](https://github.com/mmschlk/shapiq/issues/169) and makes benchmarking more flexible and convenient.
- a `shapiq.Game` can now be called more intuitively with coalition data types (tuples of int or str) and also allows adding `player_names` to the game at initialization [#183](https://github.com/mmschlk/shapiq/issues/183)
- improves tests across the package
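A sketch of the new order limits on `get_n_order` (the parameter names `min_order` and `max_order` are assumptions for illustration, not confirmed API):

```python
# hypothetical usage: keep only interactions of orders 1 through 2
first_and_second = interaction_values.get_n_order(min_order=1, max_order=2)
print(first_and_second)
```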
#### Documentation
- adds a notebook showing how to use custom tree models with the `TreeExplainer` [#66](https://github.com/mmschlk/shapiq/issues/66)
- adds a notebook showing how to use the `shapiq.Game` API to create custom games [#184](https://github.com/mmschlk/shapiq/issues/184)
- adds a notebook showing how to visualize interactions [#252](https://github.com/mmschlk/shapiq/issues/252)
- adds a notebook showing how to compute Shapley values with `shapiq` [#193](https://github.com/mmschlk/shapiq/issues/197)
- adds a notebook for conducting data valuation [#190](https://github.com/mmschlk/shapiq/issues/190)
- adds a notebook introducing the Core and how to compute it with `shapiq` [#191](https://github.com/mmschlk/shapiq/issues/191)
#### Bug Fixes
- fixes a bug with SIs not adding up to the model prediction because of wrong values in the empty set [#264](https://github.com/mmschlk/shapiq/issues/264)
- fixes a bug that `TreeExplainer` did not have the correct baseline_value when using XGBoost models [#250](https://github.com/mmschlk/shapiq/issues/250)
- fixes the force plot not showing and corrects its baseline value
### v1.0.1 (2024-06-05)
- add `max_order=1` to `TabularExplainer` and `TreeExplainer`
- fix `TreeExplainer.explain_X(..., n_jobs=2, random_state=0)`
### v1.0.0 (2024-06-04)
Major release of the `shapiq` Python package including (among others):
- `approximator` module implements over 10 approximators of Shapley values and interaction indices.
- `exact` module implements a computer for over 10 game theoretic concepts like interaction indices or generalized values.
- `games` module implements over 10 application benchmarks for the approximators.
- `explainer` module includes a `TabularExplainer` and `TreeExplainer` for any-order feature interactions of machine learning model predictions.
- `interaction_values` module implements a data class to store and analyze interaction values.
- `plot` module allows visualizing interaction values.
- `datasets` module loads datasets for testing and examples.
Documentation of `shapiq` with tutorials and API reference is available at https://shapiq.readthedocs.io