shapiq

Name: shapiq
Version: 1.3.2
Summary: Shapley Interactions for Machine Learning
Author: Hubert Baniecki, Fabian Fumagalli
Requires-Python: >=3.10
Upload time: 2025-10-14 11:17:18
Keywords: python, machine learning, interpretable machine learning, shap, xai, explainable ai, interaction, shapley interactions, shapley values, feature interaction
            # shapiq: Shapley Interactions for Machine Learning <img src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/logo/logo_shapiq_light.svg" alt="shapiq_logo" align="right" height="250px"/>

[![PyPI version](https://badge.fury.io/py/shapiq.svg)](https://badge.fury.io/py/shapiq)
[![License](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT)
[![Coverage Status](https://coveralls.io/repos/github/mmschlk/shapiq/badge.svg?branch=main)](https://coveralls.io/github/mmschlk/shapiq?branch=main)
[![Tests](https://github.com/mmschlk/shapiq/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/mmschlk/shapiq/actions/workflows/unit-tests.yml)
[![Read the Docs](https://readthedocs.org/projects/shapiq/badge/?version=latest)](https://shapiq.readthedocs.io/en/latest/?badge=latest)

[![PyPI Version](https://img.shields.io/pypi/pyversions/shapiq.svg)](https://pypi.org/project/shapiq)
[![PyPI status](https://img.shields.io/pypi/status/shapiq.svg?color=blue)](https://pypi.org/project/shapiq)
[![PePy](https://static.pepy.tech/badge/shapiq?style=flat-square)](https://pepy.tech/project/shapiq)

[![Code Style](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen)](https://github.com/mmschlk/shapiq/issues)
[![Last Commit](https://img.shields.io/github/last-commit/mmschlk/shapiq)](https://github.com/mmschlk/shapiq/commits/main)

> An interaction may speak more than a thousand main effects.

Shapley Interaction Quantification (`shapiq`) is a Python package for (1) approximating any-order Shapley interactions, (2) benchmarking game-theoretical algorithms for machine learning, and (3) explaining feature interactions of model predictions. `shapiq` extends the well-known [shap](https://github.com/shap/shap) package both for researchers working on game theory in machine learning and for end-users explaining their models. SHAP-IQ extends individual Shapley values by quantifying the **synergy** effect between entities (aka **players** in the jargon of game theory) such as explanatory features, data points, or weak learners in ensemble models. Synergies between players give a more comprehensive view of machine learning models.
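
To make the notion of synergy concrete, here is a minimal, self-contained sketch in plain Python (deliberately not the `shapiq` API): a toy 3-player cooperative game in which players 0 and 1 are only valuable together, so their pairwise interaction carries the whole effect.

```python
from itertools import combinations
from math import factorial

# Toy cooperative game on 3 players: players 0 and 1 create value only
# together (a pure synergy); player 2 contributes a main effect alone.
def v(coalition: frozenset) -> float:
    value = 0.0
    if {0, 1} <= coalition:
        value += 10.0  # synergy of players 0 and 1
    if 2 in coalition:
        value += 3.0   # main effect of player 2
    return value

players = [0, 1, 2]
n = len(players)

def shapley(i: int) -> float:
    """Shapley value of player i: weighted average marginal contribution."""
    total = 0.0
    others = [p for p in players if p != i]
    for size in range(n):
        for s in combinations(others, size):
            s = frozenset(s)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (v(s | {i}) - v(s))
    return total

print([round(shapley(i), 2) for i in players])  # [5.0, 5.0, 3.0]

# Individual Shapley values split the synergy 50/50 and hide it; the
# pairwise discrete derivative of players 0 and 1 makes it explicit:
delta_01 = v(frozenset({0, 1})) - v(frozenset({0})) - v(frozenset({1})) + v(frozenset())
print(delta_01)  # 10.0 -- the synergy that Shapley interactions quantify
```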

## 🛠️ Install
`shapiq` is intended to work with **Python 3.10 and above**.
Installation can be done via `uv`:
```sh
uv add shapiq
```

or via `pip`:

```sh
pip install shapiq
```

## ⭐ Quickstart

You can explain your model with `shapiq.explainer` and visualize Shapley interactions with `shapiq.plot`.
If you are interested in the underlying game theoretic algorithms, then check out the `shapiq.approximator` and `shapiq.games` modules.

### Compute any-order feature interactions

Explain your models with Shapley interactions:
Just load your data and model, and then use a `shapiq.Explainer` to compute Shapley interactions.

```python
import shapiq
# load data
X, y = shapiq.load_california_housing(to_numpy=True)
# train a model
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(X, y)
# set up an explainer with k-SII interaction values up to order 4
explainer = shapiq.TabularExplainer(
    model=model,
    data=X,
    index="k-SII",
    max_order=4
)
# explain the model's prediction for the first sample
interaction_values = explainer.explain(X[0], budget=256)
# analyse interaction values
print(interaction_values)

>> InteractionValues(
>>     index=k-SII, max_order=4, min_order=0, estimated=False,
>>     estimation_budget=256, n_players=8, baseline_value=2.07282292,
>>     Top 10 interactions:
>>         (0,): 1.696969079  # attribution of feature 0
>>         (0, 5): 0.4847876
>>         (0, 1): 0.4494288  # interaction between features 0 & 1
>>         (0, 6): 0.4477677
>>         (1, 5): 0.3750034
>>         (4, 5): 0.3468325
>>         (0, 3, 6): -0.320  # interaction between features 0 & 3 & 6
>>         (2, 3, 6): -0.329
>>         (0, 1, 5): -0.363
>>         (6,): -0.56358890
>> )
```
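
Beyond the printed summary, the `InteractionValues` object can be queried programmatically. As a hedged sketch continuing the example above: `get_n_order_values(k)` (also used for the network plot further below) returns all order-`k` values as a `k`-dimensional numpy array, per the v1.3.0 changelog note on its array shape.

```python
import numpy as np

# all pairwise (order-2) k-SII values as an (n_players, n_players) array
second_order = interaction_values.get_n_order_values(2)
i, j = np.unravel_index(np.abs(second_order).argmax(), second_order.shape)
print(f"strongest pairwise interaction: ({i}, {j}) -> {second_order[i, j]:.4f}")
```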

### Compute Shapley values like you are used to with SHAP

If you are used to working with SHAP, you can compute Shapley values with `shapiq` in the same way:
load your data and model, and then use a `shapiq.Explainer` to compute Shapley values.
If you set the index to ``'SV'``, you get the Shapley values as you know them from SHAP.

```python
import shapiq

data, model = ...  # get your data and model
explainer = shapiq.Explainer(
    model=model,
    data=data,
    index="SV",  # Shapley values
)
shapley_values = explainer.explain(data[0])
shapley_values.plot_force(feature_names=...)
```

Once you have the Shapley values, you can easily compute interaction values as well:

```python
explainer = shapiq.Explainer(
    model=model,
    data=data,
    index="k-SII",  # k-SII interaction values
    max_order=2     # specify any order you want
)
interaction_values = explainer.explain(data[0])
interaction_values.plot_force(feature_names=...)
```

<p align="center">
  <img width="800px" src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/images/motivation_sv_and_si.png" alt="An example Force Plot for the California Housing Dataset with Shapley Interactions">
</p>

### Visualize feature interactions

Network plots are a handy way of visualizing interaction scores up to order 2.
You can see an example of such a plot below.
The nodes represent feature **attributions** and the edges represent the **interactions** between features.
The strength and size of the nodes and edges are proportional to the absolute value of attributions and interactions, respectively.

```python
shapiq.network_plot(
    first_order_values=interaction_values.get_n_order_values(1),
    second_order_values=interaction_values.get_n_order_values(2)
)
# or use
interaction_values.plot_network()
```

The code above produces a plot like the following:

<p align="center">
  <img width="500px" src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/network_example2.png" alt="network_plot_example">
</p>
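
As noted in the v1.3.0 changelog below, the network plot is now a special case of the more general SI graph plot, which can also display interactions beyond order 2. A hedged sketch (the exact `si_graph_plot` signature may differ):

```python
interaction_values.plot_si_graph()
# or, via the function-style API
shapiq.si_graph_plot(interaction_values)
```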

### Explain TabPFN

With ``shapiq`` you can also explain [``TabPFN``](https://github.com/PriorLabs/TabPFN) by making use of the _remove-and-recontextualize_ explanation paradigm implemented in ``shapiq.TabPFNExplainer``.

```python
import tabpfn, shapiq
data, labels = ...                    # load your data
model = tabpfn.TabPFNClassifier()     # get TabPFN
model.fit(data, labels)               # "fit" TabPFN (optional)
explainer = shapiq.TabPFNExplainer(   # setup the explainer
    model=model,
    data=data,
    labels=labels,
    index="FSII"
)
fsii_values = explainer.explain(data[0])  # explain with Faithful Shapley values
fsii_values.plot_force()               # plot the force plot
```

<p align="center">
  <img width="800px" src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/images/fsii_tabpfn_force_plot_example.png" alt="Force Plot of FSII values as derived from the example tabpfn notebook">
</p>

### Use SPEX (SParse EXplainer) <img src="https://raw.githubusercontent.com/mmschlk/shapiq/main/docs/source/_static/images/spex_logo.png" alt="spex_logo" align="right" height="75px"/>
For large-scale use cases, you can also check out the [👓``SPEX``](https://shapiq.readthedocs.io/en/latest/api/shapiq.approximator.sparse.html#shapiq.approximator.sparse.SPEX) approximator.

```python
import shapiq

# load your data and model with a large number of features
data, model, n_features = ...

# use the SPEX approximator directly
approximator = shapiq.SPEX(n=n_features, index="FBII", max_order=2)
fbii_scores = approximator.approximate(budget=2000, game=model.predict)

# or use SPEX with an explainer
explainer = shapiq.Explainer(
    model=model,
    data=data,
    index="FBII",
    max_order=2,
    approximator="spex"  # specify SPEX as approximator
)
explanation = explainer.explain(data[0])
```


## 📖 Documentation with tutorials
The documentation of ``shapiq`` can be found at https://shapiq.readthedocs.io.
If you are new to Shapley values or Shapley interactions, we recommend starting with the [introduction](https://shapiq.readthedocs.io/en/latest/introduction/) and the [basic tutorials](https://shapiq.readthedocs.io/en/latest/notebooks/basics.html).
There are many great resources available to get you started with Shapley values and interactions.

## 💬 Citation

If you use ``shapiq`` and enjoy it, please consider citing our [NeurIPS paper](https://arxiv.org/abs/2410.01649) or starring this repository.

```bibtex
@inproceedings{Muschalik.2024b,
  title     = {shapiq: Shapley Interactions for Machine Learning},
  author    = {Maximilian Muschalik and Hubert Baniecki and Fabian Fumagalli and
               Patrick Kolpaczki and Barbara Hammer and Eyke H\"{u}llermeier},
  booktitle = {The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year      = {2024},
  url       = {https://openreview.net/forum?id=knxGmi6SJi}
}
```

## 📦 Contributing
We welcome any kind of contributions to `shapiq`!
If you are interested in contributing, please check out our [contributing guidelines](https://github.com/mmschlk/shapiq/blob/main/.github/CONTRIBUTING.md).
If you have any questions, feel free to reach out to us.
We are tracking our progress via a [project board](https://github.com/users/mmschlk/projects/4) and the [issues](https://github.com/mmschlk/shapiq/issues) section.
If you find a bug or have a feature request, please open an issue or help us fix it by opening a pull request.

## 📜 License
This project is licensed under the [MIT License](https://github.com/mmschlk/shapiq/blob/main/LICENSE).

## 💰 Funding
This work is openly available under the MIT license.
Some authors acknowledge the financial support by the German Research Foundation (DFG) under grant number TRR 318/1 2021 – 438445824.

---
Built with ❤️ by the shapiq team.

# Changelog

## v1.3.2 (2025-10-14)

### Hotfix
Removes the `overrides` import in the tabular explainer, which is not part of the package dependencies and resulted in an ImportError when importing `shapiq`. [#436](https://github.com/mmschlk/shapiq/issues/436)

## v1.3.1 (2025-07-11)

### New Features
- adds the `shapiq.plot.beeswarm_plot()` function to shapiq. The beeswarm plot was extended to also support interactions of features. Beeswarm plots are useful for visualizing dependencies between feature values. The beeswarm plot was adapted from the SHAP library by sub-dividing the y-axis for each interaction term. [#399](https://github.com/mmschlk/shapiq/issues/399)
- adds JSON support to `InteractionValues` and `Game` objects, allowing for easy serialization and deserialization of interaction values and game objects [#412](https://github.com/mmschlk/shapiq/pull/412). Usage of `pickle` is now deprecated. This change allows us to revamp the data structures in the future and offers more flexibility.

### Testing, Code-Quality and Documentation
- adds a testing suite for deprecations in `tests/tests_deprecations/`, which allows for easier deprecation management and tracking of deprecated features [#412](https://github.com/mmschlk/shapiq/pull/412)

### Deprecated
- The `Game(path_to_values=...)` constructor is now deprecated and will be removed in version 1.4.0. Use `Game.load(...)` or `Game().load_values(...)` instead.
- Saving and loading `InteractionValues` via `InteractionValues.save(..., as_pickle=True)` and `InteractionValues.save(..., as_npz=True)` is now deprecated and will be removed in version 1.4.0. Use `InteractionValues.save(...)` to save as JSON (see the sketch below).
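
A hedged migration sketch for the deprecations above; `InteractionValues.load(...)` is an assumed counterpart of the documented `save(...)`:

```python
from shapiq import Game, InteractionValues

# interaction values: JSON replaces pickle/npz (v1.3.1+)
interaction_values.save("values.json")          # replaces as_pickle=True / as_npz=True
loaded = InteractionValues.load("values.json")  # assumed counterpart loader

# games: instead of the deprecated Game(path_to_values=...), use
game = Game.load("game.json")
# or
game = Game()
game.load_values("values.json")
```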

## v1.3.0 (2025-06-17)

### Highlights
- `shapiq.SPEX` (Sparse Exact) approximator for efficient computation of sparse interaction values for really large models and games. Paper: [SPEX: Scaling Feature Interaction Explanations for LLMs](https://arxiv.org/abs/2502.13870)
- `shapiq.AgnosticExplainer` a generic explainer that works for any value function or `shapiq.Game` object, allowing for more flexibility in explainers.
- prettier graph-based plots via `shapiq.si_graph_plot()` and `shapiq.network_plot()`, which now use the same backend for more flexibility and easier maintenance.

### New Features
- adds the SPEX (Sparse Exact) module in `approximator.sparse` for efficient computation of sparse interaction values [#379](https://github.com/mmschlk/shapiq/pull/379)
- adds `shapiq.AgnosticExplainer` which is a generic explainer that can be used for any value function or `shapiq.Game` object. This allows for more flexibility in the explainers. [#100](https://github.com/mmschlk/shapiq/issues/100), [#395](https://github.com/mmschlk/shapiq/pull/395)
- changes `budget` to be a mandatory parameter given to the `TabularExplainer.explain()` method [#355](https://github.com/mmschlk/shapiq/pull/356)
- changes the logic of `InteractionValues.get_n_order()` to be callable **either** with the `order: int` parameter (optionally combined with `min_order: int` and `max_order: int`) **or** with the min/max order parameters alone (see the sketch after this list) [#372](https://github.com/mmschlk/shapiq/pull/372)
- renamed `min_percentage` parameter in the force plot to `contribution_threshold` to better reflect its purpose [#391](https://github.com/mmschlk/shapiq/pull/391)
- adds a ``verbose`` parameter to the ``Explainer``'s ``explain_X()`` method to control whether a progress bar is shown; it defaults to ``False``. [#391](https://github.com/mmschlk/shapiq/pull/391)
- made the `InteractionValues.get_n_order()` and `InteractionValues.get_n_order_values()` functions more efficient by iterating over the stored interactions instead of the powerset of all potential interactions, which previously made them unusable for higher player counts (models with many features, and results obtained from `TreeExplainer`). Note that this change does not help `get_n_order_values()` much, as it still needs to create a numpy array of shape `n_players` times `order` [#372](https://github.com/mmschlk/shapiq/pull/372)
- streamlined the ``network_plot()`` plot function to use ``si_graph_plot()`` as its backend function. This allows for more flexibility in the plot function and makes it easier to use the same code for different purposes. In addition, the ``si_graph_plot`` was modified to make plotting easier and allow for more flexibility with new parameters. [#349](https://github.com/mmschlk/shapiq/pull/349)
- adds `Game.compute()` method to the `shapiq.Game` class to compute game values without changing the state of the game object. The compute method also introduces a `shapiq.utils.sets.generate_interaction_lookup_from_coalitions()` utility method which creates an interaction lookup dict from an array of coalitions. [#397](https://github.com/mmschlk/shapiq/pull/397)
- streamlines the creation of network plots and graph plots which now uses the same backend. The network plot via `shapiq.network_plot()` or `InteractionValues.plot_network()` is now a special case of the `shapiq.si_graph_plot()` and `InteractionValues.plot_si_graph()`. This allows to create more beautiful plots and easier maintenance in the future. [#349](https://github.com/mmschlk/shapiq/pull/349)
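
A hedged sketch of the two `get_n_order()` call styles described above, for a hypothetical `InteractionValues` object `iv`:

```python
pairs_only = iv.get_n_order(2)                           # exactly order 2
up_to_pairs = iv.get_n_order(min_order=1, max_order=2)   # orders 1 and 2
```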

### Testing, Code-Quality and Documentation
- activates ``"ALL"`` rules in the ``ruff-format`` configuration to enforce stricter code quality checks and addresses around 500 (not automatically solvable) issues in the code base. [#391](https://github.com/mmschlk/shapiq/pull/391)
- improved the testing environment by adding a new fixture module containing mock `InteractionValues` objects to be used in the tests. This allows for more efficient and cleaner tests, as well as easier debugging of the tests  [#372](https://github.com/mmschlk/shapiq/pull/372)
- removed check and error message if the ``index`` parameter is not in the list of available indices in the ``TabularExplainer`` since the type hints were replaced by Literals [#391](https://github.com/mmschlk/shapiq/pull/391)
- removed multiple instances where ``shapiq`` tests if some approximators/explainers can be instantiated with certain indices or not in favor of using Literals in the ``__init__`` method of the approximator classes. This allows for better type hinting and IDE support, as well as cleaner code. [#391](https://github.com/mmschlk/shapiq/pull/391)
- Added documentation for all public modules, classes, and functions in the code base to improve the documentation quality and make it easier to understand how to use the package. [#391](https://github.com/mmschlk/shapiq/pull/391)
- suppresses a ``RuntimeWarning`` in the ``Regression`` approximators' ``solve_regression()`` method when the solver is not able to find good interim solutions for the regression problem.
- refactors the tests into ``tests_unit/`` and ``tests_integration/`` to better separate unit tests from integration tests. [#395](https://github.com/mmschlk/shapiq/pull/395)
- adds new integration tests in ``tests/tests_integration/test_explainer_california_housing`` which compare the different explainers against ground-truth interaction values computed by ``shapiq.ExactComputer`` and interaction values stored on [disk](https://github.com/mmschlk/shapiq/tree/main/tests/data/interaction_values/california_housing) as a form of regression test. This test should help find bugs in the future when the approximators, explainers, or exact computation are changed. [#395](https://github.com/mmschlk/shapiq/pull/395)

### Bug Fixes
- fixed a bug in the `shapiq.waterfall_plot` function that caused the plot to not display correctly, resulting in cut-off y-ticks. Additionally, the file was renamed from `watefall.py` to `waterfall.py` to match the function name [#377](https://github.com/mmschlk/shapiq/pull/377)
- fixes a bug with `TabPFNExplainer` where the model could not be used for predictions after it was explained. This was due to the model being fitted on a subset of features, which caused inconsistencies in the model's predictions after explanation. The fix ensures that after each call to `TabPFNImputer.value_function`, the TabPFN model is fitted on the whole dataset (without omitting features), so the original model can be used for predictions after it has been explained. [#396](https://github.com/mmschlk/shapiq/issues/396)
- fixed a bug in computing `BII` or `BV` indices with `shapiq.approximator.MonteCarlo` approximators (affecting `SHAP-IQ`, `SVARM` and `SVARM-IQ`). All orders of BII should now be computed correctly. [#395](https://github.com/mmschlk/shapiq/pull/395)

## v1.2.3 (2025-03-24)
- substantially improves the runtime of all `Regression` approximators by a) a faster pre-computation of the regression matrices and b) a faster computation of the weighted least squares regression [#340](https://github.com/mmschlk/shapiq/issues/340)
- removes `sample_replacements` parameter from `MarginalImputer` and removes the DeprecationWarning for it
- adds a trivial computation to `TreeSHAP-IQ` for trees that use only one feature in the tree (this works for decision stumps or trees splitting on only one feature multiple times). In such trees, the computation is trivial as the whole effect of $\nu(N) - \nu(\emptyset)$ is all on the main effect of the single feature and there is no interaction effect. This expands on the fix in v1.2.1 [#286](https://github.com/mmschlk/shapiq/issues/286).
- fixes a bug with XGBoost feature names where trees that did not contain all features would lead `TreeExplainer` to fail
- fixes a bug with `stacked_bar_plot` where the higher-order interactions were inflated by the lower-order interactions, thus wrongly showing the higher-order interactions as larger than they are
- fixes a bug where `InteractionValues.get_subset()` returns a faulty `coalition_lookup` dictionary pointing to indices outside the subset of players (a usage sketch follows this list) [#336](https://github.com/mmschlk/shapiq/issues/336)
- updates the default value of `TreeExplainer`'s `min_order` parameter from 1 to 0 to include the baseline value in the interaction values by default
- adds the `RegressionFBII` approximator to estimate Faithful Banzhaf interactions via least squares regression [#333](https://github.com/mmschlk/shapiq/pull/333). Additionally, FBII support was introduced in TabularExplainer and MonteCarlo-Approximator.
- adds a `RandomGame` class as part of `shapiq.games.benchmark` which always returns a random vector of integers between 0 and 100.
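
A hedged sketch of the player sub-selection touched by the `get_subset()` fix above (the parameter name is assumed):

```python
# restrict an explanation to players 0, 1, and 5; after #336 the returned
# coalition_lookup only points to indices inside this subset
subset_values = interaction_values.get_subset([0, 1, 5])
```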

## v1.2.2 (2025-03-11)
- changes python support to 3.10-3.13 [#318](https://github.com/mmschlk/shapiq/pull/318)
- fixes a bug that prohibited importing shapiq in environments without write access [#326](https://github.com/mmschlk/shapiq/issues/326)
- adds `ExtraTreeRegressors` to supported models [#309](https://github.com/mmschlk/shapiq/pull/309)

## v1.2.1 (2025-02-17)
- fixes bugs regarding plotting [#315](https://github.com/mmschlk/shapiq/issues/315) and [#316](https://github.com/mmschlk/shapiq/issues/316)
- fixes a bug with TreeExplainer and Trees that consist of only one feature [#286](https://github.com/mmschlk/shapiq/issues/286)
- fixes SV init with explainer for permutation, svarm, kernelshap, and unbiased kernelshap [#319](https://github.com/mmschlk/shapiq/issues/319)
- adds a progress bar to `explain_X()` [#324](https://github.com/mmschlk/shapiq/issues/324)

## v1.2.0 (2025-01-15)
- adds ``shapiq.TabPFNExplainer`` as a specialized version of the ``shapiq.TabularExplainer`` which offers a streamlined variant of the explainer for the TabPFN model [#301](https://github.com/mmschlk/shapiq/issues/301)
- handles ``explainer.explain()`` now through a common interface for all explainer classes, which now need to implement an ``explain_function()`` method
- adds the baseline_value into the InteractionValues object's value storage for the ``()`` interaction if ``min_order=0`` (the usual default) for all indices that are not ``SII`` (SII has a different baseline value), such that the values are efficient (sum up to the model prediction) without the awkward handling of the baseline_value attribute (see the sketch after this list)
- renames ``game_fun`` parameter in ``shapiq.ExactComputer`` to ``game`` [#297](https://github.com/mmschlk/shapiq/issues/297)
- adds a TabPFN example notebook to the documentation
- removes warning when class_index is not provided in explainers [#298](https://github.com/mmschlk/shapiq/issues/298)
- adds the `sentence_plot` function to the `plot` module to visualize the contributions of words to a language model prediction in a sentence-like format
- makes abbreviations in the `plot` module optional [#281](https://github.com/mmschlk/shapiq/issues/281)
- adds the `upset_plot` function to the `plot` module to visualize higher-order interactions [#290](https://github.com/mmschlk/shapiq/issues/290)
- adds support for IsoForest models to explainer and tree explainer [#278](https://github.com/mmschlk/shapiq/issues/278)
- adds support for sub-selection of players in the interaction values data class [#276](https://github.com/mmschlk/shapiq/issues/276) which allows retrieving interaction values for a subset of players
- refactors game theory computations like `ExactComputer`, `MoebiusConverter`, `core`, among others to be more modular and flexible into the `game_theory` module [#258](https://github.com/mmschlk/shapiq/issues/258)
- improves quality of the tests by adding many more semantic tests to the different interaction indices and computations [#285](https://github.com/mmschlk/shapiq/pull/285)
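
A hedged sketch of the efficiency property described above, for a fitted `explainer`, `model`, and sample `x` as in the Quickstart (attribute names assumed; holds for indices other than ``SII``):

```python
import numpy as np

iv = explainer.explain(x)  # min_order=0, so the () interaction stores the baseline
# all stored values, including the baseline, sum to the model prediction
assert np.isclose(iv.values.sum(), model.predict(x.reshape(1, -1))[0])
```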

## v1.1.1 (2024-11-13)

### Improvements and Ease of Use
- adds a `class_index` parameter to `TabularExplainer` and `Explainer` to specify the class index to be explained for classification models [#271](https://github.com/mmschlk/shapiq/issues/271) (renames `class_label` parameter in TreeExplainer to `class_index`)
- adds support for `PyTorch` models to `Explainer` [#272](https://github.com/mmschlk/shapiq/issues/272)
- adds new tests comparing `shapiq` outputs for SVs with values computed with `shap`
- adds new tests for checking `shapiq` explainers with different types of models

### Bug Fixes
- fixes a bug where `RandomForestClassifier` models were not working with the `TreeExplainer` [#273](https://github.com/mmschlk/shapiq/issues/273)

## v1.1.0 (2024-11-07)

### New Features and Improvements
- adds computation of the Egalitarian Core (`EC`) and Egalitarian Least-Core (`ELC`) to the `ExactComputer` [#182](https://github.com/mmschlk/shapiq/issues/182)
- adds `waterfall_plot` [#34](https://github.com/mmschlk/shapiq/issues/34) that visualizes the contributions of features to the model prediction
- adds `BaselineImputer` [#107](https://github.com/mmschlk/shapiq/issues/107) which is now responsible for handling the `sample_replacements` parameter. Added a DeprecationWarning for the parameter in `MarginalImputer`, which will be removed in the next release.
- adds `joint_marginal_distribution` parameter to `MarginalImputer` with default value `True` [#261](https://github.com/mmschlk/shapiq/issues/261)
- renames explanation graph to `si_graph`
- `get_n_order` now has optional lower/upper limits for the order
- computing metrics for benchmarking now tries to resolve non-matching interaction indices and throws a warning instead of a ValueError [#179](https://github.com/mmschlk/shapiq/issues/179)
- adds a legend to benchmark plots [#170](https://github.com/mmschlk/shapiq/issues/170)
- refactored the `shapiq.games.benchmark` module into a separate `shapiq.benchmark` module by moving all but the benchmark games into the new module. This closes [#169](https://github.com/mmschlk/shapiq/issues/169) and makes benchmarking more flexible and convenient.
- a `shapiq.Game` can now be called more intuitively with coalition data types (tuples of int or str) and also allows adding `player_names` to the game at initialization (see the sketch after this list) [#183](https://github.com/mmschlk/shapiq/issues/183)
- improve tests across the package
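
A hedged sketch of the more intuitive `shapiq.Game` call from #183, where ``MyGame`` stands for any hypothetical `shapiq.Game` subclass:

```python
game = MyGame(player_names=["age", "income", "height"])  # any shapiq.Game subclass
v_pair = game(("age", "income"))  # call with a coalition of player names
v_pair = game((0, 1))             # ...or with player indices
```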

### Documentation
- adds a notebook showing how to use custom tree models with the `TreeExplainer` [#66](https://github.com/mmschlk/shapiq/issues/66)
- adds a notebook showing how to use the `shapiq.Game` API to create custom games [#184](https://github.com/mmschlk/shapiq/issues/184)
- adds a notebook showing how to visualize interactions [#252](https://github.com/mmschlk/shapiq/issues/252)
- adds a notebook showing how to compute Shapley values with `shapiq` [#193](https://github.com/mmschlk/shapiq/issues/197)
- adds a notebook for conducting data valuation [#190](https://github.com/mmschlk/shapiq/issues/190)
- adds a notebook introducing the Core and showing how to compute it with `shapiq` [#191](https://github.com/mmschlk/shapiq/issues/191)

### Bug Fixes
- fixes a bug with SIs not adding up to the model prediction because of wrong values in the empty set [#264](https://github.com/mmschlk/shapiq/issues/264)
- fixes a bug that `TreeExplainer` did not have the correct baseline_value when using XGBoost models [#250](https://github.com/mmschlk/shapiq/issues/250)
- fixes the force plot not showing and fixes its baseline value

## v1.0.1 (2024-06-05)

- add `max_order=1` to `TabularExplainer` and `TreeExplainer`
- fix `TreeExplainer.explain_X(..., n_jobs=2, random_state=0)`

## v1.0.0 (2024-06-04)

Major release of the `shapiq` Python package including (among others):

- `approximator` module implements over 10 approximators of Shapley values and interaction indices.
- `exact` module implements a computer for over 10 game theoretic concepts like interaction indices or generalized values.
- `games` module implements over 10 application benchmarks for the approximators.
- `explainer` module includes a `TabularExplainer` and `TreeExplainer` for any-order feature interactions of machine learning model predictions.
- `interaction_values` module implements a data class to store and analyze interaction values.
- `plot` module allows visualizing interaction values.
- `datasets` module loads datasets for testing and examples.

Documentation of `shapiq` with tutorials and API reference is available at https://shapiq.readthedocs.io

            
