auto-shap

Name: auto-shap
Version: 0.3.2
Summary: Calculate SHAP values in parallel and automatically detect what explainer to use
Home page: https://github.com/micahmelling/auto-shap
Author: Micah Melling
Upload time: 2024-01-19 00:31:39
Requires Python: >=3.6
License: MIT
Requirements: shap, pandas, numba, numpy, matplotlib, pre-commit, pytest, scikit-learn, xgboost, catboost, lightgbm
# auto-shap
The auto-shap library is your best friend when calculating SHAP values!

[SHAP](https://christophm.github.io/interpretable-ml-book/shap.html) is a
state-of-the-art technique for explaining model predictions.
Model explanation can be valuable in many regards. For one, understanding
how a model devised a prediction can engender trust. Conversely, it could
inform us if our model is using features in a nonsensical or unrealistic way,
potentially helping us to catch leakage issues. Likewise, feature importance
scores can be useful for external presentations. For further details on SHAP
values and their underlying mathematical properties, see the hyperlink at the
beginning of this paragraph.

The Python [SHAP library](https://shap.readthedocs.io/en/latest/index.html)
is often the go-to source for computing SHAP values. It's handy and can
explain virtually any model we would like. However, we must be aware of the
following when using the library.

* The correct type of explainer class must be declared.
* Our code for extracting SHAP values will be slightly different when we
have a regression model compared to when we have a classifier.
* SHAP cannot natively handle scikit-learn's CalibratedClassifierCV, voting models,
or stacking models.
* Boosting models often have distinct behavior when it comes to SHAP values. For
boosting models, auto-shap applies the SHAP settings needed to return probabilities.

Likewise, the native SHAP library does not take advantage of multiprocessing.
The auto-shap library will run SHAP calculations in parallel to speed them
up when possible! When we are using a Tree or Linear Explainer, we can calculate our
SHAP values in parallel without issue. The results will be the same compared
to when we run our calculations on a single core. Such situations are heavily
tested in tests/tests.py in the
[GitHub Repo](https://github.com/micahmelling/auto-shap). However, the situation
is slightly different when we are using the KernelExplainer. The KernelExplainer
is not deterministic, even when we are not using parallel processing. In fact,
especially on small datasets, if we re-run the KernelExplainer back-to-back on the
same data with the same model, we won't get the exact same feature-level attributions,
though the total attribution will stay the same (which is tested in tests/tests.py).
The foregoing points can be substantiated by looking at the
[SHAP documentation](https://shap-lrjball.readthedocs.io/en/latest/generated/shap.KernelExplainer.html#shap.KernelExplainer.shap_values).
[This article](https://edden-gerber.github.io/shapley-part-2/) discusses the deterministic
nature of certain SHAP calculations.
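This property — per-feature attributions shifting between runs while each row's total attribution holds steady — can be illustrated with two simulated runs (the numbers below are made up for illustration):

```python
import numpy as np

# two hypothetical KernelExplainer runs on the same two rows and three features
run_1 = np.array([[0.30, 0.10, -0.15],
                  [0.05, -0.20, 0.40]])
run_2 = np.array([[0.28, 0.12, -0.15],
                  [0.07, -0.22, 0.40]])

# the per-feature attributions differ slightly between runs...
assert not np.allclose(run_1, run_2)
# ...but each row's total attribution is the same
assert np.allclose(run_1.sum(axis=1), run_2.sum(axis=1))
```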

In auto-shap, we still employ multiprocessing when using the KernelExplainer: the
results would not be perfectly deterministic even on a single core, and
multiprocessing gives a nice speed improvement. To turn off multiprocessing in
this case, set n_jobs=1 when calling generate_shap_values(). See more details on
this function call below.

Additionally, there is a pickle error when using multiprocessing with a scikit-learn Voting or
Stacking model with SHAP. Therefore, no multiprocessing is used in such cases.

At a high level, the library will automatically detect the type of model
that has been trained (regressor vs. classifier, boosting model vs. other
model, etc.) and apply the correct handling. If your model is not accurately
identified by the library, it's easy to specify how it should be handled.
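As a rough illustration of the kind of detection involved — not auto-shap's actual implementation — model qualities can be guessed from an estimator's class name; the helper `detect_model_type` below is hypothetical:

```python
def detect_model_type(model) -> dict:
    """Hypothetical sketch of auto-shap-style model detection based on the
    estimator's class name; the real library's logic may differ."""
    name = type(model).__name__
    boosting = any(s in name for s in ("Boost", "XGB", "LGBM", "CatBoost"))
    return {
        "regression_model": name.endswith("Regressor")
                            or name in {"LinearRegression", "Lasso", "Ridge", "ElasticNet"},
        "tree_model": boosting or any(s in name for s in ("Tree", "Forest")),
        "linear_model": any(s in name for s in ("Linear", "Logistic", "Ridge",
                                                "Lasso", "ElasticNet")),
        "boosting_model": boosting,
    }
```

For example, an object of a class named XGBRegressor would be flagged as a boosted tree regressor, while a LogisticRegression instance would be flagged as a linear classifier.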

## Installation
The easiest way to install the library is with pip.

```shell
$ pip3 install auto-shap
```
## Quick Example
Once installed, SHAP values can be calculated as follows.

```python
$ python3
>>> from auto_shap.auto_shap import generate_shap_values
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> x, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> model = ExtraTreesClassifier()
>>> model.fit(x, y)
>>> shap_values_df, shap_expected_value, global_shap_df = generate_shap_values(model, x)
```

There you have it!
* A dataframe of SHAP values for every row in the x predictors dataframe.
* The expected value of the SHAP explainer (in the above, the average
predicted positive probability).
* A dataframe of the global SHAP values covering every feature in the x dataframe.
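For intuition, the global values follow the common SHAP convention of averaging absolute per-row attributions for each feature. The numbers, feature names, and column names below are illustrative; auto-shap's exact output format may differ:

```python
import numpy as np
import pandas as pd

# illustrative local SHAP values: rows are observations, columns are features
shap_values = np.array([[0.20, -0.50, 0.10],
                        [-0.30, 0.40, 0.00]])
features = ["feature_a", "feature_b", "feature_c"]

# global importance: mean absolute SHAP value per feature
global_shap_df = (pd.DataFrame({"feature": features,
                                "shap_value": np.abs(shap_values).mean(axis=0)})
                  .sort_values("shap_value", ascending=False))
```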

What's more, you can change to a completely new model without changing any
of the auto-shap code.

```python
$ python3
>>> from auto_shap.auto_shap import generate_shap_values
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> x, y = load_diabetes(return_X_y=True, as_frame=True)
>>> model = GradientBoostingRegressor()
>>> model.fit(x, y)
>>> shap_values_df, shap_expected_value, global_shap_df = generate_shap_values(model, x)
```
auto-shap detected this was a boosted regressor and handled such a case
appropriately.

## Saving Output
The library also provides a helper function for saving output and plots to a
local directory.

```python
$ python3
>>> from auto_shap.auto_shap import produce_shap_values_and_summary_plots
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> x, y = load_diabetes(return_X_y=True, as_frame=True)
>>> model = GradientBoostingRegressor()
>>> model.fit(x, y)
>>> produce_shap_values_and_summary_plots(model=model, x_df=x, save_path='shap_output')
```
The above code will save three files into a "files" subdirectory in the specified
save_path directory.
* A csv of SHAP values for every row in x_df.
* A txt file containing the expected value of the SHAP explainer.
* A csv of the global SHAP values covering every feature in x_df.

Likewise, two plots will be saved into a "plots" subdirectory.
* A bar plot of the top global SHAP values.
* A dot plot of SHAP values to show the influence of features across observations
in x_df.

## Multiprocessing Support
By default, the maximum number of cores available is used to calculate SHAP
values in parallel. To manually set the number of cores to use, you can do
the following.

```python
>>> generate_shap_values(model, x_df, n_jobs=4)
```

For small datasets, multiprocessing may not add much in terms of performance and
could even slow down computation times due to the overhead of spinning up a
multiprocessing pool. To turn off multiprocessing, set n_jobs=1.
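Conceptually, the parallel computation splits x_df into one chunk per worker, explains each chunk, and concatenates the results in row order. A minimal sketch of that chunking, with a hypothetical stand-in for the per-chunk SHAP call, looks like:

```python
import numpy as np
import pandas as pd

def explain_chunk(chunk: pd.DataFrame) -> pd.DataFrame:
    # hypothetical stand-in for the per-chunk SHAP computation a worker runs
    return chunk * 0.0  # placeholder attributions with the same shape

x_df = pd.DataFrame(np.arange(12).reshape(6, 2), columns=["f1", "f2"])
n_jobs = 3

# split row positions into one chunk per worker
idx_chunks = np.array_split(np.arange(len(x_df)), n_jobs)
# in practice each chunk would be explained in a separate process
results = [explain_chunk(x_df.iloc[idx]) for idx in idx_chunks]
shap_df = pd.concat(results)  # reassembled in the original row order
```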

## Overriding Auto-Detection
Using generate_shap_values() or produce_shap_values_and_summary_plots() will
leverage auto-detection of certain model characteristics. Those characteristics,
each controlled by a Boolean argument, are:
* linear_model
* tree_model
* boosting_model
* calibrated_model
* regression_model
* voting_or_stacking_model

Though auto-shap will natively handle most common models, it is not yet
tuned to handle every possible type of model. Therefore, in some cases, you may
have to manually set one or more of the above Booleans in the function calls.
At present and at minimum, auto-shap will work with the following models.
* XGBClassifier
* XGBRegressor
* CatBoostClassifier
* CatBoostRegressor
* LGBMClassifier
* LGBMRegressor
* ExtraTreesClassifier
* ExtraTreesRegressor
* GradientBoostingClassifier
* GradientBoostingRegressor
* RandomForestClassifier
* RandomForestRegressor
* ElasticNet
* Lasso
* LinearRegression
* LogisticRegression
* Ridge
* DecisionTreeClassifier
* DecisionTreeRegressor
* VotingClassifier
* VotingRegressor
* StackingClassifier
* StackingRegressor

For models whose qualities it cannot detect, auto-shap will fall back to the
Kernel Explainer, which is model agnostic.
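A hedged sketch of the selection logic — the function `choose_explainer` below is hypothetical, but it mirrors the documented behavior: a detected tree or linear model gets its specialized explainer, while voting/stacking models and undetected models fall back to the model-agnostic Kernel Explainer:

```python
def choose_explainer(tree_model: bool = False, linear_model: bool = False,
                     voting_or_stacking_model: bool = False,
                     use_kernel: bool = False) -> str:
    """Hypothetical sketch of explainer selection; auto-shap's real
    dispatch is more involved."""
    if use_kernel or voting_or_stacking_model:
        return "kernel"
    if tree_model:
        return "tree"
    if linear_model:
        return "linear"
    return "kernel"  # model-agnostic fallback when nothing is detected
```

Passing, say, tree_model=True by hand here corresponds to manually overriding the auto-detected Booleans in generate_shap_values().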

## CalibratedClassifierCV
The auto-shap library provides support for scikit-learn's CalibratedClassifierCV.
This implementation will extract the SHAP values for every base estimator in the
calibration ensemble. The SHAP values will then be averaged. For details on the
CalibratedClassifierCV, please go to the
[documentation](https://scikit-learn.org/stable/modules/generated/sklearn.calibration.CalibratedClassifierCV.html).
Since we extract SHAP values only for the base estimators, not the full
(estimator, calibrator) pairs, some detail is lost. Therefore, while these SHAP
values will still be instructive, they will not be perfectly precise. For more
precision, we would need to use the KernelExplainer. The main benefit of
averaging the results of the base estimators is computational, as the
KernelExplainer can be quite slow.
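The averaging step can be sketched with numpy; the arrays below are synthetic, standing in for SHAP values computed for each base estimator in the model's calibrated_classifiers_ ensemble:

```python
import numpy as np

# synthetic SHAP values from three base estimators in a calibration ensemble:
# shape (n_estimators, n_rows, n_features)
per_estimator_shap = np.array([
    [[0.10, -0.20], [0.30, 0.00]],
    [[0.14, -0.18], [0.26, 0.02]],
    [[0.12, -0.22], [0.28, -0.02]],
])

# attributions are averaged across the ensemble members
averaged_shap = per_estimator_shap.mean(axis=0)
```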

To force the Kernel Explainer, one can do the following; more or less, this will
ignore the auto-detected model qualities.

```python
>>> generate_shap_values(model, x_df, use_kernel=True)
```

Since the Kernel Explainer can be computationally expensive, x_df can be subsampled
via either the sample_size or the k parameter. The former takes a random sample,
and the latter takes k-means-summarized samples.
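As a sketch, sample_size simply explains a random subset of rows, while k would instead have SHAP summarize the background data with k-means (via shap.kmeans). The random-sample path might look like this, with an illustrative frame:

```python
import numpy as np
import pandas as pd

# illustrative predictor frame
rng = np.random.default_rng(0)
x_df = pd.DataFrame(rng.normal(size=(500, 4)), columns=["f1", "f2", "f3", "f4"])

# sample_size=100: explain only a random subset of rows
x_sampled = x_df.sample(n=100, random_state=0)
# k would instead pass k-means-summarized background data to the explainer
```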

## Voting and Stacking Models
If auto-shap detects a voting or stacking model, it will automatically use the
Kernel Explainer. Kernel SHAP is computationally expensive, so you may want to
subsample x_df via the previously discussed arguments.

Additionally, there is a pickle error when using
multiprocessing with a scikit-learn Voting or Stacking model with SHAP. Therefore, no
multiprocessing is used in such cases, which is more motivation for subsetting
x_df.

## Other Potentially Useful Functionality
The generate_shap_values function relies on a few underlying functions that can
be accessed directly; their signatures are as follows.

```python
get_shap_expected_value(explainer: callable, boosting_model: bool, linear_model: bool) -> float

generate_shap_global_values(shap_values: np.array, x_df: pd.DataFrame) -> pd.DataFrame

produce_shap_output_with_agnostic_explainer(model: callable, x_df: pd.DataFrame, boosting_model: bool,
                                            regression_model: bool, linear_model: bool,
                                            return_df: bool = True, n_jobs: int = None,
                                            sample_size: int = None, k: int = None) -> tuple

produce_shap_output_with_tree_explainer(model: callable, x_df: pd.DataFrame, boosting_model: bool,
                                        regression_model: bool, linear_model: bool,
                                        return_df: bool = True, n_jobs: int = None) -> tuple

produce_shap_output_with_linear_explainer(model: callable, x_df: pd.DataFrame, regression_model: bool,
                                          linear_model: bool, return_df: bool = True, n_jobs: int = None) -> tuple

produce_shap_output_for_calibrated_classifier(model: callable, x_df: pd.DataFrame, boosting_model: bool,
                                              linear_model: bool, n_jobs: int = None) -> tuple

produce_raw_shap_values(model: callable, x_df: pd.DataFrame, use_agnostic: bool, linear_model: bool,
                        tree_model: bool, calibrated_model: bool, boosting_model: bool, regression_model: bool,
                        voting_or_stacking_model: bool = False, n_jobs: int = None, sample_size: int = None,
                        k: int = None) -> tuple

generate_shap_summary_plot(shap_values: np.array, x_df: pd.DataFrame, plot_type: str, save_path: str,
                           file_prefix: str = None)
```

## The End
Enjoy explaining your models with auto-shap! Feel free to report any issues.



            
