# SHAP App
[![PyPI Package latest release](https://img.shields.io/pypi/v/shap-app.svg?style=flat-square)](https://pypi.org/project/shap-app/)
[![Supported versions](https://img.shields.io/pypi/pyversions/shap-app.svg?style=flat-square)](https://pypi.org/project/shap-app/)
[![code style black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

## Introduction

This app demonstrates how to use the
[SHAP](https://shap.readthedocs.io/en/latest/index.html)
library to explain models employing the popular `streamlit`
framework for the application frontend. The application is
deployed and can be accessed at
[https://shap-app.streamlit.app/](https://shap-app.streamlit.app/).



<div style="display: flex; justify-content: space-between; align-items: center;">
  <div style="width: 45%;">
    <p>SHAP (SHapley Additive exPlanations) is a unified measure of feature
    importance that originates from game theory. It connects optimal credit
    allocation with local explanations using the classic Shapley values from
    game theory and their related extensions.

In the context of machine learning, SHAP values provide a measure for
    the contribution of each feature to the prediction for individual samples.
    They can help to interpret the output of any machine learning model.
    Essentially, Shapley values answer the question: What is the relative
    contribution of each feature value to the prediction?</p>

  </div>
  <div style="width: 45%;">
    <img src="assets/shap_header.svg" alt="SHAP" style="max-width: 100%; height: auto;" />
  </div>
</div>
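
For reference, the classic Shapley value of feature $i$ is its average marginal contribution over all subsets $S$ of the remaining features $F \setminus \{i\}$:

$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]$$

To make the question above concrete, here is a minimal sketch of computing SHAP values for a scikit-learn model. It is not taken from this app's codebase; the synthetic dataset and random forest are illustrative assumptions:

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and model -- shap.Explainer works with any fitted model.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Compute per-feature, per-sample attributions.
explainer = shap.Explainer(model)
shap_values = explainer(X[:10])

# For sample 0: each entry of .values is how far that feature's value
# pushed the prediction away from the expected (base) value.
print(shap_values[0].values)
print(shap_values[0].base_values)
```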


## Installation

The package can be installed using `pip`:

```bash
pip install shap-app
```


## Development

### Clone Repository

```bash
git clone git@github.com:RodrigoGonzalez/streamlit-shap-app.git
```

### Poetry Installation

This project was created using Poetry. To install Poetry, run the following:

```bash
curl -sSL https://install.python-poetry.org | python -
```

On macOS, you can also install Poetry using Homebrew:

```bash
brew install poetry
```

**Verify Installation**: You can verify the installation by running:

```bash
poetry --version
```

### Project Dependencies

To install the dependencies, run the
following command:

```bash
make local
```

This generates a virtual environment and installs the
dependencies listed in the `pyproject.toml` file.

### Install Dependencies Using Pip

I have also included a `setup.py` file and a pip
requirements file for those who prefer to use pip. To
install the dependencies, run the following command:

```bash
pip install -r pip/requirements.txt
```

## Running the Application

To run the application, simply type:

```bash
shap-app
```

To see all the options (currently limited to running
the application), type:

```bash
shap-app --help
```

## The Interpretation of Machine Learning Models aka Explainable AI
A Vital Component for Ensuring Transparency and Trustworthiness

In a world increasingly driven by automated decision-making,
the capacity to comprehend and articulate the underlying
mechanisms of machine learning models is paramount. This
understanding, referred to as model interpretability,
enables critical insight into the actions and
justifications of algorithmic systems that profoundly
impact human lives.

### **Key Aspects of Interpretability**:

Interpretability plays a vital role,
enhancing understanding and communication.

1. **Model Debugging**:
    - Analytical Evaluation
    - What instigated this model's error?
    - What adjustments are necessary to enhance the model's performance?

2. **Human-AI Collaboration**:
    - Mutual Understanding
    - How can users interpret and place faith in the model's resolutions?

3. **Regulatory Compliance**:
    - Legal Assurance
    - Does the model adhere to statutory mandates and ethical guidelines?

### **Interpretability in the Model Lifecycle**:

The interpretability facet of the model training and
deployment pipeline is instrumental during the "diagnosis"
phase of the model lifecycle workflow. AI explainability
elucidates the model's predictions through
human-intelligible descriptions, offering multifaceted
insights into model behavior:

-   **Global Explanations**: E.g., What variables shape
    the comprehensive conduct of a loan allocation model?

-   **Local Explanations**: E.g., What rationale led to
    the approval or denial of a specific customer's loan
    application?

Observing model explanations for subgroups of data points
is invaluable, for example when assessing fairness in
predictions for specific demographic groups.
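
As a concrete illustration of the global vs. local distinction above, the SHAP plotting API renders both views from the same explanations. This is a hedged sketch reusing the illustrative model from the earlier snippet, not this app's actual code:

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.Explainer(model)(X[:100])

# Global explanation: ranked feature influence across many samples.
shap.plots.beeswarm(shap_values, max_display=10)

# Local explanation: why the model scored sample 0 the way it did.
shap.plots.waterfall(shap_values[0])
```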

### **Specific Applications of Interpretability**:

The interpretability component leverages the SHAP
(SHapley Additive exPlanations) package, a robust tool
that facilitates the analytical understanding of model
behavior, providing insights into feature importance and
contributions to individual predictions.

Utilize interpretability to:

-   Ascertain the reliability of AI system predictions by
    recognizing significant factors.

-   Strategize model debugging by first comprehending its
    functionality and discerning between legitimate relationships
    and misleading associations.

-   Detect potential biases by analyzing the extent to which
    predictions rest on sensitive or highly correlated
    attributes (see the sketch after this list).

-   Foster user confidence through local explanations
    that explain decision outcomes.

-   Execute regulatory audits to authenticate models
    and supervise the influence of model determinations on human
    interests.
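
To make the bias-detection point concrete, one simple check compares mean absolute SHAP values per feature across subgroups of a sensitive attribute. This is a hypothetical sketch; `shap_values`, `X_test`, and the `"age_bracket"` column are assumptions, not part of this app:

```python
import numpy as np
import pandas as pd

def subgroup_importance(shap_values, X: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Mean |SHAP| per feature within each subgroup of `group_col`."""
    attributions = pd.DataFrame(np.abs(shap_values.values), columns=X.columns)
    return attributions.groupby(X[group_col].to_numpy()).mean()

# Hypothetical usage: large gaps between rows suggest the model leans on
# different features for different demographic groups.
# subgroup_importance(shap_values, X_test, group_col="age_bracket")
```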

The nuanced task of model interpretation extends beyond mere
technical necessity; it fosters transparency,
accountability, and trust in AI systems. Embracing
interpretability ensures that decisions derived from
artificial intelligence are not only proficient but
principled, aligning with both legal obligations and
ethical values.


## Summary of Project



### Motivation

In the dynamic field of machine learning, understanding and
explaining model predictions is vital for extracting
actionable insights from them. This project focuses on
Shapley values, a concept from game theory that can be used
to interpret complex models.

The primary goal of this project is to provide an intuitive
introduction to Shapley values as well as how to use the
[SHAP](https://shap.readthedocs.io/en/latest/index.html)
library. Shapley values provide a robust understanding of how
each feature individually contributes to a prediction, making
complex models easier to understand.

Streamlit is utilized to create an interactive interface for
visualizing SHAP (SHapley Additive exPlanations) prediction
explanations, making the technical concepts easier to comprehend.

The project also highlights the real-world utility of
prediction explanations, demonstrating that it's not merely a
theoretical concept but a valuable tool for informed decision-making.
Additionally, SHAP's potential for providing a
consistent feature importance measure across various models
and versatility in handling diverse datasets is demonstrated.

### Datasets

### Tools and Methods Used

1.  **Python**: The project is implemented in Python, a popular language for
    data science due to its readability and vast ecosystem of scientific libraries.
    https://www.python.org/

2.  **SHAP**: SHAP (SHapley Additive exPlanations) is a game theoretic approach
    to explain the output of any machine learning model. It connects optimal
    credit allocation with local explanations using the classic Shapley values
    from game theory and their related extensions.
    https://shap.readthedocs.io/en/latest/index.html

3.  **Streamlit**: Streamlit is an open-source Python library that makes it easy to
    create and share beautiful, custom web apps for machine learning and data science.
    In this project, Streamlit is used to create an interactive web application to
    visualize the SHAP values.
    https://streamlit.io/

4.  **Pandas**: Pandas is a software library written for the Python programming
    language for data manipulation and analysis. In particular, it offers
    data structures and operations for manipulating numerical tables and time series.
    https://pandas.pydata.org/

5.  **NumPy**: NumPy is a library for the Python programming language, adding support
    for large, multidimensional arrays and matrices, along with a large collection
    of high-level mathematical functions to operate on these arrays.
    https://numpy.org/

6.  **Scikit-learn**: Scikit-learn is a free software machine learning library
    for the Python programming language. It features various classification,
    regression and clustering algorithms.
    https://scikit-learn.org/stable/

7.  **Matplotlib**: Matplotlib is a plotting library for the Python programming
    language and its numerical mathematics extension NumPy. It provides an
    object-oriented API for embedding plots into applications.
    https://matplotlib.org/

The project follows a structured approach starting from data exploration, data
cleaning, feature engineering, model building, and finally model explanation
using SHAP values. The codebase is modular and follows good software engineering
practices.
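
As a sketch of how these pieces typically fit together, the pattern below computes explanations once, caches them, and hands a matplotlib figure to Streamlit. It is a generic pattern under stated assumptions (synthetic data, `st.cache_resource` available), not the app's actual code:

```python
import matplotlib.pyplot as plt
import shap
import streamlit as st
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

@st.cache_resource  # fit the model and compute explanations once per session
def load_explanations():
    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X, y)
    return shap.Explainer(model)(X[:200])

shap_values = load_explanations()
idx = st.slider("Sample to explain", 0, shap_values.shape[0] - 1, 0)

fig = plt.figure()
shap.plots.waterfall(shap_values[idx], show=False)  # draw onto the current figure
st.pyplot(fig)  # render the matplotlib figure in the app
```

Saved as, say, `sketch_app.py` (a hypothetical filename), this would run with `streamlit run sketch_app.py`.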

## Project Takeaways

In writing this app, the motivation was to explore and use
Streamlit and the SHAP library: Streamlit for building web
applications, and SHAP for understanding decision-making
within models. The following section outlines key takeaways
from working with these tools.

### Streamlit

Streamlit is an excellent open source library for creating web applications
that showcase machine learning and data science projects. It's easy to use,
the documentation is excellent, and it integrates well with the open source
libraries used in this project. However, it may not be the best choice for
scalable or enterprise-level applications. Streamlit lacks some of the
more advanced customizations available in other web development frameworks,
but my biggest concerns about using it outside smaller projects and
prototyping are that state management can be challenging and that
performance can become an issue with very large datasets or highly
complex applications. Another problem I ran into is that testing Streamlit
apps can be challenging, as a Streamlit app is not structured like a
typical Python library.

Overall, I think Streamlit is a great tool to have at your disposal, and
the problem it solves, getting something up and running quickly, is what it
excels at.

### SHAP Package

In this project, I used the
[SHAP (SHapley Additive exPlanations)](https://shap.readthedocs.io/en/latest/index.html)
library to interpret complex machine learning models.

The experience with SHAP in the project revealed a few
advantages. The interpretability it provided turned
previously black-box models into useful explanations,
making it easy to understand the relative contributions of
each feature. Its compatibility with various machine
learning models and good integration with `streamlit`
allowed for interactive visualizations. Moreover,
SHAP's ability to uncover the influence of each feature
through easy-to-generate plots is especially useful for
explaining predictions to non-technical stakeholders.

However, the implementation was not without challenges.
SHAP's computational intensity, especially with larger
datasets and complex models, required careful optimization.
While SHAP values were insightful, interpreting them can
still be challenging, especially for non-technical audiences.
The visualizations, although informative, can become
overwhelming when dealing with a large number of features,
but feature selection techniques and careful design can be
used to keep the user experience manageable.
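
A common mitigation for the computational cost, sketched here under the assumption that a fitted `model`, `X_train`, and `X_test` already exist: summarize the background data before handing it to a model-agnostic explainer, and explain a sample rather than every row:

```python
import shap

# Assumed to exist: a fitted model plus X_train / X_test feature matrices.
# KernelExplainer's cost grows with the background dataset, so summarize
# it into a small set of weighted centroids first.
background = shap.kmeans(X_train, 50)
explainer = shap.KernelExplainer(model.predict, background)

# Explain a manageable subsample instead of the full test set.
shap_values = explainer.shap_values(shap.sample(X_test, 100, random_state=0))
```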


Overall, using the SHAP package was an overwhelmingly
positive experience, with the pros far outweighing the cons.
It's easy to see how this library can bridge the gap between
machine learning experts and other stakeholders, and any
challenges in using it can be addressed with careful
consideration, planning, and a thorough understanding of the
dataset.


## Relevant Literature and Links

Many of the ideas implemented in this repository were first detailed in the
following blog posts, papers, and tutorials:

1. [A Unified Approach to Interpreting Model Predictions](https://arxiv.org/abs/1705.07874)
2. [Consistent Individualized Feature Attribution for Tree Ensembles](https://arxiv.org/abs/1802.03888)
3. [Explainable AI for Trees: From Local Explanations to Global Understanding](https://arxiv.org/abs/1905.04610)
4. [Fairness-aware Explainable AI: A Decision-Making Perspective](https://arxiv.org/abs/2006.11458)
5. [Interpretable Machine Learning: Definitions, Methods, and Applications](https://arxiv.org/abs/1901.04592)
6. [SHAP-Sp: A Data-efficient Algorithm for Model Interpretation](https://arxiv.org/abs/2002.03222)
7. [On the Robustness of Interpretability Methods](https://arxiv.org/abs/2001.07538)
8. [Towards Accurate Model Interpretability by Training Interpretable Models](https://arxiv.org/abs/2006.16234)
9. [Understanding Black-box Predictions via Influence Functions](https://arxiv.org/abs/1703.04730)
10. [GitHub - slundberg/shap](https://github.com/slundberg/shap)
11. [Interpretable Machine Learning with SHAP](https://christophm.github.io/interpretable-ml-book/shap.html)
12. [Understanding SHAP Values](https://towardsdatascience.com/understanding-shap-values-1c1b7a0e57b7)
13. [Kaggle - Machine Learning Explainability](https://www.kaggle.com/learn/machine-learning-explainability)
14. [SHAP Values Explained Exactly How You Wished Someone Explained to You](https://medium.com/towards-data-science/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30)
15. [Interpreting complex models with SHAP values](https://medium.com/@gabrieltseng/interpreting-complex-models-with-shap-values-1c187db6ec83)
16. [Shapley Values Wikipedia Page](https://en.wikipedia.org/wiki/Shapley_value)


## SHAP App Limitations

-   This package is currently only compatible with Python 3.10 and 3.11 (`>=3.10,<3.12`)
-   Full documentation is not yet available
-   User-defined datasets and packages are not yet supported


## Contributing

Issues and pull requests are welcome.

## License

All code in this repository is released under the [MIT License](LICENSE).

            
