This package makes it convenient to quickly deploy a dashboard web app
that explains the workings of a (scikit-learn compatible) fitted machine
learning model. The dashboard provides interactive plots on model performance,
feature importances, feature contributions to individual predictions,
partial dependence plots, SHAP (interaction) values, visualisation of individual
decision trees, etc.
The goal is manifold:
- Make it easy for data scientists to quickly inspect the inner workings and
performance of their model with just a few lines of code
- Make it possible for non-data-scientist stakeholders such as co-workers,
managers, directors, internal and external watchdogs to interactively
inspect the inner workings of the model without having to depend
on a data scientist to generate every plot and table
- Make it easy to build a custom application that explains individual
predictions of your model to customers who ask for an explanation
- Explain the inner workings of the model to the people working with
the model in a human-in-the-loop deployment, so that they understand
what the model does and does not do.
This is important so that they can develop an intuition for when the
model is likely missing information and may have to be overruled.
The dashboard includes:
- SHAP values (i.e. what is the contribution of each feature to each
individual prediction?)
- Permutation importances (how much does the model metric deteriorate
when you shuffle a feature?)
- Partial dependence plots (how does the model prediction change when
you vary a single feature?)
- SHAP interaction values (decompose the SHAP value into a direct effect
and interaction effects)
- For Random Forests and xgboost models: visualization of individual trees
in the ensemble.
- Plus for classifiers: precision plots, confusion matrix, ROC AUC plot,
PR AUC plot, etc
- For regression models: goodness-of-fit plots, residual plots, etc.
The library is designed to be modular, so it is easy to build your
own custom dashboards and focus on the layout and project-specific
textual explanations. (i.e. design the dashboard so that it is
interpretable for business users in your organization, not just data scientists)
A deployed example can be found at http://titanicexplainer.herokuapp.com
Raw data
{
"_id": null,
"home_page": "https://github.com/oegedijk/explainerdashboard",
"name": "explainerdashboard",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": "",
"keywords": "machine learning,explainability,shap,feature importances,dash",
"author": "Oege Dijk",
"author_email": "oegedijk@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/a8/97/8438dd725e3d4b30d3aab80411c8d12291177e9fa96a8be3251ee35b1c57/explainerdashboard-0.4.5.tar.gz",
"platform": null,
"description": "\n\nThis package makes it convenient to quickly deploy a dashboard web app\nthat explains the workings of a (scikit-learn compatible) fitted machine \nlearning model. The dashboard provides interactive plots on model performance, \nfeature importances, feature contributions to individual predictions, \npartial dependence plots, SHAP (interaction) values, visualisation of individual\ndecision trees, etc. \n\n\nThe goal is manyfold:\n\n - Make it easy for data scientists to quickly inspect the inner workings and \n performance of their model with just a few lines of code\n - Make it possible for non data scientist stakeholders such as co-workers,\n managers, directors, internal and external watchdogs to interactively \n inspect the inner workings of the model without having to depend \n on a data scientist to generate every plot and table\n - Make it easy to build a custom application that explains individual \n predictions of your model for customers that ask for an explanation\n - Explain the inner workings of the model to the people working with \n model in a human-in-the-loop deployment so that they gain understanding \n what the model does do and does not do. \n This is important so that they can gain an intuition for when the \n model is likely missing information and may have to be overruled.\n\nThe dashboard includes:\n\n - SHAP values (i.e. what is the contribution of each feature to each \n individual prediction?)\n - Permutation importances (how much does the model metric deteriorate \n when you shuffle a feature?)\n - Partial dependence plots (how does the model prediction change when \n you vary a single feature?\n - Shap interaction values (decompose the shap value into a direct effect \n an interaction effects)\n - For Random Forests and xgboost models: visualization of individual trees\n in the ensemble. 
\n - Plus for classifiers: precision plots, confusion matrix, ROC AUC plot, \n PR AUC plot, etc\n - For regression models: goodness-of-fit plots, residual plots, etc.\n\nThe library is designed to be modular so that it is easy to design your \nown custom dashboards so that you can focus on the layout and project specific \ntextual explanations of the dashboard. (i.e. design it so that it will be \ninterpretable for business users in your organization, not just data scientists)\n\n\nA deployed example can be found at http://titanicexplainer.herokuapp.com\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Quickly build Explainable AI dashboards that show the inner workings of so-called \"blackbox\" machine learning models.",
"version": "0.4.5",
"project_urls": {
"Documentation": "https://explainerdashboard.readthedocs.io/",
"Github page": "https://github.com/oegedijk/explainerdashboard/",
"Homepage": "https://github.com/oegedijk/explainerdashboard"
},
"split_keywords": [
"machine learning",
"explainability",
"shap",
"feature importances",
"dash"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "010f070040df10b3f99bd27c00b567d28e84de7ec44396880fe524069405f489",
"md5": "ec930b73aaf864c78446443820be96b5",
"sha256": "ca2a1f43ebcdec44f33eb9dc49b3ea9789f05be9ed1d1bd0b6219dfa4338e937"
},
"downloads": -1,
"filename": "explainerdashboard-0.4.5-py3-none-any.whl",
"has_sig": false,
"md5_digest": "ec930b73aaf864c78446443820be96b5",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 287224,
"upload_time": "2023-12-17T19:42:36",
"upload_time_iso_8601": "2023-12-17T19:42:36.051188Z",
"url": "https://files.pythonhosted.org/packages/01/0f/070040df10b3f99bd27c00b567d28e84de7ec44396880fe524069405f489/explainerdashboard-0.4.5-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "a8978438dd725e3d4b30d3aab80411c8d12291177e9fa96a8be3251ee35b1c57",
"md5": "6ad1d376f642d4dd84532b0a1fea92f8",
"sha256": "c489db883669c8c53bd12ef8548b16b3dee737e8e249959a2c3550fc604ed871"
},
"downloads": -1,
"filename": "explainerdashboard-0.4.5.tar.gz",
"has_sig": false,
"md5_digest": "6ad1d376f642d4dd84532b0a1fea92f8",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 275507,
"upload_time": "2023-12-17T19:42:38",
"upload_time_iso_8601": "2023-12-17T19:42:38.637773Z",
"url": "https://files.pythonhosted.org/packages/a8/97/8438dd725e3d4b30d3aab80411c8d12291177e9fa96a8be3251ee35b1c57/explainerdashboard-0.4.5.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-12-17 19:42:38",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "oegedijk",
"github_project": "explainerdashboard",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "explainerdashboard"
}