====
ELI5
====
.. image:: https://img.shields.io/pypi/v/eli5.svg
:target: https://pypi.python.org/pypi/eli5
:alt: PyPI Version
.. image:: https://github.com/eli5-org/eli5/workflows/build/badge.svg?branch=master
:target: https://github.com/eli5-org/eli5/actions
:alt: Build Status
.. image:: https://codecov.io/github/TeamHG-Memex/eli5/coverage.svg?branch=master
:target: https://codecov.io/github/TeamHG-Memex/eli5?branch=master
:alt: Code Coverage
.. image:: https://readthedocs.org/projects/eli5/badge/?version=latest
:target: https://eli5.readthedocs.io/en/latest/?badge=latest
:alt: Documentation
ELI5 is a Python package which helps to debug machine learning
classifiers and explain their predictions.
.. image:: https://raw.githubusercontent.com/TeamHG-Memex/eli5/master/docs/source/static/word-highlight.png
:alt: explain_prediction for text data
.. image:: https://raw.githubusercontent.com/TeamHG-Memex/eli5/master/docs/source/static/gradcam-catdog.png
:alt: explain_prediction for image data
It provides support for the following machine learning frameworks and packages:
* scikit-learn_. ELI5 can explain weights and predictions of scikit-learn
  linear classifiers and regressors, print decision trees as text or as SVG,
  show feature importances, and explain predictions of decision trees and
  tree-based ensembles. ELI5 understands text processing utilities from
  scikit-learn and can highlight text data accordingly. Pipeline and
  FeatureUnion are supported. It can also debug scikit-learn pipelines which
  contain HashingVectorizer, by undoing the hashing (a minimal usage sketch
  follows this list).
* Keras_ - explain predictions of image classifiers via Grad-CAM visualizations.
* xgboost_ - show feature importances and explain predictions of XGBClassifier,
XGBRegressor and xgboost.Booster.
* LightGBM_ - show feature importances and explain predictions of
LGBMClassifier, LGBMRegressor and lightgbm.Booster.
* CatBoost_ - show feature importances of CatBoostClassifier,
CatBoostRegressor and catboost.CatBoost.
* lightning_ - explain weights and predictions of lightning classifiers and
regressors.
* sklearn-crfsuite_. ELI5 can inspect the weights of sklearn_crfsuite.CRF
  models.
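
As a quick sketch of the scikit-learn integration (the 20 newsgroups subset,
the TF-IDF vectorizer and the hyperparameters below are illustrative
assumptions, and a scikit-learn version compatible with eli5 is assumed):

.. code-block:: python

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    import eli5

    train = fetch_20newsgroups(subset='train',
                               categories=['sci.med', 'sci.space'])
    vec = TfidfVectorizer()
    X = vec.fit_transform(train.data)
    clf = LogisticRegression(max_iter=1000).fit(X, train.target)

    # Global view: which features push the model towards each class.
    print(eli5.format_as_text(
        eli5.explain_weights(clf, vec=vec, top=10,
                             target_names=train.target_names)))

    # Local view: why one particular document got its prediction.
    print(eli5.format_as_text(
        eli5.explain_prediction(clf, train.data[0], vec=vec,
                                target_names=train.target_names)))

In a Jupyter notebook, ``eli5.show_weights`` and ``eli5.show_prediction``
render the same explanations as HTML, including highlighted text.
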
ELI5 also implements several algorithms for inspecting black-box models
(see `Inspecting Black-Box Estimators`_):
* TextExplainer_ explains predictions of any text classifier using the
  LIME_ algorithm (Ribeiro et al., 2016). There are utilities for using LIME
  with non-text data and arbitrary black-box classifiers as well, but this
  feature is currently experimental.
* The `Permutation importance`_ method computes feature importances
  for black-box estimators. Both are sketched in the example after this list.
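
A minimal sketch of both helpers, assuming eli5 and a compatible scikit-learn
version are installed; the datasets, models and the validation split are
illustrative assumptions, not part of the eli5 API:

.. code-block:: python

    import eli5
    from eli5.lime import TextExplainer
    from eli5.sklearn import PermutationImportance
    from sklearn.datasets import fetch_20newsgroups, load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # TextExplainer: LIME-style local explanation of a text classifier.
    train = fetch_20newsgroups(subset='train',
                               categories=['sci.med', 'sci.space'])
    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    pipe.fit(train.data, train.target)

    te = TextExplainer(random_state=42)
    te.fit(train.data[0], pipe.predict_proba)  # samples texts around this document
    print(eli5.format_as_text(
        te.explain_prediction(target_names=train.target_names)))

    # Permutation importance: feature importances of a black-box estimator.
    X, y = load_iris(return_X_y=True)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
    rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    perm = PermutationImportance(rf, random_state=42).fit(X_val, y_val)
    print(eli5.format_as_text(
        eli5.explain_weights(perm, feature_names=load_iris().feature_names)))

TextExplainer fits an interpretable white-box model on perturbed copies of the
document, while PermutationImportance measures how much the validation score
drops when each feature is shuffled.
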
Explanation and formatting are separated; you can get a text-based explanation
to display in a console, an HTML version embeddable in an IPython notebook or
a web dashboard, a ``pandas.DataFrame`` object if you want to process results
further, or a JSON version which makes it possible to implement custom
rendering and formatting on the client.
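
For example, a single explanation object can be converted into any of these
formats; the toy dataset and model below are assumptions, only the eli5 calls
matter here:

.. code-block:: python

    import eli5
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    data = load_iris()
    clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)

    expl = eli5.explain_weights(clf,
                                feature_names=data.feature_names,
                                target_names=list(data.target_names))

    print(eli5.format_as_text(expl))     # console-friendly text
    html = eli5.format_as_html(expl)     # embeddable HTML fragment
    df = eli5.format_as_dataframe(expl)  # pandas.DataFrame for further processing
    d = eli5.format_as_dict(expl)        # JSON-serializable dict
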
.. _lightning: https://github.com/scikit-learn-contrib/lightning
.. _scikit-learn: https://github.com/scikit-learn/scikit-learn
.. _sklearn-crfsuite: https://github.com/TeamHG-Memex/sklearn-crfsuite
.. _LIME: https://eli5.readthedocs.io/en/latest/blackbox/lime.html
.. _TextExplainer: https://eli5.readthedocs.io/en/latest/tutorials/black-box-text-classifiers.html
.. _xgboost: https://github.com/dmlc/xgboost
.. _LightGBM: https://github.com/Microsoft/LightGBM
.. _CatBoost: https://github.com/catboost/catboost
.. _Keras: https://keras.io/
.. _Permutation importance: https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html
.. _Inspecting Black-Box Estimators: https://eli5.readthedocs.io/en/latest/blackbox/index.html
License is MIT.
Check `docs <https://eli5.readthedocs.io/>`_ for more.
.. note::
    This is the same project as https://github.com/TeamHG-Memex/eli5/,
    but due to temporary GitHub access issues, the 0.11 release was prepared
    in https://github.com/eli5-org/eli5 (this repo).
----
.. image:: https://hyperiongray.s3.amazonaws.com/define-hg.svg
:target: https://www.hyperiongray.com/?pk_campaign=github&pk_kwd=eli5
:alt: define hyperiongray
Changelog
=========
0.13.0 (2022-05-11)
-------------------
* drop Python 2.7 support
* fix support for newer xgboost versions with unnamed features
0.12.0 (2022-05-11)
-------------------
* require Jinja2 >= 3.0.0; please use eli5 0.11 if you prefer
  an older version of Jinja2
* support lightgbm.Booster
0.11.0 (2021-01-23)
-------------------
* fixed scikit-learn 0.22+ and 0.24+ support.
* allow nan inputs in permutation importance (if model supports them).
* fix for permutation importance with sample_weight and cross-validation.
* doc fixes (typos, keras and TF versions clarified).
* don't use deprecated getargspec function.
* less type ignores, mypy updated to 0.750.
* Python 3.8 and 3.9 are tested on CI; Python 3.4 is not tested any more.
* tests moved to github actions.
0.10.1 (2019-08-29)
-------------------
* Don't include typing dependency on Python 3.5+
to fix installation on Python 3.7
0.10.0 (2019-08-21)
-------------------
* Keras image classifiers: explaining predictions with Grad-CAM
  (GSoC-2019 project by @teabolt); see the sketch below.
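
A minimal sketch, assuming a standalone Keras installation and a TensorFlow
version compatible with eli5's Grad-CAM support; the pretrained model and
the image path are illustrative assumptions:

.. code-block:: python

    import numpy as np
    import eli5
    from keras.applications import mobilenet_v2
    from keras.preprocessing import image

    model = mobilenet_v2.MobileNetV2(weights='imagenet')
    img = image.load_img('cat_dog.jpg', target_size=(224, 224))  # placeholder path
    doc = mobilenet_v2.preprocess_input(
        np.expand_dims(image.img_to_array(img), axis=0))

    # In a Jupyter notebook this shows the image overlaid with a
    # Grad-CAM heatmap for the top predicted class.
    eli5.show_prediction(model, doc)
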
0.9.0 (2019-07-05)
------------------
* CatBoost support: show feature importances of CatBoostClassifier,
CatBoostRegressor and catboost.CatBoost.
* Test fixes: fixes for scikit-learn 0.21+, use xenial base on Travis
* Catch exceptions from improperly installed LightGBM
0.8.2 (2019-04-04)
------------------
* fixed scikit-learn 0.21+ support (randomized linear models are removed
from scikit-learn);
* fixed pandas.DataFrame + xgboost support for PermutationImportance;
* fixed tests with recent numpy;
* added conda install instructions (conda package is maintained by community);
* tutorial is updated to use xgboost 0.81;
* update docs to use pandoc 2.x.
0.8.1 (2018-11-19)
------------------
* fixed Python 3.7 support;
* added support for XGBoost > 0.6a2;
* fixed deprecation warnings in numpy >= 1.14;
* documentation, type annotation and test improvements.
0.8 (2017-08-25)
----------------
* **backwards incompatible**: DataFrame objects with explanations no longer
use indexes and pivot tables, they are now just plain DataFrames;
* a new method for inspecting black-box models is added
  (`eli5-permutation-importance`);
* transform_feature_names is implemented for sklearn's MinMaxScaler,
  StandardScaler, MaxAbsScaler and RobustScaler;
* zero and negative feature importances are no longer hidden;
* fixed compatibility with scikit-learn 0.19;
* fixed compatibility with LightGBM master (2.0.5 and 2.0.6 are still
unsupported - there are bugs in LightGBM);
* documentation, testing and type annotation improvements.
0.7 (2017-07-03)
----------------
* better pandas.DataFrame integration: `eli5.explain_weights_df`,
`eli5.explain_weights_dfs`, `eli5.explain_prediction_df`,
`eli5.explain_prediction_dfs`,
`eli5.format_as_dataframe <eli5.formatters.as_dataframe.format_as_dataframe>`
and `eli5.format_as_dataframes <eli5.formatters.as_dataframe.format_as_dataframes>`
  functions allow exporting explanations to pandas.DataFrames;
* `eli5.explain_prediction` now shows predicted class for binary
classifiers (previously it was always showing positive class);
* `eli5.explain_prediction` supports ``targets=[<class>]`` now
for binary classifiers; e.g. to show result as seen for negative class,
you can use ``eli5.explain_prediction(..., targets=[False])``;
* support `eli5.explain_prediction` and `eli5.explain_weights`
for libsvm-based linear estimators from sklearn.svm: ``SVC(kernel='linear')``
(only binary classification), ``NuSVC(kernel='linear')`` (only
binary classification), ``SVR(kernel='linear')``, ``NuSVR(kernel='linear')``,
``OneClassSVM(kernel='linear')``;
* fixed `eli5.explain_weights` for LightGBM_ estimators in Python 2 when
``importance_type`` is 'split' or 'weight';
* testing improvements.
0.6.4 (2017-06-22)
------------------
* Fixed `eli5.explain_prediction` for recent LightGBM_ versions;
* fixed Python 3 deprecation warning in formatters.html;
* testing improvements.
0.6.3 (2017-06-02)
------------------
* `eli5.explain_weights` and `eli5.explain_prediction`
works with xgboost.Booster, not only with sklearn-like APIs;
* `eli5.formatters.as_dict.format_as_dict` is now available as
``eli5.format_as_dict``;
* testing and documentation fixes.
0.6.2 (2017-05-17)
------------------
* readable `eli5.explain_weights` for XGBoost models trained on
pandas.DataFrame;
* readable `eli5.explain_weights` for LightGBM models trained on
pandas.DataFrame;
* fixed an issue with `eli5.explain_prediction` for XGBoost
models trained on pandas.DataFrame when feature names contain dots;
* testing improvements.
0.6.1 (2017-05-10)
------------------
* Better pandas support in `eli5.explain_prediction` for
xgboost, sklearn, LightGBM and lightning.
0.6 (2017-05-03)
----------------
* Better scikit-learn Pipeline support in `eli5.explain_weights`:
  it is now possible to pass a Pipeline object directly. Currently only
  SelectorMixin-based transformers, FeatureUnion and transformers
  with ``get_feature_names`` are supported, but users can register other
  transformers; the built-in list of supported transformers will be expanded
  in the future. See `sklearn-pipelines` for more.
* Inverting of HashingVectorizer is now supported inside FeatureUnion
via `eli5.sklearn.unhashing.invert_hashing_and_fit`.
See `sklearn-unhashing`.
* Fixed compatibility with Jupyter Notebook >= 5.0.0.
* Fixed `eli5.explain_weights` for Lasso regression with a single
feature and no intercept.
* Fixed unhashing support in Python 2.x.
* Documentation and testing improvements.
0.5 (2017-04-27)
----------------
* LightGBM_ support: `eli5.explain_prediction` and
`eli5.explain_weights` are now supported for
``LGBMClassifier`` and ``LGBMRegressor``
(see `eli5 LightGBM support <library-lightgbm>`).
* fixed text formatting if all weights are zero;
* type checks now use latest mypy;
* testing setup improvements: Travis CI now uses Ubuntu 14.04.
.. _LightGBM: https://github.com/Microsoft/LightGBM
0.4.2 (2017-03-03)
------------------
* bug fix: eli5 should remain importable if xgboost is available, but
not installed correctly.
0.4.1 (2017-01-25)
------------------
* feature contribution calculation fixed
for `eli5.xgboost.explain_prediction_xgboost`
0.4 (2017-01-20)
----------------
* `eli5.explain_prediction`: the new 'top_targets' argument allows displaying
  only the targets with the highest or lowest scores;
* `eli5.explain_weights` allows customizing the way feature importances
  are computed for XGBClassifier and XGBRegressor using the ``importance_type``
  argument (see docs for the `eli5 XGBoost support <library-xgboost>` and
  the sketch below);
* `eli5.explain_weights` uses gain for XGBClassifier and XGBRegressor
  feature importances by default; this method is a better indication of
  what's going on, and it makes results more compatible with feature
  importances displayed for scikit-learn gradient boosting methods.
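
An illustrative sketch of the ``importance_type`` argument; the toy data and
model are assumptions:

.. code-block:: python

    import numpy as np
    import eli5
    from xgboost import XGBClassifier

    rng = np.random.RandomState(0)
    X = rng.rand(500, 4)
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    clf = XGBClassifier(n_estimators=50).fit(X, y)

    # 'gain' is the default; 'weight' and 'cover' are also accepted.
    print(eli5.format_as_text(
        eli5.explain_weights(clf, importance_type='weight', top=4)))
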
0.3.1 (2017-01-16)
------------------
* packaging fix: scikit-learn is added to install_requires in setup.py.
0.3 (2017-01-13)
----------------
* `eli5.explain_prediction` works for XGBClassifier, XGBRegressor
from XGBoost and for ExtraTreesClassifier, ExtraTreesRegressor,
GradientBoostingClassifier, GradientBoostingRegressor,
RandomForestClassifier, RandomForestRegressor, DecisionTreeClassifier
and DecisionTreeRegressor from scikit-learn.
Explanation method is based on
http://blog.datadive.net/interpreting-random-forests/ .
* `eli5.explain_weights` now supports tree-based regressors from
scikit-learn: DecisionTreeRegressor, AdaBoostRegressor,
GradientBoostingRegressor, RandomForestRegressor and ExtraTreesRegressor.
* `eli5.explain_weights` works for XGBRegressor;
* the new `TextExplainer <lime-tutorial>` class allows explaining predictions
  of black-box text classification pipelines using the LIME algorithm;
  many improvements in `eli5.lime <eli5-lime>`.
* better ``sklearn.pipeline.FeatureUnion`` support in
`eli5.explain_prediction`;
* rendering performance is improved;
* the number of remaining feature importances is shown when the feature
  importance table is truncated;
* styling of feature importances tables is fixed;
* `eli5.explain_weights` and `eli5.explain_prediction` support
more linear estimators from scikit-learn: HuberRegressor, LarsCV, LassoCV,
LassoLars, LassoLarsCV, LassoLarsIC, OrthogonalMatchingPursuit,
OrthogonalMatchingPursuitCV, PassiveAggressiveRegressor,
RidgeClassifier, RidgeClassifierCV, TheilSenRegressor.
* text-based formatting of decision trees is changed: for binary
  classification trees only the probability of the "true" class is printed,
  not both probabilities as before.
* `eli5.explain_weights` supports ``feature_filter`` in addition
to ``feature_re`` for filtering features, and `eli5.explain_prediction`
now also supports both of these arguments;
* 'Weight' column is renamed to 'Contribution' in the output of
`eli5.explain_prediction`;
* the new ``show_feature_values=True`` formatter argument allows displaying
  input feature values;
* fixed an issue with analyzer='char_wb' highlighting at the start of the
text.
0.2 (2016-12-03)
----------------
* XGBClassifier support (from `XGBoost <https://github.com/dmlc/xgboost>`__
package);
* `eli5.explain_weights` support for sklearn OneVsRestClassifier;
* std deviation of feature importances is no longer printed as zero
if it is not available.
0.1.1 (2016-11-25)
------------------
* packaging fixes: require attrs > 16.0.0, fixed README rendering
0.1 (2016-11-24)
----------------
* HTML output;
* IPython integration;
* JSON output;
* visualization of scikit-learn text vectorizers;
* `sklearn-crfsuite <https://github.com/TeamHG-Memex/sklearn-crfsuite>`__
support;
* `lightning <https://github.com/scikit-learn-contrib/lightning>`__ support;
* `eli5.show_weights` and `eli5.show_prediction` functions;
* `eli5.explain_weights` and `eli5.explain_prediction`
functions;
* `eli5.lime <eli5-lime>` improvements: samplers for non-text data,
bug fixes, docs;
* HashingVectorizer is supported for regression tasks;
* performance improvements - feature names are lazy;
* sklearn ElasticNetCV and RidgeCV support;
* it is now possible to customize formatting output - show/hide sections,
change layout;
* sklearn OneVsRestClassifier support;
* sklearn DecisionTreeClassifier visualization (text-based or svg-based);
* dropped support for scikit-learn < 0.18;
* basic mypy type annotations;
* ``feature_re`` argument allows showing only a subset of features;
* ``target_names`` argument allows changing display names of targets/classes;
* ``targets`` argument allows showing a subset of targets/classes and
  changing their display order;
* documentation, more examples.
0.0.6 (2016-10-12)
------------------
* Candidate features in eli5.sklearn.InvertableHashingVectorizer
  are ordered by frequency; the first candidate is always positive.
0.0.5 (2016-09-27)
------------------
* HashingVectorizer support in explain_prediction;
* add an option to pass a coefficient scaling array; it is useful
  if you want to compare coefficients for features whose scale or sign
  differs in the input;
* bug fix: classifier weights are no longer changed by eli5 functions.
0.0.4 (2016-09-24)
------------------
* eli5.sklearn.InvertableHashingVectorizer and
  eli5.sklearn.FeatureUnhasher allow recovering feature names for
  pipelines which use HashingVectorizer or FeatureHasher;
* added support for scikit-learn linear regression models (ElasticNet,
Lars, Lasso, LinearRegression, LinearSVR, Ridge, SGDRegressor);
* doc and vec arguments are swapped in explain_prediction function;
vec can now be omitted if an example is already vectorized;
* fixed issue with dense feature vectors;
* all class_names arguments are renamed to target_names;
* feature name guessing is fixed for scikit-learn ensemble estimators;
* testing improvements.
0.0.3 (2016-09-21)
------------------
* support any black-box classifier using LIME (http://arxiv.org/abs/1602.04938)
algorithm; text data support is built-in;
* "vectorized" argument for sklearn.explain_prediction; it allows to pass
example which is already vectorized;
* allow to pass feature_names explicitly;
* support classifiers without get_feature_names method using auto-generated
feature names.
0.0.2 (2016-09-19)
------------------
* 'top' argument of ``explain_prediction``
can be a tuple (num_positive, num_negative);
* classifier name is no longer printed by default;
* added eli5.sklearn.explain_prediction to explain individual examples;
* fixed numpy warning.
0.0.1 (2016-09-15)
------------------
Pre-release.
Raw data
========
{
"_id": null,
"home_page": "https://github.com/eli5-org/eli5",
"name": "eli5",
"maintainer": null,
"docs_url": null,
"requires_python": null,
"maintainer_email": null,
"keywords": null,
"author": "Mikhail Korobov, Konstantin Lopuhin",
"author_email": "kmike84@gmail.com, kostia.lopuhin@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/b5/30/3be87d2a7ca12a6ec45b8a5f6fd10be9992cc4b239dfb029c80091add477/eli5-0.13.0.tar.gz",
"platform": null,
"description": "====\nELI5\n====\n\n.. image:: https://img.shields.io/pypi/v/eli5.svg\n :target: https://pypi.python.org/pypi/eli5\n :alt: PyPI Version\n\n.. image:: https://github.com/eli5-org/eli5/workflows/build/badge.svg?branch=master\n :target: https://github.com/eli5-org/eli5/actions\n :alt: Build Status\n\n.. image:: https://codecov.io/github/TeamHG-Memex/eli5/coverage.svg?branch=master\n :target: https://codecov.io/github/TeamHG-Memex/eli5?branch=master\n :alt: Code Coverage\n\n.. image:: https://readthedocs.org/projects/eli5/badge/?version=latest\n :target: https://eli5.readthedocs.io/en/latest/?badge=latest\n :alt: Documentation\n\n\nELI5 is a Python package which helps to debug machine learning\nclassifiers and explain their predictions.\n\n.. image:: https://raw.githubusercontent.com/TeamHG-Memex/eli5/master/docs/source/static/word-highlight.png\n :alt: explain_prediction for text data\n\n.. image:: https://raw.githubusercontent.com/TeamHG-Memex/eli5/master/docs/source/static/gradcam-catdog.png\n :alt: explain_prediction for image data\n\nIt provides support for the following machine learning frameworks and packages:\n\n* scikit-learn_. Currently ELI5 allows to explain weights and predictions\n of scikit-learn linear classifiers and regressors, print decision trees\n as text or as SVG, show feature importances and explain predictions\n of decision trees and tree-based ensembles. ELI5 understands text\n processing utilities from scikit-learn and can highlight text data\n accordingly. Pipeline and FeatureUnion are supported.\n It also allows to debug scikit-learn pipelines which contain\n HashingVectorizer, by undoing hashing.\n\n* Keras_ - explain predictions of image classifiers via Grad-CAM visualizations.\n\n* xgboost_ - show feature importances and explain predictions of XGBClassifier,\n XGBRegressor and xgboost.Booster.\n\n* LightGBM_ - show feature importances and explain predictions of\n LGBMClassifier, LGBMRegressor and lightgbm.Booster.\n\n* CatBoost_ - show feature importances of CatBoostClassifier,\n CatBoostRegressor and catboost.CatBoost.\n\n* lightning_ - explain weights and predictions of lightning classifiers and\n regressors.\n\n* sklearn-crfsuite_. ELI5 allows to check weights of sklearn_crfsuite.CRF\n models.\n\n\nELI5 also implements several algorithms for inspecting black-box models\n(see `Inspecting Black-Box Estimators`_):\n\n* TextExplainer_ allows to explain predictions\n of any text classifier using LIME_ algorithm (Ribeiro et al., 2016).\n There are utilities for using LIME with non-text data and arbitrary black-box\n classifiers as well, but this feature is currently experimental.\n* `Permutation importance`_ method can be used to compute feature importances\n for black box estimators.\n\nExplanation and formatting are separated; you can get text-based explanation\nto display in console, HTML version embeddable in an IPython notebook\nor web dashboards, a ``pandas.DataFrame`` object if you want to process\nresults further, or JSON version which allows to implement custom rendering\nand formatting on a client.\n\n.. _lightning: https://github.com/scikit-learn-contrib/lightning\n.. _scikit-learn: https://github.com/scikit-learn/scikit-learn\n.. _sklearn-crfsuite: https://github.com/TeamHG-Memex/sklearn-crfsuite\n.. _LIME: https://eli5.readthedocs.io/en/latest/blackbox/lime.html\n.. _TextExplainer: https://eli5.readthedocs.io/en/latest/tutorials/black-box-text-classifiers.html\n.. _xgboost: https://github.com/dmlc/xgboost\n.. 
_LightGBM: https://github.com/Microsoft/LightGBM\n.. _Catboost: https://github.com/catboost/catboost\n.. _Keras: https://keras.io/\n.. _Permutation importance: https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html\n.. _Inspecting Black-Box Estimators: https://eli5.readthedocs.io/en/latest/blackbox/index.html\n\nLicense is MIT.\n\nCheck `docs <https://eli5.readthedocs.io/>`_ for more.\n\n.. note::\n This is the same project as https://github.com/TeamHG-Memex/eli5/,\n but due to temporary github access issues, 0.11 release is prepared in\n https://github.com/eli5-org/eli5 (this repo).\n\n----\n\n.. image:: https://hyperiongray.s3.amazonaws.com/define-hg.svg\n\t:target: https://www.hyperiongray.com/?pk_campaign=github&pk_kwd=eli5\n\t:alt: define hyperiongray\n\n\nChangelog\n=========\n\n0.13.0 (2022-05-11)\n-------------------\n\n* drop python2.7 support\n* fix newer xgboost with unnamed features\n\n0.12.0 (2022-05-11)\n-------------------\n\n* use Jinja2 >= 3.0.0, please use eli5 0.11 if you'd prefer to use\n an older version of Jinja2\n* support lightgbm.Booster\n\n0.11.0 (2021-01-23)\n-------------------\n\n* fixed scikit-learn 0.22+ and 0.24+ support.\n* allow nan inputs in permutation importance (if model supports them).\n* fix for permutation importance with sample_weight and cross-validation.\n* doc fixes (typos, keras and TF versions clarified).\n* don't use deprecated getargspec function.\n* less type ignores, mypy updated to 0.750.\n* python 3.8 and 3.9 tested on GI, python 3.4 not tested any more.\n* tests moved to github actions.\n\n0.10.1 (2019-08-29)\n-------------------\n\n* Don't include typing dependency on Python 3.5+\n to fix installation on Python 3.7\n\n0.10.0 (2019-08-21)\n-------------------\n\n* Keras image classifiers: explaining predictions with Grad-CAM\n (GSoC-2019 project by @teabolt).\n\n0.9.0 (2019-07-05)\n------------------\n\n* CatBoost support: show feature importances of CatBoostClassifier,\n CatBoostRegressor and catboost.CatBoost.\n* Test fixes: fixes for scikit-learn 0.21+, use xenial base on Travis\n* Catch exceptions from improperly installed LightGBM\n\n0.8.2 (2019-04-04)\n------------------\n\n* fixed scikit-learn 0.21+ support (randomized linear models are removed\n from scikit-learn);\n* fixed pandas.DataFrame + xgboost support for PermutationImportance;\n* fixed tests with recent numpy;\n* added conda install instructions (conda package is maintained by community);\n* tutorial is updated to use xgboost 0.81;\n* update docs to use pandoc 2.x.\n\n0.8.1 (2018-11-19)\n------------------\n\n* fixed Python 3.7 support;\n* added support for XGBoost > 0.6a2;\n* fixed deprecation warnings in numpy >= 1.14;\n* documentation, type annotation and test improvements.\n\n0.8 (2017-08-25)\n----------------\n\n* **backwards incompatible**: DataFrame objects with explanations no longer\n use indexes and pivot tables, they are now just plain DataFrames;\n* new method for inspection black-box models is added\n (`eli5-permutation-importance`);\n* transfor_feature_names is implemented for sklearn's MinMaxScaler,\n StandardScaler, MaxAbsScaler and RobustScaler;\n* zero and negative feature importances are no longer hidden;\n* fixed compatibility with scikit-learn 0.19;\n* fixed compatibility with LightGBM master (2.0.5 and 2.0.6 are still\n unsupported - there are bugs in LightGBM);\n* documentation, testing and type annotation improvements.\n\n0.7 (2017-07-03)\n----------------\n\n* better pandas.DataFrame integration: `eli5.explain_weights_df`,\n 
`eli5.explain_weights_dfs`, `eli5.explain_prediction_df`,\n `eli5.explain_prediction_dfs`,\n `eli5.format_as_dataframe <eli5.formatters.as_dataframe.format_as_dataframe>`\n and `eli5.format_as_dataframes <eli5.formatters.as_dataframe.format_as_dataframes>`\n functions allow to export explanations to pandas.DataFrames;\n* `eli5.explain_prediction` now shows predicted class for binary\n classifiers (previously it was always showing positive class);\n* `eli5.explain_prediction` supports ``targets=[<class>]`` now\n for binary classifiers; e.g. to show result as seen for negative class,\n you can use ``eli5.explain_prediction(..., targets=[False])``;\n* support `eli5.explain_prediction` and `eli5.explain_weights`\n for libsvm-based linear estimators from sklearn.svm: ``SVC(kernel='linear')``\n (only binary classification), ``NuSVC(kernel='linear')`` (only\n binary classification), ``SVR(kernel='linear')``, ``NuSVR(kernel='linear')``,\n ``OneClassSVM(kernel='linear')``;\n* fixed `eli5.explain_weights` for LightGBM_ estimators in Python 2 when\n ``importance_type`` is 'split' or 'weight';\n* testing improvements.\n\n0.6.4 (2017-06-22)\n------------------\n\n* Fixed `eli5.explain_prediction` for recent LightGBM_ versions;\n* fixed Python 3 deprecation warning in formatters.html;\n* testing improvements.\n\n0.6.3 (2017-06-02)\n------------------\n\n* `eli5.explain_weights` and `eli5.explain_prediction`\n works with xgboost.Booster, not only with sklearn-like APIs;\n* `eli5.formatters.as_dict.format_as_dict` is now available as\n ``eli5.format_as_dict``;\n* testing and documentation fixes.\n\n0.6.2 (2017-05-17)\n------------------\n\n* readable `eli5.explain_weights` for XGBoost models trained on\n pandas.DataFrame;\n* readable `eli5.explain_weights` for LightGBM models trained on\n pandas.DataFrame;\n* fixed an issue with `eli5.explain_prediction` for XGBoost\n models trained on pandas.DataFrame when feature names contain dots;\n* testing improvements.\n\n0.6.1 (2017-05-10)\n------------------\n\n* Better pandas support in `eli5.explain_prediction` for\n xgboost, sklearn, LightGBM and lightning.\n\n0.6 (2017-05-03)\n----------------\n\n* Better scikit-learn Pipeline support in `eli5.explain_weights`:\n it is now possible to pass a Pipeline object directly. Curently only\n SelectorMixin-based transformers, FeatureUnion and transformers\n with ``get_feature_names`` are supported, but users can register other\n transformers; built-in list of supported transformers will be expanded\n in future. See `sklearn-pipelines` for more.\n* Inverting of HashingVectorizer is now supported inside FeatureUnion\n via `eli5.sklearn.unhashing.invert_hashing_and_fit`.\n See `sklearn-unhashing`.\n* Fixed compatibility with Jupyter Notebook >= 5.0.0.\n* Fixed `eli5.explain_weights` for Lasso regression with a single\n feature and no intercept.\n* Fixed unhashing support in Python 2.x.\n* Documentation and testing improvements.\n\n\n0.5 (2017-04-27)\n----------------\n\n* LightGBM_ support: `eli5.explain_prediction` and\n `eli5.explain_weights` are now supported for\n ``LGBMClassifier`` and ``LGBMRegressor``\n (see `eli5 LightGBM support <library-lightgbm>`).\n* fixed text formatting if all weights are zero;\n* type checks now use latest mypy;\n* testing setup improvements: Travis CI now uses Ubuntu 14.04.\n\n.. 
_LightGBM: https://github.com/Microsoft/LightGBM\n\n0.4.2 (2017-03-03)\n------------------\n\n* bug fix: eli5 should remain importable if xgboost is available, but\n not installed correctly.\n\n0.4.1 (2017-01-25)\n------------------\n\n* feature contribution calculation fixed\n for `eli5.xgboost.explain_prediction_xgboost`\n\n\n0.4 (2017-01-20)\n----------------\n\n* `eli5.explain_prediction`: new 'top_targets' argument allows\n to display only predictions with highest or lowest scores;\n* `eli5.explain_weights` allows to customize the way feature importances\n are computed for XGBClassifier and XGBRegressor using ``importance_type``\n argument (see docs for the `eli5 XGBoost support <library-xgboost>`);\n* `eli5.explain_weights` uses gain for XGBClassifier and XGBRegressor\n feature importances by default; this method is a better indication of\n what's going, and it makes results more compatible with feature importances\n displayed for scikit-learn gradient boosting methods.\n\n0.3.1 (2017-01-16)\n------------------\n\n* packaging fix: scikit-learn is added to install_requires in setup.py.\n\n0.3 (2017-01-13)\n----------------\n\n* `eli5.explain_prediction` works for XGBClassifier, XGBRegressor\n from XGBoost and for ExtraTreesClassifier, ExtraTreesRegressor,\n GradientBoostingClassifier, GradientBoostingRegressor,\n RandomForestClassifier, RandomForestRegressor, DecisionTreeClassifier\n and DecisionTreeRegressor from scikit-learn.\n Explanation method is based on\n http://blog.datadive.net/interpreting-random-forests/ .\n* `eli5.explain_weights` now supports tree-based regressors from\n scikit-learn: DecisionTreeRegressor, AdaBoostRegressor,\n GradientBoostingRegressor, RandomForestRegressor and ExtraTreesRegressor.\n* `eli5.explain_weights` works for XGBRegressor;\n* new `TextExplainer <lime-tutorial>` class allows to explain predictions\n of black-box text classification pipelines using LIME algorithm;\n many improvements in `eli5.lime <eli5-lime>`.\n* better ``sklearn.pipeline.FeatureUnion`` support in\n `eli5.explain_prediction`;\n* rendering performance is improved;\n* a number of remaining feature importances is shown when the feature\n importance table is truncated;\n* styling of feature importances tables is fixed;\n* `eli5.explain_weights` and `eli5.explain_prediction` support\n more linear estimators from scikit-learn: HuberRegressor, LarsCV, LassoCV,\n LassoLars, LassoLarsCV, LassoLarsIC, OrthogonalMatchingPursuit,\n OrthogonalMatchingPursuitCV, PassiveAggressiveRegressor,\n RidgeClassifier, RidgeClassifierCV, TheilSenRegressor.\n* text-based formatting of decision trees is changed: for binary\n classification trees only a probability of \"true\" class is printed,\n not both probabilities as it was before.\n* `eli5.explain_weights` supports ``feature_filter`` in addition\n to ``feature_re`` for filtering features, and `eli5.explain_prediction`\n now also supports both of these arguments;\n* 'Weight' column is renamed to 'Contribution' in the output of\n `eli5.explain_prediction`;\n* new ``show_feature_values=True`` formatter argument allows to display\n input feature values;\n* fixed an issue with analyzer='char_wb' highlighting at the start of the\n text.\n\n0.2 (2016-12-03)\n----------------\n\n* XGBClassifier support (from `XGBoost <https://github.com/dmlc/xgboost>`__\n package);\n* `eli5.explain_weights` support for sklearn OneVsRestClassifier;\n* std deviation of feature importances is no longer printed as zero\n if it is not available.\n\n0.1.1 
(2016-11-25)\n------------------\n\n* packaging fixes: require attrs > 16.0.0, fixed README rendering\n\n0.1 (2016-11-24)\n----------------\n\n* HTML output;\n* IPython integration;\n* JSON output;\n* visualization of scikit-learn text vectorizers;\n* `sklearn-crfsuite <https://github.com/TeamHG-Memex/sklearn-crfsuite>`__\n support;\n* `lightning <https://github.com/scikit-learn-contrib/lightning>`__ support;\n* `eli5.show_weights` and `eli5.show_prediction` functions;\n* `eli5.explain_weights` and `eli5.explain_prediction`\n functions;\n* `eli5.lime <eli5-lime>` improvements: samplers for non-text data,\n bug fixes, docs;\n* HashingVectorizer is supported for regression tasks;\n* performance improvements - feature names are lazy;\n* sklearn ElasticNetCV and RidgeCV support;\n* it is now possible to customize formatting output - show/hide sections,\n change layout;\n* sklearn OneVsRestClassifier support;\n* sklearn DecisionTreeClassifier visualization (text-based or svg-based);\n* dropped support for scikit-learn < 0.18;\n* basic mypy type annotations;\n* ``feature_re`` argument allows to show only a subset of features;\n* ``target_names`` argument allows to change display names of targets/classes;\n* ``targets`` argument allows to show a subset of targets/classes and\n change their display order;\n* documentation, more examples.\n\n\n0.0.6 (2016-10-12)\n------------------\n\n* Candidate features in eli5.sklearn.InvertableHashingVectorizer\n are ordered by their frequency, first candidate is always positive.\n\n0.0.5 (2016-09-27)\n------------------\n\n* HashingVectorizer support in explain_prediction;\n* add an option to pass coefficient scaling array; it is useful\n if you want to compare coefficients for features which scale or sign\n is different in the input;\n* bug fix: classifier weights are no longer changed by eli5 functions.\n\n0.0.4 (2016-09-24)\n------------------\n\n* eli5.sklearn.InvertableHashingVectorizer and\n eli5.sklearn.FeatureUnhasher allow to recover feature names for\n pipelines which use HashingVectorizer or FeatureHasher;\n* added support for scikit-learn linear regression models (ElasticNet,\n Lars, Lasso, LinearRegression, LinearSVR, Ridge, SGDRegressor);\n* doc and vec arguments are swapped in explain_prediction function;\n vec can now be omitted if an example is already vectorized;\n* fixed issue with dense feature vectors;\n* all class_names arguments are renamed to target_names;\n* feature name guessing is fixed for scikit-learn ensemble estimators;\n* testing improvements.\n\n0.0.3 (2016-09-21)\n------------------\n\n* support any black-box classifier using LIME (http://arxiv.org/abs/1602.04938)\n algorithm; text data support is built-in;\n* \"vectorized\" argument for sklearn.explain_prediction; it allows to pass\n example which is already vectorized;\n* allow to pass feature_names explicitly;\n* support classifiers without get_feature_names method using auto-generated\n feature names.\n\n0.0.2 (2016-09-19)\n------------------\n\n* 'top' argument of ``explain_prediction``\n can be a tuple (num_positive, num_negative);\n* classifier name is no longer printed by default;\n* added eli5.sklearn.explain_prediction to explain individual examples;\n* fixed numpy warning.\n\n0.0.1 (2016-09-15)\n------------------\n\nPre-release.\n",
"bugtrack_url": null,
"license": "MIT license",
"summary": "Debug machine learning classifiers and explain their predictions",
"version": "0.13.0",
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "b5303be87d2a7ca12a6ec45b8a5f6fd10be9992cc4b239dfb029c80091add477",
"md5": "f0cb57765db4a7d5e66a3c2051c5c073",
"sha256": "ec8459eaaf09d66743c53a7bdb115c6cda7e533d7a5d02a5a8bb717ee843eb37"
},
"downloads": -1,
"filename": "eli5-0.13.0.tar.gz",
"has_sig": false,
"md5_digest": "f0cb57765db4a7d5e66a3c2051c5c073",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 216240,
"upload_time": "2022-05-11T09:37:12",
"upload_time_iso_8601": "2022-05-11T09:37:12.474538Z",
"url": "https://files.pythonhosted.org/packages/b5/30/3be87d2a7ca12a6ec45b8a5f6fd10be9992cc4b239dfb029c80091add477/eli5-0.13.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2022-05-11 09:37:12",
"github": true,
"gitlab": false,
"bitbucket": false,
"github_user": "eli5-org",
"github_project": "eli5",
"travis_ci": false,
"coveralls": true,
"github_actions": true,
"requirements": [
{
"name": "numpy",
"specs": [
[
">=",
"1.9.0"
]
]
},
{
"name": "scipy",
"specs": []
},
{
"name": "singledispatch",
"specs": [
[
">=",
"3.4.0.3"
]
]
},
{
"name": "scikit-learn",
"specs": [
[
">=",
"0.20"
]
]
},
{
"name": "attrs",
"specs": [
[
">",
"16.0.0"
]
]
},
{
"name": "jinja2",
"specs": [
[
">=",
"3.0.0"
]
]
},
{
"name": "pip",
"specs": [
[
">=",
"8.1"
]
]
},
{
"name": "setuptools",
"specs": [
[
">=",
"20.7"
]
]
}
],
"tox": true,
"lcname": "eli5"
}