lux-explainer

Name: lux-explainer
Version: 1.2.0
Summary: Universal Local Rule-based Explainer
Upload time: 2024-04-30 10:44:18
Requires Python: >=3.8
License: MIT (Copyright (c) 2021 sbobek)
Keywords: xai, explainability, model-agnostic, rule-based
Requirements: none recorded
# LUX (Local Universal Rule-based Explainer)
## Main features
  <img align="right"  src="https://raw.githubusercontent.com/sbobek/lux/main/pix/lux-logo.png" width="200">
  
  * Model-agnostic, rule-based and visual local explanations of black-box ML models
  * Integrated counterfactual explanations
  * Rule-based explanations (that are executable at the same time)
  * Oblique-tree backbone, which allows linear decision boundaries to be explained more reliably
  * Integration with [Shapley values](https://shap.readthedocs.io/en/latest/) or [Lime](https://github.com/marcotcr/lime) importances (or any other explainer that produces feature importances), which helps generate high-quality rules
  
## About
The LUX workflow is as follows:
  - You train an arbitrarily selected machine learning model on your training dataset. The only requirement is that the model is able to output probabilities.
  
  ![](https://raw.githubusercontent.com/sbobek/lux/main/pix/decbound-point.png)
  - Next, you generate a neighbourhood of the instance you wish to explain and feed this neighbourhood to your model.
  
  ![](https://raw.githubusercontent.com/sbobek/lux/main/pix/neighbourhood.png)
  - You obtain a decision stump that locally explains the model and is executable by the [HeaRTDroid](https://heartdroid.re) inference engine.
  
  ![](https://raw.githubusercontent.com/sbobek/lux/main/pix/hmrp.png)
  - You can obtain an explanation for a selected instance (the number after `#` represents the confidence of the explanation):
  ```
  ['IF x2 < 0.01 THEN class = 1 # 0.9229009792453621']
  ```
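The neighbourhood-generation step above can be sketched generically. This is an illustrative simplification only, not LUX's internal sampling (which is more sophisticated); the function name and the Gaussian-perturbation scheme are assumptions made for the example:

``` python
import numpy as np

def gaussian_neighbourhood(instance, scale, size=20, seed=0):
    """Sample points around `instance` by adding Gaussian noise.

    A toy stand-in for neighbourhood generation: each feature is
    perturbed with noise whose spread is given per feature in `scale`.
    """
    rng = np.random.default_rng(seed)
    instance = np.asarray(instance, dtype=float)
    noise = rng.normal(0.0, scale, size=(size, instance.shape[0]))
    return instance + noise

# Example: 20 perturbed copies of a 4-feature instance; the resulting
# (20, 4) array would then be labelled with the model's predict_proba.
points = gaussian_neighbourhood([5.1, 3.5, 1.4, 0.2],
                                scale=[0.5, 0.3, 0.4, 0.2])
```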

## Installation


```
pip install lux-explainer
```
If you want to use LUX with [JupyterLab](https://jupyter.org/), install it and run:

```
pip install jupyterlab
jupyter lab
```

**Caution**: If you want to use LUX with categorical data, it is advised to use the [multiprocessing gower distance](https://github.com/sbobek/gower/tree/add-multiprocessing) package (due to the high computational complexity of the problem).

## Usage

  * For complete usage see [lux_usage_example.ipynb](https://raw.githubusercontent.com/sbobek/lux/main/examples/lux_usage_example.ipynb)
  * For a usage example with Shap integration, see [lux_usage_example_shap.ipynb](https://raw.githubusercontent.com/sbobek/lux/main/examples/lux_usage_example_shap.ipynb)

### Simple example on Iris dataset

``` python
from lux.lux import LUX
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import svm
import numpy as np
import pandas as pd
# import some data to play with
iris = datasets.load_iris()
features = ['sepal_length','sepal_width','petal_length','petal_width']
target = 'class'

# create dataframe with column names as strings (LUX accepts only DataFrames with string column names)
df_iris = pd.DataFrame(iris.data,columns=features)
df_iris[target] = iris.target

#train classifier
train, test = train_test_split(df_iris)
clf = svm.SVC(probability=True)
clf.fit(train[features],train[target])
clf.score(test[features],test[target])

# pick some instance from the dataset
iris_instance = train[features].sample(1).values
iris_instance

# train LUX on a neighbourhood of 20 instances
lux = LUX(predict_proba=clf.predict_proba, neighborhood_size=20, max_depth=2, node_size_limit=1, grow_confidence_threshold=0)
lux.fit(train[features], train[target], instance_to_explain=iris_instance,class_names=[0,1,2])

#see the justification of the instance being classified for a given class
lux.justify(np.array(iris_instance))

```

The above code should give you an answer like the following (the exact rule depends on the sampled instance):
```
['IF petal_length >= 5.15 THEN class = 2 # 0.9833409059468439\n']
```
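Explanations come back as plain strings of the form `IF ... THEN ... # confidence`, so the parts can be pulled out with simple string handling. The helper below is not part of the LUX API, just a small illustrative parser:

``` python
def parse_rule(rule: str) -> dict:
    """Split a rule string 'IF <condition> THEN <conclusion> # <confidence>'
    into its condition, conclusion, and numeric confidence."""
    body, conf = rule.rsplit('#', 1)          # confidence follows the last '#'
    condition, conclusion = body.split('THEN')
    return {
        'condition': condition.replace('IF', '', 1).strip(),
        'conclusion': conclusion.strip(),
        'confidence': float(conf),            # float() tolerates surrounding whitespace
    }

parsed = parse_rule('IF petal_length >= 5.15 THEN class = 2 # 0.9833409059468439\n')
# parsed['conclusion'] == 'class = 2', parsed['confidence'] == 0.9833409059468439
```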

Alternatively, you can obtain a counterfactual explanation for a given instance by calling:

``` python
cf = lux.counterfactual(np.array(iris_instance), train[features], counterfactual_representative='nearest', topn=1)[0]
print(f"Counterfactual for {iris_instance} to change from class {lux.predict(np.array(iris_instance))[0]} to class {cf['prediction']}: \n{cf['counterfactual']}")
```
The result from the above query should look as follows:

```
Counterfactual for [[7.7 2.6 6.9 2.3]] to change from class 2 to class 1: 
sepal_length    6.9
sepal_width     3.1
petal_length    5.1
petal_width     2.3
```
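Since the counterfactual is returned as feature values, it is easy to see which features actually changed relative to the original instance. The helper below is hypothetical (not part of LUX) and assumes both instances are mappings from feature name to value:

``` python
def changed_features(original, counterfactual, tol=1e-9):
    """Return {feature: (old, new)} for every feature whose value differs."""
    return {f: (original[f], counterfactual[f])
            for f in original
            if abs(original[f] - counterfactual[f]) > tol}

orig = {'sepal_length': 7.7, 'sepal_width': 2.6,
        'petal_length': 6.9, 'petal_width': 2.3}
cf   = {'sepal_length': 6.9, 'sepal_width': 3.1,
        'petal_length': 5.1, 'petal_width': 2.3}
# changed_features(orig, cf): three features differ; petal_width is unchanged
```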

### Rule-based model for local uncertain explanations
You can obtain the whole rule-based model for the local uncertain explanation that LUX generated for a given instance by running the following code:

``` python
# have a look at the entire rule-based model that can be executed with https://heartdroid.re
print(lux.to_HMR())
```

This will generate a model that can later be executed by [HeaRTDroid](https://heartdroid.re):

```
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TYPES DEFINITIONS %%%%%%%%%%%%%%%%%%%%%%%%%%

xtype [
 name: petal_length, 
base:numeric,
domain : [-100000 to 100000]].
xtype [
 name: class, 
base:symbolic,
 domain : [1,0,2]].

%%%%%%%%%%%%%%%%%%%%%%%%% ATTRIBUTES DEFINITIONS %%%%%%%%%%%%%%%%%%%%%%%%%%
xattr [ name: petal_length,
 type:petal_length,
 class:simple,
 comm:out ].
xattr [ name: class,
 type:class,
 class:simple,
 comm:out ].

%%%%%%%%%%%%%%%%%%%%%%%% TABLE SCHEMAS DEFINITIONS %%%%%%%%%%%%%%%%%%%%%%%%
 xschm tree : [petal_length]==> [class].
xrule tree/0:
[petal_length  lt 3.05] ==> [class set 0]. # 0.9579256691362875
xrule tree/1:
[petal_length  gte 3.05, petal_length  lt 5.15] ==> [class set 1]. # 0.8398308552545226
xrule tree/2:
[petal_length  gte 3.05, petal_length  gte 5.15] ==> [class set 2]. # 0.9833409059468439
```
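The three `xrule` entries above describe a simple interval split on `petal_length`. Transcribed by hand into plain Python (with the confidences from the listing), the same decision logic reads:

``` python
def classify_petal_length(petal_length: float):
    """Mirror of xrule tree/0..2 above: returns (predicted class, confidence)."""
    if petal_length < 3.05:
        return 0, 0.9579256691362875
    elif petal_length < 5.15:      # gte 3.05 and lt 5.15
        return 1, 0.8398308552545226
    else:                          # gte 3.05 and gte 5.15
        return 2, 0.9833409059468439

# classify_petal_length(6.9) -> (2, 0.9833409059468439)
```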
### Visualization of the local uncertain explanation
Similarly, you can obtain a visualization of the rule-based model in the form of a decision tree by executing the following code.

``` python
from IPython.display import Image

# export the tree to Graphviz DOT format, then render it to PNG with the dot tool
lux.uid3.tree.save_dot('tree.dot', fmt='.2f', visual=True, background_data=train)
!dot -Tpng tree.dot > tree.png
Image('tree.png')
```

The code should yield something like the following (depending on the instance that was selected):

![](https://raw.githubusercontent.com/sbobek/lux/main/pix/utree.png)

# Cite this work

```
@misc{bobek2023local,
      title={Local Universal Rule-based Explanations}, 
      author={Szymon Bobek and Grzegorz J. Nalepa},
      year={2023},
      eprint={2310.14894},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```

            
