omnixai

Name: omnixai
Version: 1.3.1
Home page: https://github.com/salesforce/omnixai
Summary: OmniXAI: An Explainable AI Toolbox
Upload time: 2023-07-16 04:58:16
Author: Wenzhuo Yang, Hung Le, Tanmay Shivprasad Laud, Silvio Savarese, Steven C.H. Hoi
Requires Python: >=3.7,<4
License: 3-Clause BSD
Keywords: xai, explainable ai, explanation

<p align="center">
    <br>
    <img src="https://github.com/salesforce/OmniXAI/raw/main/docs/_static/logo_small.png" width="400"/>
    <br>
</p>

# OmniXAI: A Library for Explainable AI
<div align="center">
  <a href="#">
  <img src="https://img.shields.io/badge/Python-3.7, 3.8, 3.9, 3.10-blue">
  </a>
  <a href="https://pypi.python.org/pypi/omnixai">
  <img alt="PyPI" src="https://img.shields.io/pypi/v/omnixai.svg"/>
  </a>
  <a href="https://opensource.salesforce.com/OmniXAI">
  <img alt="Documentation" src="https://github.com/salesforce/OmniXAI/actions/workflows/docs.yml/badge.svg"/>
  </a>
  <a href="https://pepy.tech/project/omnixai">
  <img alt="Downloads" src="https://pepy.tech/badge/omnixai">
  </a>
  <a href="https://arxiv.org/abs/2206.01612">
  <img alt="DOI" src="https://zenodo.org/badge/DOI/10.48550/ARXIV.2206.01612.svg"/>
  </a>
</div>

## Table of Contents
1. [Introduction](#introduction)
2. [Installation](#installation)
3. [Getting Started](#getting-started)
4. [Documentation](https://opensource.salesforce.com/OmniXAI/latest/index.html)
5. [Tutorials](https://opensource.salesforce.com/OmniXAI/latest/tutorials.html)
6. [Deployment](#deployment)
7. [Dashboard Demo](https://sfr-omnixai-demo-cc9e4edb6447.herokuapp.com/)
8. [How to Contribute](https://opensource.salesforce.com/OmniXAI/latest/omnixai.html#how-to-contribute)
9. [Technical Report and Citing OmniXAI](#technical-report-and-citing-omnixai)

## What's New

The latest version includes an experimental GPT explainer. This explainer uses the results
produced by SHAP and MACE to compose the input prompt for ChatGPT, which then analyzes
these results and generates natural-language explanations that give developers a clearer
understanding of the rationale behind the model's predictions.

## Introduction

OmniXAI (short for Omni eXplainable AI) is a Python library for explainable AI (XAI), offering omni-way explainability
and interpretable machine learning capabilities that address many of the pain points in explaining decisions
made by machine learning models in practice. OmniXAI aims to be a one-stop, comprehensive library that makes
explainable AI easy for data scientists, ML researchers and practitioners who need explanations for various
types of data and models at different stages of the ML pipeline:
![alt text](https://github.com/salesforce/OmniXAI/raw/main/docs/_static/ml_pipeline.png)

OmniXAI includes a rich family of explanation methods integrated into a unified interface, which
supports multiple data types (tabular data, images, texts, time series), multiple types of ML models
(traditional ML in scikit-learn and deep learning models in PyTorch/TensorFlow), and a range of diverse explanation
methods, both "model-specific" and "model-agnostic" (such as feature-attribution explanation,
counterfactual explanation, gradient-based explanation, feature visualization, etc.). For practitioners, OmniXAI provides an easy-to-use
unified interface to generate explanations for their applications by writing only a few lines of
code, as well as a GUI dashboard for visualization that offers more insights into the decisions.

The following table shows the supported explanation methods and features in our library.
We will continue improving this library to make it more comprehensive in the future.

|          Method           |  Model Type   | Explanation Type | EDA | Tabular | Image | Text | Timeseries | 
|:-------------------------:|:-------------:|:----------------:|:---:|:-------:|:-----:|:----:|:----------:|
|     Feature analysis      |      NA       |      Global      |  ✅  |         |       |      |      |
|     Feature selection     |      NA       |      Global      |  ✅  |         |       |      |      |
|    Prediction metrics     |   Black box   |      Global      |     |    ✅    |   ✅   | ✅   |  ✅  |
|       Bias metrics        |   Black box   |      Global      |     |    ✅    |       |      |      |
| Partial dependence plots  |   Black box   |      Global      |     |    ✅    |       |      |      |
| Accumulated local effects |   Black box   |      Global      |     |    ✅    |       |      |      |
|   Sensitivity analysis    |   Black box   |      Global      |     |    ✅    |       |      |      |
|  Permutation explanation  |   Black box   |      Global      |     |    ✅    |       |      |      |
|   Feature visualization   |  Torch or TF  |      Global      |     |         |   ✅   |      |      |
|       Feature maps        |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|       GPT explainer       | Black box     |     Local        |     |    ✅    |       |      |      |
|           LIME            |   Black box   |      Local       |     |    ✅    |   ✅   | ✅   |      |
|           SHAP            |  Black box*   |      Local       |     |    ✅    |   ✅   | ✅   |  ✅  |
|          What-if          |   Black box   |      Local       |     |    ✅    |       |      |     |
|    Integrated gradient    |  Torch or TF  |      Local       |     |    ✅    |   ✅   | ✅   |      |
|      Counterfactual       |  Black box*   |      Local       |     |    ✅    |   ✅   | ✅   |  ✅  |
|  Contrastive explanation  |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|   Grad-CAM, Grad-CAM++    |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|         Score-CAM         |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|         Layer-CAM         |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|      Smooth gradient      |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|  Guided backpropagation   |  Torch or TF  |      Local       |     |         |   ✅   |      |      |
|    Learning to explain    |   Black box   |      Local       |     |    ✅    |   ✅   | ✅   |      |
|       Linear models       | Linear models | Global and Local |     |    ✅    |       |      |      |
|        Tree models        |  Tree models  | Global and Local |     |    ✅    |       |      |      |

*SHAP* accepts black-box models for tabular data, PyTorch/TensorFlow models for image data, and transformer
models for text data. *Counterfactual* accepts black-box models for tabular, text and time-series data, and
PyTorch/TensorFlow models for image data.

This [table](https://opensource.salesforce.com/OmniXAI/latest/index.html#comparison-with-competitors)
compares our library with other existing XAI toolkits/libraries in the literature.

**OmniXAI also integrates ChatGPT for generating plain-text explanations given a classification/regression
model on tabular datasets.** The generated explanations may not be 100% accurate, but this explainer is
worth trying (we will continue improving the input prompts).

## Installation

You can install ``omnixai`` from PyPI by calling ``pip install omnixai``. To install from source, clone
the OmniXAI repo, navigate to the root directory, and call ``pip install .``, or ``pip install -e .`` to
install in editable mode. You can also install additional dependencies:

- **For plotting & visualization**: call ``pip install omnixai[plot]``, or ``pip install .[plot]`` from the
  root directory of the repo.
- **For vision tasks**: call ``pip install omnixai[vision]``, or ``pip install .[vision]`` from the
  root directory of the repo.
- **For NLP tasks**: call ``pip install omnixai[nlp]``, or ``pip install .[nlp]`` from the
  root directory of the repo.
- **To install all dependencies**: call ``pip install omnixai[all]``, or ``pip install .[all]`` from the
  root directory of the repo.
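
For example, to install from PyPI with all of the optional dependencies in one step:

```bash
# Installs omnixai together with the plot, vision and nlp extras
pip install omnixai[all]
```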

## Getting Started

For example code and an introduction to the library, see the Jupyter notebooks in
[tutorials](https://opensource.salesforce.com/OmniXAI/latest/tutorials.html), and the guided walkthrough
[here](https://opensource.salesforce.com/OmniXAI/latest/index.html). A dashboard demo can be found [here](https://sfr-omnixai-demo-cc9e4edb6447.herokuapp.com/).

Some examples:
1. [Tabular classification](https://github.com/salesforce/OmniXAI/blob/main/tutorials/tabular_classification.ipynb)
2. [Tabular regression](https://github.com/salesforce/OmniXAI/blob/main/tutorials/tabular_regression.ipynb)
3. [Image classification](https://github.com/salesforce/OmniXAI/blob/main/tutorials/vision.ipynb)
4. [Text classification](https://github.com/salesforce/OmniXAI/blob/main/tutorials/nlp_imdb.ipynb)
5. [Time-series anomaly detection](https://github.com/salesforce/OmniXAI/blob/main/tutorials/timeseries.ipynb)
6. [Vision-language tasks](https://github.com/salesforce/OmniXAI/blob/main/tutorials/vision/gradcam_vlm.ipynb)
7. [Ranking tasks](https://github.com/salesforce/OmniXAI/blob/main/tutorials/tabular/ranking.ipynb)
8. [Feature visualization](https://github.com/salesforce/OmniXAI/blob/main/tutorials/vision/feature_visualization_torch.ipynb)
9. [Check feature maps](https://github.com/salesforce/OmniXAI/blob/main/tutorials/vision/feature_map_torch.ipynb)
10. [GPT explainer for tabular](https://github.com/salesforce/OmniXAI/blob/main/tutorials/tabular/gpt.ipynb)

To get started, we recommend the tutorials in [tutorials](https://opensource.salesforce.com/OmniXAI/latest/tutorials.html).
In general, we recommend using `TabularExplainer`, `VisionExplainer`,
`NLPExplainer` and `TimeseriesExplainer` for tabular, vision, NLP and time-series tasks, respectively, and using
`DataAnalyzer` and `PredictionAnalyzer` for feature analysis and prediction result analysis.
These classes act as factories for the individual explainers supported in OmniXAI, providing a simpler
interface for generating multiple explanations. To generate explanations, you only need to specify

- **The ML model to explain**: e.g., a scikit-learn model, a TensorFlow model, a PyTorch model or a black-box prediction function.
- **The pre-processing function**: i.e., converting raw input features into the model inputs.
- **The post-processing function (optional)**: e.g., converting the model outputs into class probabilities.
- **The explainers to apply**: e.g., SHAP, MACE, Grad-CAM.

Besides using these classes, you can also create a single explainer defined in the `omnixai.explainers` package, e.g.,
`ShapTabular`, `GradCAM`, `IntegratedGradient` or `FeatureVisualizer`.
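
For instance, here is a minimal sketch of creating a single SHAP explainer directly. It reuses `train_data`,
`model` and `transformer` from the income example below, and the parameter names are assumptions to be checked
against the API reference:

```python
from omnixai.explainers.tabular import ShapTabular

# A hedged sketch, not verbatim from the docs: build one SHAP explainer without
# the factory classes. `predict_function` maps a `Tabular` instance to class
# probabilities (parameter names are assumptions).
explainer = ShapTabular(
    training_data=train_data,
    predict_function=lambda z: model.predict_proba(transformer.transform(z)),
)
local_explanations = explainer.explain(test_data[:5])
local_explanations.ipython_plot()
```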

Let's take an income prediction task as an example, using the
[Adult dataset](https://archive.ics.uci.edu/ml/datasets/adult). We recommend using the data class `Tabular`
to represent a tabular dataset. To create a `Tabular` instance given a pandas dataframe, you need to specify
the dataframe, the categorical feature names (if any) and the target/label column name (if any).

```python
import numpy as np
import pandas as pd
from omnixai.data.tabular import Tabular

# Load the dataset
feature_names = [
   "Age", "Workclass", "fnlwgt", "Education",
   "Education-Num", "Marital Status", "Occupation",
   "Relationship", "Race", "Sex", "Capital Gain",
   "Capital Loss", "Hours per week", "Country", "label"
]
df = pd.DataFrame(
  np.genfromtxt('adult.data', delimiter=', ', dtype=str),
  columns=feature_names
)
tabular_data = Tabular(
   df,
   categorical_columns=[feature_names[i] for i in [1, 3, 5, 6, 7, 8, 9, 13]],
   target_column='label'
)
```

The package `omnixai.preprocessing` provides several useful preprocessing functions
for a `Tabular` instance. `TabularTransform` is a special transform designed for processing tabular data.
By default, it converts categorical features into one-hot encodings and keeps continuous-valued features unchanged.
The ``transform`` method of `TabularTransform` transforms a `Tabular` instance into a numpy array.
If the `Tabular` instance has a target/label column, the last column of the numpy array
will be the target/label. You can apply your own customized preprocessing functions instead of `TabularTransform`.
After data preprocessing, let's train an XGBoost classifier for this task.

```python
import sklearn.model_selection
import xgboost
from omnixai.preprocessing.tabular import TabularTransform

# Data preprocessing
transformer = TabularTransform().fit(tabular_data)
class_names = transformer.class_names
x = transformer.transform(tabular_data)
# Split into training and test datasets
train, test, train_labels, test_labels = \
    sklearn.model_selection.train_test_split(x[:, :-1], x[:, -1], train_size=0.80)
# Train an XGBoost model (the last column of `x` is the label column after transformation)
model = xgboost.XGBClassifier(n_estimators=300, max_depth=5)
model.fit(train, train_labels)
# Convert the transformed data back to Tabular instances
train_data = transformer.invert(train)
test_data = transformer.invert(test)
```

To initialize `TabularExplainer`, the following parameters need to be set:

- ``explainers``: The names of the explainers to apply, e.g., ["lime", "shap", "mace", "pdp"].
- ``data``: The data used to initialize the explainers, typically the training dataset of the
  ML model. If the training dataset is too large, ``data`` can be a subsample of it obtained via
  `omnixai.sampler.tabular.Sampler.subsample`.
- ``model``: The ML model to explain, e.g., a scikit-learn model, a TensorFlow model or a PyTorch model.
- ``preprocess``: The preprocessing function converting the raw inputs (a `Tabular` instance) into the inputs of ``model``.
- ``postprocess`` (optional): The postprocessing function transforming the outputs of ``model`` to a
  user-specific form, e.g., the predicted probability for each class. The output of `postprocess` should be a numpy array.
- ``mode``: The task type, e.g., "classification" or "regression".

The preprocessing function takes a `Tabular` instance as its input and outputs the processed features that
the ML model consumes. In this example, we simply call ``transformer.transform``. If you apply customized transforms
to pandas dataframes, the preprocessing function takes the form `lambda z: some_transform(z.to_pd())`. If the output
of ``model`` is not a numpy array, ``postprocess`` needs to be set to convert it into one.
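
For instance, a minimal sketch of a ``postprocess`` function for a PyTorch model that outputs logits (illustrative
only, not tied to the XGBoost example above):

```python
import torch.nn.functional as F

# Illustrative only: turn a PyTorch model's logit outputs into a numpy array of
# class probabilities, since `postprocess` must return a numpy array.
postprocess = lambda logits: F.softmax(logits, dim=1).detach().numpy()
```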

```python
from omnixai.explainers.tabular import TabularExplainer
# Initialize a TabularExplainer
explainer = TabularExplainer(
  explainers=["lime", "shap", "mace", "pdp", "ale"], # The explainers to apply
  mode="classification",                             # The task type
  data=train_data,                                   # The data for initializing the explainers
  model=model,                                       # The ML model to explain
  preprocess=lambda z: transformer.transform(z),     # Converts raw features into the model inputs
  params={
     "mace": {"ignored_features": ["Sex", "Race", "Relationship", "Capital Loss"]}
  }                                                  # Additional parameters
)
```

In this example, LIME, SHAP and MACE generate local explanations while PDP (partial dependence plot)
generates global explanations. ``explainer.explain`` returns the local explanations generated by the
three methods given the test instances, and ``explainer.explain_global`` returns the global explanations
generated by PDP. `TabularExplainer` hides all the details behind the explainers, so we can simply call
these two methods to generate explanations.

```python
# Generate explanations
test_instances = test_data[:5]
local_explanations = explainer.explain(X=test_instances)
global_explanations = explainer.explain_global(
    params={"pdp": {"features": ["Age", "Education-Num", "Capital Gain",
                                 "Capital Loss", "Hours per week", "Education",
                                 "Marital Status", "Occupation"]}}
)
```

Similarly, we create a `PredictionAnalyzer` for computing performance metrics for this classification task. 
To initialize `PredictionAnalyzer`, the following parameters need to be set:

- `mode`: The task type, e.g., "classification" or "regression".
- `test_data`: The test dataset, which should be a `Tabular` instance.
- `test_targets`: The test labels or targets. For classification, ``test_targets`` should be integers 
  (processed by a LabelEncoder) and match the class probabilities returned by the ML model.
- `preprocess`: The preprocessing function converting the raw data (a `Tabular` instance) into the inputs of `model`.
- `postprocess` (optional): The postprocessing function transforming the outputs of ``model`` to a user-specific form, 
  e.g., the predicted probability for each class. The output of `postprocess` should be a numpy array.

```python
from omnixai.explainers.prediction import PredictionAnalyzer

analyzer = PredictionAnalyzer(
    mode="classification",
    test_data=test_data,                           # The test dataset (a `Tabular` instance)
    test_targets=test_labels,                      # The test labels (a numpy array)
    model=model,                                   # The ML model
    preprocess=lambda z: transformer.transform(z)  # Converts raw features into the model inputs
)
prediction_explanations = analyzer.explain()
```

Given the generated explanations, we can launch a dashboard (a Dash app) for visualization by setting the test
instances, the local explanations, the global explanations, the prediction metrics, the class names, and
(optionally) additional visualization parameters. If you want "what-if" analysis, set the ``explainer`` parameter
when initializing the dashboard. For "what-if" analysis, OmniXAI also lets you set a second explainer
if you want to compare different models.

```python
from omnixai.visualization.dashboard import Dashboard
# Launch a dashboard for visualization
dashboard = Dashboard(
   instances=test_instances,                        # The instances to explain
   local_explanations=local_explanations,           # Set the local explanations
   global_explanations=global_explanations,         # Set the global explanations
   prediction_explanations=prediction_explanations, # Set the prediction metrics
   class_names=class_names,                         # Set class names
   explainer=explainer                              # The created TabularExplainer for what-if analysis
)
dashboard.show()                                    # Launch the dashboard
```

After opening the Dash app in the browser, we will see a dashboard showing the explanations:
![alt text](https://github.com/salesforce/OmniXAI/raw/main/docs/_static/demo.gif)

You can also use the GPT explainer to generate explanations in text for tabular models:

```python
explainer = TabularExplainer(
  explainers=["gpt"],                                # The GPT explainer to apply
  mode="classification",                             # The task type
  data=train_data,                                   # The data for initializing the explainers
  model=model,                                       # The ML model to explain
  preprocess=lambda z: transformer.transform(z),     # Converts raw features into the model inputs
  params={
     "gpt": {"apikey": "xxxx"}
  }                                                  # Set the OpenAI API KEY
)
local_explanations = explainer.explain(X=test_instances)
```

For vision tasks, the same interface is used to create explainers and generate explanations. 
Let's take an image classification model as an example.
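
The `model`, `preprocess` and `postprocess` names below are placeholders. A hedged sketch for a torchvision
ResNet50, assuming the input is an OmniXAI `Image` batch that is iterable and whose items expose `to_pil()`
(as in the vision tutorials), might look like this:

```python
import torch
from torchvision import models, transforms

# Illustrative setup; the exact preprocessing is model-dependent.
model = models.resnet50(pretrained=True).eval()
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Assumption: iterating over an OmniXAI `Image` batch yields items with `to_pil()`.
preprocess = lambda images: torch.stack([transform(img.to_pil()) for img in images])
# Convert logits into class probabilities.
postprocess = lambda logits: torch.nn.functional.softmax(logits, dim=1)
```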

```python
from omnixai.explainers.vision import VisionExplainer
from omnixai.visualization.dashboard import Dashboard

explainer = VisionExplainer(
    explainers=["gradcam", "lime", "ig", "ce", "feature_visualization"],
    mode="classification",
    model=model,                   # An image classification model, e.g., ResNet50
    preprocess=preprocess,         # The preprocessing function
    postprocess=postprocess,       # The postprocessing function
    params={
        # Set the target layer for GradCAM
        "gradcam": {"target_layer": model.layer4[-1]},
        # Set the objective for feature visualization
        "feature_visualization": 
          {"objectives": [{"layer": model.layer4[-3], "type": "channel", "index": list(range(6))}]}
    },
)
# Generate explanations of GradCAM, LIME, IG and CE
local_explanations = explainer.explain(test_img)
# Generate explanations of feature visualization
global_explanations = explainer.explain_global()
# Launch the dashboard
dashboard = Dashboard(
    instances=test_img,
    local_explanations=local_explanations,
    global_explanations=global_explanations
)
dashboard.show()
```

The following figure shows the dashboard of these explanations:
![alt text](https://github.com/salesforce/OmniXAI/raw/main/docs/_static/demo_vision.gif)

For NLP tasks and time-series forecasting/anomaly detection, OmniXAI also provides the same interface
to generate and visualize explanations. This figure shows a dashboard example of text classification
and time-series anomaly detection:
![alt text](https://github.com/salesforce/OmniXAI/raw/main/docs/_static/demo_nlp_ts.gif)

## Deployment

The explainers in OmniXAI can be easily deployed via [BentoML](https://github.com/bentoml/BentoML).
BentoML is a popular open-source unified model serving framework that supports multiple platforms,
including AWS, GCP and Heroku. We implemented BentoML-format interfaces for OmniXAI so that users
need only a few lines of code to deploy their selected explainers.

Let's take the income prediction task as an example. Given the trained model and the initialized explainer, 
you only need to save the explainer in the BentoML local model store:

```python
from omnixai.explainers.tabular import TabularExplainer
from omnixai.deployment.bentoml.omnixai import save_model

explainer = TabularExplainer(
  explainers=["lime", "shap", "mace", "pdp", "ale"],
  mode="classification",
  data=train_data,
  model=model,
  preprocess=lambda z: transformer.transform(z),
  params={
     "mace": {"ignored_features": ["Sex", "Race", "Relationship", "Capital Loss"]}
  }
)
save_model("tabular_explainer", explainer)
```

Then create a file (e.g., `service.py`) for the ML service code:

```python
from omnixai.deployment.bentoml.omnixai import init_service

svc = init_service(
    model_tag="tabular_explainer:latest",
    task_type="tabular",
    service_name="tabular_explainer"
)
```

The `init_service` function defines two API endpoints, i.e., `/predict` for model predictions and `/explain` for
generating explanations. You can start an API server locally to test the service code above:

```bash
bentoml serve service:svc --reload
```

The endpoints can be accessed locally:

```python
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

data = '["39", "State-gov", "77516", "Bachelors", "13", "Never-married", ' \
       '"Adm-clerical", "Not-in-family", "White", "Male", "2174", "0", "40", "United-States"]'

# Test the prediction endpoint
prediction = requests.post(
    "http://0.0.0.0:3000/predict",
    headers={"content-type": "application/json"},
    data=data
).text

# Test the explanation endpoint
m = MultipartEncoder(
    fields={
        "data": data,
        "params": '{"lime": {"y": [0]}}',
    }
)
result = requests.post(
    "http://0.0.0.0:3000/explain",
    headers={"Content-Type": m.content_type},
    data=m
).text

# Parse the results
from omnixai.explainers.base import AutoExplainerBase
exp = AutoExplainerBase.parse_explanations_from_json(result)
for name, explanation in exp.items():
    explanation.ipython_plot()
```

You can build a Bento for deployment by following the steps in the
[BentoML repo](https://github.com/bentoml/BentoML#how-it-works). For more examples, please
check [Tabular](https://github.com/salesforce/OmniXAI/tree/main/omnixai/tests/deployment/bentoml/tabular),
[Vision](https://github.com/salesforce/OmniXAI/tree/main/omnixai/tests/deployment/bentoml/vision) and
[NLP](https://github.com/salesforce/OmniXAI/tree/main/omnixai/tests/deployment/bentoml/nlp).

## How to Contribute

We welcome contributions from the open-source community to improve the library!

To add a new explanation method/feature into the library, please follow the template and steps demonstrated in this 
[documentation](https://opensource.salesforce.com/OmniXAI/latest/omnixai.html#how-to-contribute).

## Technical Report and Citing OmniXAI
You can find more details in our technical report: [https://arxiv.org/abs/2206.01612](https://arxiv.org/abs/2206.01612)

If you're using OmniXAI in your research or applications, please cite using this BibTeX:
```
@article{wenzhuo2022-omnixai,
  author    = {Wenzhuo Yang and Hung Le and Silvio Savarese and Steven Hoi},
  title     = {OmniXAI: A Library for Explainable AI},
  year      = {2022},
  doi       = {10.48550/ARXIV.2206.01612},
  url       = {https://arxiv.org/abs/2206.01612},
  archivePrefix = {arXiv},
  eprint    = {2206.01612},
}
```

## Contact Us
If you have any questions, comments or suggestions, please do not hesitate to contact us at omnixai@salesforce.com.

## License
[BSD 3-Clause License](LICENSE)