explainableai 0.10 (PyPI package metadata)

- **Name**: explainableai
- **Version**: 0.10
- **Home page**: https://github.com/ombhojane/explainableai
- **Summary**: A comprehensive package for Explainable AI and model interpretation
- **Author**: Om Bhojane, Palak Boricha
- **Requires Python**: >=3.7
- **Uploaded**: 2024-10-13 06:48:03
- **Keywords**: explainableai, explainable ai, interpretable ml, model interpretability, feature importance, shap, lime, model explanation, ai transparency, machine learning, deep learning, artificial intelligence, data science, model insights, feature analysis, model debugging, ai ethics, responsible ai, xai, model visualization
- **Requirements**: numpy, pandas, scikit-learn, shap, matplotlib, seaborn, plotly, ipywidgets, lime, reportlab, google-generativeai, python-dotenv, scipy, pillow, xgboost, colorama, dask

# ExplainableAI

[![PyPI version](https://img.shields.io/pypi/v/explainableai.svg)](https://pypi.org/project/explainableai/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python Versions](https://img.shields.io/pypi/pyversions/explainableai.svg)](https://pypi.org/project/explainableai/)
[![Downloads](https://pepy.tech/badge/explainableai)](https://pepy.tech/project/explainableai)
[![GitHub stars](https://img.shields.io/github/stars/ombhojane/explainableai.svg)](https://github.com/ombhojane/explainableai/stargazers)

ExplainableAI is a Python package that combines standard machine learning workflows with explainable AI methods such as SHAP and feature importance analysis, and adds LLM-powered, human-readable explanations of model results.

## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Usage Examples](#usage-examples)
- [Environment Variables](#environment-variables)
- [API Reference](#api-reference)
- [Running Locally](#running-locally)
- [Contributing](#contributing)
- [Credits](#credits)
- [Acknowledgements](#acknowledgements)
- [License](#license)

## Features

- **Automated Exploratory Data Analysis (EDA)**: Gain quick insights into your dataset.
- **Model Performance Evaluation**: Comprehensive metrics for model assessment.
- **Feature Importance Analysis**: Understand which features drive your model's decisions.
- **SHAP (SHapley Additive exPlanations) Integration**: Deep insights into model behavior.
- **Interactive Visualizations**: Explore model insights through intuitive charts and graphs.
- **LLM-Powered Explanations**: Get human-readable explanations for model results and individual predictions.
- **Automated Report Generation**: Create professional PDF reports with a single command.
- **Multi-Model Support**: Compare and analyze multiple ML models simultaneously.
- **Easy-to-Use Interface**: Simple API for model fitting, analysis, and prediction.

## Installation

Install ExplainableAI using pip:

```bash
pip install explainableai
```

## Quick Start

```python
from explainableai import XAIWrapper
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Load sample dataset
X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize XAIWrapper
xai = XAIWrapper()

# Fit and analyze model
model = RandomForestClassifier(n_estimators=100, random_state=42)
xai.fit(model, X_train, y_train)
results = xai.analyze(X_test, y_test)

# Print LLM explanation
print(results['llm_explanation'])

# Generate report
xai.generate_report('iris_analysis.pdf')
```

## Usage Examples

### Multi-Model Comparison

```python
from explainableai import XAIWrapper
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
import pandas as pd

# Load your dataset
df = pd.read_csv('your_dataset.csv')
X = df.drop(columns=['target_column'])
y = df['target_column']

# Create models
models = {
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=42),
    'Logistic Regression': LogisticRegression(max_iter=1000),
    'XGBoost': XGBClassifier(n_estimators=100, random_state=42)
}

# Initialize XAIWrapper
xai = XAIWrapper()

# Fit and analyze models
xai.fit(models, X, y)
results = xai.analyze()

# Print LLM explanation of results
print(results['llm_explanation'])

# Generate a comprehensive report
xai.generate_report('multi_model_comparison.pdf')
```

### Explaining Individual Predictions

```python
# ... (after fitting the model)

# Make a prediction with explanation
new_data = {...}  # Dictionary of feature values
prediction, probabilities, explanation = xai.explain_prediction(new_data)

print(f"Prediction: {prediction}")
print(f"Probabilities: {probabilities}")
print(f"Explanation: {explanation}")
```
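
For instance, continuing the iris Quick Start above, `new_data` could be a plain feature-name to value mapping; the keys below come from `load_iris(return_X_y=True, as_frame=True)` and the values are illustrative only:

```python
# Illustrative input for the iris example; keys must match the training columns.
new_data = {
    'sepal length (cm)': 5.1,
    'sepal width (cm)': 3.5,
    'petal length (cm)': 1.4,
    'petal width (cm)': 0.2,
}

prediction, probabilities, explanation = xai.explain_prediction(new_data)
print(f"Prediction: {prediction}")
```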

## Environment Variables

To use the LLM-powered explanations, you need to set up the following environment variable:

- `GEMINI_API_KEY`: Your [Google Gemini API key](https://ai.google.dev/gemini-api/docs/api-key)

Add this to your `.env` file:

```
GEMINI_API_KEY=your_api_key_here
```
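
If your entry point does not pick up the `.env` file automatically, a minimal sketch for loading it with python-dotenv (already listed in the package's requirements) is:

```python
# A minimal sketch, assuming GEMINI_API_KEY lives in a local .env file
# rather than being exported in the shell.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ

if not os.getenv("GEMINI_API_KEY"):
    raise RuntimeError("GEMINI_API_KEY is not set; LLM explanations will fail")
```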

## API Reference

For detailed API documentation, please refer to our [API Reference](https://pypi.org/project/explainableai/).

## Running Locally

To run ExplainableAI locally:

1. Clone the repository:

   ```bash
   git clone https://github.com/ombhojane/explainableai.git
   cd explainableai
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up your environment variables (see [Environment Variables](#environment-variables)).

4. Run the example script (a sample invocation is shown after this list):
   ```bash
   python main.py [dataset] [target_column]
   ```
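
For example, with a hypothetical CSV file and target column (both names are placeholders, not files shipped with the repository):

```bash
# Hypothetical file and column names; substitute your own dataset.
python main.py data/my_dataset.csv target
```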

## Contributing

We welcome contributions to ExplainableAI! Please see our [Contributing Guidelines](CONTRIBUTING.md) for more information on how to get started.

## Credits

ExplainableAI was created by [Om Bhojane](https://github.com/ombhojane). Special thanks to the following contributors for their support.

<p align="left">
<a href="https://github.com/ombhojane/explainableai/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=ombhojane/explainableai" alt="Contributors" />
</a>
</p>

## Acknowledgements

ExplainableAI builds upon several open-source libraries, including:

- [scikit-learn](https://scikit-learn.org/)
- [SHAP](https://github.com/slundberg/shap)
- [Matplotlib](https://matplotlib.org/)
- [XGBoost](https://xgboost.readthedocs.io/)

We are grateful to the maintainers and contributors of these projects.

## License

ExplainableAI is released under the [MIT License](LICENSE).

            
