signxai2


Name: signxai2
Version: 0.13.0
Home page: None
Summary: A comprehensive explainable AI library supporting both TensorFlow and PyTorch with unified API and advanced XAI methods including SIGN, LRP, and Grad-CAM. Authored by Nils Gumpfer, Jana Fischer and Alexander Paul.
Upload time: 2025-07-29 06:15:25
Maintainer: None
Docs URL: None
Author: None
Requires Python: <3.11,>=3.9
License: None
Keywords: explainable ai, xai, interpretability, machine learning, deep learning, tensorflow, pytorch, lrp, grad-cam, sign, attribution, saliency
Requirements: No requirements were recorded.
# SIGNed explanations: Unveiling relevant features by reducing bias

This repository and Python package are an extended version of the Python package published with the following journal article:
https://doi.org/10.1016/j.inffus.2023.101883

If you use the code from this repository in your work, please cite:
```bibtex
 @article{Gumpfer2023SIGN,
    title = {SIGNed explanations: Unveiling relevant features by reducing bias},
    author = {Nils Gumpfer and Joshua Prim and Till Keller and Bernhard Seeger and Michael Guckert and Jennifer Hannig},
    journal = {Information Fusion},
    pages = {101883},
    year = {2023},
    issn = {1566-2535},
    doi = {https://doi.org/10.1016/j.inffus.2023.101883},
    url = {https://www.sciencedirect.com/science/article/pii/S1566253523001999}
}
```

<img src="https://ars.els-cdn.com/content/image/1-s2.0-S1566253523001999-ga1_lrg.jpg" title="Graphical Abstract" width="900px"/>

## Requirements

- Python 3.9 or 3.10 (Python 3.11+ is not supported)
- TensorFlow >=2.8.0,<=2.12.1
- PyTorch >=1.10.0
- NumPy, Matplotlib, SciPy

## 🚀 Installation

### Install from PyPI
```bash
pip install signxai2
```

**Note:** This installs the complete package with both TensorFlow and PyTorch support. Ensure you're using Python 3.9 or 3.10 before installation.
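
**Quick check:** a minimal way to verify the interpreter version before running `pip install` (plain standard-library Python, not part of the signxai2 API):

```python
# Pre-install sanity check for the <3.11,>=3.9 Python constraint.
import sys

if not ((3, 9) <= sys.version_info[:2] <= (3, 10)):
    raise SystemExit(f"signxai2 requires Python 3.9 or 3.10, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```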

### Install from source

#### Option 1: Full installation (both frameworks)
```bash
git clone https://github.com/IRISlaboratory/signxai2.git
cd signxai2
pip install -e .
```

#### Option 2: Framework-specific installation
For users who only need support for one of the frameworks:

**TensorFlow only:**
```bash
git clone https://github.com/IRISlaboratory/signxai2.git
cd signxai2
pip install -r requirements/common.txt -r requirements/tensorflow.txt
```

**PyTorch only:**
```bash
git clone https://github.com/IRISlaboratory/signxai2.git
cd signxai2
pip install -r requirements/common.txt -r requirements/pytorch.txt
```

Note: Framework-specific installation is only available when installing from source. The PyPI package includes both frameworks for seamless compatibility.
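
After a framework-specific install, you can confirm which backend(s) your environment actually provides with a generic check like the one below (standard-library `importlib` only; this is not a signxai2 helper):

```python
# Report which deep-learning backends are importable in the current environment.
from importlib.util import find_spec

for backend in ("tensorflow", "torch"):
    print(f"{backend}: {'available' if find_spec(backend) else 'not installed'}")
```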

## Setup of Git LFS

Before you get started, please set up [Git LFS](https://git-lfs.github.com/) to download the large files in this repository. This is required to access the pre-trained models and example data.

```bash
git lfs install
```

## 📦 Load Data and Documentation

After installation, run the setup script to download documentation, examples, and sample data:

```bash
bash ./prepare.sh
```

This will download:
- 📚 Full documentation (viewable at `docs/index.html`)
- 📝 Example scripts and notebooks (`examples/`)  
- 📊 Sample ECG data and images (`examples/data/`)
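
To confirm the download completed, you can check for the paths listed above (plain Python; run it from the repository root, or adjust the paths accordingly):

```python
# Verify that prepare.sh placed the documented files and directories.
from pathlib import Path

for expected in ("docs/index.html", "examples", "examples/data"):
    print(f"{expected}: {'found' if Path(expected).exists() else 'missing'}")
```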


## Examples

To get started with SignXAI2 methods, please follow the example tutorials in `examples/tutorials/`.

## Features

- Support for **TensorFlow** and **PyTorch** models
- Consistent API across frameworks
- Wide range of explanation methods:
  - Gradient-based: Vanilla gradient, Integrated gradients, SmoothGrad
  - Class activation maps: Grad-CAM
  - Guided backpropagation
  - Layer-wise Relevance Propagation (LRP)
  - Sign-based thresholding for binary relevance maps


### Development version

To install with development dependencies for testing and documentation:

```shell
pip install signxai2[dev]
```

Or from source:
```shell
git clone https://github.com/IRISlaboratory/signxai2.git
cd signxai2
pip install -e ".[dev]"
```

## Project Structure

  - signxai/: Main package with unified API and framework detection
  - signxai/tf_signxai/: TensorFlow implementation using modified iNNvestigate
  - signxai/torch_signxai/: PyTorch implementation using zennit with custom hooks
  - examples/tutorials/: Tutorials for both frameworks covering images and time series
  - examples/comparison/: Implementation for reproducing results from the paper
  - utils/: Helper scripts for model conversion (tf -> torch) and data preprocessing


## Usage

Please follow the example tutorials in the `examples/tutorials/` directory to get started with SignXAI2 methods. The examples cover various use cases, including images and time series analysis.
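
For orientation only, the tutorials revolve around selecting one of the method names listed under **Methods** below and applying it to a model and an input. The sketch here is hypothetical: the entry point `calculate_relevancemap` and its signature are assumptions carried over from the original signxai package, so check `examples/tutorials/` for the actual signxai2 API.

```python
# Hypothetical sketch of the method-by-name workflow -- NOT the verified signxai2 API.
# `calculate_relevancemap` and its import path are assumptions; see examples/tutorials/.
import numpy as np
from signxai import calculate_relevancemap  # assumed import path

model = load_my_model()  # placeholder: your trained TensorFlow or PyTorch model
x = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder input batch

# Method names come from the table in the "Methods" section below.
relevance = calculate_relevancemap("lrpsign_epsilon_0_5_std_x", x, model)
print(relevance.shape)
```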


## Methods

| Method | Base | Parameters |
|--------|-----------------------------------------|--------------------------------|
| gradient | Gradient | |
| input_t_gradient | Gradient x Input | |
| gradient_x_input | Gradient x Input | |
| gradient_x_sign | Gradient x SIGN  | mu = 0 |
| gradient_x_sign_mu | Gradient x SIGN  | requires *mu* parameter |
| gradient_x_sign_mu_0 | Gradient x SIGN  | mu = 0 |
| gradient_x_sign_mu_0_5 | Gradient x SIGN  | mu = 0.5 |
| gradient_x_sign_mu_neg_0_5 | Gradient x SIGN  | mu = -0.5 |
| guided_backprop | Guided Backpropagation | |
| guided_backprop_x_sign | Guided Backpropagation x SIGN  | mu = 0 |
| guided_backprop_x_sign_mu | Guided Backpropagation x SIGN  | requires *mu* parameter |
| guided_backprop_x_sign_mu_0 | Guided Backpropagation x SIGN  | mu = 0 |
| guided_backprop_x_sign_mu_0_5 | Guided Backpropagation x SIGN  | mu = 0.5 |
| guided_backprop_x_sign_mu_neg_0_5 | Guided Backpropagation x SIGN  | mu = -0.5 |
| integrated_gradients | Integrated Gradients | |
| smoothgrad | SmoothGrad | |
| smoothgrad_x_sign | SmoothGrad x SIGN  | mu = 0 |
| smoothgrad_x_sign_mu | SmoothGrad x SIGN  | requires *mu* parameter |
| smoothgrad_x_sign_mu_0 | SmoothGrad x SIGN  | mu = 0 |
| smoothgrad_x_sign_mu_0_5 | SmoothGrad x SIGN  | mu = 0.5  |
| smoothgrad_x_sign_mu_neg_0_5 | SmoothGrad x SIGN  | mu = -0.5  |
| vargrad | VarGrad  | |
| deconvnet | DeconvNet  | |
| deconvnet_x_sign | DeconvNet x SIGN | mu = 0 |
| deconvnet_x_sign_mu | DeconvNet x SIGN | requires *mu* parameter |
| deconvnet_x_sign_mu_0 | DeconvNet x SIGN | mu = 0 |
| deconvnet_x_sign_mu_0_5 | DeconvNet x SIGN | mu = 0.5 |
| deconvnet_x_sign_mu_neg_0_5 | DeconvNet x SIGN | mu = -0.5 |
| grad_cam | Grad-CAM | requires *last_conv* parameter |
| grad_cam_timeseries | Grad-CAM | for time series data; requires *last_conv* parameter |
| grad_cam_VGG16ILSVRC | Grad-CAM | *last_conv* based on VGG16 |
| guided_grad_cam_VGG16ILSVRC | Guided Grad-CAM | *last_conv* based on VGG16 |
| lrp_z | LRP-z  | |
| lrpsign_z | LRP-z / LRP-SIGN (Inputlayer-Rule) | |
| zblrp_z_VGG16ILSVRC | LRP-z / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet |
| w2lrp_z | LRP-z / LRP-w² (Inputlayer-Rule) | |
| flatlrp_z | LRP-z / LRP-flat (Inputlayer-Rule) | |
| lrp_epsilon_0_001 | LRP-epsilon | epsilon = 0.001 |
| lrpsign_epsilon_0_001 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.001 |
| zblrp_epsilon_0_001_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.001 |
| lrpz_epsilon_0_001 |LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.001 |
| lrp_epsilon_0_01 | LRP-epsilon | epsilon = 0.01 |
| lrpsign_epsilon_0_01 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.01 |
| zblrp_epsilon_0_01_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.01 |
| lrpz_epsilon_0_01 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.01 |
| w2lrp_epsilon_0_01 | LRP-epsilon / LRP-w² (Inputlayer-Rule)  | epsilon = 0.01 |
| flatlrp_epsilon_0_01 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 0.01 |
| lrp_epsilon_0_1 | LRP-epsilon | epsilon = 0.1 |
| lrpsign_epsilon_0_1 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.1 |
| zblrp_epsilon_0_1_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.1 |
| lrpz_epsilon_0_1 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.1 |
| w2lrp_epsilon_0_1 | LRP-epsilon / LRP-w² (Inputlayer-Rule)  | epsilon = 0.1 |
| flatlrp_epsilon_0_1 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 0.1 |
| lrp_epsilon_0_2 | LRP-epsilon | epsilon = 0.2 |
| lrpsign_epsilon_0_2 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.2 |
| zblrp_epsilon_0_2_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.2 |
| lrpz_epsilon_0_2 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.2 |
| lrp_epsilon_0_5 | LRP-epsilon | epsilon = 0.5 |
| lrpsign_epsilon_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.5 |
| zblrp_epsilon_0_5_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.5 |
| lrpz_epsilon_0_5 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.5 |
| lrp_epsilon_1 | LRP-epsilon | epsilon = 1 |
| lrpsign_epsilon_1 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 1 |
| zblrp_epsilon_1_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 1 |
| lrpz_epsilon_1 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 1 |
| w2lrp_epsilon_1 | LRP-epsilon / LRP-w² (Inputlayer-Rule)  | epsilon = 1 |
| flatlrp_epsilon_1 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 1 |
| lrp_epsilon_5 | LRP-epsilon | epsilon = 5 |
| lrpsign_epsilon_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 5 |
| zblrp_epsilon_5_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 5 |
| lrpz_epsilon_5 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 5 |
| lrp_epsilon_10 | LRP-epsilon | epsilon = 10 |
| lrpsign_epsilon_10 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 10 |
| zblrp_epsilon_10_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 10 |
| lrpz_epsilon_10 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 10 |
| w2lrp_epsilon_10 | LRP-epsilon / LRP-w² (Inputlayer-Rule)  | epsilon = 10 |
| flatlrp_epsilon_10 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 10 |
| lrp_epsilon_20 | LRP-epsilon | epsilon = 20 |
| lrpsign_epsilon_20 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 20 |
| zblrp_epsilon_20_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 20 |
| lrpz_epsilon_20 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 20 |
| w2lrp_epsilon_20 | LRP-epsilon / LRP-w² (Inputlayer-Rule)  | epsilon = 20 |
| flatlrp_epsilon_20 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 20 |
| lrp_epsilon_50 | LRP-epsilon | epsilon = 50 |
| lrpsign_epsilon_50 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 50 |
| lrpz_epsilon_50 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 50 |
| lrp_epsilon_75 | LRP-epsilon | epsilon = 75 |
| lrpsign_epsilon_75 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 75 |
| lrpz_epsilon_75 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 75 |
| lrp_epsilon_100 | LRP-epsilon | epsilon = 100 |
| lrpsign_epsilon_100 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = 0 |
| lrpsign_epsilon_100_mu_0 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = 0 |
| lrpsign_epsilon_100_mu_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = 0.5 |
| lrpsign_epsilon_100_mu_neg_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = -0.5 |
| lrpz_epsilon_100 | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 100 |
| zblrp_epsilon_100_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 100 |
| w2lrp_epsilon_100 | LRP-epsilon / LRP-w² (Inputlayer-Rule) | epsilon = 100 |
| flatlrp_epsilon_100 | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 100 |
| lrp_epsilon_0_1_std_x | LRP-epsilon | epsilon = 0.1 * std(x) |
| lrpsign_epsilon_0_1_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.1 * std(x) |
| lrpz_epsilon_0_1_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 0.1 * std(x) |
| zblrp_epsilon_0_1_std_x_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.1 * std(x) |
| w2lrp_epsilon_0_1_std_x | LRP-epsilon / LRP-w² (Inputlayer-Rule) | epsilon = 0.1 * std(x) |
| flatlrp_epsilon_0_1_std_x | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 0.1 * std(x) |
| lrp_epsilon_0_25_std_x | LRP-epsilon | epsilon = 0.25 * std(x) |
| lrpsign_epsilon_0_25_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = 0 |
| lrpz_epsilon_0_25_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 0.25 * std(x) |
| zblrp_epsilon_0_25_std_x_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.25 * std(x) |
| w2lrp_epsilon_0_25_std_x | LRP-epsilon / LRP-w² (Inputlayer-Rule) | epsilon = 0.25 * std(x) |
| flatlrp_epsilon_0_25_std_x | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 0.25 * std(x) |
| lrpsign_epsilon_0_25_std_x_mu_0 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = 0 |
| lrpsign_epsilon_0_25_std_x_mu_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = 0.5 |
| lrpsign_epsilon_0_25_std_x_mu_neg_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = -0.5 |
| lrp_epsilon_0_5_std_x | LRP-epsilon | epsilon = 0.5 * std(x) |
| lrpsign_epsilon_0_5_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.5 * std(x) |
| lrpz_epsilon_0_5_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 0.5 * std(x) |
| zblrp_epsilon_0_5_std_x_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.5 * std(x) |
| w2lrp_epsilon_0_5_std_x | LRP-epsilon / LRP-w² (Inputlayer-Rule) | epsilon = 0.5 * std(x) |
| flatlrp_epsilon_0_5_std_x | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 0.5 * std(x) |
| lrp_epsilon_1_std_x | LRP-epsilon | epsilon = 1 * std(x) |
| lrpsign_epsilon_1_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 1 * std(x), mu = 0 |
| lrpz_epsilon_1_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 1 * std(x) |
| lrp_epsilon_2_std_x | LRP-epsilon | epsilon = 2 * std(x) |
| lrpsign_epsilon_2_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 2 * std(x), mu = 0 |
| lrpz_epsilon_2_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 2 * std(x) |
| lrp_epsilon_3_std_x | LRP-epsilon | epsilon = 3 * std(x) |
| lrpsign_epsilon_3_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 3 * std(x), mu = 0 |
| lrpz_epsilon_3_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 3 * std(x) |
| lrp_alpha_1_beta_0 | LRP-alpha-beta | alpha = 1, beta = 0 |
| lrpsign_alpha_1_beta_0 | LRP-alpha-beta / LRP-SIGN (Inputlayer-Rule) | alpha = 1, beta = 0, mu = 0 |
| lrpz_alpha_1_beta_0 | LRP-alpha-beta / LRP-z (Inputlayer-Rule) | alpha = 1, beta = 0 |
| zblrp_alpha_1_beta_0_VGG16ILSVRC | LRP-alpha-beta / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, alpha = 1, beta = 0 |
| w2lrp_alpha_1_beta_0 | LRP-alpha-beta / LRP-w² (Inputlayer-Rule) | alpha = 1, beta = 0 |
| flatlrp_alpha_1_beta_0 | LRP-alpha-beta / LRP-flat (Inputlayer-Rule) | alpha = 1, beta = 0 |
| lrp_sequential_composite_a | LRP Composite Variant A |  |
| lrpsign_sequential_composite_a | LRP Composite Variant A / LRP-SIGN (Inputlayer-Rule) | mu = 0 |
| lrpz_sequential_composite_a | LRP Composite Variant A / LRP-z (Inputlayer-Rule) |  |
| zblrp_sequential_composite_a_VGG16ILSVRC | LRP Composite Variant A / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet |
| w2lrp_sequential_composite_a | LRP Composite Variant A / LRP-w² (Inputlayer-Rule) |  |
| flatlrp_sequential_composite_a | LRP Composite Variant A / LRP-flat (Inputlayer-Rule) |  |
| lrp_sequential_composite_b | LRP Composite Variant B |  |
| lrpsign_sequential_composite_b | LRP Composite Variant B / LRP-SIGN (Inputlayer-Rule) | mu = 0 |
| lrpz_sequential_composite_b | LRP Composite Variant B / LRP-z (Inputlayer-Rule) |  |
| zblrp_sequential_composite_b_VGG16ILSVRC | LRP Composite Variant B / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet |
| w2lrp_sequential_composite_b | LRP Composite Variant B / LRP-w² (Inputlayer-Rule) |  |
| flatlrp_sequential_composite_b | LRP Composite Variant B / LRP-flat (Inputlayer-Rule) |  |
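
For intuition on the *x SIGN* variants and their *mu* parameter, here is a simplified numpy sketch based on the cited paper (not the package's internal implementation): the input term of *Gradient x Input* is replaced by a thresholded sign of the input, so attributions no longer scale with input magnitude.

```python
# Simplified illustration of the SIGN idea (based on the cited paper, not the package
# internals): replace the input term of "Gradient x Input" with a thresholded sign of
# the input, so attributions no longer scale with input magnitude.
import numpy as np

def gradient_x_sign(gradient: np.ndarray, x: np.ndarray, mu: float = 0.0) -> np.ndarray:
    sign = np.where(x >= mu, 1.0, -1.0)  # mu shifts the sign threshold
    return gradient * sign

grad = np.array([0.2, -0.4, 0.1])
x = np.array([0.9, 0.05, -0.3])
print(gradient_x_sign(grad, x, mu=0.0))  # [ 0.2 -0.4 -0.1]
print(gradient_x_sign(grad, x, mu=0.5))  # [ 0.2  0.4 -0.1]
```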

            

Raw data

{
    "_id": null,
    "home_page": null,
    "name": "signxai2",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.11,>=3.9",
    "maintainer_email": "Nils Gumpfer <nils.gumpfer@kite.thm.de>",
    "keywords": "explainable ai, xai, interpretability, machine learning, deep learning, tensorflow, pytorch, lrp, grad-cam, sign, attribution, saliency",
    "author": null,
    "author_email": "IRISlaboratory <nils.gumpfer@kite.thm.de>",
    "download_url": "https://files.pythonhosted.org/packages/e3/d5/d8871927926ba7b303fa9a69dad1824d74221e9433d513ebe5d7df2727b9/signxai2-0.13.0.tar.gz",
    "platform": null,
    "description": "# SIGNed explanations: Unveiling relevant features by reducing bias\n\nThis repository and python package is an extended version of the published python package of the following journal article:\nhttps://doi.org/10.1016/j.inffus.2023.101883\n\nIf you use the code from this repository in your work, please cite:\n```bibtex\n @article{Gumpfer2023SIGN,\n    title = {SIGNed explanations: Unveiling relevant features by reducing bias},\n    author = {Nils Gumpfer and Joshua Prim and Till Keller and Bernhard Seeger and Michael Guckert and Jennifer Hannig},\n    journal = {Information Fusion},\n    pages = {101883},\n    year = {2023},\n    issn = {1566-2535},\n    doi = {https://doi.org/10.1016/j.inffus.2023.101883},\n    url = {https://www.sciencedirect.com/science/article/pii/S1566253523001999}\n}\n```\n\n<img src=\"https://ars.els-cdn.com/content/image/1-s2.0-S1566253523001999-ga1_lrg.jpg\" title=\"Graphical Abstract\" width=\"900px\"/>\n\n## Requirements\n\n- Python 3.9 or 3.10 (Python 3.11+ is not supported)\n- TensorFlow >=2.8.0,<=2.12.1\n- PyTorch >=1.10.0\n- NumPy, Matplotlib, SciPy\n\n## \ud83d\ude80 Installation\n\n### Install from PyPI\n```bash\npip install signxai2\n```\n\n**Note:** This installs the complete package with both TensorFlow and PyTorch support. Ensure you're using Python 3.9 or 3.10 before installation.\n\n### Install from source\n\n#### Option 1: Full installation (both frameworks)\n```bash\ngit clone https://github.com/IRISlaboratory/signxai2.git\ncd signxai2\npip install -e .\n```\n\n#### Option 2: Framework-specific installation\nFor users who want to install only specific framework support:\n\n**TensorFlow only:**\n```bash\ngit clone https://github.com/IRISlaboratory/signxai2.git\ncd signxai2\npip install -r requirements/common.txt -r requirements/tensorflow.txt\n```\n\n**PyTorch only:**\n```bash\ngit clone https://github.com/IRISlaboratory/signxai2.git\ncd signxai2\npip install -r requirements/common.txt -r requirements/pytorch.txt\n```\n\nNote: Framework-specific installation is only available when installing from source. The PyPI package includes both frameworks for seamless compatibility.\n\n## Setup of Git LFS\n\nBefore you get started please set up [Git LFS](https://git-lfs.github.com/) to download the large files in this repository. 
This is required to access the pre-trained models and example data.\n\n```bash\ngit lfs install\n```\n\n## \ud83d\udce6 Load Data and Documentation\n\nAfter installation, run the setup script to download documentation, examples, and sample data:\n\n```bash\nbash ./prepare.sh\n```\n\nThis will download:\n- \ud83d\udcda Full documentation (viewable at `docs/index.html`)\n- \ud83d\udcdd Example scripts and notebooks (`examples/`)  \n- \ud83d\udcca Sample ECG data and images (`examples/data/`)\n\n\n## Examples\n\nTo get started with SignXAI2 Methods, please follow the example tutorials ('examples/tutorials/').\n\n## Features\n\n- Support for **TensorFlow** and **PyTorch** models\n- Consistent API across frameworks\n- Wide range of explanation methods:\n  - Gradient-based: Vanilla gradient, Integrated gradients, SmoothGrad\n  - Class activation maps: Grad-CAM\n  - Guided backpropagation\n  - Layer-wise Relevance Propagation (LRP)\n  - Sign-based thresholding for binary relevance maps\n\n\n### Development version\n\nTo install with development dependencies for testing and documentation:\n\n```shell\npip install signxai2[dev]\n```\n\nOr from source:\n```shell\ngit clone https://github.com/IRISlaboratory/signxai2.git\ncd signxai2\npip install -e \".[dev]\"\n```\n\n##  Project Structure\n\n  - signxai/: Main package with unified API and framework detection\n  - signxai/tf_signxai/: TensorFlow implementation using modified iNNvestigate\n  - signxai/torch_signxai/: PyTorch implementation using zennit with custom hooks\n  - examples/tutorials/: Tutorials for both frameworks covering images and time series\n  - examples/comparison/: Implementation for reproducing results from the paper\n  - utils/: Helper scripts for model conversion (tf -> torch) and data preprocessing\n\n\n## Usage\n\nPlease follow the example tutorials in the `examples/tutorials/` directory to get started with SignXAI2 methods. 
The examples cover various use cases, including images and time series analysis.\n\n\n## Methods\n\n| Method | Base| Parameters |\n|--------|-----------------------------------------|--------------------------------|\n| gradient | Gradient | |\n| input_t_gradient | Gradient x Input | |\n| gradient_x_input | Gradient x Input | |\n| gradient_x_sign | Gradient x SIGN  | mu = 0 |\n| gradient_x_sign_mu | Gradient x SIGN  | requires *mu* parameter |\n| gradient_x_sign_mu_0 | Gradient x SIGN  | mu = 0 |\n| gradient_x_sign_mu_0_5 | Gradient x SIGN  | mu = 0.5 |\n| gradient_x_sign_mu_neg_0_5 | Gradient x SIGN  | mu = -0.5 |\n| guided_backprop | Guided Backpropagation | |\n| guided_backprop_x_sign | Guided Backpropagation x SIGN  | mu = 0 |\n| guided_backprop_x_sign_mu | Guided Backpropagation x SIGN  | requires *mu* parameter |\n| guided_backprop_x_sign_mu_0 | Guided Backpropagation x SIGN  | mu = 0 |\n| guided_backprop_x_sign_mu_0_5 | Guided Backpropagation x SIGN  | mu = 0.5 |\n| guided_backprop_x_sign_mu_neg_0_5 | Guided Backpropagation x SIGN  | mu = -0.5 |\n| integrated_gradients | Integrated Gradients | |\n| smoothgrad | SmoothGrad | |\n| smoothgrad_x_sign | SmoothGrad x SIGN  | mu = 0 |\n| smoothgrad_x_sign_mu | SmoothGrad x SIGN  | requires *mu* parameter |\n| smoothgrad_x_sign_mu_0 | SmoothGrad x SIGN  | mu = 0 |\n| smoothgrad_x_sign_mu_0_5 | SmoothGrad x SIGN  | mu = 0.5  |\n| smoothgrad_x_sign_mu_neg_0_5 | SmoothGrad x SIGN  | mu = -0.5  |\n| vargrad | VarGrad  | |\n| deconvnet | DeconvNet  | |\n| deconvnet_x_sign | DeconvNet x SIGN | mu = 0 |\n| deconvnet_x_sign_mu | DeconvNet x SIGN | requires *mu* parameter |\n| deconvnet_x_sign_mu_0 | DeconvNet x SIGN | mu = 0 |\n| deconvnet_x_sign_mu_0_5 | DeconvNet x SIGN | mu = 0.5 |\n| deconvnet_x_sign_mu_neg_0_5 | DeconvNet x SIGN | mu = -0.5 |\n| grad_cam | Grad-CAM| requires *last_conv* parameter |\n| grad_cam_timeseries | Grad-CAM| (for time series data), requires *last_conv* parameter |\n| grad_cam_VGG16ILSVRC | | *last_conv* based on VGG16 |\n| guided_grad_cam_VGG16ILSVRC | | *last_conv* based on VGG16 |\n| lrp_z | LRP-z  | |\n| lrpsign_z | LRP-z / LRP-SIGN (Inputlayer-Rule) | |\n| zblrp_z_VGG16ILSVRC | LRP-z / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet |\n| w2lrp_z | LRP-z / LRP-w\u00b2 (Inputlayer-Rule) | |\n| flatlrp_z | LRP-z / LRP-flat (Inputlayer-Rule) | |\n| lrp_epsilon_0_001 | LRP-epsilon | epsilon = 0.001 |\n| lrpsign_epsilon_0_001 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.001 |\n| zblrp_epsilon_0_001_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.001 |\n| lrpz_epsilon_0_001 |LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.001 |\n| lrp_epsilon_0_01 | LRP-epsilon | epsilon = 0.01 |\n| lrpsign_epsilon_0_01 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.01 |\n| zblrp_epsilon_0_01_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.01 |\n| lrpz_epsilon_0_01 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.01 |\n| w2lrp_epsilon_0_01 | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule)  | epsilon = 0.01 |\n| flatlrp_epsilon_0_01 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 0.01 |\n| lrp_epsilon_0_1 | LRP-epsilon | epsilon = 0.1 |\n| lrpsign_epsilon_0_1 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.1 |\n| zblrp_epsilon_0_1_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.1 |\n| lrpz_epsilon_0_1 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.1 
|\n| w2lrp_epsilon_0_1 | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule)  | epsilon = 0.1 |\n| flatlrp_epsilon_0_1 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 0.1 |\n| lrp_epsilon_0_2 | LRP-epsilon | epsilon = 0.2 |\n| lrpsign_epsilon_0_2 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.2 |\n| zblrp_epsilon_0_2_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.2 |\n| lrpz_epsilon_0_2 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.2 |\n| lrp_epsilon_0_5 | LRP-epsilon | epsilon = 0.5 |\n| lrpsign_epsilon_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.5 |\n| zblrp_epsilon_0_5_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.5 |\n| lrpz_epsilon_0_5 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 0.5 |\n| lrp_epsilon_1 | LRP-epsilon | epsilon = 1 |\n| lrpsign_epsilon_1 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 1 |\n| zblrp_epsilon_1_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 1 |\n| lrpz_epsilon_1 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 1 |\n| w2lrp_epsilon_1 | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule)  | epsilon = 1 |\n| flatlrp_epsilon_1 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 1 |\n| lrp_epsilon_5 | LRP-epsilon | epsilon = 5 |\n| lrpsign_epsilon_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 5 |\n| zblrp_epsilon_5_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 5 |\n| lrpz_epsilon_5 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 5 |\n| lrp_epsilon_10 | LRP-epsilon | epsilon = 10 |\n| lrpsign_epsilon_10 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 10 |\n| zblrp_epsilon_10_VGG106ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 10 |\n| lrpz_epsilon_10 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 10 |\n| w2lrp_epsilon_10 | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule)  | epsilon = 10 |\n| flatlrp_epsilon_10 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 10 |\n| lrp_epsilon_20 | LRP-epsilon | epsilon = 20 |\n| lrpsign_epsilon_20 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 20 |\n| zblrp_epsilon_20_VGG206ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 20 |\n| lrpz_epsilon_20 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 20 |\n| w2lrp_epsilon_20 | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule)  | epsilon = 20 |\n| flatlrp_epsilon_20 | LRP-epsilon / LRP-flat (Inputlayer-Rule)  | epsilon = 20 |\n| lrp_epsilon_50 | LRP-epsilon | epsilon = 50 |\n| lrpsign_epsilon_50 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 50 |\n| lrpz_epsilon_50 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 50 |\n| lrp_epsilon_75 | LRP-epsilon | epsilon = 75 |\n| lrpsign_epsilon_75 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 75 |\n| lrpz_epsilon_75 | LRP-epsilon / LRP-z (Inputlayer-Rule)  | epsilon = 75 |\n| lrp_epsilon_100 | LRP-epsilon | epsilon = 100 |\n| lrpsign_epsilon_100 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = 0 |\n| lrpsign_epsilon_100_mu_0 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = 0 |\n| lrpsign_epsilon_100_mu_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = 0.5 |\n| lrpsign_epsilon_100_mu_neg_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 100, mu = -0.5 |\n| lrpz_epsilon_100 | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 100 
|\n| zblrp_epsilon_100_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 100 |\n| w2lrp_epsilon_100 | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule) | epsilon = 100 |\n| flatlrp_epsilon_100 | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 100 |\n| lrp_epsilon_0_1_std_x | LRP-epsilon | epsilon = 0.1 * std(x) |\n| lrpsign_epsilon_0_1_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.1 * std(x) |\n| lrpz_epsilon_0_1_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 0.1 * std(x) |\n| zblrp_epsilon_0_1_std_x_VGG16ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.1 * std(x) |\n| w2lrp_epsilon_0_1_std_x | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule) | epsilon = 0.1 * std(x) |\n| flatlrp_epsilon_0_1_std_x | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 0.1 * std(x) |\n| lrp_epsilon_0_25_std_x | LRP-epsilon | epsilon = 0.25 * std(x) |\n| lrpsign_epsilon_0_25_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = 0 |\n| lrpz_epsilon_0_25_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 0.25 * std(x) |\n| zblrp_epsilon_0_25_std_x_VGG256ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.25 * std(x) |\n| w2lrp_epsilon_0_25_std_x | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule) | epsilon = 0.25 * std(x) |\n| flatlrp_epsilon_0_25_std_x | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 0.25 * std(x) |\n| lrpsign_epsilon_0_25_std_x_mu_0 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = 0 |\n| lrpsign_epsilon_0_25_std_x_mu_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = 0.5 |\n| lrpsign_epsilon_0_25_std_x_mu_neg_0_5 | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.25 * std(x), mu = -0.5 |\n| lrp_epsilon_0_5_std_x | LRP-epsilon | epsilon = 0.5 * std(x) |\n| lrpsign_epsilon_0_5_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 0.5 * std(x) |\n| lrpz_epsilon_0_5_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 0.5 * std(x) |\n| zblrp_epsilon_0_5_std_x_VGG56ILSVRC | LRP-epsilon / LRP-ZB (Inputlayer-Rule) | bounds based on ImageNet, epsilon = 0.5 * std(x) |\n| w2lrp_epsilon_0_5_std_x | LRP-epsilon / LRP-w\u00b2 (Inputlayer-Rule) | epsilon = 0.5 * std(x) |\n| flatlrp_epsilon_0_5_std_x | LRP-epsilon / LRP-flat (Inputlayer-Rule) | epsilon = 0.5 * std(x) |\n| lrp_epsilon_1_std_x | LRP-epsilon | epsilon = 1 * std(x) |\n| lrpsign_epsilon_1_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 1 * std(x), mu = 0 |\n| lrpz_epsilon_1_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 1 * std(x) |\n| lrp_epsilon_2_std_x | LRP-epsilon | epsilon = 2 * std(x) |\n| lrpsign_epsilon_2_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 2 * std(x), mu = 0 |\n| lrpz_epsilon_2_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 2 * std(x) |\n| lrp_epsilon_3_std_x | LRP-epsilon | epsilon = 3 * std(x) |\n| lrpsign_epsilon_3_std_x | LRP-epsilon / LRP-SIGN (Inputlayer-Rule) | epsilon = 3 * std(x), mu = 0 |\n| lrpz_epsilon_3_std_x | LRP-epsilon / LRP-z (Inputlayer-Rule) | epsilon = 3 * std(x) |\n| lrp_alpha_1_beta_0 | LRP-alpha-beta | alpha = 1, beta = 0 |\n| lrpsign_alpha_1_beta_0 | LRP-alpha-beta / LRP-SIGN (Inputlayer-Rule) | alpha = 1, beta = 0, mu = 0 |\n| lrpz_alpha_1_beta_0 | LRP-alpha-beta / LRP-z (Inputlayer-Rule) | alpha = 1, beta = 0 |\n| zblrp_alpha_1_beta_0_VGG16ILSVRC |  | bounds based on ImageNet, alpha = 1, beta = 0 |\n| 
w2lrp_alpha_1_beta_0 | LRP-alpha-beta / LRP-ZB (Inputlayer-Rule) | alpha = 1, beta = 0 |\n| flatlrp_alpha_1_beta_0 | LRP-alpha-beta / LRP-flat (Inputlayer-Rule) | alpha = 1, beta = 0 |\n| lrp_sequential_composite_a | LRP Comosite Variant A |  |\n| lrpsign_sequential_composite_a | LRP Comosite Variant A / LRP-SIGN (Inputlayer-Rule) |  mu = 0 |\n| lrpz_sequential_composite_a | LRP Comosite Variant A / LRP-z (Inputlayer-Rule) |  |\n| zblrp_sequential_composite_a_VGG16ILSVRC |  | bounds based on ImageNet  |\n| w2lrp_sequential_composite_a | LRP Comosite Variant A / LRP-ZB (Inputlayer-Rule) |  |\n| flatlrp_sequential_composite_a | LRP Comosite Variant A / LRP-flat (Inputlayer-Rule) |  |\n| lrp_sequential_composite_b | LRP Comosite Variant B |  |\n| lrpsign_sequential_composite_b | LRP Comosite Variant B / LRP-SIGN (Inputlayer-Rule) |  mu = 0 |\n| lrpz_sequential_composite_b | LRP Comosite Variant B / LRP-z (Inputlayer-Rule) |  |\n| zblrp_sequential_composite_b_VGG16ILSVRC |  | bounds based on ImageNet  |\n| w2lrp_sequential_composite_b | LRP Comosite Variant B / LRP-ZB (Inputlayer-Rule) |  |\n| flatlrp_sequential_composite_b | LRP Comosite Variant B / LRP-flat (Inputlayer-Rule) |  |\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "A comprehensive explainable AI library supporting both TensorFlow and PyTorch with unified API and advanced XAI methods including SIGN, LRP, and Grad-CAM. Authored by Nils Gumpfer, Jana Fischer and Alexander Paul.",
    "version": "0.13.0",
    "project_urls": {
        "Bug Reports": "https://github.com/IRISlaboratory/signxai2/issues",
        "Changelog": "https://github.com/IRISlaboratory/signxai2/blob/main/CHANGELOG.md",
        "Documentation": "https://IRISlaboratory.github.io/signxai2/index.html",
        "Homepage": "https://github.com/IRISlaboratory/signxai2",
        "Publication": "https://www.sciencedirect.com/science/article/pii/S1566253523001999?via%3Dihub",
        "Repository": "https://github.com/IRISlaboratory/signxai2.git"
    },
    "split_keywords": [
        "explainable ai",
        " xai",
        " interpretability",
        " machine learning",
        " deep learning",
        " tensorflow",
        " pytorch",
        " lrp",
        " grad-cam",
        " sign",
        " attribution",
        " saliency"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "4601486a06fe016d48e62577fd13dbc8166e8d3789e2a9cb74ef2909b622e199",
                "md5": "9c18d2e1f05990c7c6c13ad60ba102b4",
                "sha256": "d4021e1bc60e4465c639aea71bcc49527d4450483f3e68885b5d2badb6aefc76"
            },
            "downloads": -1,
            "filename": "signxai2-0.13.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "9c18d2e1f05990c7c6c13ad60ba102b4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.11,>=3.9",
            "size": 245522,
            "upload_time": "2025-07-29T06:15:23",
            "upload_time_iso_8601": "2025-07-29T06:15:23.123303Z",
            "url": "https://files.pythonhosted.org/packages/46/01/486a06fe016d48e62577fd13dbc8166e8d3789e2a9cb74ef2909b622e199/signxai2-0.13.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "e3d5d8871927926ba7b303fa9a69dad1824d74221e9433d513ebe5d7df2727b9",
                "md5": "03051d7d0522109bb01ae29205086c46",
                "sha256": "865923bf23f3c5137dcdfa0a2297044686fb576e0c5a0e1c85566b00281078c0"
            },
            "downloads": -1,
            "filename": "signxai2-0.13.0.tar.gz",
            "has_sig": false,
            "md5_digest": "03051d7d0522109bb01ae29205086c46",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.11,>=3.9",
            "size": 267329,
            "upload_time": "2025-07-29T06:15:25",
            "upload_time_iso_8601": "2025-07-29T06:15:25.152094Z",
            "url": "https://files.pythonhosted.org/packages/e3/d5/d8871927926ba7b303fa9a69dad1824d74221e9433d513ebe5d7df2727b9/signxai2-0.13.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-29 06:15:25",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "IRISlaboratory",
    "github_project": "signxai2",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "signxai2"
}
        