time-interpret

Name: time-interpret
Version: 0.3.0
Summary: Model interpretability library for PyTorch with a focus on time series.
Author: Joseph Enguehard <joseph@skippr.com>
Homepage: https://github.com/josephenguehard/time_interpret
Documentation: https://josephenguehard.github.io/time_interpret
Requires Python: >=3.7
Keywords: deep-learning, pytorch, captum, explainable-ai, time-series
Uploaded: 2023-06-06 01:33:02
# Time Interpret (tint)

This library extends the [Captum library](https://captum.ai) with a specific
focus on time series. For more details, please see the documentation and our [paper](https://arxiv.org/abs/2306.02968).


## Install

Time Interpret can be installed with pip:

```shell
pip install time_interpret
```

Please see the documentation for alternative installation modes.


## Quick-start

First, let's load an `Arma` dataset (synthetic autoregressive moving average time series):

```python
from tint.datasets import Arma

arma = Arma()
arma.download()  # This method generates the dataset
```
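As background, an ARMA(p, q) process combines an autoregressive part over past values with a moving-average part over past noise terms. A minimal numpy sketch of simulating such a series (illustrative only, not tint's actual generator):

```python
import numpy as np

def simulate_arma(phi, theta, n, seed=0):
    """Simulate x_t = sum_i phi[i] * x_{t-1-i} + eps_t + sum_j theta[j] * eps_{t-1-j}."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    eps = rng.standard_normal(n)  # Gaussian innovations
    x = np.zeros(n)
    for t in range(n):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x[t] = ar + eps[t] + ma
    return x

series = simulate_arma(phi=[0.5, -0.2], theta=[0.3], n=200)
```

With these (stable) coefficients the series fluctuates around zero; tint's `Arma` dataset additionally stores the generating coefficients, which is what makes a ground-truth saliency available below.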

We then load some test data from the dataset and the
corresponding true saliency:

```python
inputs = arma.preprocess()["x"][0]
true_saliency = arma.true_saliency(dim=1)[0]
```

We can now load an attribution method and use it to compute the saliency:

```python
from tint.attr import TemporalIntegratedGradients

explainer = TemporalIntegratedGradients(arma.get_white_box)

baselines = inputs * 0
attr = explainer.attribute(
    inputs,
    baselines=baselines,
    additional_forward_args=(true_saliency,),
    temporal_additional_forward_args=(True,),
).abs()
```
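`TemporalIntegratedGradients` builds on Captum's Integrated Gradients, which attributes a prediction by accumulating gradients along a straight path from the baseline to the input. A minimal numpy sketch of plain Integrated Gradients on a toy model (the functions `f` and `grad_f` are hypothetical, not part of tint), including a check of the completeness axiom:

```python
import numpy as np

def f(x):
    # Toy scalar model: a quadratic with an interaction term.
    return x[0] ** 2 + 2.0 * x[0] * x[1] + 3.0 * x[1]

def grad_f(x):
    # Analytic gradient of f.
    return np.array([2.0 * x[0] + 2.0 * x[1], 2.0 * x[0] + 3.0])

def integrated_gradients(x, baseline, n_steps=200):
    # Riemann-sum approximation of IG_i = (x_i - b_i) * integral of
    # df/dx_i evaluated at b + alpha * (x - b), for alpha in [0, 1].
    alphas = (np.arange(n_steps) + 0.5) / n_steps  # midpoint rule
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)  # zero baseline, as in the snippet above
attr = integrated_gradients(x, baseline)
# Completeness: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The temporal variant differs in that the forward function only sees the time steps observed so far, which is what the `temporal_additional_forward_args` flag above controls.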

Finally, we evaluate our method using the true saliency and a white box metric:

```python
from tint.metrics.white_box import aup

print(f"{aup(attr, true_saliency):.4}")
```
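AUP (area under the precision curve) scores how precisely high-attribution entries match the ground-truth saliency as the selection threshold varies. A rough numpy sketch of the idea (the function name and details are hypothetical, not tint's implementation):

```python
import numpy as np

def aup_sketch(attr, true_saliency, n_thresholds=50):
    """Hypothetical AUP: average precision of thresholded attributions
    against a binary ground-truth saliency mask."""
    scores = attr.ravel()
    truth = true_saliency.ravel().astype(bool)
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds, endpoint=False)
    precisions = []
    for t in thresholds:
        selected = scores > t
        if selected.any():
            precisions.append(truth[selected].mean())
    return float(np.mean(precisions))

attr = np.array([0.9, 0.8, 0.1, 0.05])  # toy attributions
truth = np.array([1, 1, 0, 0])          # toy true saliency
result = aup_sketch(attr, truth)
```

Here the two truly salient entries also receive the highest attributions, so the score is close to 1; a random attribution map would score near the fraction of salient entries.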

## Methods

- [AugmentedOcclusion](https://arxiv.org/abs/2003.02821)
- [BayesKernelShap](https://arxiv.org/abs/2008.05030)
- [BayesLime](https://arxiv.org/abs/2008.05030)
- [Discretized Integrated Gradients](https://arxiv.org/abs/2108.13654)
- [DynaMask](https://arxiv.org/abs/2106.05303)
- [ExtremalMask](https://arxiv.org/abs/2305.18840)
- [Fit](https://arxiv.org/abs/2003.02821)
- [LofKernelShap](https://arxiv.org/abs/2306.02968)
- [LofLime](https://arxiv.org/abs/2306.02968)
- [Non-linearities Tunnel](https://arxiv.org/abs/1906.07983)
- [Occlusion](https://arxiv.org/abs/1311.2901)
- [Retain](https://arxiv.org/abs/1608.05745)
- [SequentialIntegratedGradients](https://arxiv.org/abs/2305.15853)
- [TemporalAugmentedOcclusion](https://arxiv.org/abs/2003.02821)
- [TemporalOcclusion](https://arxiv.org/abs/2003.02821)
- [TemporalIntegratedGradients](https://arxiv.org/abs/2306.02968)
- [TimeForwardTunnel](https://arxiv.org/abs/2306.02968)

This package also provides several datasets, models and metrics. Please refer to the documentation for more details.
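Several of the listed methods (Occlusion and its augmented and temporal variants) are perturbation-based: each feature or time step is replaced with a baseline value and its importance is the resulting change in the model output. A minimal numpy sketch of per-time-step occlusion on a univariate series (illustrative only; this is not tint's API):

```python
import numpy as np

def occlusion_1d(forward, x, baseline=0.0):
    """Score each time step by the output change when it is replaced by a baseline."""
    out = forward(x)
    scores = np.zeros(len(x))
    for t in range(len(x)):
        occluded = x.copy()
        occluded[t] = baseline  # perturb one step at a time
        scores[t] = abs(out - forward(occluded))
    return scores

# Toy forward function: a weighted sum, so step t should score |w[t] * x[t]|.
w = np.array([0.0, 1.0, 0.0, 2.0])

def forward(x):
    return float(w @ x)

x = np.array([5.0, 5.0, 5.0, 5.0])
scores = occlusion_1d(forward, x)
```

The augmented variants replace the occluded values with samples drawn from the data distribution rather than a fixed baseline, which avoids scoring against unrealistic inputs.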


## Paper: Learning Perturbations to Explain Time Series Predictions

The experiments for the paper [Learning Perturbations to Explain Time Series Predictions](https://arxiv.org/abs/2305.18840)
can be found in the following folders:
- [HMM](experiments/hmm)
- [Mimic3](experiments/mimic3/mortality)


## Paper: Sequential Integrated Gradients: a simple but effective method for explaining language models

The experiments for the paper
[Sequential Integrated Gradients: a simple but effective method for explaining language models](https://arxiv.org/abs/2305.15853)
can be found in the [NLP](experiments/nlp) section of the experiments.


## TSInterpret

More methods to interpret predictions of time series classifiers have been grouped
into [TSInterpret](https://github.com/fzi-forschungszentrum-informatik/TSInterpret), another library with a specific
focus on time series.
Time Interpret was developed concurrently and independently; we were not aware of this library at the time.


## Acknowledgments
- [Jonathan Crabbe](https://github.com/JonathanCrabbe/Dynamask) for the DynaMask implementation.
- [Sana Tonekaboni](https://github.com/sanatonek/time_series_explainability/tree/master/TSX) for the FIT implementation.
- [INK Lab](https://github.com/INK-USC/DIG) for the Discretized Integrated Gradients implementation.
- [Dylan Slack](https://github.com/dylan-slack/Modeling-Uncertainty-Local-Explainability) for the BayesLime and BayesShap implementations.
