saliency

Name: saliency
Version: 0.2.1
Home page: https://github.com/pair-code/saliency
Summary: Framework-agnostic saliency methods
Author: The saliency authors
License: Apache 2.0
Keywords: saliency, mask, neural network, deep learning
Upload time: 2024-03-20 19:51:30
Requirements: none recorded
# Saliency Library
## Updates

🔴   Now framework-agnostic! [(Example core notebook)](Examples_core.ipynb)  🔴

🔗   For further explanation of the methods and more examples of the resulting maps, see our [Github Pages website](https://pair-code.github.io/saliency)   🔗

If upgrading from an older version, update old imports to `import saliency.tf1 as saliency`. We provide wrappers to make the framework-agnostic version compatible with TF1 models. [(Example TF1 notebook)](Examples_tf1.ipynb)

🔴   Added the Performance Information Curve (PIC) - a human-independent
metric for evaluating the quality of saliency methods.
([Example notebook](https://github.com/PAIR-code/saliency/blob/master/pic_metrics.ipynb))  🔴

## Saliency Methods

This repository contains code for the following saliency techniques:

*   Guided Integrated Gradients* ([paper](https://arxiv.org/abs/2106.09788), [poster](https://github.com/PAIR-code/saliency/blob/master/docs/CVPR_Guided_IG_Poster.pdf))
*   XRAI* ([paper](https://arxiv.org/abs/1906.02825), [poster](https://github.com/PAIR-code/saliency/blob/master/docs/ICCV_XRAI_Poster.pdf))
*   SmoothGrad* ([paper](https://arxiv.org/abs/1706.03825))
*   Vanilla Gradients
    ([paper](https://scholar.google.com/scholar?q=Visualizing+higher-layer+features+of+a+deep+network&btnG=&hl=en&as_sdt=0%2C22),
    [paper](https://arxiv.org/abs/1312.6034))
*   Guided Backpropagation ([paper](https://arxiv.org/abs/1412.6806))
*   Integrated Gradients ([paper](https://arxiv.org/abs/1703.01365))
*   Occlusion
*   Grad-CAM ([paper](https://arxiv.org/abs/1610.02391))
*   Blur IG ([paper](https://arxiv.org/abs/2004.03383))

\*Developed by PAIR.

This list is by no means comprehensive. We are accepting pull requests to add
new methods!

## Evaluation of Saliency Methods

The repository provides an implementation of the Performance Information Curve (PIC),
a human-independent metric for evaluating the quality of saliency methods
([paper](https://arxiv.org/abs/1906.02825),
[poster](https://github.com/PAIR-code/saliency/blob/master/docs/ICCV_XRAI_Poster.pdf),
[code](https://github.com/PAIR-code/saliency/blob/master/saliency/metrics/pic.py),
[notebook](https://github.com/PAIR-code/saliency/blob/master/pic_metrics.ipynb)).


## Download

```
# To install the core subpackage:
pip install saliency

# To install core and tf1 subpackages:
pip install saliency[tf1]

```

or for the development version:
```
git clone https://github.com/pair-code/saliency
cd saliency
```


## Usage

The saliency library has two subpackages (see the import sketch after this list):
*	`core` uses a generic `call_model_function` which can be used with any ML 
	framework.
*	`tf1` accepts input/output tensors directly, and sets up the necessary 
	graph operations for each method.

### Core

Each saliency mask class extends from the `CoreSaliency` base class. This class
contains the following methods:

*   `GetMask(x_value, call_model_function, call_model_args=None)`: Returns a mask
    with the shape of the non-batched `x_value`, computed by the saliency
    technique.
*   `GetSmoothedMask(x_value, call_model_function, call_model_args=None, stdev_spread=.15, nsamples=25, magnitude=True)`: 
    Returns a smoothed mask with the shape of the non-batched `x_value`,
    computed with the SmoothGrad technique (both calls are sketched just after
    this list).
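For instance, here is a minimal, hedged sketch of both calls for Integrated Gradients. The `image` input and `call_model_function` are placeholders; the latter follows the contract described further down in this section.

```
import saliency.core as saliency

# `image` is a non-batched input array and `call_model_function` is defined
# as described below; both are placeholders in this sketch.
ig = saliency.IntegratedGradients()
ig_mask = ig.GetMask(image, call_model_function, call_model_args=None)

# SmoothGrad-averaged variant of the same mask.
smooth_mask = ig.GetSmoothedMask(image, call_model_function, nsamples=25)
```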


The visualization module contains two methods for saliency visualization:

* ```VisualizeImageGrayscale(image_3d, percentile)```: Marginalizes across the
  absolute value of each channel to create a 2D single-channel image, and clips
  the image at the given percentile of the distribution. This method returns a
  2D tensor normalized between 0 and 1.
* ```VisualizeImageDiverging(image_3d, percentile)```: Marginalizes across the
  value of each channel to create a 2D single-channel image, and clips the
  image at the given percentile of the distribution. This method returns a
  2D tensor normalized between -1 and 1, where zero remains unchanged.

If the sign of the value given by the saliency mask is not important, then use
```VisualizeImageGrayscale```, otherwise use ```VisualizeImageDiverging```. See
the SmoothGrad paper for more details on which visualization method to use.
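As a hedged illustration of how these 2D outputs are typically displayed, the normalization ranges above map directly onto `imshow` limits. Matplotlib, the `mask_3d` placeholder, and the colormap choices are assumptions here, not part of this library; `percentile=99` is just an example value.

```
import matplotlib.pyplot as plt
import saliency.core as saliency

mask_3d = ...  # a 3D saliency mask from any GetMask/GetSmoothedMask call

# Grayscale map: values are normalized to [0, 1].
gray = saliency.VisualizeImageGrayscale(mask_3d, percentile=99)
plt.imshow(gray, cmap='gray', vmin=0, vmax=1)
plt.show()

# Diverging map: values are normalized to [-1, 1] with zero unchanged.
div = saliency.VisualizeImageDiverging(mask_3d, percentile=99)
plt.imshow(div, cmap='coolwarm', vmin=-1, vmax=1)
plt.show()
```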

##### call_model_function
`call_model_function` is how we pass inputs to a given model and receive the outputs
needed to compute saliency masks. Its expected signature and output format are
described in the `CoreSaliency` base class documentation, and again separately for
each method.
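To make the framework-agnostic contract concrete, here is a hedged sketch of a `call_model_function` for a PyTorch classifier. The names `model` and `target_class_idx` are hypothetical placeholders (not part of this library), and only the `INPUT_OUTPUT_GRADIENTS` key used elsewhere in this README is handled. A TensorFlow 2 version appears in the example below.

```
import torch
import saliency.core as saliency

def call_model_function(x_value_batched, call_model_args=None, expected_keys=None):
    # `x_value_batched` is a numpy batch in whatever layout the model expects;
    # this sketch only returns input gradients (INPUT_OUTPUT_GRADIENTS).
    images = torch.tensor(x_value_batched, dtype=torch.float32, requires_grad=True)
    logits = model(images)                # hypothetical PyTorch classifier
    target = logits[:, target_class_idx]  # hypothetical target class index
    grads = torch.autograd.grad(target, images,
                                grad_outputs=torch.ones_like(target))[0]
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads.detach().cpu().numpy()}
```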


##### Examples

[This example IPython notebook](http://github.com/pair-code/saliency/blob/master/Examples_core.ipynb)
showing these techniques is a good starting place.

Here is a condensed example of using IG+SmoothGrad with TensorFlow 2:

```
import numpy as np
import saliency.core as saliency
import tensorflow as tf

...

# call_model_function construction here.
def call_model_function(x_value_batched, call_model_args, expected_keys):
    # Run the forward pass under a gradient tape so that gradients of the
    # output with respect to the input batch can be computed.
    images = tf.convert_to_tensor(x_value_batched)
    with tf.GradientTape() as tape:
        tape.watch(images)
        output_layer = model(images)  # `model` comes from the elided setup above.
    grads = np.array(tape.gradient(output_layer, images))
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads}

...

# Load data.
image = GetImagePNG(...)

# Compute IG+SmoothGrad.
ig_saliency = saliency.IntegratedGradients()
smoothgrad_ig = ig_saliency.GetSmoothedMask(image,
                                            call_model_function,
                                            call_model_args=None)

# Compute a 2D tensor for visualization.
grayscale_visualization = saliency.VisualizeImageGrayscale(
    smoothgrad_ig)
```

### TF1

Each saliency mask class extends from the `TF1Saliency` base class. This class
contains the following methods:

*   `__init__(graph, session, y, x)`: Constructor of the SaliencyMask. This can
    modify the graph, or sometimes create a new graph. It often adds nodes to
    the graph, so it shouldn't be called repeatedly. `y` is the output tensor
    to compute saliency masks with respect to, and `x` is the input tensor,
    with the outermost dimension being the batch size.
*   `GetMask(x_value, feed_dict)`: Returns a mask with the shape of the
    non-batched `x_value`, computed by the saliency technique.
*   `GetSmoothedMask(x_value, feed_dict)`: Returns a smoothed mask with the
    shape of the non-batched `x_value`, computed with the SmoothGrad technique.

The visualization module contains two visualization methods:

* ```VisualizeImageGrayscale(image_3d, percentile)```: Marginalizes across the
  absolute value of each channel to create a 2D single-channel image, and clips
  the image at the given percentile of the distribution. This method returns a
  2D tensor normalized between 0 and 1.
* ```VisualizeImageDiverging(image_3d, percentile)```: Marginalizes across the
  value of each channel to create a 2D single-channel image, and clips the
  image at the given percentile of the distribution. This method returns a
  2D tensor normalized between -1 and 1, where zero remains unchanged.

If the sign of the value given by the saliency mask is not important, then use
```VisualizeImageGrayscale```, otherwise use ```VisualizeImageDiverging```. See
the SmoothGrad paper for more details on which visualization method to use.

##### Examples

[This example IPython notebook](http://github.com/pair-code/saliency/blob/master/Examples_tf1.ipynb)
showing these techniques is a good starting place.

Here is another example, using GuidedBackprop with SmoothGrad on a TF1 model:

```
from saliency.tf1 import GuidedBackprop
from saliency.tf1 import VisualizeImageGrayscale
import tensorflow.compat.v1 as tf

...
# TensorFlow graph construction here.
y = logits[5]
x = tf.placeholder(...)
...

# Compute guided backprop.
# NOTE: This creates another graph that gets cached, try to avoid creating many
# of these.
guided_backprop_saliency = GuidedBackprop(graph, session, y, x)

...
# Load data.
image = GetImagePNG(...)
...

smoothgrad_guided_backprop = guided_backprop_saliency.GetSmoothedMask(
    image, feed_dict={...})

# Compute a 2D tensor for visualization.
grayscale_visualization = VisualizeImageGrayscale(
    smoothgrad_guided_backprop)
```

## Conclusion/Disclaimer

If you have any questions or suggestions for improvements to this library,
please contact the owners of the `PAIR-code/saliency` repository.

This is not an official Google product.

            
