# plenoptic
[![PyPI Version](https://img.shields.io/pypi/v/plenoptic.svg)](https://pypi.org/project/plenoptic/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/LabForComputationalVision/plenoptic/blob/main/LICENSE)
![Python version](https://img.shields.io/badge/python-3.7|3.8|3.9|3.10-blue.svg)
[![Build Status](https://github.com/LabForComputationalVision/plenoptic/workflows/build/badge.svg)](https://github.com/LabForComputationalVision/plenoptic/actions?query=workflow%3Abuild)
[![Documentation Status](https://readthedocs.org/projects/plenoptic/badge/?version=latest)](https://plenoptic.readthedocs.io/en/latest/?badge=latest)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3995057.svg)](https://doi.org/10.5281/zenodo.3995057)
[![codecov](https://codecov.io/gh/LabForComputationalVision/plenoptic/branch/main/graph/badge.svg?token=EDtl5kqXKA)](https://codecov.io/gh/LabForComputationalVision/plenoptic)
[![Binder](http://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/LabForComputationalVision/plenoptic/1.0.1?filepath=examples)
![](docs/images/plenoptic_logo_wide.svg)
`plenoptic` is a Python library for model-based stimulus synthesis. It
provides tools to help researchers understand their model by
synthesizing novel informative stimuli, which help build intuition for
what features the model ignores and what it is sensitive to. These
synthetic images can then be used in future perceptual or neural
experiments for further investigation.
## Getting started
- If you are unfamiliar with stimulus synthesis, see the [conceptual
introduction](https://plenoptic.readthedocs.io/en/latest/conceptual_intro.html)
for an in-depth introduction.
- If you understand the basics of synthesis and want to get started
using `plenoptic` quickly, see the
[Quickstart](examples/00_quickstart.ipynb) tutorial.
### Installation
The best way to install `plenoptic` is via `pip`.
``` bash
$ pip install plenoptic
```
See the [installation
page](https://plenoptic.readthedocs.io/en/latest/install.html) for more details,
including how to set up a virtual environment and jupyter.
### ffmpeg and videos
Several methods in this package generate videos. Several backends are available
for saving the animations to file; see the [matplotlib
documentation](https://matplotlib.org/stable/api/animation_api.html#writer-classes)
for more details. In order to convert them to HTML5 for viewing (and thus, to
view them in a jupyter notebook), you'll also need
[ffmpeg](https://ffmpeg.org/download.html) installed and on your path.
Depending on your system, this might already be installed; if not, the easiest
way is probably through [conda](https://anaconda.org/conda-forge/ffmpeg):
`conda install -c conda-forge ffmpeg`.
To change the backend, run `matplotlib.rcParams['animation.writer'] = writer`
before calling any of the animate functions. If you set that `rcParam` to an
unrecognized string, `matplotlib` will tell you the available choices.
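For example, the following sketch selects a writer before any animation is created (`"ffmpeg"` and `"pillow"` are two of the writer names matplotlib recognizes):

``` python
import matplotlib

# Choose the writer matplotlib uses when saving animations. "ffmpeg"
# requires the ffmpeg binary on your path; "pillow" needs only the
# Pillow package and writes animated GIFs instead of HTML5 video.
matplotlib.rcParams["animation.writer"] = "ffmpeg"

print(matplotlib.rcParams["animation.writer"])
```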
## Contents
### Synthesis methods
![](docs/images/example_synth.svg)
- [Metamers](examples/06_Metamer.ipynb): given a model and a
reference image, stochastically generate a new image whose model
representation is identical to that of the reference image. This
method investigates what image features the model disregards
entirely.
- [Eigendistortions](examples/02_Eigendistortions.ipynb): given a
model and a reference image, compute the image perturbation that
produces the smallest and largest changes in the model response
space. This method investigates the image features the model
considers the least and most important.
- [Maximal differentiation (MAD)
  competition](examples/07_MAD_Competition.ipynb): given two metrics
  that measure distance between images, and a reference image, generate
  pairs of images that optimally differentiate the metrics.
  Specifically, synthesize a pair of images that the first metric says
  are equi-distant from the reference while the second metric says they
  are maximally/minimally distant from it. Then synthesize a second
  pair with the roles of the two metrics reversed. This method allows
  for efficient comparison of two metrics, highlighting the aspects in
  which their sensitivities differ.
- [Geodesics](examples/05_Geodesics.ipynb): given a model and two
images, synthesize a sequence of images that lie on the shortest
("geodesic") path in the model's representation space. This
method investigates how a model represents motion and what changes
to an image it considers reasonable.
### Models, Metrics, and Model Components
- Portilla-Simoncelli texture model, which measures the statistical properties
of visual textures, here defined as "repeating visual patterns."
- Steerable pyramid, a multi-scale oriented image decomposition whose basis
  functions are oriented (steerable) filters, localized in space and frequency. Among other
uses, the steerable pyramid serves as a good representation from which to
build a primary visual cortex model. See the [pyrtools
documentation](https://pyrtools.readthedocs.io/en/latest/index.html) for
more details on image pyramids in general and the steerable pyramid in
particular.
- Structural Similarity Index (SSIM), a perceptual similarity metric
returning a number between -1 (totally different) and 1 (identical)
reflecting how similar two images are. This is based on the images'
luminance, contrast, and structure, which are computed convolutionally
across the images.
- Multiscale Structural Similarity Index (MS-SSIM), a perceptual similarity
metric similar to SSIM, except it operates at multiple scales (i.e.,
spatial frequencies).
- Normalized Laplacian distance, a perceptual distance metric based on
transformations associated with the early visual system: local luminance
subtraction and local contrast gain control, at six scales.
## Getting help
We communicate via several channels on GitHub:
- [Discussions](https://github.com/LabForComputationalVision/plenoptic/discussions)
is the place to ask usage questions, discuss issues too broad for a
single issue, or show off what you've made with plenoptic.
- If you've come across a bug, open an
[issue](https://github.com/LabForComputationalVision/plenoptic/issues).
- If you have an idea for an extension or enhancement, please post in the
[ideas
section](https://github.com/LabForComputationalVision/plenoptic/discussions/categories/ideas)
of discussions first. We'll discuss it there and, if we decide to pursue it,
open an issue to track progress.
- See the [contributing guide](CONTRIBUTING.md) for how to get involved.
In all cases, please follow our [code of conduct](CODE_OF_CONDUCT.md).
## Citing us
If you use `plenoptic` in a published academic article or presentation, please
cite us! See the [citation
guide](https://plenoptic.readthedocs.io/en/latest/citation.html) for more
details.
## Support
This package is supported by the Simons Foundation Flatiron Institute's Center
for Computational Neuroscience.
![](docs/images/CCN-logo-wText.png)