# plenoptic
[![PyPI Version](https://img.shields.io/pypi/v/plenoptic.svg)](https://pypi.org/project/plenoptic/)
[![Anaconda-Server Badge](https://anaconda.org/conda-forge/plenoptic/badges/version.svg)](https://anaconda.org/conda-forge/plenoptic)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/plenoptic-org/plenoptic/blob/main/LICENSE)
![Python version](https://img.shields.io/badge/python-3.10|3.11|3.12-blue.svg)
[![Build Status](https://github.com/plenoptic-org/plenoptic/workflows/build/badge.svg)](https://github.com/plenoptic-org/plenoptic/actions?query=workflow%3Abuild)
[![Documentation Status](https://readthedocs.org/projects/plenoptic/badge/?version=latest)](https://plenoptic.readthedocs.io/en/latest/?badge=latest)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10151131.svg)](https://doi.org/10.5281/zenodo.10151131)
[![codecov](https://codecov.io/gh/plenoptic-org/plenoptic/branch/main/graph/badge.svg?token=EDtl5kqXKA)](https://codecov.io/gh/plenoptic-org/plenoptic)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/plenoptic-org/plenoptic/1.1.0?filepath=examples)
[![Project Status: Active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)
![](docs/images/plenoptic_logo_wide.svg)
`plenoptic` is a python library for model-based synthesis of perceptual stimuli.
For `plenoptic`, models are those of visual[^1] information processing: they
accept an image as input, perform some computations, and return some output,
which can be mapped to neuronal firing rate, fMRI BOLD response, behavior on
some task, image category, etc. The intended audience is researchers in
neuroscience, psychology, and machine learning. The generated stimuli enable
interpretation of model properties through examination of features that are
enhanced, suppressed, or discarded. More importantly, they can facilitate the
scientific process, through use in further perceptual or neural experiments
aimed at validating or falsifying model predictions.
## Getting started
- If you are unfamiliar with stimulus synthesis, see the [conceptual
introduction](https://plenoptic.readthedocs.io/en/latest/conceptual_intro.html)
for an in-depth introduction.
- If you understand the basics of synthesis and want to get started
using `plenoptic` quickly, see the
[Quickstart](examples/00_quickstart.ipynb) tutorial.
### Installation
The best way to install `plenoptic` is via `pip`:
``` bash
$ pip install plenoptic
```
or `conda`:
``` bash
$ conda install plenoptic -c conda-forge
```
> [!WARNING]
> We do not currently support conda installs on Windows, due to the lack of a Windows pytorch package on conda-forge. See [here](https://github.com/conda-forge/pytorch-cpu-feedstock/issues/32) for the status of that issue.
Our dependencies include [pytorch](https://pytorch.org/) and
[pyrtools](https://pyrtools.readthedocs.io/en/latest/). Installation should take
care of them (along with our other dependencies) automatically, but if you have
an installation problem (especially on a non-Linux operating system), it is
likely that the problem lies with one of those packages. [Open an
issue](https://github.com/plenoptic-org/plenoptic/issues) and we'll
try to help you figure out the problem!
See the [installation
page](https://plenoptic.readthedocs.io/en/latest/install.html) for more details,
including how to set up a virtual environment and jupyter.
### ffmpeg and videos
Several methods in this package generate videos. Multiple backends are
available for saving the animations to file; see the [matplotlib
documentation](https://matplotlib.org/stable/api/animation_api.html#writer-classes)
for more details. In order to convert them to HTML5 for viewing (and thus, to view
them in a jupyter notebook), you'll also need [ffmpeg](https://ffmpeg.org/download.html)
installed and on your path. Depending on your system, this might already be
installed, but if not, the easiest way is probably through
[conda](https://anaconda.org/conda-forge/ffmpeg): `conda install -c conda-forge
ffmpeg`.
To change the backend, run `matplotlib.rcParams['animation.writer'] = writer`
before calling any of the animate functions. If you try to set that `rcParam`
with a random string, `matplotlib` will tell you the available choices.
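For example, a minimal sketch that selects the ffmpeg writer (assuming ffmpeg is
installed and on your path):

``` python
import matplotlib

# Select the writer used for saving animations; setting an invalid string here
# makes matplotlib report the available choices.
matplotlib.rcParams["animation.writer"] = "ffmpeg"
```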
## Contents
### Synthesis methods
![](docs/images/example_synth.svg)
- [Metamers](examples/06_Metamer.ipynb): given a model and a
  reference image, stochastically generate a new image whose model
  representation is identical to that of the reference image. This
  method investigates what image features the model disregards
  entirely (a minimal usage sketch follows this list).
- [Eigendistortions](examples/02_Eigendistortions.ipynb): given a
model and a reference image, compute the image perturbation that
produces the smallest and largest changes in the model response
space. This method investigates the image features the model
considers the least and most important.
- [Maximal differentiation (MAD)
competition](examples/07_MAD_Competition.ipynb): given two metrics
that measure distance between images and a reference image, generate
pairs of images that optimally differentiate the models.
Specifically, synthesize a pair of images that the first model says
are equi-distant from the reference while the second model says they
are maximally/minimally distant from the reference. Then synthesize
a second pair with the roles of the two models reversed. This method
allows for efficient comparison of two metrics, highlighting the
aspects in which their sensitivities differ.
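To make the workflow concrete, here is a minimal metamer sketch modeled on the
Quickstart tutorial. The toy `LuminancePooling` model is hypothetical, and the
`po.data.einstein`, `po.tools.remove_grad`, and `po.synth.Metamer` names are
assumed to follow the current API; check the tutorials for the exact signatures
in your installed version.

``` python
import torch
import plenoptic as po

class LuminancePooling(torch.nn.Module):
    """Hypothetical toy model: local average luminance over 16x16 patches."""
    def __init__(self):
        super().__init__()
        self.pool = torch.nn.AvgPool2d(kernel_size=16)

    def forward(self, image):
        return self.pool(image)

image = po.data.einstein()          # example image bundled with plenoptic
model = LuminancePooling().eval()
po.tools.remove_grad(model)         # synthesis optimizes the image, not the model

metamer = po.synth.Metamer(image, model)
metamer.synthesize(max_iter=200)    # the synthesized image is stored on the object
```

The synthesized image matches the reference in the model's output (here, patch-averaged
luminance) while differing elsewhere, revealing what this model discards.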
### Models, Metrics, and Model Components
- Portilla-Simoncelli texture model, which measures the statistical properties
of visual textures, here defined as "repeating visual patterns."
- Steerable pyramid, a multi-scale oriented image decomposition. The basis functions are
oriented (steerable) filters, localized in space and frequency. Among other
uses, the steerable pyramid serves as a good representation from which to
build a primary visual cortex model. See the [pyrtools
documentation](https://pyrtools.readthedocs.io/en/latest/index.html) for
more details on image pyramids in general and the steerable pyramid in
particular.
- Structural Similarity Index (SSIM), a perceptual similarity metric
  returning a number between -1 (totally different) and 1 (identical)
  reflecting how similar two images are. This is based on the images'
  luminance, contrast, and structure, which are computed convolutionally
  across the images (a short usage sketch follows this list).
- Multiscale Structural Similarity Index (MS-SSIM), a perceptual similarity
  metric similar to SSIM, except it operates at multiple scales (i.e.,
  spatial frequencies).
- Normalized Laplacian distance, a perceptual distance metric based on
  transformations associated with the early visual system: local luminance
  subtraction and local contrast gain control, at six scales.
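As a rough illustration (not a definitive reference), the sketch below compares
an image to a noisy copy using these metrics; the `po.metric.ssim`,
`po.metric.ms_ssim`, and `po.metric.nlpd` names are assumed from the
documentation and may differ across versions.

``` python
import torch
import plenoptic as po

img = po.data.einstein()                # example image bundled with plenoptic
noisy = (img + 0.1 * torch.randn_like(img)).clamp(0, 1)

print(po.metric.ssim(img, noisy))       # similarity in [-1, 1]; 1 means identical
print(po.metric.ms_ssim(img, noisy))    # multiscale variant of SSIM
print(po.metric.nlpd(img, noisy))       # normalized Laplacian pyramid distance
```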
## Getting help
We communicate via several channels on GitHub:
- [Discussions](https://github.com/plenoptic-org/plenoptic/discussions)
is the place to ask usage questions, discuss issues too broad for a
single issue, or show off what you've made with plenoptic.
- If you've come across a bug, open an
[issue](https://github.com/plenoptic-org/plenoptic/issues).
- If you have an idea for an extension or enhancement, please post in the
[ideas
section](https://github.com/plenoptic-org/plenoptic/discussions/categories/ideas)
of discussions first. We'll discuss it there and, if we decide to pursue it,
open an issue to track progress.
- See the [contributing guide](CONTRIBUTING.md) for how to get involved.
In all cases, please follow our [code of conduct](CODE_OF_CONDUCT.md).
## Citing us
If you use `plenoptic` in a published academic article or presentation, please
cite both the code (via its DOI) and the JOV paper. If you are not using the
code, but just discussing the project, please cite the paper. You can click on
`Cite this repository` on the right side of the GitHub page to get a copyable
citation for the code, or use the following:
- Code: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10151131.svg)](https://doi.org/10.5281/zenodo.10151131)
- Paper:
``` bibtex
@article{duong2023plenoptic,
title={Plenoptic: A platform for synthesizing model-optimized visual stimuli},
author={Duong, Lyndon and Bonnen, Kathryn and Broderick, William and Fiquet, Pierre-{\'E}tienne and Parthasarathy, Nikhil and Yerxa, Thomas and Zhao, Xinyuan and Simoncelli, Eero},
journal={Journal of Vision},
volume={23},
number={9},
pages={5822--5822},
year={2023},
publisher={The Association for Research in Vision and Ophthalmology}
}
```
See the [citation
guide](https://plenoptic.readthedocs.io/en/latest/citation.html) for more
details, including citations for the different synthesis methods and
computational models included in plenoptic.
## Support
This package is supported by the Simons Foundation Flatiron Institute's Center
for Computational Neuroscience.
![](docs/images/CCN-logo-wText.png)
[^1]: These methods also work with auditory models, such as in [Feather et al.,
2019](https://proceedings.neurips.cc/paper_files/paper/2019/hash/ac27b77292582bc293a51055bfc994ee-Abstract.html),
though we haven't yet implemented examples. If you're interested, please
post in
[Discussions](https://github.com/plenoptic-org/plenoptic/discussions)!