# Most Exciting Input
Generate ***Most Exciting Inputs*** to explore and understand a PyTorch model's behavior by identifying input samples that induce high activation in specific neurons of your model.
## Paper
[Input optimization for interpreting neural generative models](https://lacykaltgr.github.io/assets/pdf/TDK2023.pdf)
## Installation
```bash
pip install meitorch
```
## Usage
1. Load the model you want to generate interpretable visualizations for.
```python
model = load_your_model()
```
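If you just want to try the library, a small stand-in model is enough. The sketch below is illustrative only (not part of meitorch): a tiny CNN whose `(1, 40, 40)` input shape matches the examples that follow.
```python
import torch

# Hypothetical stand-in for load_your_model(): accepts a batch of
# (1, 40, 40) images and emits 10 scalar outputs per sample.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 40 * 40, 10),
)
model.eval()  # we optimize inputs, not weights
```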
2. Define the operation for which the most exciting inputs will be optimized.
Your operation must take a batch of inputs and return a dictionary of losses.
The optimization minimizes the loss named "objective", so negate any quantity you want to maximize (as in the example below); visualizations are generated for the resulting input.
```python
import torch

def operation(inputs):
    outputs = model(inputs)
    activation = outputs[:, 0]                  # activation of the first output unit
    activation = torch.mean(activation, dim=0)  # average over the batch
    loss = -activation                          # minimizing the objective maximizes the activation
    losses = dict(
        objective=loss,
        activation=activation,
    )
    return losses
```
You can also define more complex operations that include multiple losses.
Adding other losses to the dictionary will enable you to plot them after the optimization.
```python
def operation(inputs):
    outputs = model(inputs)
    activation = outputs[:, 0]
    activation = torch.mean(activation, dim=0)
    model_losses = compute_loss(inputs, outputs)  # your model's own loss function, returning a dict
    regularization = model_losses["elbo"] * 0.1
    loss = -activation + regularization
    losses = dict(
        objective=loss,
        activation=activation,
        elbo_regularization=regularization,
    )
    return losses
```
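Because the operation simply closes over the model, a convenient pattern is a small factory that builds an operation for any output unit. This is our own sketch, not a meitorch API:
```python
# Hypothetical helper (not part of meitorch): build an operation
# targeting an arbitrary output unit of the model.
def make_operation(unit_index):
    def operation(inputs):
        outputs = model(inputs)
        # mean activation of the chosen unit across the batch
        activation = torch.mean(outputs[:, unit_index], dim=0)
        return dict(objective=-activation, activation=activation)
    return operation

operation_for_unit_3 = make_operation(3)
```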
3. Create a MEI object with your operation and the input shape of your model.
```python
from meitorch.mei import MEI
device = "cuda" if torch.cuda.is_available() else "cpu"
mei = MEI(operation=operation, shape=(1, 40, 40), device=device)  # pass the callable, not its result
```
4. Define a configuration for the optimization and generate **Most Exciting Inputs**.
The configuration differs slightly between optimization schemes; see Configurations below.
**Generate pixel-wise MEI**
```python
pixel_mei_config = dict()  # your config here; see Configurations below
result = mei.generate_pixel_mei(config=pixel_mei_config)
```
**Generate variational MEI**
```python
variational_mei_config = dict()  # your config here; see Configurations below
result = mei.generate_variational_mei(config=variational_mei_config)
```
**Generate transformation MEI**
```python
transformation_mei_config = dict()  # your config here; see Configurations below
result = mei.generate_transformation_mei(config=transformation_mei_config)
```
5. Analyze the results
Access the generated images and the losses from the result object.
**Plot the loss curves and the visualizations**
```python
result.plot_losses(show=False, save_path=None, ranges=None)
result.plot_image_and_losses(save_path=None, ranges=None)
```
**Plot spatial frequency spectrum of the generated images**
```python
result.plot_spatial_frequency_spectrum()
```
**Further analysis**
You can further analyze the results with the **meitorch.analyze** module.
```python
from meitorch.analyze import Analyze
```
## Configurations
For all configurations, you can use a schedule instead of a constant value for any parameter.
A schedule is a function that takes the current iteration as input and returns the value for that iteration.
The schedule classes are available in **meitorch.tools.schedules**.
```python
from meitorch.tools.schedules import LinearSchedule
schedule = LinearSchedule(start=0.1, end=0.01)
```
Available schedules:
- `LinearSchedule(start, end)`
- `OctaveSchedule(values)`
- `RandomSchedule(minimum, maximum)`
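Since a schedule is just a callable from the current iteration to a value, you can also roll your own. Below is a hypothetical `CosineSchedule` sketch (not shipped with meitorch), assuming the library accepts any callable matching the contract stated above:
```python
import math

# Hypothetical custom schedule: cosine decay from `start` to `end`
# over `total_iters` iterations, constant at `end` afterwards.
class CosineSchedule:
    def __init__(self, start, end, total_iters):
        self.start, self.end, self.total_iters = start, end, total_iters

    def __call__(self, iteration):
        t = min(iteration / self.total_iters, 1.0)  # progress in [0, 1]
        return self.end + 0.5 * (self.start - self.end) * (1 + math.cos(math.pi * t))
```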
**Pixel-wise MEI configuration example**
```python
image_mei_config = dict(
    iter_n=2,          # number of optimization steps
    n_samples=1,       # number of samples per batch
    save_every=1,      # save a copy of the image every n iterations
    bias=0,            # bias of the distribution the image is sampled from
    scale=1,           # scaling of the distribution the image is sampled from
    diverse=False,     # whether to use diverse sampling
    diverse_params=dict(
        div_metric='euclidean',  # distance metric for diversity (euclidean, cosine, correlation)
        div_linkage='minimum',   # linkage criterion for diversity (minimum, average)
        div_weight=1.1,          # weight of the diversity loss
    ),

    # pre-step transformations
    scaler=1.01,       # scaling of the image before each step
    jitter=3,          # size of translational jittering before each step

    # normalization/clipping
    train_norm=1,      # norm adjustment during step
    norm=1,            # norm adjustment after step

    # optimizer
    optimizer="rmsprop",    # optimizer (sgd, mei, rmsprop, adam)
    optimizer_params=dict(
        lr=0.03,            # learning rate
        weight_decay=1e-6,  # weight decay
    ),

    # preconditioning of the gradient
    precond=0.3,       # strength of gradient preconditioning filter falloff (float or schedule)

    # denoiser after each step
    blur='gaussian',   # denoiser type (gaussian, tv, bilateral)
    blur_params=dict(
        # gaussian
        kernel_size=3,
        sigma=LinearSchedule(0.1, 0.01),

        # tv
        #regularization_scaler=1e-7,
        #lr=0.0001,
        #num_iters=5,

        # bilateral
        #kernel_size=3,
        #sigma_color=LinearSchedule(1, 0.01),
        #sigma_spatial=LinearSchedule(0.25, 0.01),
    ),
)
```
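Running the optimization with this config and inspecting the losses uses the calls from Usage steps 4 and 5:
```python
result = mei.generate_pixel_mei(config=image_mei_config)
result.plot_losses(show=True, save_path=None, ranges=None)
```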
**Variational MEI configuration example**
```python
from meitorch.tools.schedules import RandomSchedule

var_mei_config = dict(
    iter_n=1,          # number of optimization steps
    save_every=100,    # save image every n iterations
    bias=0,            # bias of the distribution the image is sampled from
    scale=1,           # scaling of the distribution the image is sampled from

    # transformations
    scaler=RandomSchedule(1, 1.025),  # scaling of the image (float or schedule)
    jitter=None,       # size of translational jittering

    # optimizer
    optimizer="rmsprop",    # optimizer (sgd, mei, rmsprop, adam)
    optimizer_params=dict(
        lr=0.04,            # learning rate
        weight_decay=1e-7,  # weight decay
    ),

    # preconditioning
    precond=0.4,       # strength of gradient preconditioning filter

    # variational
    distribution='normal',       # distribution of the MEI (normal, laplace)
    n_samples_per_batch=(128,),  # number of samples per batch (tuple)
    fixed_stddev=0.4,            # fixed stddev of the distribution, None for learned stddev
)
```
**Transformation MEI configuration example**
For the transformation MEI, define a transformation operation that takes an image as input and returns a transformed image.
Any differentiable (backpropagatable) operation can be used as a transformation. The example below uses a generative convolutional network, defined in **meitorch.tools.transformations**.
```python
from meitorch.tools.transformations import GenerativeConvNet

transformation_mei_config = dict(
    iter_n=150,        # number of optimization steps
    save_every=1,      # save image every n iterations
    bias=0,            # bias of the distribution the image is sampled from
    scale=1,           # scaling of the distribution the image is sampled from
    n_samples=128,     # number of samples per batch

    # transformations before each step
    scaler=None,       # scaling of the image (float or schedule)
    jitter=None,       # size of translational jittering

    # normalization
    train_norm=None,   # norm adjustment during step

    # optimizer
    optimizer="mei",   # optimizer (sgd, mei, rmsprop, adam)
    optimizer_params=dict(
        lr=0.02,            # learning rate
        weight_decay=1e-5,  # weight decay
    ),

    # preconditioning
    precond=0.4,       # strength of gradient preconditioning filter

    # transformation operation
    transform=GenerativeConvNet(
        hidden_sizes=[1],
        fixed_stddev=0.6,
        kernel_size=9,
        activation=torch.nn.ReLU(),
        activate_output=False,
        shape=(1, 40, 40),
    ),
)
```
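`GenerativeConvNet` is just one choice; per the contract above, any differentiable image-to-image module should work. Below is a hypothetical example of our own (not part of meitorch), assuming the library accepts any `torch.nn.Module` as `transform`:
```python
import torch

# Hypothetical custom transformation: bilinear upsampling followed by
# average pooling back to the original size, a blur-like smoothing that
# is differentiable end to end (image in -> image out, same shape).
class SmoothTransform(torch.nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale

    def forward(self, image):
        up = torch.nn.functional.interpolate(
            image, scale_factor=self.scale, mode="bilinear", align_corners=False
        )
        return torch.nn.functional.avg_pool2d(up, kernel_size=self.scale)

# usage sketch: pass transform=SmoothTransform() in the config above
```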