# Torch-Dreams
Making neural networks more interpretable, for research and art.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Mayukhdeb/torch-dreams-notebooks/blob/main/docs_notebooks/hello_torch_dreams.ipynb)
[![build](https://github.com/Mayukhdeb/torch-dreams/actions/workflows/main.yml/badge.svg)](https://github.com/Mayukhdeb/torch-dreams/actions/workflows/main.yml)
[![codecov](https://codecov.io/gh/Mayukhdeb/torch-dreams/branch/master/graph/badge.svg?token=krU6dNleoJ)](https://codecov.io/gh/Mayukhdeb/torch-dreams)
<!-- [![](https://img.shields.io/twitter/url?label=Docs&style=flat-square&url=https%3A%2F%2Fapp.gitbook.com%2F%40mayukh09%2Fs%2Ftorch-dreams%2F)](https://app.gitbook.com/@mayukh09/s/torch-dreams/) -->
<img src = "https://github.com/Mayukhdeb/torch-dreams/blob/master/images/banner_segmentation_model.png?raw=true">
```
pip install torch-dreams
```
## Contents:
* [Minimal example](https://github.com/Mayukhdeb/torch-dreams#minimal-example)
* [Not so minimal example](https://github.com/Mayukhdeb/torch-dreams#not-so-minimal-example)
* [Visualizing individual channels with `custom_func`](https://github.com/Mayukhdeb/torch-dreams#visualizing-individual-channels-with-custom_func)
* [Caricatures](https://github.com/Mayukhdeb/torch-dreams#caricatures)
* [Visualize features from multiple models simultaneously](https://github.com/Mayukhdeb/torch-dreams#visualize-features-from-multiple-models-simultaneously)
* [Use custom transforms](https://github.com/Mayukhdeb/torch-dreams#using-custom-transforms)
* [Feedback loops](https://github.com/Mayukhdeb/torch-dreams#you-can-also-use-outputs-of-one-render-as-the-input-of-another-to-create-feedback-loops)
* [Custom images](https://github.com/Mayukhdeb/torch-dreams#using-custom-images)
* [Working on models with different image normalizations](https://github.com/Mayukhdeb/torch-dreams#working-on-models-with-different-image-normalizations)
* [Masked image parameters](https://github.com/Mayukhdeb/torch-dreams#masked-image-parameters)
* [Other conveniences](https://github.com/Mayukhdeb/torch-dreams#other-conveniences)
* [Development](https://github.com/Mayukhdeb/torch-dreams#development)
## Minimal example
> Make sure you also check out the [quick start colab notebook](https://colab.research.google.com/github/Mayukhdeb/torch-dreams-notebooks/blob/main/docs_notebooks/hello_torch_dreams.ipynb)
```python
import matplotlib.pyplot as plt
import torchvision.models as models
from torch_dreams import Dreamer

model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, device = 'cuda')

image_param = dreamy_boi.render(
    layers = [model.Mixed_5b],
)

plt.imshow(image_param)
plt.show()
```
## Not so minimal example
```python
model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, device = 'cuda', quiet = False)

image_param = dreamy_boi.render(
    layers = [model.Mixed_5b],
    width = 256,
    height = 256,
    iters = 150,
    lr = 9e-3,
    rotate_degrees = 15,
    scale_max = 1.2,
    scale_min = 0.5,
    translate_x = 0.2,
    translate_y = 0.2,
    custom_func = None,
    weight_decay = 1e-2,
    grad_clip = 1.,
)

plt.imshow(image_param)
plt.show()
```
## Visualizing individual channels with `custom_func`
```python
model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, device = 'cuda')

layers_to_use = [model.Mixed_6b.branch1x1.conv]

def make_custom_func(layer_number = 0, channel_number = 0):
    def custom_func(layer_outputs):
        loss = layer_outputs[layer_number][:, channel_number].mean()
        return -loss
    return custom_func

my_custom_func = make_custom_func(layer_number = 0, channel_number = 119)

image_param = dreamy_boi.render(
    layers = layers_to_use,
    custom_func = my_custom_func,
)

plt.imshow(image_param)
plt.show()
```
## Batched generation for large scale experiments
`BatchedAutoImageParam` paired with `BatchedObjective` can be used to generate multiple feature visualizations in parallel. Memory usage grows with the batch size, but it is faster than generating one visualization at a time.
```python
import os

import torchvision.models as models
from torch_dreams import Dreamer
from torch_dreams.batched_objective import BatchedObjective
from torch_dreams.batched_image_param import BatchedAutoImageParam

model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, device="cuda")

## specify the list of neuron indices to visualize
batch_neuron_indices = list(range(10, 20))

## set up a batch of trainable image parameters
bap = BatchedAutoImageParam(
    batch_size=len(batch_neuron_indices),
    width=256,
    height=256,
    standard_deviation=0.01
)

## objective generator for each neuron
def make_custom_func(layer_number=0, channel_number=0):
    def custom_func(layer_outputs):
        loss = layer_outputs[layer_number][:, channel_number].norm()
        return -loss
    return custom_func

## prepare one objective function per neuron index
batched_objective = BatchedObjective(
    objectives=[make_custom_func(channel_number=i) for i in batch_neuron_indices]
)

## render the activation maximization signals
result_batch = dreamy_boi.render(
    layers=[model.Mixed_5b],
    image_parameter=bap,
    iters=120,
    custom_func=batched_objective,
)

## save the results (create the folder first so save() does not fail)
os.makedirs("results", exist_ok=True)
for idx, i in enumerate(batch_neuron_indices):
    result_batch[idx].save(f"results/{i}.jpg")
```
## Caricatures
Caricatures create a new image whose activation pattern at a given layer (or at multiple layers at once) is similar to, but more extreme than, that of the input image. It's inspired by [this issue](https://github.com/tensorflow/lucid/issues/121).
<img src = "https://raw.githubusercontent.com/Mayukhdeb/torch-dreams/master/images/caricature.png" width = "70%">
In this case, let's use GoogLeNet:
```python
model = models.googlenet(pretrained = True)
dreamy_boi = Dreamer(model = model, quiet = False, device = 'cuda')

image_param = dreamy_boi.caricature(
    input_tensor = image_tensor, ## an NCHW tensor holding the input image
    layers = [model.inception4c], ## feel free to append more layers for more interesting caricatures
    power = 1.2, ## higher -> more "exaggerated" features
)

plt.imshow(image_param)
plt.show()
```
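The snippet above assumes `image_tensor` already holds an input image. Here's a minimal sketch of one way to prepare it with PIL and torchvision; the filename is just a placeholder:
```python
import torchvision.transforms as transforms
from PIL import Image

## load an image and convert it to a float NCHW tensor in [0, 1]
image = Image.open('images/sample_small.jpg').convert('RGB')
image_tensor = transforms.ToTensor()(image).unsqueeze(0) ## shape: (1, 3, H, W)
```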
## Visualize features from multiple models simultaneously
First, let's pick two models and specify which layers we want to work with:
```python
from torch_dreams.model_bunch import ModelBunch

bunch = ModelBunch(
    model_dict = {
        'inception': models.inception_v3(pretrained=True).eval(),
        'resnet': models.resnet18(pretrained=True).eval()
    }
)

layers_to_use = [
    bunch.model_dict['inception'].Mixed_6a,
    bunch.model_dict['resnet'].layer2[0].conv1
]

dreamy_boi = Dreamer(model = bunch, quiet = False, device = 'cuda')
```
Then define a `custom_func` that determines exactly which activations of the models to optimize:
```python
def custom_func(layer_outputs):
    loss = layer_outputs[0].mean()*2.0 + layer_outputs[1][:, 89].mean()
    return -loss
```
Run the optimization
```python
image_param = dreamy_boi.render(
    layers = layers_to_use,
    custom_func = custom_func,
    iters = 100
)

plt.imshow(image_param)
plt.show()
```
## Using custom transforms
```python
import torchvision.transforms as transforms

model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, device = 'cuda', quiet = False)

my_transforms = transforms.Compose([
    transforms.RandomAffine(degrees = 10, translate = (0.5, 0.5)),
    transforms.RandomHorizontalFlip(p = 0.3)
])

dreamy_boi.set_custom_transforms(transforms = my_transforms)

image_param = dreamy_boi.render(
    layers = [model.Mixed_5b],
)

plt.imshow(image_param)
plt.show()
```
## You can also use outputs of one `render()` as the input of another to create feedback loops.
```python
import matplotlib.pyplot as plt
import torchvision.models as models
from torch_dreams import Dreamer

model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, device = 'cuda', quiet = False)

image_param = dreamy_boi.render(
    layers = [model.Mixed_6c],
)

image_param = dreamy_boi.render(
    image_parameter = image_param,
    layers = [model.Mixed_5b],
    iters = 20
)

plt.imshow(image_param)
plt.show()
```
## Using custom images
Note that you might have to use smaller values for certain hyperparameters like `lr` and `grad_clip`.
```python
from torch_dreams.custom_image_param import CustomImageParam

## image can be either a filename or a torch tensor of shape NCHW
param = CustomImageParam(image = 'images/sample_small.jpg', device = 'cuda')

image_param = dreamy_boi.render(
    image_parameter = param,
    layers = [model.Mixed_6c],
    lr = 2e-4,
    grad_clip = 0.1,
    weight_decay = 1e-1,
    iters = 120
)
```
## Working on models with different image normalizations
`torch-dreams` generally works with models trained on images normalized with the ImageNet `mean` and `std`, but that can easily be overridden to support any other normalization. For example, if you have a model trained with `mean = [0.5, 0.5, 0.5]` and `std = [0.5, 0.5, 0.5]`:
```python
import torchvision

t = torchvision.transforms.Normalize(
    mean = [0.5, 0.5, 0.5],
    std = [0.5, 0.5, 0.5]
)

## normalization_transform can be any instance of torch.nn.Module
dreamy_boi.set_custom_normalization(normalization_transform = t)
```
## Masked image parameters
A masked image parameter can be used to optimize only certain parts of the image, using a mask whose values are clipped to `[0, 1]`.
<img src = "https://raw.githubusercontent.com/Mayukhdeb/torch-dreams/master/images/masked_param.png" width = "80%">
Here's an example with a vertical gradient:
```python
import torch
from torch_dreams.masked_image_param import MaskedImageParam

## build a mask with a vertical gradient
mask = torch.ones(1, 1, 512, 512)
for i in range(512):
    mask[:, :, i, :] = i / 512

param = MaskedImageParam(
    image = 'images/sample_small.jpg', ## optional
    mask_tensor = mask,
    device = 'cuda'
)

param = dreamy_boi.render(
    layers = [model.inception4c],
    image_parameter = param,
    lr = 1e-4,
    grad_clip = 0.1,
    weight_decay = 1e-1,
    iters = 200,
)

param.save('masked_param_output.jpg')
```
It's also possible to update the mask on the fly with `param.update_mask(some_mask)`:
```python
param.update_mask(mask = torch.flip(mask, dims = (2,))) ## flip the mask vertically

param = dreamy_boi.render(
    layers = [model.inception4a],
    image_parameter = param,
    lr = 1e-4,
    grad_clip = 0.1,
    weight_decay = 1e-1,
    iters = 200,
)

param.save('masked_param_output_2.jpg')
```
## Other conveniences
The following methods are handy for an [`auto_image_param`](https://github.com/Mayukhdeb/torch-dreams/blob/master/torch_dreams/auto_image_param.py) instance:
1. Saving outputs as images:
```python
image_param.save('output.jpg')
```
2. Getting a torch tensor of dimensions `(height, width, color_channels)`:
```python
torch_image = image_param.to_hwc_tensor(device = 'cpu')
```
3. Getting a torch tensor of dimensions `(color_channels, height, width)`:
```python
torch_image_chw = image_param.to_chw_tensor(device = 'cpu')
```
4. Displaying outputs with matplotlib:
```python
plt.imshow(image_param)
plt.show()
```
5. For instances of `custom_image_param`, you can set any NCHW tensor as the image parameter:
```python
image_tensor = image_param.to_nchw_tensor()

## do some stuff with image_tensor
t = transforms.Compose([
    transforms.RandomRotation(5)
])
transformed_image_tensor = t(image_tensor)

image_param.set_param(tensor = transformed_image_tensor)
```
## Args for `render()`
* `layers` (`iterable`): List of the model's layers to work on, e.g. `[model.layer1, model.layer2, ...]`
* `image_parameter` (`auto_image_param`, optional): Instance of `torch_dreams.auto_image_param.auto_image_param`
* `width` (`int`, optional): Width of image to be optimized
* `height` (`int`, optional): Height of image to be optimized
* `iters` (`int`, optional): Number of iterations, higher -> stronger visualization
* `lr` (`float`, optional): Learning rate
* `rotate_degrees` (`int`, optional): Max rotation in default transforms
* `scale_max` (`float`, optional): Max image size factor. Defaults to 1.1.
* `scale_min` (`float`, optional): Minimum image size factor. Defaults to 0.5.
* `translate_x` (`float`, optional): Maximum translation factor in x direction
* `translate_y` (`float`, optional): Maximum translation factor in y direction
* `custom_func` (`function`, optional): Can be used to define custom optimization objectives for `render()`. Defaults to None.
* `weight_decay` (`float`, optional): Weight decay for default optimizer. Helps prevent high frequency noise. Defaults to 0.
* `grad_clip` (`float`, optional): Maximum value of the norm of gradient. Defaults to 1.
## Args for `Dreamer.__init__()`
* `model` (`nn.Module` or `torch_dreams.model_bunch.ModelBunch`): Almost any PyTorch model that was trained with the ImageNet `mean` and `std` and supports variable-sized images as input. You can pass multiple models into this argument as a `torch_dreams.model_bunch.ModelBunch` instance.
* `quiet` (`bool`): Set to `True` if you want to disable any progress bars
* `device` (`str`): `cuda` or `cpu` depending on your runtime
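For instance, here's a minimal sketch that picks the device automatically, falling back to the CPU when no GPU is available:
```python
import torch
import torchvision.models as models
from torch_dreams import Dreamer

## use the GPU when one is available, the CPU otherwise
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = models.inception_v3(pretrained=True)
dreamy_boi = Dreamer(model, quiet = False, device = device)
```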
## Development
1. Clone the repo and navigate into the folder
```
git clone git@github.com:Mayukhdeb/torch-dreams.git
cd torch-dreams/
```
2. Install dependencies
```
pip install -r requirements.txt
```
3. Install `torch-dreams` as an editable module
```
python3 setup.py develop
```
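On recent versions of pip, the same editable install can also be done with:
```
pip install -e .
```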
## Citation
```
@misc{mayukhdebtorchdreams,
title={Feature Visualization library for PyTorch},
author={Mayukh Deb},
year={2021},
publisher={GitHub},
howpublished={\url{https://github.com/Mayukhdeb/torch-dreams}},
}
```
## Acknowledgements
* [amFOSS](https://amfoss.in/)
* [Gene Kogan](https://github.com/genekogan)
## Recommended Reading
* [Feature Visualization](https://distill.pub/2017/feature-visualization/)
* [Google AI blog on DeepDream](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html)