# torchconvview
[PyPI](https://pypi.python.org/pypi/torchconvview)
[CI](https://github.com/paulgavrikov/torchconvview/actions/workflows/pytest.yml)
[![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]
[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/
[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png
[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg
*A library for PyTorch convolution layer visualizations via matplotlib plots.*
## Installation
To install published releases from PyPI, run:
```bash
pip install torchconvview
```
To update torchconvview to the latest available version, add the `--upgrade` flag to the command above.
If you want the latest (potentially unstable) features, you can also install directly from the GitHub main branch:
```bash
pip install git+https://github.com/paulgavrikov/torchconvview
```
## Usage
```python
from torchconvview import plot_conv, plot_conv_rgb, PCAView
import matplotlib.pyplot as plt
# Replace this with your own model. As an example,
# we will use an ImageNet pretrained ResNet-18.
import torchvision
model = torchvision.models.resnet18(pretrained=True)
```
### General
All `plot_...` functions return a tuple of the matplotlib figure and axes, which allows you to customize the plot to your needs. Additionally, most of these functions accept an `img_scale` argument that lets you specify a resolution multiplier.
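For example, here is a minimal sketch of customizing the returned figure, assuming `plot_conv` returns `(fig, axes)` and accepts `img_scale` as described above:
```python
# Capture the returned figure and axes to tweak the plot before showing/saving it.
fig, axes = plot_conv(model.layer1[1].conv2.weight, img_scale=2)  # img_scale multiplies the resolution
fig.suptitle("layer1[1].conv2 kernels")
fig.savefig("conv_kernels.png", dpi=150, bbox_inches="tight")
plt.show()
```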
### Visualize kernels in the convolution layers
Just pass the convolution weight as a tensor or NumPy array into `plot_conv` and you'll get a matplotlib figure of the kernels! Each column is one channel/filter, i.e. this stack of kernels generates one feature map from all input maps.
```python
plot_conv(model.layer1[1].conv2.weight)
plt.show()
```
<img src="docs/fig/output_plot_conv.png" width="30%">
### Visualize the first layer
If you have a convolution layer with RGB input (often the first layer), you can visualize entire filters. This function maps all kernels to their respective color channel. Note that this only works on convolution layers with 3 input channels and only produces meaningful results if these channels are R, G, and B feature maps!
```python
plot_conv_rgb(model.conv1.weight)
plt.show()
```
<img src="docs/fig/output_plot_conv_rgb.png" width="100%">
### PCA of convolution weights
You can also compute the eigenimages/basis vectors of the kernels by using the `PCAView` class. Under the hood it performs a PCA for you. Note that this currently requires the `scikit-learn` package.
```python
pcaview = PCAView(model.conv1.weight)
pcaview.plot_conv()
plt.show()
```
<img src="docs/fig/output_pcaview_plot_conv.png" width="10%">
And to get a handy bar plot of the explained variance ratio:
```python
pcaview.plot_variance_ratio()
plt.show()
```
<img src="docs/fig/output_pcaview_plot_variance_ratio.png" width="30%">
## Citation
Please consider citing our publication if this library was helpful to you.
```
@InProceedings{Gavrikov_2022_CVPR,
author = {Gavrikov, Paul and Keuper, Janis},
title = {CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {19066-19076}
}
```
## Legal
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].
Funded by the Ministry for Science, Research and Arts, Baden-Wuerttemberg, Germany Grant 32-7545.20/45/1 (Q-AMeLiA).