lovely-tensors

- Name: lovely-tensors
- Version: 0.1.15
- Home page: https://github.com/xl0/lovely-tensors
- Summary: ❤️ Lovely Tensors
- Author: Alexey Zaytsev
- License: MIT License
- Requires Python: >=3.7
- Keywords: jupyter, pytorch, tensor, visualisation
- Uploaded: 2023-04-27 10:36:11
❤️ Lovely Tensors
================

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

<div>

## [Read full docs](https://xl0.github.io/lovely-tensors) \| 💘 [Lovely `JAX`](https://github.com/xl0/lovely-jax) \| 💟 [Lovely `NumPy`](https://github.com/xl0/lovely-numpy) \| [Discord](https://discord.gg/4NxRV7NH)

</div>

## Install

``` sh
pip install lovely-tensors
```

or

``` sh
mamba install lovely-tensors
```

or

``` sh
conda install -c conda-forge lovely-tensors
```

## How to use

How often do you find yourself debugging PyTorch code? You dump a tensor
to the cell output, and see this:

``` python
numbers
```

    tensor([[[-0.3541, -0.3369, -0.4054,  ..., -0.5596, -0.4739,  2.2489],
             [-0.4054, -0.4226, -0.4911,  ..., -0.9192, -0.8507,  2.1633],
             [-0.4739, -0.4739, -0.5424,  ..., -1.0390, -1.0390,  2.1975],
             ...,
             [-0.9020, -0.8335, -0.9363,  ..., -1.4672, -1.2959,  2.2318],
             [-0.8507, -0.7822, -0.9363,  ..., -1.6042, -1.5014,  2.1804],
             [-0.8335, -0.8164, -0.9705,  ..., -1.6555, -1.5528,  2.1119]],

            [[-0.1975, -0.1975, -0.3025,  ..., -0.4776, -0.3725,  2.4111],
             [-0.2500, -0.2325, -0.3375,  ..., -0.7052, -0.6702,  2.3585],
             [-0.3025, -0.2850, -0.3901,  ..., -0.7402, -0.8102,  2.3761],
             ...,
             [-0.4251, -0.2325, -0.3725,  ..., -1.0903, -1.0203,  2.4286],
             [-0.3901, -0.2325, -0.4251,  ..., -1.2304, -1.2304,  2.4111],
             [-0.4076, -0.2850, -0.4776,  ..., -1.2829, -1.2829,  2.3410]],

            [[-0.6715, -0.9853, -0.8807,  ..., -0.9678, -0.6890,  2.3960],
             [-0.7238, -1.0724, -0.9678,  ..., -1.2467, -1.0201,  2.3263],
             [-0.8284, -1.1247, -1.0201,  ..., -1.2641, -1.1596,  2.3786],
             ...,
             [-1.2293, -1.4733, -1.3861,  ..., -1.5081, -1.2641,  2.5180],
             [-1.1944, -1.4559, -1.4210,  ..., -1.6476, -1.4733,  2.4308],
             [-1.2293, -1.5256, -1.5081,  ..., -1.6824, -1.5256,  2.3611]]])

Was it really useful for you, as a human, to see all these numbers?

What is the shape? The size?  
What are the statistics?  
Are any of the values `nan` or `inf`?  
Is it an image of a man holding a tench?

``` python
import torch  # used throughout the examples below
import lovely_tensors as lt
```

``` python
lt.monkey_patch()
```

## Summary

``` python
numbers # torch.Tensor
```

    tensor[3, 196, 196] n=115248 (0.4Mb) x∈[-2.118, 2.640] μ=-0.388 σ=1.073
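The fields in that one-line summary are ordinary tensor statistics. As a minimal sketch (recomputed here with plain PyTorch on a small synthetic tensor, not via the lovely-tensors API):

``` python
import torch

t = torch.arange(12, dtype=torch.float32).reshape(3, 4)
n = t.numel()                                # element count, shown as n=...
lo, hi = t.min().item(), t.max().item()      # range, shown as x∈[lo, hi]
mu, sigma = t.mean().item(), t.std().item()  # mean and std, shown as μ and σ
```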

Better, huh?

``` python
numbers[1,:6,1] # Still shows values if there are not too many.
```

    tensor[6] x∈[-0.443, -0.197] μ=-0.311 σ=0.091 [-0.197, -0.232, -0.285, -0.373, -0.443, -0.338]

``` python
spicy = numbers[0,:12,0].clone()

spicy[0] *= 10000
spicy[1] /= 10000
spicy[2] = float('inf')
spicy[3] = float('-inf')
spicy[4] = float('nan')

spicy = spicy.reshape((2,6))
spicy # Spicy stuff
```

    tensor[2, 6] n=12 x∈[-3.541e+03, -4.054e-05] μ=-393.842 σ=1.180e+03 +Inf! -Inf! NaN!
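The `+Inf!`/`-Inf!`/`NaN!` flags correspond to checks you could run yourself with plain PyTorch; a minimal sketch (illustrative, not part of lovely-tensors):

``` python
import torch

t = torch.tensor([1.0, float('inf'), float('-inf'), float('nan')])
has_nan = bool(torch.isnan(t).any())  # True if any element is NaN
has_inf = bool(torch.isinf(t).any())  # True if any element is ±inf
```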

``` python
torch.zeros(10, 10) # A zero tensor - make it obvious
```

    tensor[10, 10] n=100 all_zeros

``` python
spicy.v # Verbose
```

    tensor[2, 6] n=12 x∈[-3.541e+03, -4.054e-05] μ=-393.842 σ=1.180e+03 +Inf! -Inf! NaN!
    tensor([[-3.5405e+03, -4.0543e-05,         inf,        -inf,         nan, -6.1093e-01],
            [-6.1093e-01, -5.9380e-01, -5.9380e-01, -5.4243e-01, -5.4243e-01, -5.4243e-01]])

``` python
spicy.p # The plain old way
```

    tensor([[-3.5405e+03, -4.0543e-05,         inf,        -inf,         nan, -6.1093e-01],
            [-6.1093e-01, -5.9380e-01, -5.9380e-01, -5.4243e-01, -5.4243e-01, -5.4243e-01]])

## Going `.deeper`

``` python
numbers.deeper
```

    tensor[3, 196, 196] n=115248 (0.4Mb) x∈[-2.118, 2.640] μ=-0.388 σ=1.073
      tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036
      tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=-0.274 σ=0.973
      tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178

``` python
# You can go deeper if you need to
numbers[:,:3,:5].deeper(2)
```

    tensor[3, 3, 5] n=45 x∈[-1.316, -0.197] μ=-0.593 σ=0.306
      tensor[3, 5] n=15 x∈[-0.765, -0.337] μ=-0.492 σ=0.124
        tensor[5] x∈[-0.440, -0.337] μ=-0.385 σ=0.041 [-0.354, -0.337, -0.405, -0.440, -0.388]
        tensor[5] x∈[-0.662, -0.405] μ=-0.512 σ=0.108 [-0.405, -0.423, -0.491, -0.577, -0.662]
        tensor[5] x∈[-0.765, -0.474] μ=-0.580 σ=0.125 [-0.474, -0.474, -0.542, -0.645, -0.765]
      tensor[3, 5] n=15 x∈[-0.513, -0.197] μ=-0.321 σ=0.099
        tensor[5] x∈[-0.303, -0.197] μ=-0.243 σ=0.055 [-0.197, -0.197, -0.303, -0.303, -0.215]
        tensor[5] x∈[-0.408, -0.232] μ=-0.327 σ=0.084 [-0.250, -0.232, -0.338, -0.408, -0.408]
        tensor[5] x∈[-0.513, -0.285] μ=-0.394 σ=0.102 [-0.303, -0.285, -0.390, -0.478, -0.513]
      tensor[3, 5] n=15 x∈[-1.316, -0.672] μ=-0.964 σ=0.176
        tensor[5] x∈[-0.985, -0.672] μ=-0.846 σ=0.123 [-0.672, -0.985, -0.881, -0.776, -0.916]
        tensor[5] x∈[-1.212, -0.724] μ=-0.989 σ=0.179 [-0.724, -1.072, -0.968, -0.968, -1.212]
        tensor[5] x∈[-1.316, -0.828] μ=-1.058 σ=0.179 [-0.828, -1.125, -1.020, -1.003, -1.316]

## Now in `.rgb` color

The important question: is it our man?

``` python
numbers.rgb
```

![](index_files/figure-commonmark/cell-13-output-1.png)

*Maaaaybe?* Looks like someone normalized him.

``` python
in_stats = ( (0.485, 0.456, 0.406),     # mean 
             (0.229, 0.224, 0.225) )    # std

# numbers.rgb(in_stats, cl=True) # For channel-last input format
numbers.rgb(in_stats)
```

![](index_files/figure-commonmark/cell-14-output-1.png)

It’s indeed our hero, the Tenchman!

## `.plt` the statistics

``` python
(numbers+3).plt
```

![](index_files/figure-commonmark/cell-15-output-1.svg)

``` python
(numbers+3).plt(center="mean", max_s=1000)
```

![](index_files/figure-commonmark/cell-16-output-1.svg)

``` python
(numbers+3).plt(center="range")
```

![](index_files/figure-commonmark/cell-17-output-1.svg)

## See the `.chans`

``` python
# .chans maps values in the range [-1,1] to colors.
# Make our values fit into that range to avoid clipping.
mean = torch.tensor(in_stats[0])[:,None,None]
std = torch.tensor(in_stats[1])[:,None,None]
numbers_01 = (numbers*std + mean)
numbers_01
```

    tensor[3, 196, 196] n=115248 (0.4Mb) x∈[0., 1.000] μ=0.361 σ=0.248
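Here the ImageNet statistics happen to land the values in \[0,1\], which already fits. For an arbitrary tensor, a generic min-max rescale into \[-1,1\] could look like this (an illustrative helper, not part of lovely-tensors):

``` python
import torch

t = torch.tensor([-2.0, 0.0, 2.0, 6.0])
# shift-and-scale so min maps to -1 and max maps to +1
scaled = 2 * (t - t.min()) / (t.max() - t.min()) - 1
```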

``` python
numbers_01.chans
```

![](index_files/figure-commonmark/cell-19-output-1.png)

Let’s try it with a Convolutional Neural Network.

``` python
from torchvision.models import vgg11
```

``` python
features: torch.nn.Sequential = vgg11().features

# I saved the first 5 layers in "features.pt"
_ = features.load_state_dict(torch.load("../features.pt"), strict=False)
```

``` python
# Activations of the second max pool layer of VGG11
acts = (features[:6](numbers[None])[0]/2) # /2 to reduce clipping
acts
```

    tensor[128, 49, 49] n=307328 (1.2Mb) x∈[0., 12.508] μ=0.367 σ=0.634 grad DivBackward0

``` python
acts[:4].chans(cmap="coolwarm", scale=4)
```

![](index_files/figure-commonmark/cell-23-output-1.png)

## Grouping

``` python
# Make 8 images with progressively higher brightness and stack them 2x2x2.
eight_images = (torch.stack([numbers]*8)
                    .add(torch.linspace(-3, 3, 8)[:,None,None,None])
                    .mul(torch.tensor(in_stats[1])[:,None,None])
                    .add(torch.tensor(in_stats[0])[:,None,None])
                    .clamp(0,1)
                    .view(2,2,2,3,196,196)
)
eight_images
```

    tensor[2, 2, 2, 3, 196, 196] n=921984 (3.5Mb) x∈[0., 1.000] μ=0.411 σ=0.369
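The `[:,None,None,None]` indexing above relies on broadcasting: the per-image offsets gain three trailing singleton dimensions so they expand over the channel and spatial axes. A tiny synthetic sketch of the same trick (illustrative shapes, not the real images):

``` python
import torch

base = torch.zeros(8, 3, 4, 4)                 # 8 tiny "images"
offsets = torch.linspace(-3, 3, 8)             # one brightness offset per image
shifted = base + offsets[:, None, None, None]  # broadcast over (C, H, W)
```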

``` python
eight_images.rgb
```

![](index_files/figure-commonmark/cell-25-output-1.png)

``` python
# Weights of the second conv layer of VGG11
features[3].weight
```

    Parameter containing:
    Parameter[128, 64, 3, 3] n=73728 (0.3Mb) x∈[-0.783, 0.776] μ=-0.004 σ=0.065 grad

I want ±2σ to fall in the range \[-1..1\].

``` python
weights = features[3].weight.data
weights = weights / (2*2*weights.std()) # *2 because we want 2σ on both sides, so 4σ
# weights += weights.std() * 2
weights.plt
```

![](index_files/figure-commonmark/cell-27-output-1.svg)
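As a sanity check on that choice (a standalone sketch using synthetic `torch.randn` weights, not the actual VGG11 tensor): for roughly normally distributed values, about 95% fall within ±2σ, so scaling by a multiple of σ keeps the bulk of the weights inside the displayable range:

``` python
import torch

torch.manual_seed(0)
w = torch.randn(100_000) * 0.065               # std comparable to the layer above
# fraction of values within ±2σ; ≈0.95 for a normal distribution
frac = (w.abs() <= 2 * w.std()).float().mean().item()
```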

``` python
# Weights of the second conv layer (64ch -> 128ch) of VGG11,
# grouped per output channel.
weights.chans(frame_px=1, gutter_px=0)
```

![](index_files/figure-commonmark/cell-28-output-1.png)

It’s a bit hard to see. Scale up 10x, but only show the first 4 filters.

``` python
weights[:4].chans(frame_px=1, gutter_px=0, scale=10)
```

![](index_files/figure-commonmark/cell-29-output-1.png)

## Options \| [Docs](https://xl0.github.io/lovely-tensors/utils.config.html)

``` python
from lovely_tensors import set_config, config, lovely, get_config
```

``` python
set_config(precision=1, sci_mode=True, color=False)
torch.tensor([1, 2, torch.nan])
```

    tensor[3] μ=1.5e+00 σ=7.1e-01 NaN! [1.0e+00, 2.0e+00, nan]

``` python
set_config(precision=None, sci_mode=None, color=None) # None -> Reset to defaults
```

``` python
print(torch.tensor([1., 2]))
# Or with config context manager.
with config(sci_mode=True, precision=5):
    print(torch.tensor([1., 2]))

print(torch.tensor([1., 2]))
```

    tensor[2] μ=1.500 σ=0.707 [1.000, 2.000]
    tensor[2] μ=1.50000e+00 σ=7.07107e-01 [1.00000e+00, 2.00000e+00]
    tensor[2] μ=1.500 σ=0.707 [1.000, 2.000]

## Without `.monkey_patch`

``` python
lt.lovely(spicy)
```

    tensor[2, 6] n=12 x∈[-3.541e+03, -4.054e-05] μ=-393.842 σ=1.180e+03 +Inf! -Inf! NaN!

``` python
lt.lovely(spicy, verbose=True)
```

    tensor[2, 6] n=12 x∈[-3.541e+03, -4.054e-05] μ=-393.842 σ=1.180e+03 +Inf! -Inf! NaN!
    tensor([[-3.5405e+03, -4.0543e-05,         inf,        -inf,         nan, -6.1093e-01],
            [-6.1093e-01, -5.9380e-01, -5.9380e-01, -5.4243e-01, -5.4243e-01, -5.4243e-01]])

``` python
lt.lovely(numbers, depth=1)
```

    tensor[3, 196, 196] n=115248 (0.4Mb) x∈[-2.118, 2.640] μ=-0.388 σ=1.073
      tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036
      tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=-0.274 σ=0.973
      tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178

``` python
lt.rgb(numbers, in_stats)
```

![](index_files/figure-commonmark/cell-37-output-1.png)

``` python
lt.plot(numbers, center="mean")
```

![](index_files/figure-commonmark/cell-38-output-1.svg)

``` python
lt.chans(numbers_01)
```

![](index_files/figure-commonmark/cell-39-output-1.png)

## Matplotlib integration \| [Docs](https://xl0.github.io/lovely-tensors/matplotlib.html)

``` python
numbers.rgb(in_stats).fig # matplotlib figure
```

![](index_files/figure-commonmark/cell-40-output-1.png)

``` python
(numbers*0.3+0.5).chans.fig # matplotlib figure
```

![](index_files/figure-commonmark/cell-41-output-1.png)

``` python
numbers.plt.fig.savefig('pretty.svg') # Save it
```

``` python
!file pretty.svg; rm pretty.svg
```

    pretty.svg: SVG Scalable Vector Graphics image

### Add content to existing Axes

``` python
import matplotlib.pyplot as plt  # needed for the figure below

fig = plt.figure(figsize=(8,3))
fig.set_constrained_layout(True)
gs = fig.add_gridspec(2,2)
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1, 0])
ax3 = fig.add_subplot(gs[1,1:])

ax2.set_axis_off()
ax3.set_axis_off()

numbers_01.plt(ax=ax1)
numbers_01.rgb(ax=ax2)
numbers_01.chans(ax=ax3);
```

![](index_files/figure-commonmark/cell-44-output-1.png)

            
