pytorch-memlab

Name: pytorch-memlab
Version: 0.3.0
Home page: https://github.com/Stonesjtu/pytorch_memlab
Summary: A lab to do simple and accurate memory experiments on pytorch
Upload time: 2023-07-29 13:27:13
Author: Kaiyu Shi
License: MIT
Keywords: pytorch, memory, profile
pytorch_memlab
======
[![Build Status](https://travis-ci.com/Stonesjtu/pytorch_memlab.svg?token=vyTdxHbi1PCRzV6disHp&branch=master)](https://travis-ci.com/Stonesjtu/pytorch_memlab)
![PyPI](https://img.shields.io/pypi/v/pytorch_memlab.svg)
[![CodeQL: Python](https://github.com/Stonesjtu/pytorch_memlab/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/Stonesjtu/pytorch_memlab/actions/workflows/github-code-scanning/codeql)
![PyPI - Downloads](https://img.shields.io/pypi/dm/pytorch_memlab.svg)

A simple and accurate **CUDA** memory management laboratory for pytorch.
It consists of several parts dealing with different aspects of memory:

- Features:

  - Memory Profiler: A `line_profiler` style CUDA memory profiler with simple API.
  - Memory Reporter: A reporter to inspect tensors occupying the CUDA memory.
  - Courtesy: An interesting feature that temporarily moves all CUDA tensors into
    CPU memory as a courtesy to other users, and of course transfers them back afterwards.
  - IPython support through `%mlrun`/`%%mlrun` line/cell magic
    commands.


- Table of Contents
  * [Installation](#installation)
  * [User-Doc](#user-doc)
    + [Memory Profiler](#memory-profiler)
    + [IPython support](#ipython-support)
    + [Memory Reporter](#memory-reporter)
    + [Courtesy](#courtesy)
    + [ACK](#ack)
  * [CHANGES](#changes)

Installation
-----

- Released version:
```bash
pip install pytorch_memlab
```

- Newest version:
```bash
pip install git+https://github.com/stonesjtu/pytorch_memlab
```

What's it for
-----

Out-of-memory errors in pytorch happen frequently, to newcomers and
experienced programmers alike. A common reason is that most people never really
learn the underlying memory management philosophy of pytorch and GPUs: they
write memory-inefficient code and then complain about pytorch eating too much
CUDA memory.

In this repo, I share some useful tools to help debug OOM errors, or to
inspect the underlying memory mechanism for anyone who is interested.


User-Doc
-----

### Memory Profiler

The memory profiler is a modification of python's `line_profiler`: it reports
the CUDA memory usage for each line of code in the specified function/method.

#### Sample:

```python
import torch
from pytorch_memlab import LineProfiler

def inner():
    torch.nn.Linear(100, 100).cuda()

def outer():
    linear = torch.nn.Linear(100, 100).cuda()
    linear2 = torch.nn.Linear(100, 100).cuda()
    inner()

with LineProfiler(outer, inner) as prof:
    outer()
prof.display()
```

After the script finishes, or is interrupted by the keyboard, it gives the
following profiling info if you're in a Jupyter notebook:

<p align="center"><img src="readme-output.png" width="640"></p>

or the following info if you're in a text-only terminal:

```
## outer

active_bytes reserved_bytes line  code
         all            all
        peak           peak
       0.00B          0.00B    7  def outer():
      40.00K          2.00M    8      linear = torch.nn.Linear(100, 100).cuda()
      80.00K          2.00M    9      linear2 = torch.nn.Linear(100, 100).cuda()
     120.00K          2.00M   10      inner()


## inner

active_bytes reserved_bytes line  code
         all            all
        peak           peak
      80.00K          2.00M    4  def inner():
     120.00K          2.00M    5      torch.nn.Linear(100, 100).cuda()
```

An explanation of what each column means can be found in the [Torch documentation](https://pytorch.org/docs/stable/cuda.html#torch.cuda.memory_stats). The name of any field from `memory_stats()`
can be passed to `display()` to view the corresponding statistic.
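
A minimal sketch of selecting other statistics; the keyword argument name is assumed here to be `columns`, so check `help(LineProfiler.display)` in your installed version:

```python
import torch
from pytorch_memlab import LineProfiler

def alloc():
    torch.nn.Linear(100, 100).cuda()

with LineProfiler(alloc) as prof:
    alloc()

# The available field names come from torch.cuda.memory_stats().
print(list(torch.cuda.memory_stats().keys())[:5])

# Assumption: display() accepts the field names via a `columns` argument;
# verify the parameter name against your installed pytorch_memlab version.
prof.display(columns=('allocated_bytes.all.peak', 'reserved_bytes.all.peak'))
```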

If you use the `profile` decorator, the memory statistics are collected over
multiple runs and only the maximum is displayed at the end.
We also provide a more flexible API called `profile_every`, which prints the
memory info every *N* executions of the function. You can simply replace
`@profile` with `@profile_every(1)` to print the memory usage after each
execution.

`@profile` and `@profile_every` can also be mixed to gain finer control over
the debugging granularity, as in the sketch below.
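
A minimal sketch of mixing the two decorators; the toy linear model, input sizes and loop are purely illustrative:

```python
import torch
from pytorch_memlab import profile, profile_every

model = torch.nn.Linear(1024, 1024).cuda()
inp = torch.randn(512, 1024).cuda()

@profile_every(1)           # print memory stats after every single call
def train_step():
    model(inp).mean().backward()

@profile                    # print only the peak usage, once at the end
def evaluate():
    with torch.no_grad():
        return model(inp).mean()

for _ in range(3):
    train_step()
evaluate()
```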

- You can also add the decorator to methods of a module class:

```python
import torch
from pytorch_memlab import profile

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(100, 100)

    @profile
    def forward(self, inp):
        # do_something: an illustrative layer so the method is runnable
        return self.linear(inp)
```

- The *Line Profiler* profiles the memory usage of CUDA device 0 by default;
you can switch the device to profile with `set_target_gpu`. The GPU selection
is global, which means you have to keep track of which GPU you are profiling
throughout the whole process:

```python
import torch
from pytorch_memlab import profile, set_target_gpu
@profile
def func():
    net1 = torch.nn.Linear(1024, 1024).cuda(0)
    set_target_gpu(1)
    net2 = torch.nn.Linear(1024, 1024).cuda(1)
    set_target_gpu(0)
    net3 = torch.nn.Linear(1024, 1024).cuda(0)

func()
```


More samples can be found in `test/test_line_profiler.py`

### IPython support

Make sure you have `IPython` installed, or have installed `pytorch-memlab` with
`pip install pytorch-memlab[ipython]`.

First, load the extension:

```python
%load_ext pytorch_memlab
```

This makes the `%mlrun` and `%%mlrun` line/cell magics available. For example,
run the following in a new cell to profile the entire cell:

```python
%%mlrun -f func
import torch
from pytorch_memlab import profile, set_target_gpu
def func():
    net1 = torch.nn.Linear(1024, 1024).cuda(0)
    set_target_gpu(1)
    net2 = torch.nn.Linear(1024, 1024).cuda(1)
    set_target_gpu(0)
    net3 = torch.nn.Linear(1024, 1024).cuda(0)
```

Or you can invoke the profiler for a single statement via the `%mlrun` line
magic:

```python
import torch
from pytorch_memlab import profile, set_target_gpu
def func(input_size):
    net1 = torch.nn.Linear(input_size, 1024).cuda(0)
%mlrun -f func func(2048)
```

See `%mlrun?` for help on what arguments are supported. You can set the GPU
device to profile, dump profiling results to a file, and return the
`LineProfiler` object for post-profile inspection.

Find out more by checking out the [demo Jupyter notebook](./demo.ipynb)


### Memory Reporter

While the *Memory Profiler* only gives per-line memory usage information,
lower-level memory usage information can be obtained with the *Memory Reporter*.

The *Memory Reporter* iterates over all `Tensor` objects and inspects the
underlying `Storage` objects, reporting the actual memory usage rather than
the superficial `Tensor.size`.
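
A minimal sketch of that distinction, using plain torch rather than the reporter itself (`untyped_storage()` assumes PyTorch 2.x):

```python
import torch

# A (1, 1) view of a large tensor "looks" tiny by shape ...
base = torch.zeros(1024, 1024)                  # 1024*1024 float32 = 4 MiB
view = base[:1, :1]

print(view.shape)                               # torch.Size([1, 1])
print(view.nelement() * view.element_size())    # 4 bytes judging by shape
# ... but it keeps the whole 4 MiB Storage alive, which is what the
# reporter measures.
print(view.untyped_storage().nbytes())          # 4194304 bytes
```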

#### Sample

- A minimal one:

```python
import torch
from pytorch_memlab import MemReporter
linear = torch.nn.Linear(1024, 1024).cuda()
reporter = MemReporter()
reporter.report()
```
outputs:
```
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
Parameter0                                      (1024, 1024)     4.00M
Parameter1                                           (1024,)     4.00K
-------------------------------------------------------------------------------
Total Tensors: 1049600  Used Memory: 4.00M
The allocated memory on cuda:0: 4.00M
-------------------------------------------------------------------------------
```

- You can also pass in a model object for automatic name inference.

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
inp = torch.Tensor(512, 1024).cuda()
# pass in a model to automatically infer the tensor names
reporter = MemReporter(linear)
out = linear(inp).mean()
print('========= before backward =========')
reporter.report()
out.backward()
print('========= after backward =========')
reporter.report()
```

outputs:
```
========= before backward =========
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
weight                                          (1024, 1024)     4.00M
bias                                                 (1024,)     4.00K
Tensor0                                          (512, 1024)     2.00M
Tensor1                                                 (1,)   512.00B
-------------------------------------------------------------------------------
Total Tensors: 1573889  Used Memory: 6.00M
The allocated memory on cuda:0: 6.00M
-------------------------------------------------------------------------------
========= after backward =========
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
weight                                          (1024, 1024)     4.00M
weight.grad                                     (1024, 1024)     4.00M
bias                                                 (1024,)     4.00K
bias.grad                                            (1024,)     4.00K
Tensor0                                          (512, 1024)     2.00M
Tensor1                                                 (1,)   512.00B
-------------------------------------------------------------------------------
Total Tensors: 2623489  Used Memory: 10.01M
The allocated memory on cuda:0: 10.01M
-------------------------------------------------------------------------------
```


- The reporter automatically deals with parameters that share weights:

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
linear2 = torch.nn.Linear(1024, 1024).cuda()
linear2.weight = linear.weight
container = torch.nn.Sequential(
    linear, linear2
)
inp = torch.Tensor(512, 1024).cuda()
# pass in a model to automatically infer the tensor names

out = container(inp).mean()
out.backward()

# verbose shows how storage is shared across multiple Tensors
reporter = MemReporter(container)
reporter.report(verbose=True)
```

outputs:
```
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
0.weight                                        (1024, 1024)     4.00M
0.weight.grad                                   (1024, 1024)     4.00M
0.bias                                               (1024,)     4.00K
0.bias.grad                                          (1024,)     4.00K
1.bias                                               (1024,)     4.00K
1.bias.grad                                          (1024,)     4.00K
Tensor0                                          (512, 1024)     2.00M
Tensor1                                                 (1,)   512.00B
-------------------------------------------------------------------------------
Total Tensors: 2625537  Used Memory: 10.02M
The allocated memory on cuda:0: 10.02M
-------------------------------------------------------------------------------
```

- You can better understand the memory layout of a more complicated module:

```python
import torch
from pytorch_memlab import MemReporter

lstm = torch.nn.LSTM(1024, 1024).cuda()
reporter = MemReporter(lstm)
reporter.report(verbose=True)
inp = torch.Tensor(10, 10, 1024).cuda()
out, _ = lstm(inp)
out.mean().backward()
reporter.report(verbose=True)
```

As shown below, `(->)` indicates re-use of the same underlying storage.

outputs:
```
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
weight_ih_l0                                    (4096, 1024)    32.03M
weight_hh_l0(->weight_ih_l0)                    (4096, 1024)     0.00B
bias_ih_l0(->weight_ih_l0)                           (4096,)     0.00B
bias_hh_l0(->weight_ih_l0)                           (4096,)     0.00B
Tensor0                                       (10, 10, 1024)   400.00K
-------------------------------------------------------------------------------
Total Tensors: 8499200  Used Memory: 32.42M
The allocated memory on cuda:0: 32.52M
Memory differs due to the matrix alignment
-------------------------------------------------------------------------------
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
weight_ih_l0                                    (4096, 1024)    32.03M
weight_ih_l0.grad                               (4096, 1024)    32.03M
weight_hh_l0(->weight_ih_l0)                    (4096, 1024)     0.00B
weight_hh_l0.grad(->weight_ih_l0.grad)          (4096, 1024)     0.00B
bias_ih_l0(->weight_ih_l0)                           (4096,)     0.00B
bias_ih_l0.grad(->weight_ih_l0.grad)                 (4096,)     0.00B
bias_hh_l0(->weight_ih_l0)                           (4096,)     0.00B
bias_hh_l0.grad(->weight_ih_l0.grad)                 (4096,)     0.00B
Tensor0                                       (10, 10, 1024)   400.00K
Tensor1                                       (10, 10, 1024)   400.00K
Tensor2                                        (1, 10, 1024)    40.00K
Tensor3                                        (1, 10, 1024)    40.00K
-------------------------------------------------------------------------------
Total Tensors: 17018880         Used Memory: 64.92M
The allocated memory on cuda:0: 65.11M
Memory differs due to the matrix alignment
-------------------------------------------------------------------------------
```

NOTICE:
> When forwarding with grad mode enabled (`grad_mode=True`), pytorch maintains
> tensor buffers for the future back-propagation at the C++ level. These
> buffers are not visible from Python, so the reporter cannot track them. But
> if you store these intermediate results as python variables, then they will
> be reported.

- You can also filter the device to report on by passing extra arguments:
`report(device=torch.device(0))`

- A failing example due to pytorch's C-side tensor buffers

In the following example, a temporary buffer is created at `inp * (inp + 2)` to
store both `inp` and `inp + 2` for back-propagation; unfortunately python only
knows about the existence of `inp`, so we have *2M* of unaccounted-for memory,
the same size as Tensor `inp`.

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
inp = torch.Tensor(512, 1024).cuda()
# pass in a model to automatically infer the tensor names
reporter = MemReporter(linear)
out = linear(inp * (inp + 2)).mean()
reporter.report()
```

outputs:
```
Element type                                            Size  Used MEM
-------------------------------------------------------------------------------
Storage on cuda:0
weight                                          (1024, 1024)     4.00M
bias                                                 (1024,)     4.00K
Tensor0                                          (512, 1024)     2.00M
Tensor1                                                 (1,)   512.00B
-------------------------------------------------------------------------------
Total Tensors: 1573889  Used Memory: 6.00M
The allocated memory on cuda:0: 8.00M
Memory differs due to the matrix alignment or invisible gradient buffer tensors
-------------------------------------------------------------------------------
```
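
Following the NOTICE above, keeping an intermediate result in a python variable makes it visible to the reporter. A hedged sketch of the same example rewritten that way: the `inp + 2` buffer should now appear as a named tensor, though any buffer saved internally by `linear` may still be invisible.

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
inp = torch.Tensor(512, 1024).cuda()
reporter = MemReporter(linear)

# Holding the intermediate in a python variable turns the invisible
# C-level autograd buffer into a tensor the reporter can enumerate.
shifted = inp + 2
out = linear(inp * shifted).mean()
reporter.report()
```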


### Courtesy

Sometimes people would like to preempt your running task, but you don't want
to save a checkpoint and then reload it. Often all they actually need is the
GPU resources (CPU resources and CPU memory are usually spare in GPU clusters),
so instead of saving and loading checkpoints and bootstrapping from scratch,
you can move your whole workspace from GPU to CPU and halt the task until a
restart signal is triggered.

This feature is still under development, but you can already play with it:
```python
from pytorch_memlab import Courtesy

iamcourtesy = Courtesy()
for i in range(num_iteration):          # num_iteration: length of your own loop
    if something_happens:               # e.g. a preemption request from the cluster
        iamcourtesy.yield_memory()      # move all CUDA tensors to CPU memory
        wait_for_restart_signal()       # your own blocking wait
        iamcourtesy.restore()           # move everything back to the GPU
```

#### Known Issues

- As stated above in *Memory Reporter*, intermediate tensors are not covered
properly, so you may want to insert the courtesy logic after `backward` or
before `forward`.
- Currently pytorch's CUDA context takes about 1 GB of CUDA memory, which means
that even when all tensors are on the CPU, about 1 GB of CUDA memory is wasted,
:-(. It is still under investigation whether the context can be fully destroyed
and then re-initialized; the sketch after this list shows how to observe the
overhead.
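
A rough way to observe that fixed context overhead; the exact numbers depend on your GPU, driver and pytorch version:

```python
import torch

t = torch.zeros(1024, 1024, device='cuda')   # the first CUDA op creates the context
t = t.cpu()                                   # move the only tensor off the GPU
torch.cuda.empty_cache()                      # release cached allocator blocks

print(torch.cuda.memory_allocated())          # ~0: no live CUDA tensors
print(torch.cuda.memory_reserved())           # ~0: allocator cache released
# nvidia-smi will nevertheless still show several hundred MB held by the
# CUDA context itself, which cannot be released without tearing it down.
```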


### ACK

I suffered a lot debugging weird memory usage during my 3 years of developing
efficient Deep Learning models, and of course learned a lot from the great
open source community.

## CHANGES


##### 0.2.4 (2021-10-28)
  - Fix colab error (#35)
  - Support python3.8 (#38)
  - Support sparse tensor (#30)
##### 0.2.3 (2020-12-01)
  - Fix name mapping in `MemReporter` (#24)
  - Fix reporter without model input (#22 #25)
##### 0.2.2 (2020-10-23)
  - Fix memory leak in `MemReporter`
##### 0.2.1 (2020-06-18)
  - Fix `line_profiler` not found
##### 0.2.0 (2020-06-15)
  - Add jupyter notebook figure and ipython support
##### 0.1.0 (2020-04-17)
  - Add ipython magic support (#8)
##### 0.0.4 (2019-10-08)
  - Add gpu switch for line-profiler(#2)
  - Add device filter for reporter
##### 0.0.3 (2019-06-15)
  - Install dependency for pip installation
##### 0.0.2 (2019-06-04)
  - Fix statistics shift in loop
##### 0.0.1 (2019-05-28)
  - initial release



            
