toma

Name: toma
Version: 1.1.0
Summary: Write algorithms in PyTorch that adapt to the available (CUDA) memory
Home page: https://github.com/blackhc/toma
Author: Andreas @blackhc Kirsch
License: MIT
Keywords: tools, pytorch
Upload time: 2020-04-24 11:34:28
Requirements: No requirements were recorded.

# Torch Memory-adaptive Algorithms (TOMA)

[![Build Status](https://www.travis-ci.com/BlackHC/toma.svg?branch=master)](https://www.travis-ci.com/BlackHC/toma) [![codecov](https://codecov.io/gh/BlackHC/toma/branch/master/graph/badge.svg)](https://codecov.io/gh/BlackHC/toma) [![PyPI](https://img.shields.io/badge/PyPI-toma-blue.svg)](https://pypi.python.org/pypi/toma/)

A collection of helpers to make it easier to write code that adapts to the available (CUDA) memory.
Specifically, it retries code that fails due to OOM (out-of-memory) conditions and lowers batchsizes automatically. 

To avoid failing over repeatedly, a simple cache is implemented that memorizes the last successful batchsize for a given call and the available free memory at that point.

## Installation

To install using pip, use:

```
pip install toma
```

To run the tests, use:

```
python setup.py test
```

## Example

```python
from toma import toma

@toma.batch(initial_batchsize=512)
def run_inference(batchsize, model, dataset):
    # ...

run_inference(model, dataset)  # toma supplies `batchsize`, so it is not passed here
```

This will try to execute `run_inference` with `batchsize=512`. If a memory error is thrown, it will decrease the batchsize until it succeeds.

**Note:** 
This batch size can be different from the effective batch size used for gradient updates: gradients can be accumulated over several forward/backward passes, with `optimizer.step()` called only every so often.
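
For instance, a hedged sketch of this pattern (the `model`, `optimizer`, `dataset`, and `effective_batchsize` names are illustrative, not part of toma):

```python
import torch
from toma import toma

# Sketch: toma picks a batchsize that fits in memory, while gradients are
# accumulated so that the effective batch size for optimizer updates stays
# at `effective_batchsize`. All objects here are assumed to exist.
@toma.batch(initial_batchsize=512)
def train_epoch(batchsize, model, optimizer, dataset, effective_batchsize=2048):
    loader = torch.utils.data.DataLoader(dataset, batch_size=batchsize)
    steps_per_update = max(effective_batchsize // batchsize, 1)

    optimizer.zero_grad()
    for i, (inputs, targets) in enumerate(loader):
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        (loss / steps_per_update).backward()  # scale for accumulation
        if (i + 1) % steps_per_update == 0:
            optimizer.step()
            optimizer.zero_grad()

train_epoch(model, optimizer, dataset)  # batchsize is supplied by toma
```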

To make it easier to loop over ranges, there are also `toma.range` and `toma.chunked`:

```python
@toma.chunked(initial_step=512)
def compute_result(out: torch.Tensor, start: int, end: int):
    # ...

result = torch.empty((8192, ...))
compute_result(result)
```

This will chunk `result` and pass the chunks to `compute_result` one by one. 
Again, if it fails due to OOM, the step size will be halved, and so on.
Compared to `toma.batch`, this allows the step size to be reduced while already looping over the chunks, which can save computation.
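
For concreteness, a minimal sketch of what a `compute_result` body might look like (the `model` and `inputs` objects are illustrative; the chunk passed in is assumed to be a view into `result`, so writing into it in place fills the full tensor):

```python
import torch
from toma import toma

# Hypothetical model and inputs, for illustration only.
model = torch.nn.Linear(20, 10)
inputs = torch.randn(8192, 20)

@toma.chunked(initial_step=512)
def compute_result(out: torch.Tensor, start: int, end: int):
    # `out` is the chunk result[start:end]; writing into it in place
    # fills the corresponding rows of the full tensor.
    out[:] = model(inputs[start:end])

result = torch.empty((8192, 10))
compute_result(result)
```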

```python
@toma.range(initial_step=32)
def reduce_data(start: int, end: int, out: torch.Tensor, dataA: torch.Tensor, dataB: torch.Tensor):
    # ...

reduce_data(0, 1024, result, dataA, dataB)
``` 

`toma.range` iterates over `range(start, end, step)` with `step=initial_step`. If it fails due to OOM, it will lower the step size and continue.
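
Ignoring the OOM handling, the decorated call above behaves roughly like the following plain loop (a sketch of the semantics, not toma's actual implementation; `body` stands for the undecorated function):

```python
# Rough equivalent of reduce_data(0, 1024, result, dataA, dataB):
step = 32  # initial_step
start = 0
while start < 1024:
    end = min(start + step, 1024)
    body(start, end, result, dataA, dataB)  # the undecorated reduce_data
    start = end
    # On an OOM error, toma instead halves `step` and retries the failed chunk.
```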

### `toma.execute`

To make it easier to execute a block of code without first extracting it into a function and then calling it, we also provide `toma.execute.batch`, `toma.execute.range`, and `toma.execute.chunked`. These are somewhat unorthodox in that they call the function passed to them right away (mainly because Python has no support for anonymous functions beyond lambda expressions).

```python
def function():
    # ... other code

    @toma.execute.chunked(batched_data, initial_step=128)
    def compute(chunk, start, end):
        # ...
```
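
A hedged sketch of the batch variant (the `evaluate`, `model`, and `dataset` names are illustrative):

```python
import torch
from toma import toma

def evaluate(model, dataset):
    outputs = []

    # The decorated function is called immediately; toma supplies `batchsize`
    # and lowers it if an OOM error occurs.
    @toma.execute.batch(initial_batchsize=256)
    def run(batchsize):
        outputs.clear()  # start fresh if toma retries after an OOM
        loader = torch.utils.data.DataLoader(dataset, batch_size=batchsize)
        with torch.no_grad():
            for inputs, _ in loader:
                outputs.append(model(inputs).cpu())

    return torch.cat(outputs)
```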

## Cache

There are 3 available cache types at the moment. 
They can be changed by either setting `toma.DEFAULT_CACHE_TYPE` or by passing `cache_type` to the calls.

For example:
```python
@toma.batch(initial_batchsize=512, cache_type=toma.GlobalBatchsizeCache)
```
or
```python
toma.explicit.batch(..., toma_cache_type=toma.GlobalBatchsizeCache)
```
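or, to change the default globally, by setting `toma.DEFAULT_CACHE_TYPE` as mentioned above:
```python
from toma import toma

# All subsequent toma calls that do not pass an explicit cache_type use this cache.
toma.DEFAULT_CACHE_TYPE = toma.GlobalBatchsizeCache
```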

### `StacktraceMemoryBatchsizeCache`: Stacktrace & Available Memory (*the default*)

This memorizes the successful batchsizes for a given call trace and available memory at that point.
For most machine learning code, this is sufficient to remember the right batchsize without having to look at the actual arguments or understand more of their semantics.

The implicit assumption is that, after a few iterations, a stable state will be reached with regard to GPU and CPU memory usage.

To limit the CPU memory of the process, toma provides:
```python
import toma.cpu_memory

toma.cpu_memory.set_cpu_memory_limit(8)
```
This can also be useful to avoid accidental swap thrashing.

### `GlobalBatchsizeCache`: Global per Function

This reuses the last successful batchsize independently of where the call happened.

### `NoBatchsizeCache`: No Caching

Always starts with the suggested batchsize and fails over if necessary.

## Benchmark/Overhead

Toma adds overhead, so it should only be used to wrap operations that are themselves time- or memory-consuming.

```text
---------------------------------------------------------------------------------- benchmark: 5 tests ----------------------------------------------------------------------------------
Name (time in ms)          Min                Max               Mean            StdDev             Median                IQR            Outliers       OPS            Rounds  Iterations
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_native             2.1455 (1.0)       3.7733 (1.0)       2.3037 (1.0)      0.1103 (1.0)       2.2935 (1.0)       0.1302 (1.0)          81;5  434.0822 (1.0)         448           1
test_simple            17.4657 (8.14)     27.0049 (7.16)     21.0453 (9.14)     2.6233 (23.79)    20.4881 (8.93)      3.4384 (26.42)        13;0   47.5165 (0.11)         39           1
test_toma_no_cache     31.4380 (14.65)    40.8567 (10.83)    33.2749 (14.44)    2.2530 (20.43)    32.2698 (14.07)     2.8210 (21.67)         4;1   30.0527 (0.07)         25           1
test_explicit          33.0759 (15.42)    52.1866 (13.83)    39.6956 (17.23)    6.9620 (63.14)    38.4929 (16.78)    11.2344 (86.31)         4;0   25.1917 (0.06)         20           1
test_toma              36.9633 (17.23)    57.0220 (15.11)    43.5201 (18.89)    6.7318 (61.05)    41.6034 (18.14)     7.2173 (55.45)         2;2   22.9779 (0.05)         13           1
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```

## Thanks

Thanks to [@y0ast](https://github.com/y0ast) for feedback and discussion.



            
