torch-max-mem

Name: torch-max-mem
Version: 0.1.3
Home page: https://github.com/mberr/torch-max-mem
Summary: Maximize memory utilization with PyTorch.
Upload time: 2023-09-23 14:36:29
Author / Maintainer: Max Berrendorf
Requires Python: >=3.8
License: MIT
Keywords: snekpack, cookiecutter, torch
Docs URL: none
Requirements: no requirements were recorded
            <!--
<p align="center">
  <img src="https://github.com/mberr/torch-max-mem/raw/main/docs/source/logo.png" height="150">
</p>
-->

<h1 align="center">
  torch-max-mem
</h1>

<p align="center">
    <a href="https://github.com/mberr/torch-max-mem/actions?query=workflow%3ATests">
        <img alt="Tests" src="https://github.com/mberr/torch-max-mem/workflows/Tests/badge.svg" />
    </a>
    <a href="https://github.com/cthoyt/cookiecutter-python-package">
        <img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /> 
    </a>
    <a href="https://pypi.org/project/torch_max_mem">
        <img alt="PyPI" src="https://img.shields.io/pypi/v/torch_max_mem" />
    </a>
    <a href="https://pypi.org/project/torch_max_mem">
        <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/torch_max_mem" />
    </a>
    <a href="https://github.com/mberr/torch-max-mem/blob/main/LICENSE">
        <img alt="PyPI - License" src="https://img.shields.io/pypi/l/torch_max_mem" />
    </a>
    <a href='https://torch_max_mem.readthedocs.io/en/latest/?badge=latest'>
        <img src='https://readthedocs.org/projects/torch_max_mem/badge/?version=latest' alt='Documentation Status' />
    </a>
    <a href='https://github.com/psf/black'>
        <img src='https://img.shields.io/badge/code%20style-black-000000.svg' alt='Code style: black' />
    </a>
</p>

This package provides decorators for maximizing memory utilization with PyTorch and CUDA: starting from a maximum parameter size (e.g., a batch size), the parameter is successively halved until the computation no longer raises an out-of-memory exception.
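The successive-halving strategy can be illustrated with a small, framework-free sketch. Note that the names `successive_halving` and `fake_compute` are hypothetical illustrations, not part of the `torch_max_mem` API, and the real decorator catches CUDA out-of-memory errors rather than `MemoryError`:

```python
import functools


def successive_halving(func):
    """Sketch of the idea: retry ``func``, halving ``batch_size`` on OOM."""

    @functools.wraps(func)
    def wrapper(*args, batch_size, **kwargs):
        while batch_size >= 1:
            try:
                return func(*args, batch_size=batch_size, **kwargs)
            except MemoryError:  # torch_max_mem catches CUDA OOM errors instead
                batch_size //= 2
        raise MemoryError("even batch_size=1 does not fit into memory")

    return wrapper


@successive_halving
def fake_compute(batch_size):
    # pretend that anything above 4 exceeds device memory
    if batch_size > 4:
        raise MemoryError
    return batch_size


print(fake_compute(batch_size=32))  # falls back 32 -> 16 -> 8 -> 4, prints 4
```

Because the halving happens inside the wrapper, callers can always request the largest batch size they would like, and the largest size that actually fits is used.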

## 💪 Getting Started

Assume you have a function for batched computation of nearest neighbors using brute-force distance calculation.

```python
import torch

def knn(x, y, batch_size, k: int = 3):
    # compute the k nearest neighbors of each row of x among the rows of y,
    # processing x in chunks of batch_size rows
    return torch.cat(
        [
            torch.cdist(x[start : start + batch_size], y).topk(k=k, dim=1, largest=False).indices
            for start in range(0, x.shape[0], batch_size)
        ],
        dim=0,
    )
```

With `torch_max_mem`, you can decorate this function so that the batch size is automatically reduced until the computation no longer runs out of memory.

```python
import torch
from torch_max_mem import maximize_memory_utilization


@maximize_memory_utilization()
def knn(x, y, batch_size, k: int = 3):
    return torch.cat(
        [
            torch.cdist(x[start : start + batch_size], y).topk(k=k, dim=1, largest=False).indices
            for start in range(0, x.shape[0], batch_size)
        ],
        dim=0,
    )
```

In calling code, you can now always pass the largest sensible batch size, e.g.,

```python
x = torch.rand(100, 100, device="cuda")
y = torch.rand(200, 100, device="cuda")
knn(x, y, batch_size=x.shape[0])
```

## 🚀 Installation

The most recent release can be installed from
[PyPI](https://pypi.org/project/torch_max_mem/) with:

```bash
$ pip install torch_max_mem
```

The most recent code can be installed directly from GitHub with:

```bash
$ pip install git+https://github.com/mberr/torch-max-mem.git
```

To install in development mode, use the following:

```bash
$ git clone https://github.com/mberr/torch-max-mem.git
$ cd torch-max-mem
$ pip install -e .
```

## 👐 Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See
[CONTRIBUTING.md](https://github.com/mberr/torch-max-mem/blob/master/CONTRIBUTING.md) for more information on getting involved.

## 👋 Attribution

Parts of the logic have been developed with [Laurent Vermue](https://github.com/lvermue) for [PyKEEN](https://github.com/pykeen/pykeen).


### ⚖️ License

The code in this package is licensed under the MIT License.

### 🍪 Cookiecutter

This package was created with [@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using [@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack) template.

## 🛠️ For Developers

<details>
  <summary>See developer instructions</summary>

  
This final section of the README is for those who want to get involved by making a code contribution.

### 🥼 Testing

After cloning the repository and installing `tox` with `pip install tox`, the unit tests in the `tests/` folder can be
run reproducibly with:

```shell
$ tox
```

Additionally, these tests are automatically re-run with each commit in a [GitHub Action](https://github.com/mberr/torch-max-mem/actions?query=workflow%3ATests).

### 📖 Building the Documentation

```shell
$ tox -e docs
``` 

### 📦 Making a Release

After installing the package in development mode and installing
`tox` with `pip install tox`, the commands for making a new release are contained within the `finish` environment
in `tox.ini`. Run the following from the shell:

```shell
$ tox -e finish
```

This script does the following:

1. Uses [Bump2Version](https://github.com/c4urself/bump2version) to switch the version number in the `setup.cfg` and
   `src/torch_max_mem/version.py` to not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel
3. Uploads to PyPI using `twine`. Be sure to have a `.pypirc` file configured to avoid the need for manual input at this
   step
4. Pushes to GitHub. You'll need to create a GitHub release for the commit where the version was bumped.
5. Bumps the version to the next patch. If you made big changes and want to bump the minor version instead, you can
   use `tox -e bumpversion minor` afterwards.
</details>

            
