torch-optimi

Name: torch-optimi
Version: 0.1.1
Summary: Fast, Modern, & Low Precision PyTorch Optimizers
Author: Benjamin Warner <me@benjaminwarner.dev>
Upload time: 2023-11-19 01:50:52
Requires Python: >=3.8
License: MIT (Copyright (c) 2023 Benjamin Warner)
Keywords: optimizers, pytorch, deep learning
Homepage: https://optimi.benjaminwarner.dev
Source: https://github.com/warner-benjamin/optimi
Bug Reports: https://github.com/warner-benjamin/optimi/issues

# optimī

### Fast, Modern, and Low Precision PyTorch Optimizers

optimi enables accurate low precision training via Kahan summation, supports fully decoupled weight decay, and features fast implementations of modern optimizers.

## Low Precision Training with Kahan Summation

optimi optimizers can match the performance of mixed precision when [training in BFloat16 by using Kahan summation](https://optimi.benjaminwarner.dev/kahan_summation).

Training in BFloat16 with Kahan summation can reduce non-activation training memory usage by [37.5 to 45.5 percent](https://optimi.benjaminwarner.dev/kahan_summation/#memory-savings) when using an Adam optimizer. BFloat16 training increases single GPU [training speed by ~10 percent](https://optimi.benjaminwarner.dev/kahan_summation/#training-speedup) at the same batch size.
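
The sketch below is a toy illustration of Kahan (compensated) summation in BFloat16, not optimi's actual implementation: a small compensation buffer carries the low-order bits that BFloat16 would otherwise round away, so repeated small updates are eventually applied instead of lost.

```python
import torch

# an update smaller than half of BFloat16's spacing near 1.0 (2**-7 ≈ 0.0078)
update = 1e-3

# naive BFloat16 accumulation: each addition rounds back to 1.0
p_naive = torch.tensor(1.0, dtype=torch.bfloat16)

# Kahan-style accumulation: a compensation buffer stores the bits BFloat16 drops
p_kahan = torch.tensor(1.0, dtype=torch.bfloat16)
comp = torch.zeros((), dtype=torch.bfloat16)

for _ in range(1000):
    p_naive += update  # rounds to the nearest BFloat16 value, losing the update

    u = update + comp.float()                        # re-apply previously lost bits
    new_p = (p_kahan.float() + u).to(torch.bfloat16)
    comp = (u - (new_p.float() - p_kahan.float())).to(torch.bfloat16)
    p_kahan = new_p

print(p_naive)  # ~1.0: the updates were rounded away
print(p_kahan)  # close to 2.0: the compensation buffer preserved them
```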

## Fully Decoupled Weight Decay

In addition to supporting PyTorch-style decoupled weight decay, optimi optimizers also support [fully decoupled weight decay](https://optimi.benjaminwarner.dev/fully_decoupled_weight_decay).

Fully decoupled weight decay decouples weight decay from the learning rate, more accurately following [*Decoupled Weight Decay Regularization*](https://arxiv.org/abs/1711.05101). This can help simplify hyperparameter tuning as the optimal weight decay is no longer tied to the learning rate.
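
As a rough illustration (hypothetical values, ignoring any learning rate schedule, and not optimi's exact implementation): PyTorch-style decoupled weight decay shrinks each parameter by roughly `lr * weight_decay` per step, while fully decoupled weight decay shrinks it by `weight_decay` alone, which is why the Usage examples below pass a much smaller value when `decouple_lr=True`.

```python
import torch

lr = 1e-3
p = torch.ones(4)

# PyTorch-style decoupled weight decay (AdamW): the decay applied each step
# is scaled by the learning rate, shrinking parameters by lr * weight_decay
weight_decay = 1e-2
p_pytorch_style = p - lr * weight_decay * p

# fully decoupled weight decay: the decay is independent of the learning rate,
# so the hyperparameter directly sets the per-step shrinkage (note the smaller
# value playing the same role: 1e-5 == 1e-3 * 1e-2)
fully_decoupled_weight_decay = 1e-5
p_fully_decoupled = p - fully_decoupled_weight_decay * p

assert torch.allclose(p_pytorch_style, p_fully_decoupled)
```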

## Foreach Implementations

All optimi optimizers have fast [foreach implementations](https://optimi.benjaminwarner.dev/foreach), which can significantly outperform the for-loop versions. optimi reuses the gradient buffer for temporary variables to reduce foreach memory usage.
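
For intuition, here is a minimal sketch using PyTorch's built-in `torch._foreach_*` ops rather than optimi's actual kernels: a foreach update applies the same arithmetic to a whole list of parameter tensors in one batched call instead of issuing one small op per tensor.

```python
import torch

params = [torch.randn(128, 128) for _ in range(10)]
grads = [torch.randn_like(p) for p in params]
lr = 1e-3

# for-loop style: one small in-place op per parameter tensor
params_loop = [p.clone() for p in params]
for p, g in zip(params_loop, grads):
    p.add_(g, alpha=-lr)

# foreach style: the same update issued once across the whole tensor list,
# amortizing per-op dispatch and kernel-launch overhead
params_foreach = [p.clone() for p in params]
torch._foreach_add_(params_foreach, grads, alpha=-lr)

assert all(torch.allclose(a, b) for a, b in zip(params_loop, params_foreach))
```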

## Documentation

https://optimi.benjaminwarner.dev

## Install

optimi is available to install from PyPI.

```bash
pip install torch-optimi
```

## Usage

To use an optimi optimizer with Kahan summation and fully decoupled weight decay:

```python
import torch
from torch import nn
from optimi import AdamW

# create or cast model in low precision (bfloat16)
model = nn.Linear(20, 1, dtype=torch.bfloat16)

# instantiate AdamW with parameters and fully decoupled weight decay
# Kahan summation is automatically enabled since model & inputs are bfloat16
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5, decouple_lr=True)

# forward and backward, casting input to bfloat16 if needed
loss = model(torch.randn(20, dtype=torch.bfloat16))
loss.backward()

# optimizer step
opt.step()
opt.zero_grad()
```

To use with PyTorch-style weight decay with float32 or mixed precision:

```python
# create model
model = nn.Linear(20, 1)

# instantiate AdamW with parameters
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```
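
For example, an optimi optimizer drops into a standard PyTorch automatic mixed precision loop unchanged. The sketch below uses the stock `torch.cuda.amp` APIs, assumes a CUDA device, and is not an optimi-specific recipe.

```python
import torch
from torch import nn
from optimi import AdamW

model = nn.Linear(20, 1).cuda()
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    # forward pass under autocast; gradients are scaled to avoid float16 underflow
    with torch.cuda.amp.autocast():
        loss = model(torch.randn(20, device="cuda")).sum()
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
    opt.zero_grad()
```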

## Difference from PyTorch

optimi optimizers do not support compilation or differentiation, and do not have capturable versions.

optimi's Adam optimizers do not support AMSGrad, and its SGD does not support Nesterov momentum. Optimizers that debias updates (the Adam optimizers and Adan) calculate the debias term per parameter group, not per parameter.
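
For instance, Adam's bias-correction terms depend only on the betas and the step count, so with a step counter shared by a parameter group they can be computed once and reused by every parameter in it. A toy sketch with hypothetical names, not optimi's code:

```python
beta1, beta2 = 0.9, 0.999
step = 10  # one step counter shared by the whole parameter group

# computed once per group...
bias_correction1 = 1 - beta1 ** step
bias_correction2 = 1 - beta2 ** step

# ...then reused for every parameter in that group:
#   m_hat = exp_avg / bias_correction1
#   v_hat = exp_avg_sq / bias_correction2
```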

## Optimizers

optimi implements the following optimizers:

* [Adam](https://optimi.benjaminwarner.dev/optimizers/adam)
* [AdamW](https://optimi.benjaminwarner.dev/optimizers/adamw)
* [Adan](https://optimi.benjaminwarner.dev/optimizers/adan)
* [Lion](https://optimi.benjaminwarner.dev/optimizers/lion)
* [SGD](https://optimi.benjaminwarner.dev/optimizers/sgd)
* [StableAdamW](https://optimi.benjaminwarner.dev/optimizers/stableadamw)

            
