dropgrad-dingo-actual

Name: dropgrad-dingo-actual
Version: 0.1.0
Summary: A PyTorch implementation of DropGrad regularization.
Author: Ryan Taylor <ryan@beta-reduce.net>
Homepage: https://github.com/dingo-actual/dropgrad
Upload time: 2023-12-16 10:59:56
Requires Python: >=3.7
License: MIT License (Copyright (c) 2023 Ryan Taylor)
Keywords: neural networks, machine learning, pytorch
Requirements: none recorded

# DropGrad: A Simple Method for Regularization and Accelerated Optimization of Neural Networks

- [DropGrad: A Simple Method for Regularization and Accelerated Optimization of Neural Networks](#dropgrad-a-simple-method-for-regularization-and-accelerated-optimization-of-neural-networks)
  - [Installation](#installation)
    - [Requirements](#requirements)
    - [Using pip](#using-pip)
    - [Using git](#using-git)
  - [Usage](#usage)
    - [Basic Usage](#basic-usage)
    - [Use with Learning Rate Schedulers](#use-with-learning-rate-schedulers)
    - [Varying `drop_rate` per `Parameter`](#varying-drop_rate-per-parameter)

DropGrad is a regularization method for neural networks that works by randomly (and independently) setting gradient values to zero before an optimization step. Like Dropout, it has a single parameter, `drop_rate`: the probability of setting each gradient value to zero. To keep the gradient unbiased in expectation, the remaining gradient values are divided by `1.0 - drop_rate`.
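
Concretely, the operation applied to each parameter's gradient before the optimizer step amounts to something like the following. This is a minimal illustrative sketch, not the package's internal implementation; the `drop_grad_` helper and its signature are hypothetical:

```python
import torch


def drop_grad_(param: torch.nn.Parameter, drop_rate: float) -> None:
    """Sketch of the DropGrad operation for a single parameter's gradient."""
    if param.grad is None or drop_rate == 0.0:
        return
    # Zero each gradient entry independently with probability `drop_rate`...
    keep = (torch.rand_like(param.grad) >= drop_rate).to(param.grad.dtype)
    # ...and rescale the survivors by 1 / (1 - drop_rate) so the gradient
    # remains unbiased in expectation.
    param.grad.mul_(keep).div_(1.0 - drop_rate)
```

The `DropGrad` wrapper described below performs this operation for every optimized gradient when you call `.step()` on it.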

> To the best of my knowledge, DropGrad is an original contribution; however, I have no plans to publish a paper.
> If it is indeed an original method, please feel free to publish a paper about DropGrad. If you do, all I ask is
> that you mention me in your publication and cite this repository.

## Installation

The PyTorch implementation of DropGrad can be installed using pip or by cloning the GitHub repository.

### Requirements

The only requirement for DropGrad is PyTorch. Only PyTorch versions >= 2.0 have been tested, although DropGrad should be compatible with any version of PyTorch.

### Using pip

To install using pip:

```bash
pip install dropgrad
```

### Using git

```bash
git clone https://github.com/dingo-actual/dropgrad.git
cd dropgrad
python -m build
pip install dist/dropgrad-0.1.0-py3-none-any.whl
```

## Usage

### Basic Usage

To use DropGrad in your neural network optimization, import the `DropGrad` class and use it to wrap your optimizer.

```python
from dropgrad import DropGrad
```

Wrapping an optimizer is similar to using a learning rate scheduler:

```python
from torch.optim import Adam

opt_unwrapped = Adam(net.parameters(), lr=1e-3)
opt = DropGrad(opt_unwrapped, drop_rate=0.1)
```

During training, the wrapper handles applying DropGrad automatically. Simply call `.step()` on the wrapped optimizer to apply
DropGrad and take an optimization step, then call `.zero_grad()` to reset the gradients.

```python
opt.step()
opt.zero_grad()
```

### Use with Learning Rate Schedulers

If you use a learning rate scheduler as well as DropGrad, simply pass the base optimizer to both the DropGrad
wrapper and the learning rate scheduler:

```python
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

opt_unwrapped = Adam(net.parameters(), lr=1e-3)
lr_scheduler = CosineAnnealingLR(opt_unwrapped, T_max=100)
opt = DropGrad(opt_unwrapped, drop_rate=0.1)
```

During the training loop, call `.step()` on the DropGrad wrapper before calling `.step()` on the learning rate
scheduler, just as you would with an unwrapped optimizer:

```python
for epoch_n in range(n_epochs):
    for x_batch, y_batch in dataloader:
        pred_batch = net(x_batch)
        loss = loss_fn(pred_batch, y_batch)

        loss.backward()

        opt.step()
        opt.zero_grad()

    lr_scheduler.step()
```

### Varying `drop_rate` per `Parameter`

DropGrad allows you to set a different drop rate for each `Parameter` under optimization. To do this, pass a dictionary
mapping `Parameter`s to drop rates when constructing the DropGrad wrapper, as in the example below. Any optimized
`Parameter` not present in the dictionary falls back to the drop rate given by the `drop_rate` argument (if
`drop_rate=None`, DropGrad simply isn't applied to `Parameter`s that are not in the dictionary).

The example below applies a `drop_rate` of 0.1 to all optimized weights and a `drop_rate` of 0.01 to all optimized biases,
with no DropGrad applied to any other optimized `Parameter`s:

```python
from torch.optim import Adam

drop_rate_weights = 0.1
drop_rate_biases = 0.01

params_weights = [p for name, p in net.named_parameters() if p.requires_grad and 'weight' in name]
params_biases = [p for name, p in net.named_parameters() if p.requires_grad and 'bias' in name]

param_drop_rates = {p: drop_rate_weights for p in params_weights}
param_drop_rates.update({p: drop_rate_biases for p in params_biases})

opt_unwrapped = Adam(net.parameters(), lr=1e-3)
opt = DropGrad(opt_unwrapped, drop_rate=None, params=param_drop_rates)
```

            
