locoprop

Name: locoprop
Version: 0.1.0
Summary: LocoProp implementation in PyTorch.
Author: Dimitri von Rütte
License: MIT
Upload time: 2023-01-15 13:33:21
Requirements: No requirements were recorded.
# LocoProp Torch

A PyTorch implementation of the paper "LocoProp: Enhancing BackProp via Local Loss Optimization".

Paper: https://proceedings.mlr.press/v151/amid22a/amid22a.pdf

Official code: https://github.com/google-research/google-research/blob/master/locoprop/locoprop_training.ipynb
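In brief (a paraphrase of the paper, not a description of this package's internals): LocoProp replaces the single end-to-end backprop update with one local problem per layer. For each layer it constructs a target activation from the backward signal, then updates that layer's weights by minimizing a matching loss toward the target plus a proximal term that keeps the weights near their current values, roughly as follows (notation approximate; the paper states this via Bregman divergences):

```latex
% Sketch of the per-layer local objective.
% W        : candidate weights of layer i
% W_i^{(t)}: current weights of layer i
% x_b      : layer input for example b,  t_b : local target activation
% \ell     : matching loss,  \lambda : step-size-like constant
L_i(W) = \sum_b \ell\big(t_b,\, f_i(W x_b)\big) + \frac{1}{2\lambda}\,\lVert W - W_i^{(t)} \rVert_F^2
```

Each local problem can then be solved with a few iterations of a first-order optimizer, independently for every layer.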

## Installation

```
pip install locoprop
```
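The package metadata records no requirements, so PyTorch itself is presumably not pulled in automatically and needs to be installed separately, e.g.:

```
pip install torch
```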

## Usage

```python
import torch.nn as nn

from locoprop import LocoLayer, LocopropTrainer

# The model must be an nn.Sequential, and each trainable layer
# must be wrapped in a LocoLayer.
# Example: a deep autoencoder
model = nn.Sequential(
    LocoLayer(nn.Linear(28*28, 1000), nn.Tanh()),
    LocoLayer(nn.Linear(1000, 500), nn.Tanh()),
    LocoLayer(nn.Linear(500, 250), nn.Tanh()),
    LocoLayer(nn.Linear(250, 30), nn.Tanh()),
    LocoLayer(nn.Linear(30, 250), nn.Tanh()),
    LocoLayer(nn.Linear(250, 500), nn.Tanh()),
    LocoLayer(nn.Linear(500, 1000), nn.Tanh()),
    LocoLayer(nn.Linear(1000, 28*28), nn.Sigmoid(), implicit=True),  # implicit: the activation is applied only during local optimization
)

# loss_fn maps model outputs and targets to a scalar loss
def loss_fn(logits, labels):
    ...

trainer = LocopropTrainer(model, loss_fn)

dl = get_dataloader()  # any iterable of (input, target) batches

for x, y in dl:
    trainer.step(x, y)
```
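To make the placeholders above concrete, here is a minimal sketch for the MNIST-sized autoencoder. The `loss_fn` and `get_dataloader` bodies below are illustrative assumptions (random tensors instead of a real dataset), not part of the `locoprop` package:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def loss_fn(logits, labels):
    # The last LocoLayer is implicit, so the model emits logits;
    # BCE-with-logits matches its Sigmoid output activation.
    return F.binary_cross_entropy_with_logits(logits, labels)

def get_dataloader(batch_size=64):
    # Stand-in data: random 28*28 "images". An autoencoder reconstructs
    # its input, so the targets are the inputs themselves.
    x = torch.rand(1024, 28 * 28)
    return DataLoader(TensorDataset(x, x), batch_size=batch_size, shuffle=True)
```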

            
