# torchfunc-nightly

- **Version:** 1627347017 (uploaded 2021-07-27 00:50:25)
- **Summary:** PyTorch functions to improve performance, analyse models and make your life easier.
- **Home page:** https://github.com/szymonmaszke/torchfunc
- **Author:** Szymon Maszke
- **Requires Python:** >=3.6
- **License:** MIT
- **Keywords:** pytorch, torch, functions, performance, visualize, utils, utilities, recording

<img align="left" width="256" height="256" src="https://github.com/szymonmaszke/torchfunc/blob/master/assets/logos/medium.png">

* Improve and analyse performance of your neural network (e.g. Tensor Cores compatibility)
* Record/analyse internal state of `torch.nn.Module` as data passes through it
* Do the above based on external conditions (using single `Callable` to specify it)
* Day-to-day neural network related duties (model size, seeding, time measurements etc.)
* Get information about your host operating system, `torch.nn.Module` device, CUDA
capabilities etc.


| Version | Docs | Tests | Coverage | Style | PyPI | Python | PyTorch | Docker | Roadmap |
|---------|------|-------|----------|-------|------|--------|---------|--------|---------|
| [![Version](https://img.shields.io/static/v1?label=&message=0.2.0&color=377EF0&style=for-the-badge)](https://github.com/szymonmaszke/torchfunc/releases) | [![Documentation](https://img.shields.io/static/v1?label=&message=docs&color=EE4C2C&style=for-the-badge)](https://szymonmaszke.github.io/torchfunc/)  | ![Tests](https://github.com/szymonmaszke/torchfunc/workflows/test/badge.svg) | ![Coverage](https://img.shields.io/codecov/c/github/szymonmaszke/torchfunc?label=%20&logo=codecov&style=for-the-badge) | [![codebeat](https://img.shields.io/static/v1?label=&message=CB&color=27A8E0&style=for-the-badge)](https://codebeat.co/projects/github-com-szymonmaszke-torchfunc-master) | [![PyPI](https://img.shields.io/static/v1?label=&message=PyPI&color=377EF0&style=for-the-badge)](https://pypi.org/project/torchfunc/) | [![Python](https://img.shields.io/static/v1?label=&message=3.6&color=377EF0&style=for-the-badge&logo=python&logoColor=F8C63D)](https://www.python.org/) | [![PyTorch](https://img.shields.io/static/v1?label=&message=>=1.2.0&color=EE4C2C&style=for-the-badge)](https://pytorch.org/) | [![Docker](https://img.shields.io/static/v1?label=&message=docker&color=309cef&style=for-the-badge)](https://hub.docker.com/r/szymonmaszke/torchfunc) | [![Roadmap](https://img.shields.io/static/v1?label=&message=roadmap&color=009688&style=for-the-badge)](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md) |

# :bulb: Examples

__Check documentation here:__ [https://szymonmaszke.github.io/torchfunc](https://szymonmaszke.github.io/torchfunc)

## 1. Getting performance tips

- __Get instant performance tips about your module. All of the problems flagged by the comments
below will be reported by `torchfunc.performance.tips`:__

```python
import torch
import torchfunc


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.convolution = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, 3),
            torch.nn.ReLU(inplace=True),  # Inplace may harm kernel fusion
            torch.nn.Conv2d(32, 128, 3, groups=32),  # Depthwise is slower in PyTorch
            torch.nn.ReLU(inplace=True),  # Same as before
            torch.nn.Conv2d(128, 250, 3),  # Wrong output size for TensorCores
        )

        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(250, 64),  # Wrong input size for TensorCores
            torch.nn.ReLU(),  # Fine, no info about this layer
            torch.nn.Linear(64, 10),  # Wrong output size for TensorCores
        )

    def forward(self, inputs):
        convolved = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs)).flatten(start_dim=1)  # Keep the batch dimension
        return self.classifier(convolved)

# All you have to do
print(torchfunc.performance.tips(Model()))
```
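
The comments in the example follow the usual Tensor Cores guideline: with fp16/mixed precision, channel and feature dimensions should be multiples of 8. As a minimal sketch (illustrative sizes, not something `torchfunc` prescribes), the flagged layers could be padded to friendlier sizes:

```python
import torch

# Illustrative sizes only: feature dimensions padded to multiples of 8,
# the common guideline for Tensor Cores (fp16) kernels.
convolution = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, 3),
    torch.nn.ReLU(),                         # no inplace, so kernel fusion is not hindered
    torch.nn.Conv2d(32, 128, 3, groups=32),
    torch.nn.ReLU(),
    torch.nn.Conv2d(128, 256, 3),            # 256 instead of 250 (multiple of 8)
)

classifier = torch.nn.Sequential(
    torch.nn.Linear(256, 64),                # input size matches the padded channel count
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),                 # 10 outputs are dictated by the task itself
)
```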

## 2. Seeding, weight freezing and others

- __Seed globally (including `numpy` and `cuda`), freeze weights, and check inference time and model size:__

```python
import torch
import torchfunc

# MNIST again, but you can use any module with these functions
model = torch.nn.Linear(784, 10)
torchfunc.seed(0)
frozen = torchfunc.module.freeze(model, bias=False)

with torchfunc.Timer() as timer:
    frozen(torch.randn(32, 784))
    print(timer.checkpoint())  # Time since the beginning
    frozen(torch.randn(128, 784))
    print(timer.checkpoint())  # Time since the last checkpoint

print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
```
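
Continuing the snippet above, a quick way to sanity-check the freezing step is to inspect `requires_grad` on the parameters. This assumes `freeze` works by toggling gradient tracking (a common approach); the exact meaning of the `bias` argument is described in the documentation:

```python
# Assumption: freeze() disables gradient tracking on the frozen parameters.
for name, parameter in frozen.named_parameters():
    print(name, parameter.requires_grad)
```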

## 3. Record `torch.nn.Module` internal state

- __Record and sum per-layer activation statistics as data passes through the network:__

```python
import torch
import torchfunc

# Still MNIST, but any module can be put in its place
model = torch.nn.Sequential(
    torch.nn.Linear(784, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 50),
    torch.nn.ReLU(),
    torch.nn.Linear(50, 10),
)
# Recorder which sums all inputs to layers
recorder = torchfunc.hooks.recorders.ForwardPre(reduction=lambda x, y: x+y)
# Record only for torch.nn.Linear
recorder.children(model, types=(torch.nn.Linear,))
# Train your network normally (or pass data through it)
...
# Activations of all neurons of the first layer
print(recorder[1])  # You can also post-process this data easily with apply
```

For other examples (and how to use `condition`), see the [documentation](https://szymonmaszke.github.io/torchfunc/).
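
As a rough idea of what conditional recording might look like, here is a hypothetical sketch. The `condition` keyword and its call signature are assumptions about the recorder API (the feature list above only says a single `Callable` controls it), so check the documentation for the exact name and semantics:

```python
import torch
import torchfunc

model = torch.nn.Sequential(
    torch.nn.Linear(784, 100), torch.nn.ReLU(), torch.nn.Linear(100, 10)
)

# Hypothetical: record only while this flag is set. The `condition` keyword
# below is an assumption, not a documented signature.
recording_enabled = {"value": False}

recorder = torchfunc.hooks.recorders.ForwardPre(
    condition=lambda *args: recording_enabled["value"]
)
recorder.children(model, types=(torch.nn.Linear,))

recording_enabled["value"] = True    # e.g. turn recording on only during evaluation
model(torch.randn(16, 784))
recording_enabled["value"] = False
```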

# :wrench: Installation

## :snake: [pip](<https://pypi.org/project/torchfunc/>)

### Latest release:

```shell
pip install --user torchfunc
```

### Nightly:

```shell
pip install --user torchfunc-nightly
```

## :whale2: [Docker](https://hub.docker.com/r/szymonmaszke/torchfunc)

__CPU standalone__ and various versions of __GPU-enabled__ images are available
at [dockerhub](https://hub.docker.com/r/szymonmaszke/torchfunc/tags).

For CPU quickstart, issue:

```shell
docker pull szymonmaszke/torchfunc:18.04
```

Nightly builds are also available; just prefix the tag with `nightly_`. If you are going for a GPU image, make sure you have
[nvidia/docker](https://github.com/NVIDIA/nvidia-docker) installed and its runtime set.

# :question: Contributing

If you find any issue or you think some functionality may be useful to others and fits this library, please [open a new issue](https://help.github.com/en/articles/creating-an-issue) or [create a pull request](https://help.github.com/en/articles/creating-a-pull-request-from-a-fork).

To get an overview of things one can do to help this project, see [Roadmap](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md).



            
