ptflops

Name: ptflops
Version: 0.7.4
Summary: Flops counter for neural networks in pytorch framework
Author: Vladislav Sovrasov
Homepage: https://github.com/sovrasov/flops-counter.pytorch/
Requires Python: >=3.9
License: MIT License (Copyright (c) 2019 Vladislav Sovrasov)
Keywords: pytorch, cnn, transformer
Upload time: 2024-09-27 18:45:32
# Flops counting tool for neural networks in pytorch framework
[![Pypi version](https://img.shields.io/pypi/v/ptflops.svg)](https://pypi.org/project/ptflops/)

This tool is designed to compute the theoretical amount of multiply-add operations
in neural networks. It can also compute the number of parameters and
print per-layer computational cost of a given network.

`ptflops` has two backends, `pytorch` and `aten`. The `pytorch` backend is the legacy one; it considers `nn.Module` instances only. However,
it is still useful, since it provides better per-layer analytics for CNNs. In all other cases it is recommended to use the
`aten` backend, which considers aten operations and therefore covers more model architectures (including transformers).
The default backend is `aten`. Please do not use the `pytorch` backend for transformer architectures.

## `aten` backend
### Operations considered:
- aten.mm, aten.matmul, aten.addmm, aten.bmm
- aten.convolution

### Usage tips
- Use `verbose=True` to see the operations which were not considered during complexity computation.
- This backend prints per-module statistics only for modules directly nested in the root `nn.Module`;
deeper modules at the second level of nesting are not shown in the per-layer statistics.
- The `ignore_modules` option forces `ptflops` to ignore the listed modules. This can be useful
for research purposes. For instance, one can drop all convolutions from the counting process by
specifying `ignore_modules=[torch.ops.aten.convolution, torch.ops.aten._convolution]`, as in the sketch below.
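
A minimal sketch combining the two tips above; the parameter names follow the Example section further down, and the resulting MACs will differ from the benchmark table because convolutions are excluded:

```python
import torch
import torchvision.models as models

from ptflops import get_model_complexity_info

# Count complexity with the `aten` backend while ignoring all convolution ops.
# `verbose=True` additionally reports aten ops that were not counted.
net = models.resnet18()
macs, params = get_model_complexity_info(
    net, (3, 224, 224),
    backend='aten',
    as_strings=True,
    print_per_layer_stat=False,
    verbose=True,
    ignore_modules=[torch.ops.aten.convolution, torch.ops.aten._convolution],
)
print(f'MACs without convolutions: {macs}, parameters: {params}')
```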

## `pytorch` backend
### Supported layers:
- Conv1d/2d/3d (including grouping)
- ConvTranspose1d/2d/3d (including grouping)
- BatchNorm1d/2d/3d, GroupNorm, InstanceNorm1d/2d/3d, LayerNorm
- Activations (ReLU, PReLU, ELU, ReLU6, LeakyReLU, GELU)
- Linear
- Upsample
- Poolings (AvgPool1d/2d/3d, MaxPool1d/2d/3d and adaptive ones)

Experimental support:
- RNN, LSTM, GRU (NLH layout is assumed)
- RNNCell, LSTMCell, GRUCell
- torch.nn.MultiheadAttention
- torchvision.ops.DeformConv2d
- visual transformers from [timm](https://github.com/huggingface/pytorch-image-models)

### Usage tips

- This backend doesn't take into account some of the `torch.nn.functional.*` and `tensor.*` operations, so unsupported
operations do not contribute to the final complexity estimation. See `FUNCTIONAL_MAPPING` and `TENSOR_OPS_MAPPING` in
`ptflops/pytorch_ops.py` for the list of supported ops. Sometimes counting functional-style calls conflicts with the hooks
registered for an `nn.Module` (for instance, custom ones). In that case, counting of these ops can be disabled by
passing `backend_specific_config={"count_functional" : False}`.
- `ptflops` launches a given model on a random tensor and estimates the amount of computations during inference. Complicated models can have several inputs, some of which may be optional. To construct a non-trivial input, one can use the `input_constructor` argument of `get_model_complexity_info`. `input_constructor` is a function that takes the input spatial resolution as a tuple and returns a dict with the named input arguments of the model. This dict is then passed to the model as keyword arguments; see the sketch after this list.
- The `verbose` parameter allows getting information about modules that don't contribute to the final numbers.
- The `ignore_modules` option forces `ptflops` to ignore the listed modules. This can be useful
for research purposes. For instance, one can drop all convolutions from the counting process by
specifying `ignore_modules=[torch.nn.Conv2d]`.
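
A minimal sketch of `input_constructor`, assuming a single-input model whose `forward` accepts a keyword argument named `x` (true for torchvision's ResNet); a model with several inputs would return all of them in the dict:

```python
import torch
import torchvision.models as models

from ptflops import get_model_complexity_info

def build_input(resolution):
    # ptflops passes the resolution tuple given to get_model_complexity_info;
    # the returned dict is forwarded to the model as keyword arguments.
    return {'x': torch.randn(1, *resolution)}

net = models.resnet18()
macs, params = get_model_complexity_info(
    net, (3, 224, 224),
    backend='pytorch',
    input_constructor=build_input,
    as_strings=True,
    print_per_layer_stat=False,
)
print(f'MACs: {macs}, parameters: {params}')
```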

## Requirements
PyTorch >= 2.0. Use `pip install ptflops==0.7.2.2` to work with torch 1.x.

## Install the latest version
From PyPI:
```bash
pip install ptflops
```

From this repository:
```bash
pip install --upgrade git+https://github.com/sovrasov/flops-counter.pytorch.git
```

## Example
```python
import torchvision.models as models
import torch
from ptflops import get_model_complexity_info

with torch.cuda.device(0):
  net = models.densenet161()
  macs, params = get_model_complexity_info(net, (3, 224, 224), as_strings=True, backend='pytorch',
                                           print_per_layer_stat=True, verbose=True)
  print('{:<30}  {:<8}'.format('Computational complexity: ', macs))
  print('{:<30}  {:<8}'.format('Number of parameters: ', params))

  macs, params = get_model_complexity_info(net, (3, 224, 224), as_strings=True, backend='aten',
                                           print_per_layer_stat=True, verbose=True)
  print('{:<30}  {:<8}'.format('Computational complexity: ', macs))
  print('{:<30}  {:<8}'.format('Number of parameters: ', params))
```
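
Continuing the example above, if raw numbers are preferred over formatted strings, passing `as_strings=False` should return plain numeric values (an assumption based on the parameter name rather than a documented guarantee):

```python
# Numeric output instead of formatted strings; dividing by 1e9 / 1e6 should
# reproduce the GMac / M figures printed above (assuming raw counts are returned).
macs, params = get_model_complexity_info(net, (3, 224, 224), as_strings=False,
                                         backend='aten', print_per_layer_stat=False)
print(f'{macs / 1e9:.2f} GMac, {params / 1e6:.2f} M params')
```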

## Citation
If ptflops was useful for your paper or tech report, please cite me:
```
@online{ptflops,
  author = {Vladislav Sovrasov},
  title = {ptflops: a flops counting tool for neural networks in pytorch framework},
  year = 2018-2024,
  url = {https://github.com/sovrasov/flops-counter.pytorch},
}
```

## Credits

Thanks to @warmspringwinds and Horace He for the initial version of the script.

## Benchmark

### [torchvision](https://pytorch.org/vision/0.16/models.html)

Model                  | Input Resolution | Params(M) | MACs(G) (`pytorch`) | MACs(G) (`aten`)
---                    |---               |---        |---                  |---
alexnet                | 224x224          | 61.10     | 0.72                | 0.71
convnext_base          | 224x224          | 88.59     | 15.43               | 15.38
densenet121            | 224x224          | 7.98      | 2.90                |
efficientnet_b0        | 224x224          | 5.29      | 0.41                |
efficientnet_v2_m      | 224x224          | 54.14     | 5.43                |
googlenet              | 224x224          | 13.00     | 1.51                |
inception_v3           | 224x224          | 27.16     | 5.75                | 5.71
maxvit_t               | 224x224          | 30.92     | 5.48                |
mnasnet1_0             | 224x224          | 4.38      | 0.33                |
mobilenet_v2           | 224x224          | 3.50      | 0.32                |
mobilenet_v3_large     | 224x224          | 5.48      | 0.23                |
regnet_y_1_6gf         | 224x224          | 11.20     | 1.65                |
resnet18               | 224x224          | 11.69     | 1.83                | 1.81
resnet50               | 224x224          | 25.56     | 4.13                | 4.09
resnext50_32x4d        | 224x224          | 25.03     | 4.29                |
shufflenet_v2_x1_0     | 224x224          | 2.28      | 0.15                |
squeezenet1_0          | 224x224          | 1.25      | 0.84                | 0.82
vgg16                  | 224x224          | 138.36    | 15.52               | 15.48
vit_b_16               | 224x224          | 86.57     | 17.61 (wrong)       | 16.86
wide_resnet50_2        | 224x224          | 68.88     | 11.45               |


### [timm](https://github.com/huggingface/pytorch-image-models)

Model                  | Input Resolution | Params(M) | MACs(G)
---                    |---               |---        |---
