<pre align=center style='color:green'>
    ______ __            __  __
   / __/ /___  ____    / /_/ /_
  / /_/ / __ \/ __ \  / __/ __ \
 / __/ / /_/ / /_/ /  / /_/ / / /
/_/ /_/\____/ .___/   \__/_/ /_/
           /_/
</pre>
# flopth
A simple tool to calculate and visualize the FLOPs and parameters of PyTorch models, with both a CLI and a Python API.
# Features
- Handy CLI command to show FLOPs and parameters quickly
- Visualization of each layer's share of FLOPs and parameters
- Support for multiple inputs in a model's `forward` function
- Support for both CPU and GPU modes
- Support for TorchScript models (only parameters are shown)
- Support for Python 3.5 and above
# Installation
Install the stable version of flopth from PyPI:
```bash
pip install flopth
```
or install the latest version from GitHub:
```bash
pip install -U git+https://github.com/vra/flopth.git
```
# Usage examples
## CLI command
After installation, flopth provides the CLI command `flopth`. You can use it to inspect PyTorch models quickly.
### Running on models in torchvision.models
With `flopth -m <model_name>`, flopth shows all information about `<model_name>`: the input shape, output shape, parameters and FLOPs of each layer, as well as the total FLOPs and parameters.
Here is an example running on alexnet (the default input size is (3, 224, 224)):
```plain
$ flopth -m alexnet
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |
+===============+===================+=============+=============+==========+==================+================================+==========+=================+=====================+
| features.0 | Conv2d | (3,224,224) | (64,55,55) | 23.296K | 0.0381271% | | 70.4704M | 9.84839% | #### |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.1 | ReLU | (64,55,55) | (64,55,55) | 0.0 | 0.0% | | 193.6K | 0.027056% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.2 | MaxPool2d | (64,55,55) | (64,27,27) | 0.0 | 0.0% | | 193.6K | 0.027056% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.3 | Conv2d | (64,27,27) | (192,27,27) | 307.392K | 0.50309% | | 224.089M | 31.3169% | ############### |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.4 | ReLU | (192,27,27) | (192,27,27) | 0.0 | 0.0% | | 139.968K | 0.0195608% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.5 | MaxPool2d | (192,27,27) | (192,13,13) | 0.0 | 0.0% | | 139.968K | 0.0195608% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.6 | Conv2d | (192,13,13) | (384,13,13) | 663.936K | 1.08662% | | 112.205M | 15.6809% | ####### |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.7 | ReLU | (384,13,13) | (384,13,13) | 0.0 | 0.0% | | 64.896K | 0.00906935% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.8 | Conv2d | (384,13,13) | (256,13,13) | 884.992K | 1.44841% | | 149.564M | 20.9018% | ########## |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.9 | ReLU | (256,13,13) | (256,13,13) | 0.0 | 0.0% | | 43.264K | 0.00604624% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.10 | Conv2d | (256,13,13) | (256,13,13) | 590.08K | 0.965748% | | 99.7235M | 13.9366% | ###### |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.11 | ReLU | (256,13,13) | (256,13,13) | 0.0 | 0.0% | | 43.264K | 0.00604624% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| features.12 | MaxPool2d | (256,13,13) | (256,6,6) | 0.0 | 0.0% | | 43.264K | 0.00604624% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| avgpool | AdaptiveAvgPool2d | (256,6,6) | (256,6,6) | 0.0 | 0.0% | | 9.216K | 0.00128796% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.0 | Dropout | (9216) | (9216) | 0.0 | 0.0% | | 0.0 | 0.0% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.1 | Linear | (9216) | (4096) | 37.7528M | 61.7877% | ############################## | 37.7487M | 5.27547% | ## |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.2 | ReLU | (4096) | (4096) | 0.0 | 0.0% | | 4.096K | 0.000572425% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.3 | Dropout | (4096) | (4096) | 0.0 | 0.0% | | 0.0 | 0.0% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.4 | Linear | (4096) | (4096) | 16.7813M | 27.4649% | ############# | 16.7772M | 2.34465% | # |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.5 | ReLU | (4096) | (4096) | 0.0 | 0.0% | | 4.096K | 0.000572425% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
| classifier.6 | Linear | (4096) | (1000) | 4.097M | 6.70531% | ### | 4.096M | 0.572425% | |
+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+
FLOPs: 715.553M
Params: 61.1008M
```
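As a quick sanity check, the numbers in the first row can be reproduced by hand. AlexNet's first layer is `Conv2d(3, 64, kernel_size=11, stride=4, padding=2)`, and the arithmetic below assumes flopth counts one operation per kernel weight plus one for the bias, per output element:

```python
# features.0 of alexnet: Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
# producing a (64, 55, 55) output.
in_channels, out_channels, k = 3, 64, 11

# Parameters: one weight per kernel element, plus one bias per filter.
params = out_channels * in_channels * k * k + out_channels
print(params)  # 23296, shown above as 23.296K

# FLOPs: one op per kernel weight plus one for the bias,
# for each of the 64 x 55 x 55 output elements.
out_elements = out_channels * 55 * 55
flops = out_elements * (in_channels * k * k + 1)
print(flops)  # 70470400, shown above as 70.4704M
```

Both values match the `features.0` row of the table above.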
### Running on custom models
Given a model name and the path of the file where the model is defined, flopth can also output the model information.
For the dummy network `MyModel` defined in `/tmp/my_model.py`,
```python
# file path: /tmp/my_model.py
# model name: MyModel
import torch.nn as nn


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x1):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x1 = self.conv4(x1)
        return x1
```
You can use `flopth -m MyModel -p /tmp/my_model.py -i 3 224 224` to print the model information:
```plain
$ flopth -m MyModel -p /tmp/my_model.py -i 3 224 224
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |
+===============+===============+=============+=============+==========+==================+======================+==========+=================+=====================+
| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv2 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv3 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv4 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
FLOPs: 16.8591M
Params: 336.0
```
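These totals can likewise be checked by hand. Assuming the same counting convention as above (one op per kernel weight plus one for the bias, per output element), each `Conv2d(3, 3, kernel_size=3, padding=1)` on a 224x224 input costs:

```python
# One Conv2d(3, 3, kernel_size=3, padding=1) on a (3, 224, 224) input;
# padding=1 keeps the 224x224 spatial size.
out_elements = 3 * 224 * 224
ops_per_element = 3 * 3 * 3 + 1   # in_channels * k * k, plus bias
flops = out_elements * ops_per_element
print(flops)      # 4214784, shown above as 4.21478M per layer
print(4 * flops)  # 16859136, the 16.8591M total for four layers

# Parameters: 3*3*3*3 weights + 3 biases per layer, four layers in total.
params = 4 * (3 * 3 * 3 * 3 + 3)
print(params)     # 336
```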
#### Multiple inputs
If your model takes more than one input in `forward`, you can pass multiple `-i` parameters to flopth:
```python
# file path: /tmp/my_model.py
# model name: MyModel
import torch.nn as nn


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x1, x2):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x2 = self.conv3(x2)
        x2 = self.conv4(x2)
        return (x1, x2)
```
You can use `flopth -m MyModel -p /tmp/my_model.py -i 3 224 224 -i 3 128 128` to print the model information:
```plain
$ flopth -m MyModel -p /tmp/my_model.py -i 3 224 224 -i 3 128 128
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |
+===============+===============+=============+=============+==========+==================+======================+==========+=================+=====================+
| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 37.6923% | ################## |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv2 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 37.6923% | ################## |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv3 | Conv2d | (3,128,128) | (3,128,128) | 84 | 25.0% | ############ | 1.37626M | 12.3077% | ###### |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv4 | Conv2d | (3,128,128) | (3,128,128) | 84 | 25.0% | ############ | 1.37626M | 12.3077% | ###### |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
FLOPs: 11.1821M
Params: 336.0
```
#### Extra arguments in model's initialization
flopth accepts options like `-x param1=int:3 param2=float:5.2` to pass extra parameters to the model's initialization:
```python
# file path: /tmp/my_model.py
# model name: MyModel
import torch.nn as nn


class MyModel(nn.Module):
    # Note the extra parameters ks1 and ks2 here!
    def __init__(self, ks1, ks2):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=ks1, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=ks1, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=ks2, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=ks2, padding=1)

    def forward(self, x1, x2):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x2 = self.conv3(x2)
        x2 = self.conv4(x2)
        return (x1, x2)
```
In order to pass values to the arguments `ks1` and `ks2`, we can run flopth like this:
```plain
$ flopth -m MyModel -p /tmp/my_model.py -i 3 224 224 -i 3 128 128 -x ks1=int:3 ks2=int:1
+---------------+---------------+-------------+-------------+----------+------------------+-----------------------+----------+-----------------+-------------------------+
| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |
+===============+===============+=============+=============+==========+==================+=======================+==========+=================+=========================+
| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 43.75% | ##################### | 4.21478M | 47.6707% | ####################### |
+---------------+---------------+-------------+-------------+----------+------------------+-----------------------+----------+-----------------+-------------------------+
| conv2 | Conv2d | (3,224,224) | (3,224,224) | 84 | 43.75% | ##################### | 4.21478M | 47.6707% | ####################### |
+---------------+---------------+-------------+-------------+----------+------------------+-----------------------+----------+-----------------+-------------------------+
| conv3 | Conv2d | (3,128,128) | (3,130,130) | 12 | 6.25% | ### | 202.8K | 2.29374% | # |
+---------------+---------------+-------------+-------------+----------+------------------+-----------------------+----------+-----------------+-------------------------+
| conv4 | Conv2d | (3,130,130) | (3,132,132) | 12 | 6.25% | ### | 209.088K | 2.36486% | # |
+---------------+---------------+-------------+-------------+----------+------------------+-----------------------+----------+-----------------+-------------------------+
FLOPs: 8.84146M
Params: 192.0
```
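Each `-x` entry follows the `name=type:value` format. As a minimal sketch (not flopth's actual implementation), such an entry can be parsed into a keyword argument like this:

```python
def parse_extra_arg(arg):
    """Parse an entry like 'ks1=int:3' into a (name, value) pair."""
    name, typed_value = arg.split("=", 1)
    type_name, raw_value = typed_value.split(":", 1)
    caster = {"int": int, "float": float, "str": str}[type_name]
    return name, caster(raw_value)


# The two entries from the command above become keyword arguments
# for MyModel.__init__:
kwargs = dict(parse_extra_arg(a) for a in ["ks1=int:3", "ks2=int:1"])
print(kwargs)  # {'ks1': 3, 'ks2': 1}
```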
### Line number mode
One of the fancy features of flopth is that, given the line number where the model **object** is defined, flopth can print the model information:
```python
# file path: /tmp/my_model.py
# model name: MyModel
import torch.nn as nn


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x1, x2):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x2 = self.conv3(x2)
        x2 = self.conv4(x2)
        return (x1, x2)


if __name__ == '__main__':
    my_model = MyModel()
```
Since the model object `my_model` is defined on line 23, we can run flopth like this:
```plain
$ flopth -n 23 -p /tmp/my_model.py -i 3 224 224 -i 3 128 128
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |
+===============+===============+=============+=============+==========+==================+======================+==========+=================+=====================+
| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 37.6923% | ################## |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv2 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 37.6923% | ################## |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv3 | Conv2d | (3,128,128) | (3,128,128) | 84 | 25.0% | ############ | 1.37626M | 12.3077% | ###### |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv4 | Conv2d | (3,128,128) | (3,128,128) | 84 | 25.0% | ############ | 1.37626M | 12.3077% | ###### |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
FLOPs: 11.1821M
Params: 336.0
```
**Notice: although line number mode is quite handy, it may fail when the model definition is complex, e.g., when an external config file is used to initialize the model. In that case, I recommend using flopth's Python API, detailed below.**
## Python API
The Python API of flopth is quite simple:
```python
import torch
import torch.nn as nn

# import flopth
from flopth import flopth


# define the model
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x1):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x1 = self.conv4(x1)
        return x1


# declare a model object
my_model = MyModel()

# Use an input size
flops, params = flopth(my_model, in_size=((3, 224, 224),))
print(flops, params)

# Or use input tensors
dummy_inputs = torch.rand(1, 3, 224, 224)
flops, params = flopth(my_model, inputs=(dummy_inputs,))
print(flops, params)
```
The output is like this:
```plain
16.8591M 336.0
```
To show detailed information for each layer, add `show_detail=True` to the flopth function call:
```python
flops, params = flopth(my_model, in_size=((3, 224, 224),), show_detail=True)
```
The output:
```plain
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |
+===============+===============+=============+=============+==========+==================+======================+==========+=================+=====================+
| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv2 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv3 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
| conv4 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |
+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+
16.8591M 336.0
```
To show only the raw values of FLOPs and params (no unit conversion), add `bare_number=True` to the flopth function call:
```python
flops, params = flopth(my_model, in_size=((3, 224, 224),), bare_number=True)
```
The output:
```plain
16859136 336
```
# Known issues
1. When a module is used more than once during `forward`, the FLOPs calculation is incorrect. For example:
```python
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = nn.Linear(10, 10)

    def forward(self, x, y):
        x = self.l1(x)
        x = self.l1(x)
        x = self.l1(x)
        return x
```
will give a wrong FLOPs value, because flopth uses [register_buffer](https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module.register_buffer), which is bound to an `nn.Module` (in this example, `l1`).
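A plain-arithmetic sketch of why per-module state undercounts reuse (the exact buffer behavior is an assumption for illustration, not flopth's actual code): a `Linear(10, 10)` layer costs roughly 10 * 10 multiply-adds per call, so three calls should count three times that, but a single per-module buffer that is overwritten on each call keeps only one call's worth:

```python
# Rough cost of one Linear(10, 10) call (bias ignored for simplicity).
flops_per_call = 10 * 10

# True cost of calling the same layer three times in forward():
true_flops = 3 * flops_per_call
print(true_flops)  # 300

# A single per-module buffer is overwritten on every call,
# so only one call's cost survives:
buffer_flops = 0
for _ in range(3):
    buffer_flops = flops_per_call  # overwrite, not accumulate
print(buffer_flops)  # 100
```

A practical workaround is to define three separate `nn.Linear` modules instead of reusing `l1`, so each call is attributed to its own module.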
# TODOs
- [x] Support multiple inputs
- [x] Add parameter size
- [x] Add file line mode
- [x] Add line number mode
- [ ] Support more modules
# Contribution and issue
Discussion and contributions are very welcome. Please open an issue to reach me.
# Acknowledgements
This project was largely inspired by [torchstat](https://github.com/Swall0w/torchstat); great thanks to its creators.
Raw data
{
"_id": null,
"home_page": "https://github.com/vra/flopth",
"name": "flopth",
"maintainer": "",
"docs_url": null,
"requires_python": "",
"maintainer_email": "",
"keywords": "flopth,Pytorch,Flops,Deep-learning",
"author": "Yunfeng Wang",
"author_email": "wyf.brz@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/b4/52/ee6bbca51f47680883a22f2e4f7dfaa7a1aab3447855a6db5e80d4eeefb2/flopth-0.1.3.tar.gz",
"platform": "any",
"description": "<pre align=center style='color:green'>\n\n ______ __ __ \n / __/ /___ ____ / /_/ /_ \n / /_/ / __ \\/ __ \\/ __/ __ \\\n / __/ / /_/ / /_/ / /_/ / / /\n/_/ /_/\\____/ .___/\\__/_/ /_/ \n /_/ \n\n</pre>\n\n# flopth\n\nA simple program to calculate and visualize the FLOPs and Parameters of Pytorch models, with cli tool and Python API.\n\n# Features\n - Handy cli command to show flops and params quickly\n - Visualization percent of flops and params in each layer\n - Support multiple inputs in model's `forward` function\n - Support Both CPU and GPU mode\n - Support Torchscript Model (Only Parameters are shown)\n - Support Python3.5 and above\n\n# Installation\nInstall stable version of flopth via pypi:\n```bash\npip install flopth \n```\n\nor install latest version via github:\n```bash\npip install -U git+https://github.com/vra/flopth.git\n```\n\n# Usage examples\n## cli command\nflopth provide cli command `flopth` after installation. You can use it to get information of pytorch models quickly\n### Running on models in torchvision.models\nwith `flopth -m <model_name>`, flopth gives you all information about the `<model_name>`, input shape, output shape, parameter and flops of each layer, and total flops and params.\n\nHere is an example running on alexnet (default input size in (3, 224, 224)):\n```plain\n$ flopth -m alexnet \n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |\n+===============+===================+=============+=============+==========+==================+================================+==========+=================+=====================+\n| features.0 | Conv2d | (3,224,224) | (64,55,55) | 23.296K | 0.0381271% | | 70.4704M | 9.84839% | #### 
|\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.1 | ReLU | (64,55,55) | (64,55,55) | 0.0 | 0.0% | | 193.6K | 0.027056% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.2 | MaxPool2d | (64,55,55) | (64,27,27) | 0.0 | 0.0% | | 193.6K | 0.027056% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.3 | Conv2d | (64,27,27) | (192,27,27) | 307.392K | 0.50309% | | 224.089M | 31.3169% | ############### |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.4 | ReLU | (192,27,27) | (192,27,27) | 0.0 | 0.0% | | 139.968K | 0.0195608% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.5 | MaxPool2d | (192,27,27) | (192,13,13) | 0.0 | 0.0% | | 139.968K | 0.0195608% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.6 | Conv2d | (192,13,13) | (384,13,13) | 663.936K | 1.08662% | | 112.205M | 15.6809% | ####### |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.7 | ReLU | (384,13,13) | (384,13,13) | 0.0 | 0.0% | | 64.896K | 0.00906935% | 
|\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.8 | Conv2d | (384,13,13) | (256,13,13) | 884.992K | 1.44841% | | 149.564M | 20.9018% | ########## |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.9 | ReLU | (256,13,13) | (256,13,13) | 0.0 | 0.0% | | 43.264K | 0.00604624% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.10 | Conv2d | (256,13,13) | (256,13,13) | 590.08K | 0.965748% | | 99.7235M | 13.9366% | ###### |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.11 | ReLU | (256,13,13) | (256,13,13) | 0.0 | 0.0% | | 43.264K | 0.00604624% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| features.12 | MaxPool2d | (256,13,13) | (256,6,6) | 0.0 | 0.0% | | 43.264K | 0.00604624% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| avgpool | AdaptiveAvgPool2d | (256,6,6) | (256,6,6) | 0.0 | 0.0% | | 9.216K | 0.00128796% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.0 | Dropout | (9216) | (9216) | 0.0 | 0.0% | | 0.0 | 0.0% | 
|\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.1 | Linear | (9216) | (4096) | 37.7528M | 61.7877% | ############################## | 37.7487M | 5.27547% | ## |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.2 | ReLU | (4096) | (4096) | 0.0 | 0.0% | | 4.096K | 0.000572425% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.3 | Dropout | (4096) | (4096) | 0.0 | 0.0% | | 0.0 | 0.0% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.4 | Linear | (4096) | (4096) | 16.7813M | 27.4649% | ############# | 16.7772M | 2.34465% | # |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.5 | ReLU | (4096) | (4096) | 0.0 | 0.0% | | 4.096K | 0.000572425% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n| classifier.6 | Linear | (4096) | (1000) | 4.097M | 6.70531% | ### | 4.096M | 0.572425% | |\n+---------------+-------------------+-------------+-------------+----------+------------------+--------------------------------+----------+-----------------+---------------------+\n\n\nFLOPs: 715.553M\nParams: 61.1008M\n```\n\n### Running on custom models\nAlso, given model name and the file path where the model defined, flopth will 
output model information:\n\nFor the dummpy network `MyModel` defined in `/tmp/my_model.py`,\n```python\n# file path: /tmp/my_model.py\n# model name: MyModel\nimport torch.nn as nn\n\n\nclass MyModel(nn.Module):\n def __init__(self):\n super(MyModel, self).__init__()\n self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n\n def forward(self, x1):\n x1 = self.conv1(x1)\n x1 = self.conv2(x1)\n x1 = self.conv3(x1)\n x1 = self.conv4(x1)\n return x1\n```\nYou can use `flopth -m MyModel -p /tmp/my_model -i 3 224 224` to print model information:\n\n```plain\n$ flopth -m MyModel -p /tmp/my_model.py -i 3 224 224\n+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+\n| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |\n+===============+===============+=============+=============+==========+==================+======================+==========+=================+=====================+\n| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |\n+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+\n| conv2 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |\n+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+\n| conv3 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ 
|\n+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+\n| conv4 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 25.0% | ############ |\n+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+\n\nFLOPs: 16.8591M\nParams: 336.0\n```\n\n#### Multiple inputs\nIf your model has more than one input in `forward`, you can add multiple `-i` parameters to flopth:\n\n```python\n# file path: /tmp/my_model.py\n# model name: MyModel\nimport torch.nn as nn\n\n\nclass MyModel(nn.Module):\n def __init__(self):\n super(MyModel, self).__init__()\n self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)\n\n def forward(self, x1, x2):\n x1 = self.conv1(x1)\n x1 = self.conv2(x1)\n x2 = self.conv3(x2)\n x2 = self.conv4(x2)\n return (x1, x2)\n```\nYou can use `flopth -m MyModel -p /tmp/my_model -i 3 224 224 -i 3 128 128` to print model information:\n\n```plain\n flopth -m MyModel -p /tmp/my_model.py -i 3 224 224 -i 3 128 128\n+---------------+---------------+-------------+-------------+----------+------------------+----------------------+----------+-----------------+---------------------+\n| module_name | module_type | in_shape | out_shape | params | params_percent | params_percent_vis | flops | flops_percent | flops_percent_vis |\n+===============+===============+=============+=============+==========+==================+======================+==========+=================+=====================+\n| conv1 | Conv2d | (3,224,224) | (3,224,224) | 84 | 25.0% | ############ | 4.21478M | 37.6923% | ################## 
#### Extra arguments in model's initialization
flopth accepts options like `-x param1=int:3 param2=float:5.2` to pass extra parameters to the model's initialization:
```python
# file path: /tmp/my_model.py
# model name: MyModel
import torch.nn as nn


class MyModel(nn.Module):
    # Please notice the parameters ks1 and ks2 here!
    def __init__(self, ks1, ks2):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=ks1, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=ks1, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=ks2, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=ks2, padding=1)

    def forward(self, x1, x2):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x2 = self.conv3(x2)
        x2 = self.conv4(x2)
        return (x1, x2)
```
In order to pass values to the arguments `ks1` and `ks2`, we can run flopth like this:
```plain
$ flopth -m MyModel -p /tmp/my_model.py -i 3 224 224 -i 3 128 128 -x ks1=int:3 ks2=int:1
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| module_name | module_type | in_shape    | out_shape   | params | params_percent | params_percent_vis    | flops    | flops_percent | flops_percent_vis       |
+=============+=============+=============+=============+========+================+=======================+==========+===============+=========================+
| conv1       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 43.75%         | ##################### | 4.21478M | 47.6707%      | ####################### |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv2       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 43.75%         | ##################### | 4.21478M | 47.6707%      | ####################### |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv3       | Conv2d      | (3,128,128) | (3,130,130) | 12     | 6.25%          | ###                   | 202.8K   | 2.29374%      | #                       |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv4       | Conv2d      | (3,130,130) | (3,132,132) | 12     | 6.25%          | ###                   | 209.088K | 2.36486%      | #                       |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+

FLOPs: 8.84146M
Params: 192.0
```
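The `-x` values follow a `name=type:value` convention. A minimal sketch of how such strings could be parsed (a hypothetical helper for illustration, not flopth's actual implementation):

```python
# Hypothetical parser for "-x"-style arguments of the form name=type:value.
# An illustration of the convention only, not flopth's actual code.
_CASTS = {"int": int, "float": float, "str": str}

def parse_extra_args(pairs):
    kwargs = {}
    for pair in pairs:
        name, typed_value = pair.split("=", 1)
        type_name, raw = typed_value.split(":", 1)
        kwargs[name] = _CASTS[type_name](raw)
    return kwargs

print(parse_extra_args(["ks1=int:3", "ks2=int:1"]))  # {'ks1': 3, 'ks2': 1}
```

The resulting dictionary would then be passed to the model constructor as keyword arguments.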
### Line number mode
One of the fancy features of flopth is that, given the line number where the model **object** is defined, flopth can print model information:
```python
# file path: /tmp/my_model.py
# model name: MyModel
import torch.nn as nn


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x1, x2):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x2 = self.conv3(x2)
        x2 = self.conv4(x2)
        return (x1, x2)


if __name__ == '__main__':
    my_model = MyModel()
```

Since the model object `my_model` is defined on line 23, we can run flopth like this:
```plain
$ flopth -n 23 -p /tmp/my_model.py -i 3 224 224 -i 3 128 128
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| module_name | module_type | in_shape    | out_shape   | params | params_percent | params_percent_vis    | flops    | flops_percent | flops_percent_vis       |
+=============+=============+=============+=============+========+================+=======================+==========+===============+=========================+
| conv1       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 25.0%          | ############          | 4.21478M | 37.6923%      | ##################      |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv2       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 25.0%          | ############          | 4.21478M | 37.6923%      | ##################      |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv3       | Conv2d      | (3,128,128) | (3,128,128) | 84     | 25.0%          | ############          | 1.37626M | 12.3077%      | ######                  |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv4       | Conv2d      | (3,128,128) | (3,128,128) | 84     | 25.0%          | ############          | 1.37626M | 12.3077%      | ######                  |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+

FLOPs: 11.1821M
Params: 336.0
```
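Under the hood, line number mode has to map a line number in the file to the variable bound on that line. A rough, stdlib-only sketch of that idea (hypothetical, not flopth's actual implementation):

```python
# Illustrative sketch: find the variable name assigned on a given line.
# This is NOT flopth's actual code, just the core idea of line number mode.
import ast

source = """\
class A:
    pass

obj = A()
"""

def name_assigned_on_line(src, lineno):
    # Walk the AST and return the target name of the assignment on `lineno`.
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Assign) and node.lineno == lineno:
            return node.targets[0].id
    return None

print(name_assigned_on_line(source, 4))  # 'obj'
```

A real tool would additionally have to execute the file to materialize the object, which is exactly why complex definitions can break this mode.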
**Notice: Although line number mode is quite handy, it may fail when the model definition is complex, e.g., when an external config file is used to initialize the model. In that case, I recommend using flopth's Python API, detailed below.**

## Python API
The Python API of flopth is quite simple:
```python
import torch
import torch.nn as nn

# import flopth
from flopth import flopth


# define the model
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x1):
        x1 = self.conv1(x1)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x1 = self.conv4(x1)
        return x1


# declare the model object
my_model = MyModel()

# use an input size
flops, params = flopth(my_model, in_size=((3, 224, 224),))
print(flops, params)

# or use input tensors
dummy_inputs = torch.rand(1, 3, 224, 224)
flops, params = flopth(my_model, inputs=(dummy_inputs,))
print(flops, params)
```

The output looks like this:
```plain
16.8591M 336.0
```

To show detailed information for each layer, add `show_detail=True` to the flopth call:
```python
flops, params = flopth(my_model, in_size=((3, 224, 224),), show_detail=True)
```

The output:
```plain
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| module_name | module_type | in_shape    | out_shape   | params | params_percent | params_percent_vis    | flops    | flops_percent | flops_percent_vis       |
+=============+=============+=============+=============+========+================+=======================+==========+===============+=========================+
| conv1       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 25.0%          | ############          | 4.21478M | 25.0%         | ############            |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv2       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 25.0%          | ############          | 4.21478M | 25.0%         | ############            |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv3       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 25.0%          | ############          | 4.21478M | 25.0%         | ############            |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+
| conv4       | Conv2d      | (3,224,224) | (3,224,224) | 84     | 25.0%          | ############          | 4.21478M | 25.0%         | ############            |
+-------------+-------------+-------------+-------------+--------+----------------+-----------------------+----------+---------------+-------------------------+

16.8591M 336.0
```

To show only the raw values of flops and params (no unit conversion), add `bare_number=True` to the flopth call:
```python
flops, params = flopth(my_model, in_size=((3, 224, 224),), bare_number=True)
```

The output:
```plain
16859136 336
```

# Known issues
 1. When a module is used more than once during `forward`, the FLOPs calculation is not correct. For example:
    ```python
    import torch.nn as nn


    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()

            self.l1 = nn.Linear(10, 10)

        def forward(self, x, y):
            x = self.l1(x)
            x = self.l1(x)
            x = self.l1(x)

            return x
    ```
    This gives a wrong FLOPs value, because we use [register_buffer](https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module.register_buffer), which is bound to an `nn.Module` (in this example, `l1`).
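One plausible reading of this issue is that a per-module buffer records the cost of the latest call, so repeated calls through the same module are counted only once. The plain-Python analogy below illustrates that assumed failure mode and the obvious workaround of giving each call site its own module instance (an illustration, not flopth's code):

```python
# Plain-Python analogy of a per-module counter that records only the
# latest call (an assumed failure mode, not flopth's actual code).
class Layer:
    def __init__(self, flops_per_call):
        self.flops_per_call = flops_per_call
        self.flops_buffer = 0                     # analogue of a register_buffer slot

    def __call__(self, x):
        self.flops_buffer = self.flops_per_call   # overwrites, never accumulates
        return x

shared = Layer(100)
shared(shared(shared(0)))                  # three calls through one "module"
print(shared.flops_buffer)                 # 100: two of the calls go uncounted

# Workaround: one module instance per call site.
layers = [Layer(100) for _ in range(3)]
x = 0
for layer in layers:
    x = layer(x)
print(sum(layer.flops_buffer for layer in layers))  # 300
```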
# TODOs
 - [x] Support multiple inputs
 - [x] Add parameter size
 - [x] Add file line mode
 - [x] Add line number mode
 - [ ] Support more modules

# Contribution and issues
Discussion and contributions are very welcome. Please open an issue to reach me.

# Acknowledgements
This program is mostly inspired by [torchstat](https://github.com/Swall0w/torchstat); great thanks to its creators.
"bugtrack_url": null,
"license": "MIT Licence",
"summary": "A program to calculate FLOPs and Parameters of Pytorch models",
"version": "0.1.3",
"split_keywords": [
"flopth",
"pytorch",
"flops",
"deep-learning"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "5de38a118480ed989ab9c45ed4ed26d959267ab43a4dfb4922f00b8aeb7bc823",
"md5": "59a06ddd2865dd920740ba3707e17bef",
"sha256": "288935088cbf0120778dc6639cc2a94bd51003c0d85831b2057dadcd57f53302"
},
"downloads": -1,
"filename": "flopth-0.1.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "59a06ddd2865dd920740ba3707e17bef",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 11811,
"upload_time": "2023-01-30T16:19:34",
"upload_time_iso_8601": "2023-01-30T16:19:34.326063Z",
"url": "https://files.pythonhosted.org/packages/5d/e3/8a118480ed989ab9c45ed4ed26d959267ab43a4dfb4922f00b8aeb7bc823/flopth-0.1.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "b452ee6bbca51f47680883a22f2e4f7dfaa7a1aab3447855a6db5e80d4eeefb2",
"md5": "098fa7f93b1c493e3e87bea5a4cb6240",
"sha256": "83b129c76156a6f5234607ea53c58f1bfa052047c047d3e366e6caad2333cf0f"
},
"downloads": -1,
"filename": "flopth-0.1.3.tar.gz",
"has_sig": false,
"md5_digest": "098fa7f93b1c493e3e87bea5a4cb6240",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 14052,
"upload_time": "2023-01-30T16:19:36",
"upload_time_iso_8601": "2023-01-30T16:19:36.798888Z",
"url": "https://files.pythonhosted.org/packages/b4/52/ee6bbca51f47680883a22f2e4f7dfaa7a1aab3447855a6db5e80d4eeefb2/flopth-0.1.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-01-30 16:19:36",
"github": true,
"gitlab": false,
"bitbucket": false,
"github_user": "vra",
"github_project": "flopth",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "flopth"
}