pytorchures

Name: pytorchures
Version: 0.1.2
Summary: Measure execution times of every layer in your pytorch model.
Homepage: https://github.com/cezbloch/pytorchures
Upload time: 2024-10-29 14:13:03
Requires Python: >=3.9
Keywords: pytorch, performance, layers, profiling
License: Copyright (c) 2024 Cezary Bloch. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Requirements: none recorded
# PyTorch profiler
Pytorchures is a simple model profiler intended for any PyTorch model.
It measures the execution time of each model layer individually. Every layer of the model is wrapped with a timing class that measures latency whenever the layer is called.

## TLDR;

Install
```
pip install pytorchures
```

Run
```
import json

from pytorchures import TimedModule

model = TimedModule(model)  # wrap an existing nn.Module with timing wrappers

output = model(inputs)  # run inference as usual

profiling_data = model.get_timings()

with open(profiling_filename, "w") as f:
    json.dump(profiling_data, f, indent=4)
```
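
For a concrete, self-contained variant of the snippet above, here is a minimal sketch assuming ```torch``` and ```torchvision``` are installed; the ```resnet18``` model, the random input, and the ```timings.json``` file name are illustrative, only ```TimedModule``` and ```get_timings()``` come from pytorchures.

```
import json

import torch
from torchvision.models import resnet18

from pytorchures import TimedModule

model = TimedModule(resnet18().eval())  # wrap every layer with timing wrappers

with torch.no_grad():
    for _ in range(3):  # a few forward passes to collect several samples per layer
        _ = model(torch.randn(1, 3, 224, 224))

with open("timings.json", "w") as f:
    json.dump(model.get_timings(), f, indent=4)
```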

An extract of a single layer from a sample output ```.json``` file:

```
    {
        "module_name": "InvertedResidual",
        "device_type": "cuda",
        "execution_times_ms": [
            5.021333694458008,
            2.427816390991211,
            2.4025440216064453
        ],
        "mean_time_ms": 3.283898035685221,
        "median_time_ms": 2.427816390991211,
        "sub_modules": [
            {
                "module_name": "Sequential",
                "device_type": "cuda",
                "execution_times_ms": [
                    4.198789596557617,
                    1.9135475158691406,
                    1.9412040710449219
                ],
                "mean_time_ms": 2.684513727823893,
                "median_time_ms": 1.9412040710449219,
                "sub_modules": [
                    {
                        "module_name": "Conv2dNormActivation",
                        "device_type": "cuda",
                        "execution_times_ms": [
                            2.0263195037841797,
                            0.7545948028564453,
                            0.9317398071289062
                        ],
                        "mean_time_ms": 1.2375513712565105,
                        "median_time_ms": 0.9317398071289062,
                        "sub_modules": [
                            ...
                                                    
```
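
The nested structure above can be post-processed directly. As a sketch (assuming ```get_timings()``` returns the root node shown above; the ```flatten_timings``` helper is hypothetical and not part of the package), the slowest modules can be listed by median time:

```
def flatten_timings(node, path=""):
    """Yield (qualified_name, median_time_ms) for a node and all of its sub-modules."""
    name = f"{path}/{node['module_name']}" if path else node["module_name"]
    yield name, node.get("median_time_ms", 0.0)
    for child in node.get("sub_modules", []):
        yield from flatten_timings(child, name)

# Print the five slowest modules by median time.
slowest = sorted(flatten_timings(profiling_data), key=lambda item: item[1], reverse=True)
for name, ms in slowest[:5]:
    print(f"{ms:8.3f} ms  {name}")
```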

# Setup

This repo was developed under WSL 2 running Ubuntu 20.04 LTS and Ubuntu 22.04 LTS. The editor of choice is VS Code.

## Install Python

The code was tested with Python 3.11. If you want to use a different release, substitute the Python version in the commands below that install Python and the virtual environment.

```sudo apt-get update```

```sudo apt-get install python3.11```

```sudo apt-get install python3.11-venv```

Install ImageMagick so that PIL's ```image.show()``` works on WSL:
```sudo apt install imagemagick```

## Install relevant VS Code extensions

If you choose to use the recommended VS Code editor, please install the extensions listed in ```extensions.json```.

## Create virtual environment

Create venv 

```python3.11 -m venv .venv```

To activate the venv, type the command below. VS Code should automatically detect your new venv, so select it as your default interpreter.

```source .venv/bin/activate```

## Install package in editable mode

In order to develop and run the code, install this repo in editable mode.

```pip install -e .```

To install in editable mode with additional development dependencies, use the command below.

```pip install -e .[dev]```

# Running

The entry point for profiling the sample object detection models is the ```run_profiling.py``` file.

## Examples

Running on CPU
```python pytorchures/run_profiling.py --device 'cpu' --nr_images 3```

Running on GPU
```python pytorchures/run_profiling.py --device 'cuda' --nr_images 3```

The script will print the CPU wall time of every layer encountered in the model.
Values are printed in a nested manner.
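
If you prefer to render the timings yourself instead of relying on the script's console output, a small recursive printer over ```get_timings()``` is enough. This is a sketch based on the dictionary structure shown earlier, not the script's exact output format.

```
def print_timings(node, indent=0):
    """Print each module's median time, indented by its depth in the model."""
    pad = "  " * indent
    print(f"{pad}{node['module_name']}: median {node.get('median_time_ms', 0.0):.3f} ms")
    for child in node.get("sub_modules", []):
        print_timings(child, indent + 1)
```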

## TimedModule wrapper

```
import json

from pytorchures import TimedModule

model = TimedModule(model)

_output = model(inputs)

profiling_data = model.get_timings()

with open(profiling_filename, "w") as f:
    json.dump(profiling_data, f, indent=4)
```

In the code above, the model and all its sublayers are recursively wrapped with the ```TimedModule``` class, which measures execution times whenever the layers are called and stores them for every call of the model.
Execution times of every wrapped layer are retrieved as a hierarchical dictionary using ```model.get_timings()```.
This dictionary can be saved to a JSON file.

If for some reason there is a need to clear the recorded timings, call ```model.clear_timings()```. This may be useful if only some of the measurements should be included in the final results. It is often the case that the first inference run takes much more time due to resource initialization, so clearing the measurements is a way to exclude this first run.
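
A sketch of that warm-up exclusion, assuming ```model``` and ```inputs``` are defined as above (```clear_timings()``` and ```get_timings()``` are the calls described in this section; the number of measured runs is arbitrary):

```
model = TimedModule(model)

_ = model(inputs)      # first run: includes one-off initialization costs
model.clear_timings()  # drop the warm-up measurements

for _ in range(5):
    _ = model(inputs)  # only these runs end up in the report

profiling_data = model.get_timings()
```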

# Testing

All tests are located in the 'tests' folder. Please follow the Arrange-Act-Assert pattern for all tests.
The tests should load in the test explorer.
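
For reference, a hypothetical test in that pattern; the tiny model and the expectation that the root node holds one ```execution_times_ms``` entry per call are assumptions based on the sample output shown earlier, not guaranteed behaviour.

```
import torch
from torch import nn

from pytorchures import TimedModule


def test_timed_module_records_one_timing_per_call():
    # Arrange
    model = TimedModule(nn.Sequential(nn.Linear(8, 4), nn.ReLU()))
    inputs = torch.randn(2, 8)

    # Act
    model(inputs)
    timings = model.get_timings()

    # Assert
    assert len(timings["execution_times_ms"]) == 1
```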

# Formatting

This repo uses the 'Black' code formatter.
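
To format the whole repository, Black is typically run from the repo root:

```black .```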

# Publishing to PyPI

Build the package. This command will create a ```dist``` folder containing the ```pytorchures``` package as ```.whl``` and ```.tar.gz``` archives.

```python -m build```

Check whether the built packages were built correctly.

```twine check dist/*```

Optionally, upload the new package to the ```testpypi``` server.

```twine upload -r testpypi dist/*```

To test installing the package from ```testpypi```, use the command:

```pip install --index-url https://test.pypi.org/simple/ pytorchures```

Upload the new package to the production ```pypi``` server.

```twine upload dist/*```

            
