# apple-upscale

- Name: apple-upscale
- Version: 0.1.1
- Summary: Export utility for unconstrained channel pruned models
- Author email: Alvin Wan <alvinwan@apple.com>
- Homepage: https://github.com/apple/ml-upscale
- Requires Python: >=3.7
- Keywords: pruning, export, unconstrained pruning
- Upload time: 2023-07-14 21:59:14
- Requirements: none recorded
- License: Copyright (C) 2023 Apple Inc. All Rights Reserved. IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software. In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated. The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

# Unconstrained Channel Pruning · [Paper](https://openreview.net/forum?id=25fe54GXLo)

**UPSCALE: Unconstrained Channel Pruning** @ [ICML 2023](https://openreview.net/forum?id=25fe54GXLo)<br/>
[Alvin Wan](https://alvinwan.com), [Hanxiang Hao](https://scholar.google.com/citations?user=IMn1m2sAAAAJ&hl=en&oi=ao), [Kaushik Patnaik](https://openreview.net/profile?id=~Kaushik_Patnaik1), [Yueyang Xu](https://github.com/inSam), [Omer Hadad](https://scholar.google.com/citations?user=cHZBEjQAAAAJ&hl=en), [David Güera](https://davidguera.com), [Zhile Ren](https://jrenzhile.com), [Qi Shan](https://scholar.google.com/citations?user=0FbnKXwAAAAJ&hl=en)

By removing constraints from existing pruners, we improve ImageNet accuracy for post-training pruned models by 2.1 points on average, benefiting DenseNet (+16.9), EfficientNetV2 (+7.9), and ResNet (+6.2). Furthermore, for these unconstrained pruned models, UPSCALE improves inference speeds by up to 2x over a baseline export.

## Quick Start

Install our package.

```bash
pip install apple-upscale
```

Mask and prune channels, using the default magnitude pruner.

```python
import torch, torchvision
from upscale import MaskingManager, PruningManager

x = torch.rand((1, 3, 224, 224), device='cuda')
model = torchvision.models.get_model('resnet18', weights='DEFAULT').cuda()  # any torchvision model works; weights='DEFAULT' replaces the deprecated pretrained=True
MaskingManager(model).importance().mask()
PruningManager(model).compute([x]).prune()
```
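
After export, `model` is still an ordinary PyTorch module. As a quick sanity check (plain PyTorch, not part of the `upscale` API), you can run a forward pass and count the remaining parameters:

```python
# Plain-PyTorch sanity check: the pruned, exported model should still run as-is.
with torch.no_grad():
    out = model(x)
print(out.shape)                                   # e.g. torch.Size([1, 1000]) for resnet18
print(sum(p.numel() for p in model.parameters()))  # parameter count after pruning + export
```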

## Customize Pruning

We provide a number of pruning heuristics out of the box:

- Magnitude ([L1](https://arxiv.org/abs/1608.08710) and [L2](https://arxiv.org/abs/1608.03665))
- [LAMP](https://arxiv.org/abs/2010.07611)
- [FPGM](https://arxiv.org/abs/1811.00250)
- [HRank](https://arxiv.org/abs/2002.10179)

As shown in the example below, you pass the desired heuristic into the `importance` call and set the pruning ratio through the `amount` argument of `mask`; a value of `0.25` means 25% of channels are set to zero.

```python
from upscale.importance import LAMP
MaskingManager(model).importance(LAMP()).mask(amount=0.25)
```

You can also zero out channels using any method you see fit.

```python
model.conv0.weight[:, 24] = 0  # illustrative: zero input channel 24 of a layer named `conv0`
```
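
For a more concrete (and still hypothetical) example, the sketch below zeroes the weakest 25% of input channels of one convolution by hand, using only plain PyTorch; `model.layer1[0].conv1` is just an illustrative ResNet-18 layer, not something the `upscale` API requires.

```python
# Illustrative only: zero the 25% of input channels with the smallest L2 norm
# in a single conv layer. Any scheme that zeroes whole channels works.
conv = model.layer1[0].conv1
norms = conv.weight.detach().pow(2).sum(dim=(0, 2, 3)).sqrt()  # one norm per input channel
k = int(0.25 * norms.numel())
drop = norms.topk(k, largest=False).indices                    # weakest input channels
with torch.no_grad():
    conv.weight[:, drop] = 0
```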

Then, run our export.

```python
PruningManager(model).compute([x]).prune()
```

## Advanced

You may want direct access to network segments to build a heavily customized pruning algorithm.

```python
for segment in MaskingManager(model).segments():
    # prune each segment in the network independently
    for layer in segment.layers:
        ...  # inspect or modify each layer in the segment
```

## Development

> **NOTE:** See [src/upscale/pruning/README.md](src/upscale/pruning/README.md) for more details on how the core export algorithm code is organized.

Clone and setup.

```bash
git clone git@github.com:apple/ml-upscale.git
cd ml-upscale
pip install -e .
```

Run tests.

```bash
py.test src tests --doctest-modules
```

## Paper

Follow the development installation instructions to have the paper code under `paper/` available.

To run the baseline unconstrained export, pass `baseline=True` to `PruningManager.prune`.

```python
PruningManager(model).compute([x]).prune(baseline=True)
```
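
To see the speed difference between the two exports yourself, a rough timing sketch using only standard PyTorch is shown below; this is not the paper's measurement harness (the `--latency` flag of `paper/main.py` handles that).

```python
# Rough GPU latency estimate: warm up, then average many forward passes.
# Apply it to one copy of the model exported with baseline=True and one
# exported with the default settings to estimate the speedup.
import time
import torch

def measure_latency(m, inputs, iters=100):
    m.eval()
    with torch.no_grad():
        for _ in range(10):              # warm-up iterations
            m(inputs)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            m(inputs)
        torch.cuda.synchronize()
    return (time.time() - start) / iters
```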

To reproduce the paper results, run

```bash
python paper/main.py resnet18
```

Plug in any model in the `torchvision.models` namespace.

```
usage: main.py [-h] [--side {input,output} [{input,output} ...]]
               [--method {constrained,unconstrained} [{constrained,unconstrained} ...]]
               [--amount AMOUNT [AMOUNT ...]] [--epochs EPOCHS] 
               [--heuristic {l1,l2,lamp,fpgm,hrank}] [--global] [--out OUT] 
               [--force] [--latency] [--clean]
               model

positional arguments:
  model                 model to prune

options:
  -h, --help            show this help message and exit
  --side {input,output} [{input,output} ...]
                        prune which "side" -- producers, or consumers
  --method {constrained,unconstrained} [{constrained,unconstrained} ...]
                        how to handle multiple branches
  --amount AMOUNT [AMOUNT ...]
                        amounts to prune by. .6 means 60 percent pruned
  --epochs EPOCHS       number of epochs to train for
  --heuristic {l1,l2,lamp,fpgm,hrank}
                        pruning heuristic
  --global              apply heuristic globally
  --out OUT             directory to write results.csv to
  --force               force latency rerun
  --latency             measure latency locally
  --clean               clean the dataframe
```
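
Per the help text above, each run writes a `results.csv` into the `--out` directory. A hypothetical post-processing snippet, assuming you passed `--out results`:

```python
# Hypothetical: load and inspect the sweep results written by paper/main.py.
# The `results/` directory is an assumption; point this at whatever you passed to --out.
import pandas as pd

df = pd.read_csv("results/results.csv")
print(df.head())
```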

## Citation

If you find this useful for your research, please consider citing

```
@inproceedings{wan2023upscale,
  title={UPSCALE: Unconstrained Channel Pruning},
  author={Alvin Wan and Hanxiang Hao and Kaushik Patnaik and Yueyang Xu and Omer Hadad and David Guera and Zhile Ren and Qi Shan},
  booktitle={ICML},
  year={2023}
}
```

            
