structured-pruning-adapters

Name: structured-pruning-adapters
Version: 0.7.1
Home page: https://github.com/lukashedegaard/structured-pruning-adapters
Summary: Structured Pruning Adapters for PyTorch
Upload time: 2023-06-29 09:34:38
Author: Lukas Hedegaard
Keywords: deep learning, pytorch, ai, adapters, pruning, inference
Requirements: torch
# Structured Pruning Adapters for PyTorch

<div align="left">
  <a href="https://pypi.org/project/structured-pruning-adapters/">
    <img src="https://img.shields.io/pypi/pyversions/structured-pruning-adapters" height="20" >
  </a>
  <a href="https://badge.fury.io/py/structured-pruning-adapters">
    <img src="https://badge.fury.io/py/structured-pruning-adapters.svg" height="20" >
  </a>
  <!-- <a href="https://structured-pruning-adapters.readthedocs.io/en/latest/?badge=latest">
    <img src="https://readthedocs.org/projects/structured-pruning-adapters/badge/?version=latest" alt="Documentation Status" height="20"/>
  </a> -->
  <!-- <a href="https://pepy.tech/project/structured-pruning-adapters">
    <img src="https://pepy.tech/badge/structured-pruning-adapters" height="20">
  </a> -->
  <a href="https://opensource.org/licenses/Apache-2.0">
    <img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" height="20">
  </a>
  <a href="https://arxiv.org/abs/2211.10155">
    <img src="http://img.shields.io/badge/paper-arxiv.2211.10155-B31B1B.svg" height="20" >
  </a>
  <a href="https://github.com/psf/black">
    <img src="https://img.shields.io/badge/code%20style-black-000000.svg" height="20">
  </a>
    <a href="https://codecov.io/github/LukasHedegaard/structured-pruning-adapters" > 
    <img src="https://codecov.io/github/LukasHedegaard/structured-pruning-adapters/branch/main/graph/badge.svg?token=WHBSM01TRN"/> 
  </a>
  <a href="https://www.codefactor.io/repository/github/lukashedegaard/structured-pruning-adapters">
    <img src="https://www.codefactor.io/repository/github/lukashedegaard/structured-pruning-adapters/badge" alt="CodeFactor" />
  </a>
</div>

```bash
pip install structured-pruning-adapters
```
## A happy marriage 👰‍♀️🤵‍♂️

__Pruning__ is an effective method for reducing the size of neural networks. Besides reducing the parameter count, the process _can_ accelerate inference as well. 
CPUs can handle sparse weights just fine, but GPUs need structured sparsity for inference to actually speed up.
A structured approach to pruning, i.e., removing whole network channels [[paper](https://www.sciencedirect.com/science/article/pii/S0031320321000868)] or blocks of weights [[paper](https://aclanthology.org/2021.emnlp-main.829.pdf)], therefore generally yields real-world speedups as well.
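
As a minimal, self-contained sketch (plain PyTorch, not this library), the difference is visible on a single `Linear` layer:

```python3
import torch
from torch import nn

# Unstructured pruning only masks individual weights, so the tensor keeps its
# shape; structured pruning removes whole output channels, producing a genuinely
# smaller dense layer that GPUs can run faster.
lin = nn.Linear(256, 512)

element_mask = torch.rand_like(lin.weight) > 0.5   # unstructured: per-weight mask
sparse_weight = lin.weight * element_mask          # still shape (512, 256)

keep = torch.rand(lin.out_features) > 0.5          # structured: keep ~half the channels
pruned = nn.Linear(lin.in_features, int(keep.sum()))
with torch.no_grad():
    pruned.weight.copy_(lin.weight[keep])
    pruned.bias.copy_(lin.bias[keep])

print(sparse_weight.shape, pruned.weight.shape)    # same-size sparse vs. smaller dense
```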

\+

__Adapters__ [[paper](https://proceedings.neurips.cc/paper/2017/file/e7b24b112a44fdd9ee93bdf998c6ca0e-paper.pdf)] have emerged as an alternative to fine-tuning: the pre-trained network weights are left unaltered, and a new set of _adapter_ weights is added to the network to learn a specific task.
Some types of adapters add new layers; others are _fusible_ with the existing weights and incur no run-time overhead.
When a single base model is deployed with many specialised models, these structures can save a lot of parameters compared with full fine-tuning.
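
For intuition, here is a minimal sketch of a fusible low-rank adapter in plain PyTorch (an illustration under simplifying assumptions, not this library's implementation):

```python3
import torch

# The frozen base weight W stays untouched during training; only the low-rank
# factors A and B are learned. At deployment the adapter folds into W, so no
# extra layers or run-time overhead remain.
out_features, in_features, rank = 512, 256, 32
W = torch.randn(out_features, in_features)   # frozen, pre-trained
A = torch.randn(rank, in_features) * 1e-3    # learned
B = torch.zeros(out_features, rank)          # learned, initialised to zero

W_fused = W + B @ A                          # same shape as W
```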

=
<!-- | |
| --- | -->
[__Structured Pruning Adapters__](https://github.com/LukasHedegaard/structured-pruning-adapters) are the offspring of Structured Pruning and Fusible Adapters, and can be used for _Transfer Learning_ with:
- βœ… Extremely few learned parameters (binary pruning mask + masked adapter weights) πŸ‘Œ
- βœ… Accelerated network inference πŸŽπŸ’¨


## How to use this library
Use this library in conjunction with any Structured Pruning technique.
1. Install the library:
    ```bash
    pip install structured-pruning-adapters
    ```
2. Replace Linear and Conv layers with an SP Adapter:
    ```python3
    from torch.nn import Linear
    from sp_adapters import SPLoRA

    reg_lin = Linear(256, 512, bias=True)
    spa_lin = SPLoRA(reg_lin, rank=32)

    # Or replace all applicable layers in a whole network (reg_net: any pre-trained torch.nn.Module)
    spa_net = SPLoRA(reg_net, rank=32)
    ```
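    As a quick, illustrative sanity check (assuming the SPLoRA wrapper behaves as a drop-in `torch.nn.Module`), the adapted layer should keep the original layer's interface:
    ```python3
    import torch

    x = torch.randn(8, 256)
    assert spa_lin(x).shape == (8, 512)  # same input/output shape as the wrapped Linear
    ```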
3. Employ any Structured Pruning method. We conducted extensive experiments with multiple [channel-pruning](https://github.com/lukashedegaard/channel-spa-experiments) and [block-pruning](https://github.com/lukashedegaard/block-spa-experiments) methods; a generic magnitude-based sketch is shown below.
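    As an illustration only (not tied to any particular method from the experiments above), a simple magnitude-based output-channel mask for a linear layer could be derived like this; the 50% keep-ratio is an arbitrary assumption:
    ```python3
    import torch
    from torch import nn

    layer = nn.Linear(256, 512)
    scores = layer.weight.detach().abs().sum(dim=1)    # one L1 score per output channel
    k = layer.out_features // 2                        # keep half the channels (arbitrary)
    out_features_mask = torch.zeros(layer.out_features, dtype=torch.bool)
    out_features_mask[scores.topk(k).indices] = True
    ```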

4. Get pruned SP Adapter weights:
    ```python3
    # Specify masks - learned via your choice of Structured Pruning method
    in_features_mask = torch.tensor([1, 0, ..., 1], dtype=torch.bool)
    out_features_mask = torch.tensor([0, 1, ..., 1], dtype=torch.bool)

    # Read the pruned adapter parameters
    params = sp_adapters.splora.parameters(
        adapter_weights_only=True,
        in_features_mask=in_features_mask,
        out_features_mask=out_features_mask,
    )
    named_parameters = sp_adapters.splora.named_parameters(
        adapter_weights_only=True,
        in_features_mask=in_features_mask,
        out_features_mask=out_features_mask,
    )
    ```
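    With `adapter_weights_only=True` and the masks applied, the returned iterators should hold only the retained per-task weights, which can be counted or stored on their own (illustrative usage; the file name is hypothetical):
    ```python3
    import torch

    print("learned parameters:", sum(p.numel() for p in params))
    torch.save(dict(named_parameters), "task_adapter.pt")  # per-task weights only
    ```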

### Demo
See also [notebooks/demo.ipynb](notebooks/demo.ipynb) for a hands-on demo.

### Structured Pruning Low-Rank Adapter (SPLoRA) for _Channel Pruning_ 
```python3
from sp_adapters import SPLoRA
```
<div align="center">
<img src="figures/SPLoRA.png" width="400">
</div>
Adds a low-rank bottleneck projection in parallel with the main weight projection.
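
For intuition, a rough functional sketch of this structure in plain PyTorch (an illustration under simplifying assumptions, not the library's actual `SPLoRA` implementation):

```python3
import torch
from torch import nn

class LowRankParallelSketch(nn.Module):
    """Frozen base Linear with a parallel rank-r bottleneck; both paths share the
    same input/output channels, so channel pruning shrinks them jointly and the
    adapter can later be fused into the base weight."""

    def __init__(self, base: nn.Linear, rank: int = 32):
        super().__init__()
        self.base = base.requires_grad_(False)                    # frozen pre-trained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 1e-3)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()         # main path + low-rank path

y = LowRankParallelSketch(nn.Linear(256, 512))(torch.randn(8, 256))
```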

<br/>

### Structured Pruning Parallel Residual Adapter (SPPaRA) for _Channel Pruning_ of CNNs
```python3
from sp_adapters import SPPaRA
```
Adds a pointwise (1×1) convolution adapter to convolutional layers. First proposed in ["Efficient parametrization of multi-domain deep neural networks" by Rebuffi et al.](https://arxiv.org/pdf/1803.10082.pdf).
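
A minimal sketch of the idea in plain PyTorch (illustrative only; the shapes and initialisation are assumptions, not the library's `SPPaRA` class):

```python3
import torch
from torch import nn

# Frozen 3x3 convolution with a learnable 1x1 (pointwise) adapter alongside it;
# pruning a channel removes it from both paths at once.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1).requires_grad_(False)
adapter = nn.Conv2d(64, 64, kernel_size=1, bias=False)
nn.init.zeros_(adapter.weight)               # start as a no-op residual

x = torch.randn(1, 64, 32, 32)
y = conv(x) + adapter(x)                     # both outputs are (1, 64, 32, 32)
```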

<br/>

### Structured Pruning Low-rank PHM Adapter (SPLoPA) for _Block Pruning_ (experimental)
```python3
from sp_adapters import SPLoPA
```

<div align="center">
<img src="figures/SPLoPA.png" width="600">
</div>

Uses a variation on the Parameterized Hypercomplex Multiplication (PHM) layer [[paper](https://openreview.net/forum?id=rcQdycl0zyk)] with shared low-rank prototypes for block-sparse adaptation.
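
To illustrate the flavour of this composition, here is a rough PHM-style sketch in plain PyTorch (sizes and initialisation are arbitrary assumptions, not the library's `SPLoPA` implementation):

```python3
import torch

n, rank = 4, 8                          # number of shared prototypes and their rank
p, q = 64, 32                           # prototype (block) size
out_blocks, in_blocks = 8, 8            # adapter weight is (out_blocks*p, in_blocks*q)

coeffs = torch.randn(n, out_blocks, in_blocks)   # per-layer block coefficients (learned)
U = torch.randn(n, p, rank) * 1e-2               # shared low-rank prototype factors (learned)
V = torch.randn(n, rank, q) * 1e-2
prototypes = U @ V                               # (n, p, q) low-rank prototypes

# Kronecker composition: pruning a coefficient entry removes an entire p x q block.
W_adapter = sum(torch.kron(coeffs[i], prototypes[i]) for i in range(n))
print(W_adapter.shape)                           # torch.Size([512, 256])
```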

## Citation
If you enjoy this work, please consider citing it:
```bibtex
@article{hedegaard2022structured,
  title={Structured Pruning Adapters},
  author={Lukas Hedegaard and Aman Alok and Juby Jose and Alexandros Iosifidis},
  journal={preprint, arXiv:2211.10155},
  year={2022}
}
```

## Acknowledgement
This work was done in conjunction with a research exchange at [Cactus Communications 🌡](https://cactusglobal.com).

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449 [(OpenDR) πŸ‡ͺπŸ‡Ί](https://opendr.eu).




            
