# Lightning Neural Compressor

[![License](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE.md)

This repository contains the implementation of the Lightning Neural Compressor. The main goal of this project is to provide PyTorch Lightning callbacks for Intel® Neural Compressor. The callbacks compress a neural network so that it can be deployed on edge devices (e.g., mobile phones, Raspberry Pi, etc.). This project is a work in progress and is not ready for production use.

## Current Status

The project is currently under development, starting with Quantization-Aware Training, since the built-in QAT callback was removed from PyTorch Lightning.

The project also supports Weight Pruning and should work at least with pruners derived from the [`PytorchBasicPruner`](https://github.com/intel/neural-compressor/blob/d81269d2b261d39967605e17a89b5688ebaedbd1/neural_compressor/compression/pruner/pruners/basic.py#L29).
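
A pruner is selected through the `pruning_type` key of a `WeightPruningConfig` entry. As a minimal sketch (type names such as `"magnitude"` or `"snip_momentum"` depend on your neural-compressor version, so treat them as assumptions):

```python
from neural_compressor.training import WeightPruningConfig

# Minimal sketch: "snip_momentum" is neural-compressor's documented default
# pruning criterion, implemented by the basic pruner family.
basic_pruning_config = WeightPruningConfig([{
    "pruning_type": "snip_momentum",
    "target_sparsity": 0.5,
}])
```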

## Installation

To install this project from PyPI, run:

```bash
pip install -U lightning-nc
```

or install it directly by cloning the main branch:

```bash
git clone https://github.com/clementpoiret/lightning-nc
cd lightning-nc
pip install -e .
```

## Usage

To use the Lightning Neural Compressor, import the callbacks from the `lightning_nc` module.

**WARNING:** Currently, the callbacks require the PyTorch model to be an `nn.Module` contained inside your `LightningModule`.
This is not a huge limitation, as the required refactoring is straightforward:

```python
import os

import lightning as L
import timm
import torch
import torch.nn.functional as F
from neural_compressor import QuantizationAwareTrainingConfig
from neural_compressor.config import Torch2ONNXConfig
from neural_compressor.training import WeightPruningConfig
from lightning_nc import QATCallback, WeightPruningCallback
from torch import nn, optim, utils
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor


# Define your main model here
class VeryComplexModel(nn.Module):

    def __init__(self):
        super().__init__()
        # "best_pretrained_model" is a placeholder; substitute any timm model name
        self.backbone = timm.create_model("best_pretrained_model",
                                          pretrained=True)

        self.mlp = nn.Sequential(nn.Linear(self.backbone.num_features, 128),
                                 nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.mlp(self.backbone(x))


# Then, define your LightningModule as usual
class Classifier(L.LightningModule):

    def __init__(self):
        super().__init__()

        # This is mandatory for the callbacks
        self.model = VeryComplexModel()

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch

        # This just adapts MNIST images to the pretrained timm model; you can skip it
        x = x.repeat(1, 3, 1, 1)
        x = F.interpolate(x, size=(224, 224))

        y_hat = self.forward(x)

        loss = F.cross_entropy(y_hat, y)

        return loss

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=1e-3)

        return [optimizer]


clf = Classifier()

# setup data
dataset = MNIST(os.getcwd(), download=True, transform=ToTensor())
train_loader = utils.data.DataLoader(dataset)
```

Now that everything is set up, the callbacks can be integrated into a PyTorch Lightning training routine:

```python
# Define the configs for Pruning and Quantization
q_config = QuantizationAwareTrainingConfig()
p_config = WeightPruningConfig([{
    "op_names": ["backbone.*"],
    "start_step": 1,
    "end_step": 100,
    "target_sparsity": 0.5,
    "pruning_frequency": 1,
    "pattern": "4x1",
    "min_sparsity_ratio_per_op": 0.,
    "pruning_scope": "global",
}])

callbacks = [
    QATCallback(config=q_config),
    WeightPruningCallback(config=p_config),
]

trainer = L.Trainer(accelerator="gpu",
                    strategy="auto",
                    limit_train_batches=100,
                    max_epochs=1,
                    callbacks=callbacks)
trainer.fit(model=clf, train_dataloaders=train_loader)
```
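
After training, one quick sanity check on the pruning result is to measure the fraction of zeroed weights in the pruned layers. A minimal sketch in plain PyTorch (not part of lightning-nc):

```python
# Estimate achieved sparsity by counting zeroed weight entries in the backbone.
zeros, total = 0, 0
for name, param in clf.model.backbone.named_parameters():
    if "weight" in name:
        zeros += (param == 0).sum().item()
        total += param.numel()
print(f"Backbone sparsity: {zeros / total:.2%}")
```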

Models can now be exported easily, e.g. to ONNX:

```python
clf.model.export(
    "model.onnx",
    Torch2ONNXConfig(
        dtype="int8",
        opset_version=17,
        quant_format="QOperator",  # or QDQ
        example_inputs=torch.randn(1, 3, 224, 224),
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={
            "input": {
                0: "batch_size"
            },
            "output": {
                0: "batch_size"
            },
        },
    ))
```
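
The exported file can be sanity-checked with ONNX Runtime. A minimal sketch, assuming `onnxruntime` is installed (the input/output names match those passed to `Torch2ONNXConfig` above):

```python
import numpy as np
import onnxruntime as ort

# Load the exported INT8 model and run a dummy forward pass.
session = ort.InferenceSession("model.onnx")
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)
logits = session.run(["output"], {"input": dummy})[0]
print(logits.shape)  # expected: (1, 10)
```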

## Contributing

If you would like to contribute to this project, please submit a pull request. All contributions are welcome!

## License

This project is licensed under the MIT License. See the [LICENSE.md](LICENSE.md) file for details.