mct-nightly

Name: mct-nightly
Version: 1.4.0.31052022.post408
Summary: A Model Compression Toolkit for neural networks
Upload time: 2022-05-31 00:04:11
Requires Python: >=3.6
# Model Compression Toolkit (MCT)
![tests](https://github.com/sony/model_optimization/actions/workflows/run_tests_suite_all_latest_frameworks.yml/badge.svg)

Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models for deployment on efficient, resource-constrained hardware. It provides researchers, developers, and engineers with tools for optimizing and deploying state-of-the-art neural networks. Specifically, the project applies quantization and pruning schemes to compress neural networks.
<img src="MCT_Block_Diagram.svg" width="800">

Currently, this project supports hardware-friendly post-training quantization (HPTQ) with Tensorflow 2 and Pytorch [1]. 
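
Post-training quantization maps trained floating-point weights and activations to low-bit integers without retraining. As a minimal illustration of the core idea only (symmetric per-tensor quantization in plain numpy; MCT's actual schemes are more elaborate):

```python
import numpy as np

def quantize_dequantize(x, n_bits=8):
    """Symmetric per-tensor quantization: map floats to signed integers
    in [-(2**(n_bits-1)), 2**(n_bits-1) - 1], then back to floats."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(x)) / q_max
    if scale == 0:  # all-zero tensor: nothing to quantize
        return x.copy()
    q = np.clip(np.round(x / scale), -q_max - 1, q_max)
    return q * scale

w = np.array([-1.0, -0.5, 0.0, 0.49, 1.0], dtype=np.float32)
w_q = quantize_dequantize(w)
# round-to-nearest keeps the error within half a quantization step
assert np.max(np.abs(w - w_q)) <= 0.5 * (1.0 / 127) + 1e-7
```

The compression comes from storing the integer values and a single scale instead of 32-bit floats; the assertion shows the reconstruction error stays within half a quantization step.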

The MCT project is developed by researchers and engineers working at Sony Semiconductors Israel.

For more information, please visit our [project website](https://sony.github.io/model_optimization/).

## Table of Contents

- [Getting Started](#getting-started)
- [Supported features](#supported-features)
- [Results](#results)
- [Contributions](#contributions)
- [License](#license)

## Getting Started

This section provides a quick start guide. We begin with installation, either from source or via pip, and then provide a short usage example.

### Installation
MCT can be installed from PyPI with pip or built from source, as shown below.


#### From Source
```
git clone https://github.com/sony/model_optimization.git
python setup.py install
```
#### From PyPi - latest stable release
```
pip install model-compression-toolkit
```

A nightly package is also available (unstable):
```
pip install mct-nightly
```

To run MCT, one of the supported frameworks, Tensorflow or Pytorch, needs to be installed.

To use MCT with Tensorflow, please install the following packages: 
[tensorflow](https://www.tensorflow.org/install), 
[tensorflow-model-optimization](https://www.tensorflow.org/model_optimization/guide/install)

To use MCT with Pytorch (experimental), please install the following package: 
[torch](https://pytorch.org/)

MCT is tested with:
* Tensorflow version 2.7 
* Pytorch version 1.10.0 

### Usage Example 
For an example of post-training quantization with Keras,
please see this [link](tutorials/example_keras_mobilenet.py).

For an example using Pytorch (experimental), please use this [link](tutorials/example_pytorch_mobilenet_v2.py).

For more examples please see the [tutorials' directory](tutorials).
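
A common pattern in these tutorials is that post-training quantization calibrates on a *representative dataset*: a generator yielding batches shaped like the model input. The sketch below is illustrative (the 224x224x3 shape is an assumption, and the commented `mct` call should be checked against the linked tutorials for the exact signature in your version):

```python
import numpy as np

def representative_data_gen():
    # Yield a few calibration batches shaped like the model input.
    # Real code would draw these from the training or validation set.
    for _ in range(10):
        yield [np.random.randn(1, 224, 224, 3).astype(np.float32)]

# With a Keras model in hand, quantization is then roughly one call
# (see the linked tutorials for the exact API in your MCT version):
#
#   import model_compression_toolkit as mct
#   quantized_model, quantization_info = mct.keras_post_training_quantization(
#       model, representative_data_gen)

first_batch = next(representative_data_gen())
assert first_batch[0].shape == (1, 224, 224, 3)
```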


## Supported Features

Quantization:

   * Post Training Quantization for Keras models.
   * Post Training Quantization for Pytorch models (experimental).
   * Gradient-based post-training quantization (Experimental, Keras only).
   * Mixed-precision post-training quantization (Experimental).

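Mixed-precision quantization assigns a different bitwidth to each layer so the model fits a resource budget while larger or less sensitive layers absorb most of the compression. A toy greedy allocator sketching the idea (purely illustrative; not MCT's actual search algorithm):

```python
def assign_bitwidths(layer_params, budget_bits, candidates=(8, 4, 2)):
    """Toy greedy allocator: start every layer at the highest precision,
    then lower the precision of the most expensive layer until the total
    weight memory fits the budget (or every layer is at the minimum)."""
    bits = {name: candidates[0] for name in layer_params}
    total = lambda: sum(layer_params[n] * bits[n] for n in bits)
    while total() > budget_bits:
        droppable = [n for n in bits
                     if candidates.index(bits[n]) < len(candidates) - 1]
        if not droppable:
            break  # budget unreachable even at the lowest precision
        biggest = max(droppable, key=lambda n: layer_params[n] * bits[n])
        bits[biggest] = candidates[candidates.index(bits[biggest]) + 1]
    return bits

layers = {"conv1": 1000, "conv2": 8000, "fc": 4000}
bits = assign_bitwidths(layers, budget_bits=60_000)
assert sum(layers[n] * bits[n] for n in layers) <= 60_000
```

A real mixed-precision search would also weigh each layer's sensitivity to quantization error, not only its size.
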
Tensorboard Visualization (Experimental):

   * CS Analyzer: compare a compressed model with the original model to analyze large accuracy drops.
   * Activation statistics and errors.


## Results
### Keras
As part of the MCT library, we have a set of example networks on image classification. These networks can be used as examples when using the package.

* Image Classification Example with MobileNet V1 on ImageNet dataset

| Network Name    | Float Accuracy | 8-bit Accuracy | Comments |
| --------------- | --------------:| --------------:| -------- |
| MobileNetV1 [2] | 70.558         | 70.418         |          |


For more results, please see [1].

### Pytorch
We quantized classification networks from the torchvision library. 
In the following table we present the ImageNet validation results for these models:

| Network Name       | Float Accuracy | 8-bit Accuracy |
| ------------------ | --------------:| --------------:|
| MobileNet V2 [3]   | 71.886         | 71.444         |
| ResNet-18 [3]      | 69.86          | 69.63          |
| SqueezeNet 1.1 [3] | 58.128         | 57.678         |



## Contributions
MCT aims to stay up to date and welcomes contributions from anyone.

You will find more information about contributions in the [Contribution guide](CONTRIBUTING.md).


## License
[Apache License 2.0](LICENSE).

## References 

[1] Habi, H.V., Peretz, R., Cohen, E., Dikstein, L., Dror, O., Diamant, I., Jennings, R.H. and Netzer, A., 2021. [HPTQ: Hardware-Friendly Post Training Quantization. arXiv preprint](https://arxiv.org/abs/2109.09113).

[2] [MobileNet](https://keras.io/api/applications/mobilenet/#mobilenet-function) from Keras Applications.

[3] [torchvision.models](https://pytorch.org/vision/stable/models.html)



            
