| Field | Value |
| --- | --- |
| Name | mct-quantizers-nightly |
| Version | 1.5.2.20241008.post1343 |
| home_page | None |
| Summary | Infrastructure for support neural networks compression |
| upload_time | 2024-10-08 00:13:55 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.6 |
| license | None |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Model Compression Toolkit (MCT) Quantizers
The MCT Quantizers library is an open-source library developed by researchers and engineers working at Sony Semiconductor Israel.
It provides tools for easily representing a quantized neural network in both Keras and PyTorch. The library offers researchers, developers, and engineers a set of useful quantizers, along with a simple interface for implementing new custom quantizers.
## High-level description:
The library's quantizers interface consists of two main components:
1. `QuantizationWrapper`: This object takes a layer with weights and a set of weight quantizers to infer a quantized layer.
2. `ActivationQuantizationHolder`: An object that holds an activation quantizer to be used during inference.
Users can set the quantizers and all of the quantization information for each layer by initializing these objects with the appropriate `weights_quantizer` and `activation_quantizer`.
Please note that the quantization wrapper and the quantizers are framework-specific.
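As a rough illustration, the sketch below wraps a PyTorch convolution with a weight quantizer and places an activation quantizer in a holder. The class names follow the wrapper/holder pattern described above, but the exact import paths and constructor arguments are assumptions and may differ between releases, so treat this as a sketch rather than a verbatim recipe:

```python
# Minimal sketch of wrapping a layer with MCT Quantizers (PyTorch).
# Import paths and constructor arguments are assumptions and may differ by version.
import torch
import torch.nn as nn

from mct_quantizers import PytorchQuantizationWrapper, PytorchActivationQuantizationHolder
from mct_quantizers.pytorch.quantizers import (
    WeightsSymmetricInferableQuantizer,
    ActivationSymmetricInferableQuantizer,
)

# A weight quantizer for the convolution's 'weight' tensor (8-bit, single per-tensor threshold).
weight_quantizer = WeightsSymmetricInferableQuantizer(num_bits=8,
                                                      threshold=[2.0],
                                                      per_channel=False)

# The wrapper takes the layer and a dict of weight quantizers, and infers a quantized layer.
quantized_conv = PytorchQuantizationWrapper(nn.Conv2d(3, 16, kernel_size=3),
                                            weights_quantizers={'weight': weight_quantizer})

# The holder carries an activation quantizer applied to the layer's outputs at inference.
activation_holder = PytorchActivationQuantizationHolder(
    ActivationSymmetricInferableQuantizer(num_bits=8, threshold=[4.0], signed=True))

# Chain them as in a regular PyTorch model and run inference.
model = nn.Sequential(quantized_conv, activation_holder)
out = model(torch.randn(1, 3, 32, 32))
```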
<img src="https://github.com/sony/mct_quantizers/raw/main/quantization_infra.png" width="700">
## Quantizers:
The library provides the "Inferable Quantizer" interface for implementing new quantizers.
This interface is based on the [`BaseInferableQuantizer`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/base_inferable_quantizer.py) class, which allows the definition of quantizers used for emulating inference-time quantization.
On top of `BaseInferableQuantizer`, the library defines a set of framework-specific quantizers for both weights and activations:
1. [Keras Quantizers](https://github.com/sony/mct_quantizers/tree/main/mct_quantizers/keras/quantizers)
2. [PyTorch Quantizers](https://github.com/sony/mct_quantizers/tree/main/mct_quantizers/pytorch/quantizers)
### The mark_quantizer Decorator
The [`@mark_quantizer`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/base_inferable_quantizer.py) decorator is used to assign static properties to each quantizer that define its task compatibility (see the sketch after the list below). Each quantizer class should be decorated with this decorator, which defines the following properties:
- [`QuantizationTarget`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/base_inferable_quantizer.py): An Enum that indicates whether the quantizer is intended for weights or activations quantization.
- [`QuantizationMethod`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/quant_info.py): A list of quantization methods (Uniform, Symmetric, etc.).
- `identifier`: A unique identifier for the quantizer class. This is a helper property that allows the creation of advanced quantizers for specific tasks.
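For illustration, a custom weight quantizer might be declared roughly as follows. The decorator argument names mirror the properties listed above, and the base-class contract is only sketched; check `base_inferable_quantizer.py` for the authoritative definitions:

```python
# Sketch of registering a custom quantizer with @mark_quantizer.
# Argument names, the identifier value, and the base-class interface are assumptions;
# see mct_quantizers/common/base_inferable_quantizer.py for the exact contract.
from mct_quantizers import mark_quantizer, QuantizationTarget, QuantizationMethod
from mct_quantizers.common.base_inferable_quantizer import BaseInferableQuantizer


@mark_quantizer(quantization_target=QuantizationTarget.Weights,
                quantization_method=[QuantizationMethod.SYMMETRIC],
                identifier='my_custom_quantizer')
class MyCustomWeightsQuantizer(BaseInferableQuantizer):
    def __init__(self, num_bits: int, threshold: float):
        super().__init__()
        self.num_bits = num_bits
        self.threshold = threshold

    def __call__(self, w):
        # Emulate inference-time symmetric quantization of the weights tensor `w`.
        scale = self.threshold / (2 ** (self.num_bits - 1))
        return (w / scale).round().clip(-2 ** (self.num_bits - 1),
                                        2 ** (self.num_bits - 1) - 1) * scale
```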
## Getting Started
This section provides a quick guide to getting started. We begin with the installation process, either from source or from PyPI. Then, we provide a short example of usage.
### Installation
#### From PyPI - mct-quantizers package
To install the latest stable release of MCT Quantizers from PyPI, run the following command:
```
pip install mct-quantizers
```
If you prefer to use the nightly package (unstable version), you can install it with the following command:
```
pip install mct-quantizers-nightly
```
#### From Source
To work with the MCT Quantizers source code, follow these steps:
```
git clone https://github.com/sony/mct_quantizers.git
cd mct_quantizers
python setup.py install
```
### Requirements
To use MCT Quantizers, you need to have one of the supported frameworks, TensorFlow or PyTorch, installed.
For use with TensorFlow, please install the following package:
[tensorflow](https://www.tensorflow.org/install)
For use with PyTorch, please install the following package:
[torch](https://pytorch.org/)
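For example, either framework can typically be installed directly from PyPI (pinning versions that match the requirements file is advisable):
```
# For the Keras/TensorFlow backend:
pip install tensorflow
# For the PyTorch backend:
pip install torch
```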
You can also use the [requirements](https://github.com/sony/mct_quantizers/blob/main/requirements.txt) file to set up your environment.
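### Usage Example
The snippet below sketches how a single Keras layer might be wrapped with a weight quantizer and followed by an activation quantization holder. As with the PyTorch sketch in the high-level description above, the quantizer class names, import paths, and constructor arguments are assumptions based on the interface described earlier and may differ between releases:
```python
# Sketch: building a tiny quantized Keras model with MCT Quantizers.
# Quantizer import paths and arguments are assumptions; check the Keras quantizers
# directory linked above for the exact classes available in your installed version.
import numpy as np
import tensorflow as tf

from mct_quantizers import KerasQuantizationWrapper, KerasActivationQuantizationHolder
from mct_quantizers.keras.quantizers import (
    WeightsSymmetricInferableQuantizer,
    ActivationSymmetricInferableQuantizer,
)

# Wrap a Dense layer so its kernel is quantized at inference time.
quantized_dense = KerasQuantizationWrapper(
    tf.keras.layers.Dense(10),
    weights_quantizers={'kernel': WeightsSymmetricInferableQuantizer(num_bits=8,
                                                                     threshold=[2.0],
                                                                     per_channel=False)})

# Hold an activation quantizer for the layer's outputs.
activation_holder = KerasActivationQuantizationHolder(
    ActivationSymmetricInferableQuantizer(num_bits=8, threshold=[4.0], signed=True))

model = tf.keras.Sequential([tf.keras.Input(shape=(16,)), quantized_dense, activation_holder])
predictions = model(np.random.randn(1, 16).astype(np.float32))
```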
## License
[Apache License 2.0](https://github.com/sony/mct_quantizers/blob/main/LICENSE.md).
Raw data

```json
{
"_id": null,
"home_page": null,
"name": "mct-quantizers-nightly",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.6",
"maintainer_email": null,
"keywords": null,
"author": null,
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/27/86/3866d4f6f8717c6a85d04b2519175e5253e501dbdca3849e2528db84ee7e/mct-quantizers-nightly-1.5.2.20241008.post1343.tar.gz",
"platform": null,
"description": "# Model Compression Toolkit (MCT) Quantizers\n\nThe MCT Quantizers library is an open-source library developed by researchers and engineers working at Sony Semiconductor Israel. \n\nIt provides tools for easily representing a quantized neural network in both Keras and PyTorch. The library offers researchers, developers, and engineers a set of useful quantizers, along with a simple interface for implementing new custom quantizers.\n\n## High level description:\n\nThe library's quantizers interface consists of two main components:\n\n1. `QuantizationWrapper`: This object takes a layer with weights and a set of weight quantizers to infer a quantized layer.\n2. `ActivationQuantizationHolder`: An object that holds an activation quantizer to be used during inference.\n\nUsers can set the quantizers and all the quantization information for each layer by initializing the weights_quantizer and activation_quantizer API.\n\nPlease note that the quantization wrapper and the quantizers are framework-specific.\n\n<img src=\"https://github.com/sony/mct_quantizers/raw/main/quantization_infra.png\" width=\"700\">\n\n## Quantizers:\n\nThe library provides the \"Inferable Quantizer\" interface for implementing new quantizers. \nThis interface is based on the [`BaseInferableQuantizer`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/base_inferable_quantizer.py) class, which allows the definition of quantizers used for emulating inference-time quantization.\n\nOn top of `BaseInferableQuantizer` the library defines a set of framework-specific quantizers for both weights and activations:\n1. [Keras Quantizers](https://github.com/sony/mct_quantizers/tree/main/mct_quantizers/keras/quantizers)\n2. [Pytorch Quantizers](https://github.com/sony/mct_quantizers/tree/main/mct_quantizers/pytorch/quantizers)\n\n### The mark_quantizer Decorator\n\nThe [`@mark_quantizer`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/base_inferable_quantizer.py) decorator is used to assign each quantizer with static properties that define its task compatibility. Each quantizer class should be decorated with this decorator, which defines the following properties:\n - [`QuantizationTarget`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/base_inferable_quantizer.py): An Enum that indicates whether the quantizer is intended for weights or activations quantization.\n - [`QuantizationMethod`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/quant_info.py): A list of quantization methods (Uniform, Symmetric, etc.).\n - `identifier`: A unique identifier for the quantizer class. This is a helper property that allows the creation of advanced quantizers for specific tasks.\n\n## Getting Started\n\nThis section provides a quick guide to getting started. We begin with the installation process, either via source code or the pip server. 
Then, we provide a short example of usage.\n\n### Installation\n\n#### From PyPi - mct-quantizers package\nTo install the latest stable release of MCT Quantizer from PyPi, run the following command:\n```\npip install mct-quantizers\n```\n\nIf you prefer to use the nightly package (unstable version), you can install it with the following command:\n```\npip install mct-quantizers-nightly\n```\n\n#### From Source\nTo work with the MCT Quantizers source code, follow these steps:\n```\ngit clone https://github.com/sony/mct_quantizers.git\ncd mct_quantizers\npython setup.py install\n```\n\n### Requirements\n\nTo use MCT Quantizers, you need to have one of the supported frameworks, Tensorflow or PyTorch, installed.\n\nFor use with Tensorflow, please install the following package:\n[tensorflow](https://www.tensorflow.org/install),\n\nFor use with PyTorch, please install the following package:\n[torch](https://pytorch.org/)\n\nYou can also use the [requirements](https://github.com/sony/mct_quantizers/blob/main/requirements.txt) file to set up your environment.\n\n## License\n[Apache License 2.0](https://github.com/sony/mct_quantizers/blob/main/LICENSE.md).\n\n\n",
"bugtrack_url": null,
"license": null,
"summary": "Infrastructure for support neural networks compression",
"version": "1.5.2.20241008.post1343",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "c1db5ba9fdc7ef10d8bcfb8834c663c1e35f42055794ad003880800cbb3c0148",
"md5": "0b38b036602fabdd9b9143d7a0f640fe",
"sha256": "ebfa0a2d1067a47521c0be0da87e67941a067c84222c6f40dd1a87cc62fc1651"
},
"downloads": -1,
"filename": "mct_quantizers_nightly-1.5.2.20241008.post1343-py3-none-any.whl",
"has_sig": false,
"md5_digest": "0b38b036602fabdd9b9143d7a0f640fe",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.6",
"size": 105096,
"upload_time": "2024-10-08T00:13:53",
"upload_time_iso_8601": "2024-10-08T00:13:53.751443Z",
"url": "https://files.pythonhosted.org/packages/c1/db/5ba9fdc7ef10d8bcfb8834c663c1e35f42055794ad003880800cbb3c0148/mct_quantizers_nightly-1.5.2.20241008.post1343-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "27863866d4f6f8717c6a85d04b2519175e5253e501dbdca3849e2528db84ee7e",
"md5": "2a530362ef61ebe7bd6da7faeffa6ea6",
"sha256": "8c6b7489c73e5c09da003551a1bf8599a4232f77c56706367c1430f8df03e42a"
},
"downloads": -1,
"filename": "mct-quantizers-nightly-1.5.2.20241008.post1343.tar.gz",
"has_sig": false,
"md5_digest": "2a530362ef61ebe7bd6da7faeffa6ea6",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.6",
"size": 47239,
"upload_time": "2024-10-08T00:13:55",
"upload_time_iso_8601": "2024-10-08T00:13:55.736757Z",
"url": "https://files.pythonhosted.org/packages/27/86/3866d4f6f8717c6a85d04b2519175e5253e501dbdca3849e2528db84ee7e/mct-quantizers-nightly-1.5.2.20241008.post1343.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-10-08 00:13:55",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "mct-quantizers-nightly"
}
```