quantizers

Name: quantizers
Version: 1.0.1 (PyPI)
Upload time: 2024-11-20 02:17:02
Requires Python: >=3.10
License: GNU Lesser General Public License v3 (LGPLv3)
Keywords: quantization
Author email: Chang Sun <chsun@cern.ch>
Repository: https://github.com/calad0i/quantizers
Home page: none
Summary: none
Maintainer: none
Docs URL: none
Requirements: no requirements were recorded
Travis-CI: not used
Coveralls test coverage: not used
# Quantizers
[![PyPI version](https://badge.fury.io/py/quantizers.svg)](https://badge.fury.io/py/quantizers)
[![License](https://img.shields.io/badge/License-LGPL-blue)](LICENSE)
[![Tests](https://github.com/calad0i/quantizers/actions/workflows/python-test.yml/badge.svg)](https://github.com/calad0i/quantizers/actions/workflows/python-test.yml)
[![Coverage](https://img.shields.io/codecov/c/github/calad0i/quantizers)](https://app.codecov.io/gh/calad0i/quantizers)


Hardware-oriented numerical quantizers for deep learning models, implemented in Keras v3 and NumPy. Provides bit-accurate precision matching with Vivado/Vitis HLS implementations.

## Features

- Bit-accurate match to the HLS implementations, up to 32/64-bit floating-point precision
- Support for fixed-point and minifloat number formats
- Differentiable Keras v3 implementations with gradients on inputs
  - With surrogate gradients for bit-width optimization as described in *[Gradient-based Automatic Mixed Precision Quantization for Neural Networks On-Chip](https://arxiv.org/abs/2405.00645)*
- Supports stochastic rounding for training

## Supported Quantizers

### Fixed-Point Quantizer

Parameters:
- `k` (keep_negative): Enable negative numbers
- `i` (integer_bits): Number of bits before the binary point (excluding the sign bit)
- `f` (fractional_bits): Number of bits after the binary point
- For C++: `W = k + i + f`, `I = k + i`, `S = k` (see the worked example at the end of this subsection)

Supported modes:
- Rounding: `TRN`, `RND`, `RND_CONV`, `TRN_ZERO`, `RND_ZERO`, `RND_MIN_INF`, `RND_INF`
  - `S_RND` and `S_RND_CONV` for stochastic rounding; not available in the NumPy implementation, as stochastic rounding is intended for training only
- Overflow: `WRAP`, `SAT`, `SAT_SYM`, `WRAP_SM`

Limitations:
- `WRAP_SM` only works with `RND` or `RND_CONV` rounding
- `WRAP*` modes don't provide surrogate gradients for integer bits
- Saturation bit forced to zero for `WRAP` and `WRAP_SM`
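
To make the parameter and mode definitions above concrete, here is a minimal NumPy sketch, for illustration only and not the library's implementation, of a signed `fixed<6, 3, RND_CONV, SAT>` quantizer, i.e. `k=1`, `i=2`, `f=3`, where `RND_CONV` is convergent (round-half-to-even) rounding and `SAT` is saturation:

```python
import numpy as np

def fixed_quantize_demo(x, k=1, i=2, f=3):
    """Illustrative fixed<k+i+f, k+i> quantizer with RND_CONV rounding and SAT overflow."""
    step = 2.0 ** -f                            # smallest representable increment
    lo = -(2.0 ** i) if k else 0.0              # most negative representable value
    hi = 2.0 ** i - step                        # most positive representable value
    q = np.round(np.asarray(x) / step) * step   # np.round rounds half to even (RND_CONV)
    return np.clip(q, lo, hi)                   # saturate out-of-range values (SAT)

print(fixed_quantize_demo([1.23, -5.0, 0.0625]))  # quantized values: 1.25, -4.0, 0.0
```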

### Minifloat Quantizer

Parameters:
- `m` (mantissa_bits): Mantissa width
- `e` (exponent_bits): Exponent width
- `e0` (exponent_zero): Exponent bias (default: 0)
- Exponent range: `[-2^(e-1) + e0, 2^(e-1) - 1 + e0]`

Features:
- Supports subnormal numbers
- Uses `RND_CONV` rounding and `SAT` overflow
- HLS-synthesizable implementation in `test/cpp_source/ap_types/ap_float.h`
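
For example, with `m=3`, `e=4`, `e0=0` the exponent spans `[-8, 7]`; assuming an implicit leading mantissa bit and no exponent codes reserved for infinities/NaNs, the largest finite magnitude would be `(2 - 2^-3) * 2^7 = 240`. A small usage sketch with the NumPy variant, taking the argument order from the `float_quantize` call shown under Usage below:

```python
import numpy as np
from quantizers import float_quantize_np  # the `_np` suffix selects the NumPy implementation

x = np.array([0.1, 3.7, 1.0e6])
q = float_quantize_np(x, 3, 4, 0)  # mantissa_bits=3, exponent_bits=4, exponent_zero=0
# Values far above the representable maximum (~240 under the assumptions above) saturate.
```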

### Simplified Quantizers

- **Binary**: Maps to {-1, 1}, with 0 mapped to -1 (preliminary implementation)
- **Ternary**: Shorthand for fixed-point `fixed<2, 1, RND_CONV, SAT_SYM>`
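
For instance, the binary mapping above (0 goes to -1, and by the sign-based mapping negative values go to -1 as well) can be exercised with the NumPy variant mentioned under Usage below:

```python
import numpy as np
from quantizers import binary_quantize_np  # the `_np` suffix selects the NumPy implementation

binary_quantize_np(np.array([-0.3, 0.0, 0.7]))  # expected mapping: [-1, -1, 1]
```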


## Installation

**Requires Python >= 3.10.**

```bash
pip install quantizers
```
`keras>=3.0` and at least one compatible backend (`pytorch`, `jax`, or `tensorflow`) are required for training.
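
For example, one way to set up a training-ready environment, shown here with the JAX backend (PyTorch or TensorFlow work the same way, and Keras 3 selects its backend via the standard `KERAS_BACKEND` environment variable):

```bash
pip install quantizers "keras>=3.0" jax
export KERAS_BACKEND=jax  # or torch / tensorflow, matching the backend you installed
```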

## Usage

### Stateless Quantizers
```python
from quantizers import (
    float_quantize,       # use float_quantize_np for the NumPy implementation
    get_fixed_quantizer,  # use get_fixed_quantizer_np for the NumPy implementation
    binary_quantize,      # use binary_quantize_np for the NumPy implementation
    ternary_quantize,     # use ternary_quantize_np for the NumPy implementation
)

# Fixed-point quantizer
fixed_quantizer = get_fixed_quantizer(round_mode, overflow_mode)
fixedp_qtensor = fixed_quantizer(
    x,
    integer_bits,
    fractional_bits,
    keep_negative,
    training, # Enables stochastic rounding; WRAP overflow is not applied during training
    seed, # For stochastic rounding only
)

# Minifloat quantizer
floatp_qtensor = float_quantize(x, mantissa_bits, exponent_bits, exponent_zero)

# Simplified quantizers
binary_qtensor = binary_quantize(x)
ternary_qtensor = ternary_quantize(x)
```
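
For reference, the stochastic rounding enabled by `training=True` (the `S_RND`/`S_RND_CONV` modes) rounds a value up with probability equal to its fractional remainder, so the quantized value is unbiased in expectation. A minimal NumPy sketch of the idea, not the library's implementation:

```python
import numpy as np

def stochastic_round_demo(x, f, rng=None):
    """Round x to a grid of 2**-f, rounding up with probability equal to the remainder."""
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(x, dtype=float) * 2.0 ** f
    floor = np.floor(scaled)
    frac = scaled - floor                      # distance to the lower grid point, in [0, 1)
    up = rng.random(np.shape(scaled)) < frac   # round up with probability `frac`
    return (floor + up) * 2.0 ** -f

print(stochastic_round_demo([0.3, 0.3, 0.3], f=1))  # each entry becomes 0.0 or 0.5 at random
```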

### Stateful Quantizers
```python
from quantizers import FixedQ, MinifloatQ  # assuming both are importable from the package root

# Stateful quantizers can be used for training, but are not intended for it
fixed_q = FixedQ(
    width,
    integer_bits,        # including the sign bit
    keep_negative,
    fixed_round_mode,    # no stochastic rounding
    fixed_overflow_mode,
)
quantized = fixed_q(x)

mfloat_q = MinifloatQ(mantissa_bits, exponent_bits, exponent_zero)
quantized = mfloat_q(x)
```

            
