neural-compressor-3x-ort

Name: neural-compressor-3x-ort
Version: 2.5.1
Home page: https://github.com/intel/neural-compressor
Summary: Repository of Intel® Neural Compressor
Upload time: 2024-04-03 14:11:31
Author: Intel AIPT Team
Author email: feng.tian@intel.com, haihao.shen@intel.com, suyue.chen@intel.com
Requires Python: >=3.7.0
License: Apache 2.0
Keywords: quantization, auto-tuning, post-training static quantization, post-training dynamic quantization, quantization-aware training
Requirements: deprecated>=1.2.13, numpy<2.0, opencv-python-headless, pandas, Pillow, prettytable, psutil, py-cpuinfo, pycocotools, pycocotools-windows, pyyaml, requests, schema, scikit-learn
            <div align="center">

Intel® Neural Compressor
===========================
<h3> An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)</h3>

[![python](https://img.shields.io/badge/python-3.8%2B-blue)](https://github.com/intel/neural-compressor)
[![version](https://img.shields.io/badge/release-2.5-green)](https://github.com/intel/neural-compressor/releases)
[![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/intel/neural-compressor/blob/master/LICENSE)
[![coverage](https://img.shields.io/badge/coverage-85%25-green)](https://github.com/intel/neural-compressor)
[![Downloads](https://static.pepy.tech/personalized-badge/neural-compressor?period=total&units=international_system&left_color=grey&right_color=green&left_text=downloads)](https://pepy.tech/project/neural-compressor)

[Architecture](./docs/source/design.md#architecture)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[Workflow](./docs/source/design.md#workflow)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[LLMs Recipes](./docs/source/llm_recipes.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[Results](./docs/source/validated_model_list.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[Documentations](https://intel.github.io/neural-compressor)

---
<div align="left">

Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org/), [ONNX Runtime](https://onnxruntime.ai/), and [MXNet](https://mxnet.apache.org/),
as well as Intel extensions such as [Intel Extension for TensorFlow](https://github.com/intel/intel-extension-for-tensorflow) and [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).
In particular, the tool provides the following key features, typical examples, and open collaborations:

* Support a wide range of Intel hardware such as [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html), [Intel Xeon CPU Max Series](https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html), [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/flex-series.html), and [Intel Data Center GPU Max Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/max-series.html) with extensive testing; support AMD CPU, ARM CPU, and NVidia GPU through ONNX Runtime with limited testing

* Validate popular LLMs such as [LLama2](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Falcon](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [GPT-J](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Bloom](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [OPT](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), and more than 10,000 broad models such as [Stable Diffusion](/examples/pytorch/nlp/huggingface_models/text-to-image/quantization), [BERT-Large](/examples/pytorch/nlp/huggingface_models/text-classification/quantization/ptq_static/fx), and [ResNet50](/examples/pytorch/image_recognition/torchvision_models/quantization/ptq/cpu/fx) from popular model hubs such as [Hugging Face](https://huggingface.co/), [Torch Vision](https://pytorch.org/vision/stable/index.html), and [ONNX Model Zoo](https://github.com/onnx/models#models), by leveraging zero-code optimization solution [Neural Coder](/neural_coder#what-do-we-offer) and automatic [accuracy-driven](/docs/source/design.md#workflow) quantization strategies

* Collaborate with cloud marketplaces such as [Google Cloud Platform](https://console.cloud.google.com/marketplace/product/bitnami-launchpad/inc-tensorflow-intel?project=verdant-sensor-286207), [Amazon Web Services](https://aws.amazon.com/marketplace/pp/prodview-yjyh2xmggbmga#pdp-support), and [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bitnami.inc-tensorflow-intel), software platforms such as [Alibaba Cloud](https://www.intel.com/content/www/us/en/developer/articles/technical/quantize-ai-by-oneapi-analytics-on-alibaba-cloud.html), [Tencent TACO](https://new.qq.com/rain/a/20221202A00B9S00) and [Microsoft Olive](https://github.com/microsoft/Olive), and open AI ecosystem such as [Hugging Face](https://huggingface.co/blog/intel), [PyTorch](https://pytorch.org/tutorials/recipes/intel_neural_compressor_for_pytorch.html), [ONNX](https://github.com/onnx/models#models), [ONNX Runtime](https://github.com/microsoft/onnxruntime), and [Lightning AI](https://github.com/Lightning-AI/lightning/blob/master/docs/source-pytorch/advanced/post_training_quantization.rst)

## What's New
* [2024/03] A new SOTA Weight-Only Quantization approach, [AutoRound](https://github.com/intel/auto-round), is available for LLMs on the [Intel Gaudi2 AI accelerator](https://habana.ai/products/gaudi2/).

## Installation

### Install from PyPI
```Shell
pip install neural-compressor
```
> **Note**: 
> More installation methods can be found at [Installation Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/installation_guide.md). Please check out our [FAQ](https://github.com/intel/neural-compressor/blob/master/docs/source/faq.md) for more details.

## Getting Started

Setting up the environment:  
```bash
pip install "neural-compressor>=2.3" "transformers>=4.34.0" torch torchvision
```
After successfully installing these packages, try your first quantization program.

### Weight-Only Quantization (LLMs)
The following example code demonstrates Weight-Only Quantization for LLMs. It supports Intel CPUs, the Intel Gaudi2 AI Accelerator, and NVIDIA GPUs; the best available device is selected automatically.

To try it on Intel Gaudi2, a Docker image with the Gaudi Software Stack is recommended; the following script sets up the environment. More details can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built).
```bash
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.14.0/ubuntu22.04/habanalabs/pytorch-installer-2.1.1:latest

# Check the container ID
docker ps

# Log in to the container
docker exec -it <container_id> bash

# Install optimum-habana
pip install --upgrade-strategy eager optimum[habana]

# Install INC/auto_round
pip install neural-compressor auto_round
```
Run the example:
```python
from transformers import AutoModel, AutoTokenizer

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit
from neural_compressor.adaptor.torch_utils.auto_round import get_dataloader

model_name = "EleutherAI/gpt-neo-125m"
float_model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
dataloader = get_dataloader(tokenizer, seqlen=2048)

woq_conf = PostTrainingQuantConfig(
    approach="weight_only",
    op_type_dict={
        ".*": {  # match all ops
            "weight": {
                "dtype": "int",
                "bits": 4,
                "algorithm": "AUTOROUND",
            },
        }
    },
)
quantized_model = fit(model=float_model, conf=woq_conf, calib_dataloader=dataloader)
```
**Note:** 

To run INT4 model inference, use [Intel Extension for Transformers](https://github.com/intel/intel-extension-for-transformers), which leverages Intel Neural Compressor for model quantization.
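
As a rough illustration, INT4 inference with Intel Extension for Transformers typically follows its Transformers-style API, as in the sketch below (the `load_in_4bit` flag usage, the reuse of `EleutherAI/gpt-neo-125m`, and the generation settings are assumptions for illustration; consult the Intel Extension for Transformers documentation for the exact API of your installed version):
```python
# Hedged sketch: INT4 weight-only inference via Intel Extension for Transformers.
# Assumes `pip install intel-extension-for-transformers` and a CPU backend;
# the model name and generation settings are illustrative only.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "EleutherAI/gpt-neo-125m"  # illustrative; any supported causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer("Once upon a time,", return_tensors="pt").input_ids

# `load_in_4bit=True` quantizes the weights to INT4 before running inference.
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```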

### Static Quantization (Non-LLMs)

```python
from torchvision import models

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

float_model = models.resnet18()
dataset = Datasets("pytorch")["dummy"](shape=(1, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)
static_quant_conf = PostTrainingQuantConfig()
quantized_model = fit(model=float_model, conf=static_quant_conf, calib_dataloader=calib_dataloader)
```
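
The same `fit` entry point also drives the accuracy-driven tuning mentioned above: pass an `eval_func` together with `TuningCriterion` and `AccuracyCriterion`, and the tuner searches quantization configurations until the accuracy goal is met. Below is a minimal sketch; the `eval_func` shown is a placeholder and should return the model's real accuracy on your validation data.
```python
# Hedged sketch of accuracy-driven post-training quantization.
# The evaluation function is a stand-in; replace it with a real accuracy metric.
from torchvision import models

from neural_compressor.config import AccuracyCriterion, PostTrainingQuantConfig, TuningCriterion
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

float_model = models.resnet18()
dataset = Datasets("pytorch")["dummy"](shape=(1, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)


def eval_func(model):
    # Placeholder: evaluate `model` on a validation set and return its accuracy.
    return 1.0


conf = PostTrainingQuantConfig(
    tuning_criterion=TuningCriterion(max_trials=100, timeout=0),  # timeout=0 means no time limit
    accuracy_criterion=AccuracyCriterion(tolerable_loss=0.01),  # allow up to 1% relative accuracy drop
)
quantized_model = fit(model=float_model, conf=conf, calib_dataloader=calib_dataloader, eval_func=eval_func)
quantized_model.save("./saved_results")  # persist the tuned model for later use
```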

## Documentation

<table class="docutils">
  <thead>
  <tr>
    <th colspan="8">Overview</th>
  </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="2" align="center"><a href="./docs/source/design.md#architecture">Architecture</a></td>
      <td colspan="2" align="center"><a href="./docs/source/design.md#workflow">Workflow</a></td>
      <td colspan="1" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
      <td colspan="1" align="center"><a href="./docs/source/llm_recipes.md">LLMs Recipes</a></td>
      <td colspan="2" align="center"><a href="examples/README.md">Examples</a></td>
    </tr>
  </tbody>
  <thead>
    <tr>
      <th colspan="8">Python-based APIs</th>
    </tr>
  </thead>
  <tbody>
    <tr>
        <td colspan="2" align="center"><a href="./docs/source/quantization.md">Quantization</a></td>
        <td colspan="2" align="center"><a href="./docs/source/mixed_precision.md">Advanced Mixed Precision</a></td>
        <td colspan="2" align="center"><a href="./docs/source/pruning.md">Pruning (Sparsity)</a></td>
        <td colspan="2" align="center"><a href="./docs/source/distillation.md">Distillation</a></td>
    </tr>
    <tr>
        <td colspan="2" align="center"><a href="./docs/source/orchestration.md">Orchestration</a></td>
        <td colspan="2" align="center"><a href="./docs/source/benchmark.md">Benchmarking</a></td>
        <td colspan="2" align="center"><a href="./docs/source/distributed.md">Distributed Compression</a></td>
        <td colspan="2" align="center"><a href="./docs/source/export.md">Model Export</a></td>
    </tr>
  </tbody>
  <thead>
    <tr>
      <th colspan="8">Neural Coder (Zero-code Optimization)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
        <td colspan="2" align="center"><a href="./neural_coder/docs/PythonLauncher.md">Launcher</a></td>
        <td colspan="2" align="center"><a href="./neural_coder/extensions/neural_compressor_ext_lab/README.md">JupyterLab Extension</a></td>
        <td colspan="2" align="center"><a href="./neural_coder/extensions/neural_compressor_ext_vscode/README.md">Visual Studio Code Extension</a></td>
        <td colspan="2" align="center"><a href="./neural_coder/docs/SupportMatrix.md">Supported Matrix</a></td>
    </tr>
  </tbody>
  <thead>
      <tr>
        <th colspan="8">Advanced Topics</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td colspan="2" align="center"><a href="./docs/source/adaptor.md">Adaptor</a></td>
          <td colspan="2" align="center"><a href="./docs/source/tuning_strategies.md">Strategy</a></td>
          <td colspan="2" align="center"><a href="./docs/source/distillation_quantization.md">Distillation for Quantization</a></td>
          <td colspan="2" align="center"><a href="./docs/source/smooth_quant.md">SmoothQuant</td>
      </tr>
      <tr>
          <td colspan="4" align="center"><a href="./docs/source/quantization_weight_only.md">Weight-Only Quantization (INT8/INT4/FP4/NF4) </td>
          <td colspan="2" align="center"><a href="https://github.com/intel/neural-compressor/blob/fp8_adaptor/docs/source/fp8.md">FP8 Quantization </td>
          <td colspan="2" align="center"><a href="./docs/source/quantization_layer_wise.md">Layer-Wise Quantization </td>
      </tr>
  </tbody>
  <thead>
      <tr>
        <th colspan="8">Innovations for Productivity</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td colspan="4" align="center"><a href="./neural_insights/README.md">Neural Insights</a></td>
          <td colspan="4" align="center"><a href="./neural_solution/README.md">Neural Solution</a></td>
      </tr>
  </tbody>
</table>

> **Note**: 
> More documentation can be found in the [User Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/user_guide.md).

## Selected Publications/Events
* Blog by Intel: [Effective Weight-Only Quantization for Large Language Models with Intel® Neural Compressor](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Effective-Weight-Only-Quantization-for-Large-Language-Models/post/1529552) (Oct 2023)
* EMNLP'2023 (Under Review): [TEQ: Trainable Equivalent Transformation for Quantization of LLMs](https://openreview.net/forum?id=iaI8xEINAf&referrer=%5BAuthor%20Console%5D) (Sep 2023)
* arXiv: [Efficient Post-training Quantization with FP8 Formats](https://arxiv.org/abs/2309.14592) (Sep 2023)
* arXiv: [Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs](https://arxiv.org/abs/2309.05516) (Sep 2023)
* NeurIPS'2022: [Fast Distilbert on CPUs](https://arxiv.org/abs/2211.07715) (Oct 2022)
* NeurIPS'2022: [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114) (Oct 2022)

> **Note**: 
> View [Full Publication List](https://github.com/intel/neural-compressor/blob/master/docs/source/publication_list.md).

## Additional Content

* [Release Information](./docs/source/releases_info.md)
* [Contribution Guidelines](./docs/source/CONTRIBUTING.md)
* [Legal Information](./docs/source/legal_information.md)
* [Security Policy](SECURITY.md)

## Communication 
- [GitHub Issues](https://github.com/intel/neural-compressor/issues): mainly for bug reports, new feature requests, questions, etc.
- [Email](mailto:inc.maintainers@intel.com): feel free to share research ideas on model compression techniques for potential collaboration.
- [Discord Channel](https://discord.com/invite/Wxk3J3ZJkU): join the Discord channel for more informal technical discussion.
- [WeChat group](/docs/source/imgs/wechat_group.jpg): scan the QR code to join the technical discussion.

            
