<div align="center">
Intel® Neural Compressor
===========================
<h3> An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)</h3>
[![python](https://img.shields.io/badge/python-3.7%2B-blue)](https://github.com/intel/neural-compressor)
[![version](https://img.shields.io/badge/release-2.1-green)](https://github.com/intel/neural-compressor/releases)
[![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/intel/neural-compressor/blob/master/LICENSE)
[![coverage](https://img.shields.io/badge/coverage-85%25-green)](https://github.com/intel/neural-compressor)
[![Downloads](https://static.pepy.tech/personalized-badge/neural-compressor?period=total&units=international_system&left_color=grey&right_color=green&left_text=downloads)](https://pepy.tech/project/neural-compressor)
[Architecture](./docs/source/design.md#architecture) | [Workflow](./docs/source/design.md#workflow) | [Results](./docs/source/validated_model_list.md) | [Examples](./examples/README.md) | [Documentation](https://intel.github.io/neural-compressor)
</div>
---
<div align="left">
Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org/), [ONNX Runtime](https://onnxruntime.ai/), and [MXNet](https://mxnet.apache.org/),
as well as Intel extensions such as [Intel Extension for TensorFlow](https://github.com/intel/intel-extension-for-tensorflow) and [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).
In particular, the tool provides the following key features, typical examples, and open collaborations:
* Support a wide range of Intel hardware such as [Intel Xeon Scalable processor](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html), [Intel Xeon CPU Max Series](https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html), [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/flex-series.html), and [Intel Data Center GPU Max Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/max-series.html) with extensive testing; support AMD CPU, ARM CPU, and NVIDIA GPU through ONNX Runtime with limited testing
* Validate more than 10,000 models such as [Bloom-176B](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/ptq_static/ipex/smooth_quant), [OPT-6.7B](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/ptq_static/ipex/smooth_quant), [Stable Diffusion](/examples/pytorch/nlp/huggingface_models/text-to-image/quantization), [GPT-J](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/ptq_static/fx), [BERT-Large](/examples/pytorch/nlp/huggingface_models/text-classification/quantization/ptq_static/fx), and [ResNet50](/examples/pytorch/image_recognition/torchvision_models/quantization/ptq/cpu/fx) from popular model hubs such as [Hugging Face](https://huggingface.co/), [Torch Vision](https://pytorch.org/vision/stable/index.html), and [ONNX Model Zoo](https://github.com/onnx/models#models), by leveraging the zero-code optimization solution [Neural Coder](/neural_coder#what-do-we-offer) and automatic [accuracy-driven](/docs/source/design.md#workflow) quantization strategies (see the configuration sketch after this list)
* Collaborate with cloud marketplace such as [Google Cloud Platform](https://console.cloud.google.com/marketplace/product/bitnami-launchpad/inc-tensorflow-intel?project=verdant-sensor-286207), [Amazon Web Services](https://aws.amazon.com/marketplace/pp/prodview-yjyh2xmggbmga#pdp-support), and [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bitnami.inc-tensorflow-intel), software platforms such as [Alibaba Cloud](https://www.intel.com/content/www/us/en/developer/articles/technical/quantize-ai-by-oneapi-analytics-on-alibaba-cloud.html) and [Tencent TACO](https://new.qq.com/rain/a/20221202A00B9S00), and open AI ecosystem such as [Hugging Face](https://huggingface.co/blog/intel), [PyTorch](https://pytorch.org/tutorials/recipes/intel_neural_compressor_for_pytorch.html), [ONNX](https://github.com/onnx/models#models), and [Lightning AI](https://github.com/Lightning-AI/lightning/blob/master/docs/source-pytorch/advanced/post_training_quantization.rst)
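The accuracy-driven tuning referenced above is configured through Python objects. Below is a minimal sketch, assuming the 2.x configuration API; the `tolerable_loss` and `max_trials` values are illustrative, not defaults:

```python
from neural_compressor.config import (
    AccuracyCriterion,
    PostTrainingQuantConfig,
    TuningCriterion,
)

# Accept at most a 1% relative accuracy drop versus the FP32 baseline,
# trying at most 100 quantization configurations (illustrative values).
conf = PostTrainingQuantConfig(
    tuning_criterion=TuningCriterion(max_trials=100),
    accuracy_criterion=AccuracyCriterion(tolerable_loss=0.01),
)
```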
## Installation
### Install from PyPI
```shell
pip install neural-compressor
```
> More installation methods can be found in the [Installation Guide](./docs/source/installation_guide.md). Please check out our [FAQ](./docs/source/faq.md) for more details.
## Getting Started
### Quantization with Python API
```shell
# Install Intel Neural Compressor and TensorFlow
pip install neural-compressor
pip install tensorflow
# Prepare fp32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
```
```python
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Build a dummy dataset and dataloader for calibration and evaluation
dataset = Datasets('tensorflow')['dummy'](shape=(1, 224, 224, 3))
dataloader = DataLoader(framework='tensorflow', dataset=dataset)

# Run accuracy-driven post-training quantization on the FP32 model
q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",
    conf=PostTrainingQuantConfig(),
    calib_dataloader=dataloader,
    eval_dataloader=dataloader,
)
```
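Once tuning meets the accuracy goal, `fit` returns a wrapped quantized model. As a hedged follow-up, assuming the `save` method of the returned model wrapper and a hypothetical output directory, the result can be persisted for deployment:

```python
# Persist the quantized model for later inference
# ("./quantized_model" is a hypothetical output directory).
q_model.save("./quantized_model")
```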
> More quick samples can be found on the [Get Started page](./docs/source/get_started.md).
## Documentation
<table class="docutils">
<thead>
<tr>
<th colspan="8">Overview</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./docs/source/design.md#architecture">Architecture</a></td>
<td colspan="2" align="center"><a href="./docs/source/design.md#workflow">Workflow</a></td>
<td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
<td colspan="2" align="center"><a href="./docs/source/bench.md">GUI</a></td>
</tr>
<tr>
<td colspan="2" align="center"><a href="examples/README.md#notebook-examples">Notebook</a></td>
<td colspan="2" align="center"><a href="examples/README.md">Examples</a></td>
<td colspan="4" align="center"><a href="https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html">Intel oneAPI AI Analytics Toolkit</a></td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Python-based APIs</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./docs/source/quantization.md">Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/source/mixed_precision.md">Advanced Mixed Precision</a></td>
<td colspan="2" align="center"><a href="./docs/source/pruning.md">Pruning (Sparsity)</a></td>
<td colspan="2" align="center"><a href="./docs/source/distillation.md">Distillation</a></td>
</tr>
<tr>
<td colspan="2" align="center"><a href="./docs/source/orchestration.md">Orchestration</a></td>
<td colspan="2" align="center"><a href="./docs/source/benchmark.md">Benchmarking</a></td>
<td colspan="2" align="center"><a href="./docs/source/distributed.md">Distributed Compression</a></td>
<td colspan="2" align="center"><a href="./docs/source/export.md">Model Export</a></td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Neural Coder (Zero-code Optimization)</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./neural_coder/docs/PythonLauncher.md">Launcher</a></td>
<td colspan="2" align="center"><a href="./neural_coder/extensions/neural_compressor_ext_lab/README.md">JupyterLab Extension</a></td>
<td colspan="2" align="center"><a href="./neural_coder/extensions/neural_compressor_ext_vscode/README.md">Visual Studio Code Extension</a></td>
<td colspan="2" align="center"><a href="./neural_coder/docs/SupportMatrix.md">Supported Matrix</a></td>
</tr>
</tbody>
<thead>
<tr>
<th colspan="8">Advanced Topics</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="2" align="center"><a href="./docs/source/adaptor.md">Adaptor</a></td>
<td colspan="2" align="center"><a href="./docs/source/tuning_strategies.md">Strategy</a></td>
<td colspan="2" align="center"><a href="./docs/source/distillation_quantization.md">Distillation for Quantization</a></td>
<td colspan="2" align="center"><a href="./docs/source/smooth_quant.md">SmoothQuant</td>
</tr>
</tbody>
</table>
## Selected Publications/Events
* Blog on Medium: [Effective Post-training Quantization for Large Language Models with Enhanced SmoothQuant Approach](https://medium.com/@NeuralCompressor/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98) (Apr 2023)
* Blog by Intel: [Intel® Xeon® Processors Are Still the Only CPU With MLPerf Results, Raising the Bar By 5x](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-Xeon-Processors-Are-Still-the-Only-CPU-With-MLPerf-Results/post/1472750) (Apr 2023)
* Post on Social Media: [Adopt with Tencent TACO: Heterogeneous optimization is also key to improving AI computing power](https://mp.weixin.qq.com/s/I-FQqOuW7HTnwXegLGNAtw) (Mar 2023)
* Post on Social Media: [Training and Inference for Stable Diffusion | Intel Business](https://www.youtube.com/watch?v=emCgSTlJaAg) (Jan 2023)
* NeurIPS'2022: [Fast DistilBERT on CPUs](https://arxiv.org/abs/2211.07715) (Oct 2022)
* NeurIPS'2022: [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114) (Oct 2022)
> View our [Full Publication List](./docs/source/publication_list.md).
## Additional Content
* [Release Information](./docs/source/releases_info.md)
* [Contribution Guidelines](./docs/source/CONTRIBUTING.md)
* [Legal Information](./docs/source/legal_information.md)
* [Security Policy](SECURITY.md)
## Research Collaborations
We welcome any interesting research ideas on model compression techniques; feel free to reach us at inc.maintainers@intel.com. We look forward to collaborating with you on Intel® Neural Compressor!