# <img width="40" alt="tool icon" src="https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96" /> LLM Compressor
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:
* Comprehensive set of quantization algorithms for weight-only and activation quantization
* Seamless integration with Hugging Face models and repositories
* `safetensors`-based file format compatible with `vllm`
* Large model support via `accelerate`
**✨ Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! ✨**
<p align="center">
<img alt="LLM Compressor Flow" src="https://github.com/user-attachments/assets/adf07594-6487-48ae-af62-d9555046d51b" width="80%" />
</p>
## 🚀 What's New!
Big updates have landed in LLM Compressor! Check out these exciting new features:
* **Preliminary FP4 Quantization Support:** Quantize weights and activations to FP4 and seamlessly run the compressed model in vLLM. Model weights and activations are quantized following the NVFP4 [configuration](https://github.com/neuralmagic/compressed-tensors/blob/f5dbfc336b9c9c361b9fe7ae085d5cb0673e56eb/src/compressed_tensors/quantization/quant_scheme.py#L104). See the examples for [weight-only quantization](examples/quantization_w4a16_fp4/llama3_example.py) and [FP4 activation quantization](examples/quantization_w4a4_fp4/llama3_example.py), and the brief sketch after this list. Support is currently preliminary; MoE support will be added in upcoming releases.
* **Axolotl Sparse Finetuning Integration:** Seamlessly finetune sparse LLMs with our Axolotl integration. Learn how to create [fast sparse open-source models with Axolotl and LLM Compressor](https://developers.redhat.com/articles/2025/06/17/axolotl-meets-llm-compressor-fast-sparse-open). See also the [Axolotl integration docs](https://docs.axolotl.ai/docs/custom_integrations.html#llmcompressor).
* **AutoAWQ Integration:** Perform low-bit weight-only quantization efficiently using AutoAWQ, now part of LLM Compressor. *Note: This integration should be considered experimental for now. Enhanced support, including for MoE models and improved handling of larger models via layer sequential pipelining, is planned for upcoming releases.* [See the details](https://github.com/vllm-project/llm-compressor/pull/1177).
* **Day 0 Llama 4 Support:** Meta utilized LLM Compressor to create the [FP8-quantized Llama-4-Maverick-17B-128E](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8), optimized for vLLM inference using [compressed-tensors](https://github.com/neuralmagic/compressed-tensors) format.
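For a rough sense of what the FP4 flow mentioned above looks like, here is a minimal sketch of a weight-only NVFP4 run. The scheme name `NVFP4A16` and the absence of a calibration dataset are assumptions based on the linked example scripts; treat those scripts as the source of truth while support is preliminary.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Assumed scheme name for weight-only FP4 ("NVFP4A16"); see the linked
# example scripts for the exact, up-to-date configuration.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4A16", ignore=["lm_head"])

# Weight-only FP4 quantization is assumed not to need a calibration dataset here.
oneshot(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    recipe=recipe,
    output_dir="Meta-Llama-3-8B-Instruct-NVFP4A16",
)
```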
### Supported Formats
* Activation Quantization: W8A8 (int8 and fp8)
* Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
* 2:4 Semi-structured and Unstructured Sparsity
### Supported Algorithms
* Simple PTQ
* GPTQ
* AWQ
* SmoothQuant
* SparseGPT
### When to Use Which Optimization
Please refer to [docs/schemes.md](./docs/schemes.md) for detailed information about available optimization schemes and their use cases.
## Installation
```bash
pip install llmcompressor
```
## Get Started
### End-to-End Examples
Applying quantization with `llmcompressor`:
* [Activation quantization to `int8`](examples/quantization_w8a8_int8/README.md)
* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8/README.md)
* [Activation quantization to `fp4`](examples/quantization_w4a4_fp4/llama3_example.py)
* [Weight only quantization to `fp4`](examples/quantization_w4a16_fp4/llama3_example.py)
* [Weight only quantization to `int4` using GPTQ](examples/quantization_w4a16/README.md)
* [Weight only quantization to `int4` using AWQ](examples/awq/README.md)
* [Quantizing MoE LLMs](examples/quantizing_moe/README.md)
* [Quantizing Vision-Language Models](examples/multimodal_vision/README.md)
* [Quantizing Audio-Language Models](examples/multimodal_audio/README.md)
### User Guides
Deep dives into advanced usage of `llmcompressor`:
* [Quantizing large models with the help of `accelerate`](examples/big_models_with_accelerate/README.md) (a brief sketch follows this list)
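As context for that guide, the usual pattern is to let `accelerate` place the model across devices before compressing it. The sketch below is an illustration under stated assumptions (placeholder model name, memory layout left to `device_map="auto"`); the linked README covers the supported workflow in detail.

```python
from transformers import AutoModelForCausalLM

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# device_map="auto" lets accelerate shard the (placeholder) checkpoint across
# the available GPUs, offloading to CPU if necessary.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    device_map="auto",
    torch_dtype="auto",
)

# Pass the sharded model object directly to oneshot.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"]),
    output_dir="Meta-Llama-3-70B-Instruct-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```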
## Quick Tour
Let's quantize `TinyLlama` with 8-bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.
Note that the model can be swapped for a local or remote HF-compatible checkpoint and the `recipe` may be changed to target different quantization algorithms or formats.
### Apply Quantization
Quantization is applied by selecting an algorithm and calling the `oneshot` API.
```python
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot
# Select quantization algorithm. In this case, we:
# * apply SmoothQuant to make the activations easier to quantize
# * quantize the weights to int8 with GPTQ (static per channel)
# * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
# * See the examples for demos showing how to pass a custom calibration set.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
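As noted above, the recipe can be swapped to target other algorithms or formats. For example, a dynamic FP8 run might look like the sketch below; the `FP8_DYNAMIC` scheme name follows the fp8 example linked earlier, and no calibration dataset is passed because activation scales are computed at runtime.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# FP8 weights (static, per channel) and FP8 activations (dynamic, per token).
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic",
)
```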
### Inference with vLLM
The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:
Install:
```bash
pip install vllm
```
Run:
```python
from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
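`generate` returns a list of request outputs, one per prompt; the generated text can be read from each entry:

```python
# Each request output carries the prompt and its completions.
for request_output in output:
    print(request_output.outputs[0].text)
```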
## Questions / Contribution
- If you have any questions or requests, open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).
## Citation
If you find LLM Compressor useful in your research or projects, please consider citing it:
```bibtex
@software{llmcompressor2024,
title={{LLM Compressor}},
author={Red Hat AI and vLLM Project},
year={2024},
month={8},
url={https://github.com/vllm-project/llm-compressor},
}
```