llmcompressor-nightly

Name: llmcompressor-nightly
Version: 0.4.1.20250314
Home page: https://github.com/neuralmagic/llm-compressor
Summary: A library for compressing large language models utilizing the latest techniques and research in the field, for both training-aware and post-training techniques. The library is designed to be flexible and easy to use on top of PyTorch and HuggingFace Transformers, allowing for quick experimentation.
Upload time: 2025-03-14 03:23:13
Maintainer: None
Docs URL: None
Author: Neuralmagic, Inc.
Requires Python: >=3.8
License: Apache
Keywords: llmcompressor, llms, large language models, transformers, pytorch, huggingface, compressors, compression, quantization, pruning, sparsity, optimization, model optimization, model compression
Requirements: No requirements were recorded.
            # <img width="40" alt="tool icon" src="https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96" />  LLM Compressor
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:

* Comprehensive set of quantization algorithms for weight-only and activation quantization
* Seamless integration with Hugging Face models and repositories
* `safetensors`-based file format compatible with `vllm`
* Large model support via `accelerate`

**✨ Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! ✨**

<p align="center">
   <img alt="LLM Compressor Flow" src="https://github.com/user-attachments/assets/adf07594-6487-48ae-af62-d9555046d51b" width="80%" />
</p>

### Supported Formats
* Activation Quantization: W8A8 (int8 and fp8)
* Mixed Precision: W4A16, W8A16
* 2:4 Semi-structured and Unstructured Sparsity

### Supported Algorithms
* Simple PTQ
* GPTQ
* SmoothQuant
* SparseGPT

### When to Use Which Optimization

#### PTQ
PTQ is performed to reduce the precision of quantizable weights (e.g., in linear layers) to a lower bit width. The supported formats are listed below; a recipe sketch for each follows the list.

##### [W4A16](./examples/quantization_w4a16/README.md)
- Uses GPTQ to compress weights to 4 bits; requires a calibration dataset.
- Provides useful speedups in low-QPS regimes, with the most weight compression of the listed formats.
- Recommended for any GPU type.
##### [W8A8-INT8](./examples/quantization_w8a8_int8/README.md)
- Uses channel-wise quantization to compress weights to 8 bits with GPTQ, and dynamic per-token quantization to compress activations to 8 bits; requires a calibration dataset for weight quantization. Activation quantization is carried out during inference on vLLM.
- Useful for speedups in high-QPS regimes or offline serving on vLLM.
- Recommended for NVIDIA GPUs with compute capability <8.9 (Ampere, Turing, Volta, Pascal, or older).
##### [W8A8-FP8](./examples/quantization_w8a8_fp8/README.md)
- Uses channel-wise quantization to compress weights to 8 bits, and dynamic per-token quantization to compress activations to 8 bits; does not require a calibration dataset. Activation quantization is carried out during inference on vLLM.
- Useful for speedups in high-QPS regimes or offline serving on vLLM.
- Recommended for NVIDIA GPUs with compute capability >=8.9 (Hopper and Ada Lovelace).
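To make the mapping concrete, here is a minimal sketch of the recipe modifier that corresponds to each PTQ format. The scheme strings and import paths are based on the linked example READMEs; treat them as assumptions and defer to those examples if they differ.

```python
# Minimal sketch: one recipe modifier per PTQ format (scheme names and import
# paths assumed to match the linked example READMEs; verify before use).
from llmcompressor.modifiers.quantization import GPTQModifier, QuantizationModifier

# W4A16: GPTQ weight-only quantization to 4 bits (needs a calibration dataset).
w4a16 = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

# W8A8-INT8: GPTQ int8 weights plus dynamic per-token int8 activations.
w8a8_int8 = GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"])

# W8A8-FP8: fp8 weights and activations; no calibration dataset required.
w8a8_fp8 = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
```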

#### Sparsification
Sparsification reduces model complexity by pruning selected weight values to zero while retaining essential weights in a subset of parameters. Supported formats include:

##### [2:4-Sparsity with FP8 Weight, FP8 Input Activation](./examples/sparse_2of4_quantization_fp8/README.md)
- Combines (1) semi-structured 2:4 sparsity via SparseGPT, in which two out of every four contiguous weights in a tensor are set to zero, with (2) channel-wise quantization to compress weights to 8 bits and dynamic per-token quantization to compress activations to 8 bits (see the recipe sketch after this list).
- Offers better inference performance than W8A8-FP8, with almost no drop in evaluation scores ([blog](https://neuralmagic.com/blog/24-sparse-llama-fp8-sota-performance-for-nvidia-hopper-gpus/)). Note: small models may experience accuracy drops when the remaining non-zero weights are insufficient to preserve the original weight distribution.
- Recommended for NVIDIA GPUs with compute capability >=8.9 (Hopper and Ada Lovelace).
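A rough sketch of such a recipe combines a SparseGPT pruning modifier with an FP8 quantization modifier. The import path and argument values below are assumptions; the linked example README is the authoritative reference.

```python
# Hedged sketch of a 2:4 sparsity + FP8 recipe (import paths and argument
# values are assumptions; see examples/sparse_2of4_quantization_fp8 for the
# authoritative recipe).
from llmcompressor.modifiers.obcq import SparseGPTModifier
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = [
    # Prune 50% of weights with a 2:4 mask: two of every four contiguous
    # weights in each group are set to zero.
    SparseGPTModifier(sparsity=0.5, mask_structure="2:4", targets=["Linear"]),
    # Quantize the remaining weights and the activations to fp8.
    QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]),
]
```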


## Installation

```bash
pip install llmcompressor
```
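Since this page documents the nightly build, the nightly package can presumably be installed under its own name instead of the stable release:

```bash
pip install llmcompressor-nightly
```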

## Get Started

### End-to-End Examples

Applying quantization with `llmcompressor`:
* [Activation quantization to `int8`](examples/quantization_w8a8_int8/README.md)
* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8/README.md)
* [Weight only quantization to `int4`](examples/quantization_w4a16/README.md)
* [Quantizing MoE LLMs](examples/quantizing_moe/README.md)
* [Quantizing Vision-Language Models](examples/multimodal_vision/README.md)
* [Quantizing Audio-Language Models](examples/multimodal_audio/README.md)

### User Guides
Deep dives into advanced usage of `llmcompressor`:
* [Quantizing large models with the help of `accelerate`](examples/big_models_with_accelerate/README.md)
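As a hedged sketch of the pattern that guide covers, large checkpoints are typically loaded with `accelerate`-style device mapping before compression; the model ID below is only a hypothetical placeholder.

```python
# Hypothetical example: let accelerate shard a large checkpoint across the
# available GPUs (and CPU, if needed) before running compression on it.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",  # hypothetical large model ID
    device_map="auto",   # accelerate places layers across available devices
    torch_dtype="auto",
)
```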


## Quick Tour
Let's quantize `TinyLlama` with 8-bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint and the `recipe` may be changed to target different quantization algorithms or formats.
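For instance, a locally loaded Hugging Face model can be passed in place of the model ID string. This is a minimal sketch, assuming `oneshot` accepts a pre-loaded `PreTrainedModel` as its `model` argument, as the bundled examples suggest:

```python
# Hedged sketch: load the checkpoint yourself, then pass the model object to
# oneshot instead of a model ID string (assumes oneshot accepts a
# PreTrainedModel, as the bundled examples suggest).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# ...then call oneshot(model=model, ...) as shown in the next section.
```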

### Apply Quantization
Quantization is applied by selecting an algorithm and calling the `oneshot` API.

```python
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

### Inference with vLLM

The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:

Install:

```bash
pip install vllm
```

Run:

```python
from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
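Decoding can also be controlled explicitly with `SamplingParams`; a minimal sketch:

```python
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
# Set the sampling behavior explicitly instead of relying on defaults.
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = model.generate(["My name is"], params)
print(outputs[0].outputs[0].text)
```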

## Questions / Contribution

- If you have any questions or requests, open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/neuralmagic/llm-compressor",
    "name": "llmcompressor-nightly",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "llmcompressor, llms, large language models, transformers, pytorch, huggingface, compressors, compression, quantization, pruning, sparsity, optimization, model optimization, model compression, ",
    "author": "Neuralmagic, Inc.",
    "author_email": "support@neuralmagic.com",
    "download_url": "https://files.pythonhosted.org/packages/25/bc/3bfc1b23fb9de552607fe5057d23323c7f99c41f29a98c84efc02e507c40/llmcompressor-nightly-0.4.1.20250314.tar.gz",
    "platform": null,
    "description": "# <img width=\"40\" alt=\"tool icon\" src=\"https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96\" />  LLM Compressor\n`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:\n\n* Comprehensive set of quantization algorithms for weight-only and activation quantization\n* Seamless integration with Hugging Face models and repositories\n* `safetensors`-based file format compatible with `vllm`\n* Large model support via `accelerate`\n\n**\u2728 Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! \u2728**\n\n<p align=\"center\">\n   <img alt=\"LLM Compressor Flow\" src=\"https://github.com/user-attachments/assets/adf07594-6487-48ae-af62-d9555046d51b\" width=\"80%\" />\n</p>\n\n### Supported Formats\n* Activation Quantization: W8A8 (int8 and fp8)\n* Mixed Precision: W4A16, W8A16\n* 2:4 Semi-structured and Unstructured Sparsity\n\n### Supported Algorithms\n* Simple PTQ\n* GPTQ\n* SmoothQuant\n* SparseGPT\n\n### When to Use Which Optimization\n\n#### PTQ\nPTQ is performed to reduce the precision of quantizable weights (e.g., linear layers) to a lower bit-width. Supported formats are:\n\n##### [W4A16](./examples/quantization_w4a16/README.md)\n- Uses GPTQ to compress weights to 4 bits. Requires calibration dataset.\n- Useful speed ups in low QPS regimes with more weight compression. \n- Recommended for any GPUs types. \n##### [W8A8-INT8](./examples/quantization_w8a8_int8/README.md)\n- Uses channel-wise quantization to compress weights to 8 bits using GPTQ, and uses dynamic per-token quantization to compress activations to 8 bits. Requires calibration dataset for weight quantization. Activation quantization is carried out during inference on vLLM.\n- Useful for speed ups in high QPS regimes or offline serving on vLLM. \n- Recommended for NVIDIA GPUs with compute capability <8.9 (Ampere, Turing, Volta, Pascal, or older). \n##### [W8A8-FP8](./examples/quantization_w8a8_fp8/README.md)\n- Uses channel-wise quantization to compress weights to 8 bits, and uses dynamic per-token quantization to compress activations to 8 bits. Does not require calibration dataset. Activation quantization is carried out during inference on vLLM.\n- Useful for speed ups in high QPS regimes or offline serving on vLLM. \n- Recommended for NVIDIA GPUs with compute capability >8.9 (Hopper and Ada Lovelace). \n\n#### Sparsification\nSparsification reduces model complexity by pruning selected weight values to zero while retaining essential weights in a subset of parameters. Supported formats include:\n\n##### [2:4-Sparsity with FP8 Weight, FP8 Input Activation](./examples/sparse_2of4_quantization_fp8/README.md)\n- Uses (1) semi-structured sparsity (SparseGPT), where, for every four contiguous weights in a tensor, two are set to zero. (2) Uses channel-wise quantization to compress weights to 8 bits and dynamic per-token quantization to compress activations to 8 bits.\n- Useful for better inference than W8A8-fp8, with almost no drop in its evaluation score [blog](https://neuralmagic.com/blog/24-sparse-llama-fp8-sota-performance-for-nvidia-hopper-gpus/). 
Note: Small models may experience accuracy drops when the remaining non-zero weights are insufficient to recapitulate the original distribution.\n- Recommended for compute capability >8.9 (Hopper and Ada Lovelace).\n\n\n## Installation\n\n```bash\npip install llmcompressor\n```\n\n## Get Started\n\n### End-to-End Examples\n\nApplying quantization with `llmcompressor`:\n* [Activation quantization to `int8`](examples/quantization_w8a8_int8/README.md)\n* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8/README.md)\n* [Weight only quantization to `int4`](examples/quantization_w4a16/README.md)\n* [Quantizing MoE LLMs](examples/quantizing_moe/README.md)\n* [Quantizing Vision-Language Models](examples/multimodal_vision/README.md)\n* [Quantizing Audio-Language Models](examples/multimodal_audio/README.md)\n\n### User Guides\nDeep dives into advanced usage of `llmcompressor`:\n* [Quantizing with large models with the help of `accelerate`](examples/big_models_with_accelerate/README.md)\n\n\n## Quick Tour\nLet's quantize `TinyLlama` with 8 bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.\n\nNote that the model can be swapped for a local or remote HF-compatible checkpoint and the `recipe` may be changed to target different quantization algorithms or formats.\n\n### Apply Quantization\nQuantization is applied by selecting an algorithm and calling the `oneshot` API.\n\n```python\nfrom llmcompressor.modifiers.smoothquant import SmoothQuantModifier\nfrom llmcompressor.modifiers.quantization import GPTQModifier\nfrom llmcompressor import oneshot\n\n# Select quantization algorithm. In this case, we:\n#   * apply SmoothQuant to make the activations easier to quantize\n#   * quantize the weights to int8 with GPTQ (static per channel)\n#   * quantize the activations to int8 (dynamic per token)\nrecipe = [\n    SmoothQuantModifier(smoothing_strength=0.8),\n    GPTQModifier(scheme=\"W8A8\", targets=\"Linear\", ignore=[\"lm_head\"]),\n]\n\n# Apply quantization using the built in open_platypus dataset.\n#   * See examples for demos showing how to pass a custom calibration set\noneshot(\n    model=\"TinyLlama/TinyLlama-1.1B-Chat-v1.0\",\n    dataset=\"open_platypus\",\n    recipe=recipe,\n    output_dir=\"TinyLlama-1.1B-Chat-v1.0-INT8\",\n    max_seq_length=2048,\n    num_calibration_samples=512,\n)\n```\n\n### Inference with vLLM\n\nThe checkpoints created by `llmcompressor` can be loaded and run in `vllm`:\n\nInstall:\n\n```bash\npip install vllm\n```\n\nRun:\n\n```python\nfrom vllm import LLM\nmodel = LLM(\"TinyLlama-1.1B-Chat-v1.0-INT8\")\noutput = model.generate(\"My name is\")\n```\n\n## Questions / Contribution\n\n- If you have any questions or requests open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.\n- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).\n",
    "bugtrack_url": null,
    "license": "Apache",
    "summary": "A library for compressing large language models utilizing the latest techniques and research in the field for both training aware and post training techniques. The library is designed to be flexible and easy to use on top of PyTorch and HuggingFace Transformers, allowing for quick experimentation.",
    "version": "0.4.1.20250314",
    "project_urls": {
        "Homepage": "https://github.com/neuralmagic/llm-compressor"
    },
    "split_keywords": [
        "llmcompressor",
        " llms",
        " large language models",
        " transformers",
        " pytorch",
        " huggingface",
        " compressors",
        " compression",
        " quantization",
        " pruning",
        " sparsity",
        " optimization",
        " model optimization",
        " model compression",
        " "
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "a7c42193ff1cf0303ff68aac540d2987a67ecd707733e4ff15af082fe353540f",
                "md5": "476871ea11c628133226d165de81a565",
                "sha256": "8e957568c2545a67c1a1de0c010f00abea03a20c34b431a4c846bfd23633125b"
            },
            "downloads": -1,
            "filename": "llmcompressor_nightly-0.4.1.20250314-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "476871ea11c628133226d165de81a565",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 258711,
            "upload_time": "2025-03-14T03:23:10",
            "upload_time_iso_8601": "2025-03-14T03:23:10.627383Z",
            "url": "https://files.pythonhosted.org/packages/a7/c4/2193ff1cf0303ff68aac540d2987a67ecd707733e4ff15af082fe353540f/llmcompressor_nightly-0.4.1.20250314-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "25bc3bfc1b23fb9de552607fe5057d23323c7f99c41f29a98c84efc02e507c40",
                "md5": "edfa1315a8e58271b89378907cfd9797",
                "sha256": "0cd12a220e0146518f653245a6b96958c42ecbd30606c55e55dd3384a8ff15ec"
            },
            "downloads": -1,
            "filename": "llmcompressor-nightly-0.4.1.20250314.tar.gz",
            "has_sig": false,
            "md5_digest": "edfa1315a8e58271b89378907cfd9797",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 189972,
            "upload_time": "2025-03-14T03:23:13",
            "upload_time_iso_8601": "2025-03-14T03:23:13.902602Z",
            "url": "https://files.pythonhosted.org/packages/25/bc/3bfc1b23fb9de552607fe5057d23323c7f99c41f29a98c84efc02e507c40/llmcompressor-nightly-0.4.1.20250314.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-03-14 03:23:13",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "neuralmagic",
    "github_project": "llm-compressor",
    "github_not_found": true,
    "lcname": "llmcompressor-nightly"
}
        