llmcompressor

Name: llmcompressor
Version: 0.3.0
Home page: https://github.com/neuralmagic/llm-compressor
Summary: A library for compressing large language models using the latest techniques and research in the field, covering both training-aware and post-training methods. The library is designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation.
Upload time: 2024-11-13 05:17:48
Author: Neuralmagic, Inc.
Requires Python: >=3.8
License: Apache
Keywords: llmcompressor, llms, large language models, transformers, pytorch, huggingface, compressors, compression, quantization, pruning, sparsity, optimization, model optimization, model compression
# <img width="40" alt="tool icon" src="https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96" />  LLM Compressor
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:

* Comprehensive set of quantization algorithms for weight-only and activation quantization
* Seamless integration with Hugging Face models and repositories
* `safetensors`-based file format compatible with `vllm`
* Large model support via `accelerate`

**✨ Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! ✨**

<p align="center">
   <img alt="LLM Compressor Flow" src="https://github.com/user-attachments/assets/adf07594-6487-48ae-af62-d9555046d51b" width="80%" />
</p>

### Supported Formats
* Activation Quantization: W8A8 (int8 and fp8)
* Mixed Precision: W4A16, W8A16
* 2:4 Semi-structured and Unstructured Sparsity

### Supported Algorithms
* Simple PTQ
* GPTQ
* SmoothQuant
* SparseGPT


## Installation

```bash
pip install llmcompressor
```

## Get Started

### End-to-End Examples

Applying quantization with `llmcompressor`:
* [Activation quantization to `int8`](examples/quantization_w8a8_int8)
* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8)
* [Weight only quantization to `int4`](examples/quantization_w4a16)
* [Quantizing MoE LLMs](examples/quantizing_moe)

### User Guides
Deep dives into advanced usage of `llmcompressor`:
* [Quantizing large models with the help of `accelerate`](examples/big_models_with_accelerate) (general pattern sketched below)
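The general pattern from that guide is to let `accelerate` place a large checkpoint across the available devices before calibration. A minimal sketch, assuming a standard Hugging Face checkpoint and that the loaded model object can be handed to `oneshot` in place of a model ID (the checkpoint name below is illustrative):

```python
from transformers import AutoModelForCausalLM

# device_map="auto" lets accelerate shard the weights across GPUs (and CPU if needed)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",  # illustrative large checkpoint
    device_map="auto",
    torch_dtype="auto",
)

# The preloaded model can then be passed to oneshot, e.g.
# oneshot(model=model, dataset=..., recipe=..., output_dir=...)
```

See the linked example for the exact loading and calibration settings.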


## Quick Tour
Let's quantize `TinyLlama` with 8-bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint, and the `recipe` may be changed to target different quantization algorithms or formats (a weight-only variant is sketched after the example below).

### Apply Quantization
Quantization is applied by selecting an algorithm and calling the `oneshot` API.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
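As noted above, the `recipe` can be swapped to target other formats. For example, a weight-only `int4` (W4A16) variant would drop the SmoothQuant step and change the GPTQ scheme. This is a sketch modeled on the W4A16 example; check that example for the exact arguments:

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Weight-only int4 quantization; activations stay at 16-bit precision
recipe = [GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])]

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",  # illustrative output path
    max_seq_length=2048,
    num_calibration_samples=512,
)
```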

### Inference with vLLM

The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:

Install:

```bash
pip install vllm
```

Run:

```python
from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
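`generate` returns one request output per prompt, and the decoded text is available on each completion. A short sketch with explicit sampling parameters (the values here are illustrative):

```python
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
params = SamplingParams(temperature=0.8, max_tokens=64)  # illustrative settings
outputs = model.generate(["My name is"], params)
print(outputs[0].outputs[0].text)  # decoded completion for the first prompt
```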

## Questions / Contribution

- If you have any questions or requests, open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).

            
