llmcompressor-nightly


Name: llmcompressor-nightly
Version: 0.2.0.20240926
Home page: https://github.com/neuralmagic/llm-compressor
Summary: A library for compressing large language models using the latest techniques and research in the field, covering both training-aware and post-training approaches. The library is designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation.
Upload time: 2024-09-26 03:25:01
Maintainer: None
Docs URL: None
Author: Neuralmagic, Inc.
Requires Python: >=3.8
License: Apache
Keywords: llmcompressor, llms, large language models, transformers, pytorch, huggingface, compressors, compression, quantization, pruning, sparsity, optimization, model optimization, model compression
Requirements: No requirements were recorded.
# <img width="40" alt="tool icon" src="https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96" />  LLM Compressor
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:

* Comprehensive set of quantization algorithms for weight-only and activation quantization
* Seamless integration with Hugging Face models and repositories
* `safetensors`-based file format compatible with `vllm`
* Large model support via `accelerate`

**✨ Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! ✨**

<p align="center">
   <img alt="LLM Compressor Flow" src="https://github.com/user-attachments/assets/91c1f391-8c9a-4b20-80c2-20ffb9ad78b4" width="80%" />
</p>

### Supported Formats
* Activation Quantization: W8A8 (int8 and fp8)
* Mixed Precision: W4A16, W8A16
* 2:4 Semi-structured and Unstructured Sparsity

### Supported Algorithms
* Simple PTQ
* GPTQ
* SmoothQuant
* SparseGPT
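
Each of the formats and algorithms above is expressed as a modifier inside a `recipe`. As a rough sketch only (the `SparseGPTModifier` import path and parameters are assumed from the 2:4 sparsity examples and may differ between releases; the examples directory is authoritative), a weight-only `W4A16` GPTQ recipe and a 2:4 sparsity recipe could look like:

```python
from llmcompressor.modifiers.quantization import GPTQModifier
# NOTE: import path below is assumed from the repository's sparsity examples.
from llmcompressor.modifiers.obcq import SparseGPTModifier

# Weight-only int4 quantization (W4A16) with GPTQ, leaving the output head in full precision.
w4a16_recipe = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])

# 2:4 semi-structured sparsity with SparseGPT: two of every four weights are pruned.
sparse_recipe = SparseGPTModifier(sparsity=0.5, mask_structure="2:4")
```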


## Installation

```bash
pip install llmcompressor
```

## Get Started

### End-to-End Examples

Applying quantization with `llmcompressor`:
* [Activation quantization to `int8`](examples/quantization_w8a8_int8)
* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8)
* [Weight-only quantization to `int4`](examples/quantization_w4a16)
* [Quantizing MoE LLMs](examples/quantizing_moe)

### User Guides
Deep dives into advanced usage of `llmcompressor`:
* [Quantizing large models with the help of `accelerate`](examples/big_models_with_accelerate)
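
For models that do not fit on a single GPU, the big-model flow relies on `accelerate`'s `device_map` to shard weights across devices before calibration. The following is a minimal sketch under a few assumptions (the `SparseAutoModelForCausalLM` wrapper from `llmcompressor.transformers`, a pre-loaded model being accepted by `oneshot`, and an illustrative checkpoint name); see the linked guide for the exact, up-to-date code:

```python
from llmcompressor.modifiers.quantization import GPTQModifier
# NOTE: SparseAutoModelForCausalLM is assumed here based on the big-model example.
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot

# device_map="auto" lets accelerate place shards on every visible GPU,
# spilling to CPU memory if the model still does not fit.
model = SparseAutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",  # illustrative large checkpoint
    device_map="auto",
    torch_dtype="auto",
)

oneshot(
    model=model,  # a loaded model passed in place of a model id (assumed supported)
    dataset="open_platypus",
    recipe=GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"]),
    output_dir="Meta-Llama-3-70B-Instruct-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```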


## Quick Tour
Let's quantize `TinyLlama` with 8-bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint, and the `recipe` may be changed to target different quantization algorithms or formats.

### Apply Quantization
Quantization is applied by selecting an algorithm and calling the `oneshot` API.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
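
The snippet above uses the named `open_platypus` dataset. `oneshot` can also take a Hugging Face `datasets.Dataset` directly; the sketch below is an assumption based on the custom-calibration examples (it reuses `recipe` and `oneshot` from the snippet above, and the calibration dataset is purely illustrative):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Sample a small calibration split of chat data (any text dataset works).
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:512]")

def preprocess(example):
    # Render the conversation with the model's chat template, then tokenize.
    text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
    return tokenizer(text, max_length=2048, truncation=True, padding=False)

ds = ds.map(preprocess, remove_columns=ds.column_names)

oneshot(
    model=MODEL_ID,
    dataset=ds,  # a prepared Dataset in place of a dataset name (assumed supported)
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```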

### Inference with vLLM

The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:

Install:

```bash
pip install vllm
```

Run:

```python
from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
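
`generate` returns one result per prompt, with the generated text on the nested completion objects. A short follow-up showing how to control sampling with vLLM's `SamplingParams` and read the output:

```python
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = model.generate(["My name is"], sampling_params)
# Each element is a RequestOutput; its .outputs holds the sampled completions.
print(outputs[0].outputs[0].text)
```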

## Questions / Contribution

- If you have any questions or requests, open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/neuralmagic/llm-compressor",
    "name": "llmcompressor-nightly",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "llmcompressor, llms, large language models, transformers, pytorch, huggingface, compressors, compression, quantization, pruning, sparsity, optimization, model optimization, model compression, ",
    "author": "Neuralmagic, Inc.",
    "author_email": "support@neuralmagic.com",
    "download_url": "https://files.pythonhosted.org/packages/3d/61/a12680e87df02ae172d52d25d3e4158a79025dffd4a498be7ff389e8a3b8/llmcompressor-nightly-0.2.0.20240926.tar.gz",
    "platform": null,
    "description": "# <img width=\"40\" alt=\"tool icon\" src=\"https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96\" />  LLM Compressor\n`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:\n\n* Comprehensive set of quantization algorithms for weight-only and activation quantization\n* Seamless integration with Hugging Face models and repositories\n* `safetensors`-based file format compatible with `vllm`\n* Large model support via `accelerate`\n\n**\u2728 Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! \u2728**\n\n<p align=\"center\">\n   <img alt=\"LLM Compressor Flow\" src=\"https://github.com/user-attachments/assets/91c1f391-8c9a-4b20-80c2-20ffb9ad78b4\" width=\"80%\" />\n</p>\n\n### Supported Formats\n* Activation Quantization: W8A8 (int8 and fp8)\n* Mixed Precision: W4A16, W8A16\n* 2:4 Semi-structured and Unstructured Sparsity\n\n### Supported Algorithms\n* Simple PTQ\n* GPTQ\n* SmoothQuant\n* SparseGPT\n\n\n## Installation\n\n```bash\npip install llmcompressor\n```\n\n## Get Started\n\n### End-to-End Examples\n\nApplying quantization with `llmcompressor`:\n* [Activation quantization to `int8`](examples/quantization_w8a8_int8)\n* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8)\n* [Weight only quantization to `int4`](examples/quantization_w4a16)\n* [Quantizing MoE LLMs](examples/quantizing_moe)\n\n### User Guides\nDeep dives into advanced usage of `llmcompressor`:\n* [Quantizing with large models with the help of `accelerate`](examples/big_models_with_accelerate)\n\n\n## Quick Tour\nLet's quantize `TinyLlama` with 8 bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.\n\nNote that the model can be swapped for a local or remote HF-compatible checkpoint and the `recipe` may be changed to target different quantization algorithms or formats.\n\n### Apply Quantization\nQuantization is applied by selecting an algorithm and calling the `oneshot` API.\n\n```python\nfrom llmcompressor.modifiers.quantization import GPTQModifier\nfrom llmcompressor.modifiers.smoothquant import SmoothQuantModifier\nfrom llmcompressor.transformers import oneshot\n\n# Select quantization algorithm. 
In this case, we:\n#   * apply SmoothQuant to make the activations easier to quantize\n#   * quantize the weights to int8 with GPTQ (static per channel)\n#   * quantize the activations to int8 (dynamic per token)\nrecipe = [\n    SmoothQuantModifier(smoothing_strength=0.8),\n    GPTQModifier(scheme=\"W8A8\", targets=\"Linear\", ignore=[\"lm_head\"]),\n]\n\n# Apply quantization using the built in open_platypus dataset.\n#   * See examples for demos showing how to pass a custom calibration set\noneshot(\n    model=\"TinyLlama/TinyLlama-1.1B-Chat-v1.0\",\n    dataset=\"open_platypus\",\n    recipe=recipe,\n    output_dir=\"TinyLlama-1.1B-Chat-v1.0-INT8\",\n    max_seq_length=2048,\n    num_calibration_samples=512,\n)\n```\n\n### Inference with vLLM\n\nThe checkpoints created by `llmcompressor` can be loaded and run in `vllm`:\n\nInstall:\n\n```bash\npip install vllm\n```\n\nRun:\n\n```python\nfrom vllm import LLM\nmodel = LLM(\"TinyLlama-1.1B-Chat-v1.0-INT8\")\noutput = model.generate(\"My name is\")\n```\n\n## Questions / Contribution\n\n- If you have any questions or requests open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.\n- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).\n",
    "bugtrack_url": null,
    "license": "Apache",
    "summary": "A library for compressing large language models utilizing the latest techniques and research in the field for both training aware and post training techniques. The library is designed to be flexible and easy to use on top of PyTorch and HuggingFace Transformers, allowing for quick experimentation.",
    "version": "0.2.0.20240926",
    "project_urls": {
        "Homepage": "https://github.com/neuralmagic/llm-compressor"
    },
    "split_keywords": [
        "llmcompressor",
        " llms",
        " large language models",
        " transformers",
        " pytorch",
        " huggingface",
        " compressors",
        " compression",
        " quantization",
        " pruning",
        " sparsity",
        " optimization",
        " model optimization",
        " model compression",
        " "
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "961afa76e428577250c2d1f31ddb63cfd284d160425ee7a0be61a8b0a3019668",
                "md5": "5c795ccf1b58d709eeb565a002cf1166",
                "sha256": "d3bb1acdbfcca50ad24f3ba39ad4e3fc1fb0fb3ea2aabe45a6ebf722aa1c91a3"
            },
            "downloads": -1,
            "filename": "llmcompressor_nightly-0.2.0.20240926-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "5c795ccf1b58d709eeb565a002cf1166",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 211339,
            "upload_time": "2024-09-26T03:24:58",
            "upload_time_iso_8601": "2024-09-26T03:24:58.337911Z",
            "url": "https://files.pythonhosted.org/packages/96/1a/fa76e428577250c2d1f31ddb63cfd284d160425ee7a0be61a8b0a3019668/llmcompressor_nightly-0.2.0.20240926-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3d61a12680e87df02ae172d52d25d3e4158a79025dffd4a498be7ff389e8a3b8",
                "md5": "fde7fbef7d8cdec1bccd6f2cde102896",
                "sha256": "8db9a4f263d4ef6b60aaca3bf295cfd7efb8d6fed6404551fd7a320db9fbf98e"
            },
            "downloads": -1,
            "filename": "llmcompressor-nightly-0.2.0.20240926.tar.gz",
            "has_sig": false,
            "md5_digest": "fde7fbef7d8cdec1bccd6f2cde102896",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 161778,
            "upload_time": "2024-09-26T03:25:01",
            "upload_time_iso_8601": "2024-09-26T03:25:01.377706Z",
            "url": "https://files.pythonhosted.org/packages/3d/61/a12680e87df02ae172d52d25d3e4158a79025dffd4a498be7ff389e8a3b8/llmcompressor-nightly-0.2.0.20240926.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-09-26 03:25:01",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "neuralmagic",
    "github_project": "llm-compressor",
    "github_not_found": true,
    "lcname": "llmcompressor-nightly"
}
        