llmcompressor-nightly


Name: llmcompressor-nightly
Version: 0.3.0.20241112
Home page: https://github.com/neuralmagic/llm-compressor
Summary: A library for compressing large language models utilizing the latest techniques and research in the field for both training-aware and post-training techniques. The library is designed to be flexible and easy to use on top of PyTorch and HuggingFace Transformers, allowing for quick experimentation.
Upload time: 2024-11-12 16:42:51
Maintainer: None
Docs URL: None
Author: Neuralmagic, Inc.
Requires Python: >=3.8
License: Apache
Keywords: llmcompressor, llms, large language models, transformers, pytorch, huggingface, compressors, compression, quantization, pruning, sparsity, optimization, model optimization, model compression
# <img width="40" alt="tool icon" src="https://github.com/user-attachments/assets/f9b86465-aefa-4625-a09b-54e158efcf96" />  LLM Compressor
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:

* Comprehensive set of quantization algorithms for weight-only and activation quantization
* Seamless integration with Hugging Face models and repositories
* `safetensors`-based file format compatible with `vllm`
* Large model support via `accelerate`

**✨ Read the announcement blog [here](https://neuralmagic.com/blog/llm-compressor-is-here-faster-inference-with-vllm/)! ✨**

<p align="center">
   <img alt="LLM Compressor Flow" src="https://github.com/user-attachments/assets/adf07594-6487-48ae-af62-d9555046d51b" width="80%" />
</p>

### Supported Formats
* Activation Quantization: W8A8 (int8 and fp8)
* Mixed Precision: W4A16, W8A16
* 2:4 Semi-structured and Unstructured Sparsity
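
As a rough illustration of how these formats map to recipes, here is a minimal sketch; the scheme strings (`W4A16`, `FP8_DYNAMIC`) and modifier arguments are drawn from the linked examples and may differ between versions:

```python
# Sketch only: formats are selected via quantization schemes on modifiers.
# Scheme names and arguments are assumptions based on the linked examples.
from llmcompressor.modifiers.quantization import GPTQModifier, QuantizationModifier

# W4A16: 4-bit weights, 16-bit activations (weight-only quantization with GPTQ)
w4a16 = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])

# W8A8 (fp8): fp8 weights with dynamic per-token fp8 activations
w8a8_fp8 = QuantizationModifier(scheme="FP8_DYNAMIC", targets="Linear", ignore=["lm_head"])
```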

### Supported Algorithms
* Simple PTQ
* GPTQ
* SmoothQuant
* SparseGPT
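
Sparsity recipes follow the same modifier pattern. Below is a minimal sketch of a 2:4 semi-structured SparseGPT recipe; the import path and argument names are assumptions and may vary by version:

```python
# Sketch only: check the repository examples for the exact modifier options.
from llmcompressor.modifiers.obcq import SparseGPTModifier

recipe = SparseGPTModifier(
    sparsity=0.5,           # prune 50% of the targeted weights
    mask_structure="2:4",   # keep at most 2 non-zero weights in every group of 4
    ignore=["lm_head"],
)
```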


## Installation

```bash
pip install llmcompressor
```
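
Nightly builds, such as the release described on this page, are published under a separate package name:

```bash
pip install llmcompressor-nightly
```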

## Get Started

### End-to-End Examples

Applying quantization with `llmcompressor`:
* [Activation quantization to `int8`](examples/quantization_w8a8_int8)
* [Activation quantization to `fp8`](examples/quantization_w8a8_fp8)
* [Weight only quantization to `int4`](examples/quantization_w4a16)
* [Quantizing MoE LLMs](examples/quantizing_moe)

### User Guides
Deep dives into advanced usage of `llmcompressor`:
* [Quantizing large models with the help of `accelerate`](examples/big_models_with_accelerate) (see the sketch below)

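For models that do not fit on a single GPU, the weights can be dispatched across devices before calibration. Below is a minimal sketch, assuming `oneshot` accepts an already-loaded `transformers` model; the model name is only a placeholder, and the linked guide shows the recommended loading pattern:

```python
# Sketch only: see examples/big_models_with_accelerate for the supported pattern.
from transformers import AutoModelForCausalLM
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# device_map="auto" lets accelerate shard the weights across GPUs (and CPU if needed)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",  # placeholder model id
    device_map="auto",
    torch_dtype="auto",
)

oneshot(
    model=model,
    dataset="open_platypus",
    recipe=GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
    output_dir="Meta-Llama-3-70B-Instruct-INT8",
    num_calibration_samples=512,
)
```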

## Quick Tour
Let's quantize `TinyLlama` with 8-bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint and the `recipe` may be changed to target different quantization algorithms or formats.

### Apply Quantization
Quantization is applied by selecting an algorithm and calling the `oneshot` API.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
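
When the run completes, `output_dir` should contain a standard Hugging Face model folder (config, `safetensors` weights, tokenizer files) in the compressed format described above. A quick sanity check on the saved files:

```python
import os

# List whatever oneshot wrote; exact filenames depend on the model and version.
for name in sorted(os.listdir("TinyLlama-1.1B-Chat-v1.0-INT8")):
    print(name)
```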

### Inference with vLLM

The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:

Install:

```bash
pip install vllm
```

Run:

```python
from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
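
Generation can be tuned with vLLM's standard `SamplingParams` (plain vLLM usage, not specific to `llmcompressor`):

```python
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
sampling = SamplingParams(temperature=0.8, max_tokens=64)
outputs = model.generate(["My name is"], sampling)
print(outputs[0].outputs[0].text)
```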

## Questions / Contribution

- If you have any questions or requests, open an [issue](https://github.com/vllm-project/llm-compressor/issues) and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](CONTRIBUTING.md).

            
