autoawq

Name: autoawq
Version: 0.2.5
Home page: https://github.com/casper-hansen/AutoAWQ
Summary: AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
Upload time: 2024-05-02 18:32:41
Maintainer: None
Docs URL: None
Author: Casper Hansen
Requires Python: >=3.8.0
License: MIT
Keywords: awq, autoawq, quantization, transformers
Requirements: No requirements were recorded.
# AutoAWQ

<p align="center">
| <a href="https://github.com/casper-hansen/AutoAWQ/issues/32"><b>Roadmap</b></a> | <a href="https://github.com/casper-hansen/AutoAWQ/tree/main/examples"><b>Examples</b></a> | <a href="https://github.com/casper-hansen/AutoAWQ/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22"><b>Issues: Help Wanted</b></a> |

</p>
<p align="center">
    <a href="https://huggingface.co/models?search=awq">
        <img alt="Huggingface - Models" src="https://img.shields.io/badge/🤗_1000+_models_available-8A2BE2">
    </a>
    <a href="https://github.com/casper-hansen/AutoAWQ/releases">
        <img alt="GitHub - Releases" src="https://img.shields.io/github/release/casper-hansen/AutoAWQ.svg">
    </a>
    <a href="https://pypi.org/project/autoawq/">
        <img alt="PyPI - Downloads" src="https://static.pepy.tech/badge/autoawq/month">
    </a>
</p>

AutoAWQ is an easy-to-use package for 4-bit quantized models. AutoAWQ speeds up models by 3x and reduces memory requirements by 3x compared to FP16. AutoAWQ implements the Activation-aware Weight Quantization (AWQ) algorithm for quantizing LLMs. AutoAWQ builds on and extends the [original work](https://github.com/mit-han-lab/llm-awq) from MIT.

*Latest News* 🔥
- [2023/12] Mixtral, LLaVa, QWen, Baichuan model support.
- [2023/11] AutoAWQ inference has been integrated into 🤗 transformers. Now includes CUDA 12.1 wheels.
- [2023/10] Mistral (fused modules), Bigcode, and Turing support; memory bug fix (saves 2 GB of VRAM).
- [2023/09] 1.6x-2.5x speed boost on fused models (now including MPT and Falcon).
- [2023/09] Multi-GPU support, bug fixes, and better benchmark scripts available.
- [2023/08] PyPI package released and AutoModel class available.

## Install

### Prerequisites

- NVIDIA:
  - Your NVIDIA GPU(s) must have Compute Capability 7.5 or higher; Turing and later architectures are supported.
  - Your CUDA version must be 11.8 or later.
- AMD:
  - Your ROCm version must be 5.6 or later.

### Install from PyPI

To install the newest AutoAWQ from PyPI, you need CUDA 12.1 installed.

```
pip install autoawq
```

### Build from source

For CUDA 11.8, ROCm 5.6, and ROCm 5.7, you can install wheels from the [release page](https://github.com/casper-hansen/AutoAWQ/releases/latest):

```
pip install autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl
```

Or from the main branch directly:

```
pip install git+https://github.com/casper-hansen/AutoAWQ.git
```

Or by cloning the repository and installing from source:

```
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip install -e .
```

All three methods will install the latest and correct kernels for your system from [AutoAWQ_Kernels](https://github.com/casper-hansen/AutoAWQ_kernels/releases). 

If your system is not supported (i.e. not on the release page), you can build the kernels yourself by following the instructions in [AutoAWQ_Kernels](https://github.com/casper-hansen/AutoAWQ_kernels/releases) and then install AutoAWQ from source.
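
Whichever route you take, you can quickly sanity-check the install by importing the package and reading back the installed version. This is an illustrative sketch using only the standard library, not part of the official docs; kernel issues typically surface the first time you load or run a quantized model:

```python
import importlib.metadata

# Importing the package exercises the Python side of the install.
import awq  # noqa: F401

print("autoawq version:", importlib.metadata.version("autoawq"))
```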

## Usage

Under `examples`, you can find scripts that show how to quantize, run inference, and benchmark AutoAWQ models.

### INT4 GEMM vs INT4 GEMV vs FP16

There are two versions of AWQ: GEMM and GEMV. Both names relate to how the matrix multiplication runs under the hood. We suggest the following (a short sketch of selecting the kernel version follows this list):

- GEMV (quantized): 20% faster than GEMM, only batch size 1 (not good for large context).
- GEMM (quantized): Much faster than FP16 at batch sizes below 8 (good with large contexts).
- FP16 (non-quantized): Recommended for highest throughput; see [vLLM](https://github.com/vllm-project/vllm).
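
The kernel version is chosen at quantization time through the `version` field of `quant_config` (the same config used in the Quantization example below). A minimal sketch that targets batch-size-1 decoding with GEMV; the model and output paths are placeholders:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = 'lmsys/vicuna-7b-v1.5'      # placeholder model
quant_path = 'vicuna-7b-v1.5-awq-gemv'   # placeholder output path

# "version": "GEMV" is ~20% faster at batch size 1; use "GEMM" for larger
# batches or long contexts, as suggested above.
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMV" }

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```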

#### Compute-bound vs Memory-bound

At small batch sizes with a small 7B model, we are memory-bound. This means we are limited by the bandwidth the GPU has for moving the weights through memory, and this is essentially what caps how many tokens per second we can generate. Being memory-bound is what makes quantized models faster: the weights are 3x smaller and can therefore be moved through memory much faster. This is different from being compute-bound, where the main time spent during generation is doing matrix multiplication.

When compute-bound, which happens at higher batch sizes, you will not gain a speed-up from a W4A16 quantized model, because the overhead of dequantization slows down overall generation. This is because AWQ-quantized models only store the weights in INT4 but perform FP16 operations during inference, so we are essentially converting INT4 -> FP16 on the fly.
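
To make the memory-bound argument concrete, here is a back-of-the-envelope sketch; the ~1 TB/s bandwidth figure (roughly RTX 4090 class) and the 7B parameter count are illustrative assumptions, not measurements:

```python
# Rough upper bound on decode tokens/s when memory-bound: generating one token
# requires streaming all weights from GPU memory once per forward pass.
bandwidth_gb_s = 1000              # assumed ~1 TB/s memory bandwidth
params_billion = 7                 # 7B-parameter model

fp16_weights_gb = params_billion * 2.0   # 2 bytes per weight
int4_weights_gb = params_billion * 0.5   # 0.5 bytes per weight (ignoring scales/zeros)

print(f"FP16 decode ceiling: ~{bandwidth_gb_s / fp16_weights_gb:.0f} tokens/s")
print(f"INT4 decode ceiling: ~{bandwidth_gb_s / int4_weights_gb:.0f} tokens/s")
# The smaller weights are why W4A16 decoding is faster at small batch sizes;
# at large batch sizes the workload becomes compute-bound and the INT4 -> FP16
# dequantization overhead erases the advantage.
```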

### Fused modules

Fused modules are a large part of the speedup you get from AutoAWQ. The idea is to combine multiple layers into a single operation, which is more efficient. Fused modules are a set of custom modules that work separately from Huggingface models. They are compatible with `model.generate()` and other Huggingface methods, but activating them comes with some inflexibility in how you can use your model (a short configuration sketch follows the list below):

- Fused modules are activated when you use `fuse_layers=True`.
- A custom cache is implemented. It preallocates based on batch size and sequence length.
    - You cannot change the sequence length after you have created your model.
    - Reference: `AutoAWQForCausalLM.from_quantized(max_seq_len=seq_len, batch_size=batch_size)`
- The main accelerator in the fused modules comes from FasterTransformer, which is only compatible with Linux.
- The `past_key_values` from `model.generate()` are only dummy values, so they cannot be used after generation.
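
A minimal configuration sketch that ties these points together; the `max_seq_len` and `batch_size` values are arbitrary examples, and `quant_path` reuses the model from the Inference example below:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "TheBloke/zephyr-7B-beta-AWQ"

# The fused cache is preallocated, so the maximum sequence length and batch
# size must be fixed here and cannot be changed after the model is created.
model = AutoAWQForCausalLM.from_quantized(
    quant_path,
    fuse_layers=True,   # activates the fused modules
    max_seq_len=2048,   # cache preallocated for this sequence length
    batch_size=1,       # and for this batch size
)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
```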

### Examples

More examples can be found in the [examples directory](examples).

<details>

<summary>Quantization</summary>

Expect this to take 10-15 minutes on smaller 7B models, and around 1 hour for 70B models.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = 'lmsys/vicuna-7b-v1.5'
quant_path = 'vicuna-7b-v1.5-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
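
Once saved, the quantized checkpoint can also be loaded directly with 🤗 transformers, which has integrated AWQ inference (see the news above). A minimal sketch, assuming a recent `transformers` release with AWQ support and `accelerate` installed for `device_map`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_path = 'vicuna-7b-v1.5-awq'

# transformers reads the AWQ quantization config stored in the checkpoint
# and uses the AutoAWQ kernels under the hood.
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quant_path)
```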

</details>

<details>

<summary>Inference</summary>

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

quant_path = "TheBloke/zephyr-7B-beta-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>"""

prompt = "You're standing on the surface of the Earth. "\
        "You walk one mile south, one mile west and one mile north. "\
        "You end up exactly where you started. Where are you?"

tokens = tokenizer(
    prompt_template.format(prompt=prompt), 
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens, 
    streamer=streamer,
    max_seq_len=512
)
```

</details>

## Benchmarks

These benchmarks showcase the speed and memory usage of processing context (prefill) and generating tokens (decoding). The results include speed at various batch sizes and different versions of the AWQ kernels. We have aimed to test models fairly using the same benchmarking tool, which you can use to reproduce the results. Note that speed may vary not only between GPUs but also between CPUs; what matters most is a GPU with high memory bandwidth and a CPU with a high single-core clock speed.

- Tested with AutoAWQ version 0.1.6
- GPU: RTX 4090 (AMD Ryzen 9 7950X)
- Command: `python examples/benchmark.py --model_path <hf_model> --batch_size 1`
- 🟢 for GEMV, 🔵 for GEMM, 🔴 for configurations to avoid

| Model Name | Size | Version | Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM)     |
| ---------- | ---- | ------- | ---------- | -------------- | ------------- | ---------------- | --------------- | ----------------- |
| Vicuna     | 7B   | 🟢GEMV   | 1          | 64             | 64            | 639.65           | 198.848         | 4.50 GB (19.05%)  |
| Vicuna     | 7B   | 🟢GEMV   | 1          | 2048           | 2048          | 1123.63          | 133.191         | 6.15 GB (26.02%)  |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| Mistral    | 7B   | 🔵GEMM   | 1          | 64             | 64            | 1093.35          | 156.317         | 4.35 GB (18.41%)  |
| Mistral    | 7B   | 🔵GEMM   | 1          | 2048           | 2048          | 3897.02          | 114.355         | 5.55 GB (23.48%)  |
| Mistral    | 7B   | 🔵GEMM   | 8          | 64             | 64            | 4199.18          | 1185.25         | 4.35 GB (18.41%)  |
| Mistral    | 7B   | 🔵GEMM   | 8          | 2048           | 2048          | 3661.46          | 829.754         | 16.82 GB (71.12%) |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| Mistral    | 7B   | 🟢GEMV   | 1          | 64             | 64            | 531.99           | 188.29          | 4.28 GB (18.08%)  |
| Mistral    | 7B   | 🟢GEMV   | 1          | 2048           | 2048          | 903.83           | 130.66          | 5.55 GB (23.48%)  |
| Mistral    | 7B   | 🔴GEMV   | 8          | 64             | 64            | 897.87           | 486.46          | 4.33 GB (18.31%)  |
| Mistral    | 7B   | 🔴GEMV   | 8          | 2048           | 2048          | 884.22           | 411.893         | 16.82 GB (71.12%) |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| TinyLlama  | 1B   | 🟢GEMV   | 1          | 64             | 64            | 1088.63          | 548.993         | 0.86 GB (3.62%)   |
| TinyLlama  | 1B   | 🟢GEMV   | 1          | 2048           | 2048          | 5178.98          | 431.468         | 2.10 GB (8.89%)   |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| Llama 2    | 13B  | 🔵GEMM   | 1          | 64             | 64            | 820.34           | 96.74           | 8.47 GB (35.83%)  |
| Llama 2    | 13B  | 🔵GEMM   | 1          | 2048           | 2048          | 2279.41          | 73.8213         | 10.28 GB (43.46%) |
| Llama 2    | 13B  | 🔵GEMM   | 3          | 64             | 64            | 1593.88          | 286.249         | 8.57 GB (36.24%)  |
| Llama 2    | 13B  | 🔵GEMM   | 3          | 2048           | 2048          | 2226.7           | 189.573         | 16.90 GB (71.47%) |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| MPT        | 7B   | 🔵GEMM   | 1          | 64             | 64            | 1079.06          | 161.344         | 3.67 GB (15.51%)  |
| MPT        | 7B   | 🔵GEMM   | 1          | 2048           | 2048          | 4069.78          | 114.982         | 5.87 GB (24.82%)  |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| Falcon     | 7B   | 🔵GEMM   | 1          | 64             | 64            | 1139.93          | 133.585         | 4.47 GB (18.92%)  |
| Falcon     | 7B   | 🔵GEMM   | 1          | 2048           | 2048          | 2850.97          | 115.73          | 6.83 GB (28.88%)  |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| CodeLlama  | 34B  | 🔵GEMM   | 1          | 64             | 64            | 681.74           | 41.01           | 19.05 GB (80.57%) |
| CodeLlama  | 34B  | 🔵GEMM   | 1          | 2048           | 2048          | 1072.36          | 35.8316         | 20.26 GB (85.68%) |
| ...        | ...  | ...     | ...        | ...            | ...           | ...              | ...             | ...               |
| DeepSeek   | 33B  | 🔵GEMM   | 1          | 64             | 64            | 1160.18          | 40.29           | 18.92 GB (80.00%) |
| DeepSeek   | 33B  | 🔵GEMM   | 1          | 2048           | 2048          | 1012.1           | 34.0093         | 19.87 GB (84.02%) |

### Multi-GPU

GPU: 2x NVIDIA GeForce RTX 4090

| Model | Size    | Version       |   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)     |
|--------:|------:|--------------:|-------------:|-----------------:|----------------:|-------------------:|------------------:|:------------------|
| Mixtral | 46.7B | 🔵GEMM        |            1 |               32 |              32 |            149.742 |           93.406  | 25.28 GB (53.44%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |               64 |              64 |           1489.64  |           93.184  | 25.32 GB (53.53%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |              128 |             128 |           2082.95  |           92.9444 | 25.33 GB (53.55%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |              256 |             256 |           2428.59  |           91.5187 | 25.35 GB (53.59%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |              512 |             512 |           2633.11  |           89.1457 | 25.39 GB (53.67%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |             1024 |            1024 |           2598.95  |           84.6753 | 25.75 GB (54.44%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |             2048 |            2048 |           2446.15  |           77.0516 | 27.98 GB (59.15%) |
| Mixtral | 46.7B | 🔵GEMM        |            1 |             4096 |            4096 |           1985.78  |           77.5689 | 34.65 GB (73.26%) |

## Reference

If you find AWQ useful or relevant to your research, you can cite their [paper](https://arxiv.org/abs/2306.00978):

```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```

            
