<p align="center"><img src="https://avatars.githubusercontent.com/u/175231607?s=200&v=4" alt=""></p>
<h1 align="center">bitsandbytes</h1>
<p align="center">
    <a href="https://github.com/bitsandbytes-foundation/bitsandbytes/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/bitsandbytes-foundation/bitsandbytes.svg?color=blue"></a>
    <a href="https://pepy.tech/project/bitsandbytes"><img alt="Downloads" src="https://static.pepy.tech/badge/bitsandbytes/month"></a>
    <a href="https://github.com/bitsandbytes-foundation/bitsandbytes/actions/workflows/tests.yml"><img alt="Nightly Unit Tests" src="https://img.shields.io/github/actions/workflow/status/bitsandbytes-foundation/bitsandbytes/tests.yml?logo=github&label=Nightly%20Tests"></a>
    <a href="https://github.com/bitsandbytes-foundation/bitsandbytes/releases"><img alt="GitHub Release" src="https://img.shields.io/github/v/release/bitsandbytes-foundation/bitsandbytes"></a>
    <a href="https://pypi.org/project/bitsandbytes/"><img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/bitsandbytes"></a>
</p>
`bitsandbytes` enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption for inference and training:
* 8-bit optimizers use block-wise quantization to maintain 32-bit performance at a small fraction of the memory cost.
* LLM.int8(), or 8-bit quantization, enables large language model inference with only half the required memory and without any performance degradation. This method uses vector-wise quantization to quantize most features to 8 bits and treats outliers separately with 16-bit matrix multiplication.
* QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.
The library includes quantization primitives for 8-bit and 4-bit operations through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit`, and 8-bit optimizers through the `bitsandbytes.optim` module.
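A minimal sketch of these building blocks is shown below; the layer sizes, learning rate, and CUDA device are illustrative assumptions, not values from this document.
```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# LLM.int8() inference: swap nn.Linear for Linear8bitLt. The fp16 weights are
# quantized to int8 when the module is moved to the accelerator.
fp16_model = nn.Sequential(nn.Linear(64, 64), nn.Linear(64, 64))
int8_model = nn.Sequential(
    bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False),
    bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False),
)
int8_model.load_state_dict(fp16_model.state_dict())
int8_model = int8_model.to("cuda")  # quantization happens on transfer

# 8-bit optimizer: a drop-in replacement for torch.optim.Adam whose optimizer
# state is stored with block-wise 8-bit quantization.
model = nn.Linear(64, 64).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```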
## System Requirements
bitsandbytes has the following minimum requirements for all platforms:
* Python 3.9+
* [PyTorch](https://pytorch.org/get-started/locally/) 2.3+
  * _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._
#### Accelerator support:
<small>Note: this table reflects the status of the current development branch. For the latest stable release, see the
[document in the 0.48.0 tag](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/0.48.0/README.md#accelerator-support).
</small>
##### Legend:
🚧 = In Development,
〰️ = Partially Supported,
✅ = Supported,
❌ = Not Supported
<table>
  <thead>
    <tr>
      <th>Platform</th>
      <th>Accelerator</th>
      <th>Hardware Requirements</th>
      <th>LLM.int8()</th>
      <th>QLoRA 4-bit</th>
      <th>8-bit Optimizers</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="6">🐧 <strong>Linux, glibc >= 2.24</strong></td>
    </tr>
    <tr>
      <td align="right">x86-64</td>
      <td>◻️ CPU</td>
      <td>AVX2</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td></td>
      <td>🟩 NVIDIA GPU <br><code>cuda</code></td>
      <td>SM60+ minimum<br>SM75+ recommended</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td></td>
      <td>🟥 AMD GPU <br><code>cuda</code></td>
      <td>
        CDNA: gfx90a, gfx942<br>
        RDNA: gfx1100
      </td>
      <td>✅</td>
      <td>〰️</td>
      <td>✅</td>
    </tr>
    <tr>
      <td></td>
      <td>🟦 Intel GPU <br><code>xpu</code></td>
      <td>
        Data Center GPU Max Series<br>
        Arc A-Series (Alchemist)<br>
        Arc B-Series (Battlemage)
      </td>
      <td>✅</td>
      <td>✅</td>
      <td>〰️</td>
    </tr>
    <tr>
      <td></td>
      <td>🟪 Intel Gaudi <br><code>hpu</code></td>
      <td>Gaudi2, Gaudi3</td>
      <td>✅</td>
      <td>〰️</td>
      <td>❌</td>
    </tr>
    <tr>
      <td align="right">aarch64</td>
      <td>◻️ CPU</td>
      <td></td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td></td>
      <td>🟩 NVIDIA GPU <br><code>cuda</code></td>
      <td>SM75+</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td colspan="6">🪟 <strong>Windows 11 / Windows Server 2019+</strong></td>
    </tr>
    <tr>
      <td align="right">x86-64</td>
      <td>◻️ CPU</td>
      <td>AVX2</td>
      <td>✅</td>
      <td>✅</td>
      <td>❌</td>
    </tr>
    <tr>
      <td></td>
      <td>🟩 NVIDIA GPU <br><code>cuda</code></td>
      <td>SM60+ minimum<br>SM75+ recommended</td>
      <td>✅</td>
      <td>✅</td>
      <td>✅</td>
    </tr>
    <tr>
      <td></td>
      <td>🟦 Intel GPU <br><code>xpu</code></td>
      <td>
        Arc A-Series (Alchemist) <br>
        Arc B-Series (Battlemage)
      </td>
      <td>✅</td>
      <td>✅</td>
      <td>〰️</td>
    </tr>
    <tr>
      <td colspan="6">🍎 <strong>macOS 14+</strong></td>
    </tr>
    <tr>
      <td align="right">arm64</td>
      <td>◻️ CPU</td>
      <td>Apple M1+</td>
      <td>🚧</td>
      <td>🚧</td>
      <td>❌</td>
    </tr>
    <tr>
      <td></td>
      <td>⬜ Metal <br><code>mps</code></td>
      <td>Apple M1+</td>
      <td>🚧</td>
      <td>🚧</td>
      <td>❌</td>
    </tr>
  </tbody>
</table>
## :book: Documentation
* [Official Documentation](https://huggingface.co/docs/bitsandbytes/main)
* 🤗 [Transformers](https://huggingface.co/docs/transformers/quantization/bitsandbytes)
* 🤗 [Diffusers](https://huggingface.co/docs/diffusers/quantization/bitsandbytes)
* 🤗 [PEFT](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model)
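The integrations above expose bitsandbytes through a single `BitsAndBytesConfig`. A minimal sketch of QLoRA-style 4-bit loading with 🤗 Transformers follows; the checkpoint name is a placeholder, not a value from this document.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load a causal LM with NF4 4-bit weights and bfloat16 compute.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",            # placeholder checkpoint id
    quantization_config=quant_config,
    device_map="auto",
)
```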
## :heart: Sponsors
The continued maintenance and development of `bitsandbytes` is made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.
<kbd><a href="https://hf.co" target="_blank"><img width="100" src="https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo.svg" alt="Hugging Face"></a></kbd>
 
<kbd><a href="https://intel.com" target="_blank"><img width="100" src="https://avatars.githubusercontent.com/u/17888862?s=100&v=4" alt="Intel"></a></kbd>
## License
`bitsandbytes` is MIT licensed.
We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch) which we use for CPU quantization.
## How to cite us
If you found this library useful, please consider citing our work:
### QLoRA
```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```
### LLM.int8()
```bibtex
@article{dettmers2022llmint8,
  title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
  author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2208.07339},
  year={2022}
}
```
### 8-bit Optimizers
```bibtex
@article{dettmers2022optimizers,
  title={8-bit Optimizers via Block-wise Quantization},
  author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
  journal={9th International Conference on Learning Representations, ICLR},
  year={2022}
}
```