alignment-handbook

* Name: alignment-handbook
* Version: 0.2.0
* Home page: https://github.com/huggingface/alignment-handbook
* Summary: The Alignment Handbook
* Author: The Hugging Face team (past and future)
* Requires Python: >=3.10.9
* License: Apache
* Keywords: nlp, deep learning, rlhf, llm
* Upload time: 2024-01-04 01:38:30
<p align="center">
  <img src="https://raw.githubusercontent.com/huggingface/alignment-handbook/main/assets/handbook.png">
</p>

<p align="center">
    🤗 <a href="https://huggingface.co/collections/alignment-handbook/handbook-v01-models-and-datasets-654e424d22e6880da5ebc015" target="_blank">Models & Datasets</a> | 📃 <a href="https://arxiv.org/abs/2310.16944" target="_blank">Technical Report</a>
</p>

# The Alignment Handbook

Robust recipes to align language models with human and AI preferences.

## What is this?

Just one year ago, chatbots were out of fashion and most people hadn't heard about techniques like Reinforcement Learning from Human Feedback (RLHF) to align language models with human preferences. Then, OpenAI broke the internet with ChatGPT, and Meta followed suit by releasing the Llama series of language models, which enabled the ML community to build their very own capable chatbots. This has led to a rich ecosystem of datasets and models that have mostly focused on teaching language models to follow instructions through supervised fine-tuning (SFT).

However, we know from the [InstructGPT](https://huggingface.co/papers/2203.02155) and [Llama2](https://huggingface.co/papers/2307.09288) papers that significant gains in helpfulness and safety can be had by augmenting SFT with human (or AI) preferences. At the same time, aligning language models to a set of preferences is a fairly novel idea and there are few public resources available on how to train these models, what data to collect, and what metrics to measure for best downstream performance.

The Alignment Handbook aims to fill that gap by providing the community with a series of robust training recipes that span the whole pipeline.

## News 🗞️

* **November 10, 2023:** We release all the training code to replicate Zephyr-7b-β 🪁! We also release [No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), a brand new dataset of 10,000 instructions and demonstrations written entirely by skilled human annotators.

## Links 🔗

* [Zephyr 7B models, datasets, and demos](https://huggingface.co/collections/HuggingFaceH4/zephyr-7b-6538c6d6d5ddd1cbb1744a66)

## How to navigate this project 🧭

This project is simple by design and mostly consists of:

* [`scripts`](./scripts/) to train and evaluate chat models. Each script supports distributed training of the full model weights with DeepSpeed ZeRO-3, or LoRA/QLoRA for parameter-efficient fine-tuning.
* [`recipes`](./recipes/) to reproduce models like Zephyr 7B. Each recipe takes the form of a YAML file which contains all the parameters associated with a single training run; an example launch command is sketched just after this list.
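
For example, a typical full-weight SFT run pairs one of the training scripts with a recipe config and is launched via 🤗 Accelerate. The exact paths below (`scripts/run_sft.py`, `recipes/accelerate_configs/deepspeed_zero3.yaml`, `recipes/zephyr-7b-beta/sft/config_full.yaml`) are assumptions about the repository layout; treat this as a sketch and check the `scripts` and `recipes` READMEs for the authoritative commands.

```shell
# Sketch only: launch full-model SFT with DeepSpeed ZeRO-3.
# The script and config paths are assumed; see scripts/README.md for the exact invocation.
ACCELERATE_LOG_LEVEL=info accelerate launch \
  --config_file recipes/accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_full.yaml
```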

We are also working on a series of guides to explain how methods like direct preference optimization (DPO) work, along with lessons learned from gathering human preferences in practice. To get started, we recommend the following:

1. Follow the [installation instructions](#installation-instructions) to set up your environment.
2. Replicate Zephyr-7b-β by following the [recipe instructions](./recipes/zephyr-7b-beta/README.md).

If you would like to train chat models on your own datasets, we recommend following the dataset formatting instructions [here](./scripts/README.md#fine-tuning-on-your-datasets).


## Contents

The initial release of the handbook will focus on the following techniques:

* **Supervised fine-tuning:** teach language models to follow instructions, along with tips on how to collect and curate your own training dataset.
* **Reward modeling:** teach language models to distinguish model responses according to human or AI preferences.
* **Rejection sampling:** a simple, but powerful technique to boost the performance of your SFT model.
* **Direct preference optimization (DPO):** a powerful and promising alternative to PPO (its objective is sketched below).
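
As a taste of the DPO guide, the method skips the separate reward model and PPO loop entirely: it optimizes the policy directly on preference pairs $(x, y_w, y_l)$ against a frozen reference model (typically the SFT checkpoint). The objective, as introduced in the DPO paper, is:

```latex
% DPO objective: \pi_\theta is the policy being trained, \pi_{ref} the frozen SFT reference,
% y_w / y_l the chosen / rejected responses, and \beta a temperature-like hyperparameter.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Intuitively, the loss widens the margin by which the policy prefers the chosen response over the rejected one, relative to the reference model.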

## Installation instructions

To run the code in this project, first create a Python virtual environment, e.g. with Conda:

```shell
conda create -n handbook python=3.10 && conda activate handbook
```

Next, install PyTorch `v2.1.0` - the precise version is important for reproducibility! Since this is hardware-dependent, we
direct you to the [PyTorch Installation Page](https://pytorch.org/get-started/locally/).
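
As one concrete example, assuming a Linux machine with CUDA 11.8 (adjust the index URL to whatever the installation page recommends for your hardware):

```shell
# Example only: PyTorch 2.1.0 built against CUDA 11.8.
# Pick the wheel matching your CUDA/ROCm/CPU setup from the PyTorch installation page.
python -m pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118
```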

You can then install the remaining package dependencies as follows:

```shell
git clone https://github.com/huggingface/alignment-handbook.git
cd ./alignment-handbook/
python -m pip install .
```
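
If you plan to modify the training code itself, an editable install (the `pip install -e .` mentioned in the project structure below) keeps your local changes importable without reinstalling:

```shell
# Editable install: changes under src/ are picked up without reinstalling the package.
python -m pip install -e .
```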

You will also need Flash Attention 2 installed, which can be done by running:

> **Note**
> If your machine has less than 96GB of RAM and many CPU cores, reduce `MAX_JOBS`, e.g. `MAX_JOBS=4 pip install flash-attn --no-build-isolation`.

```shell
python -m pip install flash-attn --no-build-isolation
```
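
The build can take a while. A quick sanity check that it succeeded (assuming the package exposes `__version__`, which recent flash-attn releases do):

```shell
# The import fails if the CUDA extension did not build correctly.
python -c "import flash_attn; print(flash_attn.__version__)"
```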

Next, log into your Hugging Face account as follows:

```shell
huggingface-cli login
```
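
On a headless machine or inside a Slurm job you may prefer a non-interactive login. Recent versions of `huggingface_hub` let you pass a token directly; here `$HF_TOKEN` is assumed to hold an access token created at https://huggingface.co/settings/tokens:

```shell
# Non-interactive variant: supply the token instead of typing it at the prompt.
huggingface-cli login --token "$HF_TOKEN"
```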

Finally, install Git LFS so that you can push models to the Hugging Face Hub:

```shell
sudo apt-get install git-lfs
```
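
After installing the package, enable the Git LFS hooks once for your user account:

```shell
# One-time setup: registers the Git LFS filters in your global Git config.
git lfs install
```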

You can now check out the `scripts` and `recipes` directories for instructions on how to train some models 🪁!

## Project structure

```
├── LICENSE
├── Makefile                    <- Makefile with commands like `make style`
├── README.md                   <- The top-level README for developers using this project
├── chapters                    <- Educational content to render on hf.co/learn
├── recipes                     <- Recipe configs, accelerate configs, slurm scripts
├── scripts                     <- Scripts to train and evaluate chat models
├── setup.cfg                   <- Installation config (mostly used for configuring code quality & tests)
├── setup.py                    <- Makes project pip installable (pip install -e .) so `alignment` can be imported
├── src                         <- Source code for use in this project
└── tests                       <- Unit tests
```

## Citation

If you find the content of this repo useful in your work, please cite it as follows:

```bibtex
@misc{alignment_handbook2023,
  author = {Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Alexander M. Rush and Thomas Wolf},
  title = {The Alignment Handbook},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/alignment-handbook}}
}
```

            
