verl

- Name: verl
- Version: 0.1rc0
- Summary: veRL: Volcano Engine Reinforcement Learning for LLM
- Home page: https://github.com/volcengine/verl
- Author: Bytedance - Seed - MLSys
- License: Apache 2.0
- Upload time: 2024-11-01 04:48:31
- Requirements: transformers, hydra-core, tensordict, numpy, pytest, deepspeed, pybind11, codetiming, yapf, wandb

<div align=center>
  <img src="docs/_static/logo.png" width = "20%" height = "20%" />
</div>

<h1 style="text-align: center;">veRL: Volcano Engine Reinforcement Learning for LLM</h1>

veRL (HybridFlow) is a flexible, efficient, and industrial-grade RL(HF) training framework designed for large language models (LLMs). veRL is the open-source implementation of the [HybridFlow](https://arxiv.org/abs/2409.19256v2) paper.

veRL is flexible and easy to use with:

- **Easy to support diverse RL(HF) algorithms**: The hybrid programming model combines the strengths of the single-controller and multi-controller paradigms, enabling flexible representation and efficient execution of complex post-training dataflows. This allows users to build RL dataflows in a few lines of code.

- **Seamless integration of existing LLM infra with modular API design**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks.

- **Flexible device mapping**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.

- **Ready integration with popular Hugging Face models**
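The hybrid programming model above is easiest to picture as a single controller script that expresses the dataflow while workers own the heavy compute. The sketch below is a conceptual illustration only; every name in it (`ActorRollout`, `RewardModel`, `generate`, `update`, `ppo_step`) is a placeholder, not veRL's actual API:

```python
# Conceptual single-controller sketch of an RLHF-style dataflow.
# All class and method names are illustrative placeholders, not veRL's real API.

class ActorRollout:
    def generate(self, prompts):
        # A real system would dispatch this to an inference engine (e.g. vLLM).
        return [p + " <response>" for p in prompts]

    def update(self, batch):
        # Placeholder for a training step on the collected batch.
        return {"actor_loss": 0.0}

class RewardModel:
    def score(self, responses):
        # Dummy scoring; a real reward model or rule-based function goes here.
        return [len(r) % 7 / 7.0 for r in responses]

def ppo_step(actor, reward_model, prompts):
    # The controller expresses the whole dataflow in a few lines:
    # rollout -> reward -> update.
    responses = actor.generate(prompts)
    rewards = reward_model.score(responses)
    metrics = actor.update(list(zip(prompts, responses, rewards)))
    return metrics

metrics = ppo_step(ActorRollout(), RewardModel(), ["What is 2+2?"])
print(metrics)
```

The point of the single-controller style is that the loop above stays this short even when each worker call fans out across many GPUs.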


veRL is fast with:

- **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput.

- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.


<p align="center">
| <a href="https://verl.readthedocs.io/en/latest/index.html"><b>Documentation</b></a> | <a href="https://arxiv.org/abs/2409.19256v2"><b>Paper</b></a> | 
<!-- <a href=""><b>Slides</b></a> | -->
</p>



## Installation

The best way to install the latest version of veRL is to clone and install it from source. You can then modify the code to customize your own post-training jobs.

```bash
# install verl together with some lightweight dependencies in setup.py
git clone https://github.com/volcengine/verl.git
cd verl
pip3 install -e .
```

You can also install veRL from PyPI using `pip3`:

```bash
# directly install from pypi
pip3 install verl
```

### Dependencies

veRL requires Python >= 3.9 and CUDA >= 12.1.
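A quick way to confirm your interpreter meets this floor (a stdlib-only convenience snippet, not part of veRL; the CUDA check assumes torch exposes `torch.version.cuda`, which it does for CUDA builds):

```python
import sys

def check_python(minimum=(3, 9)):
    """Return True if the running interpreter meets veRL's documented floor."""
    return sys.version_info[:2] >= minimum

# veRL requires Python >= 3.9; CUDA >= 12.1 can be verified via torch once installed.
if not check_python():
    raise SystemExit(f"Python 3.9+ required, found {sys.version.split()[0]}")

try:
    import torch  # optional at this point; vLLM will pull a compatible build
    print("torch CUDA version:", torch.version.cuda)
except ImportError:
    print("torch not installed yet; CUDA check skipped")
```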

veRL supports various backends. We currently release FSDP and Megatron-LM for actor training, and vLLM for rollout generation.

To install the dependencies, we recommend using conda:

```bash
conda create -n verl python=3.9
conda activate verl
```

The following dependencies are required for all backends.

```bash
# install torch [or skip this step and let vLLM install the correct version for you]
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121

# install vllm
pip3 install vllm==0.5.4
pip3 install ray==2.10 # other versions may have bugs

# flash attention 2
pip3 install flash-attn --no-build-isolation
```
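After running the installs above, a stdlib-only snippet like the following (a convenience check, not part of veRL) can confirm that each package resolved without actually importing the heavy libraries:

```python
# Check that the core dependencies are importable; prints MISSING for
# anything pip did not install. find_spec avoids the cost of a full import.
import importlib.util

def installed(pkg: str) -> bool:
    """Return True if a top-level package can be found on this interpreter."""
    return importlib.util.find_spec(pkg) is not None

for pkg in ("torch", "vllm", "ray", "flash_attn"):
    print(f"{pkg}: {'ok' if installed(pkg) else 'MISSING'}")
```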

**FSDP**

We recommend the FSDP backend for investigating, researching, and prototyping different models, datasets, and RL algorithms.

The pros, cons, and extension guide for the FSDP backend can be found in [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html).

**Megatron-LM**

For users who pursue better scalability, we recommend the Megatron-LM backend. Please install the dependencies above first.

Currently, we support Megatron-LM@core_v0.4.0, with fixes for some internal issues of Megatron-LM. The additional installation steps are below.

The pros, cons, and extension guide for the Megatron-LM backend can be found in [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/workers/megatron_workers.html).

```bash
# FOR Megatron-LM Backend
# apex
pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \
         --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" \
         git+https://github.com/NVIDIA/apex

# transformer engine
pip3 install git+https://github.com/NVIDIA/TransformerEngine.git@v1.7

# megatron core v0.4.0
cd ..
git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM
cp ../verl/patches/megatron_v4.patch .
git apply megatron_v4.patch
pip3 install -e .
export PYTHONPATH=$PYTHONPATH:$(pwd)
```

## Getting Started
Visit our [documentation](https://verl.readthedocs.io/en/latest/index.html) to learn more.

**To run a PPO example, follow these steps:**
- Preparation
  - [Installation](https://verl.readthedocs.io/en/latest/preparation/install.html)
  - [Prepare Data (Parquet) for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)
  - [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html)
- PPO Example (Run an example)
  - [PPO Example Architecture](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html)
  - [Config Explanation](https://verl.readthedocs.io/en/latest/examples/config.html)
  - [Run GSM8K Example](https://verl.readthedocs.io/en/latest/examples/gsm8k_example.html)
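As a toy illustration of the reward-function step above — a rule-based sketch only; the actual interface veRL expects is described in the linked reward-function doc, and this function's name and signature are assumptions:

```python
import re

def gsm8k_reward(solution_str: str, ground_truth: str) -> float:
    """Rule-based reward for GSM8K-style answers: 1.0 if the last number
    in the model's solution matches the reference answer, else 0.0.
    Illustrative only; not veRL's actual reward-function interface."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", solution_str)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == ground_truth else 0.0

print(gsm8k_reward("2 + 2 = 4, so the answer is 4", "4"))  # -> 1.0
print(gsm8k_reward("I think the answer is 5", "4"))        # -> 0.0
```

Rule-based rewards like this are common for math datasets, where correctness can be checked by string or numeric match rather than by a learned reward model.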

**For code explanation and advanced usage (extension):**
- PPO Trainer and Workers
  - [PPO Ray Trainer](https://verl.readthedocs.io/en/latest/workers/ray_trainer.html)
  - [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html)
  - [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/workers/megatron_workers.html)
- Advanced Usage and Extension
  - [Ray API Design Tutorial](https://verl.readthedocs.io/en/latest/advance/placement.html)
  - [Extend to other RL(HF) algorithms](https://verl.readthedocs.io/en/latest/advance/dpo_extension.html)
  - [Add models to FSDP backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html)
  - [Add models to Megatron-LM backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html)


## Contribution
### Code formatting
We use yapf (Google style) to enforce strict code formatting when reviewing MRs. To reformat your code locally, make sure you have installed `yapf`:
```bash
pip3 install yapf
```
Then, from the top level of the verl repo, run:
```bash
yapf -ir -vv --style ./.style.yapf verl single_controller examples
```



## Citation

```tex
@article{sheng2024hybridflow,
  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},
  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
  year    = {2024},
  journal = {arXiv preprint arXiv: 2409.19256}
}

@inproceedings{zhang2024framework,
  title={A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization},
  author={Zhang, Chi and Sheng, Guangming and Liu, Siyao and Li, Jiahao and Feng, Ziyuan and Liu, Zherui and Liu, Xin and Jia, Xiaoying and Peng, Yanghua and Lin, Haibin and Wu, Chuan},
  booktitle={NL2Code Workshop of ACM KDD},
  year={2024}
}
```


            
