torchtitan


Name: torchtitan
Version: 0.0.2
Home page: https://github.com/pytorch/torchtitan
Summary: A native-PyTorch library for large scale LLM training
Upload time: 2024-04-16 04:26:57
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.8
License: BSD 3-Clause License Copyright 2024 Meta Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Keywords: pytorch, training, llm
Requirements: No requirements were recorded.
# torchtitan
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: light)" srcset="https://github.com/pytorch/torchtitan/blob/bcac1570a7cc47554f934d057e0f9aea9ae6fd08/assets/images/TorchTitan_logo_main.jpg">
    <img alt="TorchTitan_Logo" width=35%>
  </picture>
</p>

## torchtitan is still in pre-release!
`torchtitan` is currently in a pre-release state and under extensive development.

`torchtitan` is a native PyTorch reference architecture showcasing some of the latest PyTorch techniques for large-scale model training.
* Designed to be easy to understand, use, and extend for different training purposes.
* Minimal changes to the model code when applying 1D, 2D, or (soon) 3D parallelism.
* Modular components instead of a monolithic codebase.
* Get started in minutes, not hours!

Please note: `torchtitan` is a proof of concept for large-scale LLM training using native PyTorch. It is (and will continue to be) a repo that showcases PyTorch's latest distributed training features in a clean, minimal codebase. torchtitan is complementary to, not a replacement for, the great large-scale LLM training codebases such as Megatron, MegaBlocks, LLM Foundry, DeepSpeed, etc. Instead, we hope that the features showcased in torchtitan will be adopted by these codebases quickly. torchtitan is unlikely to ever grow a large community around it.

## Pre-Release Updates:
#### (4/16/2024): TorchTitan is now public, but in a pre-release state and under development. Currently we showcase pre-training Llama 2 models (LLMs) of various sizes from scratch.

Key features available:
1. [FSDP2 (per-param sharding)](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md)
2. Tensor Parallel (FSDP + Tensor Parallel)
3. Selective layer and op activation checkpointing
4. Distributed checkpointing (async pending)
5. Three pre-configured datasets (47K - 144M)
6. GPU usage, MFU, tokens per second, and other metrics reported and displayed via TensorBoard
7. Optional fused RMSNorm, learning rate scheduler, meta init, and more
8. All options easily configured via TOML files


## Coming soon:
1. Async checkpointing
2. FP8 support
3. Context Parallel
4. 3D parallelism (Pipeline Parallel)
5. `torch.compile` support


## Installation

Install PyTorch from source or install the latest PyTorch nightly, then install the remaining requirements:

```
pip install -r requirements.txt
```
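
For the PyTorch nightly mentioned above, a minimal install sketch (the CUDA 12.1 nightly index is an assumption; pick the index URL that matches your CUDA or CPU setup):
```
# Install a recent PyTorch nightly build (cu121 index assumed; adjust for your environment)
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
```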

Install additional dev requirements if you want to contribute to the repo:
```
pip install -r dev-requirements.txt
```

Run the Llama debug model locally to verify that the setup is correct:

```
./run_llama_train.sh
```

## TensorBoard

To visualize TensorBoard metrics of models trained on a remote server via a local web browser:

1. Make sure the `metrics.enable_tensorboard` option is set to true in the training config, either in a .toml file or via the CLI (a minimal .toml sketch follows this list).

2. Set up SSH tunneling by running the following from your local CLI:
```
ssh -L 6006:127.0.0.1:6006 [username]@[hostname]
```

3. Inside the SSH session on the remote server, go to the torchtitan repo and start the TensorBoard backend:
```
tensorboard --logdir=./outputs/tb
```

4. In your local web browser, go to the URL it provides or to http://localhost:6006/.
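
A minimal sketch of the .toml setting from step 1. The `[metrics]` section and `enable_tensorboard` key follow the option named above; `save_tb_folder` is an assumed key, shown only to match the `./outputs/tb` log directory used in step 3 (check the repo's sample configs for the exact schema):
```
# sketch of a training .toml snippet (keys other than enable_tensorboard are assumptions)
[metrics]
enable_tensorboard = true
save_tb_folder = "tb"   # assumed; TensorBoard event files would then land under ./outputs/tb
```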

## Multi-Node Training
For training on ParallelCluster/Slurm-type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job.
Note that you will need to adjust the number of nodes and the GPU count to your cluster configuration.
<b>To adjust total nodes:</b>
```
#SBATCH --ntasks=2
#SBATCH --nodes=2
```
should both be set to your total node count.
Then update the srun launch parameters to match:
```
srun torchrun --nnodes 2
```
where `--nnodes` is your total node count, matching the sbatch node count above.

<b>To adjust GPU count per node:</b>

If your GPU count per node is not 8, adjust `--nproc_per_node` in the torchrun command and `#SBATCH --gpus-per-task` in the SBATCH section. A minimal sbatch sketch combining these settings follows.
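
Putting the pieces together, a minimal sketch of a 2-node, 8-GPU-per-node submission (the rendezvous flags and training entry point are elided; multinode_trainer.slurm in the repo is the authoritative version):
```
#!/bin/bash
#SBATCH --ntasks=2          # set to your total node count
#SBATCH --nodes=2           # set to your total node count
#SBATCH --gpus-per-task=8   # GPUs per node

# One torchrun launch per node: --nnodes must match the node count above,
# and --nproc_per_node must match the GPUs available on each node.
# Rendezvous flags and the training script are elided here; see
# multinode_trainer.slurm for the full launch command.
srun torchrun --nnodes 2 --nproc_per_node 8 ...
```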

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/pytorch/torchtitan",
    "name": "torchtitan",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "pytorch, training, llm",
    "author": null,
    "author_email": "PyTorch Team <packages@pytorch.org>",
    "download_url": "https://files.pythonhosted.org/packages/fb/73/83d7c481a9ee1d97d44da7cc3b0dee4c09b04c9fe99cf055ee47f9e18aae/torchtitan-0.0.2.tar.gz",
    "platform": null,
    "description": "# torchtitan\n<p align=\"center\">\n  <picture>\n    <source media=\"(prefers-color-scheme: light)\" srcset=\"https://github.com/pytorch/torchtitan/blob/bcac1570a7cc47554f934d057e0f9aea9ae6fd08/assets/images/TorchTitan_logo_main.jpg\">\n    <img alt=\"TorchTitan_Logo\" width=35%>\n  </picture>\n</p>\n\n## torchtitan is still in pre-release!\n`torchtitan` is currently in a pre-release state and under extensive development.\n\n`torchtitan` is a native PyTorch reference architecture showcasing some of the latest PyTorch techniques for large scale model training.\n* Designed to be easy to understand, use and extend for different training purposes.\n* Minimal changes to the model code when applying 1D, 2D, or (soon) 3D Parallel.\n* Modular components instead of monolithic codebase.\n* Get started in minutes, not hours!\n\nPlease note: `torchtitan` is a proof-of-concept for Large-scale LLM training using native PyTorch. It is (and will continue to be) a repo to showcase PyTorch's latest distributed training features in a clean, minimal codebase. torchtitan is complementary to and not a replacement for any of the great large-scale LLM training codebases such as Megatron, Megablocks, LLM Foundry, Deepspeed, etc. Instead, we hope that the features showcased in torchtitan will be adopted by these codebases quickly. torchtitan is unlikely to ever grow a large community around it.\n\n## Pre-Release Updates:\n#### (4/16/2024): TorchTitan is now public but in a pre-release state and under development.  Currently we showcase pre-training Llama2 models (LLMs) of various sizes from scratch.\n\nKey features available:</br>\n1 - [FSDP2 (per param sharding)](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md) </br>\n2 - Tensor Parallel (FSDP + Tensor Parallel)</br>\n3 - Selective layer and op activation checkpointing </br>\n4 - Distributed checkpointing (asynch pending) </br>\n5 - 3 datasets pre-configured (47K - 144M)</br>\n6 - GPU usage, MFU, tokens per second and other metrics all reported and displayed via TensorBoard.</br>\n7 - optional Fused RMSNorm, learning rate scheduler, meta init, and more.</br>\n8 - All options easily configured via toml files.</br>\n\n\n## Coming soon features:\n1 - Asynch checkpointing </br>\n2 - FP8 support </br>\n3 - Context Parallel </br>\n4 - 3D (Pipeline Parallel) </br>\n5 - Torch Compile support </br>\n\n\n## Installation\n\nInstall PyTorch from source or install the latest pytorch nightly, then install requirements by\n\n```python\npip install -r requirements.txt\n```\n\nInstall additional dev requirements if you want to contribute to the repo:\n```\npip install -r dev-requirements.txt\n```\n\nrun the llama debug model locally to verify the setup is correct:\n\n```\n./run_llama_train.sh\n```\n\n## TensorBoard\n\nTo visualize TensorBoard metrics of models trained on a remote server via a local web browser:\n\n1. Make sure `metrics.enable_tensorboard` option is set to true in model training (either from a .toml file or from CLI).\n\n2. Set up SSH tunneling, by running the following from local CLI\n```\nssh -L 6006:127.0.0.1:6006 [username]@[hostname]\n```\n\n3. Inside the SSH tunnel that logged into the remote server, go to the torchtitan repo, and start the TensorBoard backend\n```\ntensorboard --logdir=./outputs/tb\n```\n\n4. 
In the local web browser, go to the URL it provides OR to http://localhost:6006/.\n\n## Multi-Node Training\nFor training on ParallelCluster/Slurm type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job.</br>\nNote that you will need to adjust the number of nodes and gpu count to your cluster configs.</br>\n<b>To adjust total nodes:</b>\n```\n#SBATCH --ntasks=2\n#SBATCH --nodes=2\n```\nshould both be set to your total node count.\nThen update the srun launch parameters to match:\n```\nsrun torchrun --nnodes 2\n```\nwhere nnodes is your total node count, matching the sbatch node count above.\n\n<b>To adjust gpu count per node:</b>\n\nIf your gpu count per node is not 8, adjust:\n\n```--nproc_per_node```\n\n in the torchrun command and\n\n```#SBATCH --gpus-per-task```\n\nin the SBATCH command section.\n",
    "bugtrack_url": null,
    "license": "BSD 3-Clause License  Copyright 2024 Meta  Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:  1. Redistributions of source code must retain the above copyright notice,this list of conditions and the following disclaimer.  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \u201cAS IS\u201d AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ",
    "summary": "A native-PyTorch library for large scale LLM training",
    "version": "0.0.2",
    "project_urls": {
        "Documentation": "https://github.com/pytorch/torchtitan/tree/main/docs",
        "GitHub": "https://github.com/pytorch/torchtitan",
        "Homepage": "https://github.com/pytorch/torchtitan",
        "Issues": "https://github.com/pytorch/torchtitan/issues"
    },
    "split_keywords": [
        "pytorch",
        " training",
        " llm"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c696007977f62a02259e3cff0400cd43a9cf6d357a4d851232cf29951048bf43",
                "md5": "aa0541aa4b01b726cb0de3be84eb3eb2",
                "sha256": "6316af7599d3d0b2b541d7018305abb40c6cf5cda41d5cda451391996ff06f6f"
            },
            "downloads": -1,
            "filename": "torchtitan-0.0.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "aa0541aa4b01b726cb0de3be84eb3eb2",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 4938,
            "upload_time": "2024-04-16T04:26:55",
            "upload_time_iso_8601": "2024-04-16T04:26:55.505255Z",
            "url": "https://files.pythonhosted.org/packages/c6/96/007977f62a02259e3cff0400cd43a9cf6d357a4d851232cf29951048bf43/torchtitan-0.0.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "fb7383d7c481a9ee1d97d44da7cc3b0dee4c09b04c9fe99cf055ee47f9e18aae",
                "md5": "0e7e09226c84c7014224b314472afd60",
                "sha256": "14981ecfa3ac1fc6ce220c6700cf17d85f9b1c61cbae0d498211be84db8db5d2"
            },
            "downloads": -1,
            "filename": "torchtitan-0.0.2.tar.gz",
            "has_sig": false,
            "md5_digest": "0e7e09226c84c7014224b314472afd60",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 5195,
            "upload_time": "2024-04-16T04:26:57",
            "upload_time_iso_8601": "2024-04-16T04:26:57.069431Z",
            "url": "https://files.pythonhosted.org/packages/fb/73/83d7c481a9ee1d97d44da7cc3b0dee4c09b04c9fe99cf055ee47f9e18aae/torchtitan-0.0.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-16 04:26:57",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "pytorch",
    "github_project": "torchtitan",
    "github_not_found": true,
    "lcname": "torchtitan"
}
        