# TorchScale - A Library of Foundation Architectures
<p>
<a href="https://github.com/microsoft/torchscale/blob/main/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
<a href="https://pypi.org/project/torchscale"><img alt="MIT License" src="https://badge.fury.io/py/torchscale.svg" /></a>
</p>
TorchScale is a PyTorch library that allows researchers and developers to scale up Transformers efficiently and effectively.
It implements fundamental research on new architectures for foundation models and A(G)I, focusing on modeling generality and capability as well as training stability and efficiency:
- Stability - [**DeepNet**](https://arxiv.org/abs/2203.00555): scaling Transformers to 1,000 Layers and beyond
- Generality - [**Foundation Transformers (Magneto)**](https://arxiv.org/abs/2210.06423): towards true general-purpose modeling across tasks and modalities (including language, vision, speech, and multimodal)
- Capability - A [**Length-Extrapolatable**](https://arxiv.org/abs/2212.10554) Transformer
- Efficiency - [**X-MoE**](https://arxiv.org/abs/2204.09179): scalable & finetunable sparse Mixture-of-Experts (MoE)
### The Revolution of Model Architecture
- [**BitNet**](https://arxiv.org/abs/2310.11453): 1-bit Transformers for Large Language Models
- [**RetNet**](https://arxiv.org/abs/2307.08621): Retentive Network: A Successor to Transformer for Large Language Models
- [**LongNet**](https://arxiv.org/abs/2307.02486): Scaling Transformers to 1,000,000,000 Tokens
## News
- December, 2023: [LongNet](torchscale/model/LongNet.py) and [LongViT](examples/longvit/README.md) released
- October, 2023: RMSNorm and SwiGLU are now the default modules in RetNet
- November, 2022: TorchScale 0.1.1 released [[Paper](https://arxiv.org/abs/2211.13184)] [[PyPI](https://pypi.org/project/torchscale/)]
## Installation
To install:
```
pip install torchscale
```
Alternatively, you can install it from source for local development:
```
git clone https://github.com/microsoft/torchscale.git
cd torchscale
pip install -e .
```
For faster training, install [Flash Attention](https://github.com/Dao-AILab/flash-attention) on Turing, Ampere, Ada, or Hopper GPUs:
```
pip install flash-attn
```
or [xFormers](https://github.com/facebookresearch/xformers) for Volta, Turing, Ampere, Ada, or Hopper GPUs:
```
# cuda 11.8 version
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
# cuda 12.1 version
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu121
```
## Getting Started
It takes only a few lines of code to create a model with the fundamental research features above enabled. Here is how to quickly obtain a BERT-like encoder:
```python
>>> from torchscale.architecture.config import EncoderConfig
>>> from torchscale.architecture.encoder import Encoder
>>> config = EncoderConfig(vocab_size=64000)
>>> model = Encoder(config)
>>> print(model)
```
We also support the `Decoder` architecture and the `EncoderDecoder` architecture:
```python
# Creating a decoder model
>>> from torchscale.architecture.config import DecoderConfig
>>> from torchscale.architecture.decoder import Decoder
>>> config = DecoderConfig(vocab_size=64000)
>>> decoder = Decoder(config)
>>> print(decoder)
# Creating an encoder-decoder model
>>> from torchscale.architecture.config import EncoderDecoderConfig
>>> from torchscale.architecture.encoder_decoder import EncoderDecoder
>>> config = EncoderDecoderConfig(vocab_size=64000)
>>> encdec = EncoderDecoder(config)
>>> print(encdec)
```
It also takes only a few lines of code to create a RetNet model:
```python
# Creating a RetNet model
>>> import torch
>>> from torchscale.architecture.config import RetNetConfig
>>> from torchscale.architecture.retnet import RetNetDecoder
>>> config = RetNetConfig(vocab_size=64000)
>>> retnet = RetNetDecoder(config)
>>> print(retnet)
```
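For intuition, the retention mechanism behind RetNet can be written in both a parallel form and a recurrent form that produce identical outputs, which is what enables efficient inference. Below is a dependency-free, single-head sketch of that equivalence (illustrative only; not the library's implementation, and the function names are ours):

```python
def retention_recurrent(q, k, v, gamma):
    """Recurrent form: S_n = gamma * S_{n-1} + k_n^T v_n, then o_n = q_n S_n."""
    d = len(q[0])
    S = [[0.0] * d for _ in range(d)]  # d x d state matrix
    outputs = []
    for qn, kn, vn in zip(q, k, v):
        # decay the previous state and add the outer product k_n^T v_n
        S = [[gamma * S[i][j] + kn[i] * vn[j] for j in range(d)]
             for i in range(d)]
        outputs.append([sum(qn[i] * S[i][j] for i in range(d)) for j in range(d)])
    return outputs

def retention_parallel(q, k, v, gamma):
    """Parallel form: o_n = sum_{m<=n} gamma^(n-m) * (q_n . k_m) * v_m."""
    d = len(q[0])
    outputs = []
    for n, qn in enumerate(q):
        on = [0.0] * d
        for m in range(n + 1):
            w = (gamma ** (n - m)) * sum(qn[i] * k[m][i] for i in range(d))
            for j in range(d):
                on[j] += w * v[m][j]
        outputs.append(on)
    return outputs
```

The recurrent form needs only O(1) state per step, which is why RetNet decoding avoids the growing KV cache of standard attention.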
For LongNet models ([Flash Attention](https://github.com/Dao-AILab/flash-attention) required):
```python
>>> import torch
>>> from torchscale.architecture.config import EncoderConfig, DecoderConfig
>>> from torchscale.model.longnet import LongNetEncoder, LongNetDecoder
# Creating a LongNet encoder with the dilated pattern of segment_length=[2048,4096] and dilated_ratio=[1,2]
>>> config = EncoderConfig(vocab_size=64000, segment_length='[2048,4096]', dilated_ratio='[1,2]', flash_attention=True)
>>> longnet = LongNetEncoder(config)
# Creating a LongNet decoder with the dilated pattern of segment_length=[2048,4096] and dilated_ratio=[1,2]
>>> config = DecoderConfig(vocab_size=64000, segment_length='[2048,4096]', dilated_ratio='[1,2]', flash_attention=True)
>>> longnet = LongNetDecoder(config)
```
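The `segment_length`/`dilated_ratio` pairs control which tokens attend to each other: the sequence is split into segments, and within each segment only every r-th token participates in that sparse pattern. A simplified index sketch (the helper name is ours, and this ignores the per-head offsets the LongNet paper uses):

```python
def dilated_attention_groups(seq_len, segment_lengths, dilated_ratios):
    """For each (segment_length w, dilation r) pair, return the groups of
    token indices that attend to each other: tokens in the same segment
    whose in-segment offset is a multiple of r."""
    groups = []
    for w, r in zip(segment_lengths, dilated_ratios):
        for start in range(0, seq_len, w):
            segment = range(start, min(start + w, seq_len))
            groups.append([i for i in segment if (i - start) % r == 0])
    return groups
```

With larger segments paired with larger dilation ratios, the cost per pattern stays roughly constant while the receptive field grows, which is how LongNet reaches very long contexts with near-linear compute.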
## Key Features
- [DeepNorm to improve the training stability of Post-LayerNorm Transformers](https://arxiv.org/abs/2203.00555)
* enabled by setting *deepnorm=True* in the `Config` class.
* It adjusts both the residual connection and the initialization method according to the model architecture (i.e., encoder, decoder, or encoder-decoder).
- [SubLN for the model generality and the training stability](https://arxiv.org/abs/2210.06423)
    * enabled by *subln=True* (the default).
* It introduces another LayerNorm to each sublayer and adjusts the initialization according to the model architecture.
    * Note that SubLN and DeepNorm cannot be used in the same model.
- [X-MoE: efficient and finetunable sparse MoE modeling](https://arxiv.org/abs/2204.09179)
* enabled by *use_xmoe=True*.
    * It replaces every *moe_freq*-th `FeedForwardNetwork` layer with an X-MoE layer.
- [Multiway architecture for multimodality](https://arxiv.org/abs/2208.10442)
* enabled by *multiway=True*.
    * It provides a pool of Transformer parameters, with separate modules used for different modalities.
- [Extrapolatable position embedding (Xpos)](https://arxiv.org/abs/2212.10554)
* enabled by *xpos_rel_pos=True*.
- [Relative position bias](https://arxiv.org/abs/1910.10683)
* enabled by adjusting *rel_pos_buckets* and *max_rel_pos*.
- [SparseClip: improving the gradient clipping for sparse MoE models](https://arxiv.org/abs/2211.13184)
    * We provide [sample code](examples/fairseq/utils/sparse_clip.py) that can be easily adapted to FairSeq (or other) repositories.
- [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/abs/2307.08621)
* created by `config = RetNetConfig(vocab_size=64000)` and `retnet = RetNetDecoder(config)`.
- [LongNet: Scaling Transformers to 1,000,000,000 Tokens](https://arxiv.org/abs/2307.02486)
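As background for *rel_pos_buckets* and *max_rel_pos*, relative position bias follows the T5-style bucketing scheme: small offsets get exact buckets, larger offsets share buckets on a log scale. A pure-Python sketch of the bidirectional variant (the library's exact implementation may differ in details):

```python
import math

def relative_position_bucket(relative_position, num_buckets=32, max_distance=128):
    """Map a signed relative distance to a bucket id, T5-style:
    half the buckets per direction are exact, the rest logarithmic."""
    num_buckets //= 2                       # split buckets between the two directions
    bucket = num_buckets if relative_position > 0 else 0
    n = abs(relative_position)
    max_exact = num_buckets // 2
    if n < max_exact:
        return bucket + n                   # small offsets: one bucket per offset
    # larger offsets share buckets logarithmically, saturating at max_distance
    log_bucket = max_exact + int(
        math.log(n / max_exact) / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return bucket + min(log_bucket, num_buckets - 1)
```

Distances beyond `max_distance` all map to the final bucket, so the bias table stays small regardless of sequence length.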
Most of the features above can be used by simply passing the corresponding parameters to the config. For example:
```python
>>> from torchscale.architecture.config import EncoderConfig
>>> from torchscale.architecture.encoder import Encoder
>>> config = EncoderConfig(vocab_size=64000, deepnorm=True, multiway=True)
>>> model = Encoder(config)
>>> print(model)
```
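As a sketch of what *deepnorm=True* changes under the hood, the DeepNet paper prescribes a residual scale α and an initialization gain β that depend on the architecture and depth; the sublayer update becomes `x = LayerNorm(alpha * x + sublayer(x))`. The formulas below follow the paper (the library computes these internally; the function name here is ours):

```python
def deepnorm_coefficients(arch, encoder_layers=0, decoder_layers=0):
    """Return (alpha, beta) from the DeepNet paper for an encoder-only or
    decoder-only model, or ((enc_alpha, enc_beta), (dec_alpha, dec_beta))
    for an encoder-decoder model with N encoder and M decoder layers."""
    N, M = encoder_layers, decoder_layers
    if arch == "encoder":
        return (2 * N) ** 0.25, (8 * N) ** -0.25
    if arch == "decoder":
        return (2 * M) ** 0.25, (8 * M) ** -0.25
    if arch == "encoder-decoder":
        enc = (0.81 * (N ** 4 * M) ** (1 / 16),
               0.87 * (N ** 4 * M) ** (-1 / 16))
        dec = ((3 * M) ** 0.25, (12 * M) ** -0.25)
        return enc, dec
    raise ValueError(f"unknown architecture: {arch}")
```

For example, a 100-layer decoder-only model gets α ≈ 3.76 and β ≈ 0.19: deeper models up-weight the residual stream and down-scale weight initialization, which is what stabilizes very deep training.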
## Examples
We have examples of how to use TorchScale in the following scenarios/tasks:
- Language
* [Decoder/GPT](examples/fairseq/README.md#example-gpt-pretraining)
* [Encoder-Decoder/Neural Machine Translation](examples/fairseq/README.md#example-machine-translation)
* [Encoder/BERT](examples/fairseq/README.md#example-bert-pretraining)
- Vision
* [LongViT](examples/longvit/README.md)
* ViT/BEiT [In progress]
- Speech
- Multimodal
* [Multiway Transformers/BEiT-3](https://github.com/microsoft/unilm/tree/master/beit3)
We plan to provide more examples regarding different tasks (e.g. vision pretraining and speech recognition) and various deep learning toolkits (e.g. [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)). Any comments or PRs are welcome!
## Results
### Stability Evaluation
<p align="center">
<img src="https://publicmodel.blob.core.windows.net/torchscale/pic/convergence.png?sv=2020-04-08&st=2023-08-11T03%3A09%3A09Z&se=2053-08-12T03%3A09%3A00Z&sr=c&sp=rl&sig=3b6nDda%2Fu0vD6E%2BhoTO%2BHfNSnSlUfgvXFV%2FCNKquWjE%3D" width="800"/>
</p>
With TorchScale, the training curve is smooth, while the baseline Transformer fails to converge.
### Scaling-up Experiments
<p align="center">
<img src="https://publicmodel.blob.core.windows.net/torchscale/pic/scaling_curve.png?sv=2020-04-08&st=2023-08-11T03%3A09%3A09Z&se=2053-08-12T03%3A09%3A00Z&sr=c&sp=rl&sig=3b6nDda%2Fu0vD6E%2BhoTO%2BHfNSnSlUfgvXFV%2FCNKquWjE%3D" width="800"/>
</p>
TorchScale supports arbitrary depths and widths, scaling models up smoothly and painlessly.
## Acknowledgments
Some implementations in TorchScale are either adapted from or inspired by the [FairSeq](https://github.com/facebookresearch/fairseq) repository and the [UniLM](https://github.com/microsoft/unilm) repository.
## Citations
If you find this repository useful, please consider citing our work:
```
@article{torchscale,
author = {Shuming Ma and Hongyu Wang and Shaohan Huang and Wenhui Wang and Zewen Chi and Li Dong and Alon Benhaim and Barun Patra and Vishrav Chaudhary and Xia Song and Furu Wei},
title = {{TorchScale}: {Transformers} at Scale},
journal = {CoRR},
volume = {abs/2211.13184},
year = {2022}
}
```
```
@article{deepnet,
author = {Hongyu Wang and Shuming Ma and Li Dong and Shaohan Huang and Dongdong Zhang and Furu Wei},
title = {{DeepNet}: Scaling {Transformers} to 1,000 Layers},
journal = {CoRR},
volume = {abs/2203.00555},
year = {2022},
}
```
```
@article{magneto,
author = {Hongyu Wang and Shuming Ma and Shaohan Huang and Li Dong and Wenhui Wang and Zhiliang Peng and Yu Wu and Payal Bajaj and Saksham Singhal and Alon Benhaim and Barun Patra and Zhun Liu and Vishrav Chaudhary and Xia Song and Furu Wei},
title = {Foundation {Transformers}},
journal = {CoRR},
volume = {abs/2210.06423},
year = {2022}
}
```
```
@inproceedings{xmoe,
title={On the Representation Collapse of Sparse Mixture of Experts},
author={Zewen Chi and Li Dong and Shaohan Huang and Damai Dai and Shuming Ma and Barun Patra and Saksham Singhal and Payal Bajaj and Xia Song and Xian-Ling Mao and Heyan Huang and Furu Wei},
booktitle={Advances in Neural Information Processing Systems},
year={2022},
url={https://openreview.net/forum?id=mWaYC6CZf5}
}
```
```
@article{retnet,
author={Yutao Sun and Li Dong and Shaohan Huang and Shuming Ma and Yuqing Xia and Jilong Xue and Jianyong Wang and Furu Wei},
title = {Retentive Network: A Successor to {Transformer} for Large Language Models},
journal = {CoRR},
volume = {abs/2307.08621},
year = {2023}
}
```
```
@article{longnet,
author={Jiayu Ding and Shuming Ma and Li Dong and Xingxing Zhang and Shaohan Huang and Wenhui Wang and Nanning Zheng and Furu Wei},
title = {{LongNet}: Scaling Transformers to 1,000,000,000 Tokens},
journal = {CoRR},
volume = {abs/2307.02486},
year = {2023}
}
```
```
@article{longvit,
title = {When an Image is Worth 1,024 x 1,024 Words: A Case Study in Computational Pathology},
author = {Wenhui Wang and Shuming Ma and Hanwen Xu and Naoto Usuyama and Jiayu Ding and Hoifung Poon and Furu Wei},
journal = {CoRR},
volume = {abs/2312.03558},
year = {2023}
}
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [Furu Wei](mailto:fuwei@microsoft.com) and [Shuming Ma](mailto:shumma@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.