# Megatron-Core
Megatron-Core is an open-source, PyTorch-based library of GPU-optimized training techniques and cutting-edge system-level optimizations. It abstracts them into composable, modular APIs, giving developers and model researchers full flexibility to train custom transformers at scale on NVIDIA accelerated computing infrastructure. The library is compatible with all NVIDIA Tensor Core GPUs, including support for FP8 acceleration on [NVIDIA Hopper architectures](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/).
Megatron-Core offers core building blocks such as attention mechanisms, transformer blocks and layers, normalization layers, and embedding techniques. Additional functionality, such as activation recomputation and distributed checkpointing, is also built natively into the library. These building blocks are all GPU-optimized and can be combined with advanced parallelization strategies for optimal training speed and stability on NVIDIA accelerated computing infrastructure. The library also provides advanced model-parallelism techniques: tensor, sequence, pipeline, context, and MoE expert parallelism.
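
As a rough illustration of how these building blocks compose, the sketch below assembles a tiny GPT model following the pattern of the Megatron-Core quickstart. Module paths and constructor arguments reflect recent releases and may differ between versions; the small layer counts and sizes are placeholders chosen only for illustration, and the launcher environment variables (`LOCAL_RANK`, `WORLD_SIZE`) are assumed to be set by `torchrun`.

```python
import os
import torch

from megatron.core import parallel_state
from megatron.core.transformer.transformer_config import TransformerConfig
from megatron.core.models.gpt.gpt_model import GPTModel
from megatron.core.models.gpt.gpt_layer_specs import get_gpt_layer_local_spec

# Initialize torch.distributed and Megatron-Core's model-parallel groups.
# LOCAL_RANK / WORLD_SIZE are assumed to be provided by the launcher (e.g. torchrun).
rank = int(os.environ.get("LOCAL_RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))
torch.cuda.set_device(rank)
torch.distributed.init_process_group(backend="nccl", world_size=world_size, rank=rank)
parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=1,
    pipeline_model_parallel_size=1,
)

# A deliberately tiny transformer configuration; real runs use far larger values.
config = TransformerConfig(
    num_layers=2,
    hidden_size=128,
    num_attention_heads=4,
    use_cpu_initialization=True,
    pipeline_dtype=torch.float32,
)

# Assemble a GPT model from Megatron-Core's building blocks
# (embeddings, transformer layers, normalization, output head).
gpt_model = GPTModel(
    config=config,
    transformer_layer_spec=get_gpt_layer_local_spec(),
    vocab_size=1024,
    max_sequence_length=64,
).cuda()
```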
Megatron-Core can be used with [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/products/nemo/), an enterprise-grade AI platform. Alternatively, you can explore Megatron-Core with a native PyTorch training loop in the [examples](https://github.com/NVIDIA/Megatron-LM/tree/main/examples). Visit the [Megatron-Core documentation](https://docs.nvidia.com/megatron-core/developer-guide/latest/index.html) to learn more.
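
For a flavor of what a bare-bones, single-GPU PyTorch loop over the toy model sketched above could look like, see the snippet below. It is a sketch, not the linked examples' actual training loop (those use Megatron-Core's pipeline-parallel schedules and data pipelines): it feeds random tokens in place of a real dataset and assumes the model returns a per-token loss when `labels` is passed.

```python
import torch

# Hypothetical toy dimensions; `gpt_model` is the model built in the previous sketch.
batch_size, seq_len, vocab_size = 4, 64, 1024
device = torch.device("cuda")

optimizer = torch.optim.Adam(gpt_model.parameters(), lr=1e-4)

# Causal attention mask in Megatron's convention (True = position is masked out).
attention_mask = torch.triu(
    torch.ones(seq_len, seq_len, dtype=torch.bool, device=device), diagonal=1
).unsqueeze(0).unsqueeze(0)
position_ids = torch.arange(seq_len, device=device).unsqueeze(0).expand(batch_size, -1)

for step in range(10):
    tokens = torch.randint(0, vocab_size, (batch_size, seq_len), device=device)
    labels = torch.roll(tokens, shifts=-1, dims=1)  # next-token targets (toy shift)

    optimizer.zero_grad()
    # With `labels` provided, the model is assumed to return a per-token LM loss.
    per_token_loss = gpt_model(tokens, position_ids, attention_mask, labels=labels)
    loss = per_token_loss.mean()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```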
## Quick links
- [Benchmark using NVIDIA NeMo](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html#performance-benchmarks)
- [Multimodal example (LLaVA training pipeline)](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/multimodal)
- [Mixture-of-Experts](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/transformer/moe) (see the configuration sketch after this list)
- [Training Mamba-based Language Models](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/mamba)
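
As a rough illustration of how Mixture-of-Experts and expert parallelism might be enabled, the sketch below sets MoE-related knobs on the transformer configuration and requests an expert-parallel group. The parameter names (`num_moe_experts`, `moe_router_topk`, `moe_aux_loss_coeff`, `expert_model_parallel_size`) are assumptions based on recent Megatron-Core releases; consult the linked `moe` module for the exact, version-specific API. It also assumes `torch.distributed` is already initialized, as in the model-building sketch above, with a world size of 4 GPUs.

```python
import torch
from megatron.core import parallel_state
from megatron.core.transformer.transformer_config import TransformerConfig

# Reset any existing model-parallel groups, then request expert parallelism
# alongside the other parallel dimensions (assumes 4 GPUs here).
parallel_state.destroy_model_parallel()
parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=1,
    pipeline_model_parallel_size=1,
    expert_model_parallel_size=4,
)

# MoE-related options on the transformer configuration (field names assumed
# from recent releases; values are placeholders for illustration only).
moe_config = TransformerConfig(
    num_layers=4,
    hidden_size=256,
    num_attention_heads=8,
    pipeline_dtype=torch.float32,
    num_moe_experts=8,        # experts per MoE layer
    moe_router_topk=2,        # each token is routed to its top-2 experts
    moe_aux_loss_coeff=1e-2,  # weight of the load-balancing auxiliary loss
)
```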
## Raw data

```json
{
"_id": null,
"home_page": "https://github.com/NVIDIA/Megatron-LM/megatron/core",
"name": "megatron-core",
"maintainer": "NVIDIA",
"docs_url": null,
"requires_python": null,
"maintainer_email": "nemo-toolkit@nvidia.com",
"keywords": "deep learning, machine learning, gpu, NLP, NLU, language, transformer, nvidia, pytorch, torch",
"author": "NVIDIA",
"author_email": "nemo-toolkit@nvidia.com",
"download_url": "https://github.com/NVIDIA/Megatron-LM/releases",
"platform": null,
"description": "# Megatron-Core\n\nMegatron-Core is an open-source PyTorch-based library that contains GPU-optimized techniques and cutting-edge system-level optimizations. It abstracts them into composable and modular APIs, allowing full flexibility for developers and model researchers to train custom transformers at-scale on NVIDIA accelerated computing infrastructure. This library is compatible with all NVIDIA Tensor Core GPUs, including FP8 acceleration support for [NVIDIA Hopper architectures](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/).\n\nMegatron-Core offers core building blocks such as attention mechanisms, transformer blocks and layers, normalization layers, and embedding techniques. Additional functionality like activation re-computation, distributed checkpointing is also natively built-in to the library. The building blocks and functionality are all GPU optimized, and can be built with advanced parallelization strategies for optimal training speed and stability on NVIDIA Accelerated Computing Infrastructure. Another key component of the Megatron-Core library includes advanced model parallelism techniques (tensor, sequence, pipeline, context, and MoE expert parallelism).\n\nMegatron-Core can be used with [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/products/nemo/), an enterprise-grade AI platform. Alternatively, you can explore Megatron-Core with the native PyTorch training loop [here](https://github.com/NVIDIA/Megatron-LM/tree/main/examples). Visit [Megatron-Core documentation](https://docs.nvidia.com/megatron-core/developer-guide/latest/index.html) to learn more.\n\n## Quick links\n\n- [Benchmark using NVIDIA NeMo](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html#performance-benchmarks)\n- [Multimodal example (LLaVA training pipeline)](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/multimodal)\n- [Mixture-of-Experts](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/transformer/moe)\n- [Training Mamba-based Language Models](https://github.com/NVIDIA/Megatron-LM/tree/main/examples/mamba)\n",
"bugtrack_url": null,
"license": "BSD-3",
"summary": "Megatron Core - a library for efficient and scalable training of transformer based models",
"version": "0.9.0",
"project_urls": {
"Download": "https://github.com/NVIDIA/Megatron-LM/releases",
"Homepage": "https://github.com/NVIDIA/Megatron-LM/megatron/core"
},
"split_keywords": [
"deep learning",
" machine learning",
" gpu",
" nlp",
" nlu",
" language",
" transformer",
" nvidia",
" pytorch",
" torch"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "ff17cf9ab8e7aec4ab89e697e43e52d9801c8b788b79de5c0c810f154d7c0a2f",
"md5": "6aa725f3ae67304e679d1242417a8567",
"sha256": "c0d929cf92f0aee68b18916b0191beec917e63a7766b4834852bef664202cc76"
},
"downloads": -1,
"filename": "megatron_core-0.9.0-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl",
"has_sig": false,
"md5_digest": "6aa725f3ae67304e679d1242417a8567",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": null,
"size": 1588820,
"upload_time": "2024-10-24T10:42:06",
"upload_time_iso_8601": "2024-10-24T10:42:06.446612Z",
"url": "https://files.pythonhosted.org/packages/ff/17/cf9ab8e7aec4ab89e697e43e52d9801c8b788b79de5c0c810f154d7c0a2f/megatron_core-0.9.0-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "d7bf4eb7772f2a5830dd11d0590fd56ce1dff8c82a398d893bfb583116d423ed",
"md5": "88fbaaa28dd341803bf1cb0c2fb48d32",
"sha256": "b2c73c9e6fa58c93f3b1833ffd32bc08dc29b5d28fda7375c5a5e3a8aaeb3db8"
},
"downloads": -1,
"filename": "megatron_core-0.9.0-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl",
"has_sig": false,
"md5_digest": "88fbaaa28dd341803bf1cb0c2fb48d32",
"packagetype": "bdist_wheel",
"python_version": "cp311",
"requires_python": null,
"size": 1608958,
"upload_time": "2024-10-24T10:42:08",
"upload_time_iso_8601": "2024-10-24T10:42:08.338486Z",
"url": "https://files.pythonhosted.org/packages/d7/bf/4eb7772f2a5830dd11d0590fd56ce1dff8c82a398d893bfb583116d423ed/megatron_core-0.9.0-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-10-24 10:42:06",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "NVIDIA",
"github_project": "Megatron-LM",
"travis_ci": false,
"coveralls": true,
"github_actions": true,
"lcname": "megatron-core"
}
```