Name | nemo-toolkit |
Version | 2.3.2 |
home_page | https://github.com/nvidia/nemo |
Summary | NeMo - a toolkit for Conversational AI |
upload_time | 2025-07-08 22:29:27 |
maintainer | NVIDIA |
docs_url | None |
author | NVIDIA |
requires_python | >=3.10 |
license | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. |
keywords | nlp, nemo, deep, gpu, language, learning, machine, nvidia, pytorch, speech, torch, tts |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | |
[Project Status: Active](http://www.repostatus.org/#active)
[Documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/)
[CodeQL](https://github.com/nvidia/nemo/actions/workflows/codeql.yml)
[License](https://github.com/NVIDIA/NeMo/blob/master/LICENSE)
[PyPI version](https://badge.fury.io/py/nemo-toolkit)
[Python version](https://badge.fury.io/py/nemo-toolkit)
[Downloads](https://pepy.tech/project/nemo-toolkit)
[Code style: black](https://github.com/psf/black)
# **NVIDIA NeMo Framework**
## Latest News
<!-- markdownlint-disable -->
<details open>
<summary><b>Pretrain and finetune :hugs:Hugging Face models via AutoModel</b></summary>
NeMo Framework's latest feature AutoModel enables broad support for :hugs:Hugging Face models, with 25.02 focusing on <a href=https://huggingface.co/transformers/v3.5.1/model_doc/auto.html#automodelforcausallm>AutoModelForCausalLM</a> in the <a href=https://huggingface.co/models?pipeline_tag=text-generation&sort=trending>text generation category</a>. Future releases will enable support for more model families such as Vision Language Models.
</details>
<details open>
<summary><b>Training on Blackwell using NeMo</b></summary>
NeMo Framework has added Blackwell support, with 25.02 focusing on functional parity for B200. More optimizations to come in the upcoming releases.
</details>
<details open>
<summary><b>NeMo Framework 2.0</b></summary>
We've released NeMo 2.0, an update on the NeMo Framework which prioritizes modularity and ease-of-use. Please refer to the <a href=https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/index.html>NeMo Framework User Guide</a> to get started.
</details>
<details open>
<summary><b>New Cosmos World Foundation Models Support</b></summary>
<details>
<summary> <a href="https://developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform">Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform </a> (2025-01-09)
</summary>
The end-to-end NVIDIA Cosmos platform accelerates world model development for physical AI systems. Built on CUDA, Cosmos combines state-of-the-art world foundation models, video tokenizers, and AI-accelerated data processing pipelines. Developers can accelerate world model development by fine-tuning Cosmos world foundation models or building new ones from the ground up. These models create realistic synthetic videos of environments and interactions, providing a scalable foundation for training complex systems, from simulating humanoid robots performing advanced actions to developing end-to-end autonomous driving models.
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/">
Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities
</a> (2025-01-07)
</summary>
The NeMo Framework now supports training and customizing the <a href="https://github.com/NVIDIA/Cosmos">NVIDIA Cosmos</a> collection of world foundation models. Cosmos leverages advanced text-to-world generation techniques to create fluid, coherent video content from natural language prompts.
<br><br>
You can also now accelerate your video processing step using the <a href="https://developer.nvidia.com/nemo-curator-video-processing-early-access">NeMo Curator</a> library, which provides optimized video processing and captioning features that can deliver up to 89x faster video processing when compared to an unoptimized CPU pipeline.
<br><br>
</details>
</details>
<details open>
<summary><b>Large Language Models and Multimodal Models</b></summary>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/">
State-of-the-Art Multimodal Generative AI Model Development with NVIDIA NeMo
</a> (2024-11-06)
</summary>
NVIDIA recently announced significant enhancements to the NeMo platform, focusing on multimodal generative AI models. The update includes NeMo Curator and the Cosmos tokenizer, which streamline the data curation process and enhance the quality of visual data. These tools are designed to handle large-scale data efficiently, making it easier to develop high-quality AI models for various applications, including robotics and autonomous driving. The Cosmos tokenizers, in particular, efficiently map visual data into compact, semantic tokens, which is crucial for training large-scale generative models. The tokenizer is available now on the <a href=https://github.com/NVIDIA/cosmos-tokenizer>NVIDIA/cosmos-tokenizer</a> GitHub repo and on <a href=https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8>Hugging Face</a>.
<br><br>
</details>
<details>
<summary>
<a href="https://docs.nvidia.com/nemo-framework/user-guide/latest/llms/llama/index.html#new-llama-3-1-support for more information/">
New Llama 3.1 Support
</a> (2024-07-23)
</summary>
The NeMo Framework now supports training and customizing the Llama 3.1 collection of LLMs from Meta.
<br><br>
</details>
<details>
<summary>
<a href="https://aws.amazon.com/blogs/machine-learning/accelerate-your-generative-ai-distributed-training-workloads-with-the-nvidia-nemo-framework-on-amazon-eks/">
Accelerate your Generative AI Distributed Training Workloads with the NVIDIA NeMo Framework on Amazon EKS
</a> (2024-07-16)
</summary>
NVIDIA NeMo Framework now runs distributed training workloads on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. For step-by-step instructions on creating an EKS cluster and running distributed training workloads with NeMo, see the GitHub repository <a href="https://github.com/aws-samples/awsome-distributed-training/tree/main/3.test_cases/2.nemo-launcher/EKS/"> here.</a>
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/nvidia-nemo-accelerates-llm-innovation-with-hybrid-state-space-model-support/">
NVIDIA NeMo Accelerates LLM Innovation with Hybrid State Space Model Support
</a> (2024/06/17)
</summary>
NVIDIA NeMo and Megatron Core now support pre-training and fine-tuning of state space models (SSMs). NeMo also supports training models based on the Griffin architecture as described by Google DeepMind.
<br><br>
</details>
<details>
<summary>
<a href="https://huggingface.co/models?sort=trending&search=nvidia%2Fnemotron-4-340B">
NVIDIA releases 340B base, instruct, and reward models pretrained on a total of 9T tokens.
</a> (2024-06-18)
</summary>
See documentation and tutorials for SFT, PEFT, and PTQ with
<a href="https://docs.nvidia.com/nemo-framework/user-guide/latest/llms/nemotron/index.html">
Nemotron 340B
</a>
in the NeMo Framework User Guide.
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/nvidia-sets-new-generative-ai-performance-and-scale-records-in-mlperf-training-v4-0/">
NVIDIA sets new generative AI performance and scale records in MLPerf Training v4.0
</a> (2024/06/12)
</summary>
Using the NVIDIA NeMo Framework and NVIDIA Hopper GPUs, NVIDIA was able to scale to 11,616 H100 GPUs and achieve near-linear performance scaling on LLM pretraining.
NVIDIA also achieved the highest LLM fine-tuning performance and raised the bar for text-to-image training.
<br><br>
</details>
<details>
<summary>
<a href="https://cloud.google.com/blog/products/compute/gke-and-nvidia-nemo-framework-to-train-generative-ai-models">
Accelerate your generative AI journey with NVIDIA NeMo Framework on GKE
</a> (2024/03/16)
</summary>
An end-to-end walkthrough to train generative AI models on the Google Kubernetes Engine (GKE) using the NVIDIA NeMo Framework is available at https://github.com/GoogleCloudPlatform/nvidia-nemo-on-gke.
The walkthrough includes detailed instructions on how to set up a Google Cloud Project and pre-train a GPT model using the NeMo Framework.
<br><br>
</details>
</details>
<details open>
<summary><b>Speech Recognition</b></summary>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/accelerating-leaderboard-topping-asr-models-10x-with-nvidia-nemo/">
Accelerating Leaderboard-Topping ASR Models 10x with NVIDIA NeMo
</a> (2024/09/24)
</summary>
The NVIDIA NeMo team released a number of inference optimizations for CTC, RNN-T, and TDT models that resulted in up to 10x inference speed-up.
These models now exceed an inverse real-time factor (RTFx) of 2,000, with some reaching RTFx of even 6,000.
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/new-standard-for-speech-recognition-and-translation-from-the-nvidia-nemo-canary-model/">
New Standard for Speech Recognition and Translation from the NVIDIA NeMo Canary Model
</a> (2024/04/18)
</summary>
The NeMo team just released Canary, a multilingual model that transcribes speech in English, Spanish, German, and French with punctuation and capitalization.
Canary also provides bi-directional translation between English and the three other supported languages.
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/pushing-the-boundaries-of-speech-recognition-with-nemo-parakeet-asr-models/">
Pushing the Boundaries of Speech Recognition with NVIDIA NeMo Parakeet ASR Models
</a> (2024/04/18)
</summary>
NVIDIA NeMo, an end-to-end platform for the development of multimodal generative AI models at scale anywhere—on any cloud and on-premises—released the Parakeet family of automatic speech recognition (ASR) models.
These state-of-the-art ASR models, developed in collaboration with Suno.ai, transcribe spoken English with exceptional accuracy.
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/turbocharge-asr-accuracy-and-speed-with-nvidia-nemo-parakeet-tdt/">
Turbocharge ASR Accuracy and Speed with NVIDIA NeMo Parakeet-TDT
</a> (2024/04/18)
</summary>
NVIDIA NeMo, an end-to-end platform for developing multimodal generative AI models at scale anywhere—on any cloud and on-premises—recently released Parakeet-TDT.
This new addition to the NeMo ASR Parakeet model family boasts better accuracy and 64% greater speed over the previously best model, Parakeet-RNNT-1.1B.
<br><br>
</details>
</details>
<!-- markdownlint-enable -->
## Introduction
NVIDIA NeMo Framework is a scalable and cloud-native generative AI
framework built for researchers and PyTorch developers working on Large
Language Models (LLMs), Multimodal Models (MMs), Automatic Speech
Recognition (ASR), Text to Speech (TTS), and Computer Vision (CV)
domains. It is designed to help you efficiently create, customize, and
deploy new generative AI models by leveraging existing code and
pre-trained model checkpoints.
For technical documentation, please see the [NeMo Framework User
Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/index.html).
## What's New in NeMo 2.0
NVIDIA NeMo 2.0 introduces several significant improvements over its predecessor, NeMo 1.0, enhancing flexibility, performance, and scalability.
- **Python-Based Configuration** - NeMo 2.0 transitions from YAML files to a Python-based configuration, providing more flexibility and control. This shift makes it easier to extend and customize configurations programmatically.
- **Modular Abstractions** - By adopting PyTorch Lightning’s modular abstractions, NeMo 2.0 simplifies adaptation and experimentation. This modular approach allows developers to more easily modify and experiment with different components of their models.
- **Scalability** - NeMo 2.0 seamlessly scales large-scale experiments across thousands of GPUs using [NeMo-Run](https://github.com/NVIDIA/NeMo-Run), a powerful tool designed to streamline the configuration, execution, and management of machine learning experiments across computing environments.
Overall, these enhancements make NeMo 2.0 a powerful, scalable, and user-friendly framework for AI model development.
> [!IMPORTANT]
> NeMo 2.0 is currently supported by the LLM (large language model) and VLM (vision language model) collections.
### Get Started with NeMo 2.0
- Refer to the [Quickstart](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/quickstart.html) for examples of using NeMo-Run to launch NeMo 2.0 experiments locally and on a slurm cluster.
- For more information about NeMo 2.0, see the [NeMo Framework User Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/index.html).
- [NeMo 2.0 Recipes](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/llm/recipes) contains additional examples of launching large-scale runs using NeMo 2.0 and NeMo-Run.
- For an in-depth exploration of the main features of NeMo 2.0, see the [Feature Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/features/index.html#feature-guide).
- To transition from NeMo 1.0 to 2.0, see the [Migration Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/migration/index.html#migration-guide) for step-by-step instructions.
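Building on the Quickstart linked above, the following is a minimal, illustrative sketch of a NeMo 2.0 pretraining launch with NeMo-Run. The recipe name, output directory, and GPU counts are placeholders, and the exact API may differ between releases; defer to the Quickstart for the current interface.

```python
# Illustrative sketch only: recipe, paths, and GPU counts are placeholders;
# see the NeMo 2.0 Quickstart for the exact, current API.
import nemo_run as run
from nemo.collections import llm


def configure_recipe(nodes: int = 1, gpus_per_node: int = 2):
    # Start from a predefined recipe and override fields programmatically,
    # instead of editing YAML files as in NeMo 1.0.
    recipe = llm.nemotron3_4b.pretrain_recipe(
        dir="/checkpoints/nemotron",      # placeholder output directory
        name="nemotron_pretraining",
        num_nodes=nodes,
        num_gpus_per_node=gpus_per_node,
    )
    recipe.trainer.max_steps = 100        # example programmatic override
    return recipe


if __name__ == "__main__":
    recipe = configure_recipe()
    # Run locally via torchrun; a Slurm executor can be swapped in for clusters.
    executor = run.LocalExecutor(ntasks_per_node=2, launcher="torchrun")
    run.run(recipe, executor=executor)
```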
### Get Started with Cosmos
NeMo Curator and NeMo Framework support video curation and post-training of the Cosmos World Foundation Models, which are open and available on [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/cosmos/collections/cosmos) and [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6). For more information on video datasets, refer to [NeMo Curator](https://developer.nvidia.com/nemo-curator). To post-train World Foundation Models using the NeMo Framework for your custom physical AI tasks, see the [Cosmos Diffusion models](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/post_training/README.md) and the [Cosmos Autoregressive models](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/post_training/README.md).
## LLMs and MMs Training, Alignment, and Customization
All NeMo models are trained with
[Lightning](https://github.com/Lightning-AI/lightning). Training is
automatically scalable to 1000s of GPUs. You can check the performance benchmarks using the
latest NeMo Framework container [here](https://docs.nvidia.com/nemo-framework/user-guide/latest/performance/performance_summary.html).
When applicable, NeMo models leverage cutting-edge distributed training
techniques, incorporating [parallelism
strategies](https://docs.nvidia.com/nemo-framework/user-guide/latest/modeloverview.html)
to enable efficient training of very large models. These techniques
include Tensor Parallelism (TP), Pipeline Parallelism (PP), Fully
Sharded Data Parallelism (FSDP), Mixture-of-Experts (MoE), and Mixed
Precision Training with BFloat16 and FP8, as well as others.
NeMo Transformer-based LLMs and MMs utilize [NVIDIA Transformer
Engine](https://github.com/NVIDIA/TransformerEngine) for FP8 training on
NVIDIA Hopper GPUs, while leveraging [NVIDIA Megatron
Core](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core) for
scaling Transformer model training.
NeMo LLMs can be aligned with state-of-the-art methods such as SteerLM,
Direct Preference Optimization (DPO), and Reinforcement Learning from
Human Feedback (RLHF). See [NVIDIA NeMo
Aligner](https://github.com/NVIDIA/NeMo-Aligner) for more information.
In addition to supervised fine-tuning (SFT), NeMo also supports the
latest parameter efficient fine-tuning (PEFT) techniques such as LoRA,
P-Tuning, Adapters, and IA3. Refer to the [NeMo Framework User
Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/sft_peft/index.html)
for the full list of supported models and techniques. As a hedged illustration of how such a PEFT run can be configured with the NeMo 2.0 recipes, see the sketch below; the recipe module, paths, and `peft_scheme` value are assumptions, so consult the User Guide for the supported options.
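```python
# Hedged sketch of a LoRA (PEFT) fine-tuning launch with a NeMo 2.0 recipe.
# The recipe module, output directory, and peft_scheme value are assumptions;
# see the NeMo Framework User Guide for the supported models and options.
import nemo_run as run
from nemo.collections import llm

recipe = llm.llama3_8b.finetune_recipe(
    dir="/checkpoints/llama3_8b_lora",   # placeholder output directory
    name="llama3_8b_lora",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme="lora",                  # e.g. "none" would run full-parameter SFT
)

if __name__ == "__main__":
    executor = run.LocalExecutor(ntasks_per_node=8, launcher="torchrun")
    run.run(recipe, executor=executor)
```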
## LLMs and MMs Deployment and Optimization
NeMo LLMs and MMs can be deployed and optimized with [NVIDIA NeMo
Microservices](https://developer.nvidia.com/nemo-microservices-early-access).
## Speech AI
NeMo ASR and TTS models can be optimized for inference and deployed for
production use cases with [NVIDIA Riva](https://developer.nvidia.com/riva).
## NeMo Framework Launcher
> [!IMPORTANT]
> NeMo Framework Launcher is compatible with NeMo version 1.0 only. [NeMo-Run](https://github.com/NVIDIA/NeMo-Run) is recommended for launching experiments using NeMo 2.0.
[NeMo Framework
Launcher](https://github.com/NVIDIA/NeMo-Megatron-Launcher) is a
cloud-native tool that streamlines the NeMo Framework experience. It is
used for launching end-to-end NeMo Framework training jobs on CSPs and
Slurm clusters.
The NeMo Framework Launcher includes extensive recipes, scripts,
utilities, and documentation for training NeMo LLMs. It also includes
the NeMo Framework [Autoconfigurator](https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration),
which is designed to find the optimal model parallel configuration for
training on a specific cluster.
To get started quickly with the NeMo Framework Launcher, please see the
[NeMo Framework
Playbooks](https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/index.html).
The NeMo Framework Launcher does not currently support ASR and TTS
training, but it will soon.
## Get Started with NeMo Framework
Getting started with NeMo Framework is easy. State-of-the-art pretrained
NeMo models are freely available on [Hugging Face
Hub](https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia)
and [NVIDIA
NGC](https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC).
These models can be used to generate text or images, transcribe audio,
and synthesize speech in just a few lines of code.
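For example, a pretrained ASR checkpoint can be pulled and used for transcription in a few lines; the model name and audio path below are placeholders, and any NeMo ASR model from the Hub can be substituted.

```python
# Load a pretrained NeMo ASR checkpoint and transcribe a local audio file.
# The model name and audio path are placeholders.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-tdt-1.1b")
transcripts = asr_model.transcribe(["path/to/audio.wav"])  # 16 kHz mono audio works best
print(transcripts[0])
```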
We have extensive
[tutorials](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html)
that can be run on [Google Colab](https://colab.research.google.com) or
with our [NGC NeMo Framework
Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo).
We also have
[playbooks](https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/index.html)
for users who want to train NeMo models with the NeMo Framework
Launcher.
For advanced users who want to train NeMo models from scratch or
fine-tune existing NeMo models, we have a full suite of [example
scripts](https://github.com/NVIDIA/NeMo/tree/main/examples) that support
multi-GPU/multi-node training.
## Key Features
- [Large Language Models](nemo/collections/nlp/README.md)
- [Multimodal](nemo/collections/multimodal/README.md)
- [Automatic Speech Recognition](nemo/collections/asr/README.md)
- [Text to Speech](nemo/collections/tts/README.md)
- [Computer Vision](nemo/collections/vision/README.md)
## Requirements
- Python 3.10 or above
- PyTorch 2.5 or above
- NVIDIA GPU (if you intend to do model training)
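A quick, informal way to sanity-check these requirements in an existing environment:

```python
# Quick sanity check of the requirements above in an existing environment.
import sys

import torch

assert sys.version_info >= (3, 10), "NeMo requires Python 3.10 or above"
print("PyTorch version:", torch.__version__)          # should be 2.5 or above
print("CUDA available:", torch.cuda.is_available())   # needed for model training
```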
## Developer Documentation
| Version | Status | Description |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| Latest | [Docs](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/) | [Documentation of the latest (i.e. main) branch.](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/) |
| Stable | [Docs](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/) | [Documentation of the stable (i.e. most recent) release.](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/) |
## Install NeMo Framework
The NeMo Framework can be installed in a variety of ways, depending on
your needs. Depending on the domain, you may find one of the following
installation methods more suitable.
- [Conda / Pip](#conda--pip): Install NeMo-Framework with native Pip into a virtual environment.
- Used to explore NeMo on any supported platform.
- This is the recommended method for ASR and TTS domains.
- Limited feature-completeness for other domains.
- [NGC PyTorch container](#ngc-pytorch-container): Install NeMo-Framework from source with feature-completeness into a highly optimized container.
- For users that want to install from source in a highly optimized container.
- [NGC NeMo container](#ngc-nemo-container): Ready-to-go solution of NeMo-Framework
- For users that seek highest performance.
- Contains all dependencies installed and tested for performance and convergence.
### Support matrix
NeMo-Framework provides tiers of support based on OS / Platform and mode of installation. Please refer to the following overview of support levels:
- Fully supported: Max performance and feature-completeness.
- Limited support: Used to explore NeMo.
- No support yet: In development.
- Deprecated: Support has reached end of life.
Please refer to the following table for current support levels:
| OS / Platform | Install from PyPI | Source into NGC container |
|----------------------------|-------------------|---------------------------|
| `linux` - `amd64/x86_64` | Limited support | Full support |
| `linux` - `arm64` | Limited support | Limited support |
| `darwin` - `amd64/x86_64` | Deprecated | Deprecated |
| `darwin` - `arm64` | Limited support | Limited support |
| `windows` - `amd64/x86_64` | No support yet | No support yet |
| `windows` - `arm64` | No support yet | No support yet |
### Conda / Pip
Install NeMo in a fresh Conda environment:
```bash
conda create --name nemo python==3.10.12
conda activate nemo
```
#### Pick the right version
NeMo-Framework publishes pre-built wheels with each release.
To install nemo_toolkit from such a wheel, use the following installation method:
```bash
pip install "nemo_toolkit[all]"
```
If a more specific version is desired, we recommend a Pip-VCS install. From [NVIDIA/NeMo](https://github.com/NVIDIA/NeMo), fetch the commit, branch, or tag that you would like to install.
To install nemo_toolkit from this Git reference `$REF`, use the following installation method:
```bash
git clone https://github.com/NVIDIA/NeMo
cd NeMo
git checkout ${REF:-'main'}
pip install '.[all]'
```
#### Install a specific Domain
To install a specific domain of NeMo, you must first install the
nemo_toolkit using the instructions listed above. Then, you run the
following domain-specific commands:
```bash
pip install nemo_toolkit['all'] # or pip install "nemo_toolkit['all']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['asr'] # or pip install "nemo_toolkit['asr']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['nlp'] # or pip install "nemo_toolkit['nlp']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['tts'] # or pip install "nemo_toolkit['tts']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['vision'] # or pip install "nemo_toolkit['vision']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['multimodal'] # or pip install "nemo_toolkit['multimodal']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
```
### NGC PyTorch container
**NOTE: The following steps are supported beginning with 24.04 (NeMo-Toolkit 2.3.0)**
We recommend that you start with a base NVIDIA PyTorch container:
nvcr.io/nvidia/pytorch:25.01-py3.
If starting with a base NVIDIA PyTorch container, you must first launch
the container:
```bash
docker run \
--gpus all \
-it \
--rm \
--shm-size=16g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
nvcr.io/nvidia/pytorch:${NV_PYTORCH_TAG:-25.01-py3}
```
From [NVIDIA/NeMo](https://github.com/NVIDIA/NeMo), fetch the commit/branch/tag that you want to install.
To install nemo_toolkit including all of its dependencies from this Git reference `$REF`, use the following installation method:
```bash
cd /opt
git clone https://github.com/NVIDIA/NeMo
cd NeMo
git checkout ${REF:-'main'}
bash reinstall.sh --library all
```
### NGC NeMo container
NeMo containers are launched concurrently with NeMo version updates.
NeMo Framework now supports LLMs, MMs, ASR, and TTS in a single
consolidated Docker container. You can find additional information about
released containers on the [NeMo releases
page](https://github.com/NVIDIA/NeMo/releases).
To use a pre-built container, run the following code:
```bash
docker run \
--gpus all \
-it \
--rm \
--shm-size=16g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
nvcr.io/nvidia/nemo:25.02
```
## Future Work
The NeMo Framework Launcher does not currently support ASR and TTS
training, but it will soon.
## Discussions Board
The FAQ can be found on the NeMo [Discussions
board](https://github.com/NVIDIA/NeMo/discussions). You are welcome to
ask questions or start discussions on the board.
## Contribute to NeMo
We welcome community contributions! Please refer to
[CONTRIBUTING.md](https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md)
for the process.
## Publications
We provide an ever-growing list of
[publications](https://nvidia.github.io/NeMo/publications/) that utilize
the NeMo Framework.
To contribute an article to the collection, please submit a pull request
to the `gh-pages-src` branch of this repository. For detailed
information, please consult the README located at the [gh-pages-src
branch](https://github.com/NVIDIA/NeMo/tree/gh-pages-src#readme).
## Blogs
<!-- markdownlint-disable -->
<details open>
<summary><b>Large Language Models and Multimodal Models</b></summary>
<details>
<summary>
<a href="https://blogs.nvidia.com/blog/bria-builds-responsible-generative-ai-using-nemo-picasso/">
Bria Builds Responsible Generative AI for Enterprises Using NVIDIA NeMo, Picasso
</a> (2024/03/06)
</summary>
Bria, a Tel Aviv startup at the forefront of visual generative AI for enterprises, now leverages the NVIDIA NeMo Framework.
The Bria.ai platform uses reference implementations from the NeMo Multimodal collection, trained on NVIDIA Tensor Core GPUs, to enable high-throughput and low-latency image generation.
Bria has also adopted NVIDIA Picasso, a foundry for visual generative AI models, to run inference.
<br><br>
</details>
<details>
<summary>
<a href="https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/">
New NVIDIA NeMo Framework Features and NVIDIA H200
</a> (2023/12/06)
</summary>
NVIDIA NeMo Framework now includes several optimizations and enhancements,
including:
1) Fully Sharded Data Parallelism (FSDP) to improve the efficiency of training large-scale AI models,
2) Mixture of Experts (MoE)-based LLM architectures with expert parallelism for efficient LLM training at scale,
3) Reinforcement Learning from Human Feedback (RLHF) with TensorRT-LLM for inference stage acceleration, and
4) up to 4.2x speedups for Llama 2 pre-training on NVIDIA H200 Tensor Core GPUs.
<br><br>
<a href="https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility">
<img src="https://github.com/sbhavani/TransformerEngine/blob/main/docs/examples/H200-NeMo-performance.png" alt="H200-NeMo-performance" style="width: 600px;"></a>
<br><br>
</details>
<details>
<summary>
<a href="https://blogs.nvidia.com/blog/nemo-amazon-titan/">
NVIDIA now powers training for Amazon Titan Foundation models
</a> (2023/11/28)
</summary>
NVIDIA NeMo Framework now empowers the Amazon Titan foundation models (FM) with efficient training of large language models (LLMs).
The Titan FMs form the basis of Amazon’s generative AI service, Amazon Bedrock.
The NeMo Framework provides a versatile framework for building, customizing, and running LLMs.
<br><br>
</details>
</details>
<!-- markdownlint-enable -->
## Licenses
- [NeMo GitHub Apache 2.0
license](https://github.com/NVIDIA/NeMo?tab=Apache-2.0-1-ov-file#readme)
- NeMo is licensed under the [NVIDIA AI PRODUCT
AGREEMENT](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/).
By pulling and using the container, you accept the terms and
conditions of this license.
Raw data
{
"_id": null,
"home_page": "https://github.com/nvidia/nemo",
"name": "nemo-toolkit",
"maintainer": "NVIDIA",
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": "NVIDIA <nemo-toolkit@nvidia.com>",
"keywords": "NLP, NeMo, deep, gpu, language, learning, learning, machine, nvidia, pytorch, speech, torch, tts",
"author": "NVIDIA",
"author_email": "NVIDIA <nemo-toolkit@nvidia.com>",
"download_url": "https://files.pythonhosted.org/packages/88/03/2ff323dea400bfbedd667f4ce76b35f4d6d6329bd305dca7cf9deeb6754a/nemo_toolkit-2.3.2.tar.gz",
"platform": null,
"description": "[](http://www.repostatus.org/#active)\n[](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/)\n[](https://github.com/nvidia/nemo/actions/workflows/codeql.yml)\n[](https://github.com/NVIDIA/NeMo/blob/master/LICENSE)\n[](https://badge.fury.io/py/nemo-toolkit)\n[](https://badge.fury.io/py/nemo-toolkit)\n[](https://pepy.tech/project/nemo-toolkit)\n[](https://github.com/psf/black)\n\n# **NVIDIA NeMo Framework**\n\n## Latest News\n\n<!-- markdownlint-disable -->\n<details open>\n <summary><b>Pretrain and finetune :hugs:Hugging Face models via AutoModel</b></summary>\n Nemo Framework's latest feature AutoModel enables broad support for :hugs:Hugging Face models, with 25.02 focusing on <a href=https://huggingface.co/transformers/v3.5.1/model_doc/auto.html#automodelforcausallm>AutoModelForCausalLM<a> in the <a href=https://huggingface.co/models?pipeline_tag=text-generation&sort=trending>text generation category<a>. Future releases will enable support for more model families such as Vision Language Model.\n</details>\n\n<details open>\n <summary><b>Training on Blackwell using Nemo</b></summary>\n NeMo Framework has added Blackwell support, with 25.02 focusing on functional parity for B200. More optimizations to come in the upcoming releases.\n</details>\n\n\n<details open>\n <summary><b>NeMo Framework 2.0</b></summary>\n We've released NeMo 2.0, an update on the NeMo Framework which prioritizes modularity and ease-of-use. Please refer to the <a href=https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/index.html>NeMo Framework User Guide</a> to get started.\n</details>\n<details open>\n <summary><b>New Cosmos World Foundation Models Support</b></summary>\n <details> \n <summary> <a href=\"https://developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform\">Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform </a> (2025-01-09) \n </summary> \n The end-to-end NVIDIA Cosmos platform accelerates world model development for physical AI systems. Built on CUDA, Cosmos combines state-of-the-art world foundation models, video tokenizers, and AI-accelerated data processing pipelines. Developers can accelerate world model development by fine-tuning Cosmos world foundation models or building new ones from the ground up. These models create realistic synthetic videos of environments and interactions, providing a scalable foundation for training complex systems, from simulating humanoid robots performing advanced actions to developing end-to-end autonomous driving models. \n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/accelerate-custom-video-foundation-model-pipelines-with-new-nvidia-nemo-framework-capabilities/\">\n Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities\n </a> (2025-01-07)\n </summary>\n The NeMo Framework now supports training and customizing the <a href=\"https://github.com/NVIDIA/Cosmos\">NVIDIA Cosmos</a> collection of world foundation models. 
Cosmos leverages advanced text-to-world generation techniques to create fluid, coherent video content from natural language prompts.\n <br><br>\n You can also now accelerate your video processing step using the <a href=\"https://developer.nvidia.com/nemo-curator-video-processing-early-access\">NeMo Curator</a> library, which provides optimized video processing and captioning features that can deliver up to 89x faster video processing when compared to an unoptimized CPU pipeline.\n <br><br>\n </details>\n</details>\n<details open>\n <summary><b>Large Language Models and Multimodal Models</b></summary>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/\">\n State-of-the-Art Multimodal Generative AI Model Development with NVIDIA NeMo\n </a> (2024-11-06)\n </summary>\n NVIDIA recently announced significant enhancements to the NeMo platform, focusing on multimodal generative AI models. The update includes NeMo Curator and the Cosmos tokenizer, which streamline the data curation process and enhance the quality of visual data. These tools are designed to handle large-scale data efficiently, making it easier to develop high-quality AI models for various applications, including robotics and autonomous driving. The Cosmos tokenizers, in particular, efficiently map visual data into compact, semantic tokens, which is crucial for training large-scale generative models. The tokenizer is available now on the <a href=http://github.com/NVIDIA/cosmos-tokenizer/NVIDIA/cosmos-tokenizer>NVIDIA/cosmos-tokenizer</a> GitHub repo and on <a href=https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8>Hugging Face</a>.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://docs.nvidia.com/nemo-framework/user-guide/latest/llms/llama/index.html#new-llama-3-1-support for more information/\">\n New Llama 3.1 Support\n </a> (2024-07-23)\n </summary>\n The NeMo Framework now supports training and customizing the Llama 3.1 collection of LLMs from Meta.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://aws.amazon.com/blogs/machine-learning/accelerate-your-generative-ai-distributed-training-workloads-with-the-nvidia-nemo-framework-on-amazon-eks/\">\n Accelerate your Generative AI Distributed Training Workloads with the NVIDIA NeMo Framework on Amazon EKS\n </a> (2024-07-16)\n </summary>\n NVIDIA NeMo Framework now runs distributed training workloads on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. For step-by-step instructions on creating an EKS cluster and running distributed training workloads with NeMo, see the GitHub repository <a href=\"https://github.com/aws-samples/awsome-distributed-training/tree/main/3.test_cases/2.nemo-launcher/EKS/\"> here.</a>\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/nvidia-nemo-accelerates-llm-innovation-with-hybrid-state-space-model-support/\">\n NVIDIA NeMo Accelerates LLM Innovation with Hybrid State Space Model Support\n </a> (2024/06/17)\n </summary>\n NVIDIA NeMo and Megatron Core now support pre-training and fine-tuning of state space models (SSMs). NeMo also supports training models based on the Griffin architecture as described by Google DeepMind. 
\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://huggingface.co/models?sort=trending&search=nvidia%2Fnemotron-4-340B\">\n NVIDIA releases 340B base, instruct, and reward models pretrained on a total of 9T tokens.\n </a> (2024-06-18)\n </summary>\n See documentation and tutorials for SFT, PEFT, and PTQ with \n <a href=\"https://docs.nvidia.com/nemo-framework/user-guide/latest/llms/nemotron/index.html\">\n Nemotron 340B \n </a>\n in the NeMo Framework User Guide.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/nvidia-sets-new-generative-ai-performance-and-scale-records-in-mlperf-training-v4-0/\">\n NVIDIA sets new generative AI performance and scale records in MLPerf Training v4.0\n </a> (2024/06/12)\n </summary>\n Using NVIDIA NeMo Framework and NVIDIA Hopper GPUs NVIDIA was able to scale to 11,616 H100 GPUs and achieve near-linear performance scaling on LLM pretraining. \n NVIDIA also achieved the highest LLM fine-tuning performance and raised the bar for text-to-image training.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://cloud.google.com/blog/products/compute/gke-and-nvidia-nemo-framework-to-train-generative-ai-models\">\n Accelerate your generative AI journey with NVIDIA NeMo Framework on GKE\n </a> (2024/03/16)\n </summary>\n An end-to-end walkthrough to train generative AI models on the Google Kubernetes Engine (GKE) using the NVIDIA NeMo Framework is available at https://github.com/GoogleCloudPlatform/nvidia-nemo-on-gke. \n The walkthrough includes detailed instructions on how to set up a Google Cloud Project and pre-train a GPT model using the NeMo Framework.\n <br><br>\n </details>\n</details>\n<details open>\n <summary><b>Speech Recognition</b></summary>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/accelerating-leaderboard-topping-asr-models-10x-with-nvidia-nemo/\">\n Accelerating Leaderboard-Topping ASR Models 10x with NVIDIA NeMo\n </a> (2024/09/24)\n </summary>\n NVIDIA NeMo team released a number of inference optimizations for CTC, RNN-T, and TDT models that resulted in up to 10x inference speed-up. \n These models now exceed an inverse real-time factor (RTFx) of 2,000, with some reaching RTFx of even 6,000.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/new-standard-for-speech-recognition-and-translation-from-the-nvidia-nemo-canary-model/\">\n New Standard for Speech Recognition and Translation from the NVIDIA NeMo Canary Model\n </a> (2024/04/18)\n </summary>\n The NeMo team just released Canary, a multilingual model that transcribes speech in English, Spanish, German, and French with punctuation and capitalization. \n Canary also provides bi-directional translation, between English and the three other supported languages.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/pushing-the-boundaries-of-speech-recognition-with-nemo-parakeet-asr-models/\">\n Pushing the Boundaries of Speech Recognition with NVIDIA NeMo Parakeet ASR Models\n </a> (2024/04/18)\n </summary>\n NVIDIA NeMo, an end-to-end platform for the development of multimodal generative AI models at scale anywhere\u2014on any cloud and on-premises\u2014released the Parakeet family of automatic speech recognition (ASR) models. 
\n These state-of-the-art ASR models, developed in collaboration with Suno.ai, transcribe spoken English with exceptional accuracy.\n <br><br>\n </details>\n <details>\n <summary>\n <a href=\"https://developer.nvidia.com/blog/turbocharge-asr-accuracy-and-speed-with-nvidia-nemo-parakeet-tdt/\">\n Turbocharge ASR Accuracy and Speed with NVIDIA NeMo Parakeet-TDT\n </a> (2024/04/18)\n </summary>\n NVIDIA NeMo, an end-to-end platform for developing multimodal generative AI models at scale anywhere\u2014on any cloud and on-premises\u2014recently released Parakeet-TDT. \n This new addition to the \u202fNeMo ASR Parakeet model family boasts better accuracy and 64% greater speed over the previously best model, Parakeet-RNNT-1.1B.\n <br><br>\n </details>\n</details>\n<!-- markdownlint-enable -->\n\n## Introduction\n\nNVIDIA NeMo Framework is a scalable and cloud-native generative AI\nframework built for researchers and PyTorch developers working on Large\nLanguage Models (LLMs), Multimodal Models (MMs), Automatic Speech\nRecognition (ASR), Text to Speech (TTS), and Computer Vision (CV)\ndomains. It is designed to help you efficiently create, customize, and\ndeploy new generative AI models by leveraging existing code and\npre-trained model checkpoints.\n\nFor technical documentation, please see the [NeMo Framework User\nGuide](https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/index.html).\n\n## What's New in NeMo 2.0\n\nNVIDIA NeMo 2.0 introduces several significant improvements over its predecessor, NeMo 1.0, enhancing flexibility, performance, and scalability.\n\n- **Python-Based Configuration** - NeMo 2.0 transitions from YAML files to a Python-based configuration, providing more flexibility and control. This shift makes it easier to extend and customize configurations programmatically.\n\n- **Modular Abstractions** - By adopting PyTorch Lightning\u2019s modular abstractions, NeMo 2.0 simplifies adaptation and experimentation. 
This modular approach allows developers to more easily modify and experiment with different components of their models.\n\n- **Scalability** - NeMo 2.0 seamlessly scaling large-scale experiments across thousands of GPUs using [NeMo-Run](https://github.com/NVIDIA/NeMo-Run), a powerful tool designed to streamline the configuration, execution, and management of machine learning experiments across computing environments.\n\nOverall, these enhancements make NeMo 2.0 a powerful, scalable, and user-friendly framework for AI model development.\n\n> [!IMPORTANT] \n> NeMo 2.0 is currently supported by the LLM (large language model) and VLM (vision language model) collections.\n\n### Get Started with NeMo 2.0\n\n- Refer to the [Quickstart](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/quickstart.html) for examples of using NeMo-Run to launch NeMo 2.0 experiments locally and on a slurm cluster.\n- For more information about NeMo 2.0, see the [NeMo Framework User Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/index.html).\n- [NeMo 2.0 Recipes](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/llm/recipes) contains additional examples of launching large-scale runs using NeMo 2.0 and NeMo-Run.\n- For an in-depth exploration of the main features of NeMo 2.0, see the [Feature Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/features/index.html#feature-guide).\n- To transition from NeMo 1.0 to 2.0, see the [Migration Guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemo-2.0/migration/index.html#migration-guide) for step-by-step instructions.\n\n### Get Started with Cosmos\n\nNeMo Curator and NeMo Framework support video curation and post-training of the Cosmos World Foundation Models, which are open and available on [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/cosmos/collections/cosmos) and [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6). For more information on video datasets, refer to [NeMo Curator](https://developer.nvidia.com/nemo-curator). To post-train World Foundation Models using the NeMo Framework for your custom physical AI tasks, see the [Cosmos Diffusion models](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/post_training/README.md) and the [Cosmos Autoregressive models](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/post_training/README.md).\n\n## LLMs and MMs Training, Alignment, and Customization\n\nAll NeMo models are trained with\n[Lightning](https://github.com/Lightning-AI/lightning). Training is\nautomatically scalable to 1000s of GPUs. You can check the performance benchmarks using the\nlatest NeMo Framework container [here](https://docs.nvidia.com/nemo-framework/user-guide/latest/performance/performance_summary.html).\n\nWhen applicable, NeMo models leverage cutting-edge distributed training\ntechniques, incorporating [parallelism\nstrategies](https://docs.nvidia.com/nemo-framework/user-guide/latest/modeloverview.html)\nto enable efficient training of very large models. 
These techniques\ninclude Tensor Parallelism (TP), Pipeline Parallelism (PP), Fully\nSharded Data Parallelism (FSDP), Mixture-of-Experts (MoE), and Mixed\nPrecision Training with BFloat16 and FP8, as well as others.\n\nNeMo Transformer-based LLMs and MMs utilize [NVIDIA Transformer\nEngine](https://github.com/NVIDIA/TransformerEngine) for FP8 training on\nNVIDIA Hopper GPUs, while leveraging [NVIDIA Megatron\nCore](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core) for\nscaling Transformer model training.\n\nNeMo LLMs can be aligned with state-of-the-art methods such as SteerLM,\nDirect Preference Optimization (DPO), and Reinforcement Learning from\nHuman Feedback (RLHF). See [NVIDIA NeMo\nAligner](https://github.com/NVIDIA/NeMo-Aligner) for more information.\n\nIn addition to supervised fine-tuning (SFT), NeMo also supports the\nlatest parameter efficient fine-tuning (PEFT) techniques such as LoRA,\nP-Tuning, Adapters, and IA3. Refer to the [NeMo Framework User\nGuide](https://docs.nvidia.com/nemo-framework/user-guide/latest/sft_peft/index.html)\nfor the full list of supported models and techniques.\n\n## LLMs and MMs Deployment and Optimization\n\nNeMo LLMs and MMs can be deployed and optimized with [NVIDIA NeMo\nMicroservices](https://developer.nvidia.com/nemo-microservices-early-access).\n\n## Speech AI\n\nNeMo ASR and TTS models can be optimized for inference and deployed for\nproduction use cases with [NVIDIA Riva](https://developer.nvidia.com/riva).\n\n## NeMo Framework Launcher\n\n> [!IMPORTANT] \n> NeMo Framework Launcher is compatible with NeMo version 1.0 only. [NeMo-Run](https://github.com/NVIDIA/NeMo-Run) is recommended for launching experiments using NeMo 2.0.\n\n[NeMo Framework\nLauncher](https://github.com/NVIDIA/NeMo-Megatron-Launcher) is a\ncloud-native tool that streamlines the NeMo Framework experience. It is\nused for launching end-to-end NeMo Framework training jobs on CSPs and\nSlurm clusters.\n\nThe NeMo Framework Launcher includes extensive recipes, scripts,\nutilities, and documentation for training NeMo LLMs. It also includes\nthe NeMo Framework [Autoconfigurator](https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration),\nwhich is designed to find the optimal model parallel configuration for\ntraining on a specific cluster.\n\nTo get started quickly with the NeMo Framework Launcher, please see the\n[NeMo Framework\nPlaybooks](https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/index.html).\nThe NeMo Framework Launcher does not currently support ASR and TTS\ntraining, but it will soon.\n\n## Get Started with NeMo Framework\n\nGetting started with NeMo Framework is easy. 
State-of-the-art pretrained\nNeMo models are freely available on [Hugging Face\nHub](https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia)\nand [NVIDIA\nNGC](https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC).\nThese models can be used to generate text or images, transcribe audio,\nand synthesize speech in just a few lines of code.\n\nWe have extensive\n[tutorials](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html)\nthat can be run on [Google Colab](https://colab.research.google.com) or\nwith our [NGC NeMo Framework\nContainer](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo).\nWe also have\n[playbooks](https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/index.html)\nfor users who want to train NeMo models with the NeMo Framework\nLauncher.\n\nFor advanced users who want to train NeMo models from scratch or\nfine-tune existing NeMo models, we have a full suite of [example\nscripts](https://github.com/NVIDIA/NeMo/tree/main/examples) that support\nmulti-GPU/multi-node training.\n\n## Key Features\n\n- [Large Language Models](nemo/collections/nlp/README.md)\n- [Multimodal](nemo/collections/multimodal/README.md)\n- [Automatic Speech Recognition](nemo/collections/asr/README.md)\n- [Text to Speech](nemo/collections/tts/README.md)\n- [Computer Vision](nemo/collections/vision/README.md)\n\n## Requirements\n\n- Python 3.10 or above\n- Pytorch 2.5 or above\n- NVIDIA GPU (if you intend to do model training)\n\n## Developer Documentation\n\n| Version | Status | Description |\n| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |\n| Latest | [](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/) | [Documentation of the latest (i.e. main) branch.](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/) |\n| Stable | [](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/) | [Documentation of the stable (i.e. most recent release)](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/) |\n\n## Install NeMo Framework\n\nThe NeMo Framework can be installed in a variety of ways, depending on\nyour needs. Depending on the domain, you may find one of the following\ninstallation methods more suitable.\n\n- [Conda / Pip](#conda--pip): Install NeMo-Framework with native Pip into a virtual environment.\n - Used to explore NeMo on any supported platform.\n - This is the recommended method for ASR and TTS domains.\n - Limited feature-completeness for other domains.\n- [NGC PyTorch container](#ngc-pytorch-container): Install NeMo-Framework from source with feature-completeness into a highly optimized container.\n - For users that want to install from source in a highly optimized container.\n- [NGC NeMo container](#ngc-nemo-container): Ready-to-go solution of NeMo-Framework\n - For users that seek highest performance.\n - Contains all dependencies installed and tested for performance and convergence.\n\n### Support matrix\n\nNeMo-Framework provides tiers of support based on OS / Platform and mode of installation. 
If you need a specific commit, branch, or tag rather than a released wheel, we recommend a Pip-VCS install. From [NVIDIA/NeMo](https://github.com/NVIDIA/NeMo), fetch the commit, branch, or tag that you would like to install. To install nemo_toolkit from this Git reference `$REF`, use the following installation method:

```bash
git clone https://github.com/NVIDIA/NeMo
cd NeMo
git checkout ${REF:-'main'}
pip install '.[all]'
```

#### Install a specific Domain

To install a specific domain of NeMo, you must first install the nemo_toolkit using the instructions listed above. Then, run the following domain-specific commands:

```bash
pip install nemo_toolkit['all']        # or pip install "nemo_toolkit['all']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['asr']        # or pip install "nemo_toolkit['asr']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['nlp']        # or pip install "nemo_toolkit['nlp']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['tts']        # or pip install "nemo_toolkit['tts']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['vision']     # or pip install "nemo_toolkit['vision']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
pip install nemo_toolkit['multimodal'] # or pip install "nemo_toolkit['multimodal']@git+https://github.com/NVIDIA/NeMo@${REF:-'main'}"
```

### NGC PyTorch container

**NOTE: The following steps are supported beginning with the 24.04 NGC PyTorch container (NeMo-Toolkit 2.3.0).**

We recommend that you start with a base NVIDIA PyTorch container: `nvcr.io/nvidia/pytorch:25.01-py3`.

If starting with a base NVIDIA PyTorch container, you must first launch the container:

```bash
docker run \
    --gpus all \
    -it \
    --rm \
    --shm-size=16g \
    --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    ${NV_PYTORCH_TAG:-'nvcr.io/nvidia/pytorch:25.01-py3'}
```
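Before installing NeMo from source inside the container, it is worth confirming that the GPUs are actually visible to it. A quick sanity check, run inside the container started above (which already ships with PyTorch):

```bash
# Run inside the NGC PyTorch container.
nvidia-smi
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```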
From [NVIDIA/NeMo](https://github.com/NVIDIA/NeMo), fetch the commit, branch, or tag that you want to install. To install nemo_toolkit including all of its dependencies from this Git reference `$REF`, use the following installation method:

```bash
cd /opt
git clone https://github.com/NVIDIA/NeMo
cd NeMo
git checkout ${REF:-'main'}
bash reinstall.sh --library all
```

### NGC NeMo container

NeMo containers are released alongside NeMo version updates. NeMo Framework now supports LLMs, MMs, ASR, and TTS in a single consolidated Docker container. You can find additional information about released containers on the [NeMo releases page](https://github.com/NVIDIA/NeMo/releases).

To use a pre-built container, run the following code:

```bash
docker run \
    --gpus all \
    -it \
    --rm \
    --shm-size=16g \
    --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    nvcr.io/nvidia/nemo:25.02
```
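Because the NeMo container ships with the toolkit preinstalled, a one-off import check is a simple way to confirm the image you pulled works on your machine (the `25.02` tag mirrors the example above; substitute the release you use):

```bash
# Quick check that the prebuilt container can import the toolkit.
docker run --rm nvcr.io/nvidia/nemo:25.02 \
    python -c "import nemo; print(nemo.__version__)"
```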
## Future Work

The NeMo Framework Launcher does not currently support ASR and TTS training, but it will soon.

## Discussions Board

FAQ can be found on the NeMo [Discussions board](https://github.com/NVIDIA/NeMo/discussions). You are welcome to ask questions or start discussions on the board.

## Contribute to NeMo

We welcome community contributions! Please refer to [CONTRIBUTING.md](https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md) for the process.

## Publications

We provide an ever-growing list of [publications](https://nvidia.github.io/NeMo/publications/) that utilize the NeMo Framework.

To contribute an article to the collection, please submit a pull request to the `gh-pages-src` branch of this repository. For detailed information, please consult the README located at the [gh-pages-src branch](https://github.com/NVIDIA/NeMo/tree/gh-pages-src#readme).

## Blogs

<!-- markdownlint-disable -->
<details open>
  <summary><b>Large Language Models and Multimodal Models</b></summary>
  <details>
    <summary>
      <a href="https://blogs.nvidia.com/blog/bria-builds-responsible-generative-ai-using-nemo-picasso/">
        Bria Builds Responsible Generative AI for Enterprises Using NVIDIA NeMo, Picasso
      </a> (2024/03/06)
    </summary>
    Bria, a Tel Aviv startup at the forefront of visual generative AI for enterprises, now leverages the NVIDIA NeMo Framework.
    The Bria.ai platform uses reference implementations from the NeMo Multimodal collection, trained on NVIDIA Tensor Core GPUs, to enable high-throughput and low-latency image generation.
    Bria has also adopted NVIDIA Picasso, a foundry for visual generative AI models, to run inference.
    <br><br>
  </details>
  <details>
    <summary>
      <a href="https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/">
        New NVIDIA NeMo Framework Features and NVIDIA H200
      </a> (2023/12/06)
    </summary>
    NVIDIA NeMo Framework now includes several optimizations and enhancements, including:
    1) Fully Sharded Data Parallelism (FSDP) to improve the efficiency of training large-scale AI models,
    2) Mixture of Experts (MoE)-based LLM architectures with expert parallelism for efficient LLM training at scale,
    3) Reinforcement Learning from Human Feedback (RLHF) with TensorRT-LLM for inference stage acceleration, and
    4) up to 4.2x speedups for Llama 2 pre-training on NVIDIA H200 Tensor Core GPUs.
    <br><br>
    <a href="https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility">
      <img src="https://github.com/sbhavani/TransformerEngine/blob/main/docs/examples/H200-NeMo-performance.png" alt="H200-NeMo-performance" style="width: 600px;"></a>
    <br><br>
  </details>
  <details>
    <summary>
      <a href="https://blogs.nvidia.com/blog/nemo-amazon-titan/">
        NVIDIA now powers training for Amazon Titan Foundation models
      </a> (2023/11/28)
    </summary>
    NVIDIA NeMo Framework now empowers the Amazon Titan foundation models (FM) with efficient training of large language models (LLMs).
    The Titan FMs form the basis of Amazon's generative AI service, Amazon Bedrock.
    The NeMo Framework provides a versatile framework for building, customizing, and running LLMs.
    <br><br>
  </details>
</details>
<!-- markdownlint-enable -->

## Licenses

- [NeMo GitHub Apache 2.0 license](https://github.com/NVIDIA/NeMo?tab=Apache-2.0-1-ov-file#readme)
- NeMo is licensed under the [NVIDIA AI PRODUCT AGREEMENT](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). By pulling and using the container, you accept the terms and conditions of this license.
"bugtrack_url": null,
"license": "Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License. \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\" \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. 
Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets \"[]\" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same \"printed page\" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.",
"summary": "NeMo - a toolkit for Conversational AI",
"version": "2.3.2",
"project_urls": {
"Download": "https://github.com/NVIDIA/NeMo/releases",
"Homepage": "https://github.com/nvidia/nemo"
},
"split_keywords": [
"nlp",
" nemo",
" deep",
" gpu",
" language",
" learning",
" learning",
" machine",
" nvidia",
" pytorch",
" speech",
" torch",
" tts"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "188b888602f2c54958fab5b89c67c7b4fdabfb840873579ac0c6238c06d570a1",
"md5": "8b91e6b37ea8e3dee0797a7db7f07a44",
"sha256": "7c0a5457868b958fd1d3581932fe1a676b43fe93dfd40c5c0020050ab6d359e5"
},
"downloads": -1,
"filename": "nemo_toolkit-2.3.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "8b91e6b37ea8e3dee0797a7db7f07a44",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 5966107,
"upload_time": "2025-07-08T22:29:24",
"upload_time_iso_8601": "2025-07-08T22:29:24.302688Z",
"url": "https://files.pythonhosted.org/packages/18/8b/888602f2c54958fab5b89c67c7b4fdabfb840873579ac0c6238c06d570a1/nemo_toolkit-2.3.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "88032ff323dea400bfbedd667f4ce76b35f4d6d6329bd305dca7cf9deeb6754a",
"md5": "fdb6ae94eb767c17e5e8d1402c4d5b02",
"sha256": "ff2185419a5bbd91b3875a5d7441696ebcf774911a2355c3c3ac72dec9371bdf"
},
"downloads": -1,
"filename": "nemo_toolkit-2.3.2.tar.gz",
"has_sig": false,
"md5_digest": "fdb6ae94eb767c17e5e8d1402c4d5b02",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 4364760,
"upload_time": "2025-07-08T22:29:27",
"upload_time_iso_8601": "2025-07-08T22:29:27.750522Z",
"url": "https://files.pythonhosted.org/packages/88/03/2ff323dea400bfbedd667f4ce76b35f4d6d6329bd305dca7cf9deeb6754a/nemo_toolkit-2.3.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-08 22:29:27",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "nvidia",
"github_project": "nemo",
"travis_ci": false,
"coveralls": true,
"github_actions": true,
"lcname": "nemo-toolkit"
}