super-gradients

- Name: super-gradients
- Version: 3.7.1
- Summary: SuperGradients
- Home page: https://docs.deci.ai/super-gradients/documentation/source/welcome.html
- Upload time: 2024-04-08 16:51:38
- Author: Deci AI
- Keywords: deci, ai, training, deep learning, computer vision, pytorch, sota, recipes, pre-trained models
<div align="center" markdown="1">
  <img src="documentation/assets/SG_img/SG - Horizontal Glow 2.png" width="600"/>
 <br/><br/>
  
**Build, train, and fine-tune production-ready, SOTA deep learning vision models**
[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Easily%20train%20or%20fine-tune%20SOTA%20computer%20vision%20models%20from%20one%20training%20repository&url=https://github.com/Deci-AI/super-gradients&via=deci_ai&hashtags=AI,deeplearning,computervision,training,opensource)

#### Version 3.5 is out! Notebooks have been updated!
______________________________________________________________________
</div>  
<div align="center">
<p align="center">
  <a href="https://www.supergradients.com/">Website</a> •
  <a href="https://docs.deci.ai/super-gradients/documentation/source/welcome.html">Docs</a> •
  <a href="#getting-started">Getting Started</a> •
  <a href="#implemented-model-architectures">Pretrained Models</a> •
  <a href="#community">Community</a> •
  <a href="#license">License</a> •
  <a href="#deci-platform">Deci Platform</a>
</p>
<p align="center">
  <a href="https://github.com/Deci-AI/super-gradients#prerequisites"><img src="https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9-blue" /></a>
  <a href="https://github.com/Deci-AI/super-gradients#prerequisites"><img src="https://img.shields.io/badge/pytorch-1.9%20%7C%201.10-blue" /></a>
  <a href="https://pypi.org/project/super-gradients/"><img src="https://img.shields.io/pypi/v/super-gradients" /></a>
  <a href="https://github.com/Deci-AI/super-gradients/blob/master/documentation/source/model_zoo.md" ><img src="https://img.shields.io/badge/pre--trained%20models-34-brightgreen" /></a>
  <a href="https://github.com/Deci-AI/super-gradients/releases"><img src="https://img.shields.io/github/v/release/Deci-AI/super-gradients" /></a>
  <a href="https://join.slack.com/t/supergradients-comm52/shared_invite/zt-10vz6o1ia-b_0W5jEPEnuHXm087K~t8Q"><img src="https://img.shields.io/badge/slack-community-blueviolet" /></a>
  <a href="https://github.com/Deci-AI/super-gradients/blob/master/LICENSE.md"><img src="https://img.shields.io/badge/license-Apache%202.0-blue" /></a>
  <a href="https://docs.deci.ai/super-gradients/documentation/source/welcome.html"><img src="https://img.shields.io/badge/docs-mkdocs-brightgreen" /></a>
</p>    
</div>

______________________________________________________________________

## Build with SuperGradients
__________________________________________________________________________________________________________

### Support various computer vision tasks
<div align="center">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Segmentation 1500x900 .png" width="250px">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Object detection 1500X900.png" width="250px">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Classification 1500x900.png" width="250px">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/PoseEstimation.jpg" width="250px">
</div>


### Ready to deploy pre-trained SOTA models

YOLO-NAS and YOLO-NAS-POSE architectures are out! 
The new YOLO-NAS delivers state-of-the-art performance with an unparalleled accuracy-speed trade-off, outperforming models such as YOLOv5, YOLOv6, YOLOv7, and YOLOv8.
A YOLO-NAS-POSE model for pose estimation is also available, delivering a state-of-the-art accuracy/performance trade-off.

Check these out here: [YOLO-NAS](YOLONAS.md) & [YOLO-NAS-POSE](YOLONAS-POSE.md).

<div align="center">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_frontier.png" height="600px">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_pose_frontier_t4.png" height="600px">
</div>

```python
# Load model with pretrained weights
from super_gradients.training import models
from super_gradients.common.object_names import Models

model = models.get(Models.YOLO_NAS_M, pretrained_weights="coco")
```
#### All Computer Vision Models - Pretrained Checkpoints can be found in the [Model Zoo](http://bit.ly/41dkt89)

#### Classification
<div align="center">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Classification@2xDark.png" width="800px">
</div>

#### Semantic Segmentation
<div align="center">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Semantic Segmentation@2xDark.png" width="800px">
</div>

#### Object Detection 
<div align="center">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Object Detection@2xDark.png" width="800px">
</div>

#### Pose Estimation
<div align="center">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_pose_frontier_t4.png" width="400px">
<img src="https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_pose_frontier_xavier_nx.png" width="400px">
</div>

### Easy to train SOTA Models

Easily load and fine-tune production-ready, pre-trained SOTA models that incorporate best practices and validated hyper-parameters for achieving best-in-class accuracy. 
For more information on how to do it, see [Getting Started](#getting-started).
    

#### Plug and play recipes
```bash
python -m super_gradients.train_from_recipe architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHECKPOINT_DIRECTORY>
```
More examples on how and why to use recipes can be found in [Recipes](#recipes)


### Production readiness
All SuperGradients models are production-ready in the sense that they are compatible with deployment tools such as TensorRT (NVIDIA) and OpenVINO (Intel) and can easily be taken to production. With a few lines of code you can integrate a model into your codebase.
```python
# Load model with pretrained weights
import torch

from super_gradients.training import models
from super_gradients.common.object_names import Models

model = models.get(Models.YOLO_NAS_M, pretrained_weights="coco")

# Prepare model for conversion
# Input size is in format [Batch x Channels x Height x Width], where 640x640 is the standard COCO input resolution
model.eval()
model.prep_model_for_conversion(input_size=[1, 3, 640, 640])

# Create a dummy input matching the conversion input size
dummy_input = torch.randn(1, 3, 640, 640)

# Convert model to ONNX
torch.onnx.export(model, dummy_input, "yolo_nas_m.onnx")
```
More information on how to take your model to production can be found in [Getting Started](#getting-started) notebooks

## Quick Installation

__________________________________________________________________________________________________________

 
```bash
pip install super-gradients
```

## What's New

__________________________________________________________________________________________________________
Version 3.4.0 (November 6, 2023)

* [YoloNAS-Pose](YOLONAS-POSE.md) model released - a new frontier in pose estimation
* Added option to export a recipe to a single YAML file or to a standalone train.py file 
* Other bugfixes & minor improvements. Full release notes available [here](https://github.com/Deci-AI/super-gradients/releases/tag/3.4.0)

__________________________________________________________________________________________________________
Version 3.1.3 (July 19, 2023)

* [Pose Estimation Task Support](https://docs.deci.ai/super-gradients/documentation/source/PoseEstimation.html) - Check out fine-tuning [notebook example](https://colab.research.google.com/drive/1NMGzx8NdycIZqnRlZKJZrIOqyj0MFzJE#scrollTo=3UZJqTehg0On) 
* Pre-trained modified [DEKR](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/coco2017_pose_dekr_w32_no_dc.yaml) model for pose estimation (TensorRT-compatible)
* Support for Python 3.10
* Support for torch.compile
* Other bugfixes & minor improvements. Check out [release notes](https://github.com/Deci-AI/super-gradients/releases/tag/3.1.3)

__________________________________________________________________________________________________________
May 30th, 2023
* [Quantization Aware Training YoloNAS on Custom Dataset](https://bit.ly/3MIKdTy)

Version 3.1.1 (May 3rd)
* [YOLO-NAS](https://bit.ly/41WeNPZ)
* New [predict function](https://bit.ly/3oZfaea) (predict on any image, video, url, path, stream)
* [RoboFlow100](https://bit.ly/40YOJ5z) datasets integration 
* A new [Documentation Hub](https://docs.deci.ai/super-gradients/documentation/source/welcome.html)
* Integration with [DagsHub for experiment monitoring](https://bit.ly/3ALFUkQ)
* Support [Darknet/Yolo format detection dataset](https://bit.ly/41VX6Qu) (used by Yolo v5, v6, v7, v8) 
* [Segformer](https://bit.ly/3oYu6Jp) model and recipe 
* Post Training Quantization and Quantization Aware Training - [notebooks](http://bit.ly/3KrN6an)

Check out SG full [release notes](https://github.com/Deci-AI/super-gradients/releases).

## Table of Content
__________________________________________________________________________________________________________
<!-- toc -->

- [Getting Started](#getting-started)
- [Advanced Features](#advanced-features)
- [Installation Methods](#installation-methods)
    - [Prerequisites](#prerequisites)
    - [Quick Installation](#quick-installation)
- [Implemented Model Architectures](#implemented-model-architectures)
- [Contributing](#contributing)
- [Citation](#citation)
- [Community](#community)
- [License](#license)
- [Deci Platform](#deci-platform)

<!-- tocstop -->

## Getting Started
__________________________________________________________________________________________________________

### Start Training with Just 1 Command Line
The simplest and most straightforward way to start training SOTA models is with SuperGradients' reproducible recipes. Just define your dataset path and where you want your checkpoints saved, and you are good to go from your terminal!

Just make sure that you [setup your dataset](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/Dataset_Setup_Instructions.md) according to the data dir specified in the recipe.

```bash
python -m super_gradients.train_from_recipe --config-name=imagenet_regnetY architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHECKPOINT_DIRECTORY>
```
### Quickly Load Pre-Trained Weights for Your Desired Model with SOTA Performance
Want to try our pre-trained models on your machine? Import SuperGradients, initialize your Trainer, and load your desired architecture and pre-trained weights from our [SOTA model zoo](http://bit.ly/41dkt89)

```python
# The pretrained_weights argument loads weights pre-trained on the specified dataset
from super_gradients.training import models

model = models.get("model-name", pretrained_weights="pretrained-model-name")
```
###  Classification

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_classification.ipynb) [Transfer Learning for classification](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_classification.ipynb) 
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/PTQ_and_QAT_for_classification.ipynb)   [PTQ and QAT for classification](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/PTQ_and_QAT_for_classification.ipynb)

###  Semantic Segmentation

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/quickstart_segmentation.ipynb) [Segmentation Quick Start](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/quickstart_segmentation.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_semantic_segmentation.ipynb) [Segmentation Transfer Learning](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_semantic_segmentation.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/segmentation_connect_custom_dataset.ipynb) [How to Connect Custom Dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/segmentation_connect_custom_dataset.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/Segmentation_Model_Export.ipynb) [How to export segmentation model to ONNX](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/Segmentation_Model_Export.ipynb)


### Pose Estimation

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Pose_Fine_Tuning_Animals_Pose_Dataset.ipynb) [Fine Tuning YoloNAS-Pose on AnimalPose dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Pose_Fine_Tuning_Animals_Pose_Dataset.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/DEKR_PoseEstimationFineTuning.ipynb) [Fine Tuning DEKR on AnimalPose dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/DEKR_PoseEstimationFineTuning.ipynb)


###  Object Detection

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Inference_using_TensorRT.ipynb) [YoloNAS inference using TensorRT](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Inference_using_TensorRT.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/detection_transfer_learning.ipynb) [Object Detection Transfer Learning](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/detection_transfer_learning.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/detection_how_to_connect_custom_dataset.ipynb) [How to Connect Custom Dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/detection_how_to_connect_custom_dataset.ipynb)
* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/yolo_nas_custom_dataset_fine_tuning_with_qat.ipynb) [Quantization Aware Training YoloNAS on Custom Dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/yolo_nas_custom_dataset_fine_tuning_with_qat.ipynb)


### How to Predict Using Pre-trained Model

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/how_to_run_model_predict.ipynb) [How to Predict Using Pre-trained Model](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/how_to_run_model_predict.ipynb)

### Albumentations Integration

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/albumentations_tutorial.ipynb) [Using Albumentations with SG](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/albumentations_tutorial.ipynb)



## Advanced Features
__________________________________________________________________________________________________________
### Post Training Quantization and Quantization Aware Training
Quantization involves representing weights and biases in lower precision, which reduces memory and computational requirements and makes it useful for deploying models on devices with limited resources.
It can be done during training (quantization-aware training, QAT) or after training (post-training quantization, PTQ).
A full tutorial can be found [here](http://bit.ly/41hC8uI).
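The arithmetic behind quantization can be illustrated with a small self-contained sketch (plain Python for clarity, not the SG API): float values are mapped to int8 with an affine scale and zero-point, then dequantized back with an error bounded by one quantization step.

```python
def quantize_int8(values):
    """Affine int8 quantization: map floats onto [-128, 127] via a scale and zero-point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to floats; the result only approximates the originals."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.9, -0.1, 0.0, 0.4, 1.2]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)

# Quantization is lossy: the round-trip error stays within one quantization step
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

QAT differs from PTQ in that training sees this rounding error and learns weights that compensate for it.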

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/3KrN6an) [Post Training Quantization and Quantization Aware Training](https://bit.ly/3KrN6an)


### Quantization Aware Training YoloNAS on Custom Dataset
This tutorial provides a comprehensive guide on how to fine-tune a YoloNAS model using a custom dataset. 
It also demonstrates how to utilize SG's QAT (Quantization-Aware Training) support. Additionally, it offers step-by-step instructions on deploying the model and performing benchmarking.

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/3MIKdTy) [Quantization Aware Training YoloNAS on Custom Dataset](https://bit.ly/3MIKdTy)


### Knowledge Distillation Training

Knowledge distillation is a training technique that uses a large model (the teacher) to improve the performance of a smaller model (the student).
Learn about knowledge distillation training in SuperGradients with our example notebook, which distills a pre-trained BEiT-base teacher into a ResNet18 student on CIFAR10 and runs on Google Colab with free GPU hardware.
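The core of the technique is that the student learns from the teacher's softened output distribution. A minimal framework-free sketch of temperature-scaled softmax (illustrative only, not the SG `KDTrainer` API):

```python
import math

def softmax_with_temperature(logits, T=1.0):
    """Higher temperature T flattens the distribution, exposing the teacher's 'dark knowledge'."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [8.0, 2.0, 1.0]
hard = softmax_with_temperature(teacher_logits, T=1.0)  # top class takes ~99.7% of the mass
soft = softmax_with_temperature(teacher_logits, T=4.0)  # secondary classes become visible targets

# The student is trained to match `soft`, which carries more information
# about inter-class similarity than a one-hot label alone.
```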

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/how_to_use_knowledge_distillation_for_classification.ipynb) [Knowledge Distillation Training](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/how_to_use_knowledge_distillation_for_classification.ipynb)


### Recipes

To train a model, you need to configure four main components: dataset, architecture, training, and checkpoint parameters.
These components are aggregated into a single "main" recipe `.yaml` file that inherits the aforementioned dataset, architecture, training, and checkpoint params.
It is also possible (and recommended for flexibility) to override default settings with custom ones.
All recipes can be found [here](http://bit.ly/3gfLw07).

Recipes support out of the box every model, metric, or loss implemented in SuperGradients, and you can easily extend this to any custom object you need by "registering" it. Check out [this](http://bit.ly/3TQ4iZB) tutorial for more information.
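Conceptually, a recipe is a nested configuration onto which command-line overrides such as `architecture=regnetY800` are merged (Hydra handles this in SG; the sketch below is a simplified illustration with hypothetical keys, not the actual loader):

```python
def apply_overrides(recipe, overrides):
    """Merge dotted-key overrides (e.g. 'training_hyperparams.max_epochs=10') into a nested recipe dict."""
    merged = {k: (v.copy() if isinstance(v, dict) else v) for k, v in recipe.items()}
    for item in overrides:
        dotted_key, _, raw_value = item.partition("=")
        node = merged
        *parents, leaf = dotted_key.split(".")
        for parent in parents:
            node = node.setdefault(parent, {})
        node[leaf] = raw_value  # real Hydra also parses value types; kept as strings here
    return merged

# Hypothetical recipe fragment, for illustration only
recipe = {"architecture": "resnet18", "training_hyperparams": {"max_epochs": 250, "initial_lr": 0.1}}
cfg = apply_overrides(recipe, ["architecture=regnetY800", "training_hyperparams.max_epochs=10"])
```

The recipe's defaults stay untouched; only the returned config carries the overrides, which is what makes recipe runs reproducible.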

* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/what_are_recipes_and_how_to_use.ipynb) [How to Use Recipes](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/what_are_recipes_and_how_to_use.ipynb)
 
<details markdown="1">
  <summary><h3>Using Distributed Data Parallel (DDP) </h3></summary>

#### Why use DDP?

Recent deep learning models are growing so large that training on a single GPU can take weeks.
To train models in a timely fashion, it is necessary to train them on multiple GPUs.
Using hundreds of GPUs can reduce the training time of a model from a week to less than an hour.

#### How does it work?
Each GPU has its own process, which controls a copy of the model, loads its own mini-batches from disk, and sends
them to its GPU during training. After the forward pass completes on every GPU, the gradients are reduced across all
GPUs, so every GPU ends up with the same gradients locally. As a result, the model weights stay synchronized
across all GPUs after the backward pass.
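The reduction step can be simulated in plain Python: each worker computes a local gradient, the all-reduce averages them, and every worker ends up holding the identical averaged gradient (this toy sketch stands in for `torch.distributed.all_reduce` with averaging):

```python
def all_reduce_mean(worker_grads):
    """Average per-parameter gradients across workers, as DDP's all-reduce does."""
    num_workers = len(worker_grads)
    num_params = len(worker_grads[0])
    averaged = [sum(grads[i] for grads in worker_grads) / num_workers for i in range(num_params)]
    # Every worker now holds the same averaged gradient, so applying the same
    # optimizer step keeps all model replicas synchronized.
    return [list(averaged) for _ in worker_grads]

# 4 simulated GPUs, each with a local gradient for 2 parameters
local_grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
synced = all_reduce_mean(local_grads)  # every worker gets [4.0, 5.0]
```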

#### How to use it?
You can use SuperGradients to train your model with DDP in just a few lines.


*main.py*
```python
from super_gradients import init_trainer, Trainer
from super_gradients.common import MultiGPUMode
from super_gradients.training.utils.distributed_training_utils import setup_device

# Initialize the environment
init_trainer()

# Launch DDP on 4 GPUs
setup_device(multi_gpu=MultiGPUMode.DISTRIBUTED_DATA_PARALLEL, num_gpus=4)

# Create the trainer
trainer = Trainer(experiment_name=...)

# Everything you do below will run on 4 GPUs

...

trainer.train(...)
```

Finally, you can launch your distributed training with a simple python call.
```bash
python main.py
```


Please note that if you work with `torch<1.9.0` (deprecated), you will have to launch your training with either
`torch.distributed.launch` or `torchrun`; in that case, `nproc_per_node` will override the value set with `gpu_mode`:
```bash
python -m torch.distributed.launch --nproc_per_node=4 main.py
```

```bash
torchrun --nproc_per_node=4 main.py
```

#### Calling functions on a single node

In DDP training we often want to execute code on the master rank only (i.e. rank 0).
In SG, users usually execute their own code by triggering "Phase Callbacks" (see "Using phase callbacks" section below).
You can make sure the desired code runs only on rank 0 by using `ddp_silent_mode` or the `@multi_process_safe` decorator.
For example, consider the simple phase callback below, which uploads the first 3 images of every training batch to
TensorBoard:

```python
from super_gradients.training.utils.callbacks import PhaseCallback, PhaseContext, Phase
from super_gradients.common.environment.env_helpers import multi_process_safe

class Upload3TrainImagesCallback(PhaseCallback):
    def __init__(self):
        super().__init__(phase=Phase.TRAIN_BATCH_END)

    @multi_process_safe
    def __call__(self, context: PhaseContext):
        batch_imgs = context.inputs.cpu().detach().numpy()
        tag = "batch_" + str(context.batch_idx) + "_images"
        context.sg_logger.add_images(tag=tag, images=batch_imgs[:3], global_step=context.epoch)

```
The `@multi_process_safe` decorator ensures that the callback is only triggered by rank 0. Alternatively, the same can
be achieved through the trainer's boolean attribute `ddp_silent_mode` (which the phase context has access to), which is set to `False`
iff the current process rank is zero (even after the process group has been killed):
```python
from super_gradients.training.utils.callbacks import PhaseCallback, PhaseContext, Phase

class Upload3TrainImagesCallback(PhaseCallback):
    def __init__(self):
        super().__init__(phase=Phase.TRAIN_BATCH_END)

    def __call__(self, context: PhaseContext):
        if not context.ddp_silent_mode:
            batch_imgs = context.inputs.cpu().detach().numpy()
            tag = "batch_" + str(context.batch_idx) + "_images"
            context.sg_logger.add_images(tag=tag, images=batch_imgs[:3], global_step=context.epoch)

```

Note that `ddp_silent_mode` can be accessed through `SgTrainer.ddp_silent_mode`, so it can also be used in scripts after calling
`SgTrainer.train()` when some part of them should run on rank 0 only.

#### Good to know
Your total batch size will be (number of GPUs x batch size), so you might want to increase your learning rate.
There is no clear rule, but a common rule of thumb is to [linearly increase the learning rate with the number of GPUs](https://arxiv.org/pdf/1706.02677.pdf).
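The linear scaling rule can be written down directly; the helpers below are a hypothetical illustration (not an SG API), including the gradual warmup the linked paper recommends:

```python
def scaled_lr(base_lr, num_gpus):
    """Linear scaling rule: multiply the single-GPU learning rate by the number of GPUs."""
    return base_lr * num_gpus

def warmup_lr(target_lr, step, warmup_steps):
    """Ramp linearly up to target_lr over the first warmup_steps, easing in the large rate."""
    if step >= warmup_steps:
        return target_lr
    return target_lr * (step + 1) / warmup_steps

effective_lr = scaled_lr(0.1, num_gpus=4)  # 0.1 per GPU becomes 0.4 overall
```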

</details>

<details markdown="1">
<summary><h3> Easily change architectures parameters </h3></summary>

```python
from super_gradients.training import models

# instantiate default pretrained resnet18
default_resnet18 = models.get(model_name="resnet18", num_classes=100, pretrained_weights="imagenet")

# instantiate pretrained resnet18, turning DropPath on with probability 0.5
droppath_resnet18 = models.get(model_name="resnet18", arch_params={"droppath_prob": 0.5}, num_classes=100, pretrained_weights="imagenet")

# instantiate pretrained resnet18, without classifier head. Output will be from the last stage before global pooling
backbone_resnet18 = models.get(model_name="resnet18", arch_params={"backbone_mode": True}, pretrained_weights="imagenet")
```

</details>

<details markdown="1">

<summary><h3> Using phase callbacks </h3></summary>  
  
```python
from super_gradients import Trainer
from torch.optim.lr_scheduler import ReduceLROnPlateau
from super_gradients.training.utils.callbacks import Phase, LRSchedulerCallback
from super_gradients.training.metrics.classification_metrics import Accuracy

# define PyTorch train and validation loaders and optimizer

# define what to be called in the callback
rop_lr_scheduler = ReduceLROnPlateau(optimizer, mode="max", patience=10, verbose=True)

# define phase callbacks, they will fire as defined in Phase
phase_callbacks = [LRSchedulerCallback(scheduler=rop_lr_scheduler,
                                       phase=Phase.VALIDATION_EPOCH_END,
                                       metric_name="Accuracy")]

# create a trainer object, look the declaration for more parameters
trainer = Trainer("experiment_name")

# define phase_callbacks as part of the training parameters
train_params = {"phase_callbacks": phase_callbacks}
```

</details>

<details markdown="1">

<summary><h3> Integration to DagsHub </h3></summary>    

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11fW56pMpwOMHQSbQW6xxMRYvw1mEC-t-?usp=sharing) 

```python
from super_gradients import Trainer

trainer = Trainer("experiment_name")
model = ...

training_params = { ...  # Your training params
                   "sg_logger": "dagshub_sg_logger",  # DagsHub Logger, see class super_gradients.common.sg_loggers.dagshub_sg_logger.DagsHubSGLogger for details
                   "sg_logger_params":  # Params that will be passed to __init__ of the logger super_gradients.common.sg_loggers.dagshub_sg_logger.DagsHubSGLogger
                     {
                       "dagshub_repository": "<REPO_OWNER>/<REPO_NAME>", # Optional: Your DagsHub project name, consisting of the owner name, followed by '/', and the repo name. If this is left empty, you'll be prompted in your run to fill it in manually.
                       "log_mlflow_only": False, # Optional: Change to true to bypass logging to DVC, and log all artifacts only to MLflow  
                       "save_checkpoints_remote": True,
                       "save_tensorboard_remote": True,
                       "save_logs_remote": True,
                     }
                   }
```

</details>

<details>

<summary><h3> Integration to Weights and Biases </h3></summary>    
  

```python
from super_gradients import Trainer

# create a trainer object, look the declaration for more parameters
trainer = Trainer("experiment_name")

train_params = { ... # training parameters
                "sg_logger": "wandb_sg_logger", # Weights & Biases logger, see class WandBSGLogger for details
                "sg_logger_params": # parameters that will be passed to __init__ of the logger
                  {
                    "project_name": "project_name", # W&B project name
                    "save_checkpoints_remote": True,
                    "save_tensorboard_remote": True,
                    "save_logs_remote": True,
                  }
               }
```

</details>

<details markdown="1">

<summary><h3> Integration to ClearML </h3></summary>    


```python
from super_gradients import Trainer

# create a trainer object, look the declaration for more parameters
trainer = Trainer("experiment_name")

train_params = { ... # training parameters
                "sg_logger": "clearml_sg_logger", # ClearML Logger, see class ClearMLSGLogger for details
                "sg_logger_params": # parameters that will be passed to __init__ of the logger
                  {
                    "project_name": "project_name", # ClearML project name
                    "save_checkpoints_remote": True,
                    "save_tensorboard_remote": True,
                    "save_logs_remote": True,
                  } 
               }
```


  </details>
<details markdown="1">

  <summary><h3> Integration to Voxel51 </h3></summary>    
  
You can apply SuperGradients YOLO-NAS models directly to your FiftyOne dataset using the apply_model() method:

```python
import fiftyone as fo
import fiftyone.zoo as foz

from super_gradients.training import models

dataset = foz.load_zoo_dataset("quickstart", max_samples=25)
dataset.select_fields().keep_fields()

model = models.get("yolo_nas_m", pretrained_weights="coco")

dataset.apply_model(model, label_field="yolo_nas", confidence_thresh=0.7)

session = fo.launch_app(dataset)
```

The SuperGradients YOLO-NAS model can be accessed directly from the FiftyOne Model Zoo:

```python
import fiftyone as fo
import fiftyone.zoo as foz

model = foz.load_zoo_model("yolo-nas-torch")

dataset = foz.load_zoo_dataset("quickstart")
dataset.apply_model(model, label_field="yolo_nas")

session = fo.launch_app(dataset)
```

</details>


## Installation Methods
__________________________________________________________________________________________________________
### Prerequisites
<details markdown="1">
  
<summary>General requirements</summary>
  
- Python 3.7, 3.8 or 3.9 installed.
- 1.9.0 <= torch < 1.14 
  - https://pytorch.org/get-started/locally/
- The Python packages specified in requirements.txt.

</details>
    
<details markdown="1">
  
<summary>To train on NVIDIA GPUs</summary>
  
- [Nvidia CUDA Toolkit >= 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu)
- CuDNN >= 8.1.x
- Nvidia Driver with CUDA >= 11.2 support (≥460.x)
  
</details>
    
### Quick Installation

<details markdown="1">
  
<summary>Install stable version using PyPi</summary>

See in [PyPi](https://pypi.org/project/super-gradients/)
```bash
pip install super-gradients
```

That's it!

</details>
    
<details markdown="1">
  
<summary>Install using GitHub</summary>


```bash
pip install git+https://github.com/Deci-AI/super-gradients.git@stable
```

</details> 


## Implemented Model Architectures 
__________________________________________________________________________________________________________

All Computer Vision Models - Pretrained Checkpoints can be found in the [Model Zoo](http://bit.ly/41dkt89)

### Image Classification
  
- [DenseNet (Densely Connected Convolutional Networks)](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/densenet.py)
- [DPN](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/dpn.py) 
- [EfficientNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/efficientnet.py)
- [LeNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/lenet.py) 
- [MobileNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/mobilenet.py)
- [MobileNet v2](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/mobilenetv2.py)  
- [MobileNet v3](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/mobilenetv3.py) 
- [PNASNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/pnasnet.py) 
- [Pre-activation ResNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/preact_resnet.py)  
- [RegNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/regnet.py)
- [RepVGG](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/repvgg.py)  
- [ResNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/resnet.py)
- [ResNeXt](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/resnext.py) 
- [SENet ](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/senet.py)
- [ShuffleNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/shufflenet.py)
- [ShuffleNet v2](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/shufflenetv2.py)
- [VGG](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/vgg.py)
  
### Semantic Segmentation 

- [PP-LiteSeg](https://bit.ly/3RrtMMO)
- [DDRNet (Deep Dual-resolution Networks)](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/ddrnet.py) 
- [LadderNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/laddernet.py)
- [RegSeg](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/regseg.py)
- [ShelfNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/shelfnet.py) 
- [STDC](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/stdc.py)
  

### Object Detection
  
- [CSP DarkNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/csp_darknet53.py)
- [DarkNet-53](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/darknet53.py)
- [SSD (Single Shot Detector)](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/ssd.py) 
- [YOLOX](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/yolox.py)


### Pose Estimation

- [DEKR-W32-NO-DC](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/pose_estimation_models/dekr_hrnet.py)

  

__________________________________________________________________________________________________________

## Implemented Datasets 
__________________________________________________________________________________________________________

Deci provides implementations of various datasets. If you need to download any of them, you can [find instructions here](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/Dataset_Setup_Instructions.md).

### Image Classification
  
- [Cifar10](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/classification_datasets/cifar.py) 
- [ImageNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/classification_datasets/imagenet_dataset.py) 
  
### Semantic Segmentation 

- [Cityscapes](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py)
- [COCO](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/coco_segmentation.py)
- [PascalVOC 2012 / PascalAUG 2012](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py)
- [SuperviselyPersons](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/supervisely_persons_segmentation.py)
- [Mapillary Vistas Dataset](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/mapillary_dataset.py)


### Object Detection
  
- [COCO](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/detection_datasets/coco_detection.py)
- [PascalVOC 2007 & 2012](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/detection_datasets/pascal_voc_detection.py)
  
### Pose Estimation

- [COCO](https://github.com/Deci-AI/super-gradients/blob/cadcfdd64e7808d21cccddbfaeb26acb8267699b/src/super_gradients/recipes/dataset_params/coco_pose_estimation_dekr_dataset_params.yaml)

__________________________________________________________________________________________________________


## Documentation

Check SuperGradients [Docs](https://docs.deci.ai/super-gradients/documentation/source/welcome.html) for full documentation, user guide, and examples.
  
## Contributing

To learn about making a contribution to SuperGradients, please see our [Contribution page](CONTRIBUTING.md).

Our awesome contributors:
    
<a href="https://github.com/Deci-AI/super-gradients/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=Deci-AI/super-gradients" />
</a>


<br/>Made with [contrib.rocks](https://contrib.rocks).

## Citation

If you use the SuperGradients library or its benchmarks in your research, please cite the SuperGradients deep learning training library.

## Community

If you want to be part of the growing SuperGradients community, hear about exciting news and updates, need help, want to request advanced features, or want to file a bug or issue report, we would love to welcome you aboard!

* Discord is the place to ask questions about SuperGradients and get support. [Click here to join our Discord community](https://discord.gg/2v6cEGMREN).
    
* To report a bug, [file an issue](https://github.com/Deci-AI/super-gradients/issues) on GitHub.

* Join the [SG Newsletter](https://www.supergradients.com/#Newsletter) to stay up to date with new features and models, important announcements, and upcoming events.

* For a short meeting with us, use this [link](https://calendly.com/ofer-baratz-deci/15min) and choose your preferred time.

## License

This project is released under the [Apache 2.0 license](LICENSE).
    
## Citing

### BibTeX

```bibtex

@misc{supergradients,
  doi = {10.5281/ZENODO.7789328},
  url = {https://zenodo.org/record/7789328},
  author = {Aharon,  Shay and {Louis-Dupont} and {Ofri Masad} and Yurkova,  Kate and {Lotem Fridman} and {Lkdci} and Khvedchenya,  Eugene and Rubin,  Ran and Bagrov,  Natan and Tymchenko,  Borys and Keren,  Tomer and Zhilko,  Alexander and {Eran-Deci}},
  title = {Super-Gradients},
  publisher = {GitHub},
  journal = {GitHub repository},
  year = {2021},
}
```

### Latest DOI

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7789328.svg)](https://doi.org/10.5281/zenodo.7789328)

    
__________________________________________________________________________________________________________


## Deci Platform

Deci Platform is our end-to-end platform for building, optimizing, and deploying deep learning models to production.

[Request a free trial](https://bit.ly/3qO3icq) to enjoy immediate improvements in throughput, latency, memory footprint, and model size.

Features

- Automatically compile and quantize your models with just a few clicks (TensorRT, OpenVINO).
- Gain up to 10X improvement in throughput, latency, memory and model size. 
- Easily benchmark your models’ performance on different hardware and batch sizes.
- Invite co-workers to collaborate on models and communicate your progress.
- Deci supports all common frameworks and hardware, from Intel CPUs to Nvidia GPUs and Jetson devices.

Request a free trial [here](https://bit.ly/3qO3icq).

            

Raw data

            {
    "_id": null,
    "home_page": "https://docs.deci.ai/super-gradients/documentation/source/welcome.html",
    "name": "super-gradients",
    "maintainer": null,
    "docs_url": null,
    "requires_python": null,
    "maintainer_email": null,
    "keywords": "Deci, AI, Training, Deep Learning, Computer Vision, PyTorch, SOTA, Recipes, Pre Trained, Models",
    "author": "Deci AI",
    "author_email": "rnd@deci.ai",
    "download_url": null,
    "platform": null,
    "description": "<div align=\"center\" markdown=\"1\">\n  <img src=\"documentation/assets/SG_img/SG - Horizontal Glow 2.png\" width=\"600\"/>\n <br/><br/>\n  \n**Build, train, and fine-tune production-ready deep learning  SOTA vision models**\n[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Easily%20train%20or%20fine-tune%20SOTA%20computer%20vision%20models%20from%20one%20training%20repository&url=https://github.com/Deci-AI/super-gradients&via=deci_ai&hashtags=AI,deeplearning,computervision,training,opensource)\n\n#### Version 3.5 is out! Notebooks have been updated!\n______________________________________________________________________\n</div>  \n<div align=\"center\">\n<p align=\"center\">\n  <a href=\"https://www.supergradients.com/\">Website</a> \u2022\n  <a href=\"https://docs.deci.ai/super-gradients/documentation/source/welcome.html\">Docs</a> \u2022\n  <a href=\"#getting-started\">Getting Started</a> \u2022\n  <a href=\"#implemented-model-architectures\">Pretrained Models</a> \u2022\n  <a href=\"#community\">Community</a> \u2022\n  <a href=\"#license\">License</a> \u2022\n  <a href=\"#deci-platform\">Deci Platform</a>\n</p>\n<p align=\"center\">\n  <a href=\"https://github.com/Deci-AI/super-gradients#prerequisites\"><img src=\"https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9-blue\" />\n  <a href=\"https://github.com/Deci-AI/super-gradients#prerequisites\"><img src=\"https://img.shields.io/badge/pytorch-1.9%20%7C%201.10-blue\" />\n  <a href=\"https://pypi.org/project/super-gradients/\"><img src=\"https://img.shields.io/pypi/v/super-gradients\" />\n  <a href=\"https://github.com/Deci-AI/super-gradients/blob/master/documentation/source/model_zoo.md\" ><img src=\"https://img.shields.io/badge/pre--trained%20models-34-brightgreen\" />\n  <a href=\"https://github.com/Deci-AI/super-gradients/releases\"><img src=\"https://img.shields.io/github/v/release/Deci-AI/super-gradients\" 
/>\n  <a href=\"https://join.slack.com/t/supergradients-comm52/shared_invite/zt-10vz6o1ia-b_0W5jEPEnuHXm087K~t8Q\"><img src=\"https://img.shields.io/badge/slack-community-blueviolet\" />\n  <a href=\"https://github.com/Deci-AI/super-gradients/blob/master/LICENSE.md\"><img src=\"https://img.shields.io/badge/license-Apache%202.0-blue\" />\n  <a href=\"https://docs.deci.ai/super-gradients/documentation/source/welcome.html\"><img src=\"https://img.shields.io/badge/docs-mkdocs-brightgreen\" /></a>\n</p>    \n</div>\n\n______________________________________________________________________\n\n## Build with SuperGradients\n__________________________________________________________________________________________________________\n\n### Support various computer vision tasks\n<div align=\"center\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Segmentation 1500x900 .png\" width=\"250px\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Object detection 1500X900.png\" width=\"250px\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Classification 1500x900.png\" width=\"250px\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/PoseEstimation.jpg\" width=\"250px\">\n</div>\n\n\n### Ready to deploy pre-trained SOTA models\n\nYOLO-NAS and YOLO-NAS-POSE architectures are out! 
\nThe new YOLO-NAS delivers state-of-the-art performance with the unparalleled accuracy-speed performance, outperforming other models such as YOLOv5, YOLOv6, YOLOv7 and YOLOv8.\nA YOLO-NAS-POSE model for pose estimation is also available, delivering state-of-the-art accuracy/performance tradeoff.\n\nCheck these out here: [YOLO-NAS](YOLONAS.md) & [YOLO-NAS-POSE](YOLONAS-POSE.md).\n\n<div align=\"center\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_frontier.png\" height=\"600px\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_pose_frontier_t4.png\" height=\"600px\">\n</div>\n\n```python\n# Load model with pretrained weights\nfrom super_gradients.training import models\nfrom super_gradients.common.object_names import Models\n\nmodel = models.get(Models.YOLO_NAS_M, pretrained_weights=\"coco\")\n```\n#### All Computer Vision Models - Pretrained Checkpoints can be found in the [Model Zoo](http://bit.ly/41dkt89)\n\n#### Classification\n<div align=\"center\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Classification@2xDark.png\" width=\"800px\">\n</div>\n\n#### Semantic Segmentation\n<div align=\"center\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Semantic Segmentation@2xDark.png\" width=\"800px\">\n</div>\n\n#### Object Detection \n<div align=\"center\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/assets/SG_img/Object Detection@2xDark.png\" width=\"800px\">\n</div>\n\n#### Pose Estimation\n<div align=\"center\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_pose_frontier_t4.png\" width=\"400px\">\n<img src=\"https://github.com/Deci-AI/super-gradients/raw/master/documentation/source/images/yolo_nas_pose_frontier_xavier_nx.png\" width=\"400px\">\n</div>\n\n### Easy 
to train SOTA Models\n\nEasily load and fine-tune production-ready, pre-trained SOTA models that incorporate best practices and validated hyper-parameters for achieving best-in-class accuracy. \nFor more information on how to do it go to [Getting Started](#getting-started)\n    \n\n#### Plug and play recipes\n```bash\npython -m super_gradients.train_from_recipe architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHEKPOINT_DIRECTORY>\n```\nMore examples on how and why to use recipes can be found in [Recipes](#recipes)\n\n\n### Production readiness\nAll SuperGradients models\u2019 are production ready in the sense that they are compatible with deployment tools such as TensorRT (Nvidia) and OpenVINO (Intel) and can be easily taken into production. With a few lines of code you can easily integrate the models into your codebase.\n```python\n# Load model with pretrained weights\nfrom super_gradients.training import models\nfrom super_gradients.common.object_names import Models\n\nmodel = models.get(Models.YOLO_NAS_M, pretrained_weights=\"coco\")\n\n# Prepare model for conversion\n# Input size is in format of [Batch x Channels x Width x Height] where 640 is the standard COCO dataset dimensions\nmodel.eval()\nmodel.prep_model_for_conversion(input_size=[1, 3, 640, 640])\n    \n# Create dummy_input\n\n# Convert model to onnx\ntorch.onnx.export(model, dummy_input,  \"yolo_nas_m.onnx\")\n```\nMore information on how to take your model to production can be found in [Getting Started](#getting-started) notebooks\n\n## Quick Installation\n\n__________________________________________________________________________________________________________\n\n \n```bash\npip install super-gradients\n```\n\n## What's New\n\n__________________________________________________________________________________________________________\nVersion 3.4.0 (November 6, 2023)\n\n* [YoloNAS-Pose](YOLONAS-POSE.md) model released - a new frontier in pose estimation\n* 
Added option to export a recipe to a single YAML file or to a standalone train.py file \n* Other bugfixes & minor improvements. Full release notes available [here](https://github.com/Deci-AI/super-gradients/releases/tag/3.4.0)\n\n__________________________________________________________________________________________________________\nVersion 3.1.3 (July 19, 2023)\n\n* [Pose Estimation Task Support](https://docs.deci.ai/super-gradients/documentation/source/PoseEstimation.html) - Check out fine-tuning [notebook example](https://colab.research.google.com/drive/1NMGzx8NdycIZqnRlZKJZrIOqyj0MFzJE#scrollTo=3UZJqTehg0On) \n* Pre-trained modified [DEKR](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/coco2017_pose_dekr_w32_no_dc.yaml) model for pose estimation (TensorRT-compatible)\n* Support for Python 3.10\n* Support for torch.compile\n* Other bugfixes & minor improvements. Check out [release notes](https://github.com/Deci-AI/super-gradients/releases/tag/3.1.3)\n\n__________________________________________________________________________________________________________\n30th of May\n* [Quantization Aware Training YoloNAS on Custom Dataset](https://bit.ly/3MIKdTy)\n\nVersion 3.1.1 (May 3rd)\n* [YOLO-NAS](https://bit.ly/41WeNPZ)\n* New [predict function](https://bit.ly/3oZfaea) (predict on any image, video, url, path, stream)\n* [RoboFlow100](https://bit.ly/40YOJ5z) datasets integration \n* A new [Documentation Hub](https://docs.deci.ai/super-gradients/documentation/source/welcome.html)\n* Integration with [DagsHub for experiment monitoring](https://bit.ly/3ALFUkQ)\n* Support [Darknet/Yolo format detection dataset](https://bit.ly/41VX6Qu) (used by Yolo v5, v6, v7, v8) \n* [Segformer](https://bit.ly/3oYu6Jp) model and recipe \n* Post Training Quantization and Quantization Aware Training - [notebooks](http://bit.ly/3KrN6an)\n\nCheck out SG full [release notes](https://github.com/Deci-AI/super-gradients/releases).\n\n## Table of 
Content\n__________________________________________________________________________________________________________\n<!-- toc -->\n\n- [Getting Started](#getting-started)\n- [Advanced Features](#advanced-features)\n- [Installation Methods](#installation-methods)\n    - [Prerequisites](#prerequisites)\n    - [Quick Installation](#quick-installation)\n- [Implemented Model Architectures](#implemented-model-architectures)\n- [Contributing](#contributing)\n- [Citation](#citation)\n- [Community](#community)\n- [License](#license)\n- [Deci Platform](#deci-platform)\n\n<!-- tocstop -->\n\n## Getting Started\n__________________________________________________________________________________________________________\n\n### Start Training with Just 1 Command Line\nThe most simple and straightforward way to start training SOTA performance models with SuperGradients reproducible recipes. Just define your dataset path and where you want your checkpoints to be saved and you are good to go from your terminal!\n\nJust make sure that you [setup your dataset](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/Dataset_Setup_Instructions.md) according to the data dir specified in the recipe.\n\n```bash\npython -m super_gradients.train_from_recipe --config-name=imagenet_regnetY architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHEKPOINT_DIRECTORY>\n```\n### Quickly Load Pre-Trained Weights for Your Desired Model with SOTA Performance\nWant to try our pre-trained models on your machine? 
Import SuperGradients, initialize your Trainer, and load your desired architecture and pre-trained weights from our [SOTA model zoo](http://bit.ly/41dkt89)\n\n```python\n# The pretrained_weights argument will load a pre-trained architecture on the provided dataset\n    \nimport super_gradients\n\nmodel = models.get(\"model-name\", pretrained_weights=\"pretrained-model-name\")\n\n```   \n###  Classification\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_classification.ipynb) [Transfer Learning for classification](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_classification.ipynb) \n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/PTQ_and_QAT_for_classification.ipynb)   [PTQ and QAT for classification](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/PTQ_and_QAT_for_classification.ipynb)\n\n###  Semantic Segmentation\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/quickstart_segmentation.ipynb) [Segmentation Quick Start](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/quickstart_segmentation.ipynb)\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_semantic_segmentation.ipynb) [Segmentation Transfer Learning](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/transfer_learning_semantic_segmentation.ipynb)\n* [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/segmentation_connect_custom_dataset.ipynb) [How to Connect Custom Dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/segmentation_connect_custom_dataset.ipynb)\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/Segmentation_Model_Export.ipynb) [How to export segmentation model to ONNX](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/Segmentation_Model_Export.ipynb)\n\n\n### Pose Estimation\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Pose_Fine_Tuning_Animals_Pose_Dataset.ipynb) [Fine Tuning YoloNAS-Pose on AnimalPose dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Pose_Fine_Tuning_Animals_Pose_Dataset.ipynb)\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/DEKR_PoseEstimationFineTuning.ipynb) [Fine Tuning DEKR on AnimalPose dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/DEKR_PoseEstimationFineTuning.ipynb)\n\n\n###  Object Detection\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Inference_using_TensorRT.ipynb) [YoloNAS inference using TensorRT](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/YoloNAS_Inference_using_TensorRT.ipynb)\n* [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/detection_transfer_learning.ipynb) [Object Detection Transfer Learning](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/detection_transfer_learning.ipynb)\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/detection_how_to_connect_custom_dataset.ipynb) [How to Connect Custom Dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/detection_how_to_connect_custom_dataset.ipynb)\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/yolo_nas_custom_dataset_fine_tuning_with_qat.ipynb) [Quantization Aware Training YoloNAS on Custom Dataset](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/yolo_nas_custom_dataset_fine_tuning_with_qat.ipynb)\n\n\n### How to Predict Using Pre-trained Model\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/how_to_run_model_predict.ipynb) [How to Predict Using Pre-trained Model](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/how_to_run_model_predict.ipynb)\n\n### Albumentations Integration\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/albumentations_tutorial.ipynb) [Using Albumentations with SG](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/albumentations_tutorial.ipynb)\n\n\n\n## Advanced Features\n__________________________________________________________________________________________________________\n### Post Training Quantization and 
Quantization Aware Training\nQuantization involves representing weights and biases in lower precision, resulting in reduced memory and computational requirements, making it useful for deploying models on devices with limited resources. \nThe process can be done during training, called Quantization aware training, or after training, called post-training quantization. \nA full tutorial can be found [here](http://bit.ly/41hC8uI).\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/3KrN6an) [Post Training Quantization and Quantization Aware Training](https://bit.ly/3KrN6an)\n\n\n### Quantization Aware Training YoloNAS on Custom Dataset\nThis tutorial provides a comprehensive guide on how to fine-tune a YoloNAS model using a custom dataset. \nIt also demonstrates how to utilize SG's QAT (Quantization-Aware Training) support. Additionally, it offers step-by-step instructions on deploying the model and performing benchmarking.\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/3MIKdTy) [Quantization Aware Training YoloNAS on Custom Dataset](https://bit.ly/3MIKdTy)\n\n\n### Knowledge Distillation Training\n\nKnowledge Distillation is a training technique that uses a large model, teacher model, to improve the performance of a smaller model, the student model.\nLearn more about SuperGradients knowledge distillation training with our pre-trained BEiT base teacher model and Resnet18 student model on CIFAR10 example notebook on Google Colab for an easy to use tutorial using free GPU hardware\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/how_to_use_knowledge_distillation_for_classification.ipynb) [Knowledge Distillation Training](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/how_to_use_knowledge_distillation_for_classification.ipynb)\n\n\n### 
Recipes\n\nTo train a model, it is necessary to configure 4 main components. \nThese components are aggregated into a single \"main\" recipe `.yaml` file that inherits the aforementioned dataset, architecture, raining and checkpoint params.\nIt is also possible (and recommended for flexibility) to override default settings with custom ones.\nAll recipes can be found [here](http://bit.ly/3gfLw07)\n</br>\nRecipes support out of the box every model, metric or loss that is implemented in SuperGradients, but you can easily extend this to any custom object that you need by \"registering it\". Check out [this](http://bit.ly/3TQ4iZB) tutorial for more information.\n\n* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Deci-AI/super-gradients/blob/master/notebooks/what_are_recipes_and_how_to_use.ipynb) [How to Use Recipes](https://github.com/Deci-AI/super-gradients/blob/master/notebooks/what_are_recipes_and_how_to_use.ipynb)\n \n<details markdown=\"1\">\n  <summary><h3>Using Distributed Data Parallel (DDP) </h3></summary>\n\n#### Why use DDP ?\n\nRecent Deep Learning models are growing larger and larger to an extent that training on a single GPU can take weeks.\nIn order to train models in a timely fashion, it is necessary to train them with multiple GPUs.\nUsing 100s GPUs can reduce training time of a model from a week to less than an hour.\n\n#### How does it work ?\nEach GPU has its own process, which controls a copy of the model and which loads its own mini-batch from disk and sends\nit to its GPU during training. After the forward pass is completed on every GPU, the gradient is reduced across all\nGPUs, yielding to all the GPUs having the same gradient locally. 
This leads to the model weights to stay synchronized\nacross all GPUs after the backward pass.\n\n#### How to use it ?\nYou can use SuperGradients to train your model with DDP in just a few lines.\n\n\n*main.py*\n```python\nfrom super_gradients import init_trainer, Trainer\nfrom super_gradients.common import MultiGPUMode\nfrom super_gradients.training.utils.distributed_training_utils import setup_device\n\n# Initialize the environment\ninit_trainer()\n\n# Launch DDP on 4 GPUs'\nsetup_device(multi_gpu=MultiGPUMode.DISTRIBUTED_DATA_PARALLEL, num_gpus=4)\n\n# Call the trainer\nTrainer(expriment_name=...)\n\n# Everything you do below will run on 4 gpus\n\n...\n\nTrainer.train(...)\n\n```\n\nFinally, you can launch your distributed training with a simple python call.\n```bash\npython main.py\n```\n\n\nPlease note that if you work with `torch<1.9.0` (deprecated), you will have to launch your training with either \n`torch.distributed.launch` or `torchrun`, in which case `nproc_per_node` will overwrite the value  set with `gpu_mode`:\n```bash\npython -m torch.distributed.launch --nproc_per_node=4 main.py\n```\n\n```bash\ntorchrun --nproc_per_node=4 main.py\n```\n\n#### Calling functions on a single node\n\nIt is often in DDP training that we want to execute code on the master rank (i.e rank 0).\nIn SG, users usually execute their own code by triggering \"Phase Callbacks\" (see \"Using phase callbacks\" section below).\nOne can make sure the desired code will only be ran on rank 0, using ddp_silent_mode or the multi_process_safe decorator.\nFor example, consider the simple phase callback below, that uploads the first 3 images of every batch during training to\nthe Tensorboard:\n\n```python\nfrom super_gradients.training.utils.callbacks import PhaseCallback, PhaseContext, Phase\nfrom super_gradients.common.environment.env_helpers import multi_process_safe\n\nclass Upload3TrainImagesCalbback(PhaseCallback):\n    def __init__(\n        self,\n    ):\n        
super().__init__(phase=Phase.TRAIN_BATCH_END)\n    \n    @multi_process_safe\n    def __call__(self, context: PhaseContext):\n        batch_imgs = context.inputs.cpu().detach().numpy()\n        tag = \"batch_\" + str(context.batch_idx) + \"_images\"\n        context.sg_logger.add_images(tag=tag, images=batch_imgs[: 3], global_step=context.epoch)\n\n```\nThe @multi_process_safe decorator ensures that the callback will only be triggered by rank 0. Alternatively, this can also\nbe done by the SG trainer boolean attribute (which the phase context has access to), ddp_silent_mode, which is set to False\niff the current process rank is zero (even after the process group has been killed):\n```python\nfrom super_gradients.training.utils.callbacks import PhaseCallback, PhaseContext, Phase\n\nclass Upload3TrainImagesCalbback(PhaseCallback):\n    def __init__(\n        self,\n    ):\n        super().__init__(phase=Phase.TRAIN_BATCH_END)\n\n    def __call__(self, context: PhaseContext):\n        if not context.ddp_silent_mode:\n            batch_imgs = context.inputs.cpu().detach().numpy()\n            tag = \"batch_\" + str(context.batch_idx) + \"_images\"\n            context.sg_logger.add_images(tag=tag, images=batch_imgs[: 3], global_step=context.epoch)\n\n```\n\nNote that ddp_silent_mode can be accessed through SgTrainer.ddp_silent_mode. 
Hence, it can be used in scripts after calling\nSgTrainer.train() when some part of it should be run on rank 0 only.\n\n#### Good to know\nYour total batch size will be (number of GPUs × batch size per GPU), so you might want to increase your learning rate.\nThere is no universal rule, but a common rule of thumb is to [linearly scale the learning rate with the number of GPUs](https://arxiv.org/pdf/1706.02677.pdf).\n\n</details>\n\n<details markdown=\"1\">\n<summary><h3> Easily change architecture parameters </h3></summary>\n\n```python\nfrom super_gradients.training import models\n\n# instantiate default pretrained resnet18\ndefault_resnet18 = models.get(model_name=\"resnet18\", num_classes=100, pretrained_weights=\"imagenet\")\n\n# instantiate pretrained resnet18, turning DropPath on with probability 0.5\ndroppath_resnet18 = models.get(model_name=\"resnet18\", arch_params={\"droppath_prob\": 0.5}, num_classes=100, pretrained_weights=\"imagenet\")\n\n# instantiate pretrained resnet18, without classifier head. 
Output will be from the last stage before global pooling\nbackbone_resnet18 = models.get(model_name=\"resnet18\", arch_params={\"backbone_mode\": True}, pretrained_weights=\"imagenet\")\n```\n\n</details>\n\n<details markdown=\"1\">\n\n<summary><h3> Using phase callbacks </h3></summary>  \n  \n```python\nfrom super_gradients import Trainer\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\nfrom super_gradients.training.utils.callbacks import Phase, LRSchedulerCallback\nfrom super_gradients.training.metrics.classification_metrics import Accuracy\n\n# define PyTorch train and validation loaders and optimizer\n\n# define the scheduler to be called by the callback\nrop_lr_scheduler = ReduceLROnPlateau(optimizer, mode=\"max\", patience=10, verbose=True)\n\n# define phase callbacks; they will fire at the phase defined in Phase\nphase_callbacks = [LRSchedulerCallback(scheduler=rop_lr_scheduler,\n                                       phase=Phase.VALIDATION_EPOCH_END,\n                                       metric_name=\"Accuracy\")]\n\n# create a trainer object; see the declaration for more parameters\ntrainer = Trainer(\"experiment_name\")\n\n# define phase_callbacks as part of the training parameters\ntrain_params = {\"phase_callbacks\": phase_callbacks}\n```\n\n</details>\n\n<details markdown=\"1\">\n\n<summary><h3> Integration with DagsHub </h3></summary>    \n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11fW56pMpwOMHQSbQW6xxMRYvw1mEC-t-?usp=sharing) \n\n```python\nfrom super_gradients import Trainer\n\ntrainer = Trainer(\"experiment_name\")\nmodel = ...\n\ntraining_params = { ...  
# Your training params\n                   \"sg_logger\": \"dagshub_sg_logger\",  # DagsHub Logger, see class super_gradients.common.sg_loggers.dagshub_sg_logger.DagsHubSGLogger for details\n                   \"sg_logger_params\":  # Params that will be passed to __init__ of the logger super_gradients.common.sg_loggers.dagshub_sg_logger.DagsHubSGLogger\n                     {\n                       \"dagshub_repository\": \"<REPO_OWNER>/<REPO_NAME>\", # Optional: Your DagsHub project name, consisting of the owner name, followed by '/', and the repo name. If this is left empty, you'll be prompted in your run to fill it in manually.\n                       \"log_mlflow_only\": False, # Optional: Change to True to bypass logging to DVC and log all artifacts only to MLflow\n                       \"save_checkpoints_remote\": True,\n                       \"save_tensorboard_remote\": True,\n                       \"save_logs_remote\": True,\n                     }\n                   }\n```\n\n</details>\n\n<details>\n\n<summary><h3> Integration with Weights and Biases </h3></summary>    \n  \n\n```python\nfrom super_gradients import Trainer\n\n# create a trainer object; see the declaration for more parameters\ntrainer = Trainer(\"experiment_name\")\n\ntrain_params = { ... 
# training parameters\n                \"sg_logger\": \"wandb_sg_logger\", # Weights&Biases Logger, see class WandBSGLogger for details\n                \"sg_logger_params\": # parameters that will be passed to __init__ of the logger\n                  {\n                    \"project_name\": \"project_name\", # W&B project name\n                    \"save_checkpoints_remote\": True,\n                    \"save_tensorboard_remote\": True,\n                    \"save_logs_remote\": True\n                  } \n               }\n```\n\n</details>\n\n<details markdown=\"1\">\n\n<summary><h3> Integration with ClearML </h3></summary>    \n\n\n```python\nfrom super_gradients import Trainer\n\n# create a trainer object; see the declaration for more parameters\ntrainer = Trainer(\"experiment_name\")\n\ntrain_params = { ... # training parameters\n                \"sg_logger\": \"clearml_sg_logger\", # ClearML Logger, see class ClearMLSGLogger for details\n                \"sg_logger_params\": # parameters that will be passed to __init__ of the logger\n                  {\n                    \"project_name\": \"project_name\", # ClearML project name\n                    \"save_checkpoints_remote\": True,\n                    \"save_tensorboard_remote\": True,\n                    \"save_logs_remote\": True,\n                  } \n               }\n```\n\n\n  </details>\n<details markdown=\"1\">\n\n  <summary><h3> Integration with Voxel51 </h3></summary>    \n  \nYou can apply SuperGradients YOLO-NAS models directly to your FiftyOne dataset using the apply_model() method:\n\n```python\nimport fiftyone as fo\nimport fiftyone.zoo as foz\n\nfrom super_gradients.training import models\n\ndataset = foz.load_zoo_dataset(\"quickstart\", max_samples=25)\ndataset.select_fields().keep_fields()\n\nmodel = models.get(\"yolo_nas_m\", pretrained_weights=\"coco\")\n\ndataset.apply_model(model, label_field=\"yolo_nas\", confidence_thresh=0.7)\n\nsession = fo.launch_app(dataset)\n```\n\nThe 
SuperGradients YOLO-NAS model can be accessed directly from the FiftyOne Model Zoo:\n\n```python\nimport fiftyone as fo\nimport fiftyone.zoo as foz\n\nmodel = foz.load_zoo_model(\"yolo-nas-torch\")\n\ndataset = foz.load_zoo_dataset(\"quickstart\")\ndataset.apply_model(model, label_field=\"yolo_nas\")\n\nsession = fo.launch_app(dataset)\n```\n\n</details>\n\n\n## Installation Methods\n__________________________________________________________________________________________________________\n### Prerequisites\n<details markdown=\"1\">\n  \n<summary>General requirements</summary>\n  \n- Python 3.7, 3.8 or 3.9 installed.\n- 1.9.0 <= torch < 1.14\n  - https://pytorch.org/get-started/locally/\n- The Python packages specified in requirements.txt.\n\n</details>\n    \n<details markdown=\"1\">\n  \n<summary>To train on NVIDIA GPUs</summary>\n  \n- [Nvidia CUDA Toolkit >= 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu)\n- CuDNN >= 8.1.x\n- Nvidia Driver with CUDA >= 11.2 support (\u2265460.x)\n  \n</details>\n    \n### Quick Installation\n\n<details markdown=\"1\">\n  \n<summary>Install stable version using PyPi</summary>\n\nSee in [PyPi](https://pypi.org/project/super-gradients/)\n```bash\npip install super-gradients\n```\n\nThat's it!\n\n</details>\n    \n<details markdown=\"1\">\n  \n<summary>Install using GitHub</summary>\n\n\n```bash\npip install git+https://github.com/Deci-AI/super-gradients.git@stable\n```\n\n</details> \n\n\n## Implemented Model Architectures \n__________________________________________________________________________________________________________\n\nAll Computer Vision Models - Pretrained Checkpoints can be found in the [Model Zoo](http://bit.ly/41dkt89)\n\n### Image Classification\n  \n- [DenseNet (Densely Connected Convolutional Networks)](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/densenet.py) 
\n- [DPN](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/dpn.py) \n- [EfficientNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/efficientnet.py)\n- [LeNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/lenet.py) \n- [MobileNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/mobilenet.py)\n- [MobileNet v2](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/mobilenetv2.py)  \n- [MobileNet v3](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/mobilenetv3.py) \n- [PNASNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/pnasnet.py) \n- [Pre-activation ResNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/preact_resnet.py)  \n- [RegNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/regnet.py)\n- [RepVGG](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/repvgg.py)  \n- [ResNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/resnet.py)\n- [ResNeXt](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/resnext.py) \n- [SENet ](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/senet.py)\n- [ShuffleNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/shufflenet.py)\n- [ShuffleNet 
v2](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/shufflenetv2.py)\n- [VGG](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/classification_models/vgg.py)\n  \n### Semantic Segmentation \n\n- [PP-LiteSeg](https://bit.ly/3RrtMMO)\n- [DDRNet (Deep Dual-resolution Networks)](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/ddrnet.py) \n- [LadderNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/laddernet.py)\n- [RegSeg](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/regseg.py)\n- [ShelfNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/shelfnet.py) \n- [STDC](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/stdc.py)\n  \n\n### Object Detection\n  \n- [CSP DarkNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/csp_darknet53.py)\n- [DarkNet-53](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/darknet53.py)\n- [SSD (Single Shot Detector)](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/ssd.py) \n- [YOLOX](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/detection_models/yolox.py)\n\n\n### Pose Estimation\n\n- [DEKR-W32-NO-DC](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/pose_estimation_models/dekr_hrnet.py)\n\n  \n\n__________________________________________________________________________________________________________\n\n## Implemented Datasets 
\n__________________________________________________________________________________________________________\n\nDeci provides implementations for various datasets. If you need to download any of the datasets, you can\n[find the instructions](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/Dataset_Setup_Instructions.md).\n\n### Image Classification\n  \n- [Cifar10](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/classification_datasets/cifar.py) \n- [ImageNet](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/classification_datasets/imagenet_dataset.py) \n  \n### Semantic Segmentation \n\n- [Cityscapes](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/cityscape_segmentation.py)\n- [Coco](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/coco_segmentation.py) \n- [PascalVOC 2012 / PascalAUG 2012](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/pascal_voc_segmentation.py)\n- [SuperviselyPersons](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/supervisely_persons_segmentation.py)\n- [Mapillary Vistas Dataset](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/segmentation_datasets/mapillary_dataset.py)\n\n\n### Object Detection\n  \n- [Coco](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/detection_datasets/coco_detection.py)\n- [PascalVOC 2007 & 2012](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/datasets/detection_datasets/pascal_voc_detection.py)\n  \n### Pose Estimation\n\n- 
[COCO](https://github.com/Deci-AI/super-gradients/blob/cadcfdd64e7808d21cccddbfaeb26acb8267699b/src/super_gradients/recipes/dataset_params/coco_pose_estimation_dekr_dataset_params.yaml)\n\n__________________________________________________________________________________________________________\n\n\n## Documentation\n\nCheck the SuperGradients [Docs](https://docs.deci.ai/super-gradients/documentation/source/welcome.html) for full documentation, a user guide, and examples.\n  \n## Contributing\n\nTo learn about making a contribution to SuperGradients, please see our [Contribution page](CONTRIBUTING.md).\n\nOur awesome contributors:\n    \n<a href=\"https://github.com/Deci-AI/super-gradients/graphs/contributors\">\n  <img src=\"https://contrib.rocks/image?repo=Deci-AI/super-gradients\" />\n</a>\n\n\n<br/>Made with [contrib.rocks](https://contrib.rocks).\n\n## Citation\n\nIf you are using the SuperGradients library or benchmarks in your research, please cite the SuperGradients deep learning training library.\n\n## Community\n\nIf you want to be part of the growing SuperGradients community, hear about exciting news and updates, need help, want to request advanced features,\n    or want to file a bug or issue report, we would love to welcome you aboard!\n\n* Discord is the place to ask questions about SuperGradients and get support. 
[Click here to join our Discord Community](\n  https://discord.gg/2v6cEGMREN)\n    \n* To report a bug, [file an issue](https://github.com/Deci-AI/super-gradients/issues) on GitHub.\n\n* Join the [SG Newsletter](https://www.supergradients.com/#Newsletter)\n  to stay up to date with new features and models, important announcements, and upcoming events.\n\n* For a short meeting with us, use this [link](https://calendly.com/ofer-baratz-deci/15min) and choose your preferred time.\n\n## License\n\nThis project is released under the [Apache 2.0 license](LICENSE).\n    \n## Citing\n\n### BibTeX\n\n```bibtex\n\n@misc{supergradients,\n  doi = {10.5281/ZENODO.7789328},\n  url = {https://zenodo.org/record/7789328},\n  author = {Aharon,  Shay and {Louis-Dupont} and {Ofri Masad} and Yurkova,  Kate and {Lotem Fridman} and {Lkdci} and Khvedchenya,  Eugene and Rubin,  Ran and Bagrov,  Natan and Tymchenko,  Borys and Keren,  Tomer and Zhilko,  Alexander and {Eran-Deci}},\n  title = {Super-Gradients},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  year = {2021},\n}\n```\n\n### Latest DOI\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7789328.svg)](https://doi.org/10.5281/zenodo.7789328)\n\n    \n__________________________________________________________________________________________________________\n\n\n## Deci Platform\n\nDeci Platform is our end-to-end platform for building, optimizing, and deploying deep learning models to production.\n\n[Request a free trial](https://bit.ly/3qO3icq) to enjoy immediate improvement in throughput, latency, memory footprint and model size.\n\nFeatures\n\n- Automatically compile and quantize your models with just a few clicks (TensorRT, OpenVINO).\n- Gain up to 10X improvement in throughput, latency, memory and model size. 
\n- Easily benchmark your models\u2019 performance on different hardware and batch sizes.\n- Invite co-workers to collaborate on models and communicate your progress.\n- Deci supports all common frameworks and hardware, from Intel CPUs to NVIDIA GPUs and Jetsons.\n\nRequest a free trial [here](https://bit.ly/3qO3icq)\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "SuperGradients",
    "version": "3.7.1",
    "project_urls": {
        "Homepage": "https://docs.deci.ai/super-gradients/documentation/source/welcome.html"
    },
    "split_keywords": [
        "deci",
        " ai",
        " training",
        " deep learning",
        " computer vision",
        " pytorch",
        " sota",
        " recipes",
        " pre trained",
        " models"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "2579930f313adb3c1bec2f650f9cb5c46656e4ba3d0a2f63869385a9ad9d0244",
                "md5": "e8bdbaf09d7a92314df530e9cadad887",
                "sha256": "f4ebf2f551399f77070609da15efb2cfaf90392a38be4be8b1d4eab0d9c47381"
            },
            "downloads": -1,
            "filename": "super_gradients-3.7.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e8bdbaf09d7a92314df530e9cadad887",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 12120472,
            "upload_time": "2024-04-08T16:51:38",
            "upload_time_iso_8601": "2024-04-08T16:51:38.553991Z",
            "url": "https://files.pythonhosted.org/packages/25/79/930f313adb3c1bec2f650f9cb5c46656e4ba3d0a2f63869385a9ad9d0244/super_gradients-3.7.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-08 16:51:38",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "super-gradients"
}
        