<div align="center">

[PyPI](https://pypi.python.org/pypi/torch_uncertainty)
[Tests](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/workflows/run-tests.yml)
[Documentation](https://torch-uncertainty.github.io/)
[Pull Requests](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/pulls)
[Ruff](https://github.com/astral-sh/ruff)
[Codecov](https://codecov.io/gh/ENSTA-U2IS-AI/torch-uncertainty)
[Downloads](https://pepy.tech/project/torch-uncertainty)
[Discord](https://discord.gg/HMCawt5MJu)
</div>
_TorchUncertainty_ is a package designed to help leverage [uncertainty quantification techniques](https://github.com/ENSTA-U2IS-AI/awesome-uncertainty-deeplearning) to make deep neural networks more reliable. It aims to be collaborative and to include as many methods as possible, so reach out to add yours!
:construction: _TorchUncertainty_ is in early development :construction: - expect changes, but reach out and contribute if you are interested in the project! **Please raise an issue if you encounter any bugs or difficulties, and join the [Discord server](https://discord.gg/HMCawt5MJu).**
:books: Our webpage and documentation are available here: [torch-uncertainty.github.io](https://torch-uncertainty.github.io). :books:
TorchUncertainty contains the _official implementations_ of multiple papers from _major machine-learning and computer vision conferences_ and was featured in tutorials at **[WACV](https://wacv2024.thecvf.com/) 2024**, **[HAICON](https://haicon24.de/) 2024** and **[ECCV](https://eccv.ecva.net/) 2024**.
---
This package provides a multi-level API, including:
- easy-to-use :zap: Lightning **uncertainty-aware** training & evaluation routines for classification, probabilistic and pointwise regression, segmentation, and pixelwise regression
- fully automated evaluation of model performance with proper scoring rules, selective classification, out-of-distribution detection, and distribution-shift metrics
- ready-to-train baselines on research datasets such as ImageNet and CIFAR
- **layers**, **models**, **metrics**, & **losses** available for your own networks
- scikit-learn-style post-processing methods such as temperature scaling
- transformations and augmentations, including corruptions, with the resulting "corrupted datasets" available on [HuggingFace](https://huggingface.co/torch-uncertainty)
Have a look at the [Reference page](https://torch-uncertainty.github.io/references.html) or the [API reference](https://torch-uncertainty.github.io/api.html) for a more exhaustive list of the implemented methods, datasets, metrics, etc.
## :gear: Installation
TorchUncertainty requires Python 3.10 or greater. Install the desired PyTorch version in your environment.
Then, install the package from PyPI:
```sh
pip install torch-uncertainty
```
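Once installed, you can sanity-check the setup from the command line. This assumes the package exposes a `__version__` attribute, which is standard but worth confirming against the documentation:

```sh
python -c "import torch_uncertainty; print(torch_uncertainty.__version__)"
```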
The installation procedure for contributors is different: have a look at the [contribution page](https://torch-uncertainty.github.io/contributing.html).
### :whale: Docker image for contributors
For contributors running experiments on cloud GPU instances, we provide a pre-built Docker image that includes all necessary dependencies and configurations, as well as the Dockerfile for building your own custom images.
This allows you to quickly launch an experiment-ready container with minimal setup. Please refer to [DOCKER.md](docker/DOCKER.md) for further details.
## :racehorse: Quickstart
We make a quickstart available at [torch-uncertainty.github.io/quickstart](https://torch-uncertainty.github.io/quickstart.html).
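As a taste of the API, here is a minimal classification sketch. It assumes the `ClassificationRoutine` wrapper from `torch_uncertainty.routines` with `model`, `num_classes`, and `loss` arguments; names may differ between versions, so treat it as an illustration and follow the quickstart for the canonical setup:

```python
import torch
from lightning.pytorch import Trainer
from torch_uncertainty.routines import ClassificationRoutine

# Any torch.nn.Module classifier can be wrapped; a tiny MLP for illustration.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# The routine pairs the model with a loss and uncertainty-aware metrics.
routine = ClassificationRoutine(
    model=model,
    num_classes=10,
    loss=torch.nn.CrossEntropyLoss(),
)

# Standard Lightning loop; plug in your own datamodule or dataloaders.
trainer = Trainer(max_epochs=1)
# trainer.fit(routine, train_dataloaders=..., val_dataloaders=...)
```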
## :books: Implemented methods
TorchUncertainty currently supports **classification**, **probabilistic** and pointwise **regression**, **segmentation** and **pixelwise regression** (such as monocular depth estimation).
We also provide the following methods:
### Uncertainty quantification models
To date, the following deep learning uncertainty quantification methods have been implemented (a plain-PyTorch sketch of the core ensembling idea follows the list). **Click** :inbox_tray: **on the methods for tutorials**:
- [Deep Ensembles](https://torch-uncertainty.github.io/auto_tutorials/Classification/tutorial_from_de_to_pe.html), BatchEnsemble, Masksembles, & MIMO
- [MC-Dropout](https://torch-uncertainty.github.io/auto_tutorials/Bayesian_Methods/tutorial_mc_dropout.html)
- [Packed-Ensembles](https://torch-uncertainty.github.io/auto_tutorials/Classification/tutorial_from_de_to_pe.html) (see [Blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873))
- [Variational Bayesian Neural Networks](https://torch-uncertainty.github.io/auto_tutorials/Bayesian_Methods/tutorial_bayesian.html)
- Checkpoint Ensembles & Snapshot Ensembles
- Stochastic Weight Averaging & Stochastic Weight Averaging Gaussian
- [Deep Evidential Classification](https://torch-uncertainty.github.io/auto_tutorials/Classification/tutorial_evidential_classification.html) & [Regression](https://torch-uncertainty.github.io/auto_tutorials/Regression/tutorial_der_cubic.html)
- Regression with Beta Gaussian NLL Loss
- Test-time adaptation with Zero
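
To make the ensembling idea concrete, below is a minimal sketch of deep-ensemble averaging in plain PyTorch, not TorchUncertainty's own wrappers: several independently trained networks are evaluated and their softmax outputs averaged to form the predictive distribution.

```python
import torch

def make_net() -> torch.nn.Module:
    # Stand-in classifier; in practice each member is trained independently
    # from a different random initialization.
    return torch.nn.Sequential(
        torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
    )

ensemble = [make_net() for _ in range(4)]

@torch.no_grad()
def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    # Average the members' predictive distributions (not their raw logits).
    probs = torch.stack([net(x).softmax(dim=-1) for net in ensemble])
    return probs.mean(dim=0)

x = torch.randn(8, 16)
mean_probs = ensemble_predict(x)  # (8, 10) averaged predictive distribution
# Predictive entropy as a simple uncertainty score:
entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
```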
### Augmentation methods
The following data augmentation methods have been implemented (a minimal Mixup sketch follows the list):
- Mixup, MixupIO, RegMixup, WarpingMixup
- Modernized corruptions to evaluate model performance under distribution shift
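
For illustration, here is a minimal Mixup sketch in plain PyTorch. TorchUncertainty ships its own implementations, so consider this a hedged outline of the underlying idea rather than the library's API:

```python
import torch

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Mix a batch with a shuffled copy of itself (Zhang et al., 2018)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]
    # Return both label sets; the loss is the lam-weighted sum of both terms.
    return x_mixed, y, y[perm], lam

x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
x_mixed, y_a, y_b, lam = mixup(x, y)
# loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)
```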
### Post-processing methods
To date, the following post-processing methods have been implemented (a temperature-scaling sketch follows the list):
- [Temperature](https://torch-uncertainty.github.io/auto_tutorials/Post_Hoc_Methods/tutorial_scaler.html), Vector, & Matrix scaling
- [Conformal Predictions](https://torch-uncertainty.github.io/auto_tutorials/Post_Hoc_Methods/tutorial_conformal.html) with APS and RAPS
- [Monte Carlo Batch Normalization](https://torch-uncertainty.github.io/auto_tutorials/Post_Hoc_Methods/tutorial_mc_batch_norm.html)
- Laplace approximation through the [Laplace library](https://github.com/aleximmer/Laplace)
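
To show what scikit-learn-style post-processing means in practice, here is a hedged plain-PyTorch sketch of temperature scaling: a single scalar temperature is fit on held-out validation logits by minimizing the negative log-likelihood. TorchUncertainty's own scaler classes wrap this pattern behind a fit/apply interface; see the tutorial linked above for the exact API.

```python
import torch

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor, steps: int = 100) -> float:
    """Fit a single temperature on validation logits by minimizing the NLL."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.LBFGS([log_t], max_iter=steps)

    def closure():
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

val_logits = torch.randn(256, 10)
val_labels = torch.randint(0, 10, (256,))
temperature = fit_temperature(val_logits, val_labels)
# Calibrated probabilities at test time: (test_logits / temperature).softmax(-1)
```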
### Official Implementations
TorchUncertainty includes the official implementations of the following papers:
- _Packed-Ensembles for Efficient Uncertainty Estimation_ - [ICLR 2023](https://arxiv.org/abs/2210.09184) - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/Classification/tutorial_pe_cifar10.html)
- _LP-BNN: Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification_ - [IEEE TPAMI 2023](https://arxiv.org/abs/2012.02818)
- _MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for multiple uncertainty types and tasks_ - [BMVC 2022](https://arxiv.org/abs/2203.01437)
## Tutorials
Check out all our tutorials at [torch-uncertainty.github.io/auto_tutorials](https://torch-uncertainty.github.io/auto_tutorials/index.html).
## :telescope: Projects using TorchUncertainty
The following projects use TorchUncertainty:
- _Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It_ - [ICLR 2025](https://arxiv.org/abs/2403.14715)
- _A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors_ - [ICLR 2024](https://arxiv.org/abs/2310.08287)
**If you are using TorchUncertainty in your project, please let us know, and we will add your project to this list!**