# torch-uncertainty

- Name: torch-uncertainty
- Version: 0.1.6
- Summary: Uncertainty quantification library in PyTorch
- Upload time: 2024-02-14 09:35:05
- Requires Python: >=3.10
- Keywords: bayesian-network, ensembles, neural-networks, predictive-uncertainty, pytorch, reliable-ai, trustworthy-machine-learning, uncertainty, uncertainty-quantification
<div align="center">

![Torch Uncertainty Logo](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/blob/main/docs/source/_static/images/torch_uncertainty.png)

[![pypi](https://img.shields.io/pypi/v/torch_uncertainty.svg)](https://pypi.python.org/pypi/torch_uncertainty)
[![tests](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/workflows/run-tests.yml/badge.svg?branch=main&event=push)](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/workflows/run-tests.yml)
[![Docs](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/actions/workflows/build-docs.yml/badge.svg)](https://torch-uncertainty.github.io/)
[![PRWelcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/pulls)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Code Coverage](https://codecov.io/github/ENSTA-U2IS-AI/torch-uncertainty/coverage.svg?branch=master)](https://codecov.io/gh/ENSTA-U2IS-AI/torch-uncertainty)
[![Discord Badge](https://dcbadge.vercel.app/api/server/HMCawt5MJu?compact=true&style=flat)](https://discord.gg/HMCawt5MJu)
</div>

_TorchUncertainty_ is a package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable. It aims to be collaborative and to include as many methods as possible, so reach out to add yours!

:construction: _TorchUncertainty_ is in early development :construction: - expect changes, but reach out and contribute if you are interested in the project! **Please raise an issue if you encounter any bugs or difficulties, and join the [Discord server](https://discord.gg/HMCawt5MJu).**

---

This package provides a multi-level API, including:

- ready-to-train baselines on research datasets, such as ImageNet and CIFAR
- deep learning baselines available for training on your own datasets
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (work in progress 🚧)
- layers available for use in your networks
- scikit-learn-style post-processing methods such as Temperature Scaling

See the [Reference page](https://torch-uncertainty.github.io/references.html) or the [API reference](https://torch-uncertainty.github.io/api.html) for a more exhaustive list of the implemented methods, datasets, metrics, etc.

## Installation

Install the desired PyTorch version in your environment.
Then, install the package from PyPI:

```sh
pip install torch-uncertainty
```

If you aim to contribute, have a look at the [contribution page](https://torch-uncertainty.github.io/contributing.html).

## Getting Started and Documentation

Please find the documentation at [torch-uncertainty.github.io](https://torch-uncertainty.github.io).

A quickstart is available at [torch-uncertainty.github.io/quickstart](https://torch-uncertainty.github.io/quickstart.html).

## Implemented methods

### Baselines

To date, the following deep learning baselines have been implemented:

- Deep Ensembles
- MC-Dropout - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_dropout.html)
- BatchEnsemble
- Masksembles
- MIMO
- Packed-Ensembles (see [blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873)) - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- Bayesian Neural Networks :construction: Work in progress :construction: - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
- Regression with Beta Gaussian NLL Loss
- Deep Evidential Classification & Regression - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html)
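Several of these baselines share a common idea: combine the predictions of multiple members into a single, better-calibrated predictive distribution. As a plain-Python illustration of the Deep Ensembles idea (a conceptual sketch, not TorchUncertainty's API; the function names are hypothetical):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(member_logits):
    # Average the predictive distributions (not the logits) of the
    # ensemble members, as in Deep Ensembles.
    probs = [softmax(logits) for logits in member_logits]
    n_members = len(probs)
    n_classes = len(probs[0])
    return [sum(p[c] for p in probs) / n_members for c in range(n_classes)]

# Three members, one 3-class example: the averaged distribution is
# typically better calibrated than any single member's prediction.
members = [[2.0, 0.5, 0.1], [1.5, 1.0, 0.2], [2.2, 0.3, 0.4]]
avg = ensemble_predict(members)
```

In practice the members are independently trained networks and the averaging runs over batched tensors, but the aggregation step is the same convex combination shown here.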

### Augmentation methods

The following data augmentation methods have been implemented:

- Mixup, MixupIO, RegMixup, WarpingMixup
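All of these build on the basic Mixup operation: draw a mixing coefficient from a Beta distribution and take convex combinations of both inputs and labels. A minimal sketch of that core step (illustrative only; `mixup` here is a hypothetical helper, not the library's implementation):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    # Draw the mixing coefficient lambda ~ Beta(alpha, alpha), then form
    # convex combinations of the inputs and the one-hot labels.
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Mix two toy 2-D inputs with one-hot labels.
x, y, lam = mixup([0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
```

The variants listed above change where and how the combination is applied (e.g. regularization terms or warped mixing), but the interpolation itself follows this pattern.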

### Post-processing methods

To date, the following post-processing methods have been implemented:

- Temperature, Vector, & Matrix scaling - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_scaler.html)
- Monte Carlo Batch Normalization - [Tutorial](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_batch_norm.html)
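Temperature scaling, the simplest of these, rescales the logits by a single scalar `T` (fit on a validation set) before the softmax; `T > 1` softens overconfident predictions without changing the predicted class. A plain-Python sketch of the idea (not the library's scaler class):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def temperature_scale(logits, temperature):
    # Divide all logits by the temperature before the softmax.
    # temperature > 1 flattens the distribution; the argmax is unchanged.
    return softmax([z / temperature for z in logits])

# Overconfident logits: scaling with T = 2 lowers the top probability
# while keeping the same predicted class.
calibrated = temperature_scale([4.0, 1.0, 0.0], 2.0)
```

Vector and Matrix scaling generalize the scalar `T` to a per-class vector and a full linear map on the logits, respectively.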

## Tutorials

We provide the following tutorials in our documentation:

- [From a Standard Classifier to a Packed-Ensemble](https://torch-uncertainty.github.io/auto_tutorials/tutorial_pe_cifar10.html)
- [Training a Bayesian Neural Network in 3 minutes](https://torch-uncertainty.github.io/auto_tutorials/tutorial_bayesian.html)
- [Improve Top-label Calibration with Temperature Scaling](https://torch-uncertainty.github.io/auto_tutorials/tutorial_scaler.html)
- [Deep Evidential Regression on a Toy Example](https://torch-uncertainty.github.io/auto_tutorials/tutorial_der_cubic.html)
- [Training a LeNet with Monte-Carlo Dropout](https://torch-uncertainty.github.io/auto_tutorials/tutorial_mc_dropout.html)
- [Training a LeNet with Deep Evidential Classification](https://torch-uncertainty.github.io/auto_tutorials/tutorial_evidential_classification.html)
  
## Awesome Uncertainty repositories

You can find many papers on modern uncertainty estimation techniques in the [Awesome Uncertainty in Deep Learning](https://github.com/ENSTA-U2IS-AI/awesome-uncertainty-deeplearning) repository.

## Other References

This package also contains the official implementation of Packed-Ensembles.

If you find the corresponding models interesting, please consider citing our [paper](https://arxiv.org/abs/2210.09184):

```text
@inproceedings{laurent2023packed,
    title={Packed-Ensembles for Efficient Uncertainty Estimation},
    author={Laurent, Olivier and Lafage, Adrien and Tartaglione, Enzo and Daniel, Geoffrey and Martinez, Jean-Marc and Bursuc, Andrei and Franchi, Gianni},
    booktitle={ICLR},
    year={2023}
}
```


            
