equine

Name: equine
Version: 0.1.2
Summary: EQUINE^2: Establishing Quantified Uncertainty for Neural Networks
Homepage: https://mit-ll-responsible-ai.github.io/equine/
Source: https://github.com/mit-ll-responsible-ai/equine
Upload time: 2023-10-16 17:29:02
Maintainers: Allan Wollaber, Steven Jorgensen
Authors: Allan Wollaber, Steven Jorgensen, John Holodnak, Jensen Dempsey, Harry Li
Requires Python: >=3.9
License: MIT
Keywords: machine learning, robustness, pytorch, responsible AI

# Establishing Quantified Uncertainty in Neural Networks
<p align="center"><img src="assets/equine_full_logo.svg" width="720"/></p>

[![PyPi](https://img.shields.io/pypi/v/equine.svg)](https://pypi.org/project/equine/)
[![Build Status](https://github.com/mit-ll-responsible-ai/equine/actions/workflows/Tests.yml/badge.svg?branch=main)](https://github.com/mit-ll-responsible-ai/equine/actions/workflows/Tests.yml)
![python_passing_tests](https://img.shields.io/badge/Tests%20Passed-100%25-green)
![python_coverage](https://img.shields.io/badge/Coverage-98%25-green)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Tested with Hypothesis](https://img.shields.io/badge/hypothesis-tested-brightgreen.svg)](https://hypothesis.readthedocs.io/)
[![DOI](https://zenodo.org/badge/653796804.svg)](https://zenodo.org/badge/latestdoi/653796804)


## Usage
Deep neural networks (DNNs) for supervised labeling problems produce accurate
results on a wide variety of learning tasks. However, when accuracy is the only
training objective, DNNs frequently make over-confident predictions, and they
always emit a label prediction even when the test data does not belong to any
of the known labels.

EQUINE was created to simplify two kinds of uncertainty quantification for supervised labeling problems:
1) Calibrated probabilities for each predicted label
2) An in-distribution score, indicating whether any of the model's known labels should be trusted.
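
As a rough illustration of how these two quantities might be combined downstream, here is a
hypothetical decision rule; the variable names, the orientation of the score (higher meaning
more out-of-distribution), and the threshold are all illustrative rather than part of the
EQUINE API:
```python
import torch

def decide(probs: torch.Tensor, ood_score: float, ood_threshold: float = 0.5) -> str:
    """Hypothetical decision rule combining calibrated probabilities and an OOD-style score.

    `probs` stands in for calibrated per-label probabilities, and `ood_score` for a score
    in [0, 1] where higher means "less like the training data" (illustrative only).
    """
    if ood_score > ood_threshold:
        return "abstain: input does not appear to match any known label"
    label = int(torch.argmax(probs))       # most likely known label
    confidence = float(probs[label])       # calibrated probability for that label
    return f"predict label {label} with calibrated confidence {confidence:.2f}"

# Illustrative values only:
print(decide(torch.tensor([0.7, 0.2, 0.1]), ood_score=0.1))  # confident, in-distribution
print(decide(torch.tensor([0.4, 0.3, 0.3]), ood_score=0.9))  # likely out-of-distribution
```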

Dive into our [documentation examples](https://mit-ll-responsible-ai.github.io/equine/)
to get started. Additionally, we provide a [companion web application](https://github.com/mit-ll-responsible-ai/equine-webapp).

## Installation
We recommend installing EQUINE inside a virtual environment such as Anaconda, as is also
recommended in the [pytorch installation](https://github.com/pytorch/pytorch) instructions.
EQUINE has relatively few dependencies beyond torch.
```console
pip install equine
```
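For example, assuming a conda-based setup (the environment name and Python version below are
arbitrary choices):
```console
conda create -n equine-env python=3.10
conda activate equine-env
pip install equine
```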
Users interested in contributing should refer to `CONTRIBUTING.md` for details.

## Design
EQUINE extends PyTorch's `nn.Module` interface with a `predict` method that returns both
the class predictions and additional out-of-distribution (OOD) scores.
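
A minimal end-to-end sketch is shown below. The wrapper class `EquineProtonet`, its
`train_model` method, and the `classes`/`ood_scores` fields of the returned object follow the
documentation examples, but treat this as a sketch and consult the
[documentation](https://mit-ll-responsible-ai.github.io/equine/) for the exact signatures:
```python
import torch
from torch.utils.data import TensorDataset

import equine as eq

# Any torch.nn.Module that maps inputs to an embedding vector can be wrapped.
embedding_model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 8),  # 8-dimensional embedding
)

# Synthetic data for illustration: 200 samples, 16 features, 3 classes.
X_train = torch.randn(200, 16)
Y_train = torch.randint(0, 3, (200,))

# Wrap the embedding model and train (constructor and method names follow the
# documentation examples; verify the exact signatures in the docs).
model = eq.EquineProtonet(embedding_model, emb_out_dim=8)
model.train_model(TensorDataset(X_train, Y_train), num_episodes=100)

# predict() returns both class predictions and OOD scores.
output = model.predict(torch.randn(5, 16))
print(output.classes)     # calibrated probability for each known label
print(output.ood_scores)  # per-sample OOD score (see the docs for its interpretation)
```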

## Disclaimer

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

© 2023 MASSACHUSETTS INSTITUTE OF TECHNOLOGY

- Subject to FAR 52.227-11 – Patent Rights – Ownership by the Contractor (May 2014)
- SPDX-License-Identifier: MIT

This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.

The software/firmware is provided to you on an As-Is basis.

            
