equine

Name: equine
Version: 0.1.4
Summary: EQUINE^2: Establishing Quantified Uncertainty for Neural Networks
Upload time: 2024-08-22 17:42:08
Author: Allan Wollaber, Steven Jorgensen, John Holodnak, Jensen Dempsey, Harry Li
Maintainer: Allan Wollaber, Steven Jorgensen
Requires Python: >=3.9
License: MIT
Keywords: machine learning, robustness, pytorch, responsible AI
Requirements: No requirements were recorded.
# Establishing Quantified Uncertainty in Neural Networks
<p align="center"><img src="assets/equine_full_logo.svg" width="720"/></p>

[![PyPi](https://img.shields.io/pypi/v/equine.svg)](https://pypi.org/project/equine/)
[![Build Status](https://github.com/mit-ll-responsible-ai/equine/actions/workflows/Tests.yml/badge.svg?branch=main)](https://github.com/mit-ll-responsible-ai/equine/actions/workflows/Tests.yml)
![python_passing_tests](https://img.shields.io/badge/Tests%20Passed-100%25-green)
![python_coverage](https://img.shields.io/badge/Coverage-98%25-green)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Tested with Hypothesis](https://img.shields.io/badge/hypothesis-tested-brightgreen.svg)](https://hypothesis.readthedocs.io/)
[![DOI](https://zenodo.org/badge/653796804.svg)](https://zenodo.org/badge/latestdoi/653796804)


## Usage
Deep neural networks (DNNs) for supervised labeling problems produce accurate
results on a wide variety of learning tasks. However, when accuracy is the only
objective, DNNs frequently make over-confident predictions, and they always
output a label prediction even when the test data belongs to none of the known
labels.

EQUINE was created to simplify two kinds of uncertainty quantification for supervised labeling problems:
1) Calibrated probabilities for each predicted label
2) An in-distribution score, indicating whether any of the model's known labels should be trusted.

Dive into our [documentation examples](https://mit-ll-responsible-ai.github.io/equine/)
to get started. Additionally, we provide a [companion web application](https://github.com/mit-ll-responsible-ai/equine-webapp).
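
Once a model has been wrapped and trained with EQUINE, both quantities come from a
single `predict` call. The sketch below is only illustrative: the wrapper class
`EquineProtonet`, its arguments, and the output field names are assumptions rather
than the verified API, and the training step is omitted; the documentation examples
linked above show the exact workflow.

```python
import torch
import equine

# Any ordinary feature-embedding network can be used; this toy MLP and its
# dimensions are purely illustrative.
embedding_model = torch.nn.Sequential(
    torch.nn.Linear(20, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 32),
)

# Wrap the embedding model so predictions carry uncertainty estimates.
# The class name `EquineProtonet` and its arguments are assumptions here.
model = equine.EquineProtonet(embedding_model, emb_out_dim=32)

# ... fit the wrapped model on labeled training data (omitted) ...

x = torch.randn(8, 20)      # a small batch of test inputs
output = model.predict(x)   # one call yields both uncertainty outputs

probs = output.classes      # (1) calibrated probability for each known label
scores = output.ood_scores  # (2) score indicating whether each input is in-distribution
```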

## Installation
We recommend installing EQUINE into a virtual environment such as Anaconda, as is
also recommended for the [pytorch installation](https://github.com/pytorch/pytorch).
EQUINE has relatively few dependencies beyond torch.
```console
pip install equine
```
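
For a completely fresh setup, installation into a new conda environment might look
like the following (the environment name and Python version are only illustrative):
```console
conda create -n equine-env python=3.10
conda activate equine-env
pip install equine
python -c "import equine"
```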
Users interested in contributing should refer to `CONTRIBUTING.md` for details.

## Design
EQUINE extends pytorch's `nn.Module` interface with a `predict` method that returns both
the class predictions and the extra out-of-distribution (OOD) scores.
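
In other words, a conforming model is still a regular `nn.Module` (with `forward`),
but additionally exposes `predict`. The toy class below sketches that contract only;
the output container, its field names, and the naive confidence-based OOD score are
stand-ins, not EQUINE's actual implementation.

```python
from dataclasses import dataclass

import torch


@dataclass
class UncertainOutput:
    classes: torch.Tensor     # calibrated probability for each known label
    ood_scores: torch.Tensor  # out-of-distribution score per example


class ToyEquineStyleModel(torch.nn.Module):
    """Illustrative only: an ordinary nn.Module whose predict() method
    also returns OOD scores, mirroring the interface described above."""

    def __init__(self, in_dim: int, num_classes: int) -> None:
        super().__init__()
        self.backbone = torch.nn.Linear(in_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standard nn.Module behavior: raw logits.
        return self.backbone(x)

    def predict(self, x: torch.Tensor) -> UncertainOutput:
        probs = torch.softmax(self.forward(x), dim=-1)
        # Stand-in OOD score: low max-probability => likely out of distribution.
        ood = 1.0 - probs.max(dim=-1).values
        return UncertainOutput(classes=probs, ood_scores=ood)
```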

## Disclaimer

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

© 2024 MASSACHUSETTS INSTITUTE OF TECHNOLOGY

- Subject to FAR 52.227-11 – Patent Rights – Ownership by the Contractor (May 2014)
- SPDX-License-Identifier: MIT

This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.

The software/firmware is provided to you on an As-Is basis.

            
