properscoring
=============

.. image:: https://travis-ci.org/TheClimateCorporation/properscoring.svg?branch=master
    :target: https://travis-ci.org/TheClimateCorporation/properscoring

`Proper scoring rules`_ for evaluating probabilistic forecasts in Python.
Evaluation methods that are "strictly proper" cannot be artificially improved
through hedging, which makes them fair methods for assessing the accuracy of
probabilistic forecasts. In particular, these rules are often used for
evaluating weather forecasts.

.. _Proper scoring rules: https://www.stat.washington.edu/raftery/Research/PDF/Gneiting2007jasa.pdf

properscoring runs on both Python 2 and 3. It requires NumPy (1.8 or
later) and SciPy (any recent version should be fine). Numba is optional,
but highly encouraged: it enables significant speedups (e.g., 20x faster)
for ``crps_ensemble`` and ``threshold_brier_score``.

To install, use pip: ``pip install properscoring``.

Example: five ways to calculate CRPS
------------------------------------

This library focuses on the closely related
`Continuous Ranked Probability Score`_ (CRPS) and `Brier Score`_. We like
these scores because they are both interpretable (e.g., CRPS is a
generalization of mean absolute error) and easily calculated from a finite
number of samples of a probability distribution.

.. _Continuous Ranked Probability Score: http://www.eumetcal.org/resources/ukmeteocal/verification/www/english/msg/ver_prob_forec/uos3b/uos3b_ko1.htm
.. _Brier score: https://en.wikipedia.org/wiki/Brier_score

We will illustrate how to calculate CRPS against a forecast given by a
Gaussian random variable. To begin, import properscoring::

    import numpy as np
    import properscoring as ps
    from scipy.stats import norm

Exact calculation using ``crps_gaussian`` (this is the fastest method)::

    >>> ps.crps_gaussian(0, mu=0, sig=1)
    0.23369497725510913
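
For reference, this agrees with the standard closed-form expression for the
CRPS of a Gaussian forecast (Gneiting & Raftery, 2007). A quick sanity check
with SciPy (a sketch using only the imports above, not part of the
properscoring API)::

    >>> z = (0 - 0) / 1.0    # standardized observation, (obs - mu) / sig
    >>> crps = 1.0 * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    >>> round(float(crps), 8)
    0.23369498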

Numerical integration with ``crps_quadrature``::

    >>> ps.crps_quadrature(0, norm)
    array(0.23369497725510724)

From a finite sample with ``crps_ensemble``::

    >>> ensemble = np.random.RandomState(0).randn(1000)
    >>> ps.crps_ensemble(0, ensemble)
    0.2297109370729622
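
As a sanity check on the claim above that CRPS generalizes (mean) absolute
error: for a degenerate one-member ensemble, the CRPS reduces to the absolute
difference between forecast and observation. A small sketch::

    >>> ps.crps_ensemble(0.5, [0.0])    # point forecast: CRPS == |obs - forecast|
    0.5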

Weighted by PDF values with ``crps_ensemble`` (here the "ensemble" is a grid
of points, weighted by the forecast density)::

    >>> x = np.linspace(-5, 5, num=1000)
    >>> ps.crps_ensemble(0, x, weights=norm.pdf(x))
    0.23370047937569616

Based on the `threshold decomposition`_ of CRPS with
``threshold_brier_score`` (integrating these scores over all thresholds
approximately recovers the ensemble CRPS computed above)::

    >>> threshold_scores = ps.threshold_brier_score(0, ensemble, threshold=x)
    >>> (x[1] - x[0]) * threshold_scores.sum(axis=-1)
    0.22973090090090081

.. _threshold decomposition: https://www.stat.washington.edu/research/reports/2008/tr533.pdf

In this example, we scored only a single observation/forecast pair. But to
reliably evaluate a forecast model, you need to average these scores across
many observations. Fortunately, all scoring rules in properscoring accept
multi-dimensional arrays of observations and return scores of the same shape::

    >>> ps.crps_gaussian([-2, -1, 0, 1, 2], mu=0, sig=1)
    array([ 1.45279182,  0.60244136,  0.23369498,  0.60244136,  1.45279182])

Once you have calculated an average score, it is often useful to normalize it
relative to a baseline forecast to obtain a so-called "skill score", defined
such that 0 indicates no improvement over the baseline and 1 indicates a
perfect forecast. For example, suppose that our baseline forecast is to always
predict 0::

    >>> obs = [-2, -1, 0, 1, 2]
    >>> baseline_score = ps.crps_ensemble(obs, [0, 0, 0, 0, 0]).mean()
    >>> forecast_score = ps.crps_gaussian(obs, mu=0, sig=1).mean()
    >>> skill = (baseline_score - forecast_score) / baseline_score
    >>> skill
    0.27597311068630859

The standard normal forecast was about 28% better than the baseline at
predicting these five observations.

API
---

properscoring contains optimized and extensively tested routines for
scoring probability forecasts. These functions currently fall into two
categories:

* Continuous Ranked Probability Score (CRPS):

  - for an ensemble forecast: ``crps_ensemble``
  - for a Gaussian distribution: ``crps_gaussian``
  - for an arbitrary cumulative distribution function: ``crps_quadrature``

* Brier score:

  - for binary probability forecasts: ``brier_score``
  - for threshold exceedances with an ensemble forecast: ``threshold_brier_score``

All functions are robust to missing values represented by the floating
point value ``NaN``.
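
``brier_score`` is the only function above not demonstrated in the example
section; it is simply the squared difference between the forecast probability
and the binary (0/1) outcome. A minimal sketch, reusing the import convention
from the examples above::

    >>> ps.brier_score(1, 0.5)    # event occurred, forecast probability 0.5
    0.25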

History
-------

This library was written by researchers at The Climate Corporation. The
original authors include Leon Barrett, Stephan Hoyer, Alex Kleeman and
Drew O'Kane.

License
-------

Copyright 2015 The Climate Corporation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Contributions
-------------

Outside contributions (bug fixes or new features related to proper scoring
rules) would be very welcome! Please open a GitHub issue to discuss your
plans.
            
