otbenchmark 0.2.1 — A benchmark library for OpenTURNS

* Homepage: https://github.com/openturns/otbenchmark
* Authors: Michaël Baudin, Youssef Jebroun, Elias Fekhari, Vincent Chabridon
* Maintainer: Michaël Baudin
* Requires: Python >=3.8
* Keywords: reliability, sensitivity, benchmark
* Uploaded: 2024-10-09 07:31:07
            [![GHA](https://github.com/openturns/otbenchmark/actions/workflows/build.yml/badge.svg?branch=master)](https://github.com/openturns/otbenchmark/actions/workflows/build.yml)

# otbenchmark

## What is it?

The goal of this project is to provide benchmark classes for OpenTURNS.
It provides a framework to create use cases that are associated with
reference values.
Such a benchmark problem can be used to check that a given
algorithm works as expected and to measure its performance in terms
of accuracy and speed.

Two categories of benchmark classes are currently provided:
* reliability problems, i.e. estimating the probability that 
the output of a function is less than a threshold,
* sensitivity problems, i.e. estimating sensitivity indices, 
for example Sobol' indices.
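To make the first category concrete, the sketch below estimates a failure probability for a toy problem in plain Python; it is only an illustration of the idea, not the otbenchmark API. The limit state, distributions, and threshold are all invented for this example: the margin is g = R - S with R ~ N(5, 1) and S ~ N(3, 1), and failure means g < 0, so the exact probability is Phi(-2/sqrt(2)).

```python
import random
from statistics import NormalDist

# Toy reliability problem (illustrative, not from otbenchmark):
# R ~ N(5, 1), S ~ N(3, 1), failure when the margin g = R - S < 0.
# Since R - S ~ N(2, sqrt(2)), the exact failure probability is Phi(-2/sqrt(2)).
exact = NormalDist().cdf(-2 / 2 ** 0.5)

# Crude Monte Carlo estimate of the same probability.
rng = random.Random(0)
n = 200_000
failures = sum(1 for _ in range(n) if rng.gauss(5, 1) - rng.gauss(3, 1) < 0.0)
estimate = failures / n

print(f"exact={exact:.5f} estimate={estimate:.5f}")
```

A benchmark class packages exactly this pairing: a problem definition together with its reference probability, so an estimator's error can be measured.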

Most of the reliability problems were adapted from the RPRepo:

https://rprepo.readthedocs.io/en/latest/

This module allows you to create a problem, run an algorithm, and
compare the computed probability with a reference probability.
Moreover, you can loop over all problems and run several methods on them.
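The looping workflow can be sketched as follows, again in plain Python rather than with the real otbenchmark classes: each entry pairs a limit-state function with a known reference probability, and the loop reports the absolute error of a Monte Carlo estimate against that reference. The problem names and functions here are purely hypothetical.

```python
import random
from statistics import NormalDist

# Hypothetical stand-ins for benchmark problems (names are illustrative,
# not the otbenchmark API): each pairs a limit-state function with a
# known reference failure probability.
def margin_a(rng):
    # g = R - S with R ~ N(5, 1), S ~ N(3, 1); exact P(g < 0) = Phi(-sqrt(2))
    return rng.gauss(5, 1) - rng.gauss(3, 1)

def margin_b(rng):
    # g ~ N(1, 1); exact P(g < 0) = Phi(-1)
    return rng.gauss(1, 1)

problems = [
    ("A", margin_a, NormalDist().cdf(-2 / 2 ** 0.5)),
    ("B", margin_b, NormalDist().cdf(-1.0)),
]

# Loop over all problems, estimate each probability, and compare it
# with the reference value.
rng = random.Random(0)
n = 100_000
errors = {}
for name, g, p_ref in problems:
    p_hat = sum(1 for _ in range(n) if g(rng) < 0.0) / n
    errors[name] = abs(p_hat - p_ref)
    print(f"{name}: p_hat={p_hat:.4f} p_ref={p_ref:.4f}")
```

Replacing the Monte Carlo loop by another estimator, while keeping the same problem list and reference values, is the benchmarking pattern this module supports.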

## Authors

* Michaël Baudin
* Youssef Jebroun
* Elias Fekhari
* Vincent Chabridon

## Installation

To install the module, we can use either pip or conda: 

```
pip install otbenchmark
```

## Documentation

The documentation is available here: https://openturns.github.io/otbenchmark/master/


            
