rliable

Name: rliable
Version: 1.0.8
Home page: https://github.com/google-research/rliable
Summary: rliable: Reliable evaluation on reinforcement learning and machine learning benchmarks.
Upload time: 2022-06-22 14:37:36
Author: Rishabh Agarwal
License: Apache 2.0
Keywords: benchmarking, evaluation, reproducibility, research, reinforcement learning, machine learning
# [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1a0pSD-1tWhMmeJeeoyZM1A-HCW3yf1xR?usp=sharing) [![Website](https://img.shields.io/badge/www-Website-green)](https://agarwl.github.io/rliable) [![Blog](https://img.shields.io/badge/b-Blog-blue)](https://ai.googleblog.com/2021/11/rliable-towards-reliable-evaluation.html)

`rliable` is an open-source Python library for reliable evaluation, even with a *handful
of runs*, on reinforcement learning and machine learning benchmarks.
| **Desideratum** | **Current evaluation approach** |  **Our Recommendation**    |
| --------------------------------- | ----------- | --------- |
| Uncertainty in aggregate performance | **Point estimates**: <ul> <li> Ignore statistical uncertainty </li> <li> Hinder *results reproducibility* </li></ul> | Interval estimates using **stratified bootstrap confidence intervals** (CIs) |
|Performance variability across tasks and runs| **Tables with task mean scores**: <ul><li> Overwhelming beyond a few tasks </li> <li> Standard deviations frequently omitted </li> <li> Incomplete picture for multimodal and heavy-tailed distributions </li> </ul> | **Score distributions** (*performance profiles*): <ul> <li> Show tail distribution of scores on combined runs across tasks </li> <li> Allow qualitative comparisons </li> <li> Easily read any score percentile </li> </ul>|
|Aggregate metrics for summarizing benchmark performance | **Mean**:  <ul><li> Often dominated by performance on outlier tasks </li></ul> &nbsp; **Median**: <ul> <li> Statistically inefficient (requires a large number of runs to claim improvements) </li>  <li> Poor indicator of overall performance: 0 scores on nearly half the tasks doesn't change it </li> </ul>| **Interquartile Mean (IQM)** across all runs: <ul> <li> Performance on middle 50% of combined runs </li> <li> Robust to outlier scores but more statistically efficient than median </li> </ul> To show other aspects of performance gains, report *Probability of improvement* and *Optimality gap* |

`rliable` provides support for:

 * Stratified Bootstrap Confidence Intervals (CIs)
 * Performance Profiles (with plotting functions)
 * Aggregate metrics
   * Interquartile Mean (IQM) across all runs
   * Optimality Gap
   * Probability of Improvement

<div align="left">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/aggregate_metric.png">
</div>
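For intuition, IQM is the mean of the middle 50% of scores pooled over all runs and tasks (a 25% trimmed mean), and the optimality gap measures how far scores fall short of a target level (1.0 for human-normalized scores). Below is a minimal numpy/scipy sketch of these two definitions, assuming a `(num_runs x num_games)` score matrix; `metrics.aggregate_iqm` and `metrics.aggregate_optimality_gap` are the library's actual implementations:

```python
import numpy as np
from scipy import stats

def iqm_sketch(score_matrix):
  # Mean of the middle 50% of scores, pooling runs and tasks: a trimmed
  # mean that discards the bottom and top 25% of the pooled scores.
  return stats.trim_mean(score_matrix, proportiontocut=0.25, axis=None)

def optimality_gap_sketch(score_matrix, gamma=1.0):
  # Average shortfall below the target score gamma; scores above gamma
  # are clipped, so exceeding the target is not rewarded further.
  return gamma - np.mean(np.minimum(score_matrix, gamma))
```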

## Interactive Colab
We provide a Colab notebook at [bit.ly/statistical_precipice_colab](https://colab.research.google.com/drive/1a0pSD-1tWhMmeJeeoyZM1A-HCW3yf1xR?usp=sharing)
showing how to use the library, with examples of published algorithms on
widely used benchmarks including Atari 100k, ALE, DM Control, and Procgen.


### Paper
For more details, refer to the accompanying **NeurIPS 2021** paper (**Outstanding Paper** Award):
[Deep Reinforcement Learning at the Edge of the Statistical Precipice](https://arxiv.org/pdf/2108.13264.pdf).


### Installation

To install `rliable`, run:
```bash
pip install -U rliable
```

To install the latest version of `rliable` from source, run:

```bash
pip install git+https://github.com/google-research/rliable
```

To import `rliable`, along with the scientific Python packages used in the
examples below, we suggest:

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from rliable import library as rly
from rliable import metrics
from rliable import plot_utils
```

### Aggregate metrics with 95% Stratified Bootstrap CIs


##### IQM, Optimality Gap, Median, Mean
```python
algorithms = ['DQN (Nature)', 'DQN (Adam)', 'C51', 'REM', 'Rainbow',
              'IQN', 'M-IQN', 'DreamerV2']
# Load ALE scores as a dictionary mapping algorithms to their human normalized
# score matrices, each of which is of size `(num_runs x num_games)`.
atari_200m_normalized_score_dict = ...
aggregate_func = lambda x: np.array([
  metrics.aggregate_median(x),
  metrics.aggregate_iqm(x),
  metrics.aggregate_mean(x),
  metrics.aggregate_optimality_gap(x)])
aggregate_scores, aggregate_score_cis = rly.get_interval_estimates(
  atari_200m_normalized_score_dict, aggregate_func, reps=50000)
fig, axes = plot_utils.plot_interval_estimates(
  aggregate_scores, aggregate_score_cis,
  metric_names=['Median', 'IQM', 'Mean', 'Optimality Gap'],
  algorithms=algorithms, xlabel='Human Normalized Score')
```

<div align="left">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/ale_interval_estimates.png">
</div>
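To try the snippet above without real scores, a dictionary of synthetic score matrices is enough; the shapes and distribution below are purely illustrative stand-ins, not real ALE results:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 10 runs x 26 games of fake "human
# normalized" scores per algorithm, drawn from a log-normal.
atari_200m_normalized_score_dict = {
    algorithm: rng.lognormal(mean=0.0, sigma=1.0, size=(10, 26))
    for algorithm in algorithms
}
```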

##### Probability of Improvement
```python
# Load ProcGen scores as a dictionary containing pairs of normalized score
# matrices for the pairs of algorithms we want to compare.
procgen_algorithm_pairs = {.. , 'x,y': (score_x, score_y), ..}
average_probabilities, average_prob_cis = rly.get_interval_estimates(
  procgen_algorithm_pairs, metrics.probability_of_improvement, reps=2000)
plot_utils.plot_probability_of_improvement(average_probabilities, average_prob_cis)
```
<div align="center">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/procgen_probability_of_improvement.png">
</div>
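The quantity estimated here is, averaged over tasks, the probability that a random run of algorithm X outperforms a random run of algorithm Y, with ties counted as one half (the Mann-Whitney U formulation used in the paper). A minimal sketch of that definition, assuming `(num_runs x num_tasks)` score matrices; `metrics.probability_of_improvement` is the library's implementation:

```python
import numpy as np

def prob_of_improvement_sketch(scores_x, scores_y):
  # For each task, compare every run of X against every run of Y
  # (ties count as 1/2), then average the per-task probabilities.
  probs = []
  for task in range(scores_x.shape[1]):
    x, y = scores_x[:, task], scores_y[:, task]
    wins = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    probs.append(wins + 0.5 * ties)
  return np.mean(probs)
```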

##### Sample Efficiency Curve
```python
algorithms = ['DQN (Nature)', 'DQN (Adam)', 'C51', 'REM', 'Rainbow',
              'IQN', 'M-IQN', 'DreamerV2']
# Load ALE scores as a dictionary mapping algorithms to their human normalized
# score matrices across all 200 million frames, each of which is of size
# `(num_runs x num_games x 200)`, where scores are recorded every million frames.
ale_all_frames_scores_dict = ...
frames = np.array([1, 10, 25, 50, 75, 100, 125, 150, 175, 200]) - 1
ale_frames_scores_dict = {algorithm: score[:, :, frames] for algorithm, score
                          in ale_all_frames_scores_dict.items()}
iqm = lambda scores: np.array([metrics.aggregate_iqm(scores[..., frame])
                               for frame in range(scores.shape[-1])])
iqm_scores, iqm_cis = rly.get_interval_estimates(
  ale_frames_scores_dict, iqm, reps=50000)
plot_utils.plot_sample_efficiency_curve(
    frames+1, iqm_scores, iqm_cis, algorithms=algorithms,
    xlabel=r'Number of Frames (in millions)',
    ylabel='IQM Human Normalized Score')
```
<div align="center">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/ale_legend.png">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/atari_sample_efficiency_iqm.png">
</div>

### Performance Profiles

```python
# Load ALE scores as a dictionary mapping algorithms to their human normalized
# score matrices, each of which is of size `(num_runs x num_games)`.
atari_200m_normalized_score_dict = ...
# Human normalized score thresholds
atari_200m_thresholds = np.linspace(0.0, 8.0, 81)
score_distributions, score_distributions_cis = rly.create_performance_profile(
    atari_200m_normalized_score_dict, atari_200m_thresholds)
# Plot score distributions
fig, ax = plt.subplots(ncols=1, figsize=(7, 5))
plot_utils.plot_performance_profiles(
  score_distributions, atari_200m_thresholds,
  performance_profile_cis=score_distributions_cis,
  colors=dict(zip(algorithms, sns.color_palette('colorblind'))),
  xlabel=r'Human Normalized Score $(\tau)$',
  ax=ax)
```
<div align="center">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/ale_legend.png">
  <img src="https://raw.githubusercontent.com/google-research/rliable/master/images/ale_score_distributions_new.png">
</div>

The above profile can also be plotted with non-linear scaling as follows:

```python
plot_utils.plot_performance_profiles(
  score_distributions, atari_200m_thresholds,
  performance_profile_cis=score_distributions_cis,
  use_non_linear_scaling=True,
  xticks=[0.0, 0.5, 1.0, 2.0, 4.0, 8.0],
  colors=dict(zip(algorithms, sns.color_palette('colorblind'))),
  xlabel=r'Human Normalized Score $(\tau)$',
  ax=ax)
```
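For intuition, the score distribution plotted above is, at each threshold $\tau$, the fraction of runs (pooled over all tasks) whose score exceeds $\tau$; `rly.create_performance_profile` additionally attaches stratified bootstrap CIs. A minimal sketch of the point estimate, assuming a `(num_runs x num_games)` score matrix:

```python
import numpy as np

def score_distribution_sketch(score_matrix, taus):
  # Fraction of all runs, pooled across tasks, scoring above each tau.
  pooled = score_matrix.reshape(-1)
  return np.array([(pooled > tau).mean() for tau in taus])
```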


### Dependencies
The code was tested under `Python>=3.7` and uses these packages:

- arch == 5.3.0
- scipy >= 1.7.0
- numpy >= 1.16.4
- absl-py >= 0.9.0
- seaborn >= 0.11.2

### Citing

If you find this open-source release useful, please cite the accompanying paper in your work:

    @article{agarwal2021deep,
      title={Deep Reinforcement Learning at the Edge of the Statistical Precipice},
      author={Agarwal, Rishabh and Schwarzer, Max and Castro, Pablo Samuel
              and Courville, Aaron and Bellemare, Marc G},
      journal={Advances in Neural Information Processing Systems},
      year={2021}
    }

Disclaimer: This is not an official Google product.

            
