# mbrl 0.2.0

- Summary: A PyTorch library for model-based reinforcement learning research
- Home page: https://github.com/facebookresearch/mbrl-lib
- Author: Facebook AI Research
- Requires Python: >=3.8
- Uploaded: 2023-03-29 19:22:44
[![PyPi Version](https://img.shields.io/pypi/v/mbrl)](https://pypi.org/project/mbrl/)
[![Main](https://github.com/facebookresearch/mbrl-lib/workflows/CI/badge.svg)](https://github.com/facebookresearch/mbrl-lib/actions?query=workflow%3ACI)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/facebookresearch/mbrl-lib/tree/main/LICENSE)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/release/python-380/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
 

# MBRL-Lib

``mbrl`` is a toolbox for facilitating development of 
Model-Based Reinforcement Learning algorithms. It provides easily interchangeable 
modeling and planning components, and a set of utility functions that allow writing
model-based RL algorithms with only a few lines of code. 

See also our companion [paper](https://arxiv.org/abs/2104.10159). 

## Getting Started

### Installation

#### Standard Installation

``mbrl`` requires Python 3.8+ and [PyTorch (>= 1.7)](https://pytorch.org).
To install the latest stable version, run

    pip install mbrl
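
A quick way to sanity-check the install (nothing library-specific, just confirming that ``mbrl`` and PyTorch import cleanly) is

    python -c "import mbrl, torch; print(torch.__version__)"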

#### Developer installation
If you are interested in modifying the library, clone the repository and set up 
a development environment as follows

    git clone https://github.com/facebookresearch/mbrl-lib.git
    cd mbrl-lib
    pip install -e ".[dev]"

And test it by running the following from the root folder of the repository

    python -m pytest tests/core
    python -m pytest tests/algorithms


### Basic example
As a starting point, check out our [tutorial notebook](https://github.com/facebookresearch/mbrl-lib/tree/main/notebooks/pets_example.ipynb),
which shows how to write the PETS algorithm
([Chua et al., NeurIPS 2018](https://arxiv.org/pdf/1805.12114.pdf))
using our toolbox and run it on a continuous version of the cartpole
environment.
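
If you just want the shape of the algorithm before diving into the notebook, below is a schematic, library-agnostic sketch of the loop PETS follows: collect experience, fit a dynamics model, then plan through the model at every step. It is plain NumPy and deliberately does **not** use the ``mbrl`` API; PETS itself uses a probabilistic ensemble and CEM, whereas this toy uses a least-squares model and random-shooting MPC to stay short.

```python
# Schematic model-based RL loop (NOT the mbrl API): fit a dynamics model from
# data, then plan actions by rolling candidate sequences through the model.
import numpy as np

rng = np.random.default_rng(0)


def true_dynamics(state, action):
    # Ground-truth system; the agent only queries it to act and collect data.
    return 0.9 * state + 0.5 * action


# 1) Collect experience with random actions.
states = rng.normal(size=200)
actions = rng.uniform(-1.0, 1.0, size=200)
next_states = true_dynamics(states, actions)

# 2) Fit a dynamics model: least-squares over [state, action] features.
features = np.stack([states, actions], axis=1)
theta, *_ = np.linalg.lstsq(features, next_states, rcond=None)


def learned_model(state, action):
    return theta[0] * state + theta[1] * action


# 3) Plan through the model: random-shooting MPC that keeps the state near 0.
def plan(state, horizon=10, n_candidates=256):
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    returns = np.zeros(n_candidates)
    for i, action_seq in enumerate(candidates):
        s = state
        for a in action_seq:
            s = learned_model(s, a)
            returns[i] -= s**2  # reward = negative squared distance from 0
    return candidates[np.argmax(returns), 0]  # execute only the first action


state = 3.0
for _ in range(20):
    state = true_dynamics(state, plan(state))
print(f"|state| after 20 MPC steps: {abs(state):.3f}")
```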

## Provided algorithm implementations
MBRL-Lib provides implementations of popular MBRL algorithms 
as examples of how to use this library. You can find them in the 
[mbrl/algorithms](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/algorithms) folder. Currently, we have implemented
[PETS](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/algorithms/pets.py),
[MBPO](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/algorithms/mbpo.py),
and [PlaNet](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/algorithms/planet.py),
and we plan to keep adding to this list in the future.

The implementations rely on [Hydra](https://github.com/facebookresearch/hydra) 
to handle configuration. You can see the configuration files in 
[this](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/examples/conf) 
folder. 
The [overrides](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/examples/conf/overrides)
subfolder contains environment-specific configurations that override the
default configuration with the best hyperparameter values we have found so far
for each combination of algorithm and environment. You can run training
by passing the desired override option on the command line.
For example, to run MBPO on the [Gymnasium](https://github.com/Farama-Foundation/Gymnasium/) version of HalfCheetah, you should call
```bash
python -m mbrl.examples.main algorithm=mbpo overrides=mbpo_halfcheetah
```
By default, all algorithms will save results in a csv file called `results.csv`,
inside a folder whose path looks like 
`./exp/mbpo/default/gym___HalfCheetah-v2/yyyy.mm.dd/hhmmss`; 
you can change the root directory (`./exp`) by passing 
`root_dir=path-to-your-dir`, and the experiment sub-folder (`default`) by
passing `experiment=your-name`. The logger will also save a file called 
`model_train.csv` with training information for the dynamics model.
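
A finished run can then be inspected with pandas; the run directory below is purely hypothetical and should be replaced with the path created for your own experiment, and the available columns depend on the algorithm and logger configuration, so they are printed rather than assumed:

```python
from pathlib import Path

import pandas as pd

# Hypothetical run directory; substitute the folder created by your run.
run_dir = Path("./exp/mbpo/default/gym___HalfCheetah-v2/2023.03.29/120000")

results = pd.read_csv(run_dir / "results.csv")          # results logged during training
model_train = pd.read_csv(run_dir / "model_train.csv")  # dynamics model training log

# Column names vary by algorithm, so list them before plotting anything specific.
print(results.columns.tolist())
print(results.tail())
print(model_train.columns.tolist())
```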

Beyond the override defaults, you can also change other configuration options,
such as the type of dynamics model 
(e.g., `dynamics_model=basic_ensemble`), or the number of models in the ensemble 
(e.g., `dynamics_model.model.ensemble_size=some-number`). To learn more about
all the available options, take a look at the provided 
[configuration files](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/examples/conf). 
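
For instance, the model options above can be combined with an algorithm/environment override in a single command (the ensemble size of 7 is an arbitrary illustration):

    python -m mbrl.examples.main algorithm=mbpo overrides=mbpo_halfcheetah \
        dynamics_model=basic_ensemble dynamics_model.model.ensemble_size=7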

## Supported environments
Our example configurations are largely based on [Mujoco](https://mujoco.org/), but
our library components (and algorithms) are compatible with any environment that follows
the standard [Gymnasium](https://github.com/Farama-Foundation/Gymnasium/) syntax. You can try our utilities in other environments 
by creating your own entry script and Hydra configuration, using our default entry 
[`main.py`](https://github.com/facebookresearch/mbrl-lib/blob/main/mbrl/examples/main.py) as guiding template. 
See also the example [override](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/examples/conf/overrides)
configurations. 

Without any modifications, our provided `main.py` can be used to launch experiments with the following environments:
  * [`mujoco`](https://github.com/deepmind/mujoco)
  * [`dm_control`](https://github.com/deepmind/dm_control)
  * [`pybullet-gym`](https://github.com/benelot/pybullet-gym) (thanks to [dtch1997](https://github.com/dtch1997) for the contribution!)
  Note: You must run `pip install gym==0.26.3` to use the dm_control and pybulletgym environments.

You can test your Mujoco and PyBullet installations by running

    python -m pytest tests/mujoco
    python -m pytest tests/pybullet

To specify the environment to use for `main.py`, there are two possibilities:

  * **Preferred way**: Use a Hydra dictionary to specify arguments for your env constructor. See [example](https://github.com/facebookresearch/mbrl-lib/blob/main/mbrl/examples/conf/overrides/planet_cartpole_balance.yaml#L4).
  * Less flexible alternative: A single string with the following syntax (see the example after this list):
      - `mujoco-gym`: `"gym___<env-name>"`, where `env-name` is the name of the environment in Gymnasium (e.g., "HalfCheetah-v2").
      - `dm_control`: `"dmcontrol___<domain>--<task>"`, where domain/task are defined as in DMControl (e.g., "cheetah--run").
      - `pybullet-gym`: `"pybulletgym___<env-name>"`, where `env-name` is the name of the environment in pybullet gym (e.g., "HopperPyBulletEnv-v0").
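
For illustration of the string form only (and assuming the environment string is exposed under the `overrides.env` key, as in the provided override files), the command below reuses the `mbpo_halfcheetah` settings while pointing `main.py` at the DMControl cheetah task; note that the hyperparameters in that override file are not tuned for this environment:

    python -m mbrl.examples.main algorithm=mbpo overrides=mbpo_halfcheetah \
        overrides.env="dmcontrol___cheetah--run"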

## Visualization and diagnostics tools
Our library also contains a set of 
[diagnostics](https://github.com/facebookresearch/mbrl-lib/tree/main/mbrl/diagnostics) tools, meant to facilitate 
development and debugging of models and controllers. With the exception of the CPU-controller, which also supports 
PyBullet, these currently require a Mujoco installation, but we are planning to add support for other environments 
and extensions in the future. Currently, the following tools are provided:

* ``Visualizer``: Creates a video to qualitatively
assess model predictions over a rolling horizon. Specifically, it runs a
  user-specified policy in a given environment, and at each time step computes
  the model's predicted observations/rewards over a lookahead horizon for the
  same policy. The predictions are plotted as line plots, one for each
  observation dimension (blue lines) and reward (red line), along with the
  result of applying the same policy to the real environment (black lines).
  The model's uncertainty is visualized by plotting lines for the maximum and
  minimum predictions at each time step. The model and policy are specified
  by passing directories containing configuration files for each; they can
  be trained independently. The following gif shows an example of 200 steps
  of a pre-trained MBPO policy on the Inverted Pendulum environment.
  \
  \
  ![Example of Visualizer](http://raw.githubusercontent.com/facebookresearch/mbrl-lib/main/docs/resources/inv_pendulum_mbpo_vis.gif)
  <br>
  <br>
* ``DatasetEvaluator``: Loads a pre-trained model and a dataset (can be loaded from separate directories), 
  and computes predictions of the model for each output dimension. The evaluator then
  creates a scatter plot for each dimension comparing the ground truth output 
  vs. the model's prediction. If the model is an ensemble, the plot shows the
  mean prediction as well as the individual predictions of each ensemble member.
  \
  \
  ![Example of DatasetEvaluator](http://raw.githubusercontent.com/facebookresearch/mbrl-lib/main/docs/resources/dataset_evaluator.png)
  <br>
  <br>
* ``FineTuner``: Can be used to train a model on a dataset produced by a given agent/controller.
  The model and agent can be loaded from separate directories, and the fine tuner will run the
  environment for a number of steps using actions obtained from the
  controller. The final model and dataset will then be saved under the directory
  `model_dir/diagnostics/subdir`, where `subdir` is provided by the user.\
  <br>
* ``True Dynamics Multi-CPU Controller``: This script can run
a trajectory optimizer agent on the true environment using Python's 
  multiprocessing. Each environment runs on its own CPU, which can significantly
  speed up costly sampling algorithms such as CEM. The controller will also save
  a video if the ``render`` argument is passed. Below is an example on 
  HalfCheetah-v2 using CEM for trajectory optimization. To specify the environment,
  follow the single string syntax described 
  [here](https://github.com/facebookresearch/mbrl-lib/blob/main/README.md#supported-environments).
  \
  \
  ![Control Half-Cheetah True Dynamics](http://raw.githubusercontent.com/facebookresearch/mbrl-lib/main/docs/resources/halfcheetah-break.gif)
  <br>
  <br>
* [``TrainingBrowser``](training_browser.py): This script launches a lightweight
training browser for plotting rewards obtained after training runs 
  (as long as the runs use our logger). 
  The browser allows aggregating multiple runs and displaying mean/std,
  and also lets the user save the plot to disk. The legend and axes labels
  can be edited in the pane at the bottom left. Requires installing `PyQt5`.
  Thanks to [a3ahmad](https://github.com/a3ahmad) for the contribution!

  ![Training Browser Example](http://raw.githubusercontent.com/facebookresearch/mbrl-lib/main/docs/resources/training-browser-example.png)

Note that, except for the training browser, all the tools above require a Mujoco
installation and are specific to models of type
[``OneDimTransitionRewardModel``](../models/one_dim_tr_model.py).
We are planning to extend this in the future; if you have useful suggestions
don't hesitate to raise an issue or submit a pull request!

## Advanced Examples
MBRL-Lib can be used for many different research projects in model-based RL.
Below are some community-contributed examples:
*  [Trajectory-based Dynamics Model](https://arxiv.org/abs/2012.09156) Training [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/natolambert/mbrl-lib-dev/blob/main/notebooks/traj_based_model.ipynb) 

## Documentation 
Please check out our **[documentation](https://facebookresearch.github.io/mbrl-lib/)** 
and don't hesitate to raise issues or contribute if anything is unclear!

## License
`mbrl` is released under the MIT license. See [LICENSE](LICENSE) for 
additional details about it. See also our 
[Terms of Use](https://opensource.facebook.com/legal/terms) and 
[Privacy Policy](https://opensource.facebook.com/legal/privacy).

## Citing
If you use this project in your research, please cite:

```BibTeX
@Article{Pineda2021MBRL,
  author  = {Luis Pineda and Brandon Amos and Amy Zhang and Nathan O. Lambert and Roberto Calandra},
  journal = {Arxiv},
  title   = {MBRL-Lib: A Modular Library for Model-based Reinforcement Learning},
  year    = {2021},
  url     = {https://arxiv.org/abs/2104.10159},
}
```

            
