# TriFinger RL Datasets

This repository provides offline reinforcement learning datasets collected on the real TriFinger platform and in a simulated version of the environment. The paper ["Benchmarking Offline Reinforcement Learning on Real-Robot Hardware"](https://openreview.net/pdf?id=3k5CUGDLNdd) describes the datasets in detail and benchmarks offline RL algorithms on them. All datasets are also available in versions that include camera images.

More detailed information about the simulated environment, the datasets, and how to run experiments on a cluster of real TriFinger robots can be found in the [documentation](https://webdav.tuebingen.mpg.de/trifinger-rl/docs/).

Some of the datasets were used during the [Real Robot Challenge 2022](https://real-robot-challenge.com).

## Installation

To install the package, run the following with Python 3.8 in the root directory of the repository (we recommend doing this in a virtual environment):

```bash
pip install --upgrade pip  # make sure the most recent version of pip is installed
pip install .
```
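
Alternatively, the package is published on PyPI and can be installed directly with pip:

```bash
pip install trifinger-rl-datasets
```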

## Usage

This section provides short examples of how to load datasets and evaluate a policy in simulation. More details on how to work with the datasets can be found in the [documentation](https://webdav.tuebingen.mpg.de/trifinger-rl/docs/).


### Loading a dataset

The datasets are accessible via gym environments, which are automatically registered when the package is imported. They are downloaded automatically when first requested and stored as Zarr files in `~/.trifinger_rl_datasets` by default (see the [documentation](https://webdav.tuebingen.mpg.de/trifinger-rl/docs/) for how to use custom paths). The code for loading the datasets follows the interface suggested by [D4RL](https://github.com/rail-berkeley/d4rl) and extends it where needed.

As an alternative to the automatic download, the datasets can also be downloaded
manually from the [Edmond repository](https://edmond.mpdl.mpg.de/dataset.xhtml?persistentId=doi:10.17617/3.DXZ7TL).

The datasets are named following the pattern `trifinger-cube-task-source-type-v0` where `task` is either `push` or `lift`, `source` is either `sim` or `real` and `type` can be either `mixed`, `weak-n-expert` or `expert`.
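
For illustration, environment names can be assembled from these components (a sketch; not every combination of task, source and type is necessarily available):

```python
task = "push"     # or "lift"
source = "sim"    # or "real"
dtype = "expert"  # or "mixed", "weak-n-expert"

env_name = f"trifinger-cube-{task}-{source}-{dtype}-v0"
print(env_name)  # trifinger-cube-push-sim-expert-v0
```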

By default, the observations are loaded as flat arrays. For the simulated datasets, the environment can also be stepped and visualized. Example usage (see also `demo/load_dataset.py`):

```python
import gymnasium as gym

import trifinger_rl_datasets

env = gym.make(
    "trifinger-cube-push-sim-expert-v0",
    visualization=True,  # enable visualization
)

dataset = env.get_dataset()

print("First observation: ", dataset["observations"][0])
print("First action: ", dataset["actions"][0])
print("First reward: ", dataset["rewards"][0])

obs, info = env.reset()
truncated = False

while not truncated:
    obs, rew, terminated, truncated, info = env.step(env.action_space.sample())
```
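
The returned dataset is a dictionary of arrays. A quick way to inspect what a given dataset contains (a sketch; the exact set of keys may vary between datasets):

```python
for key, value in dataset.items():
    # arrays print their shape, anything else its type
    print(key, getattr(value, "shape", type(value)))
```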

Alternatively, the observations can be obtained as nested dictionaries, which simplifies working with the data. As some parts of the observations might be more useful than others, the observations can also be filtered when requesting dictionaries (see `demo/load_filtered_dicts.py`):

```python
import gymnasium as gym

import trifinger_rl_datasets

# Nested dictionary defines which observations to keep.
# Everything that is not included or has value False
# will be dropped.
obs_to_keep = {
    "robot_observation": {
        "position": True,
        "velocity": True,
        "fingertip_force": False,
    },
    "object_observation": {"keypoints": True},
}
env = gym.make(
    "trifinger-cube-push-sim-expert-v0",
    # filter observations
    obs_to_keep=obs_to_keep,
)
```
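
The semantics of `obs_to_keep` can be illustrated with a small standalone function (an illustrative sketch, not the library's actual implementation): entries mapped to `True` are kept, everything else is dropped.

```python
def filter_obs(obs: dict, obs_to_keep: dict) -> dict:
    """Recursively keep only the entries marked True in obs_to_keep (illustrative)."""
    filtered = {}
    for key, keep in obs_to_keep.items():
        if isinstance(keep, dict):
            # descend into nested observation dictionaries
            filtered[key] = filter_obs(obs[key], keep)
        elif keep:
            filtered[key] = obs[key]
    return filtered
```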

All datasets come in two versions: with and without camera observations. The versions with camera observations contain `-image` in their name. Despite PNG compression of the images, they are more than an order of magnitude larger than the image-free versions. To avoid running out of memory, a part of a dataset can be loaded by specifying a range of timesteps:

```python
env = gym.make(
    "trifinger-cube-push-real-expert-image-v0",
    disable_env_checker=True
)

# load only a subset of observations, actions and rewards
dataset = env.get_dataset(rng=(1000, 2000))
```

The camera observations corresponding to this range are then returned in `dataset["images"]` with the following dimensions:

```python
n_timesteps, n_cameras, n_channels, height, width = dataset["images"].shape
```
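
For example, a single camera image can be picked out of this array and transposed to channel-last layout for plotting libraries (a sketch based on the shape above):

```python
import numpy as np

# image of the first camera at the first loaded timestep,
# shape (n_channels, height, width)
first_image = dataset["images"][0, 0]
# convert to (height, width, n_channels) for e.g. matplotlib
image_hwc = np.transpose(first_image, (1, 2, 0))
print(image_hwc.shape)
```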

### Evaluating a policy in simulation

This package contains an executable module `trifinger_rl_datasets.evaluate_sim`, which can be used to evaluate a policy in simulation. As arguments, it expects the task ("push" or "lift") and a Python class that implements the policy, following the `PolicyBase` interface:

```bash
python3 -m trifinger_rl_datasets.evaluate_sim push my_package.MyPolicy
```

For more options see `--help`.
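
As a starting point, a minimal random policy might look as follows (a sketch; the constructor arguments and method names are assumptions based on the `PolicyBase` interface described in the documentation and may differ in detail):

```python
from trifinger_rl_datasets import PolicyBase


class RandomPolicy(PolicyBase):
    """Illustrative policy that samples random actions."""

    def __init__(self, action_space, observation_space, episode_length):
        self.action_space = action_space

    def reset(self):
        # nothing to reset for a stateless policy
        pass

    def get_action(self, observation):
        return self.action_space.sample()
```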

## How to cite

The paper ["Benchmarking Offline Reinforcement Learning on Real-Robot Hardware"](https://openreview.net/pdf?id=3k5CUGDLNdd) introducing the datasets was published at ICLR 2023:

```bibtex
@inproceedings{guertler2023benchmarking,
  title={Benchmarking Offline Reinforcement Learning on Real-Robot Hardware},
  author={Nico G{\"u}rtler and Sebastian Blaes and Pavel Kolev and Felix Widmaier and Manuel Wuthrich and Stefan Bauer and Bernhard Sch{\"o}lkopf and Georg Martius},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=3k5CUGDLNdd}
}
```