rsl-rl-lib


Name: rsl-rl-lib
Version: 3.0.0
Summary: Fast and simple RL algorithms implemented in PyTorch
Upload time: 2025-07-18 10:12:13
Requires Python: >=3.8
License: BSD-3-Clause
Keywords: reinforcement-learning, isaac, leggedrobotics, rl-pytorch
# RSL RL

A fast and simple implementation of RL algorithms, designed to run fully on GPU.
This code is an evolution of `rl-pytorch` provided with NVIDIA's Isaac Gym.

Environment repositories using the framework:

* **`Isaac Lab`** (built on top of NVIDIA Isaac Sim): https://github.com/isaac-sim/IsaacLab
* **`Legged-Gym`** (built on top of NVIDIA Isaac Gym): https://leggedrobotics.github.io/legged_gym/

The main branch supports **PPO** and **Student-Teacher Distillation** with additional features from our research. These include:

* [Random Network Distillation (RND)](https://proceedings.mlr.press/v229/schwarke23a.html) - Encourages exploration by adding
  a curiosity-driven intrinsic reward.
* [Symmetry-based Augmentation](https://arxiv.org/abs/2403.04359) - Makes the learned behaviors more symmetrical.
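As a rough illustration of the RND idea referenced above (not the library's actual implementation), the curiosity bonus comes from the prediction error between a fixed, randomly initialized target network and a trained predictor network; novel observations yield larger errors. Network sizes and the observation dimension below are illustrative assumptions.

```python
import torch
import torch.nn as nn

obs_dim, embed_dim = 8, 16

def make_net() -> nn.Module:
    return nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, embed_dim))

target = make_net()     # fixed, randomly initialized; never trained
predictor = make_net()  # trained (elsewhere) to match the target's outputs
for p in target.parameters():
    p.requires_grad_(False)

def intrinsic_reward(obs: torch.Tensor) -> torch.Tensor:
    # Curiosity bonus: mean squared prediction error per observation.
    with torch.no_grad():
        t = target(obs)
    err = (predictor(obs) - t).pow(2).mean(dim=-1)
    return err.detach()

obs = torch.randn(4, obs_dim)  # batch of 4 observations
r_int = intrinsic_reward(obs)  # shape (4,), non-negative
```

In practice this bonus is added to the task reward, and the predictor is updated with the same prediction-error loss so that frequently visited states stop being rewarded.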

We welcome contributions from the community. Please check our contribution guidelines for more
information.

**Maintainers**: Mayank Mittal and Clemens Schwarke <br/>
**Affiliation**: Robotic Systems Lab, ETH Zurich & NVIDIA <br/>
**Contact**: cschwarke@ethz.ch

> **Note:** The `algorithms` branch supports additional algorithms (SAC, DDPG, DSAC, and more). However, it is not actively maintained at the moment.


## Setup

The package can be installed via PyPI with:

```bash
pip install rsl-rl-lib
```

or by cloning this repository and installing it with:

```bash
git clone https://github.com/leggedrobotics/rsl_rl
cd rsl_rl
pip install -e .
```

The package supports the following logging frameworks, which can be selected through the `logger` setting:

* TensorBoard: https://www.tensorflow.org/tensorboard/
* Weights & Biases: https://wandb.ai/site
* Neptune: https://docs.neptune.ai/

For a demo configuration of PPO, please check the [example_config.yaml](config/example_config.yaml) file.
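As a hedged sketch of how such a setting might be resolved, the snippet below validates a `logger` choice in a dict-style training configuration. The keys are illustrative assumptions, not the library's exact schema; see the example config file for the real structure.

```python
# Illustrative only: keys and defaults are assumptions, not rsl_rl's schema.
SUPPORTED_LOGGERS = {"tensorboard", "wandb", "neptune"}

def resolve_logger(cfg: dict) -> str:
    # Fall back to TensorBoard when no logger is configured.
    logger = cfg.get("logger", "tensorboard")
    if logger not in SUPPORTED_LOGGERS:
        raise ValueError(f"Unsupported logger: {logger!r}")
    return logger

train_cfg = {"logger": "wandb"}
print(resolve_logger(train_cfg))  # -> wandb
```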


## Contribution Guidelines

For documentation, we adopt the [Google Style Guide](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) for docstrings. Please make sure that your code is well-documented and follows the guidelines.

We use the following tools for maintaining code quality:

- [pre-commit](https://pre-commit.com/): Runs a list of formatters and linters over the codebase.
- [black](https://black.readthedocs.io/en/stable/): The uncompromising code formatter.
- [flake8](https://flake8.pycqa.org/en/latest/): A wrapper around PyFlakes, pycodestyle, and McCabe complexity checker.

Please check [here](https://pre-commit.com/#install) for instructions to set these up. To run the checks over the entire repository, execute the following commands in the terminal:

```bash
# for installation (only once)
pre-commit install
# for running
pre-commit run --all-files
```

## Citing

**We are working on writing a white paper for this library.** Until then, please cite the following work
if you use this library for your research:

```text
@InProceedings{rudin2022learning,
  title     = {Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning},
  author    = {Rudin, Nikita and Hoeller, David and Reist, Philipp and Hutter, Marco},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {91--100},
  year      = {2022},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  url       = {https://proceedings.mlr.press/v164/rudin22a.html},
}
```

If you use the library with curiosity-driven exploration (random network distillation), please cite:

```text
@InProceedings{schwarke2023curiosity,
  title     = {Curiosity-Driven Learning of Joint Locomotion and Manipulation Tasks},
  author    = {Schwarke, Clemens and Klemm, Victor and Boon, Matthijs van der and Bjelonic, Marko and Hutter, Marco},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {2594--2610},
  year      = {2023},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  url       = {https://proceedings.mlr.press/v229/schwarke23a.html},
}
```

If you use the library with symmetry augmentation, please cite:

```text
@InProceedings{mittal2024symmetry,
  author    = {Mittal, Mayank and Rudin, Nikita and Klemm, Victor and Allshire, Arthur and Hutter, Marco},
  booktitle = {2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title     = {Symmetry Considerations for Learning Task Symmetric Robot Policies},
  year      = {2024},
  pages     = {7433--7439},
  doi       = {10.1109/ICRA57147.2024.10611493},
}
```

            
