![](doc/img/logo/logo8.png)
# Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation
[![Release](https://img.shields.io/github/v/release/Skylark0924/Rofunc)](https://pypi.org/project/rofunc/)
![License](https://img.shields.io/github/license/Skylark0924/Rofunc?color=blue)
![](https://img.shields.io/github/downloads/skylark0924/Rofunc/total)
[![](https://img.shields.io/github/issues-closed-raw/Skylark0924/Rofunc?color=brightgreen)](https://github.com/Skylark0924/Rofunc/issues?q=is%3Aissue+is%3Aclosed)
[![](https://img.shields.io/github/issues-raw/Skylark0924/Rofunc?color=orange)](https://github.com/Skylark0924/Rofunc/issues?q=is%3Aopen+is%3Aissue)
[![Documentation Status](https://readthedocs.org/projects/rofunc/badge/?version=latest)](https://rofunc.readthedocs.io/en/latest/?badge=latest)
[![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2FSkylark0924%2FRofunc%2Fbadge%3Fref%3Dmain&style=flat)](https://actions-badge.atrox.dev/Skylark0924/Rofunc/goto?ref=main)
> **Repository address: https://github.com/Skylark0924/Rofunc** <br>
> **Documentation: https://rofunc.readthedocs.io/**
<img src="doc/img/task_gif3/CURIQbSoftHandSynergyGraspSpatulaRofuncRLPPO.gif" width=25% /><img src="doc/img/task_gif3/CURIQbSoftHandSynergyGraspPower_drillRofuncRLPPO.gif" width=25% /><img src="doc/img/task_gif3/CURIQbSoftHandSynergyGraspPhillips_Screw_DriverRofuncRLPPO.gif" width=25% /><img src="doc/img/task_gif3/CURIQbSoftHandSynergyGraspLarge_clampRofuncRLPPO.gif" width=25% />
<img src="doc/img/task_gif3/CURICoffeeStirring.gif" width=33.3% /><img src="doc/img/task_gif3/CURIScrew.gif" width=33.3% /><img src="doc/img/task_gif3/CURITaichiPushingHand.gif" width=33.3% />
<img src="doc/img/task_gif3/HOTU_Random_Motion.gif" width=25% /><img src="doc/img/task_gif3/H1_Random_Motion.gif" width=25% /><img src="doc/img/task_gif3/Bruce_Random_Motion.gif" width=25% /><img src="doc/img/task_gif3/Walker_Random_Motion.gif" width=25% />
<img src="doc/img/task_gif3/HumanoidFlipRofuncRLAMP.gif" width=33.3% /><img src="doc/img/task_gif3/HumanoidDanceRofuncRLAMP.gif" width=33.3% /><img src="doc/img/task_gif3/HumanoidRunRofuncRLAMP.gif" width=33.3% />
<img src="doc/img/task_gif3/HumanoidASEHeadingSwordShieldRofuncRLASE.gif" width=33.3% /><img src="doc/img/task_gif3/HumanoidASEStrikeSwordShieldRofuncRLASE.gif" width=33.3% /><img src="doc/img/task_gif3/HumanoidASELocationSwordShieldRofuncRLASE.gif" width=33.3% />
<img src="doc/img/task_gif3/BiShadowHandLiftUnderarmRofuncRLPPO.gif" width=33.3% /><img src="doc/img/task_gif3/BiShadowHandDoorOpenOutwardRofuncRLPPO.gif" width=33.3% /><img src="doc/img/task_gif3/BiShadowHandSwingCupRofuncRLPPO.gif" width=33.3% />
The Rofunc package focuses on **Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD)** for **(Humanoid) Robot Manipulation**. It provides convenient Python functions covering
_demonstration collection, data pre-processing, LfD algorithms, planning, and control methods_, as well as an
`IsaacGym`- and `OmniIsaacGym`-based robot simulator for evaluation. The package aims to advance the field by building a full-process
toolkit and validation platform that simplifies and standardizes demonstration data collection,
processing, learning, and deployment on robots.
![](doc/img/pipeline.png)
- [Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation](#rofunc-the-full-process-python-package-for-robot-learning-from-demonstration-and-robot-manipulation)
  - [Update News 🎉🎉🎉](#update-news-)
  - [Installation](#installation)
  - [Documentation](#documentation)
  - [RofuncRL](#rofuncrl)
  - [Star History](#star-history)
  - [Citation](#citation)
  - [Related Papers](#related-papers)
  - [The Team](#the-team)
  - [Acknowledge](#acknowledge)
    - [Learning from Demonstration](#learning-from-demonstration)
    - [Planning and Control](#planning-and-control)
## Update News 🎉🎉🎉
- [2024-01-24] 🚀 [CURI Synergy-based Softhand grasping tasks](https://github.com/Skylark0924/Rofunc/blob/main/examples/learning_rl/IsaacGym_RofuncRL/example_DexterousHands_RofuncRL.py) can now be trained with `RofuncRL`.
- [2023-12-24] 🚀 [Dexterous hand (Shadow Hand, Allegro Hand, qbSofthand) tasks](https://github.com/Skylark0924/Rofunc/blob/main/examples/learning_rl/IsaacGym_RofuncRL/example_DexterousHands_RofuncRL.py) can now be trained with `RofuncRL`.
- [2023-12-03] 🖼️ [Segment-Anything (SAM)](https://segment-anything.com/) is now supported in interactive mode; see the Visualab examples ([segment anything](https://github.com/Skylark0924/Rofunc/blob/main/examples/visualab/example_sam_seg.py), [segment with prompt](https://github.com/Skylark0924/Rofunc/blob/main/examples/visualab/example_sam_seg_w_prompt.py)).
- **[2023-10-31] 🚀 [`RofuncRL`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/index.html) is released: a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as `OpenAIGym`, `IsaacGym`, and `OmniIsaacGym` (see the [example gallery](https://rofunc.readthedocs.io/en/latest/examples/learning_rl/index.html)), as well as differentiable simulators such as `PlasticineLab` and `DiffCloth`.**
- ...
- For the full update history, please refer to the [changelog](https://github.com/Skylark0924/Rofunc/blob/main/changelog.md).
## Installation
Please refer to the [installation guide](https://rofunc.readthedocs.io/en/latest/installation.html).
## Documentation
[![Documentation](https://img.shields.io/badge/Documentation-Access-brightgreen?style=for-the-badge)](https://rofunc.readthedocs.io/en/latest/)
[![Example Gallery](https://img.shields.io/badge/Example%20Gallery-Access-brightgreen?style=for-the-badge)](https://rofunc.readthedocs.io/en/latest/examples/index.html)
To give you a quick overview of the `rofunc` pipeline, we provide an example of learning to play Taichi
from human demonstration. You can find it in the [Quick start](https://rofunc.readthedocs.io/en/latest/quickstart.html)
section of the documentation.
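Learning a motion from human demonstration typically relies on statistical models such as the `GMR` (Gaussian Mixture Regression) entry in the function table below. As a hedged, minimal sketch of the underlying idea (plain NumPy, not Rofunc's implementation; all component parameters are made up for the example):

```python
import numpy as np

def gmr(x, priors, mu, sigma):
    """Gaussian Mixture Regression: E[y | x] under a GMM fit on (x, y) pairs.

    mu[k] = [mu_x, mu_y]; sigma[k] = [[s_xx, s_xy], [s_yx, s_yy]].
    """
    # Responsibility h_k(x): how much each component explains the query input
    h = np.array([
        priors[k]
        * np.exp(-0.5 * (x - mu[k][0]) ** 2 / sigma[k][0, 0])
        / np.sqrt(2 * np.pi * sigma[k][0, 0])
        for k in range(len(priors))
    ])
    h /= h.sum()
    # Blend the per-component conditional means, weighted by responsibility
    return sum(
        h[k] * (mu[k][1] + sigma[k][1, 0] / sigma[k][0, 0] * (x - mu[k][0]))
        for k in range(len(priors))
    )

# Toy model: one component maps x≈0 to y≈0, the other maps x≈4 to y≈2
priors = np.array([0.5, 0.5])
mu = [np.array([0.0, 0.0]), np.array([4.0, 2.0])]
sigma = [np.array([[1.0, 0.3], [0.3, 1.0]]),
         np.array([[1.0, 0.3], [0.3, 1.0]])]

print(gmr(0.0, priors, mu, sigma))  # ≈ 0.0
print(gmr(4.0, priors, mu, sigma))  # ≈ 2.0
```

In a real LfD pipeline, the mixture parameters would be fitted to recorded demonstration trajectories (e.g. by EM) rather than hand-specified as above.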
<details>
<summary>The available and planned functions are listed below.</summary>
> **Note**
> ✅: Achieved 🔃: Reformatting ⛔: TODO
| Data | | Learning | | P&C | | Tools | | Simulator | |
|:-------------------------------------------------------------------------------------------------------:|---|:------------------------------------------------------------------------------------------------------:|----|:------------------------------------------------------------------------------------------------------------------:|----|:-------------------------------------------------------------------------------------------------------------------:|---|:------------------------------------------------------------------------------------------------------------:|----|
| [`xsens.record`](https://rofunc.readthedocs.io/en/latest/devices/xsens.html) | ✅ | `DMP` | ⛔ | [`LQT`](https://rofunc.readthedocs.io/en/latest/planning/lqt.html) | ✅ | `config` | ✅ | [`Franka`](https://rofunc.readthedocs.io/en/latest/simulator/franka.html) | ✅ |
| [`xsens.export`](https://rofunc.readthedocs.io/en/latest/devices/xsens.html) | ✅ | [`GMR`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.gmr.html) | ✅ | [`LQTBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqt.lqt.html) | ✅ | [`logger`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.logger.beauty_logger.html) | ✅ | [`CURI`](https://rofunc.readthedocs.io/en/latest/simulator/curi.html) | ✅ |
| [`xsens.visual`](https://rofunc.readthedocs.io/en/latest/devices/xsens.html) | ✅ | [`TPGMM`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQTFb`](https://rofunc.readthedocs.io/en/latest/planning/lqt_fb.html) | ✅ | [`datalab`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.datalab.html) | ✅ | `CURIMini` | 🔃 |
| [`opti.record`](https://rofunc.readthedocs.io/en/latest/devices/optitrack.html) | ✅ | [`TPGMMBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQTCP`](https://rofunc.readthedocs.io/en/latest/planning/lqt_cp.html) | ✅ | [`robolab.coord`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.robolab.coord.transform.html) | ✅ | [`CURISoftHand`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.curi_sim.html) | ✅ |
| [`opti.export`](https://rofunc.readthedocs.io/en/latest/devices/optitrack.html) | ✅ | [`TPGMM_RPCtl`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQTCPDMP`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqt.lqt_cp_dmp.html) | ✅ | [`robolab.fk`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.robolab.kinematics.fk.html) | ✅ | [`Walker`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.walker_sim.html) | ✅ |
| [`opti.visual`](https://rofunc.readthedocs.io/en/latest/devices/optitrack.html) | ✅ | [`TPGMM_RPRepr`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQR`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.lqr.html) | ✅ | [`robolab.ik`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.robolab.kinematics.ik.html) | ✅ | `Gluon` | 🔃 |
| [`zed.record`](https://rofunc.readthedocs.io/en/latest/devices/zed.html) | ✅ | [`TPGMR`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmr.html) | ✅ | [`PoGLQRBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.lqr.html) | ✅ | `robolab.fd` | ⛔ | `Baxter` | 🔃 |
| [`zed.export`](https://rofunc.readthedocs.io/en/latest/devices/zed.html) | ✅ | [`TPGMRBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmr.html) | ✅ | [`iLQR`](https://rofunc.readthedocs.io/en/latest/planning/ilqr.html) | 🔃 | `robolab.id` | ⛔ | `Sawyer` | 🔃 |
| [`zed.visual`](https://rofunc.readthedocs.io/en/latest/devices/zed.html) | ✅ | [`TPHSMM`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tphsmm.html) | ✅ | [`iLQRBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_bi.html) | 🔃 | [`visualab.dist`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.visualab.distribution.html) | ✅ | [`Humanoid`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.humanoid_sim.html) | ✅ |
| [`emg.record`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.devices.emg.record.html) | ✅ | [`RLBaseLine(SKRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RLBaseLine/SKRL.html) | ✅ | `iLQRFb` | 🔃 | [`visualab.ellip`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.visualab.ellipsoid.html) | ✅ | [`Multi-Robot`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.multirobot_sim.html) | ✅ |
| [`emg.export`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.devices.emg.export.html) | ✅ | `RLBaseLine(RLlib)` | ✅ | [`iLQRCP`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_cp.html) | 🔃 | [`visualab.traj`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.visualab.trajectory.html) | ✅ | | |
| `mmodal.record` | ⛔ | `RLBaseLine(ElegRL)` | ✅ | [`iLQRDyna`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_dyna.html) | 🔃 | [`oslab.dir_proc`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.dir_process.html) | ✅ | | |
| [`mmodal.sync`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.devices.mmodal.sync.html) | ✅ | `BCO(RofuncIL)` | 🔃 | [`iLQRObs`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_obstacle.html) | 🔃 | [`oslab.file_proc`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.file_process.html) | ✅ | | |
| | | `BC-Z(RofuncIL)` | ⛔ | `MPC` | ⛔ | [`oslab.internet`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.internet.html) | ✅ | | |
| | | `STrans(RofuncIL)` | ⛔ | `RMP` | ⛔ | [`oslab.path`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.path.html) | ✅ | | |
| | | `RT-1(RofuncIL)` | ⛔ | | | | | | |
| | | [`A2C(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/A2C.html) | ✅ | | | | | | |
| | | [`PPO(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/PPO.html) | ✅ | | | | | | |
| | | [`SAC(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/SAC.html) | ✅ | | | | | | |
| | | [`TD3(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/TD3.html) | ✅ | | | | | | |
| | | `CQL(RofuncRL)` | ⛔ | | | | | | |
| | | `TD3BC(RofuncRL)` | ⛔ | | | | | | |
| | | [`DTrans(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/DTrans.html) | ✅ | | | | | | |
| | | `EDAC(RofuncRL)` | ⛔ | | | | | | |
| | | [`AMP(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/AMP.html) | ✅ | | | | | | |
| | | [`ASE(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/ASE.html) | ✅ | | | | | | |
| | | `ODTrans(RofuncRL)` | ⛔ | | | | | | |
</details>
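Several of the planning entries above (`LQT`, `LQTBi`, `LQR`, `iLQR`) share the same backbone: a backward Riccati recursion followed by a forward rollout. Below is a minimal, generic sketch of finite-horizon linear quadratic tracking for a 1-D double integrator in plain NumPy. It is illustrative only, not Rofunc's API; the dynamics, weights, and reference are assumptions chosen for the example.

```python
import numpy as np

# Finite-horizon LQT for a 1-D double integrator: track reference r
# by minimizing sum_t (x_t - r)^T Q (x_t - r) + u_t^T R u_t.
dt, T = 0.1, 100
A = np.array([[1.0, dt], [0.0, 1.0]])  # [position, velocity] dynamics
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])              # tracking-error weight
R = np.array([[0.01]])                 # control-effort weight
r = np.array([1.0, 0.0])               # reach position 1 with zero velocity

# Backward Riccati recursion (quadratic term P plus linear term p for tracking)
P, p = Q.copy(), -Q @ r
gains = []
for _ in range(T):
    S = R + B.T @ P @ B
    K = np.linalg.solve(S, B.T @ P @ A)   # feedback gain
    k = np.linalg.solve(S, B.T @ p)       # feedforward term
    gains.append((K, k))
    Acl = A - B @ K
    P = Q + A.T @ P @ Acl
    p = -Q @ r + Acl.T @ p
gains.reverse()  # forward time order

# Forward rollout from rest using the time-varying control law u = -K x - k
x = np.zeros(2)
for K, k in gains:
    u = -K @ x - k
    x = A @ x + B @ u
print(x)  # approaches the reference [1, 0]
```

The bimanual and control-primitive variants in the table extend this same recursion with stacked states or via-point costs.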
## RofuncRL
`RofuncRL` is one of the most important sub-packages of `Rofunc`. It is a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as `OpenAIGym`, `IsaacGym`, and `OmniIsaacGym` (see the [example gallery](https://rofunc.readthedocs.io/en/latest/examples/learning_rl/index.html)), as well as differentiable simulators such as `PlasticineLab` and `DiffCloth`. The robot tasks trained with `RofuncRL` are listed below.
> **Note**\
> You can customize your own project based on RofuncRL by following the [**RofuncRL customize tutorial**](https://rofunc.readthedocs.io/en/latest/tutorial/customizeRL.html).\
> We also provide a [**RofuncRL-based repository template**](https://github.com/Skylark0924/RofuncRL-template) to generate your own repository following the RofuncRL structure with one click.\
> For more details, please check [**the documentation for RofuncRL**](https://rofunc.readthedocs.io/en/latest/examples/learning_rl/index.html).
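As background for the on-policy agents mentioned above (e.g. `PPO`, `A2C`), here is a minimal policy-gradient (REINFORCE) loop on a two-armed bandit in plain NumPy. It is a hedged, generic illustration of the update rule, not RofuncRL's implementation; the reward means, learning rate, and baseline schedule are assumptions chosen for the toy problem.

```python
import numpy as np

# REINFORCE with a running-average baseline on a two-armed Gaussian bandit.
rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])  # arm 1 pays more on average
logits = np.zeros(2)               # policy parameters (softmax over arms)
lr, baseline = 0.1, 0.0

for _ in range(2000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(2, p=probs)                  # sample an action
    reward = rng.normal(true_means[a], 0.1)     # stochastic reward
    baseline = 0.99 * baseline + 0.01 * reward  # variance-reducing baseline
    grad = -probs                               # d log pi(a) / d logits ...
    grad[a] += 1.0                              # ... for the sampled action
    logits += lr * (reward - baseline) * grad   # policy-gradient ascent step

probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # the policy ends up strongly preferring arm 1
```

Full agents such as PPO add learned value baselines, clipped surrogate objectives, and batched simulator rollouts on top of this same gradient estimator.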
<details>
<summary>The list of all supported tasks.</summary>
| Tasks | Animation | Performance | [ModelZoo](https://github.com/Skylark0924/Rofunc/blob/main/rofunc/config/learning/model_zoo.json) |
|-------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|---------------------------------------------------------------------------------------------------|
| Ant | ![](doc/img/task_gifs/AntRofuncRLPPO.gif) | | ✅ |
| Cartpole | | | |
| Franka<br/>Cabinet | ![](doc/img/task_gifs/FrankaCabinetRofuncRLPPO.gif) | | ✅ |
| Franka<br/>CubeStack | | | |
| CURI<br/>Cabinet | ![](doc/img/task_gifs/CURICabinetRofuncRLPPO.gif) | | ✅ |
| CURI<br/>CabinetImage | ![](doc/img/task_gifs/CURICabinetRofuncRLPPO.gif) | | |
| CURI<br/>CabinetBimanual | | | |
| CURIQbSoftHand<br/>SynergyGrasp | <img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspLarge_clampRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspSpatulaRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspPhillips_Screw_DriverRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspScissorsRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspKnifeRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspHammerRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspPower_drillRofuncRLPPO.gif" width=48% /><img src="doc/img/task_gifs/CURIQbSoftHandSynergyGraspMugRofuncRLPPO.gif" width=48% /> | | ✅ |
| Humanoid | ![](doc/img/task_gifs/HumanoidRofuncRLPPO.gif) | | ✅ |
| HumanoidAMP<br/>Backflip | ![](doc/img/task_gifs/HumanoidFlipRofuncRLAMP.gif) | | ✅ |
| HumanoidAMP<br/>Walk | | | ✅ |
| HumanoidAMP<br/>Run | ![](doc/img/task_gifs/HumanoidRunRofuncRLAMP.gif) | | ✅ |
| HumanoidAMP<br/>Dance | ![](doc/img/task_gifs/HumanoidDanceRofuncRLAMP.gif) | | ✅ |
| HumanoidAMP<br/>Hop | ![](doc/img/task_gifs/HumanoidHopRofuncRLAMP.gif) | | ✅ |
| HumanoidASE<br/>GetupSwordShield | ![](doc/img/task_gifs/HumanoidASEGetupSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASE<br/>PerturbSwordShield | ![](doc/img/task_gifs/HumanoidASEPerturbSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASE<br/>HeadingSwordShield | ![](doc/img/task_gifs/HumanoidASEHeadingSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASE<br/>LocationSwordShield | ![](doc/img/task_gifs/HumanoidASELocationSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASE<br/>ReachSwordShield | | | ✅ |
| HumanoidASE<br/>StrikeSwordShield | ![](doc/img/task_gifs/HumanoidASEStrikeSwordShieldRofuncRLASE.gif) | | ✅ |
| BiShadowHand<br/>BlockStack | ![](doc/img/task_gifs/BiShadowHandBlockStackRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>BottleCap | ![](doc/img/task_gifs/BiShadowHandBottleCapRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>CatchAbreast | ![](doc/img/task_gifs/BiShadowHandCatchAbreastRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>CatchOver2Underarm | ![](doc/img/task_gifs/BiShadowHandCatchOver2UnderarmRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>CatchUnderarm | ![](doc/img/task_gifs/BiShadowHandCatchUnderarmRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>DoorOpenInward | ![](doc/img/task_gifs/BiShadowHandDoorOpenInwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>DoorOpenOutward | ![](doc/img/task_gifs/BiShadowHandDoorOpenOutwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>DoorCloseInward | ![](doc/img/task_gifs/BiShadowHandDoorCloseInwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>DoorCloseOutward | ![](doc/img/task_gifs/BiShadowHandDoorCloseOutwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>GraspAndPlace | ![](doc/img/task_gifs/BiShadowHandGraspAndPlaceRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>LiftUnderarm | ![](doc/img/task_gifs/BiShadowHandLiftUnderarmRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>HandOver | ![](doc/img/task_gifs/BiShadowHandOverRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>Pen | ![](doc/img/task_gifs/BiShadowHandPenRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>PointCloud | | | |
| BiShadowHand<br/>PushBlock | ![](doc/img/task_gifs/BiShadowHandPushBlockRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>ReOrientation | ![](doc/img/task_gifs/BiShadowHandReOrientationRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>Scissors | ![](doc/img/task_gifs/BiShadowHandScissorsRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>SwingCup | ![](doc/img/task_gifs/BiShadowHandSwingCupRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>Switch | ![](doc/img/task_gifs/BiShadowHandSwitchRofuncRLPPO.gif) | | ✅ |
| BiShadowHand<br/>TwoCatchUnderarm | ![](doc/img/task_gifs/BiShadowHandTwoCatchUnderarmRofuncRLPPO.gif) | | ✅ |
</details>
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=Skylark0924/Rofunc&type=Date)](https://star-history.com/#Skylark0924/Rofunc&Date)
## Citation
If you use Rofunc in a scientific publication, we would appreciate a citation to the following paper:
```
@software{liu2023rofunc,
title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
author = {Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},
year = {2023},
publisher = {Zenodo},
doi = {10.5281/zenodo.10016946},
url = {https://doi.org/10.5281/zenodo.10016946},
}
```
## Related Papers
1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid
objects ([IEEE RA-L 2022](https://arxiv.org/abs/2205.05960) | [Code](rofunc/learning/RofuncIL/structured_transformer/strans.py))
```
@article{liu2022robot,
title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
journal={IEEE Robotics and Automation Letters},
volume={7},
number={2},
pages={5159--5166},
year={2022},
publisher={IEEE}
}
```
2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph
Transformer ([IROS 2023](https://arxiv.org/abs/2306.12677) | Code coming soon)
```
@inproceedings{liu2023softgpt,
title={Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={4920--4925},
year={2023},
organization={IEEE}
}
```
3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human
Demonstration ([IEEE CDC 2023](https://arxiv.org/abs/2307.05933) | [Code](./rofunc/learning/ml/tpgmm.py))
```
@inproceedings{liu2023birp,
title={Birp: Learning robot generalized bimanual coordination using relative parameterization method on human demonstration},
author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Tan, Kay Chen and Chen, Fei},
booktitle={2023 62nd IEEE Conference on Decision and Control (CDC)},
pages={8300--8305},
year={2023},
organization={IEEE}
}
```
## The Team
Rofunc is developed and maintained by
the [CLOVER Lab (Collaborative and Versatile Robots Laboratory)](https://feichenlab.com/), CUHK.
## Acknowledge
We would like to acknowledge the following projects:
### Learning from Demonstration
1. [pbdlib](https://gitlab.idiap.ch/rli/pbdlib-python)
2. [Ray RLlib](https://docs.ray.io/en/latest/rllib/index.html)
3. [ElegantRL](https://github.com/AI4Finance-Foundation/ElegantRL)
4. [SKRL](https://github.com/Toni-SM/skrl)
5. [DexterousHands](https://github.com/PKU-MARL/DexterousHands)
### Planning and Control
1. [Robotics codes from scratch (RCFS)](https://gitlab.idiap.ch/rli/robotics-codes-from-scratch)
`ODTrans(RofuncRL)` | \u26d4 | | | | | | |\n</details>\n\n## RofuncRL\n\n`RofuncRL` is one of the most important sub-packages of `Rofunc`. It is a modular easy-to-use Reinforcement Learning sub-package designed for Robot Learning tasks. It has been tested with simulators like `OpenAIGym`, `IsaacGym`, `OmniIsaacGym` (see [example gallery](https://rofunc.readthedocs.io/en/latest/examples/learning_rl/index.html)), and also differentiable simulators like `PlasticineLab` and `DiffCloth`. Here is a list of robot tasks trained by `RofuncRL`:\n\n\n> **Note**\\\n> You can customize your own project based on RofuncRL by following the [**RofuncRL customize tutorial**](https://rofunc.readthedocs.io/en/latest/tutorial/customizeRL.html).\\\n> We also provide a [**RofuncRL-based repository template**](https://github.com/Skylark0924/RofuncRL-template) to generate your own repository following the RofuncRL structure by one click.\\\n> For more details, please check [**the documentation for RofuncRL**](https://rofunc.readthedocs.io/en/latest/examples/learning_rl/index.html).\n\n<details>\n<summary>The list of all supported tasks.</summary>\n\n| Tasks | Animation | Performance | [ModelZoo](https://github.com/Skylark0924/Rofunc/blob/main/rofunc/config/learning/model_zoo.json) 
|\n|-------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|---------------------------------------------------------------------------------------------------|\n| Ant | ![](doc/img/task_gifs/AntRofuncRLPPO.gif) | | \u2705 |\n| Cartpole | | | |\n| Franka<br/>Cabinet | ![](doc/img/task_gifs/FrankaCabinetRofuncRLPPO.gif) | | \u2705 |\n| Franka<br/>CubeStack | | | |\n| CURI<br/>Cabinet | ![](doc/img/task_gifs/CURICabinetRofuncRLPPO.gif) | | \u2705 |\n| CURI<br/>CabinetImage | ![](doc/img/task_gifs/CURICabinetRofuncRLPPO.gif) | | |\n| CURI<br/>CabinetBimanual | | | |\n| CURIQbSoftHand<br/>SynergyGrasp | <img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspLarge_clampRofuncRLPPO.gif\" width=48% /><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspSpatulaRofuncRLPPO.gif\" width=48% /><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspPhillips_Screw_DriverRofuncRLPPO.gif\" width=48% /><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspScissorsRofuncRLPPO.gif\" width=48% /><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspKnifeRofuncRLPPO.gif\" width=48% /><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspHammerRofuncRLPPO.gif\" width=48% /><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspPower_drillRofuncRLPPO.gif\" width=48% 
/><img src=\"doc/img/task_gifs/CURIQbSoftHandSynergyGraspMugRofuncRLPPO.gif\" width=48% /> | | \u2705 |\n| Humanoid | ![](doc/img/task_gifs/HumanoidRofuncRLPPO.gif) | | \u2705 |\n| HumanoidAMP<br/>Backflip | ![](doc/img/task_gifs/HumanoidFlipRofuncRLAMP.gif) | | \u2705 |\n| HumanoidAMP<br/>Walk | | | \u2705 |\n| HumanoidAMP<br/>Run | ![](doc/img/task_gifs/HumanoidRunRofuncRLAMP.gif) | | \u2705 |\n| HumanoidAMP<br/>Dance | ![](doc/img/task_gifs/HumanoidDanceRofuncRLAMP.gif) | | \u2705 |\n| HumanoidAMP<br/>Hop | ![](doc/img/task_gifs/HumanoidHopRofuncRLAMP.gif) | | \u2705 |\n| HumanoidASE<br/>GetupSwordShield | ![](doc/img/task_gifs/HumanoidASEGetupSwordShieldRofuncRLASE.gif) | | \u2705 |\n| HumanoidASE<br/>PerturbSwordShield | ![](doc/img/task_gifs/HumanoidASEPerturbSwordShieldRofuncRLASE.gif) | | \u2705 |\n| HumanoidASE<br/>HeadingSwordShield | ![](doc/img/task_gifs/HumanoidASEHeadingSwordShieldRofuncRLASE.gif) | | \u2705 |\n| HumanoidASE<br/>LocationSwordShield | ![](doc/img/task_gifs/HumanoidASELocationSwordShieldRofuncRLASE.gif) | | \u2705 |\n| HumanoidASE<br/>ReachSwordShield | | | \u2705 |\n| HumanoidASE<br/>StrikeSwordShield | ![](doc/img/task_gifs/HumanoidASEStrikeSwordShieldRofuncRLASE.gif) | | \u2705 |\n| BiShadowHand<br/>BlockStack | ![](doc/img/task_gifs/BiShadowHandBlockStackRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>BottleCap | ![](doc/img/task_gifs/BiShadowHandBottleCapRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>CatchAbreast | ![](doc/img/task_gifs/BiShadowHandCatchAbreastRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>CatchOver2Underarm | ![](doc/img/task_gifs/BiShadowHandCatchOver2UnderarmRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>CatchUnderarm | ![](doc/img/task_gifs/BiShadowHandCatchUnderarmRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>DoorOpenInward | ![](doc/img/task_gifs/BiShadowHandDoorOpenInwardRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>DoorOpenOutward | 
![](doc/img/task_gifs/BiShadowHandDoorOpenOutwardRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>DoorCloseInward | ![](doc/img/task_gifs/BiShadowHandDoorCloseInwardRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>DoorCloseOutward | ![](doc/img/task_gifs/BiShadowHandDoorCloseOutwardRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>GraspAndPlace | ![](doc/img/task_gifs/BiShadowHandGraspAndPlaceRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>LiftUnderarm | ![](doc/img/task_gifs/BiShadowHandLiftUnderarmRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>HandOver | ![](doc/img/task_gifs/BiShadowHandOverRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>Pen | ![](doc/img/task_gifs/BiShadowHandPenRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>PointCloud | | | |\n| BiShadowHand<br/>PushBlock | ![](doc/img/task_gifs/BiShadowHandPushBlockRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>ReOrientation | ![](doc/img/task_gifs/BiShadowHandReOrientationRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>Scissors | ![](doc/img/task_gifs/BiShadowHandScissorsRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>SwingCup | ![](doc/img/task_gifs/BiShadowHandSwingCupRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>Switch | ![](doc/img/task_gifs/BiShadowHandSwitchRofuncRLPPO.gif) | | \u2705 |\n| BiShadowHand<br/>TwoCatchUnderarm | ![](doc/img/task_gifs/BiShadowHandTwoCatchUnderarmRofuncRLPPO.gif) | | \u2705 |\n\n</details>\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=Skylark0924/Rofunc&type=Date)](https://star-history.com/#Skylark0924/Rofunc&Date)\n\n## Citation\n\nIf you use rofunc in a scientific publication, we would appreciate citations to the following paper:\n\n```\n@software{liu2023rofunc,\n title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},\n author = {Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},\n year = 
{2023},\n publisher = {Zenodo},\n doi = {10.5281/zenodo.10016946},\n url = {https://doi.org/10.5281/zenodo.10016946},\n dimensions = {true},\n google_scholar_id = {0EnyYjriUFMC},\n}\n```\n\n## Related Papers\n\n1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid\n objects ([IEEE RA-L 2022](https://arxiv.org/abs/2205.05960) | [Code](rofunc/learning/RofuncIL/structured_transformer/strans.py))\n\n```\n@article{liu2022robot,\n title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},\n author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},\n journal={IEEE Robotics and Automation Letters},\n volume={7},\n number={2},\n pages={5159--5166},\n year={2022},\n publisher={IEEE}\n}\n```\n\n2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph\n Transformer ([IROS 2023](https://arxiv.org/abs/2306.12677)\uff5cCode coming soon)\n\n```\n@inproceedings{liu2023softgpt,\n title={Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},\n author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},\n booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n pages={4920--4925},\n year={2023},\n organization={IEEE}\n}\n```\n\n3. 
BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human\n Demonstration ([IEEE CDC 2023](https://arxiv.org/abs/2307.05933) | [Code](./rofunc/learning/ml/tpgmm.py))\n\n```\n@inproceedings{liu2023birp,\n title={Birp: Learning robot generalized bimanual coordination using relative parameterization method on human demonstration},\n author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Tan, Kay Chen and Chen, Fei},\n booktitle={2023 62nd IEEE Conference on Decision and Control (CDC)},\n pages={8300--8305},\n year={2023},\n organization={IEEE}\n}\n```\n\n## The Team\n\nRofunc is developed and maintained by\nthe [CLOVER Lab (Collaborative and Versatile Robots Laboratory)](https://feichenlab.com/), CUHK.\n\n## Acknowledge\n\nWe would like to acknowledge the following projects:\n\n### Learning from Demonstration\n\n1. [pbdlib](https://gitlab.idiap.ch/rli/pbdlib-python)\n2. [Ray RLlib](https://docs.ray.io/en/latest/rllib/index.html)\n3. [ElegantRL](https://github.com/AI4Finance-Foundation/ElegantRL)\n4. [SKRL](https://github.com/Toni-SM/skrl)\n5. [DexterousHands](https://github.com/PKU-MARL/DexterousHands)\n\n### Planning and Control\n\n1. [Robotics codes from scratch (RCFS)](https://gitlab.idiap.ch/rli/robotics-codes-from-scratch)\n",
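Rofunc's planning-and-control modules (`LQT` and its variants listed above) follow formulations in the style of RCFS. As a rough, self-contained sketch of the linear quadratic tracking idea (a generic batch least-squares formulation, **not** Rofunc's actual `LQT` API; the function name and problem setup below are illustrative only):

```python
import numpy as np

def lqt_batch(A, B, Q, R, x0, ref):
    """Generic batch LQT: minimize sum_t ||x_t - r_t||^2_Q + ||u_t||^2_R
    subject to x_{t+1} = A x_t + B u_t, solved as one least-squares problem."""
    T, nx = ref.shape
    nu = B.shape[1]
    # Lifted dynamics: [x_1; ...; x_T] = Sx @ x0 + Su @ [u_0; ...; u_{T-1}]
    Sx = np.vstack([np.linalg.matrix_power(A, t) for t in range(1, T + 1)])
    Su = np.zeros((T * nx, T * nu))
    for t in range(T):
        for k in range(t + 1):
            Su[t * nx:(t + 1) * nx, k * nu:(k + 1) * nu] = (
                np.linalg.matrix_power(A, t - k) @ B)
    Qb = np.kron(np.eye(T), Q)  # block-diagonal state weights
    Rb = np.kron(np.eye(T), R)  # block-diagonal control weights
    r = ref.reshape(-1)
    # Normal equations of the quadratic cost in the stacked control vector
    U = np.linalg.solve(Su.T @ Qb @ Su + Rb, Su.T @ Qb @ (r - Sx @ x0))
    X = (Sx @ x0 + Su @ U).reshape(T, nx)  # resulting state trajectory
    return U.reshape(T, nu), X

# Double-integrator point mass tracking a target position of 1.0
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 0.1])  # weight position heavily, velocity lightly
R = np.array([[0.01]])
x0 = np.array([0.0, 0.0])
ref = np.tile([1.0, 0.0], (50, 1))
U, X = lqt_batch(A, B, Q, R, x0, ref)
print(f"final position: {X[-1, 0]:.3f}")
```

The library's `LQT` additionally covers bimanual (`LQTBi`), feedback (`LQTFb`), and control-primitive (`LQTCP`) variants; see the linked documentation for the real interfaces.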
"bugtrack_url": null,
"license": "MIT",
"summary": "Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation",
"version": "0.0.2.6",
"project_urls": {
"Homepage": "https://github.com/Skylark0924/Rofunc"
},
"split_keywords": [
"robotics",
" robot learning",
" learning from demonstration",
" reinforcement learning",
" robot manipulation"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "71d44862a991ea4b36059f7577672dc1f09cafd1da3860936d445856b9f63e75",
"md5": "39c9150308d7b5708cb204889a79bdd7",
"sha256": "5d92ed79673794120369b08b7bdd7c316e1af1343e56148b04031cdce7cf9fc5"
},
"downloads": -1,
"filename": "rofunc-0.0.2.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "39c9150308d7b5708cb204889a79bdd7",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.9,>=3.7",
"size": 202930776,
"upload_time": "2024-07-03T10:34:19",
"upload_time_iso_8601": "2024-07-03T10:34:19.673886Z",
"url": "https://files.pythonhosted.org/packages/71/d4/4862a991ea4b36059f7577672dc1f09cafd1da3860936d445856b9f63e75/rofunc-0.0.2.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "c183dc904d2e8c3e2a321132c0aa02279d2053b699900a39574f1d5fc5e9d0b5",
"md5": "790ae45fd78e53f7f624270f5f857468",
"sha256": "2dd61eaed54c8eef4694b19585dbdace7366c8a941101e53bc70ed6bbc9ebb0a"
},
"downloads": -1,
"filename": "rofunc-0.0.2.6.tar.gz",
"has_sig": false,
"md5_digest": "790ae45fd78e53f7f624270f5f857468",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.9,>=3.7",
"size": 201473252,
"upload_time": "2024-07-03T10:34:38",
"upload_time_iso_8601": "2024-07-03T10:34:38.612058Z",
"url": "https://files.pythonhosted.org/packages/c1/83/dc904d2e8c3e2a321132c0aa02279d2053b699900a39574f1d5fc5e9d0b5/rofunc-0.0.2.6.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-07-03 10:34:38",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Skylark0924",
"github_project": "Rofunc",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "rofunc"
}