robot-awe 0.1 (PyPI)

- Summary: research project
- Author: IRIS
- License: MIT License
- Requires Python: >=3.6
- Keywords: deep learning, machine learning
- Uploaded: 2023-07-27 01:27:41
# Automatic Waypoint Extraction (AWE)
[[Project website](https://lucys0.github.io/awe/)] [[Paper]()]

![](media/teaser.png)

This repo contains the implementation of Automatic Waypoint Extraction (AWE): a plug-and-play module for selecting waypoints from demonstrations for performant behavioral cloning. It also includes instantiations of AWE combined with two state-of-the-art imitation learning methods, [Diffusion Policy](https://arxiv.org/abs/2303.04137) and [Action Chunking with Transformers (ACT)](https://arxiv.org/abs/2304.13705), and the respective benchmarking environments, [RoboMimic](https://robomimic.github.io/) and the [Bimanual Simulation Suite](https://tonyzhaozh.github.io/aloha/).
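The idea behind the `--err_threshold` flag used throughout the commands below can be illustrated with a short sketch. This is a greedy illustrative pass only (the function name is hypothetical, and the repo's actual selection code lives in the `utils/` and `example/` scripts and may use a different formulation): a state is kept as a waypoint whenever linearly interpolating past it would reconstruct the demonstration with more than `err_threshold` error.

```python
import numpy as np

def extract_waypoints(traj, err_threshold):
    """Greedy sketch of threshold-based waypoint selection: walk forward
    and keep a waypoint whenever the segment from the last kept waypoint
    can no longer be reconstructed by linear interpolation within
    err_threshold. Illustrative only, not the repo's implementation."""
    traj = np.asarray(traj, dtype=float)
    waypoints = [0]
    last = 0
    for i in range(2, len(traj)):
        # Linearly interpolate from the last kept waypoint to candidate i.
        t = np.linspace(0.0, 1.0, i - last + 1)[:, None]
        interp = (1 - t) * traj[last] + t * traj[i]
        # Max deviation between the interpolation and the true states.
        err = np.abs(interp - traj[last:i + 1]).max()
        if err > err_threshold:
            # Extending to i breaks the tolerance, so keep i-1 as a waypoint.
            waypoints.append(i - 1)
            last = i - 1
    waypoints.append(len(traj) - 1)
    return waypoints
```

For a trajectory that moves straight, turns a corner, and moves straight again, only the endpoints and the corner survive; a tighter threshold keeps more waypoints.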

If you encounter any issues, feel free to contact lucyshi (at) stanford (dot) edu

## Installation
1. Clone this repository
```bash
git clone git@github.com:lucys0/awe.git
cd awe
```

2. Create a virtual environment
```bash 
conda create -n awe_venv python=3.9
conda activate awe_venv
```

3. Install MuJoCo 2.1
* Download the MuJoCo version 2.1 binaries for [Linux](https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz) or [OSX](https://mujoco.org/download/mujoco210-macos-x86_64.tar.gz).
* Extract the downloaded `mujoco210` directory into `~/.mujoco/mujoco210`.

4. Install packages
```bash
pip install -e .
```

## RoboMimic
### Set up the environment
```bash
# install robomimic
pip install -e robomimic/

# install robosuite
pip install -e robosuite/
```

### Download data
```bash
# download unprocessed data from the robomimic benchmark
python robomimic/robomimic/scripts/download_datasets.py --tasks lift can square  

# download processed image data from diffusion policy (faster)
mkdir data && cd data
wget https://diffusion-policy.cs.columbia.edu/data/training/robomimic_image.zip
unzip robomimic_image.zip && rm -f robomimic_image.zip && cd ..
```

### Usage
Please replace `[TASK]` with the task you want to train on: `[TASK] = {lift, can, square}`
* Convert delta actions to absolute actions
```bash
python utils/robomimic_convert_action.py --dataset=robomimic/datasets/[TASK]/ph/low_dim.hdf5
```
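What this conversion computes can be sketched for positions as follows (a hypothetical helper, not the repo's script, which operates on the HDF5 dataset and presumably also handles orientations and the gripper): absolute targets are the initial pose plus the running sum of the per-step deltas.

```python
import numpy as np

def deltas_to_absolute(initial_pos, delta_actions):
    """Illustrative sketch: recover absolute position targets by
    accumulating per-step position deltas onto the initial pose."""
    deltas = np.asarray(delta_actions, dtype=float)
    # Cumulative sum along time gives the displacement after each step.
    return np.asarray(initial_pos, dtype=float) + np.cumsum(deltas, axis=0)
```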

* Save waypoints
```bash
python utils/robomimic_save_waypoints.py --dataset=robomimic/datasets/[TASK]/ph/low_dim.hdf5 --err_threshold=0.005
```

* Replay waypoints (saves 3 videos and 3D visualizations by default)
```bash
mkdir video
python example/robomimic_waypoint_replay.py --dataset=robomimic/datasets/[TASK]/ph/low_dim.hdf5 \
    --record_video --video_path video/[TASK]_waypoint.mp4 --task=[TASK] \
    --plot_3d --auto_waypoint --err_threshold=0.005
```
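Replay reconstructs the dense trajectory by linearly interpolating between consecutive waypoints. A minimal sketch of that reconstruction (illustrative only; the actual script also steps the simulator and renders the replay to video):

```python
import numpy as np

def interpolate_waypoints(traj, waypoints):
    """Rebuild a dense trajectory by linear interpolation between
    consecutive kept waypoints of traj. Illustrative sketch."""
    traj = np.asarray(traj, dtype=float)
    out = [traj[waypoints[0]]]
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        # Interpolate b - a steps, skipping the repeated segment start.
        t = np.linspace(0.0, 1.0, b - a + 1)[1:, None]
        out.extend((1 - t) * traj[a] + t * traj[b])
    return np.vstack(out)
```

States between waypoints are replaced by their interpolated reconstruction, which is exactly what the error threshold above bounds.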

## AWE + Diffusion Policy

### Install Diffusion Policy
```bash
conda env update -f diffusion_policy/conda_environment.yaml
```
If the installation is too slow, consider using [Mambaforge](https://github.com/conda-forge/miniforge#mambaforge) instead of the standard Anaconda distribution, as recommended by the [Diffusion Policy](https://github.com/columbia-ai-robotics/diffusion_policy#%EF%B8%8F-installation) authors. That is:

```bash
mamba env create -f diffusion_policy/conda_environment.yaml
```

### Train policy
```bash
python diffusion_policy/train.py --config-dir=config --config-name=waypoint_image_[TASK]_ph_diffusion_policy_transformer.yaml hydra.run.dir='data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}'
```

## Bimanual Simulation Suite
### Set up the environment
```bash
conda env update -f act/conda_env.yaml
```

### Download data
Please download the scripted/human demos for the simulated environments from [here](https://drive.google.com/drive/folders/1gPR03v05S1xiInoVJn7G7VJ9pDCnxq9O) and save them in `data/act/`.

If you need real robot data, please contact Lucy Shi: lucyshi (at) stanford (dot) edu


### Usage
Please replace `[TASK]` with the task you want to train on: `[TASK] = {sim_transfer_cube_scripted, sim_insertion_scripted, sim_transfer_cube_human, sim_insertion_human}`

* Visualize waypoints
```bash
python example/act_waypoint.py --dataset=data/act/[TASK] --err_threshold=0.01 --plot_3d --end_idx=0 
```

* Save waypoints
```bash
python example/act_waypoint.py --dataset=data/act/[TASK] --err_threshold=0.01 --save_waypoints 
```

## AWE + ACT
### Train policy
```bash
python act/imitate_episodes.py \
    --task_name [TASK] \
    --ckpt_dir data/outputs/act_ckpt/[TASK]_waypoint \
    --policy_class ACT --kl_weight 10 --chunk_size 50 --hidden_dim 512 --batch_size 8 --dim_feedforward 3200 \
    --num_epochs 8000  --lr 1e-5 \
    --seed 0 --temporal_agg --use_waypoint
```
For human datasets, set `--kl_weight=80`, as suggested by the ACT authors. To evaluate the policy, run the same command with `--eval`. 


## Citation

If you find our code useful for your research, please cite:
```

```

            
