This repository contains the source code for the paper Iterative Residual Policy for Goal-Conditioned Dynamic Manipulation of Deformable Objects. This paper has been accepted to RSS 2022.
```bibtex
@inproceedings{chi2022irp,
  title={Iterative Residual Policy for Goal-Conditioned Dynamic Manipulation of Deformable Objects},
  author={Chi, Cheng and Burchfiel, Benjamin and Cousineau, Eric and Feng, Siyuan and Song, Shuran},
  booktitle={Proceedings of Robotics: Science and Systems (RSS)},
  year={2022}
}
```
- IRP Rope Dataset [required for eval] (7.63GB)
- IRP Cloth Dataset [training only] (938MB)
- IRP Rope Pretrained Models [action + tracking] (914MB)
- IRP Cloth Pretrained Models [action] (450MB)
A conda `environment.yml` for python=3.8, pytorch=1.9.0, and cudatoolkit=11.2 is provided:

```
$ conda env create --file environment.yml
```

For better dependency conflict resolution, consider using mambaforge instead:

```
$ mamba env create --file environment.yml
```
- Install Mujoco 2.1.0
- Install mujoco-py 2.0.2 and carefully follow its installation instructions.
- Install abr_control with Mujoco.
- Stereolabs ZED 2i Camera
- UR5-CB3 or UR5e (RTDE Interface is required)
- Millibar Robotics Manual Tool Changer (only need robot side)
- 3D print the Quick Change Plate to mount the wooden extension stick to the EEF.
- 3/8 inch Square Wooden Dowel
- 8mm Cotton Rope
- Wood screws
- Duct tape 😛
Under the project root (i.e. `irp/`), create a `data` folder and download the IRP Rope Dataset as well as the Pretrained Models into it. Extract the tar files:

```
$ cd data
$ tar -xvf checkpoints.tar
$ tar -xvf irp_rope.zarr.tar
```
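As a quick sanity check after extraction, you can verify that the expected entries exist under `data/`. A minimal sketch (the entry names below are assumed from the tar file names in this README; adjust if the archives extract to different paths):

```python
import pathlib

# Entry names assumed from the tar file names in this README.
EXPECTED = ["checkpoints", "irp_rope.zarr"]

def missing_data(root):
    """Return the expected data/ entries that are missing under root."""
    root = pathlib.Path(root)
    return [name for name in EXPECTED if not (root / name).exists()]
```
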
Activate the environment:

```
$ conda activate irp
(irp) $
```
Run the dataset evaluation script. Use `action.gpu_id` to select a GPU on multi-GPU systems:

```
(irp) $ python eval_irp_rope_dataset.py action.gpu_id=0
```
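The `action.gpu_id=0` argument is a Hydra-style dotted override. As a rough illustration of how such overrides map onto a nested config, here is a toy sketch of the idea (not Hydra's actual implementation, which also handles config files, groups, and much more):

```python
def parse_overrides(argv):
    """Parse Hydra-style dotted overrides like "action.gpu_id=0" into a nested dict."""
    def convert(raw):
        # Crude literal conversion: int, then float, then bool, else string.
        for cast in (int, float):
            try:
                return cast(raw)
            except ValueError:
                pass
        if raw in ("True", "False"):
            return raw == "True"
        return raw

    cfg = {}
    for arg in argv:
        keys, _, raw = arg.partition("=")
        *parents, leaf = keys.split(".")
        node = cfg
        for key in parents:
            node = node.setdefault(key, {})  # walk/create nested dicts
        node[leaf] = convert(raw)
    return cfg
```
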
A `log.pkl` file will be saved to the ad-hoc output directory created by Hydra. Add the command-line argument `offline=False` to enable wandb logging (recommended).
The numbers reported in our paper are generated using this method.
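`log.pkl` is a standard Python pickle, so it can be inspected with the `pickle` module. The sketch below writes and then reads back a stand-in file, since the real log's schema is not documented here; the `goal`/`n_iters` keys are made up for illustration only:

```python
import os
import pickle
import tempfile

# Stand-in log: the real log.pkl schema is not documented in this README,
# so these keys are purely illustrative.
log = {"goal": [0.5, 0.2], "n_iters": 3}
path = os.path.join(tempfile.mkdtemp(), "log.pkl")
with open(path, "wb") as f:
    pickle.dump(log, f)

# Loading works the same way for the evaluation script's output.
with open(path, "rb") as f:
    loaded = pickle.load(f)
```
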
Extract the data and checkpoints following the basic eval instructions (required), and install dependencies following the sim installation instructions.
Run the simulation evaluation script:

```
(irp) $ python eval_irp_rope_sim.py action.gpu_id=0
```
Note that this method has not been extensively tested and is mainly provided for future development. Mujoco might crash due to numerical instability (i.e. NaN), which the dataset evaluation handles better.
Extract the data and checkpoints following the basic eval instructions (required), and install dependencies following the real-robot installation instructions.
Initialize the UR5 robot and write down its `<robot_ip>`. In teach mode, move the robot close to the joint configuration `[-90, -70, 150, -170, -90, -90]` (degrees) to prevent unexpected movement.
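The joint configuration above is given in degrees, as shown on the UR teach pendant. If you later script motions through an API that expects radians (e.g. an RTDE `moveJ` call), convert first; a minimal helper:

```python
import math

# Joint configuration from this README, in degrees.
HOME_DEG = [-90, -70, 150, -170, -90, -90]

def deg_to_rad(joints_deg):
    """Convert a list of joint angles from degrees to radians."""
    return [math.radians(j) for j in joints_deg]

HOME_RAD = deg_to_rad(HOME_DEG)
```
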
Run `ur5_camera_calibration_app.py` to create a homography calibration file at `data/calibration.pkl`. Use the `A`/`D` keys to move `j2` and the `W`/`S` keys to move `j3`. Click on the tip of the wooden extension for calibration.

```
(irp) $ python ur5_camera_calibration_app.py --ip <robot_ip> -o data/calibration.pkl
```
In a good calibration result, the red and blue crosses should be fairly close.
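For intuition on what the homography calibration computes, here is a minimal, self-contained sketch of fitting a homography from point correspondences with the direct linear transform (DLT); this is an illustration using only `numpy`, not the calibration app's actual code:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the direct linear transform (DLT). src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        # Each correspondence contributes two linear constraints on H's entries.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (up to scale) is the null vector of A: the last right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to (N, 2) points via homogeneous coordinates."""
    pts = np.asarray(pts, float)
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```
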
Run the real-robot evaluation script:

```
(irp) $ python eval_irp_rope_real.py
```

Results and videos will be saved to the ad-hoc output directory created by Hydra.
To train the IRP model from scratch:

```
(irp) $ python train_irp.py
```
In case of inaccurate tracking, use `video_labeler.py` to generate tracking labels and `train_tracker.py` to train the tracking model.