
RoboNinja: Learning an Adaptive Cutting Policy for Multi-Material Objects

Zhenjia Xu¹, Zhou Xian², Xingyu Lin³, Cheng Chi¹, Zhiao Huang⁴, Chuang Gan⁵†, Shuran Song¹†
¹Columbia University, ²CMU, ³UC Berkeley, ⁴UC San Diego, ⁵UMass Amherst & MIT-IBM Lab

RSS 2023

This repository contains code for training and evaluating RoboNinja in both simulation and real-world settings.

Installation

We recommend Mambaforge over the standard Anaconda distribution for faster installation:

$ mamba env create -f environment.yml

but you can use conda as well:

$ conda env create -f environment.yml

Activate the conda environment and log in to wandb:

$ conda activate roboninja
$ wandb login

Simulation and Trajectory Optimization

Rigid Core Generation

Generate cores with in-distribution geometries (300 train + 50 eval):

$ python roboninja/workspace/bone_generation_workspace.py

Generate cores with out-of-distribution geometries (50 eval):

$ python roboninja/workspace/bone_generation_ood_workspace.py

Quick Example

simulation_example.ipynb provides a quick example of the simulation. It first creates a scene and renders an image, then runs a forward pass and a backward pass using the initial action trajectory, and finally executes an optimized action trajectory.
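For a stand-alone illustration of that forward/backward/update pattern, here is a minimal sketch with a toy differentiable loss; the environment class, method names, and loss are placeholders, not the repository's API.

# Minimal sketch of gradient-based trajectory optimization:
# forward pass -> loss, backward pass -> gradient, gradient step on actions.
import numpy as np

class ToyCuttingEnv:
    """Stand-in for the differentiable cutting simulation (placeholder)."""

    def __init__(self, target_depth=1.0):
        self.target_depth = target_depth

    def forward(self, actions):
        # Pretend knife depth is the cumulative sum of per-step pushes.
        depth = np.cumsum(actions)
        # Loss: miss the target depth at the final step + effort penalty.
        return (depth[-1] - self.target_depth) ** 2 + 1e-2 * np.sum(actions ** 2)

    def backward(self, actions):
        # Analytic gradient of the toy loss w.r.t. each action.
        return 2.0 * (np.sum(actions) - self.target_depth) + 2e-2 * actions

env = ToyCuttingEnv()
actions = np.zeros(64)          # initial action trajectory
for step in range(200):         # optimize the trajectory by gradient descent
    grad = env.backward(actions)
    actions -= 1e-2 * grad
print(f"final loss: {env.forward(actions):.6f}")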

If you get an error related to rendering, here are some potential solutions:

  • Make sure Vulkan is installed.
  • TI_VISIBLE_DEVICE may not be set correctly in roboninja/env/tc_env.py (L17). The Vulkan device index is not necessarily aligned with the CUDA device index, which is why the get_vulkan_offset() function exists; change its implementation based on your setup (see the sketch after this list).
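A hypothetical re-implementation of such a helper, assuming an integrated GPU occupies Vulkan index 0 so every CUDA device is shifted by one Vulkan slot; adjust the mapping to your own machine.

# Hypothetical get_vulkan_offset()-style helper (not the repository's code).
def get_vulkan_offset() -> int:
    # Example assumption: Vulkan enumerates an integrated GPU first.
    return 1

def cuda_to_vulkan_index(cuda_index: int) -> int:
    # TI_VISIBLE_DEVICE expects the Vulkan index, not the CUDA index.
    return cuda_index + get_vulkan_offset()

print(cuda_to_vulkan_index(0))  # -> 1 on this hypothetical machine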

Trajectory Optimization via Differentiable Simulation

$ python roboninja/workspace/optimization_workspace.py name=NAME

Here, {NAME} is typically expert_{X}, where X is the index of the rigid core. The loss curve is logged as roboninja/{NAME} on wandb, along with visualizations of intermediate results. The final result will be saved in data/optimization/{NAME}. Configurations are stored in roboninja/config/optimization_workspace.yaml.
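If you want expert trajectories for every rigid core, one option is to invoke the workspace once per index. The loop below is only a sketch of that workflow: the script path and the expert_{X} naming follow the command above, but batching the runs this way is an assumption.

# Launch trajectory optimization sequentially for each in-distribution core.
import subprocess

for core_idx in range(300):  # 300 in-distribution training cores
    subprocess.run(
        [
            "python",
            "roboninja/workspace/optimization_workspace.py",
            f"name=expert_{core_idx}",
        ],
        check=True,
    )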

Training

State Estimation

$ python roboninja/workspace/state_estimation_workspace.py

The loss curve is logged as roboninja/state_estimation on wandb, along with visualizations on both the training and testing sets. Checkpoints will be saved in data/state_estimation. Configurations are stored in roboninja/config/state_estimation_workspace.yaml.

Cutting Policy

$ python roboninja/workspace/close_loop_policy_workspace.py dataset.expert_dir=data/expert

This is the training command using the provided expert demonstrations stored in data/expert. You can also change the directory to the trajectories you collected in the previous step. The loss curve is logged as roboninja/close_loop_policy on wandb, along with visualizations for different tolerance values. Checkpoints will be saved in data/close_loop_policy. Configurations are stored in roboninja/config/close_loop_policy_workspace.yaml.
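For example, to train on the trajectories produced by the optimization step above (saved under data/optimization), the same override can point there, assuming that directory follows the same layout as data/expert:

$ python roboninja/workspace/close_loop_policy_workspace.py dataset.expert_dir=data/optimization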

Evaluation in Simulation

Pretrained Model

Pretrained models can be downloaded with:

wget https://roboninja.cs.columbia.edu/download/roboninja-pretrined.zip

Unzip the archive and remember to update state_estimation_path and close_loop_policy_path in roboninja/configs/eval_workspace.yaml.
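For example, if the archive is extracted to data/pretrained (a hypothetical location; the actual checkpoint file names inside the archive may differ), the two entries would look roughly like:

state_estimation_path: data/pretrained/state_estimation.ckpt   # hypothetical path
close_loop_policy_path: data/pretrained/close_loop_policy.ckpt  # hypothetical path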

Evaluation Environments

We provide two types of simulation environments for evaluation:

  • sim: a geometry-based simulation that incorporates collision detection. It is designed to calculate cut mass and the number of collisions. It is very fast but cannot estimate energy consumption.
  • taichi: the physics-based simulation implemented in Taichi and used for trajectory optimization. It is slower than the sim environment but supports energy consumption estimation. Simulation results in our paper are evaluated using this environment.

Here is the command for the taichi environment:

$ python roboninja/workspace/eval_workspace.py type=taichi

Results will be saved in data/eval.
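To use the faster geometry-based environment instead, the same script should accept the sim option by analogy with the command above (not verified here):

$ python roboninja/workspace/eval_workspace.py type=sim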

Real-world Evaluation

Hardware

Relevant tutorial: https://tutorials-raspberrypi.com/digital-raspberry-pi-scale-weight-sensor-hx711/

Setup Raspberry Pi

Copy roboninja/real_world/server_udp.py onto the Raspberry Pi and run:

python server_udp.py
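server_udp.py streams readings from the HX711 scale over UDP; the sketch below only illustrates that kind of responder, and the port, message format, and stubbed-out sensor read are assumptions rather than the repository's implementation.

# Minimal UDP responder sketch: reply to each request with the current
# weight reading. The port and plain-text message format are illustrative.
import socket

HOST, PORT = "0.0.0.0", 9999  # hypothetical port

def read_weight() -> float:
    # Replace with an actual HX711 reading (e.g. via an HX711 Python library).
    return 0.0

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
while True:
    _, addr = sock.recvfrom(1024)  # any incoming datagram acts as a request
    sock.sendto(f"{read_weight():.3f}".encode(), addr)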

Evaluate Cutting Policy

Update the IP address and other configurations in roboninja/configs/real_env.yaml, and run:

python roboninja/workspace/eval_workspace.py type=real bone_idx={INDEX}

BibTeX

@inproceedings{xu2023roboninja,
	title={RoboNinja: Learning an Adaptive Cutting Policy for Multi-Material Objects},
	author={Xu, Zhenjia and Xian, Zhou and Lin, Xingyu and Chi, Cheng and Huang, Zhiao and Gan, Chuang and Song, Shuran},
	booktitle={Proceedings of Robotics: Science and Systems (RSS)},
	year={2023}
}

License

This repository is released under the MIT license. See LICENSE for additional details.

Acknowledgement
