GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts

CVPR 2023 Highlight

This is the official repository of GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts.

For more information, please visit our project page.

💡 News

  • 2023/6/28: We polished our model with the user-friendly Lightning framework and released detailed training code! Check the gapartnet folder for more details!

  • 2023/5/21: The GAPartNet Dataset has been released, including Object & Part Assets and Annotations, Rendered PointCloud Data, and our Pre-trained Checkpoint.

GAPartNet Dataset

(New!) GAPartNet Dataset has been released, including Object & Part Assets and Annotations, Rendered PointCloud Data, and our Pre-trained Checkpoint.

To obtain our dataset, please fill out this form and check the Terms & Conditions. Please cite our paper if you use our dataset.

Download our pretrained checkpoint here! We also release the checkpoint trained on the full GAPartNet dataset with the best performance here. (Note that the checkpoint bundled with the dataset is outdated; please use this one instead.)

GAPartNet Network and Inference

We release our network and checkpoints; check the gapartnet folder for more details. You can use them to segment parts and estimate their poses. We also provide visualization code. Here is a visualization example: example example2

How to use our code and model:

1. Install dependencies

  • Python 3.8
  • PyTorch >= 1.11.0
  • CUDA >= 11.3
  • Open3D with extension (See install guide below)
  • epic_ops (See install guide below)
  • pointnet2_ops (See install guide below)
  • other pip packages

2. Install Open3D & epic_ops & pointnet2_ops

See the following repo for more details:

GAPartNet_env: this repo includes Open3D, epic_ops, and pointnet2_ops; you can install them by following its instructions.
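
For reference, a typical setup might look like the sketch below. This is only an assumption-laden outline: it presumes a fresh conda environment and CUDA 11.3 wheels, and the GAPartNet_env repo remains the authoritative source for the exact commands and URLs.

# create and activate an environment matching the requirements above
conda create -n gapartnet python=3.8 -y
conda activate gapartnet

# PyTorch >= 1.11.0 built against CUDA >= 11.3 (adjust to your CUDA version)
pip install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

# Open3D with the extension, epic_ops, and pointnet2_ops:
# follow the GAPartNet_env instructions (typically clone each package and install it with pip)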

3. Download our model and data

See gapartnet folder for more details.
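
The inference command below expects the checkpoint at gapartnet/ckpt/release.ckpt. A minimal sketch of placing a downloaded checkpoint there (the download location used here is hypothetical):

cd gapartnet
mkdir -p ckpt
# move the downloaded checkpoint to the path used by the inference command;
# adjust the source path to wherever you saved the file
mv ~/Downloads/release.ckpt ckpt/release.ckpt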

4. Inference and visualization

cd gapartnet

CUDA_VISIBLE_DEVICES=0 \
python train.py test -c gapartnet.yaml \
--model.init_args.training_schedule "[0,0]" \
--model.init_args.ckpt ckpt/release.ckpt

Notice:

  • We provide visualization code here. You can change the config in model.init_args.visualize_cfg to control the following (see the example command after this list):
    • whether to visualize (visualize)
    • where to save results (SAVE_ROOT)
    • what to visualize: save_option includes ["raw", "pc", "sem_pred", "sem_gt", "ins_pred", "ins_gt", "npcs_pred", "npcs_gt", "bbox_gt", "bbox_gt_pure", "bbox_pred", "bbox_pred_pure"] (save_option)
    • the number of visualization samples (sample_num)
  • We fixed some bugs in the mAP computation; check the code for more details.
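
For example, an inference run that also saves a handful of visualization samples might look like the sketch below. The dot-path overrides for visualize_cfg are an assumption (whether the CLI accepts them depends on how visualize_cfg is typed); editing gapartnet.yaml directly achieves the same result.

cd gapartnet

CUDA_VISIBLE_DEVICES=0 \
python train.py test -c gapartnet.yaml \
--model.init_args.training_schedule "[0,0]" \
--model.init_args.ckpt ckpt/release.ckpt \
--model.init_args.visualize_cfg.visualize True \
--model.init_args.visualize_cfg.sample_num 10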

5. Training

You can run the following command to train the network:

cd gapartnet

CUDA_VISIBLE_DEVICES=0 \
python train.py fit -c gapartnet.yaml

Notice:

  • For training, please use a proper schedule: first train the semantic segmentation backbone and head, then add the clustering and scorenet supervision for instance segmentation. You can change the schedule in the config (model.init_args.training_schedule). The schedule is a list of two numbers: the first is the epoch at which clustering and scorenet training start, and the second is the epoch at which npcsnet training starts. For example, [5,10] means clustering and scorenet training start at epoch 5 and npcsnet training starts at epoch 10.
  • If you want to debug, add --model.init_args.debug True to the command and also set data.init_args.xxx_few_shot to True in the config, where xxx is the name of the training or validation set.
  • We also support multi-GPU parallel training: set CUDA_VISIBLE_DEVICES to the GPUs you want to use, e.g. CUDA_VISIBLE_DEVICES=3,6 trains on the two GPUs #3 and #6. See the example after this list.
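
For instance, the following command (a sketch combining only the options already shown above; the schedule values are illustrative) trains on GPUs #3 and #6, starting clustering/scorenet supervision at epoch 5 and npcsnet training at epoch 10:

cd gapartnet

CUDA_VISIBLE_DEVICES=3,6 \
python train.py fit -c gapartnet.yaml \
--model.init_args.training_schedule "[5,10]"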

Citation

If you find our work useful in your research, please consider citing:

@article{geng2022gapartnet,
  title={GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts},
  author={Geng, Haoran and Xu, Helin and Zhao, Chengyang and Xu, Chao and Yi, Li and Huang, Siyuan and Wang, He},
  journal={arXiv preprint arXiv:2211.05272},
  year={2022}
}

Contact

If you have any questions, please open a GitHub issue or contact us:

Haoran Geng: [email protected]

Helin Xu: [email protected]

Chengyang Zhao: [email protected]

He Wang: [email protected]
