PyTorch implementation of GPNet. Project Page.
- Ubuntu 16.04
- PyTorch 0.4.1
- CUDA 8.0 or CUDA 9.2
Our depth images are saved as .exr files. Please install the OpenEXR library, then run `pip install OpenEXR`.
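For reference, here is a minimal sketch of loading one of these depth maps with the OpenEXR Python binding. The channel name `"R"` is an assumption; inspect the file header to see the actual channel layout of your files.

```python
import Imath
import OpenEXR
import numpy as np

def read_depth_exr(path, channel="R"):
    # NOTE: channel="R" is an assumption; print exr.header()["channels"]
    # to see which channels the .exr files actually contain.
    exr = OpenEXR.InputFile(path)
    dw = exr.header()["dataWindow"]
    width = dw.max.x - dw.min.x + 1
    height = dw.max.y - dw.min.y + 1
    raw = exr.channel(channel, Imath.PixelType(Imath.PixelType.FLOAT))
    return np.frombuffer(raw, dtype=np.float32).reshape(height, width)
```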
```
cd lib/pointnet2
mkdir build && cd build
cmake .. && make
```
Our dataset is available at Google Drive. Backup (extraction code: 2qln).
🔔$\color{red}{Warning!!!}$
🔔 The contact points in our released dataset are not correct. Please run the following script to get the correct contact points and grasp centers:
```
python get_contact_cos.py
```
Alternatively, download the corrected grasp centers and contacts from here and place them in the your_data_path/annotations directory.
The simulation environment is built on PyBullet. You can use pip to install the required Python packages:

```
pip install pybullet
pip install attrdict
pip install joblib
```

(`collections` and `gc` are part of the Python standard library and do not need to be installed via pip.)
Please see the details of our simulation configuration in the simulator directory.
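As a quick sanity check that the PyBullet installation works, you can run a generic smoke test like the one below (this is not part of our simulator, just a minimal standalone script):

```python
import pybullet as p
import pybullet_data

# Headless smoke test: connect, load a ground plane, and step the simulation.
client = p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
plane_id = p.loadURDF("plane.urdf")
for _ in range(100):
    p.stepSimulation()
p.disconnect()
```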
We use the Adam optimizer for stable training.
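For illustration, here is a minimal PyTorch sketch of such an Adam setup. The model and hyperparameters below are placeholders; the actual values are configured in train.py.

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(3, 1)  # stand-in network; the real model is built in train.py

# lr and betas here are illustrative defaults, not the values used in train.py
optimizer = optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

x = torch.randn(8, 3)
loss = model(x).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Training itself is launched with: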
```
CUDA_VISIBLE_DEVICES=0,1 python train.py --tanh --grid --dataset_root path_to_dataset
```
The model trained with the correct grasp centers and contacts can be found here. The simulation results are much better: the success rate@10% reaches 96.7% (the corresponding result in our paper is 90%).
```
CUDA_VISIBLE_DEVICES=0,1 python test.py --tanh --grid --dataset_root path_to_dataset --resume pretrained_model/ --epoch 380
```

This generates the predicted grasps, saved as .npz files in pretrained_model/test/epoch380/view0. The file pretrained_model/test/epoch380/nms_poses_view0.txt contains the predicted grasps after NMS. If you want to test multiple models, you can specify the epoch numbers with --epoch epoch_num1,epoch_num2,...,epoch_numk.
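If you want to inspect these .npz files directly, a minimal sketch is below; the file name is hypothetical, and we simply list whatever arrays each file stores rather than assume their names.

```python
import numpy as np

# "example.npz" is a placeholder; point this at any file under
# pretrained_model/test/epoch380/view0/.
data = np.load("pretrained_model/test/epoch380/view0/example.npz")
for key in data.files:
    print(key, data[key].shape, data[key].dtype)
```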
You can use the following script to obtain the success rate and coverage rate:
```
CUDA_VISIBLE_DEVICES=0 python topk_percent_coverage_precision.py -pd pretrained_model/test/epoch380/view0 -gd path_to_gt_annotations
```
To test the predicted grasps in the simulation environment:
```
cd simulator
python -m simulateTest.simulatorTestDemo -t pretrained_model/test/epoch380/nms_poses_view0.txt
```
```
@article{wu2020grasp,
  title={Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps},
  author={Wu, Chaozheng and Chen, Jian and Cao, Qiaoyu and Zhang, Jianchi and Tai, Yunxin and Sun, Lin and Jia, Kui},
  journal={arXiv preprint arXiv:2009.12606},
  year={2020}
}
```
The code of pointnet2 is borrowed from Pointnet2_PyTorch.