DOI: 10.1109/IROS40897.2019.8967869

ROI-based Robotic Grasp Detection for Object Overlapping Scenes

Published: 01 November 2019

Abstract

Grasp detection that accounts for the affiliations between grasps and their owner objects in object overlapping scenes is a necessary and challenging task for the practical use of robotic grasping. In this paper, a robotic grasp detection algorithm named ROI-GD is proposed to provide a feasible solution to this problem based on Regions of Interest (ROIs), which are region proposals for objects. ROI-GD detects grasps from ROI features rather than from features of the whole scene, and it works in two stages: the first stage provides ROIs in the input image, and the second stage is a grasp detector based on ROI features. We also contribute a multi-object grasp dataset, much larger than the Cornell Grasp Dataset, by labeling the Visual Manipulation Relationship Dataset. Experimental results demonstrate that ROI-GD performs much better in object overlapping scenes while remaining comparable with state-of-the-art grasp detection algorithms on the Cornell Grasp Dataset and the Jacquard Dataset. Robotic experiments demonstrate that ROI-GD helps robots grasp the target in single-object and multi-object scenes with overall success rates of 92.5% and 83.8%, respectively.
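
As a rough illustration of the two-stage pipeline described in the abstract, the sketch below pools backbone features inside each first-stage ROI and regresses an oriented grasp rectangle from the pooled features. It is a minimal PyTorch/torchvision sketch under assumptions of our own (a stride-16 backbone, a 7x7 pooling size, a single (x, y, w, h, theta) grasp per ROI, and all layer widths); it is not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.ops import roi_align

class ROIGraspHead(nn.Module):
    """Second stage (illustrative): one oriented grasp rectangle per ROI."""
    def __init__(self, in_channels=256, roi_size=7):
        super().__init__()
        self.roi_size = roi_size
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * roi_size * roi_size, 1024),
            nn.ReLU(inplace=True),
        )
        self.grasp_reg = nn.Linear(1024, 5)    # (x, y, w, h, theta), assumed encoding
        self.grasp_score = nn.Linear(1024, 1)  # graspability confidence

    def forward(self, feature_map, rois):
        # rois: (N, 5) rows of (batch_index, x1, y1, x2, y2) in image
        # coordinates, as produced by a first-stage proposal network.
        pooled = roi_align(feature_map, rois,
                           output_size=(self.roi_size, self.roi_size),
                           spatial_scale=1.0 / 16)  # assumed stride-16 backbone
        h = self.fc(pooled)
        return self.grasp_reg(h), self.grasp_score(h)

# Dummy usage: one image, two object proposals.
features = torch.randn(1, 256, 38, 50)            # backbone feature map
rois = torch.tensor([[0., 100., 80., 300., 240.],
                     [0., 320., 60., 560., 300.]])
grasps, scores = ROIGraspHead()(features, rois)
print(grasps.shape, scores.shape)                 # torch.Size([2, 5]) torch.Size([2, 1])

Because each grasp is regressed from the features of a specific ROI, the grasp-to-object affiliation emphasized in the abstract is immediate: grasp i belongs to the object proposed by ROI i.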


Cited By

  • (2024) EDCoA-net: A generative Grasp Detection Network Based on Coordinate Attention. Proceedings of the International Conference on Computer Vision and Deep Learning, pages 1-6. DOI: 10.1145/3653804.3655992. Online publication date: 19-Jan-2024.
  • (2024) Customizable 6 degrees of freedom grasping dataset and an interactive training method for graph convolutional network. Engineering Applications of Artificial Intelligence, 138(PA). DOI: 10.1016/j.engappai.2024.109320. Online publication date: 1-Dec-2024.
  • (2023) A semantic robotic grasping framework based on multi-task learning in stacking scenes. Engineering Applications of Artificial Intelligence, 121(C). DOI: 10.1016/j.engappai.2023.106059. Online publication date: 1-May-2023.

Published In

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
6597 pages

Publisher

IEEE Press

Qualifiers

  • Research-article
