DOI: 10.1145/3406971.3406988
research-article

Real-Time Grasp Detection Using Improved FMM and Cascaded Neural Networks

Published: 29 July 2020

Abstract

Successful robotic-arm grasping requires a precise grasp pose. In this paper, we use a cascaded deep network to predict the optimal grasp pose of an object. The model works in two stages: i) generating a set of candidate regions that contain objects; ii) detecting the optimal grasp position within each candidate region and combining it with the depth image to obtain the three-dimensional coordinates of the grasp position. Because Kinect depth images contain holes and edge noise, an improved FMM (Fast Marching Method) algorithm is used to fill the holes, and a joint bilateral filtering algorithm is then employed to suppress edge noise in the depth image. Experimental results on a public dataset and in real scenes demonstrate the effectiveness of the proposed method.
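
The abstract describes two standard building blocks: depth-hole repair (FMM inpainting followed by joint bilateral filtering) and back-projection of a detected 2-D grasp point into 3-D camera coordinates using the repaired depth. The sketch below is not the authors' code; it substitutes OpenCV's stock Telea FMM inpainting (cv2.INPAINT_TELEA) for the paper's improved FMM, uses the joint bilateral filter from opencv-contrib (cv2.ximgproc), and assumes placeholder Kinect-like intrinsics (fx, fy, cx, cy). The function names repair_depth and grasp_point_to_3d are illustrative only.

import cv2                     # pip install opencv-contrib-python (cv2.ximgproc is in contrib)
import numpy as np

def repair_depth(depth_mm: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Fill holes (zero pixels) in a Kinect-style depth map, then smooth edge noise."""
    # Work on an 8-bit copy so both cv2.inpaint and the joint bilateral filter
    # accept it; a production pipeline would preserve the 16-bit depth precision.
    d8 = cv2.normalize(depth_mm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    hole_mask = (depth_mm == 0).astype(np.uint8)           # 1 where the sensor returned no depth
    # Baseline FMM inpainting (Telea); the paper replaces this with an improved variant.
    filled = cv2.inpaint(d8, hole_mask, 3, cv2.INPAINT_TELEA)
    # Joint bilateral filter guided by the grayscale RGB image to clean up depth edges.
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.ximgproc.jointBilateralFilter(gray, filled, 9, 25, 7)
    # Map the 8-bit result back to millimetres (approximate inverse of the normalization).
    return smoothed.astype(np.float32) / 255.0 * float(depth_mm.max())

def grasp_point_to_3d(u, v, depth_mm, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project an image-plane grasp point (u, v) to camera coordinates in metres."""
    z = float(depth_mm[v, u]) / 1000.0                     # depth at the grasp pixel
    x = (u - cx) * z / fx                                  # pinhole camera model
    y = (v - cy) * z / fy
    return np.array([x, y, z])

if __name__ == "__main__":
    # Toy usage on synthetic data; a real pipeline would feed Kinect frames and the
    # grasp position predicted by the cascaded network.
    rgb = np.zeros((480, 640, 3), np.uint8)
    depth = np.full((480, 640), 800, np.uint16)            # flat scene at 0.8 m
    depth[100:120, 200:240] = 0                            # simulated sensor holes
    repaired = repair_depth(depth, rgb)
    print(grasp_point_to_3d(320, 240, repaired))           # ~[0, 0, 0.8]

The pinhole back-projection at the end is the standard way to turn a predicted grasp pixel plus its repaired depth value into the three-dimensional grasp coordinates mentioned in the abstract.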


    Published In

    ICGSP '20: Proceedings of the 4th International Conference on Graphics and Signal Processing
    June 2020
    127 pages
    ISBN:9781450377812
    DOI:10.1145/3406971

    In-Cooperation

    • University of Macedonia
    • Nagoya Institute of Technology (NITech)
    • Zhejiang University

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. RGB-D
    2. cascaded neural networks
    3. fast marching method
    4. object grasp detection

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICGSP 2020
