DOI: 10.1145/3653804.3655992

EDCoA-net: A generative Grasp Detection Network Based on Coordinate Attention

Published: 01 June 2024

Abstract

Grasp detection learning techniques are crucial for robotic manipulation: they transfer knowledge learned by robots to new real-world objects, enabling robots to grasp unknown objects. However, most previous works have not adequately exploited spatial information features, leading to subpar grasping performance. Designing a grasp detection network that effectively utilizes spatial features, efficiently encodes inter-channel relationships, and captures long-range dependencies to improve robot grasping performance remains a challenging problem. To address this, we introduce EDCoA-net, a novel grasp detection network built on an encoder-decoder architecture. Within this network, we propose a new module, the CoRA module, which integrates the idea of residual connections with Coordinate Attention to enhance the expressiveness of learned features while simultaneously encoding channel relationships and long-range dependencies. We evaluate the network on the publicly available Jacquard grasping dataset, achieving a high accuracy of 95.4%, demonstrating the performance of EDCoA-net. Additionally, we verify the efficacy of the CoRA module through a series of ablation experiments.
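The abstract describes the CoRA module as combining a residual connection with Coordinate Attention (directional pooling that preserves positional information along one spatial axis while encoding channel relationships along the other). The page includes no code; the NumPy sketch below illustrates that general combination only. The function name, the reduced channel count, and the weight shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coord_attention_residual(x, w_reduce, w_h, w_w):
    """Coordinate Attention with a residual shortcut (single image, CHW layout).

    x        : (C, H, W) input feature map
    w_reduce : (Cr, C) shared 1x1-conv weight that reduces channels to Cr
    w_h, w_w : (C, Cr) 1x1-conv weights producing the two attention maps
    """
    C, H, W = x.shape
    # Directional pooling: average over width and over height separately,
    # so positional information along the other axis is preserved.
    pool_h = x.mean(axis=2)          # (C, H)
    pool_w = x.mean(axis=1)          # (C, W)
    # Concatenate along the spatial axis and apply the shared channel
    # reduction (a 1x1 convolution is a matrix multiply over channels) + ReLU.
    y = np.maximum(w_reduce @ np.concatenate([pool_h, pool_w], axis=1), 0.0)
    # Split back into the two directions and gate each with a sigmoid.
    a_h = sigmoid(w_h @ y[:, :H])    # (C, H) attention along height
    a_w = sigmoid(w_w @ y[:, H:])    # (C, W) attention along width
    # Re-weight the input along both axes, then add the identity shortcut.
    out = x * a_h[:, :, None] * a_w[:, None, :]
    return out + x
```

One property of this arrangement: with zero-initialized weights the sigmoid gates output 0.5 everywhere, so the block reduces to 1.25·x, and the identity shortcut guarantees that features still flow through the network early in training.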



Published In

CVDL '24: Proceedings of the International Conference on Computer Vision and Deep Learning
January 2024
506 pages
ISBN:9798400718199
DOI:10.1145/3653804

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited
