
Robot grasping based on object shape approximation and LightGBM

Published in: Multimedia Tools and Applications

Abstract

Object grasp planning is a challenging task. Deep-learning-based methods have recently made great progress in this area, but they depend heavily on datasets and may therefore struggle with novel objects that the datasets do not cover. In this paper, a novel method is proposed to generate candidate grasping rectangles for objects based on shape approximation, without requiring datasets or shape priors. Specifically, by combining K-means with a minimum oriented bounding box algorithm for point sets, an adaptive K-means algorithm is applied to decompose an object into multiple rectangles. The algorithm selects the number of K-means clusters on its own and thus automatically determines the number of rectangles used to approximate the object's shape. From the parameters of each rectangle, a candidate grasping rectangle is generated. In addition, a LightGBM classifier is trained on the Cornell grasping dataset to classify and evaluate the candidate grasping rectangles. Experimental results show that the classification accuracy reaches 94.5% and the detection time is only 0.0003 s. Among the candidates, the rectangle with the highest LightGBM score is selected for real robot grasping. Finally, a multi-object grasping experiment on a real robot platform shows that the algorithm enables the robot to grasp new objects with an average success rate of 91.81%.
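To make the decomposition step concrete, the following is a minimal sketch of the idea described in the abstract: cluster an object's 2-D points and fit an oriented box to each cluster, whose center, angle, length, and opening width parameterize a candidate grasp rectangle. It is not the paper's implementation: the paper's adaptive K-means chooses the number of clusters automatically and uses a minimum oriented bounding box, whereas this sketch uses a fixed k, a deterministic farthest-point initialization, and a PCA-based box as stand-ins; the helper names `kmeans` and `oriented_box` are my own.

```python
import numpy as np

def kmeans(points, k, iters=50):
    # Plain Lloyd's k-means with deterministic farthest-point seeding
    # (a stand-in for the paper's adaptive variant, which also picks k).
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.min(((points[:, None] - np.array(centers)) ** 2).sum(-1), axis=1)
        centers.append(points[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0) for j in range(k)])
    return labels, centers

def oriented_box(pts):
    # PCA-based oriented box: the principal axis gives the grasp angle,
    # the extents give the rectangle length and the gripper opening width.
    c = pts.mean(0)
    _, vecs = np.linalg.eigh(np.cov((pts - c).T))  # eigenvectors, ascending
    proj = (pts - c) @ vecs                        # coordinates in box frame
    width, length = proj.max(0) - proj.min(0)      # minor, major extents
    angle = np.arctan2(vecs[1, -1], vecs[0, -1])   # major-axis direction
    return c, angle, length, width

# Toy "object": two elongated blobs, so we expect two grasp rectangles.
rng = np.random.default_rng(1)
part_a = rng.normal([0, 0], [2.0, 0.3], (200, 2))  # horizontal bar
part_b = rng.normal([6, 0], [0.3, 2.0], (200, 2))  # vertical bar
points = np.vstack([part_a, part_b])

labels, _ = kmeans(points, k=2)
rects = [oriented_box(points[labels == j]) for j in range(2)]
```

In the full pipeline each such rectangle would then be scored by the trained LightGBM classifier and the highest-scoring candidate executed on the robot.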


[Figures 1–8 and Algorithm 1 appear in the full text.]


Availability of Data and Materials

The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.

Code Availability

The code is available from the corresponding author on reasonable request.


Funding

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant U20A20200 and Major Research Grant 92148204, the Guangdong Basic and Applied Basic Research Foundation under Grant 2020B1515120054, and the Industrial Key Technologies R&D Program of Foshan under Grants 2020001006308 and 2020001006496.

Author information


Contributions

Shifeng Lin designed this study. All authors, including Shifeng Lin, Chao Zeng and Chenguang Yang, contributed to the writing of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Chenguang Yang.

Ethics declarations

Ethics approval and consent to participate

This research does not involve animals or human participants, so no ethical approval was required.

Consent for Publication

The authors declare no conflict of interest regarding the publication of this paper.

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Lin, S., Zeng, C. & Yang, C. Robot grasping based on object shape approximation and LightGBM. Multimed Tools Appl 83, 9103–9119 (2024). https://doi.org/10.1007/s11042-023-15547-y
