
Learning high-DOF reaching-and-grasping via dynamic representation of gripper-object interaction

Published: 22 July 2022

Abstract

We approach the problem of high-DOF reaching-and-grasping by learning joint planning of grasp and motion with deep reinforcement learning. To resolve the sample-efficiency issue in learning the high-dimensional and complex control of dexterous grasping, we propose an effective representation of the grasping state that characterizes the spatial interaction between the gripper and the target object. To represent gripper-object interaction, we adopt the Interaction Bisector Surface (IBS), the Voronoi diagram between two nearby 3D geometric objects, which has been successfully applied in characterizing spatial relations between 3D objects. We found that IBS is surprisingly effective as a state representation, since it informs the fine-grained control of each finger with its spatial relation to the target object. This novel grasp representation, together with several technical contributions, including a fast IBS approximation, a novel vector-based reward, and an effective training strategy, facilitates learning a strong control model for high-DOF grasping with good sample efficiency, dynamic adaptability, and cross-category generality. Experiments show that our method generates high-quality dexterous grasps for complex shapes with smooth grasping motions. Code and data for this paper are available at https://github.com/qijinshe/IBS-Grasping.
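The paper's actual fast IBS approximation is described in the full text; as a rough, hypothetical illustration of the underlying idea (the IBS is the locus of points equidistant from the gripper and the object), one could sample the space around two point clouds and keep near-equidistant samples. The function name, tolerance, and brute-force distance queries below are invented for illustration and are not the authors' implementation:

```python
import numpy as np

def approximate_ibs(gripper_pts, object_pts, n_samples=20000, tol=0.01, seed=0):
    """Monte-Carlo sketch of an Interaction Bisector Surface (IBS):
    sample the bounding box of both point sets and keep points whose
    distances to the two sets are (nearly) equal."""
    rng = np.random.default_rng(seed)
    both = np.vstack([gripper_pts, object_pts])
    lo, hi = both.min(axis=0), both.max(axis=0)
    samples = rng.uniform(lo, hi, size=(n_samples, 3))
    # Brute-force nearest-point distances; a KD-tree would be used in practice.
    d_g = np.linalg.norm(samples[:, None, :] - gripper_pts[None], axis=-1).min(axis=1)
    d_o = np.linalg.norm(samples[:, None, :] - object_pts[None], axis=-1).min(axis=1)
    return samples[np.abs(d_g - d_o) < tol]
```

For two single points at (-1, 0, 0) and (1, 0, 0), the kept samples concentrate near the bisecting plane x = 0. A practical approximation would refine samples toward the bisector rather than merely filtering, and would subsample the resulting surface points into a fixed-size input for the policy network.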

Supplemental Material

• MP4 File: presentation
• SRT File: presentation
• ZIP File: supplemental material




    Published In

ACM Transactions on Graphics, Volume 41, Issue 4 (July 2022), 1978 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3528223
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 July 2022
    Published in TOG Volume 41, Issue 4


    Author Tags

    1. dynamic representation and planning
    2. imperfect demonstration
    3. replay buffer
    4. vector-based reward

    Qualifiers

    • Research-article

    Funding Sources

    • National Key R&D Program of China
    • Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
    • GD Talent Plan
    • NSFC
    • DEGP Key Project
    • GD Natural Science Foundation
    • Shenzhen Science and Technology Program

Article Metrics

• Downloads (last 12 months): 101
• Downloads (last 6 weeks): 14

Reflects downloads up to 10 Oct 2024.

    Cited By

• (2024) Optimizing multimodal feature selection using binary reinforced cuckoo search algorithm for improved classification performance. PeerJ Computer Science 10, e1816. DOI: 10.7717/peerj-cs.1816. Online publication date: 29-Jan-2024.
• (2024) Biosensor-Driven IoT Wearables for Accurate Body Motion Tracking and Localization. Sensors 24, 10, 3032. DOI: 10.3390/s24103032. Online publication date: 10-May-2024.
• (2024) A Wearable Inertial Sensor Approach for Locomotion and Localization Recognition on Physical Activity. Sensors 24, 3, 735. DOI: 10.3390/s24030735. Online publication date: 23-Jan-2024.
• (2024) Optimization of Smart Textiles Robotic Arm Path Planning: A Model-Free Deep Reinforcement Learning Approach with Inverse Kinematics. Processes 12, 1, 156. DOI: 10.3390/pr12010156. Online publication date: 9-Jan-2024.
• (2024) Robust human locomotion and localization activity recognition over multisensory. Frontiers in Physiology 15. DOI: 10.3389/fphys.2024.1344887. Online publication date: 21-Feb-2024.
• (2024) RETRACTED: Monitoring and analysis of physical activity and health conditions based on smart wearable devices. Journal of Intelligent & Fuzzy Systems 46, 4, 8497-8512. DOI: 10.3233/JIFS-237483. Online publication date: 18-Apr-2024.
• (2024) Research progress in human-like indoor scene interaction. Journal of Image and Graphics 29, 6, 1575-1606. DOI: 10.11834/jig.240004. Online publication date: 2024.
• (2024) Learning Prehensile Dexterity by Imitating and Emulating State-Only Observations. IEEE Robotics and Automation Letters 9, 10, 8266-8273. DOI: 10.1109/LRA.2024.3443595. Online publication date: Oct-2024.
• (2024) Grasp Multiple Objects With One Hand. IEEE Robotics and Automation Letters 9, 5, 4027-4034. DOI: 10.1109/LRA.2024.3374190. Online publication date: May-2024.
• (2024) CNN-Based Object Detection via Segmentation Capabilities in Outdoor Natural Scenes. IEEE Access 12, 84984-85000. DOI: 10.1109/ACCESS.2024.3413848. Online publication date: 2024.
