
A LiDAR-depth camera information fusion method for human robot collaboration environment

Published: 07 January 2025

Abstract

With the evolution of human–robot collaboration in advanced manufacturing, multisensor integration has become a critical component for ensuring safety during human–robot interactions. Given the disparities in range scale, density, and arrangement pattern among multisensor data, such as that from depth cameras and LiDAR, accurately fusing information from multiple sources has emerged as a pressing need for keeping humans and robots safe. This paper focuses on LiDAR and depth cameras, addressing the differences in data collection range, point density, and distribution pattern that complicate information fusion. We propose a heterogeneous sensor information fusion method for human–robot collaborative environments. To handle the substantial difference in point cloud range scales, a moving sphere space coarse localization algorithm is introduced, which narrows the region of interest based on similar features. To address the significant density differences and low overlap rate between the point clouds, we present an improved FPFH coarse registration algorithm based on overlap ratio and an enhanced ICP fine registration algorithm based on the generation of corresponding points. The proposed method is applied to the fusion of information from a 64-line LiDAR and a depth camera in a human–robot collaboration scene. Experimental results demonstrate an absolute translational accuracy of 4.29 cm and an absolute rotational accuracy of 0.006 rad, meeting the requirements for heterogeneous sensor information fusion in the context of human–robot collaboration.
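
The registration pipeline described above builds on the standard FPFH and ICP algorithms. For orientation only, the sketch below shows a conventional FPFH-plus-ICP coarse-to-fine baseline written against the open-source Open3D library (an assumed choice; the paper does not name an implementation). The file names, voxel size, and RANSAC settings are placeholders, and the paper's improved variants (overlap-ratio-based FPFH and ICP with generated corresponding points) are not reproduced here.

    import open3d as o3d

    # Placeholder inputs: depth-camera cloud (source) and LiDAR cloud (target).
    source = o3d.io.read_point_cloud("depth_camera.pcd")
    target = o3d.io.read_point_cloud("lidar.pcd")

    voxel = 0.05  # assumed voxel size in metres; tune to the scene scale

    # Downsample and estimate normals so FPFH features can be computed.
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pc in (src, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

    def fpfh(pc):
        return o3d.pipelines.registration.compute_fpfh_feature(
            pc, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))

    # Coarse registration: RANSAC over FPFH feature correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, fpfh(src), fpfh(tgt), True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine registration: point-to-plane ICP seeded with the coarse estimate.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    print(fine.transformation)  # estimated depth-camera-to-LiDAR transform

In such a pipeline the coarse stage only needs to be accurate enough to place ICP within its convergence basin; the final extrinsic comes from the fine stage.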

Highlights

A heterogeneous sensor information fusion method for human–robot collaboration environments is proposed.
A moving sphere coarse localization algorithm addresses the scale differences between the point clouds (a cropping sketch follows this list).
An improved FPFH algorithm based on overlap ratio performs coarse registration.
An improved ICP algorithm using generated corresponding points performs fine registration.
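
The moving sphere space coarse localization step narrows the wide-range LiDAR cloud down to the scale of the depth-camera cloud before registration is attempted. The sketch below illustrates the underlying idea as plain spherical region-of-interest cropping, with the radius taken from the camera cloud's extent; the candidate-selection criterion (comparing point counts) and the helper name crop_sphere are assumptions made for illustration, not the paper's feature-based localization algorithm.

    import numpy as np
    import open3d as o3d

    def crop_sphere(cloud, centre, radius):
        # Keep only the points of `cloud` inside a sphere (hypothetical helper).
        pts = np.asarray(cloud.points)
        mask = np.linalg.norm(pts - np.asarray(centre), axis=1) <= radius
        return cloud.select_by_index(np.flatnonzero(mask).tolist())

    lidar = o3d.io.read_point_cloud("lidar.pcd")          # placeholder file name
    camera = o3d.io.read_point_cloud("depth_camera.pcd")  # placeholder file name

    # Sphere radius derived from the depth-camera cloud's own extent (an assumption).
    radius = np.linalg.norm(camera.get_max_bound() - camera.get_min_bound()) / 2

    # Move the candidate sphere centre over a coarse grid of LiDAR points and keep
    # the crop whose size is closest to the camera cloud; the paper instead matches
    # similar features between the two clouds at this stage.
    centres = np.asarray(lidar.voxel_down_sample(0.5).points)
    best = min((crop_sphere(lidar, c, radius) for c in centres),
               key=lambda crop: abs(len(crop.points) - len(camera.points)))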

References

[1]
Yong T., He G., Tianmiao W., Application of mobile industrial robot in aircraft assembly production line, Aeronaut. Manuf. Technol. 64 (5) (2021) 32–41, 67.
[2]
Hao W., Genliang C., Research progress and perspective of robotic equipment applied in aviation assembly, Acta Aeronaut. Astronaut. Sinica 43 (5) (2022) 49–71.
[3]
Ruiqin H., Lijian Z., Shaohua M., Que D., Changyu L., Robotic assembly technology for heavy component of spacecraft based on compliance control, J. Mech. Eng. 54 (11) (2018) 85–93.
[4]
Haninger K., Radke M., Vick A., Krüger J., Towards high-payload admittance control for manual guidance with environmental contact, IEEE Robot. Autom. Lett. 7 (2) (2022) 4275–4282.
[5]
Jidong J., Minglu Z., Research progress and development trend of the safety of human-robot interaction technology, J. Mech. Eng. 56 (3) (2020) 16–30.
[6]
Chaoli W., Prospect of develpment trend of human robot integration safety technology, Process. Autom. Instrum. 41 (3) (2020) 1–5+10.
[7]
S.-E. Wei, V. Ramakrishna, T. Kanade, Y. Sheikh, Convolutional Pose Machines, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4724–4732.
[8]
Ramakrishna V., Munoz D., Hebert M., Andrew Bagnell J., Sheikh Y., Pose machines: Articulated pose estimation via inference machines, in: Fleet D., Pajdla T., Schiele B., Tuytelaars T. (Eds.), Computer Vision – ECCV 2014, in: Lecture Notes in Computer Science, Springer International Publishing, Cham, 2014, pp. 33–47.
[9]
A. Toshev, C. Szegedy, Deeppose: Human Pose Estimation via Deep Neural Networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1653–1660.
[10]
Tompson J.J., Jain A., LeCun Y., Bregler C., Joint training of a convolutional network and a graphical model for human pose estimation, Adv. Neural Inf. Process. Syst. 27 (2014).
[11]
Z. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime Multi-Person 2d Pose Estimation Using Part Affinity Fields, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7291–7299.
[12]
Xianlun W., Tianyu W., Research progress of human motion prediction methods in human robot collaboration, Mach. Tool Hydraul. 50 (12) (2022) 147–152.
[13]
Chen L.-H., Zhang J., Li Y., Pang Y., Xia X., Liu T., HumanMAC: Masked motion completion for human motion prediction, 2023, arXiv arXiv:2302.03665.
[14]
Qiuhui W., Yaoyao Z., Research and progress on robot human machine integration technology, Robot Techn. Appl. (5) (2021) 16–22.
[15]
Qiu Z., External multi-modal imaging sensor calibration for sensor fusion: A review, Inf. Fusion 97 (2023).
[16]
Stiller C., León F.P., Kruse M., Information fusion for automotive applications – An overview, Inf. Fusion 12 (4) (2011) 244–252.
[17]
Ouyang Z., Cui J., Dong X., Li Y., Niu J., SaccadeFork: A lightweight multi-sensor fusion-based target detector, Inf. Fusion 77 (2022) 172–183.
[18]
Zhao Y., Zhang J., Xu S., Ma J., Deep learning-based low overlap point cloud registration for complex scenario: The review, Inf. Fusion 107 (2024).
[19]
Gardner A., Tchou C., Hawkins T., Debevec P., Linear light source reflectometry, ACM Trans. Graph. 22 (3) (2003) 749–758.
[20]
A. Zeng, S. Song, M. Niessner, M. Fisher, J. Xiao, T. Funkhouser, 3DMatch: Learning Local Geometric Descriptors From RGB-D Reconstructions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1802–1811.
[21]
M. Deuge, A. Quadros, C. Hung, B. Douillard, Unsupervised Feature Learning for Classification of Outdoor 3D Scans, in: Australasian Conference on Robotics and Automation, ACRA, 2013.
[22]
Q. Zhang, R. Pless, Extrinsic Calibration of a Camera and Laser Range Finder (Improves Camera Calibration), in: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Vol. 3, 2004, pp. 2301–2306.
[23]
Unnikrishnan R., Hebert M., Fast Extrinsic Calibration of a Laser Rangefinder to a Camera, Carnegie Mellon University, 2005.
[24]
Deqi Y., Guangyun L., Li W., Shuaixin L., Wenpeng Z., Calibration of LiDAR and camera based on 3D Feature Point Sets, Bull. Survey. Mapp. (11) (2018) 40.
[25]
Qing W., Rongxuan T., Youyang F., Chao Y., Yang S., Joint calibration method of camera and lidar based on 3D calibration plate, J. Chin. Inert. Technol. 31 (1) (2023) 100–106.
[26]
P. Moghadam, M. Bosse, R. Zlot, Line-Based Extrinsic Calibration of Range and Image Sensors, in: IEEE International Conference on Robotics and Automationm Vol. 2, ICRA, 2013.
[27]
R. Gomez, J. Briales, E. Fernández-Moral, J. González-Jiménez, Extrinsic Calibration of a 2d Laser-Rangefinder and a Camera Based on Scene Corners, in: Proceedings - IEEE International Conference on Robotics and Automation, Vol. 2015, 2015, pp. 3611–3616.
[28]
Bai Z., Jiang G., Xu A., LiDAR-camera calibration using line correspondences, Sensors 20 (21) (2020) 6319.
[29]
Abedinia A., Hahnb M., Samadzadegana F., An investigation into the registration of LIDAR intensity data and aerial images using the SIFT approach, Ratio (first, second) 2 (6) (2008).
[30]
Pandey G., McBride J.R., Savarese S., Eustice R.M., Automatic extrinsic calibration of vision and Lidar by maximizing mutual information, J. Field Robotics 32 (5) (2015) 696–722.
[31]
G. Pandey, J. McBride, S. Savarese, R. Eustice, Automatic Targetless Extrinsic Calibration of a 3d Lidar and Camera by Maximizing Mutual Information, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 26, 2012, pp. 2053–2059.
[32]
Z. Taylor, J. Nieto, A Mutual Information Approach to Automatic Calibration of Camera and Lidar in Natural Environments, in: Australian Conference on Robotics and Automation, 2012, pp. 3–5.
[33]
X. Lv, B. Wang, Z. Dou, D. Ye, S. Wang, LCCNet: LiDAR and Camera Self-Calibration Using Cost Volume Network, in: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, (ISSN: 2160-7516) 2021, pp. 2888–2895.
[34]
D. Cattaneo, M. Vaghi, A.L. Ballardini, S. Fontana, D.G. Sorrenti, W. Burgard, CMRNet: Camera to LiDAR-Map Registration, in: 2019 IEEE Intelligent Transportation Systems Conference, ITSC, 2019, pp. 1283–1289.
[35]
Shi J., Zhu Z., Zhang J., Liu R., Wang Z., Chen S., Liu H., CalibRCNN: Calibrating camera and LiDAR by recurrent convolutional neural network and geometric constraints, 2020, pp. 10197–10202.
[36]
Zhao G., Hu J., You S., Kuo C., CalibDNN: Multimodal sensor calibration for perception using deep neural networks, 2021, p. 46.
[37]
Lv X., Wang S., Ye D., CFNet: LiDAR-camera registration using calibration flow network, Sensors 21 (23) (2021) 8112.
[38]
Jing X., Ding X., Xiong R., Deng H., Wang Y., DXQ-Net: Differentiable LiDAR-camera extrinsic calibration using quality-aware flow, in: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, IEEE, Kyoto, Japan, 2022, pp. 6235–6241.
[39]
Wu Y., Zhu M., Liang J., PSNet: LiDAR and camera registration using parallel subnetworks, IEEE Access 10 (2022) 70553–70561.
[40]
Sun Y., Li J., Wang Y., Xu X., Yang X., Sun Z., ATOP: An attention-to-optimization approach for automatic LiDAR-camera calibration via cross-modal object matching, IEEE Trans. Intell. Veh. 8 (1) (2023) 696–708.
[41]
Wu Y., Liu J., Gong M., Miao Q., Ma W., Xu C., Joint semantic segmentation using representations of LiDAR point clouds and camera images, Inf. Fusion (2024).
[42]
Wilkowski A., Mańkowski D., RGB-D and Lidar calibration supported by GPU, in: Chmielewski L.J., Kozera R., Orłowski A. (Eds.), Computer Vision and Graphics, Springer International Publishing, Cham, 2020, pp. 214–226.
[43]
C. Guindel, J. Beltrán, D. Martín, F. García, Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups, in: 2017 IEEE 20th International Conference on Intelligent Transportation Systems, ITSC, (ISSN: 2153-0017) 2017, pp. 1–6.
[44]
Park Y., Yun S., Won C.S., Cho K., Um K., Sim S., Calibration between color camera and 3D LIDAR instruments with a polygonal planar board, Sensors 14 (3) (2014) 5333–5353.
[45]
Lei H., Jiang G., Quan L., Fast descriptors and correspondence propagation for robust global point cloud registration, IEEE Trans. Image Process. (2017) 1.
[46]
Li P., Wang J., Zhao Y., Wang Y., Yao Y., Improved algorithm for point cloud registration based on fast point feature histograms, J. Appl. Remote Sens. 10 (4) (2016).
[47]
Xu Y., Boerner R., Yao W., Hoegner L., Stilla U., Pairwise coarse registration of point clouds in urban scenes using voxel-based 4-planes congruent sets, ISPRS J. Photogramm. Remote Sens. 151 (2019) 106–123.
[48]
E. Rosten, T. Drummond, Machine Learning for High-Speed Corner Detection, in: Comput Conf Comput Vis, Vol. 3951, ISBN: 978-3-540-33832-1, 2006.
[49]
E. Rublee, V. Rabaud, K. Konolige, G. Bradski, ORB: An Efficient Alternative to SIFT or SURF, in: 2011 International Conference on Computer Vision, (ISSN: 2380-7504) 2011, pp. 2564–2571.
[50]
Bay H., Ess A., Tuytelaars T., Gool L.V., Speeded-Up Robust Features (SURF), Comput. Vis. Image Underst. 110 (3) (2008) 346–359.
[51]
Yang B., Zang Y., Automated registration of dense terrestrial laser-scanning point clouds using curves, ISPRS J. Photogramm. Remote Sens. 95 (2014) 109–121.
[52]
C. Brenner, C. Dold, Automatic Relative Orientation of Terrestrial Laser Scans Using Planar Structures and Angle Constraints, in: ISPRS Workshop on Laser Scanning 2007 and SilviLaser 2007, 2007.
[53]
R.B. Rusu, Z.C. Marton, N. Blodow, M. Beetz, Persistent Point Feature Histograms for 3D Point Clouds, in: Proc 10th Int Conf Intel Autonomous Syst, IAS-10, Baden-Baden, Germany, 2008, pp. 119–128.
[54]
Rusu R.B., Blodow N., Beetz M., Fast point feature histograms (FPFH) for 3D registration, in: 2009 IEEE International Conference on Robotics and Automation, IEEE, 2009, pp. 3212–3217.
[55]
Guo Y., Sohel F., Bennamoun M., Lu M., Wan J., Rotational projection statistics for 3D local surface description and object recognition, Int. J. Comput. Vis. 105 (1) (2013) 63–86.
[56]
Chen S., Nan L., Xia R., Zhao J., Wonka P., PLADE: A plane-based descriptor for point cloud registration with small overlap, IEEE Trans. Geosci. Remote Sens. 58 (4) (2020) 2530–2540.
[57]
Besl P., McKay N.D., A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.
[58]
Gressin A., Mallet C., Demantké J., David N., Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge, ISPRS J. Photogramm. Remote Sens. 79 (2013) 240–251.
[59]
Kim P., Chen J., Cho Y.K., Automated point cloud registration using visual and planar features for construction environments, J. Comput. Civ. Eng. 32 (2) (2018).
[60]
Kwon S., Lee M., Lee M., Lee S., Lee J., Development of optimized point cloud merging algorithms for accurate processing to create earthwork site models, Autom. Constr. 35 (2013) 618–624.
[61]
Kim C., Son H., Kim C., Fully automated registration of 3D data to a 3D CAD model for project progress monitoring, Autom. Constr. 35 (2013) 587–594.
[62]
Chen Y., Medioni G., Object modelling by registration of multiple range images, Image Vis. Comput. 10 (3) (1992) 145–155.
[63]
A. Segal, D. Hähnel, S. Thrun, Generalized-ICP, in: Proc. of Robotics: Science and Systems, 2009.
[64]
C.R. Qi, H. Su, K. Mo, L.J. Guibas, PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 652–660.
[65]
Qi C.R., Yi L., Su H., Guibas L.J., PointNet++: Deep hierarchical feature learning on point sets in a metric space, in: Advances in Neural Information Processing Systems, Vol. 30, Curran Associates, Inc., 2017.
[66]
Y. Aoki, H. Goforth, R.A. Srivatsan, S. Lucey, PointNetLK: Robust & Efficient Point Cloud Registration Using PointNet, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7163–7172.
[67]
Welzl E., Smallest enclosing disks (balls and ellipsoids), in: Maurer H. (Ed.), New Results and New Trends in Computer Science, in: Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 1991, pp. 359–370.
[68]
Y. Wang, J.M. Solomon, Deep Closest Point: Learning Representations for Point Cloud Registration, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 3523–3532.

Index Terms

  1. A LiDAR-depth camera information fusion method for human robot collaboration environment
          Index terms have been assigned to the content through auto-classification.

          Recommendations

          Comments

          Information & Contributors

          Information

Published In

Information Fusion, Volume 114, Issue C, February 2025, 1192 pages

Publisher

Elsevier Science Publishers B.V., Netherlands

Author Tags

1. Human–robot collaboration
2. Heterogeneous sensor
3. Information fusion
4. Point cloud registration
5. Depth camera
6. LiDAR
