QN-SAM-LIVO: A Multi-Sensor Tightly Coupled SLAM Framework Based on Coarse-to-Fine Loop Closure Detection

Published: 09 January 2024
Abstract

    To attain precise and robust transformation estimation in simultaneous localization and mapping (SLAM), the fusion of multiple sensors has proven effective and shows significant potential in robotics applications. QN-SAM-LIVO is a fast, tightly coupled LiDAR-inertial-visual SLAM system composed of three tightly coupled components: a LiDAR-inertial odometry (LIO) module, a visual-inertial odometry (VIO) module, and a loop closure detection module. The LIO module accumulates raw scan increments directly into a point cloud map used for matching. The VIO module performs direct image alignment on the observed map points, and the loop closure detection module corrects accumulated error in real time through factor-graph optimization based on the iSAM2 optimizer. The three components are fused via an error-state iterated Kalman filter (ESIKF) to provide accurate odometry while also improving colored point cloud reconstruction from conventional non-solid-state LiDAR. To reduce the computational cost of loop closure detection, we adopt a coarse-to-fine point cloud matching strategy: Quatro derives a prior (coarse) transformation between keyframe point clouds, and NanoGICP then computes the refined transformation. Experimental evaluations on both public and private datasets show that the proposed method outperforms comparable approaches and adapts well to a variety of challenging scenarios.
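    The coarse-to-fine matching idea can be sketched in miniature. The following is a toy 2D illustration, not the paper's implementation: Quatro and NanoGICP operate on 3D keyframe clouds without known correspondences, whereas the stand-in helpers below (`coarse_prior`, `fine_refine`) are hypothetical, work in 2D, and assume point correspondences are already paired. The sketch shows only the two-stage structure: a cheap global prior, then a precise refinement starting from that prior.

    ```python
    import math

    def apply_pose(points, theta, tx, ty):
        """Rigidly transform 2D points by rotation theta and translation (tx, ty)."""
        c, s = math.cos(theta), math.sin(theta)
        return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

    def coarse_prior(src, dst):
        """Coarse stage (stand-in for Quatro): translation-only centroid alignment."""
        n = len(src)
        cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
        cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
        return 0.0, cx_d - cx_s, cy_d - cy_s  # theta, tx, ty

    def fine_refine(src, dst):
        """Fine stage (stand-in for NanoGICP): closed-form 2D Kabsch alignment,
        assuming known correspondences src[i] <-> dst[i]."""
        n = len(src)
        cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
        cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
        # Accumulate dot and cross products of centered points to recover rotation.
        dot = sum((x - cx_s) * (u - cx_d) + (y - cy_s) * (v - cy_d)
                  for (x, y), (u, v) in zip(src, dst))
        cross = sum((x - cx_s) * (v - cy_d) - (y - cy_s) * (u - cx_d)
                    for (x, y), (u, v) in zip(src, dst))
        theta = math.atan2(cross, dot)
        c, s = math.cos(theta), math.sin(theta)
        # Translation maps the rotated source centroid onto the target centroid.
        return theta, cx_d - (c * cx_s - s * cy_s), cy_d - (s * cx_s + c * cy_s)

    # A keyframe scan, and the same scan seen after the robot loops back with
    # an unknown pose offset (the accumulated drift that loop closure corrects).
    keyframe = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (0.5, 2.0), (1.5, 1.5)]
    true_theta, true_tx, true_ty = 0.3, 1.2, -0.7
    revisit = apply_pose(keyframe, true_theta, true_tx, true_ty)

    theta0, tx0, ty0 = coarse_prior(keyframe, revisit)   # cheap global prior
    warm = apply_pose(keyframe, theta0, tx0, ty0)        # warm start for refinement
    theta1, tx1, ty1 = fine_refine(warm, revisit)        # precise correction
    ```

    Because the example is noise-free with known correspondences, the fine stage recovers the residual rotation exactly; in the real system the coarse prior matters precisely because the fine registration is a local method that needs a good initial guess.
    
    
    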


      Published In

      AAIA '23: Proceedings of the 2023 International Conference on Advances in Artificial Intelligence and Applications
      November 2023
      406 pages
      ISBN:9798400708268
      DOI:10.1145/3603273

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. Localization
      2. Mapping
      3. Multi-sensor
      4. SLAM

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • Key Laboratory of Thermal Management and Energy Utilization of Aircraft, Ministry of Industry and Information Technology
      • State Key Laboratory of Mechanics and Control for Aerospace Structures (Nanjing University of Aeronautics and Astronautics)
      • National Key R&D Program of China

      Conference

      AAIA 2023
