A Novel Real-Time Autonomous Localization Algorithm Based on Weighted Loosely Coupled Visual–Inertial Data of the Velocity Layer
Abstract
1. Introduction
1.1. Motivation
1.2. Related Work
1.3. Our Approach
2. Design of the State Vector
2.1. Definition of Variables in the ICEKF
2.2. Construction of the State Vector
2.3. Coupling Process
2.4. Simplification of the State Vector
2.5. Error of the State Vector
3. Propagation and Update of the ICEKF
3.1. Propagation
3.2. Measurement
3.3. Entire ICEKF Process
4. Nonlinear Observability Analysis
5. Simulation and Experiments
5.1. Simulation
5.2. Dataset Experiment
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Appendix B
Appendix C
References
- Servières, M.; Renaudin, V.; Dupuis, A.; Antigny, N. Visual and Visual-Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking. J. Sens. 2021, 2021, 2054828. [Google Scholar] [CrossRef]
- He, M.; Zhu, C.; Huang, Q.; Ren, B.; Liu, J. A review of monocular visual odometry. Vis. Comput. 2020, 36, 1053–1065. [Google Scholar] [CrossRef]
- Namgung, H.; Kim, J.S. Collision risk inference system for maritime autonomous surface ships using COLREGs rules compliant collision avoidance. IEEE Access 2021, 9, 7823–7835. [Google Scholar] [CrossRef]
- Lin, Y.; Gao, F.; Qin, T.; Gao, W.; Liu, T.; Wu, W.; Yang, Z.; Shen, S. Autonomous aerial navigation using monocular visual-inertial fusion. J. Field Robot. 2018, 35, 23–51. [Google Scholar] [CrossRef]
- Nemec, D.; Šimák, V.; Janota, A.; Hruboš, M.; Bubeníková, E. Precise localization of the mobile wheeled robot using sensor fusion of odometry, visual artificial landmarks and inertial sensors. Robot. Auton. Syst. 2019, 112, 168–177. [Google Scholar] [CrossRef]
- Li, Z.; You, B.; Ding, L.; Gao, H.; Huang, F. Trajectory Tracking Control for WMRs with the Time-Varying Longitudinal Slippage Based on a New Adaptive SMC Method. Int. J. Aerosp. Eng. 2019, 2019, 4951538. [Google Scholar] [CrossRef]
- Namgung, H. Local route planning for collision avoidance of maritime autonomous surface ships in compliance with COLREGs rules. Sustainability 2022, 14, 198. [Google Scholar] [CrossRef]
- Alatise, M.B.; Hancke, G.P. A review on challenges of autonomous mobile robot and sensor fusion methods. IEEE Access 2020, 8, 39830–39846. [Google Scholar] [CrossRef]
- Tonini, A.; Castelli, M.; Bates, J.S.; Lin, N.N.N.; Painho, M. Visual-Inertial Method for Localizing Aerial Vehicles in GNSS-Denied Environments. Appl. Sci. 2024, 14, 9493. [Google Scholar] [CrossRef]
- Hou, Z.; Wang, R. A Loosely-Coupled GNSS-Visual-Inertial Fusion for State Estimation Based on Optimization. In Proceedings of the 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), Greenville, SC, USA, 12–14 November 2021; pp. 163–168. [Google Scholar]
- Talebi, S.P.; Mandic, D.P. On the Dynamics of Multiagent Nonlinear Filtering and Learning. In Proceedings of the 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP), London, UK, 22–25 September 2024; pp. 1–6. [Google Scholar]
- He, X.; Li, B.; Qiu, S.; Liu, K. Visual–Inertial Odometry of Structured and Unstructured Lines Based on Vanishing Points in Indoor Environments. Appl. Sci. 2024, 14, 1990. [Google Scholar] [CrossRef]
- Sun, Z.; Gao, W.; Tao, X.; Pan, S.; Wu, P.; Huang, H. Semi-Tightly Coupled Robust Model for GNSS/UWB/INS Integrated Positioning in Challenging Environments. Remote Sens. 2024, 16, 2108. [Google Scholar] [CrossRef]
- Gopaul, N.S.; Wang, J.; Hu, B. Loosely coupled visual odometry aided inertial navigation system using discrete extended Kalman filter with pairwise time correlated measurements. In Proceedings of the 2017 Forum on Cooperative Positioning and Service (CPGPS), Harbin, China, 19–21 May 2017; pp. 283–288. [Google Scholar]
- Weiss, S.M. Vision Based Navigation for Micro Helicopters. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2012. [Google Scholar]
- Kelly, J.; Sukhatme, G.S. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 2011, 30, 56–79. [Google Scholar] [CrossRef]
- Weiss, S.; Siegwart, R. Real-time metric state estimation for modular vision-inertial systems. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4531–4537. [Google Scholar]
- Achtelik, M.W.; Weiss, S.; Chli, M.; Dellaert, F.; Siegwart, R. Collaborative stereo. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2242–2248. [Google Scholar]
- Brossard, M.; Bonnabel, S.; Barrau, A. Invariant Kalman filtering for visual inertial SLAM. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 2021–2028. [Google Scholar]
- Sun, W.; Li, Y.; Ding, W.; Zhao, J. A Novel Visual Inertial Odometry Based on Interactive Multiple Model and Multi-state Constrained Kalman Filter. IEEE Trans. Instrum. Meas. 2023, 73, 5000110. [Google Scholar] [CrossRef]
- Fornasier, A.; Ng, Y.; Mahony, R.; Weiss, S. Equivariant filter design for inertial navigation systems with input measurement biases. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4333–4339. [Google Scholar]
- van Goor, P.; Mahony, R. An equivariant filter for visual inertial odometry. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 14432–14438. [Google Scholar]
- Trawny, N.; Roumeliotis, S.I. Indirect Kalman Filter for 3D Attitude Estimation; Technical Report; University of Minnesota, Department of Computer Science and Engineering: Minneapolis, MN, USA, 2005; Volume 2. [Google Scholar]
- Maybeck, P. Stochastic Models, Estimation, and Control; Academic Press: Cambridge, MA, USA, 1982. [Google Scholar]
- Beder, C.; Steffen, R. Determining an initial image pair for fixing the scale of a 3d reconstruction from an image sequence. In Proceedings of the Joint Pattern Recognition Symposium, Berlin, Germany, 12–14 September 2006; pp. 657–666. [Google Scholar]
- Eudes, A.; Lhuillier, M. Error propagations for local bundle adjustment. In Proceedings of the 2009 IEEE Conference On Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2411–2418. [Google Scholar]
- Hermann, R.; Krener, A. Nonlinear controllability and observability. IEEE Trans. Autom. Control 1977, 22, 728–740. [Google Scholar] [CrossRef]
- Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
- Grupp, M. Evo: Python Package for the Evaluation of Odometry and SLAM. Available online: https://github.com/MichaelGrupp/evo (accessed on 9 December 2024).
- Li, M.; Mourikis, A.I. High-precision, consistent EKF-based visual-inertial odometry. Int. J. Robot. Res. 2013, 32, 690–711. [Google Scholar] [CrossRef]
- Shuster, M.D. A survey of attitude representations. J. Astronaut. Sci. 1993, 41, 439–517. [Google Scholar]
- Dam, E.B.; Koch, M.; Lillholm, M. Quaternions, Interpolation and Animation; Datalogisk Institut, Københavns Universitet: Copenhagen, Denmark, 1998; Volume 2. [Google Scholar]
- Gelman, H. A note on the time dependence of the effective axis and angle of rotation. J. Res. Natl. Bur. Stand. 1971, 75, 165–171. [Google Scholar] [CrossRef]
Symbol | Description |
---|---|
w | fixed world coordinate frame |
i | coordinate frame attached to the IMU |
c | coordinate frame attached to the camera |
ic | coordinate frame attached to the IMU-aided camera system |
 | a general variable vector, where A is the coordinate frame attached to the vector and B is the reference frame; for example, the linear translation of the camera frame c measured with respect to the world frame w |
 | translation vector of a rigid body along the 3 axes, together with its quasi-quaternion description |
 | unit quaternion according to the Hamilton notation [16] |
 | conjugate quaternion of the unit quaternion |
 | rotation matrix converted from the unit quaternion |
 | skew-symmetric matrix of a 3-vector [23] |
 | white Gaussian noise vector with zero mean and given covariance |
 | gravity vector in the world frame |
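As a minimal sketch of the quaternion conventions in the table above (assuming the Hamilton ordering q = (w, x, y, z); the function names are our own, not the paper's implementation), the following Python snippet builds the rotation matrix of a unit quaternion and the skew-symmetric matrix of a 3-vector:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a Hamilton-convention unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [z,    0.0, -x],
                     [-y,   x,   0.0]])

# The conjugate quaternion negates the vector part and inverts the rotation.
q = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])  # rotation of 0.2 rad about x
q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
assert np.allclose(quat_to_rot(q) @ quat_to_rot(q_conj), np.eye(3))
```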
Translation RMSE | Translation Mean Error | Translation STD | Attitude RMSE | Attitude Mean Error | Attitude STD |
---|---|---|---|---|---|
0.207 m | 0.1447 m | 0.1408 m | 0.1684 rad | 0.1348 rad | 0.1008 rad |
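The three translation statistics above (and in the dataset tables below) follow from the per-sample error norms; a minimal sketch, assuming time-aligned (N, 3) position arrays whose names are illustrative and not taken from the paper:

```python
import numpy as np

def translation_stats(est_xyz, gt_xyz):
    """RMSE, mean error, and STD of the per-sample translation error norms."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-sample error [m]
    rmse = float(np.sqrt(np.mean(err**2)))
    mean = float(err.mean())
    std = float(err.std())  # population STD, so rmse**2 == mean**2 + std**2
    return rmse, mean, std
```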
Method | Translation RMSE | Translation Mean Error | Translation STD |
---|---|---|---|
ICEKF | 0.04153 m | 0.00574 m | 0.04112 m |
Monocular ORB-SLAM V3 | 0.0393 m | 0.0355 m | 0.01678 m |
Monocular ORB-SLAM V3 with IMU | 0.003841 m | 0.002973 m | 0.002433 m |
Monocular MSCKF | 0.1305 m | 0.05366 m | 0.119 m |
Method | Translation RMSE | Translation Mean Error | Translation STD |
---|---|---|---|
ICEKF | 0.08922 m | 0.009765 m | 0.08869 m |
Monocular ORB-SLAM V3 | 0.735 m | 0.533 m | 0.506 m |
Monocular ORB-SLAM V3 with IMU | 0.001366 m | 0.004981 m | 0.01272 m |
Monocular MSCKF | 0.2689 m | 0.09905 m | 0.25 m |
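Because the RMSE, mean error, and STD of one error sequence are linked by an exact identity (with the population standard deviation), the rows of these tables can be sanity-checked:

```latex
\mathrm{RMSE}^2 \;=\; \frac{1}{N}\sum_{k=1}^{N} e_k^2
\;=\; \bar{e}^{\,2} + \sigma_e^2,
\qquad
\bar{e} = \frac{1}{N}\sum_{k=1}^{N} e_k, \quad
\sigma_e^2 = \frac{1}{N}\sum_{k=1}^{N} \left( e_k - \bar{e} \right)^2 .
```

For example, the ICEKF row in the last table satisfies 0.08922² ≈ 0.009765² + 0.08869² (both sides ≈ 7.96 × 10⁻³ m²).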
Citation: Liu, C.; Wang, T.; Li, Z.; Tian, P. A Novel Real-Time Autonomous Localization Algorithm Based on Weighted Loosely Coupled Visual–Inertial Data of the Velocity Layer. Appl. Sci. 2025, 15, 989. https://doi.org/10.3390/app15020989