Search Results (280)

Search Parameters:
Keywords = visual odometry

15 pages, 3120 KiB  
Article
Implementation of Visual Odometry on Jetson Nano
by Jakub Krško, Dušan Nemec, Vojtech Šimák and Mário Michálik
Sensors 2025, 25(4), 1025; https://doi.org/10.3390/s25041025 - 9 Feb 2025
Viewed by 491
Abstract
This paper presents the implementation of ORB-SLAM3 for visual odometry on a low-power ARM-based system, specifically the Jetson Nano, to track a robot’s movement using RGB-D cameras. Key challenges addressed include the selection of compatible software libraries, camera calibration, and system optimization. The ORB-SLAM3 algorithm was adapted for the ARM architecture and tested using both the EuRoC dataset and real-world scenarios involving a mobile robot. The testing demonstrated that ORB-SLAM3 provides accurate localization, with errors in path estimation ranging from 3 to 11 cm when using the EuRoC dataset. Real-world tests on a mobile robot revealed discrepancies primarily due to encoder drift and environmental factors such as lighting and texture. The paper discusses strategies for mitigating these errors, including enhanced calibration and the potential use of encoder data for tracking when camera performance falters. Future improvements focus on refining the calibration process, adding trajectory correction mechanisms, and integrating visual odometry data more effectively into broader systems.
(This article belongs to the Section Sensors and Robotics)
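
The 3–11 cm figures above are translational trajectory errors against the EuRoC ground truth. As a rough illustration of how such a number can be computed (the paper's own evaluation script is not shown here), the following sketch rigidly aligns an estimated trajectory to ground truth and reports the RMSE of the residuals; the array names are hypothetical and assume timestamp-matched Nx3 positions.

```python
import numpy as np

def align_and_ate(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Rigidly align an estimated trajectory to ground truth (Horn/Umeyama,
    no scale) and return the RMSE of the translational residuals (ATE)."""
    # Center both point sets.
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    aligned = (R @ est_xyz.T).T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))

# Hypothetical usage with timestamp-matched Nx3 position arrays:
# ate_m = align_and_ate(orbslam3_positions, euroc_groundtruth_positions)
# print(f"ATE RMSE: {100 * ate_m:.1f} cm")
```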

29 pages, 4682 KiB  
Article
LSAF-LSTM-Based Self-Adaptive Multi-Sensor Fusion for Robust UAV State Estimation in Challenging Environments
by Mahammad Irfan, Sagar Dalai, Petar Trslic, James Riordan and Gerard Dooly
Machines 2025, 13(2), 130; https://doi.org/10.3390/machines13020130 - 9 Feb 2025
Viewed by 493
Abstract
Unmanned aerial vehicle (UAV) state estimation is fundamental across applications like robot navigation, autonomous driving, virtual reality (VR), and augmented reality (AR). This research highlights the critical role of robust state estimation in ensuring safe and efficient autonomous UAV navigation, particularly in challenging environments. We propose a deep learning-based adaptive sensor fusion framework for UAV state estimation, integrating multi-sensor data from stereo cameras, an IMU, two 3D LiDARs, and GPS. The framework dynamically adjusts fusion weights in real time using a long short-term memory (LSTM) model, enhancing robustness under diverse conditions such as illumination changes, structureless environments, degraded GPS signals, or complete signal loss where traditional single-sensor SLAM methods often fail. Validated on an in-house integrated UAV platform and evaluated against high-precision RTK ground truth, the algorithm incorporates deep learning-predicted fusion weights into an optimization-based odometry pipeline. The system delivers robust, consistent, and accurate state estimation, outperforming state-of-the-art techniques. Experimental results demonstrate its adaptability and effectiveness across challenging scenarios, showcasing significant advancements in UAV autonomy and reliability through the synergistic integration of deep learning and sensor fusion.
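
A minimal sketch of the general idea, an LSTM that turns a short window of per-sensor quality cues into softmax-normalized fusion weights, is given below. It is not the authors' LSAF-LSTM network; the feature layout, dimensions, and blending of per-sensor pose increments are assumptions.

```python
import torch
import torch.nn as nn

class FusionWeightLSTM(nn.Module):
    """Minimal sketch: an LSTM that maps a window of per-sensor quality
    features (e.g., residuals, feature counts, GPS status flags) to
    softmax-normalized fusion weights for N odometry sources."""
    def __init__(self, feat_dim: int, n_sensors: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) -> weights: (batch, n_sensors)
        out, _ = self.lstm(feats)
        return torch.softmax(self.head(out[:, -1]), dim=-1)

# Hypothetical use: blend per-sensor pose increments with predicted weights.
model = FusionWeightLSTM(feat_dim=12, n_sensors=4)
feats = torch.randn(1, 10, 12)                 # 10-step window of quality cues
increments = torch.randn(1, 4, 6)              # per-sensor 6-DoF increments
w = model(feats)                               # (1, 4) fusion weights
fused = (w.unsqueeze(-1) * increments).sum(1)  # weighted 6-DoF increment
```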

16 pages, 6121 KiB  
Article
Stereo Event-Based Visual–Inertial Odometry
by Kunfeng Wang, Kaichun Zhao, Wenshuai Lu and Zheng You
Sensors 2025, 25(3), 887; https://doi.org/10.3390/s25030887 - 31 Jan 2025
Viewed by 464
Abstract
Event-based cameras are a new type of vision sensor in which pixels operate independently and respond asynchronously to changes in brightness with microsecond resolution, instead of providing standard intensity frames. Compared with traditional cameras, event-based cameras have low latency, no motion blur, and high dynamic range (HDR), which provide possibilities for robots to deal with some challenging scenes. We propose a visual–inertial odometry method for stereo event-based cameras based on an Error-State Kalman Filter (ESKF). The vision module updates the pose by relying on the edge alignment of a semi-dense 3D map to a 2D image, while the IMU module updates the pose using median integration. We evaluate our method on public datasets with general 6-DoF motion (three-axis translation and three-axis rotation) and compare the results against the ground truth. We also compare our results with those from other methods, demonstrating the effectiveness of our approach.
(This article belongs to the Section Intelligent Sensors)
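
The IMU propagation via "median integration" mentioned above is, in many VIO pipelines, a midpoint rule over consecutive samples. The sketch below shows one such propagation step under that reading; the state layout and gravity handling are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])

def imu_median_step(rot, p, v, gyro0, gyro1, acc0, acc1, dt):
    """One 'median integration' step, as used in many VIO pipelines:
    propagate orientation/velocity/position with the average of two
    consecutive gyro/accelerometer samples (a midpoint rule)."""
    w_mid = 0.5 * (gyro0 + gyro1)                  # rad/s, body frame
    rot_new = rot * R.from_rotvec(w_mid * dt)      # integrate orientation
    # Average the world-frame accelerations at both endpoints.
    a_mid = 0.5 * (rot.apply(acc0) + rot_new.apply(acc1)) + GRAVITY
    v_new = v + a_mid * dt
    p_new = p + v * dt + 0.5 * a_mid * dt ** 2
    return rot_new, p_new, v_new

# Hypothetical usage with two consecutive IMU samples 5 ms apart:
rot, p, v = R.identity(), np.zeros(3), np.zeros(3)
rot, p, v = imu_median_step(rot, p, v,
                            np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 0.1]),
                            np.array([0.0, 0.0, 9.81]), np.array([0.0, 0.0, 9.81]),
                            dt=0.005)
```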

24 pages, 12478 KiB  
Article
A Novel Real-Time Autonomous Localization Algorithm Based on Weighted Loosely Coupled Visual–Inertial Data of the Velocity Layer
by Cheng Liu, Tao Wang, Zhi Li and Peng Tian
Appl. Sci. 2025, 15(2), 989; https://doi.org/10.3390/app15020989 - 20 Jan 2025
Viewed by 535
Abstract
IMUs (inertial measurement units) and cameras are widely utilized and combined to autonomously measure the motion states of mobile robots. This paper presents a loosely coupled algorithm for autonomous localization, the ICEKF (IMU-aided camera extended Kalman filter), for the weighted data fusion of the IMU and visual measurement. The algorithm fuses motion information on the velocity layer, thereby mitigating the excessive accumulation of IMU errors caused by direct subtraction on the positional layer after quadratic integration. Furthermore, by incorporating a weighting mechanism, the algorithm allows for a flexible adjustment of the emphasis placed on IMU data versus visual information, which augments the robustness and adaptability of autonomous motion estimation for robots. The simulation and dataset experiments demonstrate that the ICEKF can provide reliable estimates for robot motion trajectories.
(This article belongs to the Section Robotics and Automation)
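
The core idea, blending IMU-derived and camera-derived velocities with a tunable weight and integrating once to position rather than differencing doubly-integrated IMU positions, can be illustrated with a toy snippet; the weight and values below are illustrative only and this is not the ICEKF's actual filter update.

```python
import numpy as np

def fuse_velocity_layer(v_imu, v_cam, w_imu, p_prev, dt):
    """Toy illustration of velocity-layer fusion: blend the IMU-derived and
    camera-derived velocities with a tunable weight, then integrate once to
    position, instead of differencing doubly-integrated IMU positions."""
    w_imu = float(np.clip(w_imu, 0.0, 1.0))
    v_fused = w_imu * v_imu + (1.0 - w_imu) * v_cam
    return p_prev + v_fused * dt, v_fused

# Hypothetical usage: trust vision more when the IMU bias is poorly observed.
p, v = fuse_velocity_layer(v_imu=np.array([0.52, 0.01, 0.0]),
                           v_cam=np.array([0.48, 0.00, 0.0]),
                           w_imu=0.3, p_prev=np.zeros(3), dt=0.05)
```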

17 pages, 4607 KiB  
Article
Event-Based Visual/Inertial Odometry for UAV Indoor Navigation
by Ahmed Elamin, Ahmed El-Rabbany and Sunil Jacob
Sensors 2025, 25(1), 61; https://doi.org/10.3390/s25010061 - 25 Dec 2024
Cited by 1 | Viewed by 803
Abstract
Indoor navigation is becoming increasingly essential for multiple applications. It is complex and challenging due to dynamic scenes, limited space, and, more importantly, the unavailability of global navigation satellite system (GNSS) signals. Recently, new sensors have emerged, namely event cameras, which show great potential for indoor navigation due to their high dynamic range and low latency. In this study, an event-based visual–inertial odometry approach is proposed, emphasizing adaptive event accumulation and selective keyframe updates to reduce computational overhead. The proposed approach fuses events, standard frames, and inertial measurements for precise indoor navigation. Features are detected and tracked on the standard images. The events are accumulated into frames and used to track the features between the standard frames. Subsequently, the IMU measurements and the feature tracks are fused to continuously estimate the sensor states. The proposed approach is evaluated using both simulated and real-world datasets. Compared with the state-of-the-art U-SLAM algorithm, our approach achieves a substantial reduction in the mean positional error and RMSE in simulated environments, showing up to 50% and 47% reductions along the x- and y-axes, respectively. The approach achieves 5–10 ms latency per event batch and 10–20 ms for frame updates, demonstrating real-time performance on resource-constrained platforms. These results underscore the potential of our approach as a robust solution for real-world UAV indoor navigation scenarios.
(This article belongs to the Special Issue Multi-sensor Integration for Navigation and Environmental Sensing)
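
The accumulation of events into frames between standard images can be sketched as follows; this fixed-count variant and the (t, x, y, polarity) event layout are assumptions, whereas the paper's scheme adapts the accumulation.

```python
import numpy as np

def accumulate_events(events, height, width, n_per_frame=20000):
    """Accumulate a stream of events into signed 'event frames'.
    `events` is assumed to be an (N, 4) array of (t, x, y, polarity) rows,
    sorted by time, with polarity in {0, 1}; every n_per_frame events
    produce one frame (an adaptive scheme would vary this count)."""
    frames = []
    for start in range(0, len(events) - n_per_frame + 1, n_per_frame):
        chunk = events[start:start + n_per_frame]
        frame = np.zeros((height, width), dtype=np.float32)
        x = chunk[:, 1].astype(int)
        y = chunk[:, 2].astype(int)
        sign = np.where(chunk[:, 3] > 0, 1.0, -1.0)
        np.add.at(frame, (y, x), sign)    # accumulate per pixel
        frames.append(frame)
    return frames

# Hypothetical usage with 100k synthetic events on a 480x640 sensor:
rng = np.random.default_rng(0)
ev = np.column_stack([np.sort(rng.uniform(0, 1, 100_000)),
                      rng.integers(0, 640, 100_000),
                      rng.integers(0, 480, 100_000),
                      rng.integers(0, 2, 100_000)])
event_frames = accumulate_events(ev, height=480, width=640)
```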

24 pages, 31029 KiB  
Article
InCrowd-VI: A Realistic Visual–Inertial Dataset for Evaluating Simultaneous Localization and Mapping in Indoor Pedestrian-Rich Spaces for Human Navigation
by Marziyeh Bamdad, Hans-Peter Hutter and Alireza Darvishy
Sensors 2024, 24(24), 8164; https://doi.org/10.3390/s24248164 - 21 Dec 2024
Viewed by 734
Abstract
Simultaneous localization and mapping (SLAM) techniques can be used to navigate the visually impaired, but the development of robust SLAM solutions for crowded spaces is limited by the lack of realistic datasets. To address this, we introduce InCrowd-VI, a novel visual–inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments. Recorded using Meta Aria Project glasses, it captures realistic scenarios without environmental control. InCrowd-VI features 58 sequences totaling a 5 km trajectory length and 1.5 h of recording time, including RGB, stereo images, and IMU measurements. The dataset captures important challenges such as pedestrian occlusions, varying crowd densities, complex layouts, and lighting changes. Ground-truth trajectories, accurate to approximately 2 cm, are provided in the dataset, originating from the Meta Aria project machine perception SLAM service. In addition, a semi-dense 3D point cloud of scenes is provided for each sequence. The evaluation of state-of-the-art visual odometry (VO) and SLAM algorithms on InCrowd-VI revealed severe performance limitations in these realistic scenarios. Under challenging conditions, systems exceeded the required localization accuracy of 0.5 m and the 1% drift threshold, with classical methods showing drift up to 5–10%. While deep learning-based approaches maintained high pose estimation coverage (>90%), they failed to achieve real-time processing speeds necessary for walking pace navigation. These results demonstrate the need and value of a new dataset to advance SLAM research for visually impaired navigation in complex indoor environments.
(This article belongs to the Section Sensors and Robotics)
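
Drift figures such as the 1% threshold and 5–10% classical-method drift quoted above are commonly computed as end-point error divided by ground-truth path length. A minimal sketch of that metric, assuming timestamp-matched Nx3 position arrays, follows.

```python
import numpy as np

def drift_percent(est_xyz, gt_xyz):
    """Relative drift as commonly reported for VO/SLAM: translational error
    at the end of the run divided by the ground-truth path length.
    Assumes timestamp-matched Nx3 position arrays in metres."""
    path_len = np.sum(np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1))
    end_err = np.linalg.norm(est_xyz[-1] - gt_xyz[-1])
    return 100.0 * end_err / path_len

# Hypothetical usage against the ~2 cm-accurate ground truth in the dataset:
# print(f"drift: {drift_percent(vo_positions, gt_positions):.2f} %")
```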

26 pages, 6416 KiB  
Article
Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal
by Alireza Ghasemieh and Rasha Kashef
Sensors 2024, 24(24), 8040; https://doi.org/10.3390/s24248040 - 17 Dec 2024
Viewed by 743
Abstract
Autonomous technologies have revolutionized transportation, military operations, and space exploration, necessitating precise localization in environments where traditional GPS-based systems are unreliable or unavailable. While widespread for outdoor localization, GPS systems face limitations in obstructed environments such as dense urban areas, forests, and indoor spaces. Moreover, GPS reliance introduces vulnerabilities to signal disruptions, which can lead to significant operational failures. Hence, developing alternative localization techniques that do not depend on external signals is essential, underscoring a critical need for robust, GPS-independent localization solutions adaptable to different applications, ranging from Earth-based autonomous vehicles to robotic missions on Mars. This paper addresses these challenges using visual odometry (VO) to estimate a camera’s pose by analyzing captured image sequences in GPS-denied areas tailored for autonomous vehicles (AVs), where safety and real-time decision-making are paramount. Extensive research has been dedicated to pose estimation using LiDAR or stereo cameras, which, despite their accuracy, are constrained by weight, cost, and complexity. In contrast, monocular vision is practical and cost-effective, making it a popular choice for drones, cars, and autonomous vehicles. However, robust and reliable monocular pose estimation models remain underexplored. This research aims to fill this gap by developing a novel adaptive framework for outdoor pose estimation and safe navigation using enhanced visual odometry systems with monocular cameras, especially for applications where deploying additional sensors is not feasible due to cost or physical constraints. This framework is designed to be adaptable across different vehicles and platforms, ensuring accurate and reliable pose estimation. We integrate advanced control theory to provide safety guarantees for motion control, ensuring that the AV can react safely to the imminent hazards and unknown trajectories of nearby traffic agents. The focus is on creating AI-driven models that meet the performance standards of multi-sensor systems while leveraging the inherent advantages of monocular vision. This research uses state-of-the-art machine learning techniques to advance visual odometry’s technical capabilities and ensure its adaptability across different platforms, cameras, and environments. By merging cutting-edge visual odometry techniques with robust control theory, our approach enhances both the safety and performance of AVs in complex traffic situations, directly addressing the challenge of safe and adaptive navigation. Experimental results on the KITTI odometry dataset demonstrate a significant improvement in pose estimation accuracy, offering a cost-effective and robust solution for real-world applications.
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
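
One building block of such a pipeline, tracking features only on pixels a segmentation step has marked static and then recovering relative pose from a monocular pair, can be sketched with OpenCV as below. The mask, intrinsics, and parameter values are assumed inputs; this is the generic geometric back end, not the paper's full framework.

```python
import cv2
import numpy as np

def masked_monocular_pose(prev_gray, curr_gray, dynamic_mask, K):
    """Track features only on static pixels (dynamic_mask == 0 marks cars,
    pedestrians, etc. from a semantic segmentation step), then recover the
    relative camera rotation and unit-scale translation."""
    static = (dynamic_mask == 0).astype(np.uint8) * 255
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1000,
                                 qualityLevel=0.01, minDistance=8, mask=static)
    p1, ok, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = ok.ravel() == 1
    p0, p1 = p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t  # translation is up to scale for a monocular camera
```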

20 pages, 12255 KiB  
Article
A Biomimetic Pose Estimation and Target Perception Strategy for Transmission Line Maintenance UAVs
by Haoze Zhuo, Zhong Yang, Chi Zhang, Nuo Xu, Bayang Xue, Zekun Zhu and Yucheng Xie
Biomimetics 2024, 9(12), 745; https://doi.org/10.3390/biomimetics9120745 - 6 Dec 2024
Viewed by 767
Abstract
High-voltage overhead power lines serve as the carrier of power transmission and are crucial to the stable operation of the power system. Therefore, it is particularly important to detect and remove foreign objects attached to transmission lines as soon as possible. In this context, the widespread promotion and application of smart robots in the power industry can help address the increasingly complex challenges faced by the industry and ensure the efficient, economical, and safe operation of the power grid system. This article proposes a bionic-based UAV pose estimation and target perception strategy, which aims to address the lack of pattern recognition and automatic tracking capabilities of traditional power line inspection UAVs, as well as the poor robustness of visual odometry. Compared with existing UAV environmental perception solutions, the bionic target perception algorithm proposed in this article can efficiently extract point and line features from infrared images and realize the target detection and automatic tracking function of small multi-rotor drones in the power line scenario, with low power consumption.
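
As a loose illustration of extracting point and line primitives from an infrared frame, the sketch below uses generic ORB keypoints and a probabilistic Hough transform as stand-ins; the paper's bionic detectors and thresholds are not reproduced here.

```python
import math
import cv2

def extract_point_line_features(ir_gray):
    """Extract point and line primitives from a single-channel infrared
    frame: ORB keypoints for points, Canny + probabilistic Hough for lines.
    These generic detectors stand in for the paper's bionic feature stage."""
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(ir_gray, None)
    edges = cv2.Canny(ir_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    return keypoints, descriptors, ([] if lines is None else lines[:, 0, :])

# Hypothetical usage: feed the features to tracking / power-line detection.
# kps, desc, segs = extract_point_line_features(
#     cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE))
```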

20 pages, 4436 KiB  
Article
An Integrated Algorithm Fusing UWB Ranging Positioning and Visual–Inertial Information for Unmanned Vehicles
by Shuang Li, Lihui Wang, Baoguo Yu, Xiaohu Liang, Shitong Du, Yifan Li and Zihan Yang
Remote Sens. 2024, 16(23), 4530; https://doi.org/10.3390/rs16234530 - 3 Dec 2024
Viewed by 812
Abstract
During the execution of autonomous tasks within sheltered space environments, unmanned vehicles demand highly precise and seamless continuous positioning capabilities. While existing visual–inertial positioning methods can provide accurate poses over short distances, they are prone to error accumulation. Conversely, radio-based positioning techniques can offer absolute position information, yet they encounter difficulties in sheltered space scenarios and usually require three or more base stations for localization. To address these issues, a binocular vision/inertia/ultra-wideband (UWB) combined positioning method based on factor graph optimization is proposed. This approach incorporates UWB ranging and positioning information into the visual–inertial system. Based on a sliding window, it performs joint nonlinear optimization of multi-source data, including IMU measurements, visual features, and UWB ranging and positioning information. Relying on visual–inertial odometry, the method enables autonomous positioning without prior scene knowledge. When UWB base stations are available in the environment, their distance measurements or positioning information can be employed to establish global pose constraints in combination with visual–inertial odometry data. Through the joint optimization of UWB distance or positioning measurements and visual–inertial odometry data, the proposed method precisely determines the vehicle’s position and effectively mitigates accumulated errors. The experimental results indicate that the positioning error of the proposed method is reduced by 51.4% compared to the traditional method, thereby fulfilling the requirements for the precise autonomous navigation of unmanned vehicles in sheltered space.
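
A toy version of the underlying optimization, refining a short window of positions so consecutive differences match VIO deltas while distances to one known UWB anchor match measured ranges, is sketched below with a generic least-squares solver; it stands in for, and greatly simplifies, the paper's factor-graph formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_window(p_init, vio_deltas, uwb_ranges, anchor, w_vio=1.0, w_uwb=0.5):
    """Toy sliding-window optimization: window positions (N, 3) are refined so
    that consecutive differences match VIO-predicted deltas and distances to a
    single known UWB anchor match the measured ranges."""
    n = p_init.shape[0]

    def residuals(x):
        p = x.reshape(n, 3)
        r_vio = (np.diff(p, axis=0) - vio_deltas).ravel() * w_vio
        r_uwb = (np.linalg.norm(p - anchor, axis=1) - uwb_ranges) * w_uwb
        return np.concatenate([r_vio, r_uwb])

    sol = least_squares(residuals, p_init.ravel())
    return sol.x.reshape(n, 3)

# Hypothetical usage with a 5-pose window and one anchor at the origin:
p0 = np.cumsum(np.tile([[0.2, 0.0, 0.0]], (5, 1)), axis=0)
refined = solve_window(p0, vio_deltas=np.tile([0.2, 0.0, 0.0], (4, 1)),
                       uwb_ranges=np.linalg.norm(p0, axis=1) + 0.01,
                       anchor=np.zeros(3))
```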

7 pages, 3886 KiB  
Proceeding Paper
Event/Visual/IMU Integration for UAV-Based Indoor Navigation
by Ahmed Elamin and Ahmed El-Rabbany
Proceedings 2024, 110(1), 2; https://doi.org/10.3390/proceedings2024110002 - 2 Dec 2024
Viewed by 658
Abstract
Unmanned aerial vehicle (UAV) navigation in indoor environments is challenging due to varying light conditions, the dynamic clutter typical of indoor spaces, and the absence of GNSS signals. In response to these complexities, emerging sensors, such as event cameras, demonstrate significant potential in indoor navigation with their low latency and high dynamic range characteristics. Unlike traditional RGB cameras, event cameras mitigate motion blur and operate effectively in low-light conditions. Nevertheless, they exhibit limitations in terms of information output during scenarios of limited motion, in contrast to standard cameras that can capture detailed surroundings. This study proposes a novel event-based visual–inertial odometry approach for precise indoor navigation. In the proposed approach, the standard images are leveraged for feature detection and tracking, while events are aggregated into frames to track features between consecutive standard frames. The fusion of IMU measurements and feature tracks facilitates the continuous estimation of sensor states. The proposed approach is evaluated and validated using a controlled office environment simulation developed using Gazebo, employing a P230 simulated drone equipped with an event camera, an RGB camera, and IMU sensors. This simulated environment provides a testbed for evaluating and showcasing the proposed approach’s robust performance in realistic indoor navigation scenarios.
(This article belongs to the Proceedings of The 31st International Conference on Geoinformatics)

18 pages, 8489 KiB  
Article
Tightly Coupled SLAM Algorithm Based on Similarity Detection Using LiDAR-IMU Sensor Fusion for Autonomous Navigation
by Jiahui Zheng, Yi Wang and Yadong Men
World Electr. Veh. J. 2024, 15(12), 558; https://doi.org/10.3390/wevj15120558 - 2 Dec 2024
Viewed by 913
Abstract
In recent years, the rise of unmanned technology has made Simultaneous Localization and Mapping (SLAM) algorithms a focal point of research in the field of robotics. SLAM algorithms are primarily categorized into visual SLAM and laser SLAM, based on the type of external sensors employed. Laser SLAM algorithms have become essential in robotics and autonomous driving due to their insensitivity to lighting conditions, precise distance measurements, and ease of generating navigation maps. Throughout the development of SLAM technology, numerous effective algorithms have been introduced. However, existing algorithms still encounter challenges, such as localization errors and suboptimal utilization of sensor data. To address these issues, this paper proposes a tightly coupled SLAM algorithm based on similarity detection. The algorithm integrates Inertial Measurement Unit (IMU) and LiDAR odometry modules, employs a tightly coupled processing approach for sensor data, and utilizes optimized curvature-based feature extraction to enhance the accuracy and robustness of inter-frame matching. Additionally, the algorithm incorporates a local keyframe sliding window method and introduces a similarity detection mechanism, which reduces the real-time computational load and improves efficiency. Experimental results demonstrate that the algorithm achieves superior performance, with reduced positioning errors and enhanced global consistency, in tests conducted on the KITTI dataset. The accuracy of the estimated trajectory relative to the ground truth is evaluated using metrics such as absolute trajectory error (ATE) and root mean square error (RMSE).
(This article belongs to the Special Issue Motion Planning and Control of Autonomous Vehicles)
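
The curvature-based feature extraction mentioned above is commonly implemented LOAM-style: a smoothness score over each scan line separates edge from planar candidates. The sketch below shows that generic scoring; the neighbourhood size and thresholds are illustrative, not the paper's values.

```python
import numpy as np

def classify_scan_points(scan_xyz, k=5, edge_thresh=0.2, planar_thresh=0.02):
    """LOAM-style curvature over one LiDAR scan line: compare each point with
    the sum of its k neighbours on either side; large scores are edge
    candidates, small scores planar candidates. Thresholds are illustrative."""
    n = scan_xyz.shape[0]
    curvature = np.full(n, np.nan)
    for i in range(k, n - k):
        neighbours = np.vstack([scan_xyz[i - k:i], scan_xyz[i + 1:i + 1 + k]])
        diff = neighbours.sum(axis=0) - 2 * k * scan_xyz[i]
        curvature[i] = np.linalg.norm(diff) / (np.linalg.norm(scan_xyz[i]) + 1e-9)
    edges = np.where(curvature > edge_thresh)[0]
    planar = np.where(curvature < planar_thresh)[0]
    return curvature, edges, planar
```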

20 pages, 22712 KiB  
Article
Adaptive Route Memory Sequences for Insect-Inspired Visual Route Navigation
by Efstathios Kagioulis, James Knight, Paul Graham, Thomas Nowotny and Andrew Philippides
Biomimetics 2024, 9(12), 731; https://doi.org/10.3390/biomimetics9120731 - 1 Dec 2024
Viewed by 880
Abstract
Visual navigation is a key capability for robots and animals. Inspired by the navigational prowess of social insects, a family of insect-inspired route navigation algorithms—familiarity-based algorithms—has been developed that uses stored panoramic images collected during a training route to subsequently derive directional information during route recapitulation. However, unlike the ants that inspire them, these algorithms ignore the sequence in which the training images are acquired so that all temporal information/correlation is lost. In this paper, the benefits of incorporating sequence information in familiarity-based algorithms are tested. To do this, instead of comparing a test view to all the training route images, a window of memories is used to restrict the number of comparisons that need to be made. As ants are able to visually navigate when odometric information is removed, the window position is updated via visual matching information only and not odometry. The performance of an algorithm without sequence information is compared to the performance of window methods with different fixed lengths as well as a method that adapts the window size dynamically. All algorithms were benchmarked on a simulation of an environment used for ant navigation experiments; the results showed that sequence information can boost performance and reduce computation. A detailed analysis of successes and failures highlights the interaction between the length of the route memory sequence and environment type and shows the benefits of an adaptive method.
(This article belongs to the Special Issue Bio-Inspired Robotics and Applications)
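
One recapitulation step of such a windowed familiarity method can be sketched as follows; it compares downsampled grayscale views with a plain sum-of-squared-differences score and omits the rotational scan over headings that these algorithms normally perform, so treat it as a schematic only.

```python
import numpy as np

def window_step(current_view, route_views, window_start, window_len=10):
    """One recapitulation step of a windowed familiarity method: compare the
    current (grayscale, downsampled, float) view only against a window of
    stored route images, and move the window to follow the best match.
    Returns the best-matching route index and the updated window start."""
    end = min(window_start + window_len, len(route_views))
    window = route_views[window_start:end]
    # Sum of squared pixel differences as a simple familiarity score.
    scores = [np.sum((current_view - v) ** 2) for v in window]
    best = window_start + int(np.argmin(scores))
    new_start = max(0, best - window_len // 2)   # re-centre window on match
    return best, new_start
```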

22 pages, 2553 KiB  
Review
Advancements in Indoor Precision Positioning: A Comprehensive Survey of UWB and Wi-Fi RTT Positioning Technologies
by Jiageng Qiao, Fan Yang, Jingbin Liu, Gege Huang, Wei Zhang and Mengxiang Li
Network 2024, 4(4), 545-566; https://doi.org/10.3390/network4040027 - 29 Nov 2024
Cited by 1 | Viewed by 1106
Abstract
High-precision indoor positioning is essential for various applications, such as the Internet of Things, robotics, and smart manufacturing, requiring accuracy better than 1 m. Conventional indoor positioning methods, like Wi-Fi or Bluetooth fingerprinting, typically provide low accuracy within a range of several meters, while techniques such as laser or visual odometry often require fusion with absolute positioning methods. Ultra-wideband (UWB) and Wi-Fi Round-Trip Time (RTT) are emerging radio positioning technologies supported by industry leaders like Apple and Google, respectively, both capable of achieving high-precision indoor positioning. This paper offers a comprehensive survey of UWB and Wi-Fi positioning, beginning with an overview of UWB and Wi-Fi RTT ranging, followed by an explanation of the fundamental principles of UWB and Wi-Fi RTT-based geometric positioning. Additionally, it compares the strengths and limitations of UWB and Wi-Fi RTT technologies and reviews advanced studies that address practical challenges in UWB and Wi-Fi RTT positioning, such as accuracy, reliability, continuity, and base station coordinate calibration issues. These challenges are primarily addressed through a multi-sensor fusion approach that integrates relative and absolute positioning. Finally, this paper highlights future directions for the development of UWB- and Wi-Fi RTT-based indoor positioning technologies.
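
The geometric positioning principle surveyed here, estimating a position from ranges to anchors at known coordinates, reduces to a small nonlinear least-squares problem. A minimal 2D multilateration sketch, with made-up anchor coordinates, follows.

```python
import numpy as np
from scipy.optimize import least_squares

def multilaterate(anchors, ranges, x0=None):
    """Estimate a 2D position from ranges to known anchors (UWB or Wi-Fi RTT)
    by minimizing range residuals; needs at least three non-collinear anchors."""
    anchors = np.asarray(anchors, dtype=float)
    x0 = anchors.mean(axis=0) if x0 is None else x0
    res = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
    return least_squares(res, x0).x

# Hypothetical usage: three anchors, ranges with a few centimetres of noise.
anchors = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]
true_pos = np.array([3.0, 2.0])
ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1) + 0.03
print(multilaterate(anchors, ranges))   # ~ [3.0, 2.0]
```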

33 pages, 14639 KiB  
Article
Multi-Sensor Fusion for Wheel-Inertial-Visual Systems Using a Fuzzification-Assisted Iterated Error State Kalman Filter
by Guohao Huang, Haibin Huang, Yaning Zhai, Guohao Tang, Ling Zhang, Xingyu Gao, Yang Huang and Guoping Ge
Sensors 2024, 24(23), 7619; https://doi.org/10.3390/s24237619 - 28 Nov 2024
Cited by 1 | Viewed by 1170
Abstract
This paper investigates the odometry drift problem in differential-drive indoor mobile robots and proposes a multi-sensor fusion approach utilizing a Fuzzy Inference System (FIS) within a Wheel-Inertial-Visual Odometry (WIVO) framework to optimize the 6-DoF localization of the robot in unstructured scenes. The structure and principles of the multi-sensor fusion system are developed, incorporating an Iterated Error State Kalman Filter (IESKF) for enhanced accuracy. An FIS is integrated with the IESKF to address the limitations of traditional fixed covariance matrices in process and observation noise, which fail to adapt effectively to complex kinematic characteristics and visual observation challenges such as varying lighting conditions and unstructured scenes in dynamic environments. The fusion filter gains in FIS-IESKF are adaptively adjusted for noise predictions, optimizing the rule parameters of the fuzzy inference process. Experimental results demonstrate that the proposed method effectively enhances the localization accuracy and system robustness of differential-drive indoor mobile robots in dynamically changing movements and environments.
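
The flavour of the fuzzy adjustment, mapping a normalized innovation magnitude to a scale factor for the measurement-noise covariance, can be shown with a toy rule base of triangular memberships; the breakpoints and outputs below are illustrative and not the paper's tuned FIS.

```python
import numpy as np

def fuzzy_noise_scale(innovation_norm, low=0.5, high=2.0):
    """Toy fuzzy inference: map a normalized innovation magnitude to a scale
    factor for the measurement-noise covariance R. Triangular memberships for
    'small', 'medium', and 'large' innovations blend between trusting and
    distrusting the visual measurement; the breakpoints are illustrative."""
    x = np.clip(innovation_norm, 0.0, 2.0)
    mu_small = np.clip(1.0 - x, 0.0, 1.0)    # fully 'small' at 0, gone at 1
    mu_large = np.clip(x - 1.0, 0.0, 1.0)    # starts at 1, fully 'large' at 2
    mu_mid = 1.0 - mu_small - mu_large
    # Weighted average of the rule outputs (centroid defuzzification).
    return (mu_small * low + mu_mid * 1.0 + mu_large * high) / (
        mu_small + mu_mid + mu_large)

# In an IESKF update one might then use R_adapted = fuzzy_noise_scale(nu) * R.
print(fuzzy_noise_scale(0.2), fuzzy_noise_scale(1.8))
```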

22 pages, 5386 KiB  
Article
A Novel Multi-Sensor Nonlinear Tightly-Coupled Framework for Composite Robot Localization and Mapping
by Lu Chen, Amir Hussain, Yu Liu, Jie Tan, Yang Li, Yuhao Yang, Haoyuan Ma, Shenbing Fu and Gun Li
Sensors 2024, 24(22), 7381; https://doi.org/10.3390/s24227381 - 19 Nov 2024
Cited by 1 | Viewed by 912
Abstract
Composite robots often encounter difficulties due to changes in illumination, external disturbances, reflective surface effects, and cumulative errors. These challenges significantly hinder their capabilities in environmental perception and the accuracy and reliability of pose estimation. We propose a nonlinear optimization approach to overcome these issues and develop an integrated localization and navigation framework, IIVL-LM (IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping). This framework achieves tightly coupled integration at the data level using inputs from an IMU (Inertial Measurement Unit), an infrared camera, an RGB (Red, Green and Blue) camera, and LiDAR. We propose a real-time luminance calculation model and verify its conversion accuracy. Additionally, we designed a fast approximation method for the nonlinear weighted fusion of features from infrared and RGB frames based on luminance values. Finally, we optimize the VIO (Visual-Inertial Odometry) module in the R3LIVE++ (Robust, Real-time, Radiance Reconstruction with LiDAR-Inertial-Visual state Estimation) framework based on the infrared camera’s capability to acquire depth information. In a controlled study using a simulated indoor rescue scenario dataset, the IIVL-LM system demonstrated significant performance enhancements in challenging luminance conditions, particularly in low-light environments. Specifically, the average RMSE of the absolute trajectory error (ATE) improved by 23% to 39%, with reductions ranging from 0.006 to 0.013. At the same time, we conducted comparative experiments using the publicly available TUM-VI (Technical University of Munich Visual-Inertial) dataset without the infrared image input; the system no longer achieved leading results, which verifies the importance of infrared image fusion. By maintaining the active engagement of at least three sensors at all times, the IIVL-LM system significantly boosts its robustness in both unknown and expansive environments while ensuring high precision. This enhancement is particularly critical for applications in complex environments, such as indoor rescue operations.
(This article belongs to the Special Issue New Trends in Optical Imaging and Sensing Technologies)
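
A simple reading of the luminance-weighted fusion idea, computing a scene luminance from the RGB frame and splitting feature weight between the RGB and infrared streams accordingly, is sketched below; the Rec. 709 luma formula and breakpoints are assumptions, not the paper's calibrated luminance model.

```python
import numpy as np

def luminance_weight(rgb_frame, lo=20.0, hi=120.0):
    """Estimate mean scene luminance from an RGB frame (Rec. 709 luma) and map
    it to a weight in [0, 1] for the RGB feature stream; the complement goes to
    the infrared stream, so dark scenes lean on infrared. Breakpoints are
    illustrative, not the paper's calibrated luminance model."""
    r, g, b = rgb_frame[..., 0], rgb_frame[..., 1], rgb_frame[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    w_rgb = np.clip((luma.mean() - lo) / (hi - lo), 0.0, 1.0)
    return w_rgb, 1.0 - w_rgb

# Hypothetical usage: blend matched feature confidences from the two cameras.
frame = np.full((480, 640, 3), 35, dtype=np.float32)       # dim indoor scene
w_rgb, w_ir = luminance_weight(frame)
print(f"RGB weight {w_rgb:.2f}, infrared weight {w_ir:.2f}")
```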
