Search Results (94)

Search Parameters:
Keywords = visual inertial SLAM

16 pages, 1184 KiB  
Article
PGMF-VINS: Perpendicular-Based 3D Gaussian–Uniform Mixture Filter
by Wenqing Deng, Zhe Yan, Bo Hu, Zhiyan Dong and Lihua Zhang
Sensors 2024, 24(19), 6482; https://doi.org/10.3390/s24196482 - 8 Oct 2024
Abstract
Visual–Inertial SLAM (VI-SLAM) has a wide range of applications spanning robotics, autonomous driving, AR, and VR due to its low-cost and high-precision characteristics. VI-SLAM divides into localization and mapping tasks; however, researchers have focused more on localization, while the robustness of mapping is often ignored. To address this, we propose a map-point convergence strategy that explicitly estimates the position, uncertainty, and stability of each map point (SoM). As a result, the proposed method effectively improves the quality of the whole map while maintaining state-of-the-art localization accuracy. The convergence strategy consists mainly of perpendicular-based triangulation and a 3D Gaussian–uniform mixture filter, so we name it PGMF: perpendicular-based 3D Gaussian–uniform mixture filter. Extensive tests on open-source datasets show that the RVM (Ratio of Valid Map points) of our algorithm increases by an average of 0.1471 compared to VINS-Mono, with a 68.8% reduction in variance.
(This article belongs to the Section Navigation and Positioning)
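The abstract names a 3D Gaussian–uniform mixture filter but gives no equations. As a rough illustration of the underlying idea, the sketch below implements the classic per-point depth filter of Vogiatzis and Hernández, in which inlier measurements are Gaussian around the true depth and outliers are uniform over the valid depth range; all parameter values here are illustrative assumptions, not the paper's method.

```python
import numpy as np

class GaussianUniformDepthFilter:
    """Per-map-point depth filter: Gaussian inliers + uniform outliers.

    The posterior is approximated as Gaussian(mu, sigma2) x Beta(a, b),
    following Vogiatzis & Hernandez (2011). Illustrative sketch only.
    """

    def __init__(self, z_init, z_range, sigma2_init=1.0):
        self.mu, self.sigma2 = z_init, sigma2_init   # depth estimate
        self.a, self.b = 10.0, 10.0                  # Beta prior on inlier ratio
        self.z_range = z_range                       # width of uniform outlier model

    def update(self, z, tau2):
        """Fuse one depth measurement z with variance tau2."""
        # Gaussian fusion of the new measurement with the current estimate.
        s2 = 1.0 / (1.0 / self.sigma2 + 1.0 / tau2)
        m = s2 * (self.mu / self.sigma2 + z / tau2)
        # Responsibilities of the inlier (c1) and outlier (c2) hypotheses.
        norm = np.sqrt(2 * np.pi * (self.sigma2 + tau2))
        c1 = self.a / (self.a + self.b) * np.exp(
            -0.5 * (z - self.mu) ** 2 / (self.sigma2 + tau2)) / norm
        c2 = self.b / (self.a + self.b) / self.z_range
        c1, c2 = c1 / (c1 + c2), c2 / (c1 + c2)
        # Moment-match the mixture posterior back to Gaussian x Beta.
        f = c1 * (self.a + 1) / (self.a + self.b + 1) + \
            c2 * self.a / (self.a + self.b + 1)
        e = c1 * (self.a + 1) * (self.a + 2) / ((self.a + self.b + 1) * (self.a + self.b + 2)) + \
            c2 * self.a * (self.a + 1) / ((self.a + self.b + 1) * (self.a + self.b + 2))
        mu_new = c1 * m + c2 * self.mu
        self.sigma2 = c1 * (s2 + m * m) + c2 * (self.sigma2 + self.mu ** 2) - mu_new ** 2
        self.mu = mu_new
        self.a = (e - f) / (f - e / f)
        self.b = self.a * (1.0 - f) / f

    def converged(self, tol=1e-3):
        """A point is declared stable once its depth variance is small."""
        return self.sigma2 < tol
```

In such a scheme, a point would be promoted to the map once `converged()` holds and its inlier ratio a/(a+b) is high, mirroring the abstract's notion of map-point stability.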

14 pages, 4431 KiB  
Article
Improved Multi-Sensor Fusion Dynamic Odometry Based on Neural Networks
by Lishu Luo, Fulun Peng and Longhui Dong
Sensors 2024, 24(19), 6193; https://doi.org/10.3390/s24196193 - 25 Sep 2024
Viewed by 454
Abstract
High-precision simultaneous localization and mapping (SLAM) in dynamic real-world environments plays a crucial role in autonomous robot navigation, self-driving cars, and drone control. To address this dynamic localization issue, this paper proposes a dynamic odometry method based on FAST-LIVO, a fast LiDAR (light detection and ranging)–inertial–visual odometry system, integrating neural networks with the laser, camera, and inertial measurement unit modalities. The method first constructs visual–inertial and LiDAR–inertial odometry subsystems. A lightweight neural network then removes dynamic elements from the visual data, and dynamic clustering is applied to the LiDAR data to eliminate dynamic objects, ensuring the reliability of the remaining environmental data. Validation on datasets shows that the proposed multi-sensor fusion dynamic odometry achieves high-precision pose estimation in complex dynamic environments with high continuity, reliability, and dynamic robustness.
(This article belongs to the Section Sensors and Robotics)
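The abstract does not specify the network or the masking interface; the sketch below only illustrates the common pattern of dropping visual features that fall on pixels a lightweight segmentation network has labeled dynamic (the mask source and the dilation margin are placeholder assumptions).

```python
import numpy as np

def filter_dynamic_features(keypoints, dynamic_mask, dilate_px=3):
    """Drop keypoints that land on (or within dilate_px of) dynamic pixels.

    keypoints   : (N, 2) float array of (u, v) pixel coordinates
    dynamic_mask: (H, W) bool array, True where a segmentation network
                  labeled the pixel as a dynamic object (person, car, ...)
    """
    h, w = dynamic_mask.shape
    keep = []
    for u, v in keypoints:
        ui, vi = int(round(u)), int(round(v))
        # Reject points outside the image or near a dynamic region.
        u0, u1 = max(ui - dilate_px, 0), min(ui + dilate_px + 1, w)
        v0, v1 = max(vi - dilate_px, 0), min(vi + dilate_px + 1, h)
        keep.append(0 <= ui < w and 0 <= vi < h
                    and not dynamic_mask[v0:v1, u0:u1].any())
    return keypoints[np.array(keep)]
```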

18 pages, 5473 KiB  
Article
Visual-Inertial RGB-D SLAM with Encoder Integration of ORB Triangulation and Depth Measurement Uncertainties
by Zhan-Wu Ma and Wan-Sheng Cheng
Sensors 2024, 24(18), 5964; https://doi.org/10.3390/s24185964 - 14 Sep 2024
Viewed by 556
Abstract
In recent years, the accuracy of visual SLAM (Simultaneous Localization and Mapping) technology has improved significantly, making it a prominent area of research. However, in current RGB-D SLAM systems, the 3D positions of feature points are estimated primarily from direct measurements of RGB-D depth cameras, which inherently contain measurement errors, while the potential of triangulation-based estimation for ORB (Oriented FAST and Rotated BRIEF) feature points remains underutilized. To address this reliance on a single measurement source, this paper proposes integrating ORB features, triangulation uncertainty estimation, and depth measurement uncertainty estimation for the 3D positions of feature points. The integration is achieved with a CI (Covariance Intersection) filter, referred to as the CI-TEDM (Triangulation Estimates and Depth Measurements) method. Vision-based SLAM systems face significant challenges in environments such as long straight corridors and weakly textured scenes, or during rapid motion, where tracking failures are common. To enhance stability, this paper introduces an improved CI-TEDM method that incorporates wheel encoder data: the mathematical model of the encoder is proposed, and detailed derivations of the encoder pre-integration model and error model are provided. Building on these improvements, we propose a novel tightly coupled visual-inertial RGB-D SLAM with encoder integration of ORB triangulation and depth measurement uncertainties. Validation on open-source datasets and in real-world environments demonstrates that the proposed improvements significantly enhance the robustness of real-time state estimation and the localization accuracy of intelligent vehicles in challenging environments.
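Covariance Intersection itself is standard even though the paper's weighting details are not given in the abstract. A minimal sketch of fusing a triangulated estimate with a depth-sensor estimate of the same 3D point, choosing the weight w to minimize the trace of the fused covariance:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two consistent estimates of the same state with unknown
    cross-correlation, via Covariance Intersection:

        P^-1 = w * P1^-1 + (1 - w) * P2^-1
        x    = P (w * P1^-1 x1 + (1 - w) * P2^-1 x2)

    w is chosen by grid search to minimize trace(P).
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * P1i + (1 - w) * P2i)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), w, P)
    _, w, P = best
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P

# Example (illustrative values): triangulated point vs. RGB-D depth estimate.
x_tri = np.array([1.02, 0.48, 3.10]); P_tri = np.diag([0.04, 0.04, 0.25])
x_dep = np.array([0.98, 0.52, 2.95]); P_dep = np.diag([0.01, 0.01, 0.09])
x_ci, P_ci = covariance_intersection(x_tri, P_tri, x_dep, P_dep)
```

CI is attractive here precisely because the triangulated and depth-measured estimates share correlated error sources (the same camera poses), and CI remains consistent without knowing that cross-correlation.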

21 pages, 7239 KiB  
Article
UVIO: Adaptive Kalman Filtering UWB-Aided Visual-Inertial SLAM System for Complex Indoor Environments
by Junxi Li, Shouwen Wang, Jiahui Hao, Biao Ma and Henry K. Chu
Remote Sens. 2024, 16(17), 3245; https://doi.org/10.3390/rs16173245 - 1 Sep 2024
Viewed by 926
Abstract
Precise positioning in an indoor environment is challenging because it is difficult to receive a strong and reliable global positioning system (GPS) signal. Among wireless indoor positioning methods, ultra-wideband (UWB) has become popular because of its low energy consumption and high interference immunity. Nevertheless, factors such as indoor non-line-of-sight (NLOS) obstructions can still cause large errors or fluctuations in the measurement data. In this paper, we propose a fusion method based on ultra-wideband (UWB), an inertial measurement unit (IMU), and visual simultaneous localization and mapping (V-SLAM) to achieve high accuracy and robustness in tracking a mobile robot in a complex indoor environment. Specifically, we first focus on identifying and correcting line-of-sight (LOS) and non-line-of-sight (NLOS) UWB signals. The distance estimated from UWB is processed by an adaptive Kalman filter together with IMU signals for pose estimation, where a new noise covariance matrix using the received signal strength indicator (RSSI) and estimation of precision (EOP) is proposed to reduce the effect of NLOS. The corrected UWB estimate is then tightly integrated with the IMU and visual SLAM through factor graph optimization (FGO) to further refine the pose estimate. Experimental results show that, compared with single or dual positioning systems, the proposed fusion method significantly improves positioning accuracy in a complex indoor environment.
(This article belongs to the Section Engineering Remote Sensing)
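The paper's covariance built from RSSI and EOP is its own contribution and is not reproduced here; the sketch below shows only the generic mechanism of an adaptive Kalman update in which the UWB range noise is inflated by a hypothetical quality score derived from RSSI, so suspected NLOS ranges are down-weighted.

```python
import numpy as np

def adaptive_range_update(x, P, z, anchor, r0=0.05, rssi=None, rssi_los=-80.0):
    """One adaptive EKF update with a UWB range measurement.

    x, P   : state (px, py, ...) and its covariance
    z      : measured range to a fixed UWB anchor
    anchor : (2,) anchor position
    r0     : nominal range variance under LOS conditions
    rssi   : received signal strength; weaker signal => larger noise.
             The scaling below is a hypothetical placeholder, not the
             paper's RSSI/EOP covariance model.
    """
    d = x[:2] - anchor
    pred = np.linalg.norm(d)
    H = np.zeros((1, x.size))
    H[0, :2] = d / max(pred, 1e-9)          # Jacobian of range w.r.t. position
    # Inflate measurement variance as the signal degrades below the LOS level.
    scale = 1.0 if rssi is None else max(1.0, (rssi_los - rssi) / 5.0 + 1.0) ** 2
    R = np.array([[r0 * scale]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ np.array([z - pred])
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```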

25 pages, 4182 KiB  
Article
W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
by Dingji Luo, Yucan Huang, Xuchao Huang, Mingda Miao and Xueshan Gao
Sensors 2024, 24(17), 5662; https://doi.org/10.3390/s24175662 - 30 Aug 2024
Viewed by 516
Abstract
In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing visual–inertial estimation inaccuracies caused by redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot's body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model, employ pre-integration to derive the pose-constraint residuals and their Jacobian matrices, and use marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal poses and landmark points of the ground-moving robot. A comparison with the ORB-SLAM3 algorithm on the recorded indoor environment datasets shows that the proposed algorithm achieves significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy is between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results.
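The SE(2)/SE(3) split described above is a standard construction; a minimal sketch, assuming a fixed (hypothetical) body-to-camera extrinsic T_bc, of lifting a planar body pose to the camera's SE(3) pose:

```python
import numpy as np

def se2_to_se3(x, y, yaw):
    """Embed a planar body pose (x, y, yaw) into a 4x4 SE(3) matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = [x, y, 0.0]
    return T

def camera_pose(x, y, yaw, T_bc):
    """Camera pose in SE(3): world-from-body composed with body-from-camera."""
    return se2_to_se3(x, y, yaw) @ T_bc

# Hypothetical extrinsic: camera 10 cm above the body origin, aligned axes.
T_bc = np.eye(4); T_bc[2, 3] = 0.10
T_wc = camera_pose(1.0, 2.0, np.pi / 4, T_bc)
```

Only (x, y, yaw) would be free variables in the optimization, which is exactly the removal of redundant pose degrees of freedom the abstract describes.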

24 pages, 1413 KiB  
Article
Loop Detection Method Based on Neural Radiance Field BoW Model for Visual Inertial Navigation of UAVs
by Xiaoyue Zhang, Yue Cui, Yanchao Ren, Guodong Duan and Huanrui Zhang
Remote Sens. 2024, 16(16), 3038; https://doi.org/10.3390/rs16163038 - 19 Aug 2024
Viewed by 463
Abstract
The loop closure detection (LCD) methods in Unmanned Aerial Vehicle (UAV) Visual Inertial Navigation Systems (VINS) are often affected by insufficient image texture information and limited observational perspectives, constraining UAV positioning accuracy and reducing the capability to perform complex tasks. This study proposes a Bag-of-Words (BoW) LCD method based on Neural Radiance Fields (NeRF), which estimates camera poses from existing images and achieves rapid scene reconstruction through NeRF. A method is designed to select virtual viewpoints and render images along the flight trajectory using a specific sampling approach, expanding the limited observational angles, mitigating the impact of image blur and insufficient texture information at particular viewpoints, and enlarging the set of loop closure candidate frames to improve the accuracy and success rate of LCD. Additionally, a BoW vector construction method that incorporates the importance of similar visual words, together with an adapted virtual-image filtering and comprehensive scoring method, is designed to determine loop closures. Applied to VINS-Mono and ORB-SLAM3 and compared with the advanced BoW-model LCDs of the two systems, the NeRF-based BoW LCD method detects more than 48% additional correct loop closures, while the mean navigation positioning error of the system is reduced by over 46%, validating the effectiveness and superiority of the proposed method and demonstrating its significance for improving the navigation accuracy of VINS.
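The paper's word-importance weighting and scoring are its own; the sketch below shows only the baseline TF-IDF bag-of-words scoring such LCD pipelines build on, with cosine similarity between database frames and the query.

```python
import numpy as np

def bow_vector(word_ids, idf, vocab_size):
    """Build an L2-normalized TF-IDF BoW vector from visual word ids."""
    v = np.zeros(vocab_size)
    ids, counts = np.unique(word_ids, return_counts=True)
    v[ids] = counts / len(word_ids) * idf[ids]     # tf * idf per word
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def best_loop_candidate(query_words, db_words_list, idf, vocab_size, min_score=0.3):
    """Return (index, score) of the most similar database frame, or None."""
    q = bow_vector(query_words, idf, vocab_size)
    scores = [q @ bow_vector(w, idf, vocab_size) for w in db_words_list]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= min_score else None
```

Rendered NeRF views would simply enter `db_words_list` alongside real keyframes, which is how the enlarged candidate set described above would plug in.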

14 pages, 3371 KiB  
Technical Note
Pose Estimation Based on Bidirectional Visual–Inertial Odometry with 3D LiDAR (BV-LIO)
by Gang Peng, Qiang Gao, Yue Xu, Jianfeng Li, Zhang Deng and Cong Li
Remote Sens. 2024, 16(16), 2970; https://doi.org/10.3390/rs16162970 - 14 Aug 2024
Viewed by 799
Abstract
Because of the limitations of any single sensor, visual SLAM with only a camera detects few effective features under poor lighting or in texture-less scenes, while LiDAR-only SLAM degrades in unstructured environments and open spaces, reducing the accuracy of pose estimation and the quality of mapping. To solve this problem, and to exploit the high efficiency of visual odometry together with the high accuracy of LiDAR odometry, this paper investigates multi-sensor fusion of bidirectional visual–inertial odometry with 3D LiDAR for pose estimation. The method couples the IMU with each direction of the bidirectional visual tracking, and the LiDAR odometry is obtained with the assistance of the bidirectional visual–inertial estimates. A factor graph optimization is constructed, which effectively improves the accuracy of pose estimation. The algorithm is compared with LIO-LOAM, LeGO-LOAM, VINS-Mono, and others on challenging datasets such as KITTI and M2DGR. The results show that the method effectively improves pose estimation accuracy and has high application value for mobile robots.
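Factor graph optimization is named but not detailed in the abstract; as a bare-bones illustration of the idea, the sketch below solves a toy 1D pose graph (odometry factors plus one loop closure factor) by weighted linear least squares. Real systems optimize over SE(3) with libraries such as GTSAM or Ceres.

```python
import numpy as np

def solve_linear_pose_graph(n_poses, factors):
    """factors: list of (i, j, measured_j_minus_i, information_weight).
    Pose 0 is fixed at the origin (gauge constraint)."""
    A, b = [], []
    A.append(np.eye(1, n_poses, 0) * 1e6); b.append(0.0)   # prior on pose 0
    for i, j, meas, w in factors:
        row = np.zeros(n_poses)
        row[j], row[i] = 1.0, -1.0                         # x_j - x_i = meas
        A.append(np.sqrt(w) * row[None, :]); b.append(np.sqrt(w) * meas)
    A = np.vstack(A); b = np.array(b)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Odometry says each step moves +1.0; a loop closure says pose 3 == pose 0.
factors = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0),
           (3, 0, 0.0, 100.0)]   # strong loop factor pulls the chain closed
x = solve_linear_pose_graph(4, factors)
```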

22 pages, 5331 KiB  
Article
Rapid Initialization Method of Unmanned Aerial Vehicle Swarm Based on VIO-UWB in Satellite Denial Environment
by Runmin Wang and Zhongliang Deng
Drones 2024, 8(7), 339; https://doi.org/10.3390/drones8070339 - 22 Jul 2024
Cited by 1 | Viewed by 676
Abstract
In environments where satellite signals are blocked, initializing UAV swarms quickly is a technical challenge: indoors or in areas with weak satellite signals, it is difficult to establish the relative positions of the swarm. Two common initialization methods are joint SLAM initialization with cameras, which increases the communication burden due to image feature analysis, and obtaining a rough positional relationship from prior information via a device such as a magnetic compass, which lacks accuracy. In recent years, visual–inertial odometry (VIO) technology has progressed significantly, providing new solutions: with improved computing power and enhanced VIO accuracy, the relative position relationship can now be established through the movement of the drones themselves. This paper proposes a two-stage robust initialization method for swarms of more than four UAVs, suitable for larger-scale satellite-denied scenarios. First, the paper analyzes the Cramér–Rao lower bound (CRLB) and the moving-configuration problem of the cluster to determine the optimal anchor nodes for the algorithm. A strategy then screens anchor nodes that are close to the CRLB, and an optimization problem is constructed to solve the position relationship between anchor nodes from the relative motion and ranging relationships between UAVs. This optimization problem includes quadratic as well as linear constraints and is a quadratically constrained quadratic program (QCQP) with high robustness and precision. After addressing the anchor-node problem, the paper simplifies and improves a fast swarm cooperative positioning algorithm that is faster than the traditional multidimensional scaling (MDS) algorithm. Theoretical simulations and real UAV tests demonstrate that the proposed algorithm effectively solves the UAV swarm initialization problem under satellite signal denial.
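The paper's QCQP formulation isn't reproduced in the abstract; the sketch below shows only the basic building block, estimating one UAV's relative position from UWB ranges to known anchor nodes by nonlinear least squares (the anchor layout and noise values are made up for illustration).

```python
import numpy as np
from scipy.optimize import least_squares

def locate_from_ranges(anchors, ranges, x0):
    """Estimate a 3D position from ranges to known anchor positions.

    anchors: (M, 3) anchor positions, ranges: (M,) UWB range measurements.
    """
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges
    return least_squares(residuals, x0).x

# Illustrative setup: four anchors and noisy ranges to a true position.
rng = np.random.default_rng(0)
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 5]], float)
p_true = np.array([4.0, 6.0, 2.0])
ranges = np.linalg.norm(anchors - p_true, axis=1) + rng.normal(0, 0.05, 4)
p_hat = locate_from_ranges(anchors, ranges, x0=np.array([5.0, 5.0, 1.0]))
```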

22 pages, 9635 KiB  
Article
Real-Time Estimation of Tree Position, Tree Height, and Tree Diameter at Breast Height Point, Using Smartphones Based on Monocular SLAM
by Jueying Su, Yongxiang Fan, Abdul Mannan, Shan Wang, Lin Long and Zhongke Feng
Forests 2024, 15(6), 939; https://doi.org/10.3390/f15060939 - 29 May 2024
Cited by 1 | Viewed by 851
Abstract
Precisely estimating the position, diameter at breast height (DBH), and height of trees is essential in forest resource inventory. Augmented reality (AR)-based devices help overcome the issue of inconsistent global point cloud data under thick forest canopies with insufficient Global Navigation Satellite System (GNSS) coverage. Although monocular simultaneous localization and mapping (SLAM) is one of the current mainstream systems, there is still no monocular SLAM solution for forest resource inventories, particularly for the precise measurement of inclined trees. We developed a forest plot survey system based on monocular SLAM that uses the array cameras and Inertial Measurement Unit (IMU) sensors of smartphones, combined with augmented reality technology, to estimate the position, DBH, and height of trees within forest plots in real time. Results from the tested plots show that the tree position estimation is unbiased, with an RMSE of 0.12 m and 0.11 m in the x-axis and y-axis directions, respectively; the DBH estimation bias is −0.17 cm (−0.65%), with an RMSE of 0.83 cm (3.59%); and the height estimation bias is −0.1 m (−0.95%), with an RMSE of 0.99 m (5.38%). This study will be useful for designing an algorithm to estimate the DBH and position of inclined trees using point clouds constrained by sectional planes at breast height on the trunk, for developing an algorithm to estimate the height of inclined trees using the relationship between rays and plane positions, and for providing observers with visual measurement results through augmented reality, allowing them to judge the accuracy of the estimates intuitively. The system has significant potential applications in forest resource management and ecological research.
(This article belongs to the Topic Individual Tree Detection (ITD) and Its Applications)
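The sectional-plane algorithm is described only at a high level; one common primitive it implies is fitting a circle to the stem cross-section points at breast height, from which DBH = 2r. A minimal algebraic (Kåsa) circle fit might look like this, with the slicing of the SLAM point cloud at 1.3 m assumed:

```python
import numpy as np

def fit_circle_kasa(pts):
    """Algebraic (Kasa) circle fit to 2D points: returns center (cx, cy), r.

    Solves the linear system  [2x 2y 1] [cx cy c]^T = x^2 + y^2,
    with r^2 = c + cx^2 + cy^2.
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

# Stem cross-section: points sliced at breast height (values illustrative).
theta = np.linspace(0, 2 * np.pi, 50)
pts = np.column_stack([0.14 * np.cos(theta) + 3.0,
                       0.14 * np.sin(theta) + 1.0])   # a 28 cm DBH stem
center, r = fit_circle_kasa(pts)
dbh_cm = 2 * r * 100
```

For an inclined tree, the slicing plane would be taken perpendicular to the stem axis rather than horizontal, which is the constraint the abstract alludes to.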

26 pages, 18657 KiB  
Article
Development of Unmanned Aerial Vehicle Navigation and Warehouse Inventory System Based on Reinforcement Learning
by Huei-Yung Lin, Kai-Lun Chang and Hsin-Ying Huang
Drones 2024, 8(6), 220; https://doi.org/10.3390/drones8060220 - 28 May 2024
Cited by 1 | Viewed by 1126
Abstract
In this paper, we explore indoor positioning technologies for UAVs, as well as navigation techniques for path planning and obstacle avoidance. The objective was to perform warehouse inventory tasks, using a drone to search for barcodes or markers to identify objects. For indoor positioning, we employed visual-inertial odometry (VIO), ultra-wideband (UWB), AprilTag fiducial markers, and simultaneous localization and mapping (SLAM), covering global positioning, local positioning, and pre-mapping positioning, and compared the merits and drawbacks of the various techniques and trajectories. For UAV navigation, we combined SLAM-based RTAB-Map indoor mapping with ROS navigation path planning for indoor environments. This system enables precise drone positioning indoors and uses global and local path planners to generate flight paths that avoid dynamic, static, known, and unknown obstacles, demonstrating high practicality and feasibility. To achieve warehouse inventory inspection, a reinforcement learning approach was proposed that recognizes markers by adjusting the UAV's viewpoint. We addressed several main problems in inventory management, including efficient path planning while ensuring a given detection rate. Two reinforcement learning techniques, AC (actor–critic) and PPO (proximal policy optimization), were implemented based on AprilTag identification. Testing in both simulated and real-world environments validated the effectiveness of the proposed method.
(This article belongs to the Section Drone Design and Development)
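PPO's clipped surrogate objective is standard even though the paper's state/action design for viewpoint adjustment is not given in the abstract; a minimal numpy sketch of the loss:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate loss (to be minimized).

    logp_new / logp_old : log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages.
    """
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Example: a positive-advantage action pushes the ratio up, but the clip at
# 1 + eps stops the policy from moving too far in a single update.
loss = ppo_clip_loss(np.log([0.5, 0.3]), np.log([0.4, 0.35]),
                     np.array([1.0, -0.5]))
```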

20 pages, 4786 KiB  
Article
VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information
by Yinglong Wang, Xiaoxiong Liu, Minkun Zhao and Xinlong Xu
ISPRS Int. J. Geo-Inf. 2024, 13(5), 163; https://doi.org/10.3390/ijgi13050163 - 13 May 2024
Cited by 2 | Viewed by 1322
Abstract
To ensure accurate autonomous localization of mobile robots in environments with dynamic objects, this paper proposes a deep learning-based Visual Inertial SLAM technique that addresses the limited real-time performance of deep learning algorithms and the poor robustness of purely visual geometric algorithms. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points; for image frames without semantic information, a motion probability propagation model determines the prior motion probabilities of the feature points. Furthermore, considering that inertial measurements are unaffected by dynamic objects, the method integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, the positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance.
(This article belongs to the Topic Artificial Intelligence in Navigation)
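The abstract sketches several interacting probability models without equations; the toy functions below illustrate just the fusion and elimination steps, combining a semantic prior with an IMU-predicted reprojection check and an adaptive threshold. The functional forms here are invented for illustration, not taken from the paper.

```python
import numpy as np

def motion_probability(p_semantic, reproj_err, err_scale=2.0):
    """Combine a semantic motion prior with geometric evidence.

    p_semantic : prior motion probability from segmentation (0..1)
    reproj_err : reprojection error (pixels) of the feature against the
                 pose predicted by inertial integration; a large error
                 suggests the point itself moved. Fusion rule illustrative.
    """
    p_geom = 1.0 - np.exp(-reproj_err / err_scale)
    return 1.0 - (1.0 - p_semantic) * (1.0 - p_geom)   # noisy-OR fusion

def keep_static(points, probs, k=1.0):
    """Adaptive threshold: mean + k * std of this frame's probabilities."""
    thr = probs.mean() + k * probs.std()
    return points[probs < thr]
```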

24 pages, 2438 KiB  
Review
Visual SLAM for Unmanned Aerial Vehicles: Localization and Perception
by Licong Zhuang, Xiaorong Zhong, Linjie Xu, Chunbao Tian and Wenshuai Yu
Sensors 2024, 24(10), 2980; https://doi.org/10.3390/s24102980 - 8 May 2024
Viewed by 1887
Abstract
Localization and perception play an important role as the basis of autonomous Unmanned Aerial Vehicle (UAV) applications, providing the internal state of motion and an external understanding of the environment. Simultaneous Localization And Mapping (SLAM), one of the critical techniques for localization and perception, is undergoing a technological upgrade driven by developments in embedded hardware, multi-sensor technology, and artificial intelligence. This survey covers the development of visual SLAM as a basis for UAV applications. Solutions to critical problems in visual SLAM are presented by reviewing state-of-the-art and newly published algorithms, outlining research progress and directions in three essential aspects: real-time performance, texture-less environments, and dynamic environments. Visual–inertial fusion and learning-based enhancement are discussed for UAV localization and perception to illustrate their roles in UAV applications, and the trends in UAV localization and perception are presented. Algorithm components, camera configurations, and data processing methods are also introduced to give comprehensive preliminaries. The paper covers visual SLAM and its related technologies over the past decade, with a specific focus on autonomous UAV applications, summarizing current research, revealing potential problems, and outlining future trends from academic and engineering perspectives.

15 pages, 4438 KiB  
Article
PSMD-SLAM: Panoptic Segmentation-Aided Multi-Sensor Fusion Simultaneous Localization and Mapping in Dynamic Scenes
by Chengqun Song, Bo Zeng, Jun Cheng, Fuxiang Wu and Fusheng Hao
Appl. Sci. 2024, 14(9), 3843; https://doi.org/10.3390/app14093843 - 30 Apr 2024
Cited by 2 | Viewed by 846
Abstract
Multi-sensor fusion is pivotal in augmenting the robustness and precision of simultaneous localization and mapping (SLAM) systems. The LiDAR–visual–inertial approach has been empirically shown to combine the benefits of these sensors for SLAM across various scenarios. Furthermore, panoptic segmentation methods can deliver pixel-level semantic and instance segmentation data in a single pass. This paper delves deeper into these methodologies, introducing PSMD-SLAM, a novel panoptic segmentation-aided multi-sensor fusion SLAM approach tailored for dynamic environments. Our approach employs probability propagation-based and PCA-based clustering techniques, supplemented by panoptic segmentation, to detect dynamic objects and remove them from the visual and LiDAR data, respectively. We also introduce a module for robust real-time estimation of the 6D poses of dynamic objects. Tests on a publicly available dataset show that PSMD-SLAM outperforms other SLAM algorithms in accuracy and robustness, especially in dynamic environments.
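As a rough illustration of flagging dynamic LiDAR clusters (the paper's PCA-based and probability-propagation details are not in the abstract), the sketch below ego-motion-compensates a cluster between two scans and flags it as dynamic if its centroid moved more than a threshold.

```python
import numpy as np

def is_dynamic_cluster(pts_prev, pts_curr, T_curr_from_prev, thresh=0.2):
    """Flag a LiDAR cluster as dynamic via centroid motion.

    pts_prev, pts_curr : (N, 3) / (M, 3) cluster points in their scan frames
    T_curr_from_prev   : 4x4 ego-motion transform from SLAM/IMU odometry
    A static cluster's ego-motion-compensated centroid should barely move.
    """
    c_prev = np.append(pts_prev.mean(axis=0), 1.0)     # homogeneous centroid
    c_prev_in_curr = (T_curr_from_prev @ c_prev)[:3]   # compensate ego-motion
    displacement = np.linalg.norm(pts_curr.mean(axis=0) - c_prev_in_curr)
    return displacement > thresh
```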

27 pages, 23020 KiB  
Article
Seamless Fusion: Multi-Modal Localization for First Responders in Challenging Environments
by Dennis Dahlke, Petros Drakoulis, Anaida Fernández García, Susanna Kaiser, Sotiris Karavarsamis, Michail Mallis, William Oliff, Georgia Sakellari, Alberto Belmonte-Hernández, Federico Alvarez and Dimitrios Zarpalas
Sensors 2024, 24(9), 2864; https://doi.org/10.3390/s24092864 - 30 Apr 2024
Viewed by 1046
Abstract
In dynamic and unpredictable environments, the precise localization of first responders and rescuers is crucial for effective incident response. This paper introduces a novel approach leveraging three complementary localization modalities: visual-based, Galileo-based, and inertial-based. Each modality contributes uniquely to the final Fusion tool, which provides seamless indoor and outdoor localization without reliance on pre-existing infrastructure, a property essential for maintaining responder safety and optimizing operational effectiveness. The visual-based localization method uses an RGB camera coupled with a modified implementation of the ORB-SLAM2 method, enabling operation with or without prior area scanning. The Galileo-based localization method employs a lightweight prototype equipped with a high-accuracy GNSS receiver board, tailored to the specific needs of first responders. The inertial-based localization method uses sensor fusion, primarily leveraging smartphone inertial measurement units, to predict and adjust first responders' positions incrementally, compensating for GPS signal attenuation indoors. A comprehensive validation test across various environmental conditions demonstrates the efficacy of the proposed fused localization tool: the solution always provides a location regardless of the conditions (indoors, outdoors, etc.), with an overall mean error of 1.73 m.
(This article belongs to the Special Issue Multimodal Sensing Technologies for IoT and AI-Enabled Systems)
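The internals of the Fusion tool are not given in the abstract; one generic way such per-modality estimates are merged is inverse-covariance weighting, sketched below under the assumption that each modality reports a position and covariance in a common frame.

```python
import numpy as np

def fuse_modalities(estimates):
    """Inverse-covariance-weighted fusion of independent position fixes.

    estimates: list of (x, P) with x a (2,) or (3,) position and P its
    covariance. Assumes independent errors in a shared coordinate frame.
    """
    info = sum(np.linalg.inv(P) for _, P in estimates)
    mean = np.linalg.solve(info, sum(np.linalg.inv(P) @ x for x, P in estimates))
    return mean, np.linalg.inv(info)

# e.g. visual SLAM, Galileo/GNSS, and inertial fixes (values illustrative)
fixes = [(np.array([10.2, 5.1]), np.diag([0.5, 0.5])),
         (np.array([10.8, 4.7]), np.diag([2.0, 2.0])),
         (np.array([9.9, 5.3]), np.diag([1.0, 1.0]))]
pos, cov = fuse_modalities(fixes)
```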

28 pages, 18297 KiB  
Article
LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme
by Zhenbin Liu, Zengke Li, Ao Liu, Kefan Shao, Qiang Guo and Chuanhao Wang
Remote Sens. 2024, 16(9), 1524; https://doi.org/10.3390/rs16091524 - 25 Apr 2024
Cited by 4 | Viewed by 1349
Abstract
With the development of simultaneous localization and mapping technology in the field of automated driving, SLAM schemes are no longer limited to a single sensor and are developing in the direction of multi-sensor fusion to enhance robustness and accuracy. In this study, a localization and mapping scheme named LVI-Fusion, based on multi-sensor fusion of camera, lidar, and IMU, is proposed. Different sensors have different data acquisition frequencies; to solve the resulting time inconsistency when tightly coupling heterogeneous sensor data, a time alignment module aligns the timestamps of the lidar, camera, and IMU. An image segmentation algorithm segments dynamic targets in the image and extracts static key points, optical flow tracking is performed on the static key points, and a robust feature point depth recovery model is proposed to achieve robust estimation of feature point depth. Finally, a lidar constraint factor, an IMU pre-integration constraint factor, and a visual constraint factor together construct the error equation, which is processed with a sliding window-based optimization module. Experimental results show that the proposed algorithm has competitive accuracy and robustness.
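The time alignment module is described only by its purpose; a minimal sketch of the usual mechanism, resampling higher-rate IMU readings onto camera or lidar timestamps by linear interpolation:

```python
import numpy as np

def align_to_timestamps(t_target, t_src, values_src):
    """Linearly interpolate per-channel sensor values onto target timestamps.

    t_target  : (N,) timestamps of the slower sensor (e.g., camera frames)
    t_src     : (M,) timestamps of the faster sensor (e.g., IMU, M >> N)
    values_src: (M, C) measurements (e.g., gyro xyz + accel xyz, C = 6)
    """
    return np.column_stack([np.interp(t_target, t_src, values_src[:, c])
                            for c in range(values_src.shape[1])])

# e.g. resample 200 Hz IMU onto 10 Hz camera timestamps (values illustrative)
t_imu = np.arange(0.0, 1.0, 0.005)
imu = np.random.default_rng(1).normal(size=(len(t_imu), 6))
t_cam = np.arange(0.0, 1.0, 0.1)
imu_at_cam = align_to_timestamps(t_cam, t_imu, imu)
```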
