Article

Adaptive Multi-Sensor Fusion Localization Method Based on Filtering

1 Hubei Key Laboratory of Modern Auto Parts Technology, Wuhan University of Technology, Wuhan 470030, China
2 Auto Parts Technology Hubei Collaborative Innovation Center, Wuhan University of Technology, Wuhan 470030, China
3 Hubei Technology Research Center of New Energy and Intelligent Connected Vehicle Engineering, Wuhan University of Technology, Wuhan 470030, China
4 Commercial Product R&D Institute, Dongfeng Automobile Co., Ltd., Wuhan 470000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2225; https://doi.org/10.3390/math12142225
Submission received: 24 June 2024 / Revised: 13 July 2024 / Accepted: 15 July 2024 / Published: 17 July 2024

Abstract:
High-precision positioning is a fundamental requirement for autonomous vehicles. However, the accuracy of single-sensor positioning technology can be compromised in complex scenarios due to inherent limitations. To address this issue, we propose an adaptive multi-sensor fusion localization method based on the error-state Kalman filter. By incorporating a tightly coupled laser inertial odometer that utilizes the Normal Distribution Transform (NDT), we constructed a multi-level fuzzy evaluation model for posture transformation states. This model assesses the reliability of Global Navigation Satellite System (GNSS) data and the laser inertial odometer when GNSS signals are disrupted, prioritizing data with higher reliability for posture updates. Real vehicle tests demonstrate that our proposed positioning method satisfactorily meets the positioning accuracy and robustness requirements for autonomous driving vehicles in complex environments.

1. Introduction

As the scientific and technological landscape continues to evolve, there has been a significant increase in the number of autonomous driving test vehicles. This has led to heightened public interest in autonomous driving. The fundamental technologies that support autonomous driving vehicles include perception technology, positioning technology, decision-making planning, and control technology [1]. Of these, the accuracy and reliability of positioning are of utmost importance.
The Global Navigation Satellite System (GNSS) is widely employed due to its capacity to provide long-term, high-precision global positioning under optimal signal conditions. However, in complex urban environments, the accuracy of the system is compromised by factors such as high-rise buildings, tunnels, and viaducts. In contrast, the Inertial Measurement Unit (IMU) can consistently provide high-frequency vehicle attitude information through its relatively independent measurement method. As a result, numerous studies have integrated the GNSS and IMUs to improve positioning accuracy in urban roads and intricate scenarios. In [2], the authors loosely coupled a dual-antenna GNSS with an inertial sensor and proposed a fault detection and elimination strategy to eliminate GNSS attitude anomalies. In [3], the GNSS and IMUs are tightly coupled based on the differential carrier phase and a strategy for handling GNSS cycle jumps is proposed. In [4], the authors introduce an IMU correction value prediction model based on a double-check GNSS assessment that adaptively adjusts the weights of the GNSS measurements and neural network predictions based on the duration of the integrated system’s GNSS faults to assist RTK/IMU integration under GNSS outage or fault conditions. In [5], an in-motion rough heading alignment method using a single-antenna GNSS and a low-level IMU based on the basic principle of trajectory similarity is proposed. In [6], a novel hybrid GPS/INS/Doppler velocity log (DVL) positioning method is proposed, which introduces DVL as the reference information to assist the GPS module to correct the divergence error of INS. Despite this, reference [7] points out that when the GNSS signal is interfered with for a long time or interrupted for a short time, the combined navigation algorithm that only integrates the low-cost IMU will still produce large positioning errors.
To address the issue of significant positioning errors resulting from interference with GNSS signals by environmental factors, a growing number of researchers have begun to explore local positioning technology that combines Inertial Measurement Units (IMUs) and Light Detection and Ranging (LiDAR). LiDAR is an active detection sensor that excels in environmental perception due to its high resolution, robustness, and low noise. It can accurately capture three-dimensional spatial information and exhibits strong adaptability. Notably, it can continue to provide stable data even in adverse weather conditions, making it a crucial tool in fields such as autonomous driving and terrain mapping. In recent years, advancements in manufacturing technology have led to the widespread adoption of LiDAR, and significant progress has been made in the research of this technology. The authors of [8] proposed a normal distribution transform (NDT) algorithm for registering two-dimensional point cloud data, converting point cloud data into a probability density function, and maximizing the sum of the probability densities of two frames of point clouds after matching through optimization. In [9], the authors proposed a three-dimensional NDT algorithm based on the two-dimensional NDT algorithm, dividing the point cloud into several grids, solving the multidimensional normal distribution of each grid, and calculating its probability distribution model. In [10], the authors proposed using spatial clustering to smooth the probability density function and improve the registration of point cloud grid maps. In [11], the authors proposed a probabilistic radar sub-map and threshold module based on NDT matching to remove point cloud noise and implement a laser odometer. In [12], a laser SLAM scheme FAST-LIO algorithm is proposed for the tight coupling of LIDAR and IMUs. 
The algorithm uses an Iterative Extended Kalman Filter (IEKF) to fuse the feature point cloud with the IMU measurements and utilizes the back-propagation process of the IMU measurements to compensate for point cloud distortion. In [13], Faster-LIO, a faster laser inertial odometry framework that uses incremental voxels (iVox) as its nearest-neighbor search structure, is proposed; it significantly improves the computational speed of the odometer while maintaining the same accuracy metrics. In [14], the authors propose a robust data-model dual-driven fusion approach with uncertainty estimation for the LiDAR-IMU localization system, which integrates the advantages of data-driven and model-driven approaches. In [15], the authors employ fast direct LiDAR-inertial odometry (FAST-LIO2) and a Stable Triangle Descriptor as the LiDAR-IMU odometry and the loop detection method, respectively. In [16], a novel LiDAR inertial odometry approach integrating inertial measurement units and a multi-feature joint registration strategy introduces a ground segmentation method and feature categorization strategy, enhancing ground detection mechanisms and optimizing the feature extraction process. In [17], the authors introduce a cost-effective SLAM system that integrates the 3D multi-goal Rapidly exploring Random Tree (RRT) algorithm with Nonlinear Model Predictive Control (NMPC) for autonomous exploration in complex environments. However, a simple laser odometer can only achieve local relative positioning, and long-term operation will produce a large cumulative error. In addition, when the point cloud features are relatively simple, point cloud mismatching can easily occur, causing the laser odometer to degrade.
To address the limitations of laser odometers, numerous researchers have explored multi-sensor fusion positioning technology that integrates LiDAR, the GNSS, and IMUs. In [18], the authors proposed a solution to tightly couple the three sensors, in which the GNSS’ double-differenced pseudorange and carrier-phase measurements, IMU data, and LiDAR plane features are fused at the raw-data level via an extended Kalman filter. In [19], the authors proposed a fusion solution based on graph optimization, which effectively reduced the positioning deviation caused by abnormal single sensor data by fusing the three data at the observation level. In [20], the authors proposed a data mutual inspection strategy to ensure that sensor data are credible during the fusion update process. In [21], an adaptive loosely coupled IMU/GNSS/LiDAR integrated navigation system based on factor graph optimization with sensor weight optimization and fault detection is proposed. In [22], an improved geometry-based cycle slip detection and repair method considering the positioning error at the immediate prior epoch and the influence of the satellite geometry change is proposed. In [23], the authors directly fuse GNSS data with LiDAR and design a parallel filtering scheme to improve the robustness of positioning output. The above research results show that the multi-sensor fusion method can effectively make up for the shortcomings of a single sensor in complex scenarios; however, it requires real-time processing of point cloud data and the accumulated error will increase over time.
This paper introduces a multi-level fuzzy evaluation model and implements an adaptive multi-sensor fusion positioning method based on error-state Kalman filtering. This is designed to address the issue of low accuracy and poor robustness associated with single-sensor positioning technology in complex scenarios. Unlike previous methods, the proposed approach eliminates the need for real-time operation of the laser odometer, thereby reducing the cumulative error caused by local positioning. This significantly conserves vehicle computing resources while maintaining the positioning accuracy of the vehicle in complex environments.

2. Multi-Sensor Fusion Framework Based on an Error-State Kalman Filter

The adaptive multi-sensor fusion positioning system proposed in this paper is depicted in Figure 1. The Inertial Measurement Unit (IMU) serves as the primary sensor due to its minimal interference from external environmental factors and its high-frequency sampling rate. Upon initialization of the IMU, the acquired acceleration and angular velocity data are relayed to the Kalman filter module for recursive computation, thereby determining the vehicle’s posture transformation. At the same time, the quality of the GNSS signal is assessed. If the GNSS signal is normal, it is sent directly to the ESKF module for a status update. If the GNSS signal deteriorates, the laser inertial odometer, tightly coupled with the IMU, is started. GNSS data and laser inertial odometer data are then processed through a multi-level fuzzy evaluation model of the posture transformation state. The model outputs the more reliable result, which serves as observation data to update the positioning data and yield high-precision positioning results.

2.1. Establishing the Navigation Coordinate System

Within the fusion algorithm, data procured from diverse sensors originate from varying coordinate systems.
Global Navigation Satellite System (GNSS) data typically employ the World Geodetic System-84 (WGS-84), wherein longitude, latitude, and elevation data are expressed within this coordinate system. The IMU coordinate system’s origin is situated at the IMU measurement center, with the positive direction of the X-axis pointing forward on the vehicle, the Y-axis to the left, and the Z-axis determined by the right-hand rule. Conversely, the East-North-Up (ENU) coordinate system uses a point on Earth’s surface as its origin, with the positive direction of the X-axis pointing east, the Y-axis north, and the Z-axis determined by the right-hand rule.
This study employs the ENU coordinate system as the primary navigational reference. Consequently, the data from the various sensors must be transformed into this coordinate system.
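The conversion of GNSS data from WGS-84 into the local east-north-up navigation frame described above can be sketched as follows. This is a minimal Python illustration using a local-radii-of-curvature approximation, which is adequate over the few-kilometre test routes in this paper; the function name and the choice of approximation are our own, not taken from the paper's implementation.

```python
import math

# WGS-84 ellipsoid constants
A_WGS84 = 6378137.0          # semi-major axis (m)
E2_WGS84 = 6.69437999014e-3  # first eccentricity squared

def wgs84_to_enu(lat, lon, h, lat0, lon0, h0):
    """Approximate conversion of WGS-84 geodetic coordinates to a local
    East-North-Up (ENU) frame anchored at (lat0, lon0, h0).
    Angles are in degrees, heights in metres."""
    lat0_r = math.radians(lat0)
    s = math.sin(lat0_r)
    # Meridian (north-south) and prime-vertical (east-west) radii of curvature
    r_m = A_WGS84 * (1 - E2_WGS84) / (1 - E2_WGS84 * s * s) ** 1.5
    r_n = A_WGS84 / math.sqrt(1 - E2_WGS84 * s * s)
    east = math.radians(lon - lon0) * r_n * math.cos(lat0_r)
    north = math.radians(lat - lat0) * r_m
    up = h - h0
    return east, north, up
```

For centimetre-level work over larger areas, an exact ECEF-based conversion would replace this approximation.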

2.2. IMU Model

Since the IMU’s measurements may exhibit a constant offset when the sensor is stationary, the IMU is first initialized to obtain its zero bias. The state of the vehicle is judged using information from the vehicle’s wheel speed sensor; initializing while the vehicle is stationary yields a more accurate initial zero bias. After the zero bias of the IMU is obtained, the state is updated according to the measurement equation. The measurement equation of the IMU is as follows:
$\tilde{\alpha} = R^{T}(\alpha - g) + b_a + \eta_a, \qquad \tilde{\omega} = \omega + b_g + \eta_g$ (1)
where $\tilde{\alpha}$ and $\tilde{\omega}$ represent the three-axis acceleration and three-axis angular velocity measured by the IMU sensor. $R^{T}$ represents the transformation matrix from the world coordinate system to the vehicle coordinate system. $b_a$ and $b_g$ represent the zero biases of the accelerometer and gyroscope, respectively. $\eta_a$ and $\eta_g$ represent the noise of the accelerometer and gyroscope, respectively. $g$ represents the acceleration of gravity.
Substitute Equation (1) into the IMU motion equation to obtain the motion model
$R(t+\Delta t) = R(t)\,\mathrm{Exp}\big((\tilde{\omega} - b_g)\Delta t\big)$
$p(t+\Delta t) = p(t) + v\Delta t + \tfrac{1}{2}R(t)(\tilde{\alpha} - b_a)\Delta t^{2} + \tfrac{1}{2}g\Delta t^{2}$
$v(t+\Delta t) = v(t) + R(t)(\tilde{\alpha} - b_a)\Delta t + g\Delta t$ (2)
where $R$, $p$, and $v$ represent rotation, displacement, and velocity, respectively. $\Delta t$ represents the time increment.
Utilizing the IMU’s motion model, it is possible to infer the vehicle’s state at the subsequent moment. This inference is based on the current state of the vehicle and the IMU data for the following moment, thereby facilitating high-frequency updates of the vehicle’s position and posture.
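The high-frequency state recursion of Equation (2) can be sketched in Python as follows. This is an illustrative implementation; the function names and the ENU gravity convention g = [0, 0, −9.81] are assumptions, not taken from the paper's code.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def imu_propagate(R, p, v, acc, gyro, b_a, b_g, dt,
                  g=np.array([0.0, 0.0, -9.81])):
    """One step of the IMU motion model (Equation (2)): rotation, position,
    and velocity are advanced with bias-corrected accelerometer and
    gyroscope readings."""
    acc_c = acc - b_a      # bias-corrected specific force (body frame)
    gyro_c = gyro - b_g    # bias-corrected angular rate
    R_new = R @ so3_exp(gyro_c * dt)
    p_new = p + v * dt + 0.5 * (R @ acc_c) * dt**2 + 0.5 * g * dt**2
    v_new = v + (R @ acc_c) * dt + g * dt
    return R_new, p_new, v_new
```

For a stationary, level vehicle with zero biases, the measured specific force cancels gravity and the state remains unchanged, which is a convenient sanity check.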

2.3. LiDAR-Inertial Odometry

The conventional Iterative Closest Point (ICP) algorithm iteratively determines the closest point pairs among all points in two frame point clouds, subsequently solving the pose transformation matrix using optimization techniques such as the least squares method. The NDT algorithm, on the other hand, aligns the local statistical information of two frame point clouds, thereby reducing computational demands and accelerating the calculation process for the laser odometer. This approach offers high accuracy and is particularly well-suited for real-time positioning in autonomous vehicles.
Therefore, this paper constructs a laser inertial odometer based on the original NDT point cloud registration method by tightly coupling IMU data [9]. First, the IMU data between two frames of point clouds are integrated to obtain the pose transformation, and each point in a frame of point cloud is de-distorted using the pose transformation data and converted to the point cloud coordinate system at the same time. Secondly, the point cloud data are downsampled using the voxel grid filter provided by the PCL point cloud library. The degree of downsampling is determined according to the density of the point cloud in the actual scene. Then, the point cloud data are voxel-divided and the Gaussian distribution in each voxel is calculated. Finally, the IMU and point cloud residual are tightly coupled based on IEKF.
Since the frequency of IMU data is much higher than the frequency of point cloud data, the state is recursively calculated according to Formula (2) when there is no point cloud data. In the NDT registration process, the voxel ID to which each point in the point cloud data belongs is calculated, the relationship between the point and the corresponding ID is established, and the residual consisting of distance and variance is constructed. The estimated state is iteratively calculated using the Gauss–Newton method. The voxel residual is constructed as follows:
$e_i = R q_i + t - \mu_i$
where $R$ and $t$ represent the rotation matrix and translation vector. $q_i$ represents the $i$-th point in the registered point cloud $q$. $\mu_i$ represents the mean of the Gaussian distribution in the voxel containing the point.
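The voxel division, per-voxel Gaussian fitting, and the voxel residual above can be sketched as follows. This is an illustrative Python fragment; the voxel size, the minimum-point threshold, and all names are assumptions rather than the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def build_ndt_voxels(points, voxel_size=1.0):
    """Divide a point cloud into voxels and fit a Gaussian (mean, covariance)
    to the points in each voxel, as in the NDT map construction step."""
    buckets = defaultdict(list)
    for pt in points:
        vid = tuple(np.floor(pt / voxel_size).astype(int))  # voxel ID
        buckets[vid].append(pt)
    voxels = {}
    for vid, pts in buckets.items():
        if len(pts) < 3:  # too few points for a stable covariance estimate
            continue
        pts = np.asarray(pts)
        voxels[vid] = (pts.mean(axis=0), np.cov(pts.T))
    return voxels

def voxel_residual(R, t, q_i, mu_i):
    """Per-point NDT residual: e_i = R q_i + t - mu_i."""
    return R @ q_i + t - mu_i
```

The residuals from all points, weighted by the voxel covariances, form the cost that the Gauss-Newton iterations of the tightly coupled IEKF minimize.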
During each NDT iteration, the IEKF status is updated. The specific update formula is as follows:
$D_k = \big(P_k^{-1} + R_k^{T} M^{-1} R_k\big)^{-1}$
$\Delta x_k = D_k R_k^{T} M^{-1} e_k$
$P_{k+1} = \big(I - D_k R_k^{T} M^{-1} R_k\big) P_k$
where $D_k$ is an intermediate variable. $P$ represents the error covariance. $R_k$ is the Jacobian matrix of the observation equation relative to the error state. $M$ is the covariance matrix of the noise. $I$ is the identity matrix.
However, in scenes with sparse features, matching point cloud frames may lead to mismatches. Therefore, the NDT algorithm is optimized to build a local point cloud map based on time and distance transformation and match each frame of point cloud data with the local point cloud map. The mean and variance update model of the local point cloud map is as follows:
$\mu = \dfrac{m\mu_m + n\mu_n}{m+n}$
$\Sigma = \dfrac{m\big(\Sigma_m + (\mu_m - \mu)(\mu_m - \mu)^{T}\big) + n\big(\Sigma_n + (\mu_n - \mu)(\mu_n - \mu)^{T}\big)}{m+n}$
where m is the number of point clouds in the local point cloud map. n is the number of point clouds in the current point cloud frame. μ and Σ represent the mean and variance of the Gaussian distribution after the current frame point cloud is merged with the local map, respectively. μ m and μ n represent the means in the local map and the current frame point cloud. Σ m and Σ n represent the variances in the local map and the current frame point cloud.
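The mean and variance update model above can be sketched directly. This illustrative Python function merges the Gaussian of a local-map voxel with that of the current frame; the function name is our own.

```python
import numpy as np

def merge_gaussian(mu_m, sigma_m, m, mu_n, sigma_n, n):
    """Merge the Gaussian of a local-map voxel (mu_m, sigma_m, m points)
    with that of the current frame (mu_n, sigma_n, n points), following
    the mean/variance update model of the local point cloud map."""
    mu = (m * mu_m + n * mu_n) / (m + n)
    dm = (mu_m - mu).reshape(-1, 1)
    dn = (mu_n - mu).reshape(-1, 1)
    sigma = (m * (sigma_m + dm @ dm.T) + n * (sigma_n + dn @ dn.T)) / (m + n)
    return mu, sigma
```

Because only the counts, means, and covariances are kept, the local map can absorb each new frame in constant time per voxel without storing raw points.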

2.4. Error-State Kalman Filter

Based on the IMU model, laser inertial odometer model, and GNSS data introduced above, the system state is defined as follows
$x = \begin{bmatrix} \beta & \upsilon & R & b_g & b_a \end{bmatrix}^{T}$
where β is translation, υ is velocity, R is rotation, and b g and b a are the zero bias of the IMU gyroscope and accelerometer.
The state equation of the system is
$\delta x(t+\Delta t) = J\,\delta x + \omega$,
$J = \begin{bmatrix} I & I\Delta t & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I & -R(\tilde{a}-b_a)^{\wedge}\Delta t & 0_{3\times3} & -R\Delta t \\ 0_{3\times3} & 0_{3\times3} & \mathrm{Exp}\big(-(\tilde{w}-b_g)\Delta t\big) & -I\Delta t & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I \end{bmatrix}$,
where $\delta x$ represents the error-state variable. $\Delta t$ represents the time increment. $J$ represents the linearized Jacobian matrix. $\omega$ represents the noise. $I$ is the identity matrix. $\tilde{a}$ and $\tilde{w}$ represent the measurements from the accelerometer and gyroscope, respectively, and $(\cdot)^{\wedge}$ denotes the skew-symmetric matrix of a vector.
Using the Taylor expansion on the above equation, the following formula is thus obtained
$\delta x_{pre} = J\,\delta x$
$P_{pre} = J P J^{T} + Q$,
where P represents the error covariance. δ x p r e represents the prediction of the error state variable. P p r e represents the prediction of the error covariance. Q represents the process noise covariance matrix.
According to the results of the multi-level fuzzy evaluation model of the posture transformation state, the more suitable posture transformation data from the laser odometer and GNSS data are selected and regarded as the observation model of the error state Kalman filter for state updates
$K = P_{pre} F^{T}\big(F P_{pre} F^{T} + E\big)^{-1}$
$x = x_{pre} + K\big(z - h(x_{pre})\big)$
$P = (I - KF)P_{pre}$,
where $K$ is the Kalman gain. $F$ is the Jacobian matrix of the observation equation relative to the error state. $E$ is the covariance matrix of the noise. $z$ is the observation data. $h(\cdot)$ is the observation equation.
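The prediction and update steps of the error-state Kalman filter above can be sketched as follows. This is illustrative Python with generic matrix shapes; the function names are our own, and the observation selected by the fuzzy evaluation model (GNSS or laser inertial odometer) simply enters as z.

```python
import numpy as np

def eskf_predict(dx, P, J, Q):
    """Error-state prediction: propagate the error state and its
    covariance through the linearized Jacobian J."""
    dx_pre = J @ dx
    P_pre = J @ P @ J.T + Q
    return dx_pre, P_pre

def eskf_update(x_pre, P_pre, z, h, F, E):
    """Error-state Kalman update using whichever observation (GNSS or
    laser inertial odometer) the fuzzy evaluation model selected."""
    S = F @ P_pre @ F.T + E              # innovation covariance
    K = P_pre @ F.T @ np.linalg.inv(S)   # Kalman gain
    x = x_pre + K @ (z - h(x_pre))       # corrected state
    P = (np.eye(P_pre.shape[0]) - K @ F) @ P_pre
    return x, P
```

With an identity observation model, the update pulls the state toward the measurement and shrinks the covariance, as expected of a Kalman correction.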

3. Multi-Level Fuzzy Evaluation Function of the Posture Transformation State

When an autonomous vehicle is driving in a complex urban environment, the GNSS signal will be interfered with, resulting in a decrease in positioning accuracy. However, the laser odometer can provide high-precision posture transformation data without relying on external signals. Therefore, a multi-level fuzzy evaluation model of the posture transformation state with adaptive sensor switching is proposed. The specific components of the model include the factor set $A$, the evaluation set $V$, and the fuzzy evaluation matrix $H$.
$A = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}, \quad V = \begin{bmatrix} v_1 & v_2 \end{bmatrix}, \quad H = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix} = \begin{bmatrix} k(a_1, v_1) & k(a_1, v_2) \\ k(a_2, v_1) & k(a_2, v_2) \end{bmatrix}$,
where $a_1$ and $a_2$ in $A$ are both first-level evaluation factors, corresponding to the GNSS and the laser odometer, respectively. $a_{11}$, $a_{12}$, $a_{13}$ represent the three-axis posture changes of the GNSS. $a_{21}$, $a_{22}$, $a_{23}$ represent the three-axis posture changes of the laser odometer. $v_1$ and $v_2$ in $V$ correspond to the evaluation results of the posture update using the GNSS or the laser odometer, respectively. $H$ represents the fuzzy relationship between factor set $A$ and evaluation set $V$. Each element $h_{ij}$ in the matrix represents the membership degree between the corresponding evaluation factor and the evaluation result, and the corresponding $k(a_i, v_j)$ represents the membership function.
When estimating $H$, we must first calculate the fuzzy evaluation matrices $H_1$ and $H_2$ corresponding to the secondary evaluation factors, as follows:
$H_1 = \begin{bmatrix} k(a_{11}, v_1) & k(a_{11}, v_2) \\ k(a_{12}, v_1) & k(a_{12}, v_2) \\ k(a_{13}, v_1) & k(a_{13}, v_2) \end{bmatrix}$,
$H_2 = \begin{bmatrix} k(a_{21}, v_1) & k(a_{21}, v_2) \\ k(a_{22}, v_1) & k(a_{22}, v_2) \\ k(a_{23}, v_1) & k(a_{23}, v_2) \end{bmatrix}$,
where each element $k(a_{ij}, v_l)$ in $H_1$ and $H_2$ is the membership function of the corresponding secondary evaluation factor.
Taking the first row of $H_1$ as an example, the membership functions are modeled as
$k(a_{11}, v_1) = \begin{cases} 1 & a_{11} < 0.2 \\ 1/2 & 0.2 \le a_{11} < 0.7 \\ 0 & a_{11} \ge 0.7 \end{cases} \qquad k(a_{11}, v_2) = \begin{cases} 0 & a_{11} < 0.2 \\ 1/2 & 0.2 \le a_{11} < 0.7 \\ 1 & a_{11} \ge 0.7 \end{cases}$,
where $a_{11}$ represents the X-axis pose change of the GNSS. $k(a_{11}, v_1)$ represents the membership function for using the GNSS for pose observation updates. $k(a_{11}, v_2)$ represents the membership function for using the laser odometer for pose observation updates.
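The piecewise membership functions above can be written out directly. This is an illustrative Python sketch; the thresholds 0.2 and 0.7 follow the text, while the function names are our own.

```python
def k_gnss(delta):
    """Membership degree of the GNSS evaluation result v1 for a
    single-axis pose change 'delta'; thresholds follow the paper."""
    if delta < 0.2:
        return 1.0
    elif delta < 0.7:
        return 0.5
    return 0.0

def k_lidar(delta):
    """Membership degree of the laser-odometer evaluation result v2
    (complementary to the GNSS membership)."""
    return 1.0 - k_gnss(delta)
```

Small pose changes thus vote fully for the GNSS update, large ones fully for the laser odometer, and intermediate ones split the vote evenly.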
The remaining elements in $H_1$ and $H_2$ are modeled as per Formula (11). To ensure a comprehensive evaluation, the analytic hierarchy process (AHP) is employed to compute the weight matrix. The judgment matrix $S_1$, corresponding to the GNSS three-axis posture transformation factor set, is presented below
$S_1 = \begin{bmatrix} s_{11} & s_{12} & s_{13} \\ s_{21} & s_{22} & s_{23} \\ s_{31} & s_{32} & s_{33} \end{bmatrix} = \begin{bmatrix} 1 & 1 & 2 \\ 1 & 1 & 2 \\ 1/2 & 1/2 & 1 \end{bmatrix}$,
where the values of each element s i j of S 1 are determined based on the literature [24].
Based on the elements in Formula (12), the corresponding weights are calculated and standardized
$\bar{t}_{1i} = \dfrac{1}{e}\sum_{j=1}^{e} s_{ij}, \quad i = 1, 2, \ldots, e$,
$t_{1i} = \bar{t}_{1i}\Big/\sum_{i=1}^{e}\bar{t}_{1i}, \quad i = 1, 2, \ldots, e$,
where $\bar{t}_{1i}$ is the weight corresponding to each evaluation factor before standardization. $t_{1i}$ is the weight corresponding to each evaluation factor after standardization. $s_{ij}$ is an element of $S_1$. $e$ represents the number of evaluation factors.
According to Equations (16)–(18), the weight matrix T 1 corresponding to the GNSS data is as follows
$T_1 = \begin{bmatrix} t_1 & t_2 & t_3 \end{bmatrix} = \begin{bmatrix} 0.4 & 0.4 & 0.2 \end{bmatrix}$,
where t 1 , t 2 , t 3 correspond to the weights of the posture transformation evaluation factors of the X, Y, and Z axes of the GNSS, respectively.
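The row-mean weighting that produces T1 from the judgment matrix S1 can be checked with a short sketch. This is illustrative Python; the function name is our own.

```python
import numpy as np

def ahp_weights(S):
    """Row-mean weighting of an AHP judgment matrix: average each row,
    then normalize so the weights sum to one."""
    t_bar = S.mean(axis=1)   # per-factor weight before standardization
    return t_bar / t_bar.sum()

# Judgment matrix S1 for the GNSS three-axis pose-change factors
S1 = np.array([[1.0, 1.0, 2.0],
               [1.0, 1.0, 2.0],
               [0.5, 0.5, 1.0]])
```

Applying it to S1 reproduces the weights 0.4, 0.4, 0.2 stated in the text, giving the X and Y axes twice the influence of the Z axis.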
After verifying T 1 using the hierarchical analysis consistency test method, it is applied to H 1 . The values of h 11 and h 12 are calculated as follows:
$k(a_1, v_1) = t_1 k(a_{11}, v_1) + t_2 k(a_{12}, v_1) + t_3 k(a_{13}, v_1)$
$k(a_1, v_2) = t_1 k(a_{11}, v_2) + t_2 k(a_{12}, v_2) + t_3 k(a_{13}, v_2)$,
The modeling of the three-axis pose change factor set of the laser odometer is the same as that of the GNSS. The above method is used to calculate the first-level weight matrix T and the evaluation set V
$T = \begin{bmatrix} t_1 & t_2 \end{bmatrix} = \begin{bmatrix} 0.67 & 0.33 \end{bmatrix}$,
$V = \begin{bmatrix} v_1 & v_2 \end{bmatrix} = \begin{bmatrix} t_1 k(a_1, v_1) + t_2 k(a_2, v_1) & t_1 k(a_1, v_2) + t_2 k(a_2, v_2) \end{bmatrix}$,
where t 1 and t 2 are the weights of the evaluation factors. v 1 and v 2 are the membership degrees of the evaluation results.
According to the maximum membership principle in the literature [25], the elements in V are compared. If v 1 is larger, the GNSS data are used for the observation update. If v 2 is larger, the laser odometer data are used for the observation update.
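The complete two-level evaluation and maximum-membership selection described in this section can be sketched end-to-end. This is illustrative Python; the matrix shapes and names are assumptions based on the definitions above, not the paper's implementation.

```python
import numpy as np

def fuzzy_select(H1, H2, T_axis, T_top):
    """Two-level fuzzy comprehensive evaluation: combine the 3x2 per-axis
    membership matrices with the axis weights T_axis, then with the
    first-level weights T_top, and pick the sensor whose evaluation
    result has the maximum membership degree."""
    h1 = T_axis @ H1          # row [k(a1, v1), k(a1, v2)] for the GNSS
    h2 = T_axis @ H2          # row [k(a2, v1), k(a2, v2)] for the odometer
    H = np.vstack([h1, h2])
    V = T_top @ H             # evaluation set [v1, v2]
    return "GNSS" if V[0] >= V[1] else "LiDAR odometer"
```

For instance, if every GNSS axis membership fully favors v1, the model selects the GNSS observation; if both sensors' memberships favor v2, it selects the laser odometer.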

4. Experimental Results and Discussion

4.1. Test Conditions

To ascertain the robustness and precision of the proposed algorithm, it was implemented on a real vehicle platform for testing, as depicted in Figure 2. The test vehicle employs the CGI-410 receiver from Shanghai Huace Navigation Technology Ltd. (Shanghai, China), which is situated at the center of the vehicle’s rear axle. The device boasts a gyroscope zero bias stability of 6°/h and an accelerometer zero bias stability of 0.02 mg, with data collection occurring at a frequency of 100 Hz. The LiDAR is the CH128X1 from Shenzhen Leishen Intelligence System Co., Ltd. (Shenzhen, China). The device offers a horizontal scanning field of view of 120° and a vertical scanning field of view of 25° (ranging from −18° to +7°), and data are gathered at a frequency of 10 Hz.
The onboard computing chip gathers data from a variety of sensors, transmits this raw information to the positioning module via the communication middleware ROS2, and subsequently calculates and outputs the vehicle’s position and posture.

4.2. Algorithm Comparison Test

In order to verify the influence of complex scenarios, such as GNSS signal interference, on positioning accuracy, open road scenarios, tunnel scenarios, and viaduct scenarios were selected for testing. The test scenarios are shown in Figure 3.
First, during the testing phase, a high-precision positioning device was installed on the vehicle to gather the actual trajectory of the test environment, which served as the true value. Then, the LIO-SAM algorithm proposed in the literature [26], the GNSS/IMU combined navigation algorithm, and the adaptive multi-sensor fusion positioning algorithm based on the error-state Kalman filter proposed in this paper were deployed on the actual vehicle platform for testing. Since the algorithm in this paper and the combined navigation algorithm do not have loop detection and mapping functions, only the laser odometer module of the LIO-SAM algorithm is used for comparison. Finally, the absolute errors between the test trajectories and the true trajectories of the three scenarios are compared and analyzed.

4.2.1. Public Road Scenario Test

A road with complex scenes was selected in Caidian District, Wuhan City for real vehicle testing. The total length is about 3.8 km, including a tunnel route of about 140 m and a bidirectional route under an elevated bridge of about 700 m. During the test, the average vehicle speed was maintained at 15 km/h. The test environment is shown in Figure 3a.
The global trajectories of the three algorithms are shown in Figure 4. On public roads, the trajectory of the proposed method is more consistent with the true trajectory than those of the LIO-SAM laser odometry and the GNSS/IMU integrated navigation algorithm.
When the GNSS/IMU algorithm operates over extended periods in tunnels and viaducts, the cumulative error stemming from posture recursion based solely on IMU data persists, leading to significant positioning inaccuracies. Because it extracts fewer feature points at open intersections, using the LIO-SAM algorithm results in substantial positioning errors during vehicle maneuvers. The proposed algorithm in this paper employs a combination of different sensors for timely fusion positioning. This is achieved using a multi-level fuzzy evaluation model of posture transformation states, effectively compensating for positioning errors induced by interference with the GNSS signal.
Figure 5 compares the absolute X-axis and Y-axis errors of the algorithm proposed in this paper, the GNSS/IMU integrated navigation algorithm, and the LIO-SAM laser odometry against the true values. Since the positioning error of the GNSS/IMU algorithm under the viaduct is too large, only errors within a 5 m range on the X and Y axes are plotted in order to clearly show the effect of the algorithm in this paper. The error graphs reflect the positioning accuracy and robustness of the three algorithms more clearly.
When a vehicle operates beneath a viaduct for an extended period, the GNSS/IMU algorithm yields significant positioning errors due to interference with the GNSS signal. The maximum error on the X-axis is observed to be 30 m, while the Y-axis error peaks at 100 m. This specific positioning variation is depicted in Figure 4. The LIO-SAM laser odometry also produces substantial positioning errors due to the mismatch of feature points beneath the viaduct amidst dense traffic, with maximum errors of 10 m on the X-axis and 3 m on the Y-axis. Conversely, the maximum positioning error of the algorithm proposed in this study is a mere 0.8 m on the X-axis and 1 m on the Y-axis.
Table 1 shows the relative translation errors of the three algorithms every 150 s. Between 151 s and 300 s, the vehicle passed through the tunnel area, and the GNSS signal was interfered with. The relative translation error of the GNSS/IMU algorithm increased. The LIO-SAM algorithm achieved relatively good performance in the short tunnel. Compared with the open road, the algorithm in this paper only increased the relative translation error by 0.06 m.
Between 451 s and 600 s, the vehicle traversed a road section beneath a viaduct, resulting in severe interference with the GNSS signal. The data presented in the table reveal that the prolonged absence of GNSS data causes significant positioning errors for the GNSS/IMU algorithm when it relies solely on IMU recursion. The positioning error of the LIO-SAM laser odometry did not change significantly. The relative translation error of the algorithm in this paper under the viaduct increases by 0.22 m compared to the open road. The relative translation error data for the different time periods show clearly that the positioning accuracy of the method proposed in this paper on the open road is similar to that of the GNSS/IMU algorithm, while its positioning accuracy in tunnels and under viaducts is much higher than that of the other two algorithms.

4.2.2. Tunnel Scenario Test

The Maanshan Tunnel in Caidian District, Wuhan City was selected as the tunnel test scene. The tunnel is 385 m long. During the test, the average vehicle speed was maintained at 30 km/h. The test environment is shown in Figure 3b.
The global trajectories of the three algorithms are shown in Figure 6. After the vehicle had been in the tunnel for a period of time, the LIO-SAM laser odometry experienced scene degradation, resulting in increased positioning error. When the vehicle first entered the tunnel, the GNSS signal was not greatly disturbed, so the GNSS/IMU algorithm could still maintain a certain positioning accuracy. However, as the interference with the GNSS signal increased, the positioning error from relying solely on IMU recursion grew larger and larger. The algorithm proposed in this paper runs the tightly coupled laser inertial odometer with NDT matching when the GNSS signal is interfered with, and updates promptly whenever the GNSS signal is received again, so the overall positioning accuracy is guaranteed. The LIO-SAM laser odometry fails after running for 200 m, and the GNSS/IMU algorithm fails after running for 280 m, whereas the algorithm in this paper successfully traversed the entire tunnel.

4.2.3. Viaduct Scene Test

The 1.6 km long road under the Wuhan Dongfeng Avenue viaduct was selected as the test scene. The average vehicle speed during the test was maintained at 35 km/h.
The global trajectories of the three algorithms are shown in Figure 7. When the vehicle drives under the viaduct, the GNSS/IMU algorithm again shows an increase in positioning error after a period of GNSS signal loss. The LIO-SAM laser odometry has a large positioning error due to the mismatch of feature points under the viaduct with dense traffic, with maximum positioning errors on the X-axis and Y-axis of 9.67 m and 7.02 m, respectively. The maximum positioning errors of the algorithm in this paper are 1.67 m and 1.92 m, respectively.

5. Conclusions

To address the low positioning accuracy and poor robustness of single-sensor positioning in complex environments, this paper proposed a multi-level fuzzy evaluation model for posture transformation states, together with an adaptive multi-sensor fusion positioning method based on the error-state Kalman filter. The model evaluates the reliability of data from GNSS and the LiDAR-inertial odometer, enabling adaptive selection of the more reliable source for posture updates.
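A single level of the fuzzy evaluation can be sketched as a weighted synthesis of factor memberships followed by the maximum-membership principle. The factor weights and membership values below are illustrative assumptions, not the paper's calibrated model:

```python
# Illustrative sketch of one level of fuzzy comprehensive evaluation:
# B = W * R (weighted synthesis), then the maximum-membership principle
# picks the reliability grade. Weights and memberships are assumed values.

def fuzzy_evaluate(weights, membership):
    """weights: factor weights W; membership: matrix R (factors x grades)."""
    scores = [sum(w * row[j] for w, row in zip(weights, membership))
              for j in range(len(membership[0]))]
    return scores, scores.index(max(scores))   # grade with largest membership

# Two hypothetical factors (e.g. satellite count, dilution of precision)
# rated against three grades: (reliable, uncertain, unreliable).
weights = [0.6, 0.4]
membership = [[0.7, 0.2, 0.1],   # factor 1 memberships per grade
              [0.5, 0.3, 0.2]]   # factor 2 memberships per grade
scores, grade = fuzzy_evaluate(weights, membership)
print(grade)   # -> 0 (the "reliable" grade dominates)
```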
In real-world vehicle tests conducted on public roads, the lateral error of the proposed algorithm remained within 0.8 m and the longitudinal error within 1 m. In both the 385 m tunnel and the 1.6 km road under the viaduct, the overall positioning robustness and accuracy of the proposed algorithm surpassed those of the GNSS/IMU integrated navigation algorithm and the LiDAR odometry of LIO-SAM. These experimental results demonstrate that the proposed adaptive multi-sensor fusion positioning method can achieve sub-meter positioning accuracy in complex urban environments.
In future research, the integration of lane line positioning data will be considered to enhance the constraints on the vehicle’s lateral displacement.

Author Contributions

Writing—original draft preparation, Z.W. and Y.B.; writing—review, J.H., Y.T. and F.C.; experiments, Y.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Major Science and Technology Projects of Hubei Province under grant 2022AAA001 and in part by the Major Research Projects of Hubei Province under grant 2023BAA017.

Data Availability Statement

The data will be made available by the authors on request.

Conflicts of Interest

Author Fei Cheng was employed by Dongfeng Automobile Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Dongfeng Automobile Co., Ltd. had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Yang, P.; Duan, D.; Chen, C.; Cheng, X.; Yang, L. Multi-Sensor Multi-Vehicle (MSMV) Localization and Mobility Tracking for Autonomous Driving. IEEE Trans. Veh. Technol. 2020, 69, 14355–14364.
2. Zhu, F.; Hu, Z.; Liu, W.; Zhang, X. Dual-Antenna GNSS Integrated With MEMS for Reliable and Continuous Attitude Determination in Challenged Environments. IEEE Sens. J. 2019, 19, 3449–3461.
3. Chen, K.; Chang, G.; Chen, C.; Zhu, T. An improved TDCP-GNSS/INS integration scheme considering small cycle slip for low-cost land vehicular applications. Meas. Sci. Technol. 2021, 32, 055006.
4. Sun, R.; Shang, X.; Cheng, Q.; Jiang, L.; Sheng, Q. A tightly coupled GNSS RTK/IMU integration with GA-BP neural network for challenging urban navigation. Meas. Sci. Technol. 2024, 35, 086310.
5. Wu, Q.; Chai, H.; Xiang, M.; Zhang, F.; Feng, X. Rapid Initial Heading Alignment Using Single-Antenna GNSS and a Low-Grade IMU. Mar. Geod. 2024, 1–26.
6. Xiong, H.; Mai, Z.; Tang, J.; He, F. Robust GPS/INS/DVL Navigation and Positioning Method Using Adaptive Federated Strong Tracking Filter Based on Weighted Least Square Principle. IEEE Access 2019, 7, 26168–26178.
7. Li, T.; Zhang, H.; Gao, Z.; Chen, Q.; Niu, X. High-Accuracy Positioning in Urban Environments Using Single-Frequency Multi-GNSS RTK/MEMS-IMU Integration. Remote Sens. 2018, 10, 205.
8. Biber, P.; Strasser, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 27–31 October 2003.
9. Magnusson, M. The Three-Dimensional Normal-Distributions Transform: An Efficient Representation for Registration, Surface Analysis, and Loop Detection. Doctoral Thesis, Örebro University, Örebro, Sweden, 2009.
10. Rapp, M.; Barjenbruch, M.; Hahn, M.; Dickmann, J.; Dietmayer, K. Clustering improved grid map registration using the normal distribution transform. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Republic of Korea, 28 June–1 July 2015.
11. Kung, P.; Wang, C.; Lin, W. A Normal Distribution Transform-Based Radar Odometry Designed For Scanning and Automotive Radars. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021.
12. Xu, W.; Zhang, F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324.
13. Bai, C.; Xiao, T.; Chen, Y.; Wang, H.; Zhang, F.; Gao, X. Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels. IEEE Robot. Autom. Lett. 2022, 7, 4861–4868.
14. Li, Q.; Zhuang, Y.; Huai, J.; Wang, X.; Wang, B.; Cao, Y. A robust data-model dual-driven fusion with uncertainty estimation for LiDAR-IMU localization system. ISPRS J. Photogramm. Remote Sens. 2024, 210, 128–140.
15. Zou, Z.; Yuan, C.; Xu, W.; Li, H.; Zhou, S.; Xue, K.; Zhang, F. LTA-OM: Long-term association LiDAR-IMU odometry and mapping. J. Field Robot. 2024, 1–20.
16. Song, S.; Shi, X.; Ma, C.; Mei, X. MF-LIO: Integrating multi-feature LiDAR inertial odometry with FPFH loop closure in SLAM. Meas. Sci. Technol. 2024, 35, 086308.
17. Pang, C.; Zhou, L.; Huang, X. A Low-Cost 3D SLAM System Integration of Autonomous Exploration Based on Fast-ICP Enhanced LiDAR-Inertial Odometry. Remote Sens. 2024, 16, 1979.
18. Li, X.; Wang, S.; Li, S.; Zhou, Y.; Xia, C.; Shen, Z. Enhancing RTK Performance in Urban Environments by Tightly Integrating INS and LiDAR. IEEE Trans. Veh. Technol. 2023, 72, 9845–9856.
19. Li, X.; Yu, H.; Wang, X.; Li, S.; Zhou, Y.; Chang, H. FGO-GIL: Factor Graph Optimization-Based GNSS RTK/INS/LiDAR Tightly Coupled Integration for Precise and Continuous Navigation. IEEE Sens. J. 2023, 23, 14534–14548.
20. Chiang, K.; Tsai, G.; Chu, H.; El-Sheimy, N. Performance Enhancement of INS/GNSS/Refreshed-SLAM Integration for Acceptable Lane-Level Navigation Accuracy. IEEE Trans. Veh. Technol. 2020, 69, 2463–2476.
21. Wang, S.; Zeng, Q.; Shao, C.; Li, F.; Liu, J. Fault Detection and Interactive Multiple Models Optimization Algorithm Based on Factor Graph Navigation System. Remote Sens. 2024, 16, 1651.
22. Zhang, H.; Qian, C.; Li, W.; Li, B.; Liu, H. A LiDAR-INS-aided geometry-based cycle slip resolution for intelligent vehicle in urban environment with long-term satellite signal loss. GPS Solut. 2024, 28, 61.
23. Li, W.; Liu, G.; Cui, X.; Lu, M. Feature-Aided RTK/LiDAR/INS Integrated Positioning System with Parallel Filters in the Ambiguity-Position-Joint Domain for Urban Environments. Remote Sens. 2021, 13, 2013.
24. Gass, S.I. The Analytic Hierarchy Process: An Exposition. Oper. Res. 2001, 49, 469–486.
25. Zhu, X.; Wang, Y.; Li, D. The Effectiveness Test of the Maximum Membership Principle in Fuzzy Comprehensive Evaluation. Geomat. Spat. Inf. Technol. 2016, 39, 135–137, 143.
26. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021.
Figure 1. Multi-sensor fusion positioning system framework.
Figure 2. Real vehicle test platform.
Figure 3. Satellite image of the test route and actual environment: (a) satellite image of public roads; (b) highway tunnel.
Figure 4. Comparison of public road scene trajectories.
Figure 5. Absolute error comparison of the three algorithms: (a) X-axis error graph; (b) Y-axis error graph.
Figure 6. Tunnel scene trajectory comparison chart.
Figure 7. Comparison of scene trajectories under the viaduct.
Table 1. Relative translation error every 150 s (m/150 s).

            0~150 s   151~300 s   301~450 s   451~600 s   601~750 s   751~900 s
GNSS/IMU    0.13      0.21        0.09        1.89        0.11        0.11
LIO-SAM     1.83      1.61        2.67        2.44        3.15        2.09
Ours        0.07      0.13        0.09        0.31        0.17        0.11
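The metric in Table 1 can be sketched as follows: for each 150 s window, compare the displacement of the estimated trajectory against that of the ground truth. This is an assumed reading of the metric, with hypothetical timestamped samples:

```python
# Illustrative sketch of a windowed relative translation error:
# per 150 s window, the norm of the difference between the estimated
# displacement and the ground-truth displacement over that window.
import math

def relative_translation_errors(est, ref, window=150.0):
    """est, ref: time-aligned lists of (t, x, y); returns one error per window."""
    errors, i = [], 0
    while i < len(est) - 1:
        j = i
        # advance to the sample that closes this window
        while j < len(est) - 1 and est[j][0] - est[i][0] < window:
            j += 1
        de = (est[j][1] - est[i][1], est[j][2] - est[i][2])  # estimated displacement
        dr = (ref[j][1] - ref[i][1], ref[j][2] - ref[i][2])  # reference displacement
        errors.append(math.hypot(de[0] - dr[0], de[1] - dr[1]))
        i = j
    return errors

# Hypothetical data: samples every 150 s moving along the x-axis.
est = [(0.0, 0.0, 0.0), (150.0, 100.5, 0.0), (300.0, 200.2, 0.0)]
ref = [(0.0, 0.0, 0.0), (150.0, 100.0, 0.0), (300.0, 200.0, 0.0)]
print(relative_translation_errors(est, ref))  # one error per 150 s window
```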

Share and Cite

Wang, Zhihong; Bai, Yuntian; Hu, Jie; Tang, Yuxuan; Cheng, Fei. Adaptive Multi-Sensor Fusion Localization Method Based on Filtering. Mathematics 2024, 12, 2225. https://doi.org/10.3390/math12142225
