1. Introduction
The competition for resilient positioning, navigation, and timing (PNT) is a strategic area which directly impacts national security [1]. With the rapid evolution of mobile communication networks and artificial intelligence, the widespread adoption of smart devices has become crucial to the growth of location-based services (LBSs) [2]. Redundant PNT sensors are embedded in a growing range of application scenarios and platforms, such as smart spatiotemporal networks and the next-generation industrial internet. The core challenge remains providing high-precision, continuous, and reliable navigation services to users in highly dynamic and complex environments.
The Global Navigation Satellite System (GNSS) serves as the most critical solution for LBSs, offering high accuracy, real-time capabilities, and extensive coverage, thereby providing continuous and stable navigation information to users in open outdoor environments [3]. However, in urban canyons, under bridges, and in tunnels, GNSS signals suffer significant degradation due to non-line-of-sight and multipath effects, which severely constrains positioning accuracy and continuity. To address these limitations, multi-sensor fusion approaches have been investigated, incorporating technologies such as inertial measurement units (IMUs), LiDAR SLAM, ultra-wideband (UWB), Bluetooth, and axle speed sensors [3]. Li [4] proposed a graph optimization method tightly integrated with an IMU which, in comparison with innovation-based Kalman filtering (KF) on simulated data, demonstrated an ability to identify satellite faults effectively, even when GNSS observations are contaminated. The IMU, as a standalone navigation system, offers independent positioning solutions and anti-jamming capabilities but is susceptible to considerable positional error drift during GNSS signal interruptions [5]. To minimize the risk of positioning failure, Zhen et al. [6,7] demonstrated the complementary nature of LiDAR and UWB sensors in geometrically degraded environments, such as elongated narrow spaces, achieving robust localization estimates via probabilistic sensor fusion methods.
As a next-generation communication technology, fifth-generation (5G) mobile networks have already achieved extensive indoor and outdoor coverage globally. With low latency, high bandwidth, and broad coverage, 5G has the potential to become a critical technology for addressing the limitations of GNSS signals. The 3rd Generation Partnership Project (3GPP) Rel-16 standard [8] specifies 5G positioning accuracy of 3 m (at 80%) indoors and 10 m (at 80%) outdoors, with an end-to-end latency of less than 1 s. As Rel-18 [9], the first release of 5G-Advanced, progresses, 5G positioning accuracy is expected to reach the centimeter level, with applications expanding to autonomous driving, vehicle-to-everything (V2X), and other sidelink scenarios. China Telecom Guangdong led the nation's first commercial implementation of high-precision indoor positioning using a cost-effective 5G UTDOA "1 point X" solution [10], significantly reducing the cost of achieving high-precision positioning through 5G cellular networks. Regarding GNSS/5G/IMU integrated positioning [11], Yin et al. [12] introduced an adaptive integrated navigation approach based on Kalman filtering, integrating GNSS, 5G, and IMU measurements to address frequent satellite signal lock losses and non-line-of-sight errors in seamless indoor-outdoor environments. However, its lack of robust estimation techniques degrades positioning performance under high-interference conditions. It is worth noting that while technologies such as WiFi-RTT and Bluetooth can provide round-trip-time (RTT) measurements over short distances or in indoor environments, 5G solutions, relying on a standardized, large-bandwidth, low-latency, and wide-coverage network foundation, can provide superior positioning capabilities over a wider range, with higher accuracy, and in more complex scenarios.
In practical applications, navigation systems are subject to inevitable errors and faults due to human interference, adverse environments, and hardware aging [13]. If not identified and immediately excluded, these issues can lead to significant degradation in positioning accuracy or even severe errors, which is unacceptable for safety-critical navigation systems [14]. Thus, effective fault detection and exclusion (FDE) schemes are essential for maintaining the reliability and precision of integrated navigation systems. The concept of integrity monitoring for navigation systems originates in the safety domain [15,16], representing a confidence metric in the correctness of position estimates, including the ability to alert users when reliability is compromised. Early FDE approaches primarily focused on redundant navigation sources within GNSSs, with the aim of detecting anomalies in satellite measurements, receiver hardware, or signal distortions. A widely adopted approach is receiver autonomous integrity monitoring (RAIM) [17,18,19]. In recent years, scholars have explored a variety of FDE methods for integrity monitoring, which can be roughly classified into position-domain and measurement-domain approaches, depending on whether fault detection (FD) is derived from positioning solutions or observations [20]. In the position domain, representative FDE algorithms include multiple solution separation (MSS), multiple hypothesis solution separation (MHSS), and advanced receiver autonomous integrity monitoring (ARAIM) [21]. These algorithms make decisions by comparing the test statistics of full-set and subset solutions. Furthermore, Chen [22] proposed a two-level integrity monitoring strategy for multi-source information fusion navigation which removes fault features at both the system and sensor levels and is applicable to integrated navigation systems combining GNSSs, 5G, and barometers (BAs). Inspired by the excellent performance of MHSS ARAIM, researchers have started applying MHSS to Kalman filter integrity monitoring (KFIM) [23]. MHSS has been shown to often yield lower protection levels (PLs) than traditional methods [23], though its high computational cost limits its further development. A representative measurement-domain approach is autonomous integrity monitoring by extrapolation (AIME) [24], which uses transformed residual vectors from the filtering process as fault detection statistics. Building upon this foundation, Yang [25] developed an IMU/GNSS integrated navigation system for urban environments featuring FDE capabilities but did not consider the possibility of erroneous IMU measurements. Kaddour [26] proposed a multi-fault detection algorithm using information-space observation projection, which involves differencing the state vectors in a Kalman filter (KF) [27]. Nevertheless, the rapid variability of reference satellites in urban canyons diminished the algorithm's efficacy. Another issue in GNSS/5G/IMU integrated systems is the presence of undetected measurement faults. Lee [27] employed snapshot innovations for fault detection, yet this method exhibited poor sensitivity to ramp faults. Furthermore, undetected faults tend to accumulate, causing inaccuracies in the filter innovation sequence during recursive computations [28] and resulting in an error-tracking phenomenon which weakens fault detection performance.
To ensure the provision of precise, continuous, and reliable PNT services, we present a fault-tolerant and robust navigation approach designed for the resilient integration of GNSS, 5G, and IMU systems. Sub-filters achieve tight coupling of the observation data, while the main filter dynamically allocates information according to the quality of the observation residuals. A weighted robust adaptive filter based on the maximum correntropy criterion is derived, effectively addressing nonpositivity and singularity in the iterative noise covariance estimation process and eliminating the error-tracking phenomenon. Furthermore, an improved AIME method, termed TSAIME, is developed to mitigate the performance degradation caused by variations in the number of signals between epochs in sequential methods [29]. A covariance-optimal, inflation-based integrity overbounding protection strategy is proposed, in which an optimal scalar inflation factor yields a conservative diagonal overbound of the unknown cross-covariance terms. Through fault exclusion and sensor recovery verification, the dynamically adjusted fusion model is better suited to meeting the requirements of resilient PNT fusion navigation in complex positioning scenarios.
This paper initially presents the tightly integrated GNSS/5G/IMU framework and its corresponding observation model. The causes of error tracking are then explained in detail, followed by an in-depth elaboration of the construction of the weighted robust adaptive filter. A comprehensive scheme for fault detection, exclusion, and sensor recovery verification is subsequently proposed, including the computation of protection levels and an evaluation of algorithm availability. Lastly, the field test results are analyzed to assess the performance of the proposed method when subjected to various fault types, and conclusions are drawn accordingly.
3. Weighted Robust Adaptive Filter
This section introduces an enhanced weighted robust adaptive filter (WRAF), building on the Sage-Husa adaptive Kalman filter (SHAKF). It begins by explaining the causes of error tracking, followed by a discussion of the fundamentals of maximum correntropy. The maximum correntropy criterion (MCC) is then incorporated into the WRAF framework, leading to the derivation of a maximum correntropy-based weighted robust adaptive filter (MCC-WRAF).
3.1. Error Tracking Phenomenon
During the estimation process, minor faults which do not significantly impact positioning are often undetectable by conventional algorithms. Error tracking describes the cumulative effect of these undetected minor faults, resulting in inaccuracies in test statistics. The detailed analysis is as follows.
Assume the observation at epoch $k$ includes faults:
$$\tilde{\boldsymbol{z}}_k=\boldsymbol{H}_k\boldsymbol{x}_k+\boldsymbol{v}_k+\boldsymbol{f}_k$$
The innovation vector containing the faults can be expressed as
$$\tilde{\boldsymbol{r}}_k=\tilde{\boldsymbol{z}}_k-\boldsymbol{H}_k\hat{\boldsymbol{x}}_{k|k-1}=\boldsymbol{H}_k\left(\boldsymbol{x}_k-\hat{\boldsymbol{x}}_{k|k-1}\right)+\boldsymbol{v}_k+\boldsymbol{f}_k$$
where $\hat{\boldsymbol{x}}_{k|k-1}$ denotes the prior state estimate at epoch $k$. The state estimate contains the faults:
$$\hat{\boldsymbol{x}}_k=\hat{\boldsymbol{x}}_{k|k-1}+\boldsymbol{K}_k\tilde{\boldsymbol{r}}_k$$
where $\boldsymbol{K}_k$ denotes the filter gain. For the next epoch, the predicted state with unknown faults is
$$\hat{\boldsymbol{x}}_{k+1|k}=\boldsymbol{\Phi}_{k+1,k}\hat{\boldsymbol{x}}_k$$
where $\boldsymbol{\Phi}_{k+1,k}$ is the state transition matrix. The innovation of the filter at epoch $k+1$ when faults reoccur is
$$\tilde{\boldsymbol{r}}_{k+1}=\boldsymbol{H}_{k+1}\left(\boldsymbol{x}_{k+1}-\hat{\boldsymbol{x}}_{k+1|k}\right)+\boldsymbol{v}_{k+1}-\boldsymbol{H}_{k+1}\boldsymbol{\Phi}_{k+1,k}\boldsymbol{K}_k\boldsymbol{f}_k+\boldsymbol{f}_{k+1}$$
where $-\boldsymbol{H}_{k+1}\boldsymbol{\Phi}_{k+1,k}\boldsymbol{K}_k\boldsymbol{f}_k$ reflects the effect of previously undetected faults on the current innovation and $\boldsymbol{f}_{k+1}$ represents the expected impact of current faults on the detection statistic. The presence of the error term leads to a gradual decrease in the filter innovation due to error accumulation, which reduces the algorithm's sensitivity and prevents the detection of new faults.
Thus, a new filtering approach is required to mitigate the impact of accumulated errors on fault detection statistics in the absence of new faults.
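To make the error-tracking mechanism concrete, the following minimal Python sketch simulates a scalar Kalman filter fed a constant, undetected measurement bias. All model parameters (the scalar dynamics, the noise variances, and the bias magnitude) are illustrative assumptions, not values from this work; the point is only that the printed innovation shrinks as the filter absorbs the fault.

```python
import numpy as np

# Minimal scalar Kalman filter illustrating error tracking: a constant,
# undetected measurement bias f is gradually absorbed into the state
# estimate, so the innovation that feeds the detection statistic shrinks.
# All parameter values are illustrative assumptions.
np.random.seed(0)
phi, h, q, r = 1.0, 1.0, 0.01, 0.25   # transition, observation, noise variances
f = 2.0                               # constant measurement fault (bias)
x_true, x_hat, p = 0.0, 0.0, 1.0
for k in range(1, 21):
    x_true = phi * x_true + np.sqrt(q) * np.random.randn()
    z = h * x_true + np.sqrt(r) * np.random.randn() + f   # faulty observation
    x_pred = phi * x_hat                    # prediction
    p_pred = phi * p * phi + q
    innov = z - h * x_pred                  # innovation (detection input)
    k_gain = p_pred * h / (h * p_pred * h + r)
    x_hat = x_pred + k_gain * innov         # part of f leaks into x_hat here
    p = (1 - k_gain * h) * p_pred
    if k % 5 == 0:
        print(f"epoch {k:2d}: innovation = {innov:+.3f}")
```

With $\phi=h=1$, the estimate converges to the truth plus the bias $f/h$, so later innovations carry almost no trace of the fault, which is precisely the accumulation effect derived above.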
3.2. Maximum Correntropy Criterion
For two random variables $X$ and $Y$, the goal is to maximize their similarity and determine their optimal joint distribution under given constraints. The correntropy is defined as
$$V(X,Y)=\mathbb{E}\left[\kappa(X,Y)\right]=\int \kappa(x,y)\,\mathrm{d}F_{XY}(x,y)$$
Here, $\kappa(\cdot,\cdot)$ represents the kernel function in the feature space obtained through kernel mapping, and $F_{XY}(x,y)$ is the joint distribution function. The Gaussian kernel is commonly used, and it is defined as follows:
$$\kappa(x,y)=G_{\sigma}(e)=\exp\left(-\frac{e^{2}}{2\sigma^{2}}\right),\qquad e=x-y$$
where $\sigma$ represents the kernel bandwidth, determining the weight distribution between second-order and higher-order moments. The kernel function maps points $x$ and $y$ from the original space to $\varphi(x)$ and $\varphi(y)$ in a high-dimensional space. Here, $\varphi$ denotes the mapping function $\varphi:\mathbb{X}\rightarrow\mathcal{F}$, where $\mathcal{F}$ is the high-dimensional feature space. Consider the Taylor series expansion
$$V(X,Y)=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{2^{n}\sigma^{2n}n!}\mathbb{E}\left[(X-Y)^{2n}\right]$$
where $\frac{(-1)^{n}}{2^{n}\sigma^{2n}n!}$ represents the coefficient weighting each even-order moment. Unlike the minimum mean square error (MMSE) criterion used in the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and their variants, correntropy thus captures higher-order statistics, which is especially crucial for managing non-Gaussian noise. In practice, given $N$ samples, correntropy is approximated by the sample estimator
$$\hat{V}(X,Y)=\frac{1}{N}\sum_{i=1}^{N}G_{\sigma}\left(x_{i}-y_{i}\right)$$
When $x_{i}$ is close to $y_{i}$, this indicates higher similarity between the two variables, implying a stronger dependency between the samples and resulting in higher correntropy. By incorporating Lagrange multipliers and normalization constraints, the final optimization problem is expressed as follows:
$$\max_{F_{XY}\in\mathcal{G}}V(X,Y)=\max_{F_{XY}\in\mathcal{G}}\mathbb{E}\left[G_{\sigma}(X-Y)\right]$$
We seek the optimal solution within the feasible set $\mathcal{G}$ of possible joint distributions of $(X,Y)$, aiming to make the filtering result approximate the desired signal as closely as possible. Higher correntropy indicates a better probability combination, offering an improved filter solution for non-Gaussian errors.
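As a quick numerical illustration of the sample estimator above (our own sketch; the data, the kernel bandwidth, and the random seed are arbitrary assumptions), correntropy rewards closely matched sequences and remains bounded in the presence of gross outliers, unlike a mean-squared-error score:

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel G_sigma(e) = exp(-e^2 / (2 * sigma^2))."""
    return np.exp(-np.asarray(e) ** 2 / (2.0 * sigma ** 2))

def sample_correntropy(x, y, sigma=1.0):
    """Sample estimator V_hat(X, Y) = (1/N) * sum_i G_sigma(x_i - y_i)."""
    return float(np.mean(gaussian_kernel(np.asarray(x) - np.asarray(y), sigma)))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
print(sample_correntropy(x, x + rng.normal(scale=0.1, size=1000)))  # close match: near 1
print(sample_correntropy(x, x + rng.standard_t(df=2, size=1000)))   # heavy-tailed errors: lower
```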
3.3. Weighted Robust Filtering Method Based on the Maximum Correntropy Criterion
In dynamic scenarios, Kalman filtering relies on accurate prior state and covariance matrix estimates to balance the error between consecutive observations, ensuring faster convergence. Process and observation noise matrices are usually derived from empirical models, adjusted for the characteristics of the measurement equipment and user expertise, and treated as fixed constants. However, in navigation systems utilizing GNSSs, 5G, and IMUs, constant prior information may not accurately reflect observation quality.
Building on the EKF, the SHAKF enhances adaptability in dynamic systems by estimating the process noise covariance Q and observation noise covariance R in real time, addressing unpredictable noise characteristics. However, the SHAKF still adheres to the MMSE criterion, resulting in poor performance in non-Gaussian or nonlinear noise conditions. Furthermore, the covariance matrix generated by the SHAKF may lose positive definiteness, causing instability during iteration.
To address the aforementioned constraints, the MMSE criterion is replaced by the MCC, which leverages higher-order statistics to better capture non-Gaussian noise and complex error distributions. An innovation-based weighted iterative KF and posterior residual estimation are used to ensure that the observation noise matrix $\boldsymbol{R}$ remains symmetric and stable throughout the calculation. In the prediction phase, the state transition model and prior state information are used to generate the predicted state at epoch $k$, along with its associated error covariance matrix, given by
$$\hat{\boldsymbol{x}}_{k|k-1}=\boldsymbol{\Phi}_{k-1}\hat{\boldsymbol{x}}_{k-1}+\boldsymbol{B}_{k-1}\boldsymbol{u}_{k-1}$$
$$\boldsymbol{P}_{k|k-1}=\boldsymbol{\Phi}_{k-1}\boldsymbol{P}_{k-1}\boldsymbol{\Phi}_{k-1}^{T}+\boldsymbol{Q}_{k-1}$$
where $\boldsymbol{B}_{k-1}$ denotes the control input matrix, $\boldsymbol{u}_{k-1}$ is the control input vector, and $\boldsymbol{Q}_{k-1}$ is the process noise covariance matrix. Its distribution satisfies $\boldsymbol{x}_{k}\mid\boldsymbol{z}_{1:k-1}\sim\mathcal{N}(\hat{\boldsymbol{x}}_{k|k-1},\boldsymbol{P}_{k|k-1})$. The state prediction bias at this moment is
$$\delta\boldsymbol{x}_{k|k-1}=\boldsymbol{x}_{k}-\hat{\boldsymbol{x}}_{k|k-1}$$
This is derived from the linear regression between the predicted values and observations:
$$\begin{bmatrix}\hat{\boldsymbol{x}}_{k|k-1}\\ \boldsymbol{z}_{k}\end{bmatrix}=\begin{bmatrix}\boldsymbol{I}\\ \boldsymbol{H}_{k}\end{bmatrix}\boldsymbol{x}_{k}+\begin{bmatrix}-\delta\boldsymbol{x}_{k|k-1}\\ \boldsymbol{v}_{k}\end{bmatrix}$$
The state error covariance matrix $\boldsymbol{P}_{k|k-1}$ and the observation error covariance matrix $\boldsymbol{R}_{k}$ are
$$\boldsymbol{P}_{k|k-1}=\boldsymbol{S}_{p}\boldsymbol{S}_{p}^{T},\qquad \boldsymbol{R}_{k}=\boldsymbol{S}_{r}\boldsymbol{S}_{r}^{T}$$
We can derive the extended error covariance matrix as follows:
$$\boldsymbol{S}_{k}=\begin{bmatrix}\boldsymbol{S}_{p}&\boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{S}_{r}\end{bmatrix},\qquad \boldsymbol{S}_{k}\boldsymbol{S}_{k}^{T}=\begin{bmatrix}\boldsymbol{P}_{k|k-1}&\boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{R}_{k}\end{bmatrix}$$
where $\boldsymbol{S}_{p}$ and $\boldsymbol{S}_{r}$ are the lower triangular matrices of the $\boldsymbol{P}_{k|k-1}$ and $\boldsymbol{R}_{k}$ Cholesky decompositions, respectively. Left-multiplying the regression model by $\boldsymbol{S}_{k}^{-1}$ yields the whitened form $\boldsymbol{D}_{k}=\boldsymbol{W}_{k}\boldsymbol{x}_{k}+\boldsymbol{e}_{k}$. By introducing maximum correntropy as the loss function, the state estimation solution is derived:
$$J\left(\boldsymbol{x}_{k}\right)=\frac{1}{m+n}\sum_{i=1}^{m+n}G_{\sigma}\left(e_{i,k}\right)$$
Here, $m$ and $n$ denote the dimensions of the state and observation vectors, respectively. The error vector $\boldsymbol{e}_{k}$ is given by
$$\boldsymbol{e}_{k}=\boldsymbol{D}_{k}-\boldsymbol{W}_{k}\boldsymbol{x}_{k}=\boldsymbol{S}_{k}^{-1}\begin{bmatrix}-\delta\boldsymbol{x}_{k|k-1}\\ \boldsymbol{v}_{k}\end{bmatrix}$$
Furthermore, the optimal solution is obtained:
$$\hat{\boldsymbol{x}}_{k}=\arg\max_{\boldsymbol{x}_{k}}\sum_{i=1}^{m+n}G_{\sigma}\left(d_{i,k}-\boldsymbol{w}_{i,k}\boldsymbol{x}_{k}\right)$$
State estimation is expressed as $\hat{\boldsymbol{x}}_{k}=\left(\boldsymbol{W}_{k}^{T}\boldsymbol{C}_{k}\boldsymbol{W}_{k}\right)^{-1}\boldsymbol{W}_{k}^{T}\boldsymbol{C}_{k}\boldsymbol{D}_{k}$, where
$$\boldsymbol{C}_{k}=\mathrm{diag}\left(G_{\sigma}\left(e_{1,k}\right),\ldots,G_{\sigma}\left(e_{m+n,k}\right)\right)$$
in which $\mathrm{diag}(\cdot)$ denotes the diagonal symbol. We can reformulate the predicted error covariance matrix and the measurement noise variance as follows:
$$\bar{\boldsymbol{P}}_{k|k-1}=\boldsymbol{S}_{p}\boldsymbol{C}_{x,k}^{-1}\boldsymbol{S}_{p}^{T},\qquad \bar{\boldsymbol{R}}_{k}=\boldsymbol{S}_{r}\boldsymbol{C}_{y,k}^{-1}\boldsymbol{S}_{r}^{T}$$
where $\boldsymbol{C}_{x,k}$ and $\boldsymbol{C}_{y,k}$ contain the first $m$ and last $n$ diagonal entries of $\boldsymbol{C}_{k}$, respectively.
In the SHAKF, the observation noise matrix $\boldsymbol{R}_{k}$ frequently becomes non-positive definite during estimation, occasionally resulting in a singular matrix. The high sensitivity to these changes can cause numerical instability and divergence in the filter. Following the approach of Akhlaghi et al. [32], we further obtain
$$\hat{\boldsymbol{R}}_{k}=b\hat{\boldsymbol{R}}_{k-1}+(1-b)\left(\hat{\boldsymbol{\varepsilon}}_{k}\hat{\boldsymbol{\varepsilon}}_{k}^{T}+\boldsymbol{H}_{k}\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}\right)$$
where $\hat{\boldsymbol{\varepsilon}}_{k}=\boldsymbol{z}_{k}-\boldsymbol{H}_{k}\hat{\boldsymbol{x}}_{k}$ represents the posterior residual and $b\in(0,1)$ is the adjustment (forgetting) factor. The process noise covariance matrix $\boldsymbol{Q}_{k}$ is estimated similarly to the observation noise. To avoid matrix singularity while keeping complexity low, it is calculated as follows:
$$\hat{\boldsymbol{Q}}_{k}=b\hat{\boldsymbol{Q}}_{k-1}+(1-b)\left(\boldsymbol{K}_{k}\boldsymbol{r}_{k}\boldsymbol{r}_{k}^{T}\boldsymbol{K}_{k}^{T}\right)$$
Here, $\boldsymbol{r}_{k}$ represents the innovation residual. The updated covariance is then incorporated into the measurement update process:
$$\bar{\boldsymbol{K}}_{k}=\bar{\boldsymbol{P}}_{k|k-1}\boldsymbol{H}_{k}^{T}\left(\boldsymbol{H}_{k}\bar{\boldsymbol{P}}_{k|k-1}\boldsymbol{H}_{k}^{T}+\hat{\boldsymbol{R}}_{k}\right)^{-1}$$
$$\hat{\boldsymbol{x}}_{k}=\hat{\boldsymbol{x}}_{k|k-1}+\bar{\boldsymbol{K}}_{k}\left(\boldsymbol{z}_{k}-\boldsymbol{H}_{k}\hat{\boldsymbol{x}}_{k|k-1}\right)$$
$$\boldsymbol{P}_{k}=\left(\boldsymbol{I}-\bar{\boldsymbol{K}}_{k}\boldsymbol{H}_{k}\right)\boldsymbol{P}_{k|k-1}\left(\boldsymbol{I}-\bar{\boldsymbol{K}}_{k}\boldsymbol{H}_{k}\right)^{T}+\bar{\boldsymbol{K}}_{k}\hat{\boldsymbol{R}}_{k}\bar{\boldsymbol{K}}_{k}^{T}$$
In this equation, $\bar{\boldsymbol{K}}_{k}$, $\hat{\boldsymbol{x}}_{k}$, and $\boldsymbol{P}_{k}$ represent the gain matrix, posterior state, and posterior covariance matrix, respectively. The choice of the kernel bandwidth $\sigma$ is critical for filter robustness, as suggested in [33].
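The adaptive noise updates condense into a few lines of code. The sketch below assumes the forgetting-factor forms given above; the value b = 0.97 is an illustrative choice, not a value prescribed by the text.

```python
import numpy as np

def adapt_noise(R, Q, residual, innov, H, P_pred, K, b=0.97):
    """Forgetting-factor updates for R (posterior-residual based) and
    Q (innovation based), keeping both matrices symmetric by construction.
    b = 0.97 is an illustrative choice, not a prescribed value."""
    R_new = b * R + (1.0 - b) * (np.outer(residual, residual) + H @ P_pred @ H.T)
    Q_new = b * Q + (1.0 - b) * (K @ np.outer(innov, innov) @ K.T)
    return R_new, Q_new
```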
The MCC-WRAF method proposed in this section assigns weights according to the reliability of various measurements, suppresses error tracking, and ensures that the detection statistic accurately reflects the actual fault magnitude. In the absence of faults, the corrected measurement noise matrix helps mitigate accumulated errors and enhances positioning accuracy.
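A compact sketch of one MCC-weighted measurement update is given below. It follows the whitening-plus-fixed-point structure derived above, but the kernel bandwidth, the iteration limit, and the omission of the adaptive noise step are simplifying assumptions of this illustration, not the full MCC-WRAF.

```python
import numpy as np

def mcc_update(x_pred, P_pred, z, H, R, sigma=5.0, iters=10, tol=1e-6):
    """One MCC-based measurement update via fixed-point iteration.
    A simplified sketch of the correntropy-weighted update described above;
    sigma and iters are illustrative settings."""
    m, n = x_pred.size, z.size
    Sp = np.linalg.cholesky(P_pred)
    Sr = np.linalg.cholesky(R)
    S_inv = np.linalg.inv(np.block([[Sp, np.zeros((m, n))],
                                    [np.zeros((n, m)), Sr]]))
    D = S_inv @ np.concatenate([x_pred, z])      # whitened "observations"
    W = S_inv @ np.vstack([np.eye(m), H])        # whitened regressors
    x = x_pred.copy()
    for _ in range(iters):
        e = D - W @ x                            # whitened residuals
        c = np.exp(-e ** 2 / (2.0 * sigma ** 2)) # correntropy weights
        Cx, Cy = np.diag(c[:m]), np.diag(c[m:])
        P_bar = Sp @ np.linalg.inv(Cx) @ Sp.T    # reweighted prior covariance
        R_bar = Sr @ np.linalg.inv(Cy) @ Sr.T    # reweighted obs. covariance
        K = P_bar @ H.T @ np.linalg.inv(H @ P_bar @ H.T + R_bar)
        x_new = x_pred + K @ (z - H @ x_pred)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1e-12):
            x = x_new
            break
        x = x_new
    P = (np.eye(m) - K @ H) @ P_pred @ (np.eye(m) - K @ H).T + K @ R_bar @ K.T
    return x, P
```

Observations that disagree strongly with the prediction receive small correntropy weights, so they are effectively deweighted in the gain rather than fully trusted, which is what suppresses error tracking.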
4. Fault Detection, Exclusion, and Sensor Recovery Verification Scheme
Figure 3 illustrates the state estimation models for various fault modes. This section first details the comprehensive fault detection and isolation scheme, which performs stepwise identification of fault sources. A fault exclusion strategy is then introduced to maintain positioning performance during faults. Next, a sensor recovery verification method is developed to reintegrate reliable sensors into the navigation system. Lastly, a covariance-optimal, inflation-based integrity overbounding protection strategy is developed to rigorously evaluate the reliability of the positioning outcomes.
4.1. Fault Detection and Separation
When the innovation vector of the MCC-WRAF is affected by faults, it directly influences the positioning results. Inspired by the AIME approach, we developed a novel fault detection and isolation scheme where a sliding window extends over the innovation vector along the time sequence, according to the type and number of information sources.
Taking the GNSS as an example, measurement modeling is used to separate GNSS and IMU faults, allowing for IMU fault detection even when faulty satellites are detected and excluded. The innovation can be decomposed as
$$\boldsymbol{r}_{k}=\boldsymbol{\eta}_{k}^{\mathrm{IMU}}+\boldsymbol{f}_{k}^{\mathrm{IMU}}+\boldsymbol{\eta}_{k}^{\mathrm{GNSS}}+\boldsymbol{f}_{k}^{\mathrm{GNSS}}$$
The AIME test statistic is $s_{k}=\bar{\boldsymbol{r}}_{k}^{T}\bar{\boldsymbol{\Sigma}}_{k}^{-1}\bar{\boldsymbol{r}}_{k}$, where
$$\bar{\boldsymbol{\Sigma}}_{k}^{-1}=\sum_{j=k-N+1}^{k}\boldsymbol{\Sigma}_{j}^{-1},\qquad \bar{\boldsymbol{r}}_{k}=\bar{\boldsymbol{\Sigma}}_{k}\sum_{j=k-N+1}^{k}\boldsymbol{\Sigma}_{j}^{-1}\boldsymbol{r}_{j}$$
Here, $\boldsymbol{\eta}_{k}^{\mathrm{IMU}}$ and $\boldsymbol{f}_{k}^{\mathrm{IMU}}$ denote the noise and fault vectors derived from the IMU, respectively, while $\boldsymbol{\eta}_{k}^{\mathrm{GNSS}}$ and $\boldsymbol{f}_{k}^{\mathrm{GNSS}}$ represent the noise and fault vectors derived from the GNSS, respectively, and $N$ is the length of the sliding window. The statistic $s_{k}$ follows a chi-squared distribution with degrees of freedom equal to the number of visible satellites, which explains why the detection threshold dynamically changes under non-line-of-sight interference and obstructions, while $\boldsymbol{\Sigma}_{j}$ represents the covariance matrix of the innovation. The detection threshold $T_{D}$ is determined by the false alarm rate $P_{fa}$ and the number of satellites $n$:
$$1-F_{\chi_{n}^{2}}\left(T_{D}\right)=P_{fa}$$
Here, $F_{\chi_{n}^{2}}(\cdot)$ represents the cumulative distribution function of the central chi-squared distribution, with $P_{fa}$ set according to the required false alarm probability. When $s_{k}>T_{D}$, AIME considers that a fault has occurred.
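In code, the threshold is a single inverse-CDF evaluation. The sketch below (with an assumed false alarm rate of 1e-5 purely for illustration) also shows how the threshold shifts with the number of visible satellites, which foreshadows the epoch-to-epoch instability discussed next:

```python
from scipy.stats import chi2

def aime_threshold(p_fa, n_sat):
    """Detection threshold T_D solving P(chi2_{n_sat} > T_D) = p_fa."""
    return chi2.ppf(1.0 - p_fa, df=n_sat)

# The threshold grows with the number of visible satellites, so a changing
# satellite count reshapes the test from epoch to epoch.
for n in (6, 9, 12):
    print(n, round(aime_threshold(1e-5, n), 2))
```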
Variations in the number of signals across epochs directly affect the computation of the innovation vector and the covariance matrix, impacting fault detection performance. Specifically, rapid changes in the observation matrix lead to instability in $\bar{\boldsymbol{\Sigma}}_{k}$, and fluctuations in the degrees of freedom alter the shape of the chi-squared distribution, increasing the risks of false alarms and missed detections. Weighted approaches and sliding windows cannot fully address the performance degradation caused by these variations in sequential methods. To solve this, we propose a temporal sequence AIME (TSAIME) method, which decomposes the innovation vector based on the source number, shifting the detection focus from multiple satellites within the same epoch to multiple epochs for a fixed satellite.
We first compute the test statistics for all visible signals, with the detection statistic for SAT-1 defined as $s_{k}^{(1)}=\left(\boldsymbol{r}_{k}^{(1)}\right)^{T}\left(\boldsymbol{\Sigma}_{k}^{(1)}\right)^{-1}\boldsymbol{r}_{k}^{(1)}$, where
$$\boldsymbol{r}_{k}^{(1)}=\left[r_{k-N+1,1},\ r_{k-N+2,1},\ \ldots,\ r_{k,1}\right]^{T}$$
Here, $\boldsymbol{\Sigma}_{k}^{(1)}$ denotes the covariance matrix of the SAT-1 innovation elements, $r_{j,1}$ is the element of the innovation vector related to SAT-1 at time $j$, and $\boldsymbol{r}_{k}^{(1)}$ is composed of these elements across the different times in the sliding window, where $j\in[k-N+1,k]$. The detection threshold for $s_{k}^{(1)}$ is determined jointly by the false alarm rate and the sliding window length.
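A minimal sketch of the per-signal statistic follows. Treating the windowed innovation elements of one satellite as mutually uncorrelated is an illustrative simplification (the derivation above uses the full covariance); under it, the statistic reduces to a normalized sum of squares tested against a chi-squared threshold fixed by the window length:

```python
import numpy as np
from scipy.stats import chi2

def tsaime_statistic(innov_series, var_series):
    """Per-signal TSAIME statistic: the innovations of ONE satellite across
    the sliding window, normalized by their variances (treated here as
    mutually uncorrelated, an illustrative simplification)."""
    r = np.asarray(innov_series, dtype=float)
    v = np.asarray(var_series, dtype=float)
    return float(r @ (r / v))            # r^T diag(v)^-1 r, chi-squared, N dof

N = 10                                   # sliding window length
s = tsaime_statistic(1.5 * np.random.randn(N), np.ones(N))
faulty = s > chi2.ppf(1.0 - 1e-5, df=N)  # threshold from P_fa and window length
```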
When integrated with wireless signals, the IMU is generally assumed to be a stable and reliable navigation tool. However, in specific application scenarios (such as prolonged GNSS outages or sensor damage), IMU errors tend to be cumulative and progressive, leading to faults which can significantly impact system performance and increase positioning errors. Our objective is to detect all-source faults.
First, we construct a test statistic solely influenced by GNSS faults, with the corresponding innovation vector given by
$$\boldsymbol{r}_{k}^{\mathrm{GNSS}}=\boldsymbol{\eta}_{k}^{\mathrm{GNSS}}+\boldsymbol{f}_{k}^{\mathrm{GNSS}}$$
where the IMU-related terms have been removed from $\boldsymbol{r}_{k}$. The corresponding covariance matrix is $\boldsymbol{\Sigma}_{k}^{\mathrm{GNSS}}$, in which the IMU contribution is likewise excluded. The 5G fault detection method is similar to that of the GNSS and is performed on a per-signal basis.
Next, the innovation for the IMU fault test statistic is
$$\boldsymbol{r}_{k}^{\mathrm{IMU}}=\boldsymbol{\eta}_{k}^{\mathrm{IMU}}+\boldsymbol{f}_{k}^{\mathrm{IMU}}+\boldsymbol{\eta}_{k}^{\mathrm{GNSS}}$$
During GNSS faults, $\boldsymbol{r}_{k}$ cannot be directly used to calculate the IMU test statistic. Therefore, GNSS faults are prioritized for exclusion to minimize the effect of $\boldsymbol{f}_{k}^{\mathrm{GNSS}}$ on IMU fault detection. The corresponding covariance matrix is $\boldsymbol{\Sigma}_{k}^{\mathrm{IMU}}$. The test statistic is compared with the detection threshold to determine whether the IMU has a fault.
4.2. Fault Exclusion Strategy
The objective of fault exclusion is to maintain the accuracy of the navigation solution and the stability of the integrated system. Given the distinct characteristics of GNSSs, IMUs, and 5G, their fault exclusion methods differ. For a single type of wireless signal, when the number of visible sources exceeds four, the system excludes the faulty sources from the calculation. However, if the number of visible signals is four or fewer, the relevant innovations are down-weighted instead. For GNSSs, the weight calculation is as follows:
$$w_{j}=\begin{cases}1, & s_{k,j}\leq T_{j}\\ T_{j}/s_{k,j}, & s_{k,j}>T_{j}\end{cases}$$
Here, $s_{k,j}=r_{k,j}^{2}/\sigma_{j}^{2}$, where $r_{k,j}$ is the $j$th element of $\boldsymbol{r}_{k}$, $\sigma_{j}^{2}$ represents the normalized variance of $r_{k,j}$, and $T_{j}$ is the corresponding detection threshold.
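The down-weighting logic can be sketched in a few lines; the reciprocal form is one plausible choice consistent with the normalized-variance test described above, not a formula guaranteed to match the original implementation:

```python
import numpy as np

def innovation_weights(innov, variances, thresholds):
    """Down-weight suspect signals when four or fewer sources are visible.
    s_j = r_j^2 / sigma_j^2 is the normalized statistic; weights fall from 1
    toward 0 as s_j exceeds its threshold T_j (illustrative choice)."""
    s = np.asarray(innov, dtype=float) ** 2 / np.asarray(variances, dtype=float)
    t = np.asarray(thresholds, dtype=float)
    return np.where(s <= t, 1.0, t / s)   # weight in (0, 1] per signal
```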
The IMU serves as the reference filter, and its faults can significantly degrade navigation performance. Upon detection of an IMU fault, the covariance matrix $\boldsymbol{P}_{k}$ is redefined (inflated) so that the faulty inertial information is deweighted in the fusion solution. It is worth mentioning that when the IMU is damaged or completely unavailable, positioning is achieved by directly integrating the GNSS and 5G, as suggested by Liu et al. [34]. Sensors excluded due to faults are validated through a recovery model, and those meeting the requirements are reintegrated into the positioning solution.
4.3. Sensor Recovery Verification Method
When the system includes unreliable sensors, the verification process is conducted periodically. The GNSS/5G recovery verification model is essentially an innovation-based chi-squared test in which the observation model is composed of measurements from both unreliable and reliable sensors:
$$\boldsymbol{z}_{k}=\begin{bmatrix}\boldsymbol{z}_{k}^{\mathrm{rel}}\\ \boldsymbol{z}_{k}^{\mathrm{unr}}\end{bmatrix}$$
Here, $\boldsymbol{z}_{k}^{\mathrm{rel}}$ denotes the observation value from the reliable sensor, while $\boldsymbol{z}_{k}^{\mathrm{unr}}$ represents the observation from the unreliable sensor. The verification test statistic is derived from the innovations of the unreliable sensor over the period $[k-M+1,k]$:
$$s_{k}^{\mathrm{ver}}=\sum_{j=k-M+1}^{k}\left(\boldsymbol{r}_{j}^{\mathrm{unr}}\right)^{T}\left(\boldsymbol{\Sigma}_{j}^{\mathrm{unr}}\right)^{-1}\boldsymbol{r}_{j}^{\mathrm{unr}}$$
In the verification phase, if $s_{k}^{\mathrm{ver}}$ conforms to the central chi-squared distribution, then the sensor is deemed reliable and can be reintegrated into the integrated navigation solution. Otherwise, the sensor is still considered faulty and must wait for the next verification cycle.
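A sketch of the periodic verification test is given below; the window contents, the false alarm setting, and the assumption that per-epoch innovations are independent across the window are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def recovery_check(innovs, covs, p_fa=1e-3):
    """Periodic GNSS/5G recovery verification: accumulate the unreliable
    sensor's normalized innovations over the verification window and test
    against a chi-squared threshold. Returns True when the sensor may be
    reintegrated into the navigation solution."""
    s, dof = 0.0, 0
    for r, S in zip(innovs, covs):        # one (innovation, covariance) per epoch
        s += float(r @ np.linalg.solve(S, r))
        dof += r.size
    return s <= chi2.ppf(1.0 - p_fa, df=dof)
```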
We apply w-detection [35] to verify the recovery of a faulty IMU, where the verification test statistic and detection threshold are given in least squares form as follows:
$$w_{i}=\frac{\boldsymbol{e}_{i}^{T}\boldsymbol{Q}_{v}^{-1}\boldsymbol{v}}{\sqrt{\boldsymbol{e}_{i}^{T}\boldsymbol{Q}_{v}^{-1}\boldsymbol{e}_{i}}},\qquad \left|w_{i}\right|\leq N_{1-\alpha/2}$$
Here, $\boldsymbol{e}_{i}$ denotes the unit vector, which is one at the IMU measurement and zero elsewhere, $\boldsymbol{v}$ is the least squares residual vector with cofactor matrix $\boldsymbol{Q}_{v}$, and $N_{1-\alpha/2}$ is the critical outlier value determined by the significance level $\alpha$. If the condition $\left|w_{i}\right|\leq N_{1-\alpha/2}$ is met for five consecutive verifications, then the IMU is considered ready to rejoin the positioning solution.
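The w-test reduces to a normalized projection of the residual vector. In the sketch below, the residual cofactor matrix Qv, the selection vector e_i, and the significance level are assumed inputs:

```python
import numpy as np
from scipy.stats import norm

def w_test(e_i, v, Qv, alpha=0.001):
    """Baarda-style w-test on the least squares residual vector v.
    e_i selects the IMU measurement (one there, zero elsewhere); Qv is the
    residual cofactor matrix. Returns the statistic and a pass/fail flag."""
    Qv_inv = np.linalg.inv(Qv)
    w = (e_i @ Qv_inv @ v) / np.sqrt(e_i @ Qv_inv @ e_i)
    return w, abs(w) <= norm.ppf(1.0 - alpha / 2.0)
```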
4.4. Covariance-Optimal, Inflation-Based Integrity Overbounding Protection Strategy
Unknown inter-filter correlations can introduce random fluctuations into the protection level estimates of distributed systems, either elevating or reducing the estimates depending on the correlation strengths. The protection level (PL) is a crucial indicator of the reliability and precision of a positioning system. It provides performance assurance by estimating the maximum possible positioning error within a specified confidence level. Taking the horizontal protection level (HPL) as an example, the equivalent condition for the applicability of the fault detection and exclusion method is $\mathrm{HPL}\leq\mathrm{HAL}$, where the right-hand side represents the horizontal alert limit. The HPL is calculated as follows:
$$\mathrm{HPL}=k_{\mathrm{slope}}\sqrt{\lambda}\,\sigma_{0}+k_{md}\,\sigma_{p}$$
In this equation, $k_{\mathrm{slope}}$ represents the signal characteristic slope, $\lambda$ is the non-centrality parameter, which depends on the false alarm rate, missed detection rate, and number of visible signals, and $\sigma_{0}$ and $\sigma_{p}$ represent the standard deviations of the observation noise and the positional error, respectively. The characteristic slope relates the horizontal position error induced by a fault on a given signal to the corresponding growth of the detection statistic:
$$k_{\mathrm{slope},i}=\frac{\left\|\Delta\hat{\boldsymbol{x}}_{i}\right\|}{\sqrt{s_{i}}}$$
In this equation, $\Delta\hat{\boldsymbol{x}}_{i}$ denotes the fault-induced horizontal position error and $s_{i}$ is the corresponding detection statistic. Assuming that the algorithm is effective and previous faults have been excluded, for the current fault, taking the expectation of the above equation gives $\mathbb{E}\left[s_{k}\right]=n+\lambda$. The non-centrality parameter $\lambda$ can thus be represented by $\lambda=\lambda\left(P_{fa},P_{md},n\right)$, where $P_{fa}$, $P_{md}$, and $n$ denote the false alarm rate, the missed detection rate, and the number of visible signals, respectively. The characteristic slope can be reformulated accordingly. By substituting the reformulated slope into Equation (66), the covariance matrix of each known filter is adjusted by applying an optimal scalar inflation factor:
$$\bar{\boldsymbol{P}}_{i}=\frac{\boldsymbol{P}_{i}}{\omega_{i}}$$
where $\boldsymbol{P}_{i}$ is the covariance matrix of the $i$th local filter and $\omega_{i}>0$, with $\sum_{i=1}^{M}\omega_{i}=1$, are the scalar inflation weights. The necessary and sufficient condition to ensure integrity in distributed PNT systems is that the fused covariance built from the inflated terms upper-bounds the true error covariance for any admissible cross-correlations:
$$\left(\sum_{i=1}^{M}\omega_{i}\boldsymbol{P}_{i}^{-1}\right)^{-1}\succeq\mathbb{E}\left[\left(\hat{\boldsymbol{x}}-\boldsymbol{x}\right)\left(\hat{\boldsymbol{x}}-\boldsymbol{x}\right)^{T}\right]$$
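The scalar-inflation idea can be illustrated with a covariance-intersection-style fusion, which guarantees an overbound for arbitrary unknown cross-correlations; the equal-weight default below is an illustrative assumption rather than the optimal inflation factor derived here:

```python
import numpy as np

def inflate_and_fuse(P_list, omega=None):
    """Covariance-intersection-style conservative fusion: inflating each
    local covariance P_i by 1/omega_i (with the omegas summing to one)
    guarantees the fused covariance overbounds the true error covariance
    for ANY unknown inter-filter cross-correlations."""
    M = len(P_list)
    omega = np.full(M, 1.0 / M) if omega is None else np.asarray(omega, dtype=float)
    info = sum(w * np.linalg.inv(P) for w, P in zip(omega, P_list))
    return np.linalg.inv(info)            # conservative fused covariance

P_fused = inflate_and_fuse([np.diag([1.0, 2.0]), np.diag([2.0, 1.0])])
```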
6. Discussion
The tightly coupled GNSS/5G/IMU framework, which incorporates robust weighted adaptive filtering and FDE functionality, is a crucial element in delivering high-integrity PNT services. Leveraging massive MIMO, wide bandwidths, and millimeter-wave technologies, 5G provides positioning capabilities in challenging environments such as indoor areas and urban canyons, addressing GNSS limitations such as signal obstruction and frequent multipath or non-line-of-sight interference. The IMU serves as a reference source, maintaining functionality during brief signal outages to ensure system availability and reliability. Fault detection and exclusion are essential for defining the navigation and safety performance of multi-source fusion setups, reflecting the system's ability to promptly alert users when it becomes unreliable.
We addressed all-source faults in a tightly coupled GNSS/5G/IMU navigation system, systematically analyzing the impact of undetected or residual faults on state estimation and proposing a fault detection and exclusion method based on filter innovations. The proposed multi-step fault separation accurately detected and differentiated GNSS, 5G, and IMU faults, and TSAIME was introduced to identify faulty satellite or base station signals effectively and promptly. Additionally, a fault exclusion strategy and a sensor recovery verification method were devised to mitigate the effects of faults on system positioning performance, enabling the swift reintegration of reliable sensors into the navigation solution. Moreover, a weighted robust adaptive filter based on maximum correntropy was designed to solve the non-positive definiteness problem of the covariance matrix during filter iteration, eliminating error propagation. Urban vehicle field experiments verified the effectiveness of the proposed method under typical fault scenarios. The results demonstrate that the method effectively distinguished GNSS, 5G, and IMU faults, eliminated error propagation, improved detection sensitivity under various fault conditions, shortened alarm response times for ramp faults, and maintained positioning performance during fault periods. The inclusion of the weighted robust adaptive filter also significantly reduced observation residual distortion during iterations, decreased error accumulation, and improved positioning accuracy. Note that this paper provides empirical evidence and simulations only for outdoor and urban canyon environments; indoor deep-occlusion scenarios are part of our future research program.
7. Conclusions
In this study, we focused on challenges in tightly coupled GNSS/5G/IMU positioning, including poor robustness, error tracking in innovation vectors, low FDE sensitivity, and the insufficient usability of fusion results. To address these issues, we proposed a weighted robust adaptive filter based on the maximum correntropy criterion, along with fault detection and exclusion and sensor recovery verification methods. The field experiments, combined with the simulation results, showed that the proposed method effectively reduced the negative effects of undetected faults, improved detection sensitivity, reduced alarm response times, and improved the dynamic positioning accuracy of the fusion navigation system to 0.83 m. This work can provide highly accurate and reliable timing and positioning services for mass user terminals in complex occluded or semi-occluded positioning environments.
In future work, we will analyze the impact of the number and geometric distribution of satellites and 5G base stations on positioning performance and test the method under the complex multipath and non-line-of-sight propagation of real environments, especially by conducting experiments in typical indoor scenarios. We will also investigate visually assisted fault detection techniques for cases in which wireless signals are severely obstructed and further improve the robustness of the receiver's multi-source fusion positioning, striving to provide reliable PNT services to the general public.