Article

An Improved VMD-LSTM Model for Time-Varying GNSS Time Series Prediction with Temporally Correlated Noise

1 School of Geodesy and Geomatics, East China University of Technology, Nanchang 341000, China
2 School of Civil and Surveying & Mapping Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
3 School of Environment and Surveying, China University of Mining and Technology, Xuzhou 221000, China
4 School of Surveying and Mapping Science and Technology, Xi’an University of Science and Technology, Xi’an 710000, China
5 School of Transportation Engineering, East China Jiao Tong University, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(14), 3694; https://doi.org/10.3390/rs15143694
Submission received: 23 June 2023 / Revised: 19 July 2023 / Accepted: 23 July 2023 / Published: 24 July 2023
(This article belongs to the Special Issue Advances in GNSS for Time Series Analysis)

Abstract
GNSS time series prediction plays a significant role in monitoring crustal plate motion, landslide detection, and the maintenance of the global coordinate framework. Long short-term memory (LSTM) is a deep learning model that has been widely applied in the field of high-precision time series prediction and is often combined with Variational Mode Decomposition (VMD) to form the VMD-LSTM hybrid model. To further improve the prediction accuracy of the VMD-LSTM model, this paper proposes a dual variational mode decomposition long short-term memory (DVMD-LSTM) model to effectively handle noise in GNSS time series prediction. This model extracts fluctuation features from the residual terms obtained after VMD decomposition to reduce the prediction errors associated with residual terms in the VMD-LSTM model. Daily E, N, and U coordinate data recorded at multiple GNSS stations between 2000 and 2022 were used to validate the performance of the proposed DVMD-LSTM model. The experimental results demonstrate that, compared to the VMD-LSTM model, the DVMD-LSTM model achieves significant improvements in prediction performance across all stations. The average RMSE is reduced by 9.86%, the average MAE is reduced by 9.44%, and the average R2 is increased by 17.97%. Furthermore, the average accuracy of the optimal noise model for the predicted results is improved by 36.50%, and the average velocity accuracy of the predicted results is enhanced by 33.02%. These findings collectively attest to the superior predictive capabilities of the DVMD-LSTM model, thereby demonstrating the reliability of the predicted results.

Graphical Abstract

1. Introduction

Over the past three decades, with the rapid development of satellite navigation technology, many GNSS continuously operating reference stations have been established worldwide. These stations provide important data sources for crustal plate motion monitoring [1,2,3,4,5], landslide detection [6,7,8], the deformation monitoring of bridges or dams [9,10,11,12,13], and the maintenance of regional or global coordinate frameworks [14,15]. By analyzing the long-term GNSS observation data time series obtained from these stations, it is possible to predict the variation of coordinates at continuous time points, thereby providing an important basis for determining motion trends. This has significant practical and theoretical value in geodesy and geodynamics research [16,17,18].
Time series prediction methods can be mainly categorized into two types: physical simulation and numerical simulation [19,20]. Traditional physical and numerical simulation methods rely on geophysical theories, linear terms, periodic terms, and gap information to construct models [21]. However, these models face challenges in capturing complex nonlinear data and require a manual selection of feature information and modeling parameters, leading to systematic biases and limitations [22]. In contrast, deep learning, as an emerging technology, can automatically extract information that is suitable for data features by constructing deep network structures. Deep learning exhibits strong learning capabilities and has advantages in handling large-scale and high-dimensional data. It has been widely applied in various fields such as image recognition [23,24,25], natural language processing [26,27,28], speech recognition [29,30,31], and time series prediction [32,33,34,35,36]. Li et al. (2022) comprehensively analyzed and elaborated on the application of image recognition to plant phenotypes by comparing and analyzing various deep learning methods [24]. Otter et al. (2020) summarized and analyzed the relevant research of deep learning models in the field of natural language processing and provided valuable suggestions for future research in this field [26]. Nassif et al. (2019) systematically studied the accuracy of deep learning in speech recognition using convolutional, recurrent, and fully connected architectures [31]. Masini et al. (2023) elaborated on the application of machine learning in economics and finance by analyzing the use of different neural networks and tree-based structures for time series in the context of deep learning [36].
Long short-term memory (LSTM), as an excellent variant of recurrent neural networks (RNNs), overcomes the issues of gradient vanishing, gradient exploding, and insufficient long-term memory in RNNs [37,38,39]. Due to its significant advantages in long-range time series prediction, LSTM has been widely applied in various time series prediction domains such as electricity load forecasting [40,41,42] and wind speed prediction [43,44,45]. In recent years, the application of the LSTM algorithm in the GNSS domain has also become increasingly widespread. Kim et al. (2019) improved the accuracy and stability of absolute positioning solutions in autonomous vehicle navigation using a multi-layer LSTM model [46]. Tao et al. (2021) utilized a CNN-LSTM approach to extract deep multipath features from GNSS coordinate sequences, reducing the impact of multipath effects on positioning accuracy [47]. Xie et al. (2019) accurately predicted landslide periodic components using the LSTM model to establish a landslide hazard warning system [48].
Variational Mode Decomposition (VMD) is a signal processing method based on the principle of variational inference. It decomposes signals into various mode components (Intrinsic Mode Functions, IMF) with different frequencies through an optimization process, effectively extracting the local time–frequency features of signals and enabling efficient signal decomposition and analysis [49,50,51]. Currently, many researchers have combined VMD with LSTM to enhance the performance of LSTM in a range of fields [52,53,54,55]. Huang et al. (2022) applied the VMD-LSTM model in the coal seam thickness prediction field, confirming that the predicted results closely matched the coal seam information obtained from existing boreholes [56]. Zhang et al. (2022) applied the VMD-LSTM model in the field of sports artificial intelligence, demonstrating its broad application prospects in predicting sports artificial intelligence directions [57]. Han et al. (2019) applied the VMD-LSTM model in the wind power prediction field, validating its high performance in multi-step and real-time predictions [58]. Xing et al. (2019) applied the VMD-LSTM model in predicting the dynamic displacements of landslides and verified its high prediction accuracy using the case of landslides in paddy fields in China [59].
The VMD-LSTM model has been widely adopted in various fields for time series prediction. However, most studies utilize VMD to decompose the original data, predict each Intrinsic Mode Function (IMF) and the residual term separately, and then combine the predicted results to obtain the final prediction. Although this method yields good results for each IMF, the fluctuation characteristics of the residual term are difficult to extract, leading to significant prediction errors in the model. Furthermore, existing research has mainly focused on the accuracy of the prediction results while neglecting the noise characteristics of the data itself [60,61,62]. Considering these factors, this paper proposes a dual VMD-LSTM (DVMD-LSTM) hybrid model that accounts for the characteristics of noise. By performing VMD decomposition on the residual components obtained from the initial VMD decomposition, the proposed model effectively extracts the fluctuation features within the residuals, enabling the high-precision prediction of GNSS time series. By analyzing the RMSE, MAE, and R2 (coefficient of determination) of the predicted results in the E, N, and U directions across multiple stations, the applicability and robustness of the proposed method are evaluated. Additionally, the quality of the predicted results is assessed by incorporating noise models and velocity estimation.
The structure of this paper is as follows: Section 2 introduces the principles of VMD, LSTM algorithms, and accuracy evaluation metrics. The principles and specific processes of the DVMD-LSTM model are explained in detail. Section 3 describes the GNSS station data, presents data-preprocessing strategies, and analyzes reasons for the improved accuracy of the DVMD-LSTM model. Section 4 focuses on the prediction results and accuracy of both the single LSTM model and the hybrid model. The optimal noise model and velocity under each prediction model are compared and analyzed to evaluate the performance of the DVMD-LSTM model using different accuracy assessment metrics. Finally, Section 5 provides conclusions and an analysis.

2. Principle and Method

2.1. Variational Mode Decomposition (VMD)

Variational Mode Decomposition (VMD) is an adaptive and fully non-recursive method used for solving modal variational and signal processing problems [63]. GNSS time series exhibit inherent non-stationarity. Utilizing VMD to decompose the data effectively separates it into stationary signals, thereby extracting the fluctuation characteristics of the GNSS time series and providing a superior data foundation for model prediction. VMD iteratively searches for a variational model to decompose the original time series into distinct modal components. The specific decomposition process is outlined as follows [64,65,66]:
(1) For each modal component $\mu_K(t)$, the corresponding analytic signal is computed using the Hilbert transform, which allows its one-sided spectrum to be obtained:
$$\left[ \delta(t) + \frac{j}{\pi t} \right] * \mu_K(t) \tag{1}$$
In the equation, $j^2 = -1$, and $\delta$ is the Dirac distribution.
(2) By multiplying each analytic signal by an exponential term $e^{-j\omega_K t}$ tuned to the estimated center frequency $\omega_K$, the spectrum of each mode is modulated to its respective baseband:
$$\left[ \left( \delta(t) + \frac{j}{\pi t} \right) * \mu_K(t) \right] e^{-j\omega_K t} \tag{2}$$
(3) The bandwidth of each mode is estimated from the $H^1$ Gaussian smoothness of the demodulated signal, i.e., the squared $L^2$-norm of its gradient. This leads to a constrained variational problem:
$$\min_{\{\mu_K\},\{\omega_K\}} \left\{ \sum_K \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * \mu_K(t) \right] e^{-j\omega_K t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_K \mu_K = f \tag{3}$$
In the equation, $f$ represents the original signal, $\{\mu_K\}$ represents the decomposed mode functions, and $\{\omega_K\}$ represents the corresponding center frequencies of each mode.
(4) On this basis, a quadratic penalty factor $\alpha$ and a Lagrange multiplier $\lambda(t)$ are introduced to transform this into an unconstrained variational problem. The extended Lagrangian expression is as follows:
$$L(\{\mu_K\},\{\omega_K\},\lambda) = \alpha \sum_K \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * \mu_K(t) \right] e^{-j\omega_K t} \right\|_2^2 + \left\| f(t) - \sum_K \mu_K(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_K \mu_K(t) \right\rangle \tag{4}$$
where $\alpha$ represents the quadratic penalty factor and $\lambda(t)$ denotes the Lagrange multiplier. Subsequently, the alternating direction method of multipliers (ADMM) is employed to solve this unconstrained variational problem. By alternately updating $\mu_K^{n+1}$, $\omega_K^{n+1}$, and $\lambda^{n+1}$, the saddle point of the extended Lagrangian, i.e., the optimal solution of the constrained variational model in Equation (3), is sought.
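As a sketch of this iterative scheme, the following minimal frequency-domain implementation performs the Wiener-filter mode update and the power-weighted center-frequency update described above. It is an illustration only, under simplifying assumptions (the Lagrange multiplier update is omitted, i.e., $\tau = 0$, and no boundary mirroring is performed), not a full VMD implementation:

```python
import numpy as np

def vmd(f, K=2, alpha=2000.0, n_iter=500, tol=1e-7):
    """Minimal frequency-domain VMD sketch: Wiener-filter mode updates and
    power-weighted center-frequency updates; Lagrange multiplier omitted."""
    N = len(f)
    freqs = np.fft.fftfreq(N)                  # normalized frequencies
    f_hat = np.fft.fft(f)
    f_hat[N // 2 + 1:] = 0.0                   # one-sided (analytic) spectrum, per the Hilbert step
    u_hat = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.0, 0.5, K, endpoint=False) + 0.01  # initial center frequencies
    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener filter: closed-form minimizer of the k-th quadratic subproblem
            u_hat[k] = (f_hat - others) / (1.0 + 2.0 * alpha * (freqs - omega[k]) ** 2)
            half = slice(1, N // 2)            # positive frequencies only
            power = np.abs(u_hat[k, half]) ** 2
            if power.sum() > 0:
                # center frequency = power-weighted mean frequency of the mode
                omega[k] = float((freqs[half] * power).sum() / power.sum())
        diff = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if diff < tol:
            break
    # factor 2 restores the amplitude lost by discarding negative frequencies
    modes = 2.0 * np.real(np.fft.ifft(u_hat, axis=1))
    return modes, omega
```

For a zero-mean sum of two sinusoids, the recovered center frequencies converge to the true tone frequencies and the modes sum back to the input, which is the behavior the constrained problem above encodes.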

2.2. Long Short-Term Memory (LSTM)

LSTM is an improved type of recurrent neural network (RNN) that addresses the issue of long-term dependencies by utilizing memory cells, effectively mitigating the problems of vanishing and exploding gradients [67,68,69]. Compared to traditional neural networks, LSTM demonstrates strong advantages in handling long-term sequence prediction tasks and has been widely applied in areas such as time series forecasting and fault detection [70,71,72,73,74]. The LSTM architecture consists of input layers, hidden layers, and output layers, where each hidden layer employs input gates, forget gates, and output gates to store and access data, as shown in Figure 1.
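The gate structure described above can be made concrete with a single forward step in NumPy. The gate ordering (input, forget, output, candidate) and the stacked weight shapes below are one common convention, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gate order assumed: i, f, o, candidate g."""
    H = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])            # input gate: how much new info to write
    f = sigmoid(z[H:2 * H])        # forget gate: how much old state to keep
    o = sigmoid(z[2 * H:3 * H])    # output gate: how much state to expose
    g = np.tanh(z[3 * H:4 * H])    # candidate cell update
    c = f * c_prev + i * g         # additive state update eases gradient flow
    h = o * np.tanh(c)             # hidden output passed to the next layer
    return h, c
```

The additive update of the cell state `c` is what mitigates vanishing gradients: the gradient can flow through `f * c_prev` without repeated squashing.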

2.3. Dual Variational Mode Decomposition Long Short-Term Memory Network Model (DVMD-LSTM)

The VMD-LSTM model, as a classical hybrid deep learning model, has been widely applied in time series prediction tasks such as load forecasting and wind speed prediction, demonstrating remarkable predictive accuracy [75,76]. This model uses Variational Mode Decomposition (VMD) to decompose the original data into a set of Intrinsic Mode Functions (IMFs) and a residue component, denoted as “r”. Each IMF and the residue component are then predicted individually, and their predictions are summed to obtain the final model prediction. It is worth noting that the IMFs, being stationary signals, can be predicted individually with high accuracy, which effectively enhances the predictive performance of the VMD-LSTM model. The specific prediction process is shown on the left side of Figure 2; note that the residual term is not further decomposed. Because the residue component remains unprocessed during prediction, it introduces errors that degrade the model’s predictive accuracy. Considering that the residual terms obtained after the VMD decomposition of real-world data still exhibit certain fluctuation characteristics and non-white noise, such as high-frequency noise [77,78], the proposed model further decomposes the residual terms using VMD and predicts the resulting mode components to mitigate the impact of incomplete VMD decomposition. The DVMD-LSTM model improves the overall prediction accuracy by replacing the prediction of the original residual term with that of the fused mode components, thereby reducing the influence of the residual term on the prediction accuracy. The specific workflow is illustrated in Figure 2.
The specific prediction process of the DVMD-LSTM model is as follows:
Step 1: Preprocess the GNSS time series data by removing outliers, performing interpolation, and other data preprocessing techniques. Then, input the preprocessed data into the Variational Mode Decomposition (VMD) for decomposition.
Step 2: Further decompose the residue component “r1” obtained from the VMD into individual modal components and another residue “r2” through another round of VMD.
Step 3: Add up the modal components obtained from the VMD decomposition of the residue component “r1” to form the fused Intrinsic Mode Function (Fuse-IMF). Use the Fuse-IMF as a feature for prediction in the LSTM model.
Step 4: Use the individual modal components obtained from the VMD decomposition of the original GNSS time series as features and input them separately into the LSTM model for prediction. Obtain K prediction results, where K represents the number of modal components.
Step 5: Add the K prediction results obtained in Step 4 with the prediction result of the Fuse-IMF to obtain the final prediction result of the DVMD-LSTM model.
Step 6: Calculate the RMSE and MAE of the prediction results and use them to evaluate the performance of the model under different noise models.
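The data flow of Steps 1–5 can be sketched as follows. The decomposition and the predictor below are deliberately simple stand-ins (a moving-average band split instead of VMD, and a persistence forecast instead of a trained LSTM), used only to make the pipeline’s structure concrete:

```python
import numpy as np

def decompose(x, windows=(30, 7)):
    """Stand-in for VMD: split x into smoothed 'modes' plus a residual
    using moving averages (NOT real VMD; illustrates the data flow only)."""
    modes, rest = [], x.copy()
    for w in windows:
        smooth = np.convolve(rest, np.ones(w) / w, mode="same")
        modes.append(smooth)
        rest = rest - smooth
    return modes, rest                     # K modes and residual r

def naive_forecast(series, horizon):
    """Stand-in predictor (persistence) in place of a trained LSTM."""
    return np.full(horizon, series[-1])

# DVMD-LSTM data flow (Steps 1-5), with the stand-ins above:
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
modes, r1 = decompose(x)                       # Step 1: first decomposition
sub_modes, r2 = decompose(r1)                  # Step 2: decompose residual r1
fuse_imf = np.sum(sub_modes, axis=0)           # Step 3: fused IMF from r1's modes
h = 30
preds = [naive_forecast(m, h) for m in modes]  # Step 4: predict each mode
preds.append(naive_forecast(fuse_imf, h))      # Step 5: add Fuse-IMF prediction
final = np.sum(preds, axis=0)                  # final DVMD-style prediction
```

Swapping the stand-ins for VMD and a trained LSTM recovers the actual DVMD-LSTM pipeline; the composition logic is unchanged.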

2.4. Precision Evaluation Index

To evaluate the prediction accuracy and noise characteristics of the hybrid model, this study employs Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and the coefficient of determination (R2) as evaluation metrics for model prediction accuracy [79,80]. Additionally, the Bayesian information criterion (BIC_tp) is used to determine the optimal noise model for the original GNSS time series and for the predicted time series under each model, in order to determine whether the prediction results account for colored noise [81,82,83]. The definitions of the three evaluation metrics are as follows:
(1) RMSE
$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$
(2) MAE
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
(3) R2
$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}$$
In the above equations, y i represents the actual GNSS data values, y ¯ represents the mean of actual GNSS data values, y ^ i represents the predicted results of each model, and n denotes the number of GNSS data points. The values of RMSE and MAE are used as evaluation metrics for model prediction accuracy. Smaller values of RMSE and MAE indicate the higher prediction accuracy of the model, while larger values indicate a lower prediction accuracy. The coefficient of determination (R2) ranges between 0 and 1. When R2 is close to 1, it indicates that the prediction model can explain the variability of the dependent variable well. On the other hand, when R2 is close to 0, it suggests that the explanatory power of the prediction model is weak.
(4) BIC_tp
$$BIC\_tp = -2 \log(L) + \log\!\left( \frac{n}{2\pi} \right) v$$
In the equation, $L$ denotes the likelihood of the fitted noise model and $v$ the number of estimated parameters; the noise model with the smallest BIC_tp value is selected as the optimal noise model.
To provide a visual assessment of the improvement achieved by the hybrid model on each evaluation metric, this study introduces the Improvement Ratio (I) to quantify the magnitude of improvement in each accuracy evaluation metric. By calculating the I value, the degree of improvement in accuracy achieved by the hybrid model can be accurately determined. The calculation formula for the Improvement Ratio is as follows:
$$I_{y\hat{y}} = \frac{y - \hat{y}}{y}$$
In the above equation, y and y ^ represent the evaluation metrics for accuracy, such as RMSE. The variable y represents the evaluation metric for the accuracy of the initial model’s predictions, while y ^ represents the evaluation metric for the accuracy of the predictions made by the hybrid model. A larger value of I y y ^ indicates a greater improvement in the evaluation metric achieved by the hybrid model and vice versa.
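The three accuracy metrics and the improvement ratio defined above translate directly into code; a straightforward NumPy version:

```python
import numpy as np

def rmse(y, y_pred):
    """Root mean square error between observations y and predictions."""
    return np.sqrt(np.mean((y - y_pred) ** 2))

def mae(y, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y - y_pred))

def r2(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def improvement_ratio(metric_base, metric_hybrid):
    """I = (y - y_hat) / y: fractional reduction of an error metric
    achieved by the hybrid model over the base model."""
    return (metric_base - metric_hybrid) / metric_base
```

For example, if a base model's RMSE is 0.5 and the hybrid model's is 0.4, the improvement ratio is 0.2, i.e., a 20% reduction.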

3. Data and Experiments

3.1. Data Sources

In this work, the daily E, N, and U coordinate time series of eight GNSS stations from the Extended Solid Earth Science ESDR System (ES3) were selected for the experiment. Daily loosely constrained GNSS solutions from GAMIT and GIPSY were combined using the Quasi-Observation Combination Analysis (QOCA) software to generate a combined solution [62]. The information for each station is presented in Table 1, and the distribution of the stations is depicted in Figure 3. See Appendix A for details of data fluctuations.
In order to reduce the impact of missing data on noise model estimation and prediction, station selection followed these principles: (1) the coordinate time series of each selected station must span 2000 to 2022, ensuring the consistency of the experiment and a reliable velocity parameter estimation; (2) within this time range, the average missing rate of each station's data must not exceed 5%, ensuring the reliability of the prediction experiments; (3) to reduce the impact of inter-regional correlation on velocity parameter repeatability and noise modeling, the selected stations should be evenly distributed.
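Criterion (2) amounts to a simple completeness check on the daily epochs. The helper below is a hypothetical sketch (not from the paper) that counts expected epochs from the nominal spacing:

```python
import numpy as np

def missing_rate(epochs, start, end, step=1.0 / 365.25):
    """Fraction of expected epochs absent between start and end.
    `epochs` holds observed epochs (e.g., decimal years); `step` is the
    nominal spacing (one day for a daily series). Hypothetical helper."""
    expected = int(round((end - start) / step)) + 1
    return 1.0 - len(np.unique(epochs)) / expected
```

A station passing criterion (2) would satisfy `missing_rate(...) <= 0.05` over 2000 to 2022.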

3.2. Data Preprocessing

For data preprocessing, this study employed the Hector software to remove outliers and detect step discontinuities in the raw data [84,85]. After identifying the step discontinuities, they were corrected using the least squares fitting method. The corrected data were then subjected to interpolation using the Regularized Expectation Maximization (RegEM) algorithm [86,87]. This method combines the Expectation Maximization (EM) algorithm with regularization techniques to simultaneously maximize the likelihood function and consider the smoothness of the model and noise reduction. It can effectively handle the interpolation problem of missing data [88,89]. Due to space limitations, only the comparison of interpolation results for the GBOS station with the highest missing rate in the E, N, and U components is shown in Figure 4.
As shown in the figure, it can be observed that the RegEM method not only produces good interpolation results for scattered missing data but also maintains the trend of the sequence well in the presence of many continuous missing data. It successfully overcomes the limitation of the poor interpolation performance of linear interpolation at locations with continuous missing data. Moreover, it provides high-quality continuous time series data for subsequent experiments.

3.3. VMD Parameter Discussion

When performing data decomposition using VMD, the selection of an appropriate number of mode components K is crucial for achieving high-quality decomposition results in VMD. An excessively large K may lead to over-decomposition, while a small K may result in the under-decomposition of the data. To determine the optimal K value for the E, N, and U time series of the different stations, this study adopts the method of comparing the signal-to-noise ratio (SNR) of the decomposed data to evaluate the quality of the decomposition results. A higher SNR indicates clearer signal decomposition and a better denoising effect. Through extensive experiments, and based on empirical rules, this study restricts the K value to a range of 2 to 10 and selects the K value within this range that yields the highest SNR as the optimal K value for each time series [90,91]. The definition of SNR is given as follows:
$$SNR = 10 \lg \frac{\sum_{i=1}^{N} f^2(i)}{\sum_{i=1}^{N} \left[ f(i) - g(i) \right]^2}$$
where f(i) represents the original signal and g(i) represents the reconstructed signal. The penalty factor α also affects the VMD decomposition results; since a penalty factor of approximately 1.5 times the length of the data being decomposed has been reported as near-optimal [92], and in order to ensure experimental consistency, a penalty factor of 10,000 was set for all the decomposition processes in this study. The K values selected for the three directions at each station are shown in Table 2.
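The SNR-based selection of K can be sketched as follows; here `decompose` is a placeholder for a VMD routine returning the mode components and the residual, and the K range follows the 2 to 10 restriction above:

```python
import numpy as np

def snr_db(f, g):
    """SNR (in dB) of reconstruction g against original signal f."""
    return 10.0 * np.log10(np.sum(f ** 2) / np.sum((f - g) ** 2))

def select_k(signal, decompose, k_range=range(2, 11)):
    """Pick the K in [2, 10] that maximizes the reconstruction SNR.
    `decompose(signal, K)` must return (modes, residual); hypothetical."""
    best_k, best_snr = None, -np.inf
    for K in k_range:
        modes, _ = decompose(signal, K)
        s = snr_db(signal, np.sum(modes, axis=0))
        if s > best_snr:
            best_k, best_snr = K, s
    return best_k, best_snr
```

A higher SNR means the retained modes explain more of the original signal's energy, which is the selection rule applied per station and per component.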

4. Experimental Results and Analysis

4.1. DVMD-LSTM Prediction Results Analysis

To ensure experimental fairness and consistency, all deep learning models in this paper use the same dataset partition: a training set (2000.0 to 2011.9), a validation set (2012.0 to 2014.9), and a test set (2015.0 to 2022.9). The training set was used to train the model parameters and learn the data features; the validation set was used to fine-tune the model’s hyperparameters and evaluate its performance; and the test set was used for the final evaluation of the model’s effectiveness in practical applications. This partitioning scheme ensures that the model has sufficient training data to fully learn the data features, while the test set provides enough prediction results to evaluate the optimal noise model of the predictions.
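A partition by decimal-year boundaries matching this scheme might look like the following sketch (the boundary values are the ones stated above):

```python
import numpy as np

def split_by_year(t, y, bounds=(2012.0, 2015.0)):
    """Split (t, y) into train/val/test by decimal-year boundaries,
    matching the 2000.0-2011.9 / 2012.0-2014.9 / 2015.0-2022.9 scheme."""
    t = np.asarray(t)
    y = np.asarray(y)
    train = y[t < bounds[0]]
    val = y[(t >= bounds[0]) & (t < bounds[1])]
    test = y[t >= bounds[1]]
    return train, val, test
```

Splitting chronologically (rather than randomly) is essential for time series: it prevents the model from being evaluated on epochs earlier than those it was trained on.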
In order to visually demonstrate the differences in the prediction results between the DVMD-LSTM model and the VMD-LSTM model, this study compares and discusses the prediction results of the decomposed IMF and residual terms using the two hybrid models. Due to space limitations, this paper only presents the prediction results of the IMF and residual terms in the U direction at the SEDR station. For detailed information, please refer to Figure 5.
From Figure 5, it can be observed that both the VMD-LSTM and DVMD-LSTM models yield good prediction results for each IMF component. However, due to the lack of apparent regularity in the residual terms, the VMD-LSTM model struggles to capture their fluctuation characteristics effectively, resulting in lower prediction accuracy and subsequently affecting the overall prediction performance of the VMD-LSTM model. To address this issue, the proposed DVMD-LSTM model conducts a secondary VMD decomposition on the residual terms obtained after the first VMD decomposition, further extracting the fluctuation information within the residual terms and significantly improving the prediction accuracy. In order to investigate whether performing multiple VMD decompositions can further enhance accuracy, analyses were conducted on the residual terms after the second decomposition. It was found that they lack noticeable fluctuation characteristics. When these results are incorporated into the model for prediction, there is no significant improvement observed; moreover, some stations even exhibit a decrease in prediction accuracy. This indicates that increasing the number of decompositions on the residual terms may not necessarily enhance the prediction accuracy of the model. Therefore, in this study, the data after the secondary VMD decomposition were used as the feature input for the subsequent deep learning experiments.

4.2. DVMD-LSTM Model Prediction Results and Precision Analysis

To compare the improvement in the predictive accuracy of the DVMD-LSTM model and the VMD-LSTM model compared to the LSTM model under different fluctuation amplitudes, this study conducted experiments using datasets from different stations in three directions. To better distinguish the prediction results, this study analyzed the prediction error R, which is the difference between the true values and the predicted results. Due to space limitations, this section only presents the prediction results of the SEDR station in three directions for different models, as shown in Figure 6.
From Figure 6, it can be observed that, as the fluctuation amplitude of the original data increases, the prediction errors of different models also increase to varying degrees, with the largest errors being observed in the U direction. Compared to the LSTM model, the VMD-LSTM hybrid model better captures the fluctuation trends and amplitudes of the true values in the data and exhibits smaller variations and extremities in the prediction error R. This indicates that, after VMD decomposition, the VMD-LSTM model can capture the inherent fluctuation characteristics of the initial data more effectively, leading to more accurate predictions. Both the VMD-LSTM and DVMD-LSTM models exhibit similar prediction fluctuations and trends; however, the DVMD-LSTM model has smaller prediction errors R. This suggests that the DVMD-LSTM model not only retains the advantages of the VMD-LSTM model in predicting fluctuation trends and amplitudes but that it also achieves higher prediction accuracy.
To analyze the applicability and robustness of the DVMD-LSTM model, this study conducted predictions using the LSTM, VMD-LSTM, and DVMD-LSTM models in the E, N, and U directions for each GNSS station. The prediction accuracy and improvement achieved by each model are summarized in Table 3, where “I” represents the degree of accuracy improvement of the hybrid model compared with the single LSTM model under different accuracy indexes.
From the results in Table 3, it can be observed that the VMD-LSTM model outperforms the LSTM model, with average reductions in RMSE of 19.77% in the E direction, 26.83% in the N direction, and 19.31% in the U direction. The VMD-LSTM model demonstrates average reductions in MAE of 20.31% in the E direction, 27.12% in the N direction, and 19.48% in the U direction. Additionally, the VMD-LSTM model shows average increases in R2 of 43.66% in the E direction, 43.47% in the N direction, and 44.54% in the U direction. The experimental results indicate that the VMD-LSTM model significantly improves prediction accuracy compared to the standalone LSTM model. The improvement in R2 varies across stations and is most pronounced at stations where the LSTM model had lower R2 values, suggesting that the VMD-LSTM model exhibits better explanatory power and produces predictions that more closely match the observed values with improved fitting results.
Compared to the VMD-LSTM model, the DVMD-LSTM model demonstrates an average reduction of 9.71% in RMSE for the E direction, an average reduction of 8.84% in RMSE for the N direction, and an average reduction of 11.02% in RMSE for the U direction. The DVMD-LSTM model exhibits an average reduction of 9.17% in MAE for the E direction, an average reduction of 8.55% in MAE for the N direction, and an average reduction of 10.61% in MAE for the U direction. Moreover, the DVMD-LSTM model shows an average increase of 20.68% in R2 for the E direction, an average increase of 12.18% in R2 for the N direction, and an average increase of 21.03% in R2 for the U direction. The overall average R2 value reaches 0.78, indicating a strong correlation between the DVMD-LSTM model’s prediction results and the original data along with improved fitting performance. It can be concluded that the DVMD-LSTM model achieves a significant improvement in accuracy compared to the VMD-LSTM model, with particularly notable improvements in R2. The DVMD-LSTM model exhibits a greater improvement in the U direction, suggesting that it performs better for time series with larger fluctuations. This is because, for time series with larger fluctuations, the residual terms obtained after VMD decomposition are larger and contain more fluctuation characteristics.
In summary, the DVMD-LSTM model preserves the advantages of the VMD-LSTM model in predicting fluctuation trends and frequencies while achieving higher prediction accuracy. The results of the predictions conducted across the different directional components of various stations further validate the superiority of the proposed model. These experimental findings confirm the model’s applicability and robustness, demonstrating its potential for broad utilization in the field of high-precision time series forecasting.

4.3. Optimal Noise Model Research

4.3.1. Comparison of Optimal Noise Models under Each Prediction Model

To further investigate whether the DVMD-LSTM model can adequately consider the noise characteristics of different datasets during prediction, we note that scholars at home and abroad currently regard white noise + flicker noise (FN + WN), with a small amount of random walk noise + flicker noise (RW + FN), as the optimal stochastic models for the noise characteristics of GPS coordinate time series [93,94,95,96,97]. In addition, some scholars have proposed that some noise in GPS coordinate time series can be represented by power law noise (PL) and the Gaussian Markov model (GGM) [98,99,100]. This paper takes GNSS reference stations with the same time span in North America as the research object. Four combined noise models, random walk noise + flicker noise + white noise (RW + FN + WN), flicker noise + white noise (FN + WN), power law noise + white noise (PL + WN), and Gaussian Markov + white noise (GGM + WN), were used to analyze the training and test set data of each station. Finally, eight stations sharing the same optimal noise model were selected as the experimental data, and the optimal noise model of each prediction model's results was calculated for each station. The specific results are shown in Table 4.
According to Table 4, the optimal noise models differ among stations, indicating inconsistent noise characteristics. The LSTM model's prediction results deviate substantially from the optimal noise models of the original data, with an average accuracy of only 25% across the three directions, and its predominant optimal noise models are PL + WN and GGM + WN. This suggests that the LSTM model does not adequately consider the inherent noise characteristics of GNSS time series during prediction. In contrast, the VMD-LSTM model captures the optimal noise models more accurately, with an average accuracy of 42.67%, indicating that VMD decomposition effectively captures the noise characteristics within the IMF components; however, the noise characteristics in the residual component r are not fully captured, resulting in relatively lower overall accuracy. The proposed DVMD-LSTM model therefore extracts the noise characteristics of the residual component r by applying VMD decomposition once again and, as a result, achieves an average accuracy of 79.17% in capturing the optimal noise models. In summary, by processing both the original data and the decomposed residual component, the DVMD-LSTM model adequately considers the noise characteristics of the data during prediction.

4.3.2. Velocity Estimation Impact Analysis

To further assess the quality of each deep learning model's predictions, the optimal noise model and the corresponding velocity were computed from each model's prediction results and compared with the velocity obtained by fitting the optimal noise model to the original data using the Hector software [84,85]. The absolute error between each model's predicted velocity and the original velocity was computed at every station, and these errors were averaged to obtain the mean absolute velocity error of each model; comparing these mean errors provides a measure of prediction quality. The velocities computed from the prediction results of each deep learning model under the optimal noise model at the different stations are shown in Table 5.
According to Table 5, the average absolute error between the LSTM-predicted velocities and the original velocities is 0.068 mm/year in the E direction, 0.093 mm/year in the N direction, and 0.078 mm/year in the U direction. For the VMD-LSTM model, the corresponding errors are 0.031, 0.060, and 0.060 mm/year; for the DVMD-LSTM model, they are 0.016, 0.042, and 0.047 mm/year. Relative to the LSTM model, the VMD-LSTM model improves velocity prediction accuracy by 37.67% on average and the DVMD-LSTM model by 56.80%; relative to the VMD-LSTM model, the DVMD-LSTM model improves it by 33.02% on average. Both hybrid models thus outperform the LSTM model in velocity prediction, with the DVMD-LSTM model showing the larger gain, further demonstrating its outstanding predictive performance.
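The quoted averages follow directly from the per-direction mean absolute velocity errors; a short check (small deviations from the quoted figures arise because the tabulated errors are rounded):

```python
import numpy as np

# Mean absolute velocity errors vs. the original data (mm/year),
# averaged per direction (E, N, U), as reported from Table 5.
err = {
    "LSTM":      np.array([0.068, 0.093, 0.078]),
    "VMD-LSTM":  np.array([0.031, 0.060, 0.060]),
    "DVMD-LSTM": np.array([0.016, 0.042, 0.047]),
}

def mean_improvement(baseline, model):
    # Average, over the three directions, of the relative error reduction.
    return float(np.mean(100.0 * (err[baseline] - err[model]) / err[baseline]))

print(round(mean_improvement("LSTM", "VMD-LSTM"), 2))       # close to 37.67
print(round(mean_improvement("LSTM", "DVMD-LSTM"), 2))      # close to 56.80
print(round(mean_improvement("VMD-LSTM", "DVMD-LSTM"), 2))  # close to 33.02
```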
In summary, this study evaluated the performance of various prediction models by analyzing their prediction accuracy, optimal noise models, and velocity results. The results indicate that the DVMD-LSTM model outperforms the others in multiple aspects, highlighting its potential for widely applicable high-precision time series prediction with multiple noise characteristics.

5. Conclusions

Addressing the limitations of low prediction accuracy and inadequate consideration of noise characteristics in the VMD-LSTM model for time series forecasting, this paper proposes a high-precision GNSS time series prediction method based on DVMD and LSTM. The proposed method is comprehensively validated and tested on the daily time series data from eight North American regional GNSS stations, spanning the period from 2000 to 2022, in the E, N, and U directions. The experimental results demonstrate the following:
(1)
The VMD-LSTM model shows good prediction results for each IMF value after VMD decomposition but performs poorly in predicting the residual component. The proposed DVMD-LSTM model utilizes VMD decomposition to extract the fluctuation characteristics of the residual component, leading to a significant improvement in the prediction accuracy of the residual component and enhancing the overall prediction accuracy;
(2)
Compared to the initial VMD-LSTM hybrid model, the DVMD-LSTM model exhibits significant improvements in prediction accuracy. The RMSE values for the DVMD-LSTM model are reduced by an average of 9.71% in the E direction, 8.84% in the N direction, and 11.02% in the U direction. Additionally, the MAE values decreased by an average of 9.17% in the E direction, 8.55% in the N direction, and 10.61% in the U direction. Moreover, the DVMD-LSTM model shows an average increase of 20.68% in R2 for the E direction, an average increase of 12.18% in R2 for the N direction, and an average increase of 21.03% in R2 for the U direction. Across all measurement stations, the DVMD-LSTM model consistently outperforms the VMD-LSTM model, indicating its superior predictive accuracy, adaptability, and robustness;
(3)
Compared to the LSTM model, the DVMD-LSTM model achieves an average improvement of 36.50% in the accuracy of the average optimal noise model across all stations, reaching an overall accuracy of 79.17%. This demonstrates that the DVMD-LSTM model adequately considers the noise characteristics of the data during the prediction process and achieves superior prediction results. By calculating the velocities obtained from the optimal noise models, it is evident that the DVMD-LSTM model achieves an average improvement of 33.02% in velocity prediction accuracy compared to the VMD-LSTM model, further confirming the outstanding predictive performance of the DVMD-LSTM model.

Author Contributions

H.C. and J.H., writing—original draft preparation; X.H., T.L., K.Y. and X.M., methodology, writing—review and editing; X.S. and Z.H., data processing and figure plotting. All authors have read and agreed to the published version of the manuscript.

Funding

This work was sponsored by the National Natural Science Foundation of China (42104023), the Major Discipline Academic and Technical Leaders Training Program of Jiangxi Province (20225BCJ23014), the Jiangxi University of Science and Technology Postgraduate Education Teaching Reform Research Project (YJG2022006), and the Hebei Water Conservancy Research Plan (2021-27).

Data Availability Statement

The GNSS time series data used in this study can be obtained at http://garner.ucsd.edu/pub/measuresESESES_products/Timeseries/Global/.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. ALBH station data distribution.
Figure A2. BURN station data distribution.
Figure A3. CEDA station data distribution.
Figure A4. FOOT station data distribution.
Figure A5. GOBS station data distribution.
Figure A6. RHCL station data distribution.
Figure A7. SEDR station data distribution.
Figure A8. SMEL station data distribution.

References

  1. Ohta, Y.; Kobayashi, T.; Tsushima, H.; Miura, S.; Hino, R.; Takasu, T.; Fujimoto, H.; Iinuma, T.; Tachibana, K.; Demachi, T.; et al. Quasi real-time fault model estimation for near-field tsunami forecasting based on RTK-GPS analysis: Application to the 2011 Tohoku-Oki earthquake (Mw 9.0). J. Geophys. Res. Solid Earth 2012, 117.
  2. Serpelloni, E.; Faccenna, C.; Spada, G.; Dong, D.; Williams, S.D. Vertical GPS ground motion rates in the Euro-Mediterranean region: New evidence of velocity gradients at different spatial scales along the Nubia-Eurasia plate boundary. J. Geophys. Res. Solid Earth 2013, 118, 6003–6024.
  3. Serpelloni, E.; Vannucci, G.; Pondrelli, S.; Argnani, A.; Casula, G.; Anzidei, M.; Baldi, P.; Gasperini, P. Kinematics of the Western Africa-Eurasia plate boundary from focal mechanisms and GPS data. Geophys. J. Int. 2007, 169, 1180–1200.
  4. Kong, Q.; Zhang, L.; Han, J.; Li, C.; Fang, W.; Wang, T. Analysis of coordinate time series of DORIS stations on Eurasian plate and the plate motion based on SSA and FFT. Geod. Geodyn. 2023, 14, 90–97.
  5. Younes, S.A. Study of crustal deformation in Egypt based on GNSS measurements. Surv. Rev. 2022, 55, 338–349.
  6. Cina, A.; Piras, M. Performance of low-cost GNSS receiver for landslides monitoring: Test and results. Geomat. Nat. Hazard Risk 2015, 6, 497–514.
  7. Shen, N.; Chen, L.; Wang, L.; Hu, H.; Lu, X.; Qian, C.; Liu, J.; Jin, S.; Chen, R. Short-term landslide displacement detection based on GNSS real-time kinematic positioning. IEEE Trans. Instrum. Meas. 2021, 70, 1004714.
  8. Shen, N.; Chen, L.; Chen, R. Displacement detection based on Bayesian inference from GNSS kinematic positioning for deformation monitoring. Mech. Syst. Signal Process. 2022, 167, 108570.
  9. Meng, X.; Roberts, G.W.; Dodson, A.H.; Cosser, E.; Barnes, J.; Rizos, C. Impact of GPS satellite and pseudolite geometry on structural deformation monitoring: Analytical and empirical studies. J. Geod. 2004, 77, 809–822.
  10. Yi, T.H.; Li, H.N.; Gu, M. Experimental assessment of high-rate GPS receivers for deformation monitoring of bridge. Measurement 2013, 46, 420–432.
  11. Xiao, R.; Shi, H.; He, X.; Li, Z.; Jia, D.; Yang, Z. Deformation monitoring of reservoir dams using GNSS: An application to south-to-north water diversion project, China. IEEE Access 2019, 7, 54981–54992.
  12. Reguzzoni, M.; Rossi, L.; De Gaetani, C.I.; Caldera, S.; Barzaghi, R. GNSS-based dam monitoring: The application of a statistical approach for time series analysis to a case study. Appl. Sci. 2022, 12, 9981.
  13. Zhao, L.; Yang, Y.; Xiang, Z.; Zhang, S.; Li, X.; Wang, X.; Ma, X.; Hu, C.; Pan, J.; Zhou, Y.; et al. A novel low-cost GNSS solution for the real-time deformation monitoring of cable saddle pushing: A case study of Guojiatuo suspension bridge. Remote Sens. 2022, 14, 5174.
  14. Altamimi, Z.; Rebischung, P.; Métivier, L.; Collilieux, X. ITRF2014: A new release of the International Terrestrial Reference Frame modeling nonlinear station motions. J. Geophys. Res. Solid Earth 2016, 121, 6109–6131.
  15. Chen, M.; Zhang, Q. Analysis of positioning deviation between Beidou and GPS based on National Reference Stations in China. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 209–214.
  16. Blewitt, G.; Lavallée, D. Effect of annual signals on geodetic velocity. J. Geophys. Res. Solid Earth 2002, 107, ETG 9-1–ETG 9-11.
  17. Segall, P.; Davis, J.L. GPS applications for geodynamics and earthquake studies. Annu. Rev. Earth Planet. Sci. 1997, 25, 301–336.
  18. Usifoh, S.E.; Männel, B.; Sakic, P.; Dodo, J.D.; Schuh, H. Determination of a GNSS-Based Velocity Field of the African Continent; Springer: Cham, Switzerland, 2022.
  19. Chen, J.H. Petascale direct numerical simulation of turbulent combustion—Fundamental insights towards predictive models. Proc. Combust. Inst. 2011, 33, 99–123.
  20. Xu, W.; Xu, H.; Chen, J.; Kang, Y.; Pu, Y.; Ye, Y.; Tong, J. Combining numerical simulation and deep learning for landslide displacement prediction: An attempt to expand the deep learning dataset. Sustainability 2022, 14, 6908.
  21. Wang, J.; Jiang, W.; Li, Z.; Lu, Y. A new multi-scale sliding window LSTM framework (MSSW-LSTM): A case study for GNSS time-series prediction. Remote Sens. 2021, 13, 3328.
  22. Klos, A.; Olivares, G.; Teferle, F.N.; Hunegnaw, A.; Bogusz, J. On the combined effect of periodic signals and colored noise on velocity uncertainties. GPS Solut. 2018, 22, 1.
  23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  24. Li, Y. Research and application of deep learning in image recognition. In Proceedings of the 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA), Shenyang, China, 21–23 January 2022; pp. 994–999.
  25. Xiong, J.; Yu, D.; Liu, S.; Shu, L.; Wang, X.; Liu, Z. A review of plant phenotypic image recognition technology based on deep learning. Electronics 2021, 10, 81.
  26. Otter, D.W.; Medina, J.R.; Kalita, J.K. A survey of the usages of deep learning for natural language processing. IEEE Trans. Neural Networks Learn. Syst. 2020, 32, 604–624.
  27. Lauriola, I.; Lavelli, A.; Aiolli, F. An introduction to deep learning in natural language processing: Models, techniques, and tools. Neurocomputing 2022, 470, 443–456.
  28. Wu, L.; Chen, Y.; Shen, K.; Guo, X.; Gao, H.; Li, S.; Pei, J.; Long, B. Graph neural networks for natural language processing: A survey. Found. Trends Mach. 2023, 16, 119–328.
  29. Deng, L.; Platt, J. Ensemble deep learning for speech recognition. In Proceedings of the Interspeech 2014, Singapore, 14–18 September 2014.
  30. Lee, W.; Seong, J.J.; Ozlu, B.; Shim, B.S.; Marakhimov, A.; Lee, S. Biosignal sensors and deep learning-based speech recognition: A review. Sensors 2021, 21, 1399.
  31. Nassif, A.B.; Shahin, I.; Attili, I.; Azzeh, M.; Shaalan, K. Speech recognition using deep neural networks: A systematic review. IEEE Access 2019, 7, 19143–19165.
  32. Lim, B.; Zohren, S. Time-series forecasting with deep learning: A survey. Philos. Trans. R. Soc. A 2021, 379, 20200209.
  33. Hua, Y.; Zhao, Z.; Li, R.; Chen, X.; Liu, Z.; Zhang, H. Deep learning with long short-term memory for time series prediction. IEEE Commun. Mag. 2019, 57, 114–119.
  34. Sezer, O.B.; Gudelek, M.U.; Ozbayoglu, A.M. Financial time series forecasting with deep learning: A systematic literature review: 2005–2019. Appl. Soft. Comput. 2020, 90, 106181.
  35. Torres, J.F.; Hadjout, D.; Sebaa, A.; Martínez-Álvarez, F.; Troncoso, A. Deep learning for time series forecasting: A survey. Big Data 2021, 9, 3–21.
  36. Masini, R.P.; Medeiros, M.C.; Mendes, E.F. Machine learning advances for time series forecasting. J. Econ. Surv. 2023, 37, 76–111.
  37. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Networks 1994, 5, 157–166.
  38. Graves, A. Long short-term memory. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 37–45.
  39. Van Houdt, G.; Mosquera, C.; Nápoles, G. A review on the long short-term memory model. Artif. Intell. Rev. 2020, 53, 5929–5955.
  40. Gasparin, A.; Lukovic, S.; Alippi, C. Deep learning for time series forecasting: The electric load case. CAAI Trans. Intell. Technol. 2022, 7, 5929–5955.
  41. Bashir, T.; Haoyong, C.; Tahir, M.F.; Liqiang, Z. Short term electricity load forecasting using hybrid prophet-LSTM model optimized by BPNN. Energy Rep. 2022, 8, 1678–1686.
  42. Lin, J.; Ma, J.; Zhu, J.; Cui, Y. Short-term load forecasting based on LSTM networks considering attention mechanism. Int. J. Electr. Power 2022, 137, 107818.
  43. Yao, W.; Huang, P.; Jia, Z. Multidimensional LSTM networks to predict wind speed. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 7493–7497.
  44. Li, J.; Song, Z.; Wang, X.; Wang, Y.; Jia, Y. A novel offshore wind farm typhoon wind speed prediction model based on PSO–Bi-LSTM improved by VMD. Energy 2022, 251, 123848.
  45. Yan, Y.; Wang, X.; Ren, F.; Shao, Z.; Tian, C. Wind speed prediction using a hybrid model of EEMD and LSTM considering seasonal features. Energy Rep. 2022, 8, 8965–8980.
  46. Kim, H.U.; Bae, T.S. Deep learning-based GNSS network-based real-time kinematic improvement for autonomous ground vehicle navigation. J. Sensors 2019, 2019, 3737265.
  47. Tao, Y.; Liu, C.; Chen, T.; Zhao, X.; Liu, C.; Hu, H.; Zhou, T.; Xin, H. Real-time multipath mitigation in multi-GNSS short baseline positioning via CNN-LSTM method. Math. Probl. Eng. 2021, 2021, 6573230.
  48. Xie, P.; Zhou, A.; Chai, B. The application of long short-term memory (LSTM) method on displacement prediction of multifactor-induced landslides. IEEE Access 2019, 7, 54305–54311.
  49. Wang, Y.; Markert, R.; Xiang, J.; Zheng, W. Research on variational mode decomposition and its application in detecting rub-impact fault of the rotor system. Mech. Syst. Signal Process. 2015, 60, 243–251.
  50. Lian, J.; Liu, Z.; Wang, H.; Dong, X. Adaptive variational mode decomposition method for signal processing based on mode characteristic. Mech. Syst. Signal Process. 2018, 107, 53–77.
  51. Lahmiri, S. A variational mode decompoisition approach for analysis and forecasting of economic and financial time series. Expert. Syst. Appl. 2016, 55, 268–273.
  52. Zhao, L.; Li, Z.; Qu, L.; Zhang, J.; Teng, B. A hybrid VMD-LSTM/GRU model to predict non-stationary and irregular waves on the east coast of China. Ocean Eng. 2023, 276, 114136.
  53. Wang, X.; Wang, Y.; Yuan, P.; Wang, L.; Cheng, D. An adaptive daily runoff forecast model using VMD-LSTM-PSO hybrid approach. Hydrol. Sci. J. 2021, 66, 1488–1502.
  54. Xu, D.; Hu, X.; Hong, W.; Li, M.; Chen, Z. Power Quality Indices Online Prediction Based on VMD-LSTM Residual Analysis. J. Phys. Conf. Ser. 2022, 2290, 012009.
  55. Tao, D.; Yang, Y.; Cai, Z.; Duan, J.; Lan, H. Application of VMD-LSTM in Water Quality Prediction. J. Phys. Conf. Ser. 2023, 2504, 012057.
  56. Huang, Y.; Yan, L.; Cheng, Y.; Qi, X.; Li, Z. Coal thickness prediction method based on VMD and LSTM. Electronics 2022, 11, 232.
  57. Zhang, T.; Fu, C. Application of Improved VMD-LSTM Model in Sports Artificial Intelligence. Comput. Intell. Neurosci. 2022, 2022, 3410153.
  58. Han, L.; Zhang, R.; Wang, X.; Bao, A.; Jing, H. Multi-step wind power forecast based on VMD-LSTM. IET Renew. Power Gen. 2019, 13, 1690–1700.
  59. Xing, Y.; Yue, J.; Chen, C.; Cong, K.; Zhu, S.; Bian, Y. Dynamic displacement forecasting of dashuitian landslide in China using variational mode decomposition and stack long short-term memory network. Appl. Sci. 2019, 9, 2951.
  60. He, X.; Bos, M.S.; Montillet, J.P.; Fernandes, R.; Melbourne, T.; Jiang, W.; Li, W. Spatial variations of stochastic noise properties in GPS time series. Remote Sens. 2021, 13, 4534.
  61. Nistor, S.; Suba, N.S.; Maciuk, K.; Kudrys, J.; Nastase, E.I.; Muntean, A. Analysis of noise and velocity in GNSS EPN-repro 2 time series. Remote Sens. 2021, 13, 2783.
  62. He, X.; Montillet, J.P.; Fernandes, R.; Bos, M.; Yu, K.; Hua, X.; Jiang, W. Review of current GPS methodologies for producing accurate time series and their error sources. J. Geodyn. 2017, 106, 12–29.
  63. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2013, 62, 531–544.
  64. Ur Rehman, N.; Aftab, H. Multivariate variational mode decomposition. IEEE Trans. Signal Process. 2019, 67, 6039–6052.
  65. Wang, Z.; He, X.; Shen, H.; Fan, S.; Zeng, Y. Multi-source information fusion to identify water supply pipe leakage based on SVM and VMD. Inf. Process. Manag. 2022, 59, 102819.
  66. Liu, Y.; Yang, G.; Li, M.; Yin, H. Variational mode decomposition denoising combined the detrended fluctuation analysis. Signal Process. 2016, 125, 349–364.
  67. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D 2020, 404, 132306.
  68. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270.
  69. Muhuri, P.S.; Chatterjee, P.; Yuan, X.; Roy, K.; Esterline, A. Using a long short-term memory recurrent neural network (LSTM-RNN) to classify network attacks. Information 2020, 11, 243.
  70. Sagheer, A.; Kotb, M. Time series forecasting of petroleum production using deep LSTM recurrent networks. Neurocomputing 2019, 323, 203–213.
  71. Yadav, A.; Jha, C.K.; Sharan, A. Optimizing LSTM for time series prediction in Indian stock market. Procedia Comput. Sci. 2020, 167, 2091–2100.
  72. Fischer, T.; Krauss, C. Deep learning with long short-term memory networks for financial market predictions. Eur. J. Oper. Res. 2018, 270, 654–669.
  73. Malhotra, P.; Vig, L.; Shroff, G.; Agarwal, P. Long Short Term Memory Networks for Anomaly Detection in Time Series. In Proceedings of the Esann 2015, Bruges, Belgium, 22–24 April 2015; Volume 2015, p. 89.
  74. Liao, X.; Liu, Z.; Deng, W. Short-term wind speed multistep combined forecasting model based on two-stage decomposition and LSTM. Wind Energy 2021, 24, 991–1012.
  75. Jin, Y.; Guo, H.; Wang, J.; Song, A. A hybrid system based on LSTM for short-term power load forecasting. Energies 2020, 13, 6241.
  76. Sun, Z.; Zhao, S.; Zhang, J. Short-term wind power forecasting on multiple scales using VMD decomposition, K-means clustering and LSTM principal computing. IEEE Access 2019, 7, 166917–166929.
  77. Li, Y.; Li, Y.; Chen, X.; Yu, J. Denoising and feature extraction algorithms using NPE combined with VMD and their applications in ship-radiated noise. Symmetry 2017, 9, 256.
  78. Li, C.; Wu, Y.; Lin, H.; Li, J.; Zhang, F.; Yang, Y. ECG denoising method based on an improved VMD algorithm. IEEE Sens. J. 2022, 22, 22725–22733.
  79. Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)?–Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250.
  80. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
  81. He, X.; Bos, M.S.; Montillet, J.P.; Fernandes, R.M.S. Investigation of the noise properties at low frequencies in long GNSS time series. J. Geod. 2019, 93, 1271–1282.
  82. Neath, A.A.; Cavanaugh, J.E. The Bayesian information criterion: Background, derivation, and applications. Wires Comput. Stat. 2012, 4, 199–203.
  83. Vrieze, S.I. Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychol. Methods 2012, 17, 228.
  84. Williams, S.D.P. CATS: GPS coordinate time series analysis software. GPS Solut. 2008, 12, 147–153.
  85. He, Y.; Zhang, S.; Wang, Q.; Liu, Q.; Qu, W.; Hou, X. HECTOR for analysis of GPS time series. In China Satellite Navigation Conference (CSNC) 2018 Proceedings: Volume I; Springer: Singapore, 2018; pp. 187–196.
  86. Tingley, M.P.; Huybers, P. A Bayesian algorithm for reconstructing climate anomalies in space and time. Part II: Comparison with the regularized expectation–maximization algorithm. J. Clim. 2010, 23, 2782–2800.
  87. Conchello, J.A.; McNally, J.G. Fast regularization technique for expectation maximization algorithm for optical sectioning microscopy. In Proceedings of the SPIE, Three-Dimensional Microscopy: Image Acquisition and Processing III, San Jose, CA, USA, 28 January–2 February 1996; Volume 2655, pp. 199–208.
  88. Schneider, T. Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values. J. Clim. 2001, 14, 853–871.
  89. Christiansen, B.; Schmith, T.; Thejll, P. A surrogate ensemble study of climate reconstruction methods: Stochasticity and robustness. J. Clim. 2009, 22, 951–976.
  90. Mei, L.; Li, S.; Zhang, C.; Han, M. Adaptive signal enhancement based on improved VMD-SVD for leak location in water-supply pipeline. IEEE Sens. J. 2021, 21, 24601–24612.
  91. Ding, M.; Shi, Z.; Du, B.; Wang, H.; Han, L. A signal de-noising method for a MEMS gyroscope based on improved VMD-WTD. Meas. Sci. Technol. 2021, 32, 095112.
  92. Ding, J.; Xiao, D.; Li, X. Gear fault diagnosis based on genetic mutation particle swarm optimization VMD and probabilistic neural network algorithm. IEEE Access 2020, 8, 18456–18474.
  93. Agnew, D.C. The time-domain behavior of power-law noises. Geophys. Res. Lett. 1992, 19, 333–336.
  94. Zhang, J.; Bock, Y.; Johnson, H.; Fang, P.; Williams, S.; Genrich, J.; Wdowinski, S.; Behr, J. Southern California Permanent GPS Geodetic Array: Error analysis of daily position estimates and site velocities. J. Geophys. Res. Solid Earth 1997, 102, 18035–18055.
  95. Mao, A.; Harrison, C.G.A.; Dixon, T.H. Noise in GPS coordinate time series. J. Geophys. Res. Solid Earth 1999, 104, 2797–2816.
  96. Williams, S.D.P. The effect of coloured noise on the uncertainties of rates estimated from geodetic time series. J. Geod. 2003, 76, 483–494.
  97. Hackl, M.; Malservisi, R.; Hugentobler, U.; Wonnacott, R. Estimation of velocity uncertainties from GPS time series: Examples from the analysis of the South African TrigNet network. J. Geophys. Res. Solid Earth 2011, 116.
  98. Langbein, J. Estimating rate uncertainty with maximum likelihood: Differences between power-law and flicker–random-walk models. J. Geod. 2012, 86, 775–783.
  99. Bos, M.S.; Fernandes, R.M.S.; Williams, S.D.P.; Bastos, L. Fast error analysis of continuous GNSS observations with missing data. J. Geod. 2013, 87, 351–360.
  100. Dmitrieva, K.; Segall, P.; DeMets, C. Network-based estimation of time-dependent noise in GPS position time series. J. Geod. 2015, 89, 591–606.
Figure 1. Basic structure of LSTM.
Figure 2. DVMD-LSTM hybrid model prediction process.
Figure 3. Distribution map of each GNSS station.
Figure 4. Three-direction interpolation comparison chart of GOBS station.
Figure 5. Prediction results of each IMF and residual term under different models after VMD decomposition in the U direction of the SEDR station (the black curves represent the original data and the IMF components and residual terms obtained from VMD decomposition; the red curves represent the prediction results of the IMF components, which are shared by the DVMD-LSTM and VMD-LSTM models; the blue curve represents the prediction of the residual term by the VMD-LSTM model; and the green curve represents the prediction of the residual term by the DVMD-LSTM model).
Figure 6. Comparison of prediction results and prediction error R in three directions of the SEDR station under different models (sub-figures (ac) are the prediction results of each model and sub-figures (df) are comparison diagrams of the prediction error R of each model).
Table 1. Information of each GNSS station.
Site | Longitude (°) | Latitude (°) | Time Span (Year) | Data Missing Rate
ALBH | −123.49 | 48.39 | 2000–2022 | 0.61%
BURN | −117.84 | 42.78 | 2000–2022 | 1.27%
CEDA | −112.86 | 40.68 | 2000–2022 | 2.74%
FOOT | −113.81 | 39.37 | 2000–2022 | 3.40%
GOBS | −120.81 | 45.84 | 2000–2022 | 3.65%
RHCL | −118.03 | 34.02 | 2000–2022 | 1.79%
SEDR | −122.22 | 48.52 | 2000–2022 | 0.49%
SMEL | −112.84 | 39.43 | 2000–2022 | 0.79%
Table 2. Results of K value selection in three directions at each site.

Site | N | E | U
ALBH | 3 | 6 | 3
BURN | 4 | 4 | 3
CEDA | 4 | 4 | 3
FOOT | 3 | 8 | 5
GOBS | 3 | 6 | 5
RHCL | 7 | 3 | 3
SEDR | 3 | 5 | 7
SMEL | 7 | 3 | 5
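The K values in Table 2 are the number of modes requested from VMD for each coordinate component. For reference, VMD (as introduced by Dragomiretskiy and Zosso) recovers K band-limited modes u_k(t) and their center frequencies ω_k by solving a constrained variational problem; the formulation below is the standard one from the VMD literature, not reproduced from this paper:

```latex
\min_{\{u_k\},\{\omega_k\}} \;
\sum_{k=1}^{K} \left\| \, \partial_t \!\left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2
\qquad \text{s.t.} \qquad \sum_{k=1}^{K} u_k(t) = f(t)
```

A K that is too small leaves signal in the residual term, while one that is too large splits a single physical signal across modes, which is why K is tuned per station and per direction as in Table 2.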
Table 3. Comparison of the prediction results of each GNSS station in the three directions of E, N, and U under different models (the units of RMSE and MAE in the table are mm; I/% denotes the improvement relative to the LSTM model).

Site | Dir || LSTM: RMSE | MAE | R2 || VMD-LSTM: RMSE | I/% | MAE | I/% | R2 | I/% || DVMD-LSTM: RMSE | I/% | MAE | I/% | R2 | I/%
ALBH | E || 0.89 | 0.65 | 0.65 || 0.76 | 13.91 | 0.55 | 14.03 | 0.74 | 13.75 || 0.67 | 24.56 | 0.49 | 24.31 | 0.80 | 22.89
BURN | E || 1.40 | 1.10 | 0.51 || 1.16 | 17.00 | 0.92 | 16.70 | 0.66 | 30.37 || 1.02 | 27.00 | 0.82 | 25.78 | 0.74 | 45.61
CEDA | E || 1.73 | 1.35 | 0.70 || 1.37 | 20.75 | 1.06 | 21.18 | 0.81 | 16.00 || 1.21 | 29.82 | 0.94 | 30.32 | 0.85 | 21.83
FOOT | E || 0.58 | 0.44 | 0.13 || 0.51 | 12.91 | 0.38 | 13.51 | 0.34 | 157.6 || 0.45 | 22.12 | 0.34 | 22.27 | 0.47 | 256.7
GOBS | E || 1.00 | 0.70 | 0.86 || 0.86 | 13.74 | 0.58 | 16.08 | 0.90 | 4.10 || 0.77 | 23.53 | 0.52 | 24.50 | 0.92 | 6.66
RHCL | E || 1.62 | 1.28 | 0.61 || 1.07 | 34.08 | 0.83 | 34.78 | 0.83 | 35.51 || 0.94 | 41.63 | 0.74 | 41.91 | 0.87 | 41.40
SEDR | E || 0.68 | 0.53 | 0.66 || 0.58 | 15.00 | 0.45 | 15.13 | 0.76 | 14.23 || 0.50 | 27.07 | 0.39 | 26.76 | 0.82 | 24.00
SMEL | E || 0.57 | 0.44 | 0.40 || 0.40 | 30.80 | 0.30 | 31.08 | 0.71 | 77.69 || 0.34 | 40.11 | 0.26 | 39.98 | 0.79 | 95.60
ALBH | N || 0.73 | 0.57 | 0.62 || 0.55 | 24.53 | 0.43 | 24.23 | 0.78 | 26.18 || 0.49 | 32.77 | 0.38 | 32.53 | 0.83 | 33.33
BURN | N || 1.39 | 1.11 | 0.55 || 1.07 | 22.74 | 0.85 | 23.37 | 0.73 | 32.59 || 0.95 | 31.65 | 0.76 | 32.13 | 0.79 | 43.08
CEDA | N || 1.38 | 1.10 | 0.46 || 1.05 | 23.54 | 0.83 | 24.05 | 0.68 | 48.72 || 0.90 | 34.50 | 0.72 | 34.33 | 0.77 | 66.97
FOOT | N || 0.59 | 0.43 | 0.48 || 0.39 | 33.45 | 0.29 | 31.81 | 0.77 | 59.65 || 0.34 | 41.35 | 0.26 | 39.95 | 0.82 | 70.25
GOBS | N || 0.86 | 0.63 | 0.78 || 0.63 | 26.95 | 0.46 | 26.60 | 0.88 | 13.33 || 0.56 | 34.86 | 0.41 | 34.10 | 0.91 | 16.46
RHCL | N || 3.14 | 2.54 | 0.46 || 1.71 | 45.59 | 1.31 | 48.53 | 0.84 | 81.39 || 1.58 | 49.55 | 1.21 | 52.28 | 0.86 | 86.19
SEDR | N || 0.85 | 0.63 | 0.44 || 0.66 | 22.23 | 0.50 | 21.79 | 0.66 | 50.49 || 0.56 | 34.15 | 0.42 | 33.10 | 0.76 | 72.34
SMEL | N || 0.55 | 0.42 | 0.45 || 0.47 | 15.62 | 0.35 | 16.54 | 0.61 | 35.42 || 0.41 | 26.53 | 0.30 | 26.91 | 0.70 | 56.60
ALBH | U || 3.38 | 2.60 | 0.58 || 2.89 | 14.57 | 2.25 | 13.77 | 0.69 | 19.40 || 2.51 | 25.74 | 1.96 | – | 0.77 | 32.21
BURN | U || 2.30 | 1.78 | 0.53 || 1.94 | 15.78 | 1.49 | 16.29 | 0.66 | 26.08 || 1.66 | 27.82 | 1.29 | – | 0.75 | 42.98
CEDA | U || 2.65 | 2.03 | 0.51 || 2.27 | 14.48 | 1.73 | 15.08 | 0.64 | 25.63 || 1.96 | 26.09 | 1.49 | – | 0.73 | 43.28
FOOT | U || 2.39 | 1.83 | 0.31 || 1.87 | 21.89 | 1.43 | 22.23 | 0.58 | 88.11 || 1.60 | 32.94 | 1.23 | – | 0.69 | 124.3
GOBS | U || 2.92 | 2.22 | 0.62 || 2.28 | 22.17 | 1.72 | 22.48 | 0.77 | 24.56 || 1.99 | 32.04 | 1.53 | – | 0.82 | 33.52
RHCL | U || 2.45 | 1.90 | 0.31 || 2.10 | 14.50 | 1.63 | 14.04 | 0.49 | 60.46 || 1.87 | 23.68 | 1.46 | – | 0.60 | 93.85
SEDR | U || 3.33 | 2.62 | 0.65 || 2.37 | 28.68 | 1.87 | 28.79 | 0.82 | 26.63 || 1.96 | 41.19 | 1.54 | – | 0.88 | 35.44
SMEL | U || 2.36 | 1.87 | 0.32 || 1.84 | 22.38 | 1.43 | 23.12 | 0.59 | 85.49 || 1.58 | 33.17 | 1.24 | – | 0.70 | 118.9
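The I/% columns in Table 3 are consistent with relative improvement over the LSTM baseline (e.g., ALBH E RMSE: (0.89 − 0.67)/0.89 ≈ 24.7%, against 24.56 reported, the small difference presumably arising from unrounded metrics). A sketch of the three metrics and the improvement formula, with function names of my own choosing:

```python
import numpy as np

def rmse(y, p):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(p)) ** 2)))

def mae(y, p):
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(p))))

def r2(y, p):
    y, p = np.asarray(y, dtype=float), np.asarray(p, dtype=float)
    ss_res = np.sum((y - p) ** 2)           # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    return float(1.0 - ss_res / ss_tot)

def improvement(baseline, improved, larger_is_better=False):
    """Relative improvement in percent over the baseline value, as the
    I/% columns of Table 3 appear to be defined (error metrics shrink,
    R2 grows)."""
    if larger_is_better:
        return (improved - baseline) / baseline * 100.0
    return (baseline - improved) / baseline * 100.0

# e.g. ALBH E: LSTM RMSE 0.89 mm vs DVMD-LSTM 0.67 mm
print(round(improvement(0.89, 0.67), 2))  # 24.72 (table: 24.56, likely
                                          # computed from unrounded metrics)
```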
Table 4. The optimal noise model of each station under different models in the three directions of E, N, and U.

Site | Dir | True | LSTM | VMD-LSTM | DVMD-LSTM
ALBH | E | RW + FN + WN | PL + WN | RW + FN + WN | RW + FN + WN
BURN | E | RW + FN + WN | PL + WN | PL + WN | RW + FN + WN
CEDA | E | RW + FN + WN | PL + WN | PL + WN | RW + FN + WN
FOOT | E | PL + WN | GGM + WN | FN + WN | PL + WN
GOBS | E | RW + FN + WN | PL + WN | RW + FN + WN | RW + FN + WN
RHCL | E | RW + FN + WN | GGM + WN | PL + WN | RW + FN + WN
SEDR | E | RW + FN + WN | PL + WN | PL + WN | RW + FN + WN
SMEL | E | FN + WN | PL + WN | FN + WN | FN + WN
ALBH | N | RW + FN + WN | PL + WN | RW + FN + WN | RW + FN + WN
BURN | N | FN + WN | PL + WN | PL + WN | PL + WN
CEDA | N | RW + FN + WN | PL + WN | PL + WN | RW + FN + WN
FOOT | N | FN + WN | GGM + WN | FN + WN | FN + WN
GOBS | N | RW + FN + WN | PL + WN | RW + FN + WN | RW + FN + WN
RHCL | N | RW + FN + WN | RW + FN + WN | PL + WN | PL + WN
SEDR | N | FN + WN | GGM + WN | RW + FN + WN | FN + WN
SMEL | N | FN + WN | PL + WN | FN + WN | FN + WN
ALBH | U | PL + WN | PL + WN | RW + FN + WN | FN + WN
BURN | U | PL + WN | GGM + WN | PL + WN | PL + WN
CEDA | U | PL + WN | PL + WN | RW + FN + WN | PL + WN
FOOT | U | PL + WN | PL + WN | FN + WN | FN + WN
GOBS | U | PL + WN | GGM + WN | PL + WN | FN + WN
RHCL | U | FN + WN | PL + WN | RW + FN + WN | FN + WN
SEDR | U | PL + WN | PL + WN | PL + WN | PL + WN
SMEL | U | PL + WN | PL + WN | FN + WN | PL + WN
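The abbreviations in Table 4 are standard GNSS stochastic models: WN (white noise), FN (flicker noise), RW (random walk), PL (power law), and GGM (generalized Gauss–Markov). The selection criterion is not shown in this excerpt; in GNSS time-series analysis the optimal noise model is commonly chosen by ranking candidate models with an information criterion such as the BIC. The sketch below is hypothetical, with made-up log-likelihood values, and only illustrates the ranking step:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# hypothetical maximized log-likelihoods for one coordinate series
# of ~8000 daily epochs (values invented for illustration)
candidates = {
    "WN":        bic(-11000.0, 1, 8000),
    "FN+WN":     bic(-10650.0, 2, 8000),
    "RW+FN+WN":  bic(-10640.0, 3, 8000),
    "PL+WN":     bic(-10660.0, 3, 8000),
}
best = min(candidates, key=candidates.get)
print(best)  # RW+FN+WN
```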
Table 5. Velocity values obtained by each station under the optimal noise model (trend, mm/year).

Site | Dir | True | LSTM | VMD-LSTM | DVMD-LSTM
ALBH | E | −0.041 | 0.020 | 0.055 | −0.044
BURN | E | −0.108 | −0.005 | −0.051 | −0.116
CEDA | E | −0.726 | −0.528 | −0.693 | −0.736
FOOT | E | 0.02 | 0.015 | 0.001 | 0.009
GOBS | E | 0.659 | 0.656 | 0.672 | 0.682
RHCL | E | 0.811 | 0.666 | 0.805 | 0.783
SEDR | E | 0.354 | 0.341 | 0.378 | 0.313
SMEL | E | 0.026 | 0.009 | 0.023 | 0.021
ALBH | N | 0.327 | 0.245 | 0.276 | 0.295
BURN | N | 0.124 | 0.080 | 0.116 | 0.130
CEDA | N | −0.065 | −0.041 | −0.227 | −0.042
FOOT | N | 0.009 | 0.029 | −0.036 | 0.005
GOBS | N | 0.063 | 0.078 | 0.029 | −0.020
RHCL | N | 1.253 | 0.743 | 1.132 | 1.071
SEDR | N | 0.199 | 0.170 | 0.212 | 0.195
SMEL | N | 0.020 | −0.001 | −0.025 | 0.017
ALBH | U | 0.383 | 0.204 | 0.131 | 0.268
BURN | U | 0.241 | 0.144 | 0.238 | 0.216
CEDA | U | 0.016 | 0.159 | 0.074 | 0.137
FOOT | U | 0.194 | 0.125 | 0.194 | 0.202
GOBS | U | 0.301 | 0.278 | 0.283 | 0.262
RHCL | U | 0.298 | 0.206 | 0.367 | 0.264
SEDR | U | 0.017 | 0.022 | 0.082 | 0.04
SMEL | U | 0.195 | 0.182 | 0.206 | 0.183
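The velocities in Table 5 come from fitting a trajectory model to each series under its optimal noise model. Handling the colored-noise covariance is beyond a short sketch, so the following assumes white noise and ordinary least squares on an offset + trend + annual + semiannual model; real analyses (e.g., maximum-likelihood estimation under the Table 4 noise models) generalize this fit and mainly change the velocity uncertainty.

```python
import numpy as np

def fit_velocity(t_years, y_mm):
    """OLS fit of offset + trend + annual + semiannual terms; returns
    the trend in mm/year. GNSS practice weights this fit by the chosen
    noise model's covariance; this sketch assumes white noise."""
    t = np.asarray(t_years, dtype=float)
    A = np.column_stack([
        np.ones_like(t), t,
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),   # annual
        np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),   # semiannual
    ])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y_mm, dtype=float), rcond=None)
    return float(coef[1])

t = np.arange(0, 22, 1 / 365.25)                  # ~2000-2022, daily, in years
y = 0.5 + 0.3 * t + 2.0 * np.sin(2 * np.pi * t)   # synthetic: 0.3 mm/yr trend
print(round(fit_velocity(t, y), 3))  # 0.3
```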
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, H.; Lu, T.; Huang, J.; He, X.; Yu, K.; Sun, X.; Ma, X.; Huang, Z. An Improved VMD-LSTM Model for Time-Varying GNSS Time Series Prediction with Temporally Correlated Noise. Remote Sens. 2023, 15, 3694. https://doi.org/10.3390/rs15143694
