To verify the effectiveness of the improved LLL algorithm proposed in this paper for ambiguity resolution, simulation experiments and measured data are used to compare HLLL, HSLLL, PLLL, PSLLL, and PLLLR, and to evaluate the advantages and disadvantages of each algorithm in terms of the orthogonality of the reduced basis and the quality of its size reduction. In the ambiguity resolution, the search step adopts the SE-VB strategy that is widely used at present [10]. The experimental environment is a personal computer (Intel Core i7-9700 CPU, 2.80 GHz, 16.0 GB RAM, 64-bit Windows 10 operating system), and the software is MATLAB R2017a.
3.1. Indicators for Evaluating the Quality of the Reduced Basis
In measuring the performance of lattice basis reduction, the orthogonality defect (OD) is usually used to reflect the orthogonality of the basis vectors, but it has an obvious disadvantage: only the OD value is obtained, from which the extent of the orthogonality of the reduced basis cannot be judged intuitively [37,38,39]. Therefore, in this paper, the minimum angle $\theta_{\min}$ between the reduced basis vectors is used instead of the orthogonality defect to measure the extent of the orthogonality of the reduced basis. Its expression is given as:

$$\theta_{\min} = \min_{1 \le i < j \le n} \arccos \frac{\left| \langle \mathbf{b}_i, \mathbf{b}_j \rangle \right|}{\| \mathbf{b}_i \| \, \| \mathbf{b}_j \|}$$
where $\mathbf{b}_i$ and $\mathbf{b}_j$ ($i \ne j$) are vectors of the reduced basis, $n$ is the lattice dimension, and $\langle \cdot , \cdot \rangle$ denotes the inner product; since the Gram matrix of the basis is the variance-covariance matrix $Q_{\hat{a}}$, the inner product $\langle \mathbf{b}_i, \mathbf{b}_j \rangle$ equals the element $q_{ij}$ of $Q_{\hat{a}}$.

By definition, it follows that $0^\circ < \theta_{\min} \le 90^\circ$. If $\theta_{\min} = 90^\circ$, all basis vectors are orthogonal to each other. As an alternative indicator of the extent of orthogonality, $\theta_{\min}$ can be used to roughly and intuitively judge the orthogonality of the reduced basis. Moreover, the calculation of $\theta_{\min}$, like that of OD, is based only on the elements of the variance-covariance matrix $Q_{\hat{a}}$, so it does not increase the computational complexity.
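For reference, the following minimal MATLAB sketch computes this indicator directly from the elements of $Q_{\hat{a}}$, treating $Q$ as the Gram matrix of the basis; the function name is ours, not the paper's implementation:

```matlab
% Minimum inter-vector angle (degrees) of a basis whose Gram matrix is Q,
% i.e., Q(i,j) = <b_i, b_j>; only elements of Q are accessed.
function theta_min = min_basis_angle(Q)
    n = size(Q, 1);
    theta_min = 90;                                   % theoretical upper bound
    for i = 1:n-1
        for j = i+1:n
            c = abs(Q(i,j)) / sqrt(Q(i,i) * Q(j,j));  % |cos(theta_ij)|
            theta_min = min(theta_min, acosd(min(c, 1)));
        end
    end
end
```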
The purpose of lattice basis reduction is to make the reduced basis as orthogonal as possible and to make the first basis vector as short as possible after the basis vector exchanges. Based on this property, the Hermite factor from lattice theory is introduced as another indicator for evaluating the performance of the reduction [40,41], which is defined as:

$$\gamma = \frac{\| \mathbf{b}_1 \|}{\left( \operatorname{vol} L \right)^{1/n}}$$

where $\mathbf{b}_1$ denotes the first basis vector of the lattice basis $B$, $n$ is the lattice dimension, and $\operatorname{vol} L = |\det B|$ is the volume of the lattice. Obviously, $\operatorname{vol} L$ is invariant under reduction and is therefore a fixed value, so the size of the Hermite factor depends on the length of $\mathbf{b}_1$. The smaller the value of $\gamma$, the shorter the first basis vector after the lattice basis reduction, the more thorough the basis vector exchanges, and the better the quality of the reduction, and vice versa.
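A corresponding MATLAB sketch, assuming the basis vectors are the columns of a matrix B (a hypothetical helper for illustration):

```matlab
% Hermite factor of a basis B (columns are basis vectors). The lattice
% volume |det(B)| is invariant under reduction, so gamma tracks ||b1|| only.
function gamma = hermite_factor(B)
    n   = size(B, 2);
    vol = sqrt(abs(det(B' * B)));       % |det(B)|, valid also for tall B
    gamma = norm(B(:, 1)) / vol^(1/n);
end
```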
3.2. Simulation Experiment
The random simulation method of the literature [8] is used to construct ambiguity float solutions $\hat{a}$ and variance-covariance matrices $Q_{\hat{a}}$ of dimensions 5–40. For each dimension, 100 groups of data are constructed and processed by the HLLL, HSLLL, PLLL, PSLLL, and PLLLR algorithms for lattice basis reduction, respectively, and the average number of basis vector swaps, the average reduction time consumption, and the average number of ambiguity candidate points over the 100 groups are computed. The specific constructions are as follows, with an illustrative sketch given after the two schemes:
Scheme 1: $R$ is a unit upper triangular matrix whose strictly upper triangular elements follow the standard normal distribution; $Q_{\hat{a}} = R^{T} R$.
Scheme 2: $U$ is a random orthogonal matrix, obtained by the QR decomposition of a random matrix generated by randn; $Q_{\hat{a}} = U \Lambda U^{T}$, $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$, with randomly generated eigenvalues $\lambda_i > 0$.
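The MATLAB sketch below illustrates one plausible reading of the two schemes; the exact distributions used in [8] (in particular, the eigenvalue draw in Scheme 2 and the float-solution simulation) are assumptions here:

```matlab
n = 20;                              % example ambiguity dimension (5-40)

% Scheme 1: unit upper triangular R with N(0,1) strictly upper entries
R  = eye(n) + triu(randn(n), 1);
Q1 = R' * R;                         % assumed construction of Q

% Scheme 2: random orthogonal U from the QR decomposition of randn(n)
[U, ~]  = qr(randn(n));
lambda  = rand(n, 1);                % assumed positive eigenvalue draw
Q2      = U * diag(lambda) * U';

a_float = 100 * randn(n, 1);         % assumed float-solution simulation
```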
Figure 2 shows the trend of the number of basis vector swaps for the five algorithms under the different schemes and dimensions. As seen in Figure 2, the number of basis vector swaps of the five algorithms is positively correlated with the dimension; overall, PSLLL requires the fewest swaps, and PLLL and PLLLR require the same number of swaps.
By analyzing the results in Figure 2, it can be seen that PLLLR is equivalent to PLLL in terms of the number of basis vector swaps, because PLLLR only adds an extra size reduction, which has no effect on the ordering of the basis vectors; this agrees with the theory. HSLLL and PSLLL simplify the LLL reduction process by relaxing the swap condition of the basis vectors, which reduces the number of basis vector swaps.
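The extra step added by PLLLR is an ordinary size reduction, which only translates basis vectors by integer multiples of earlier ones and therefore cannot reorder them. A generic sketch follows, operating on an explicit basis B and its Gram-Schmidt coefficient matrix mu with unit diagonal (the paper's implementation works on the covariance factorization instead, so this is illustrative only):

```matlab
% Size-reduce column k of B against all earlier columns. Only integer
% translations are applied, so the ordering of the basis is untouched.
function [B, mu] = size_reduce(B, mu, k)
    for j = k-1:-1:1
        q = round(mu(k, j));                           % nearest-integer multiplier
        if q ~= 0
            B(:, k)    = B(:, k)    - q * B(:, j);
            mu(k, 1:j) = mu(k, 1:j) - q * mu(j, 1:j);  % uses mu(j,j) = 1
        end
    end
end
```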
Figure 3 shows the reduction time consumption of the five algorithms under the different schemes and dimensions. It can be seen intuitively from Figure 3 that, as the dimension increases, the overall trend of the reduction time consumption is upward, and that PSLLL has the smallest reduction time. From Figure 3a, it can be observed that the reduction time of PLLLR is lower than that of HLLL, except for the 16th dimension, and that the reduction time of PSLLL is lower than that of HSLLL, except for the 6th and 11th dimensions. A similar conclusion can be drawn from Figure 3b. The likely explanation for these exceptions is that the reduction times at low dimensions are very small, so run-to-run timing variability in MATLAB dominates the measurements.
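Because single-run tic/toc measurements of such short reductions are easily swamped by system jitter, a common remedy is to repeat each run and report the median. A sketch, with reduce_plll as a hypothetical stand-in for any of the five reduction routines:

```matlab
n = 8;  A = randn(n);  Q = A' * A + eye(n);   % example SPD test matrix
reps = 50;  t = zeros(reps, 1);
for r = 1:reps
    tic;
    reduce_plll(Q);                            % hypothetical reduction call
    t(r) = toc;
end
fprintf('median reduction time: %.6f s\n', median(t));
```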
Figure 4 shows the number of search candidate points of the five algorithms under the different schemes and dimensions. As can be seen from Figure 4, the number of search candidate points follows the same overall trend as the dimension, that is, the number of candidate points increases as the dimension grows. PLLLR and PLLL have the same number of search candidate points, whereas in most dimensions HSLLL and PSLLL produce more ambiguity candidate points than HLLL and PLLL, which indicates that they may be more time consuming in the ambiguity search process.
In analyzing the results of Figure 4, since a pure size reduction does not change the candidate integer vectors of the ambiguity search, the number of search candidate points of PLLL is identical to that of the PLLLR algorithm, which is consistent with the theory. HSLLL and PSLLL adopt basis vector exchange conditions different from those of the regular LLL algorithms, so fewer column exchanges are performed during the basis vector exchange, which speeds up the lattice basis reduction. The final basis vector lengths they obtain therefore differ from those of HLLL and PLLL (whose basis vectors are obtained by exchanges in a fixed order), and as a result HSLLL and PSLLL yield numbers of search candidate vectors that differ from those of HLLL and PLLL across dimensions.
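To make the difference concrete, the standard Lovász swap test can be contrasted with a Siegel-style relaxation that replaces the $\mu^2$ term by its worst-case bound of 1/4; whether this is exactly the relaxation adopted by HSLLL/PSLLL is an assumption, but any such relaxation triggers no more swaps than the Lovász test:

```matlab
% g_prev = ||b*_{k-1}||^2, g_curr = ||b*_k||^2, mu = mu(k,k-1), delta in (1/4,1].
delta = 0.75;  mu = 0.4;  g_prev = 2.0;  g_curr = 1.0;  % example values

swap_lovasz = g_curr < (delta - mu^2) * g_prev;  % regular LLL test -> true here
swap_siegel = g_curr < (delta - 0.25) * g_prev;  % relaxed test     -> false here
% After size reduction |mu| <= 1/2, so delta - 0.25 <= delta - mu^2 and the
% relaxed test swaps less often, consistent with the fewer swaps of PSLLL.
```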
The minimum angles $\theta_{\min}$ between the basis vectors and the Hermite factors $\gamma$ after reducing Schemes 1 and 2 with the HLLL, HSLLL, PLLL, PSLLL, and PLLLR algorithms are listed in Table 1 and Table 2, respectively. As can be seen from the average minimum angles $\theta_{\min}$ of the five algorithms in Table 1, all five algorithms achieve a generally good reduction effect on Scheme 1. In terms of the extent of the orthogonality of the basis vectors, HSLLL performs best, followed by PLLLR, PLLL, and PSLLL, with HLLL the worst. Similar conclusions to Table 1 can be drawn from Table 2, with slight differences in the ranking of the reduction performance: PLLLR is the best, followed by PSLLL, HSLLL, and PLLL, with HLLL again the worst. The reason for this difference is that HSLLL and PSLLL are less stable than PLLLR. The minimum value of the PSLLL algorithm in Table 1 is 41.3540°, a large fluctuation that implies poor orthogonality in the worst case, whereas the minimum values of both HSLLL and PLLL are greater than 45° and their reduction performance is more stable. The same holds for HSLLL in Table 2 and is not repeated here. Combining Table 1 and Table 2, PLLLR is superior when stability and the extent of orthogonality are considered together.
From the Hermite factors $\gamma$ of the five algorithms in Table 1 and Table 2, it can be observed that the Hermite factors of PLLLR and PLLL are basically the same, with a negligible relative error of 0.0072%. This indicates that the relative performance of these two algorithms is difficult to evaluate from the Hermite factor alone. There is little difference in reduction performance between HSLLL and PSLLL, and both outperform the other three algorithms; HSLLL slightly outperforms PSLLL in Scheme 1, while the opposite holds in Scheme 2, which may be related to the type and randomness of the reduced basis. HLLL shows the worst reduction performance.
3.3. Measured Experiment 1
To further validate the effectiveness of the algorithms and the reduction effect, GPS dual-frequency observation data of 2778 epochs from the US CORS stations LWES and DSTR on 15 March 2023 (DOY-074) are used; the baseline length is 7.79 km and the sampling interval is 30 s. The ambiguity dilution of precision (ADOP) is usually used to evaluate the accuracy of the ambiguity resolution [42].
Figure 5 shows the variation of the ambiguity dimension and the ADOP over DOY-074. It can be observed from Figure 5 that the ambiguity dimension of DOY-074 ranges from 12 to 22 and that the ADOP values are all less than 0.1. The dimension over the first 200 epochs is about 20, with ADOP values all below 0.06. Therefore, the data of the first 200 epochs are selected to verify the effectiveness and the reduction performance of the improved algorithm.
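ADOP is the determinant-based scalar precision measure $\mathrm{ADOP} = |Q_{\hat{a}}|^{1/(2n)}$ (in cycles). A numerically safe MATLAB sketch via the Cholesky factor (helper name is ours):

```matlab
% ADOP in cycles: det(Q)^(1/(2n)) computed through the Cholesky factor
% to avoid overflow/underflow of det(Q) at higher dimensions.
function adop = compute_adop(Q)
    R = chol(Q);                      % Q = R'*R, so det(Q) = prod(diag(R))^2
    adop = exp(mean(log(diag(R))));   % = prod(diag(R))^(1/n) = det(Q)^(1/(2n))
end
```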
Figure 6 shows the cumulative distribution functions of the number of basis vector swaps and reduction time consumption for the first 200 epochs of the five algorithms. It can be seen from
Figure 6a that PLLL and PLLLR have the same number of basis vector swaps, and PSLLL has the smallest number of basis vector swaps, followed by HSLLL. This is consistent with the conclusion of the simulation experiments in
Section 3.2. From
Figure 6b, it can be observed that the reduction time consumption of PSLLL and HSLLL is significantly less than that of the other three algorithms, and that PLLLR consumes slightly more reduction time than PLLL due to the extra size reduction, which is in line with the theory in Section 2.3. In descending order of reduction efficiency, the five algorithms rank as PSLLL, HSLLL, PLLL, PLLLR, and HLLL.
Figure 7 shows the variation in the number of ambiguity candidate points of the five reduction algorithms over the first 200 epochs. The number of ambiguity candidate points is exactly the same for PLLL and PLLLR, which is consistent with the conclusion of the simulation experiments and is not repeated here. HLLL, HSLLL, and PSLLL have different numbers of ambiguity candidate points; the differences between HLLL and HSLLL are clearly visible in the figure, while the overall trend of PSLLL follows that of PLLL and PLLLR.
Table 3 shows the statistics of the minimum basis vector angles (deg) and the Hermite factors of the five algorithms over the first 200 epochs. As seen in Table 3, the average minimum basis vector angle of all five algorithms is greater than 45°, and PLLLR has the best reduction performance. From the Hermite factors $\gamma$ of the five algorithms, it can be observed that the order of the reduction performance is consistent with Scheme 2 of the simulation experiments: PLLLR and PLLL have the same Hermite factor, and PSLLL outperforms the other methods.
Table 4 lists the solution time consumption (reduction time, search time, and total time) of the five algorithms over the first 200 epochs. In descending order of overall efficiency, the algorithms rank as PSLLL, HSLLL, PLLLR, PLLL, and HLLL, so PSLLL has the highest overall efficiency. Owing to its further size reduction, the PLLLR algorithm has the highest search efficiency and the best stability, which is favorable for improving the efficiency of the ambiguity search.
3.4. Measured Experiment 2
To further verify the reduction performance of the algorithms with multiple GNSS systems and higher dimensions, GPS/BDS data measured on a simulated railroad track at Southwest Jiaotong University on 16 August 2023 (DOY-228) are selected, comprising 1210 epochs with a baseline length of 9.80 m and a sampling interval of 1 s.
Figure 8 shows the trend of the ambiguity dimension and the ADOP over the 1210 epochs. From the figure, it can be seen that the ambiguity dimension is greater than 36 and the ADOP values are less than 0.07; the accuracy of the ambiguity float solution is therefore good.
Figure 9 shows the cumulative distribution functions of the number of basis vector swaps and the reduction time consumption of the five algorithms. PLLL and PLLLR again have the same number of basis vector swaps. With the ambiguity dimension close to 40, the reduction times of PSLLL and HSLLL vary little and are significantly smaller than those of the other three algorithms. Compared with PLLL, the extra size reduction of PLLLR does not noticeably increase the reduction time, because the extra size reduction has low complexity and its cost is essentially negligible. The trends in the number of ambiguity candidate points of the five algorithms show no differences and are not shown here.
Table 5 shows the minimum basis vector angle (deg) and Hermite factor of the five algorithms. It can be seen that the average minimum angle $\theta_{\min}$ of the five algorithms is greater than 45°, and all of them achieve good reduction effects. The minimum value of $\theta_{\min}$ for the PLLLR algorithm is also greater than 45°, which indicates that PLLLR is the most robust in avoiding reduced bases of poor orthogonality. The Hermite factors $\gamma$ of PLLLR and PLLL are basically the same, with a negligible relative error; the superiority of one over the other cannot be judged from the Hermite factor $\gamma$ alone.
Table 6 shows the solution time consumption of the five algorithms (reduction time, search time, and total time). The conclusions drawn from it are consistent with those of Table 4 in Measured Experiment 1 and are not repeated here.
Figure 10 illustrates the cumulative distribution functions of the total time consumption for the two measured experiments, from which it can be seen that the HSLLL, PSLLL, and PLLLR algorithms outperform HLLL and PLLL. The difference is that the relative performance of HSLLL and PLLLR cannot be ascertained from Measured Experiment 1, whereas Measured Experiment 2 clearly shows that HSLLL is more efficient than PLLLR. This difference is likely related to the ambiguity dimension and to run-to-run timing variability in MATLAB.
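For completeness, curves such as those in Figure 10 can be reproduced from the per-epoch total times with a few lines of MATLAB (the placeholder data and variable names below are ours):

```matlab
t  = abs(0.010 + 0.003 * randn(200, 1)); % placeholder per-epoch total times (s)
ts = sort(t);                            % ascending times
p  = (1:numel(ts))' / numel(ts);         % empirical CDF levels
stairs(ts, p);
xlabel('Total time consumption (s)'); ylabel('Empirical CDF');
```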
To illustrate the performance differences among HLLL, HSLLL, PLLL, PSLLL, and PLLLR more clearly, we compare the speed, stability, and computational complexity of the five algorithms; the results are shown in Table 7.