1. Introduction
Localization technology is subdivided into outdoor and indoor localization according to the application scenario. Global Positioning System (GPS)-based outdoor positioning services have matured and are widely used. However, GPS cannot achieve accurate indoor positioning due to severe occlusion. Moreover, indoor localization inevitably introduces errors because of complex environmental structures, uncertain conditions, and numerous obstacles [1].
In order to cope with the state estimation of robots, localization based on probabilistic algorithms is the only effective solution currently known [2]. As the core idea of probabilistic localization, the Bayesian filtering algorithm plays an important role. In the early days, the best-known implementation of Bayesian filtering was the Kalman Filter (KF), which achieves efficient state estimation for linear Gaussian systems but has difficulty describing non-linear systems. Therefore, the extended Kalman Filter (EKF) and the unscented Kalman Filter (UKF) were proposed to solve state estimation in nonlinear systems. In general, EKF and UKF perform well except in systems with highly non-Gaussian distributions. On this basis, the particle filter (PF) is applied as a non-parametric filter [2], whose typical implementation is AMCL [3], which performs well in localization efficiency, stability, and accuracy, but poorly in global localization in scenarios with few features.
Indoor localization technology can be subdivided into single-sensor localization and multi-sensor localization, where the sensors include ultrasonic [4], infrared [5,6], vision [7], lidar, radio frequency identification (RFID) [8,9], Bluetooth [10], Wi-Fi [11], and so on. In addition, ultra-wideband (UWB) [12,13] has also become a research hot-spot in recent years. Due to the limitations of a single sensor, multi-sensor combined localization is generally used in practical applications.
Odometry is a very widely used sensor in wheeled robot localization [11,14]. It has the characteristics of easy data processing, controllable accuracy, and high universality. However, because of accumulated errors, its localization accuracy gradually decreases during long-term operation. Thanks to its high ranging accuracy, low sensitivity to light, and easy installation, lidar is popular in various autonomous robots [15,16,17]. However, its effective measurement range is limited, and matching-based methods suffer from high cost and low efficiency in global localization. UWB-related technology has made remarkable progress since it was approved for civilian applications; it offers wide range and no accumulated error, but drifts to some extent during localization. At present, the accuracy of the Sapphire system of Multi-spectral Solutions is under 10 cm. Salman et al. implemented UWB localization on a mobile robot, CoLORbot, for localization in unknown indoor scenarios [13].
Therefore, a single sensor acquires information with certain disadvantages that make accurate localization difficult, which is why different kinds of sensors are usually combined for localization [18]. White introduced the general model of data fusion in 1998 [19], and Hall et al. introduced the algorithms and strategies of data fusion in detail [20]. At present, the methods employed for multi-sensor fusion localization generally include Bayesian methods [3,11] and neural network methods [21,22]. There are numerous data fusion methods based on multi-Bayesian estimation. The Kalman Filter (KF), a kind of Gaussian filter, is a recursive filter for linear systems; for non-linear systems, there are two variants, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). In general, KF can complete data fusion well, but when system models are hard to obtain, real-time performance and reliability can be low. Numerous non-parametric filters are based on Monte Carlo Localization proposed by Fox et al., a non-parametric filter method based on Bayesian estimation [23]. Valerio Magnago et al. combined odometry and UWB information with a UKF [14]. Peng Gang et al. added an additional Gauss-Newton-based scan-match step on the basis of AMCL, improving the localization accuracy in complex and unstructured environments [24]. In [25], the sensor node's movement dynamics, measurements of its velocity, and the received signal strength (RSS) of the radio signal from above-ground relay nodes are used to achieve localization, with corresponding KF-based algorithms for different scenarios. The idea that one supervisor works as a planner and the other supervisor improves the result supports the idea of this article [26]. With the development of machine learning, neural networks have attracted more attention as a new data fusion method. J. Wang et al. used a BP neural network to estimate the GPS sampling time and performed subsequent data fusion [27].
In this paper, we focus on achieving robust robot kidnap detection and recovery based on PF localization, which requires an accurate global proposal distribution provided by the ranging-based localization in UAPF. In this case, adaptive estimation of the probability of robot kidnap becomes feasible with KNP, a criterion proposed to measure the probability of robot kidnap. To reduce false identification of robot kidnap, kidnap recovery is triggered only if the uncertainty of the particle swarm is high enough, which increases the reliability of robot kidnap detection. Besides, for more accurate ranging-based localization, double-sided two-way ranging (DSTW) is used, in which a Jacobian matrix is used to obtain the position error [2]. UAPF makes up for the deficiencies of PF in global localization and robot kidnap recovery, and achieves accurate localization in open scenarios with few features. The contributions of this work are as follows:
- An improved PF-based localization algorithm is proposed, which achieves robust kidnap detection and pose error compensation.
- A novel criterion named KNP is proposed to indicate the probability of robot kidnap, based on the inconsistency between two pose distributions.
- An adaptive covariance matrix, provided by the improved proposal distribution with UWB information, improves the reliability of UAPF.
The rest of this paper is outlined as follows. In Section 2, we introduce the theoretical basis and a detailed system overview. In Section 3, pre-experiments are conducted to decrease the ranging error and improve the localization accuracy of UWB, after which experiments are presented to illustrate the improvements of the proposed method. Finally, we highlight some conclusions.
2. Materials and Methods
UAPF is an improved PF-based localization method with adaptive robot kidnap detection and efficient kidnap recovery. The method mainly consists of PF-based localization [23], ranging-based localization, and adaptive robot kidnap detection. In PF-based localization, 2D laser scans are utilized to weight particles sampled around the odometry pose. Adaptive robot kidnap detection focuses on measuring how similar the two poses are, and outputs an error transform matrix and KNP, the criterion used to judge whether robot kidnap has occurred. Besides, a supplementary adaptive update of the particle uncertainty is conducted in this part, decreasing the error caused by a fixed lidar measurement model. The framework of UAPF is shown in Figure 1 and Algorithm 1.
Algorithm 1: UAPF(x̂_{t-1}, d_t, u_t, z_t).
1: x_t^pf, Σ_t^pf = Improved PF-based Localization(x̂_{t-1}, u_t, z_t, Σ_{t-1})
2: x_t^uwb, Σ_t^uwb = Ranging-based Localization(d_t)
3: Get KNP_t according to the distributions of x_t^pf and x_t^uwb
4: if KNP_t ≥ σ then
5:     x̂_t = Pose Fusion(x_t^pf, x_t^uwb, Σ_t^uwb)
6: else
7:     if the uncertainty of the particle swarm is high enough then
8:         Re-localization: x̂_t = x_t^uwb
9:     else
10:        x̂_t = Pose Fusion(x_t^pf, x_t^uwb, Σ_t^uwb)
11:    end if
12: end if
13: Update Σ_t according to KNP_t and the pose difference
14: return x̂_t, Σ_t
Where x̂_{t-1} is the fused pose at time t-1, d_t is the distance between the robot and the anchors measured by ranging-based localization, u_t is the movement of the robot, and z_t is the environment observation. x_t^pf and x_t^uwb are the poses from PF-based localization and ranging-based localization, respectively. Similarly, Σ_t^pf and Σ_t^uwb are the covariances describing the degree of pose dispersion. The constant σ determines whether to trigger the re-localization process; according to our empirical data, 0.67 is a good choice. N(μ, Σ) denotes the two-dimensional Gaussian distribution with mean μ and covariance Σ. x̂_t is the fused pose, used to describe the accuracy of UAPF, and Σ_t is the corrected covariance passed to the next cycle.
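To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch in which the sub-routines (PF-based localization, ranging-based localization, KNP, and pose fusion) are injected as callables; the particle-uncertainty gate `cov_gate` and keeping the PF yaw during re-localization are our assumptions, not values from the paper.

```python
import numpy as np

def uapf_step(pose_prev, cov_prev, ranges, u, scan,
              pf_localize, uwb_localize, fuse, knp, sigma=0.67, cov_gate=0.5):
    """One UAPF cycle following Algorithm 1; sub-routines are injected as callables."""
    pose_pf, cov_pf = pf_localize(pose_prev, cov_prev, u, scan)      # line 1
    pose_uwb, cov_uwb = uwb_localize(ranges)                         # line 2
    k = knp(pose_pf, cov_pf, pose_uwb, cov_uwb)                      # line 3
    if k >= sigma:                                                   # line 4
        pose = fuse(pose_pf, cov_pf, pose_uwb, cov_uwb)              # line 5
    elif np.trace(np.asarray(cov_pf)[:2, :2]) > cov_gate:            # line 7
        # Low KNP and a dispersed particle swarm: re-localize on the UWB position.
        pose = np.append(np.asarray(pose_uwb)[:2], pose_pf[2])       # line 8
    else:
        pose = fuse(pose_pf, cov_pf, pose_uwb, cov_uwb)              # line 10
    return pose  # the covariance correction of line 13 follows Section 2.4
```

In a full system this function would run at the lidar update rate, with `pf_localize` wrapping Algorithm 2 and `uwb_localize` wrapping the ranging pipeline of Section 2.2.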
2.1. PF-Based Localization
The original PF-based localization is improved in this paper. Its main idea is to choose high-weight particles sampled around the pose calculated from odometry information, as shown in Algorithm 2, which is obtained by substituting the last fused pose, the increment of robot movement, the environment observation, and the corrected particle covariance into the PF.
At every moment, we obtain the current pose with the differential odometry model (line 2), after which particles are sampled following the proposal distribution N(x̄_t, Σ_{t-1}) (line 5). In this step, the use of Σ_{t-1}, corrected in (21), gives a more reasonable proposal distribution, which adaptively adjusts the size of the particle swarm according to the last pose error and improves the rationality and reliability of PF-based localization. For every particle, the measurement model is applied in line 6 to weight its importance according to its degree of matching with the current local environment. The candidate particle swarm X̄_t covers the particles' poses and corresponding weights. In the update stage, re-sampling is conducted, where particles with high weights are much more likely to be drawn than ones with low weights. Finally, x_t^pf is obtained and input into the pose fusion method (line 10 in Algorithm 1).
Algorithm 2 Improved PF-based Localization(x̂_{t-1}, u_t, z_t, Σ_{t-1}).
1: X_t = ∅, X̄_t = ∅
2: x̄_t = differential odometry model(x̂_{t-1}, u_t)
3: Set m as the number of particles to be sampled
4: for i = 0 to m do
5:     x_t^[i] ~ N(x̄_t, Σ_{t-1})
6:     w_t^[i] = measurement_model(z_t, x_t^[i], map)
7:     X̄_t += ⟨x_t^[i], w_t^[i]⟩
8: end for
9: for i = 0 to m do
10:    Draw x_t^[j] from X̄_t with probability ∝ w_t^[j]
11:    X_t += x_t^[j]
12: end for
13: x_t^pf, Σ_t^pf = mean and covariance of X_t
14: return x_t^pf, Σ_t^pf
Where X_t is the set that stores the pose and covariance of the particles, and X̄_t is the candidate particle swarm; both are initialized to the empty set ∅. The pose of every particle, x_t^[i], is sampled around x̄_t, which is calculated according to the robot movement and the last robot pose, and w_t^[i] is the corresponding weight.
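The sample-weight-resample cycle of Algorithm 2 can be sketched as follows; the additive motion model and the `measurement_weight` interface are simplifications standing in for the differential odometry model and the lidar likelihood-field model.

```python
import numpy as np

def pf_step(pose_prev, cov_prev, u, scan, measurement_weight, m=500):
    """One sample-weight-resample cycle of the improved PF-based localization
    (a sketch of Algorithm 2; `measurement_weight(pose, scan)` is an assumed
    interface for the lidar likelihood-field model)."""
    rng = np.random.default_rng()
    # Line 2: propagate the last fused pose with the odometry increment
    # (simplified additive model; the paper uses a differential odometry model).
    pose_odom = np.asarray(pose_prev, dtype=float) + np.asarray(u, dtype=float)
    # Lines 4-8: sample candidates around the odometry pose with the corrected covariance.
    particles = rng.multivariate_normal(pose_odom, cov_prev, size=m)
    weights = np.array([measurement_weight(p, scan) for p in particles])
    weights = weights / weights.sum()
    # Lines 9-12: resample in proportion to the weights.
    resampled = particles[rng.choice(m, size=m, p=weights)]
    # Line 13: summarize the swarm by its mean pose and covariance.
    return resampled.mean(axis=0), np.cov(resampled, rowvar=False)
```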
2.2. Ranging-Based Localization
The DSTW ranging method is used in ranging-based localization. The schematic diagram is shown in Figure 2, where the two axes indicate the time axes of device A and device B, respectively.
The predicted value of the flight time T_f can be expressed in terms of T_round, the time from sending a packet to receiving its reply, and T_reply, the data-processing time of a single device. In this way, the error of the flight time can be expressed in terms of k_A and k_B, the ratios of the actual clock frequency to the rated value for devices A and B. DSTW solves the problem of time synchronization to some degree, improving the ranging accuracy; ranges are returned in millimeters.
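For reference, the standard double-sided two-way ranging estimator, which we assume is the form the flight-time expressions above reduce to, combines the two round-trip times and the two reply times as

\[ \hat{T}_f = \frac{T_{round,A}\,T_{round,B} - T_{reply,A}\,T_{reply,B}}{T_{round,A} + T_{round,B} + T_{reply,A} + T_{reply,B}}, \qquad \hat{d} = c\,\hat{T}_f, \]

where c is the speed of light. With this form, the first-order effect of the clock-frequency ratios k_A and k_B largely cancels, which is why DSTW mitigates the time-synchronization problem.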
Having obtained the ranges between the anchors and the tag, triangulation is used to calculate the robot position, as shown in Figure 3 and Equation (7). To cut down the computational cost, anchor A_0 is set as the origin, with A_1 on the x-axis and A_2 on the y-axis, and all three anchors are at the same height h. In Equation (8), d_i expresses the measured distance between the tag and A_i, and (x_i, y_i, z_i) refers to the position of A_i.
Equation (8) finally gives the position of the robot in the UWB coordinate system. However, the positioning accuracy cannot be estimated directly, so a detection-correction step is added, in which the position error is converted from the distance error using the Jacobian.
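Under this anchor layout, the tag position follows in closed form; the sketch below assumes the anchor-tag height difference is known, and all variable names are ours.

```python
def trilaterate(d0, d1, d2, x1, y2, dz=0.0):
    """Closed-form tag position for the anchor layout of Figure 3: A0 at the
    origin, A1 at (x1, 0) on the x-axis, A2 at (0, y2) on the y-axis, all at
    the same height. `dz` is the assumed anchor-tag height difference
    (0 if the measured ranges are already in-plane)."""
    # Project the slant ranges onto the horizontal plane (squared).
    r0, r1, r2 = [d**2 - dz**2 for d in (d0, d1, d2)]
    # Subtracting the circle equations pairwise gives linear expressions in x and y.
    x = (r0 - r1 + x1**2) / (2.0 * x1)
    y = (r0 - r2 + y2**2) / (2.0 * y2)
    return x, y
```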
UWB is a real-time ranging-based positioning technology with no cumulative error, but its covariance varies greatly across different hardware. Therefore, the distance errors between the robot and the anchors are used to derive the coordinate error and achieve accurate position estimation.
The distance between the robot and a certain anchor A_i can be expressed as in Equation (9). Assuming that the robot coordinates are known, the ranging error of UWB is easily obtained as in Equation (10). To obtain the coordinate error, Equation (10) is differentiated, giving Equation (11). By introducing Equation (9) into Equation (11), we obtain Equation (12). Therefore, by converting Equation (12) into a matrix representation, a differential matrix from the coordinate error to the distance error is obtained, and Equation (14) is used to obtain the position covariance Σ_t^uwb, which is used in Algorithms 1 and 3.
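As a concrete illustration of this detection-correction step, the following sketch propagates a per-anchor ranging variance to a 2D position covariance through the Jacobian of the range equations; the least-squares form is our assumption rather than the exact Equations (9)-(14).

```python
import numpy as np

def position_covariance(pos, anchors, range_var):
    """Propagate a per-anchor ranging variance to a 2D position covariance
    through the Jacobian of d_i = ||pos - a_i|| (a hedged stand-in for the
    paper's detection-correction derivation)."""
    pos = np.asarray(pos, dtype=float)
    anchors = np.asarray(anchors, dtype=float)   # shape (n_anchors, 2)
    diffs = pos - anchors
    dists = np.linalg.norm(diffs, axis=1)
    J = diffs / dists[:, None]                   # rows: d(d_i)/d(x, y)
    # Least-squares propagation: Sigma_p = sigma_d^2 * (J^T J)^{-1}.
    return range_var * np.linalg.inv(J.T @ J)

# Example: three anchors of a 20 m x 15 m hall, 10 cm ranging standard deviation.
# cov = position_covariance([5.0, 4.0], [[0, 0], [20, 0], [0, 15]], 0.1**2)
```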
2.3. Robot Kidnap Detection and Recovery
To address the problems of global localization and robot kidnap detection in traditional PF-based localization methods [23], we propose a novel criterion, KNP, which is measured according to the distribution of the particles, N(x_t^pf, Σ_t^pf), and the pose distribution of ranging-based localization, N(x_t^uwb, Σ_t^uwb), where the subscript t represents values at time t.
In the update phase (the green rectangle in Figure 1), a match-based measurement method is conducted. We assume that both the PF-based pose, x_t^pf, and the ranging-based pose, x_t^uwb, follow 2D Gaussian distributions, whose means are the center poses of PF-based localization and ranging-based localization and whose covariances Σ_t^pf and Σ_t^uwb represent how dispersed the estimates are. Moreover, Σ_t^uwb is assumed to be related only to the distance between the anchors and the robot, because the ranging results are corrected in the range of 3 m to 20 m and the system is used in scenarios unobstructed for UWB, i.e., without Non-Line-of-Sight (NLOS) conditions. In the corresponding expression, λ refers to the attenuation coefficient and n is the number of ranging distances over 20 m.
Moreover, to measure the possibility of robot kidnapping, a novel criterion called KNP is introduced into UAPF, expressing the difference between the two localization methods. In Equation (18), the expectations and covariance matrices are substituted to obtain the Wasserstein distance between the two localization results. Then, in Equation (19), KNP is defined to measure the possibility of robot kidnapping, as used in Algorithm 1 and Equation (21). Generally, the smaller KNP is, the more likely it is that robot kidnap has occurred; under normal operating conditions it maintains a score of around 0.8.
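The 2-Wasserstein distance between two Gaussians has a closed form, and KNP can be sketched on top of it as follows; the exponential mapping from distance to a score in (0, 1] is our assumption standing in for Equation (19).

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_gaussian(mu_a, cov_a, mu_b, cov_b):
    """Closed-form 2-Wasserstein distance between two Gaussian distributions."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    cov_a, cov_b = np.asarray(cov_a, float), np.asarray(cov_b, float)
    root_b = sqrtm(cov_b)
    cross = sqrtm(root_b @ cov_a @ root_b).real
    d2 = np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

def knp(pose_pf, cov_pf, pose_uwb, cov_uwb, scale=1.0):
    """Map the distance between the two pose distributions to a score in (0, 1];
    the exponential mapping and `scale` are assumptions, not Equation (19)."""
    p1, p2 = np.asarray(pose_pf, float)[:2], np.asarray(pose_uwb, float)[:2]
    c1 = np.asarray(cov_pf, float)[:2, :2]
    c2 = np.asarray(cov_uwb, float)[:2, :2]
    return float(np.exp(-scale * wasserstein2_gaussian(p1, c1, p2, c2)))
```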
2.4. Particles Update for Pose Tracking
In traditional PF-based localization [3], the pose error is measured by the variance of the particle swarm, but the sensor noise of the odometry and the lidar is treated as fixed parameters, which degrades the localization accuracy to some degree.
In general, when the robot moves from x_{t-1} to x_t, with the odometry movement u and the map m, we can obtain the probability distribution of the robot pose. The combined localization of lidar and odometry is robust in most cases. However, lidar can only reduce the accumulated error of the odometry rather than eliminate it. Therefore, UWB is introduced to eliminate the accumulative error of the system, and the probability distribution of x_t can be expressed as (20), where η is the normalization constant, p(x_t | x_{t-1}, u) expresses the odometry pose calculated from the robot motion, p(z_t | x_t, m) expresses the lidar likelihood-field model, and the ranging-based term is measured by the ranging-based localization sub-process.
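Written out with the quantities just defined, one plausible factorization consistent with (20) is

\[ p(x_t \mid z_t, u, d_t, m) = \eta \, p(z_t \mid x_t, m)\, p_{uwb}(x_t \mid d_t)\, p(x_t \mid x_{t-1}, u), \]

although the exact form used in the paper may differ.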
KNP can adaptively measure the localization accuracy to some degree, but not precisely enough for real-time pose tracking. Therefore, the Euclidean distance between the results of the two localization methods is taken into account to update Σ_t^pf, as shown in (21), where Σ_t^pf represents the covariance of PF-based localization and Δx and Δy are the pose differences between the two localization methods. In (21), only the position error is used for the update because of the low reliability of the yaw in ranging-based localization.
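A minimal sketch of such an update, assuming an additive inflation of the position block by the squared position differences (the exact form of Equation (21) may differ):

```python
import numpy as np

def update_pf_covariance(cov_pf, pose_pf, pose_uwb, gain=1.0):
    """Inflate the position block of the particle covariance by the squared
    position differences between the two localization results; the additive
    form and `gain` are assumptions standing in for Equation (21). The yaw
    entry is left untouched because the UWB yaw is unreliable."""
    cov = np.asarray(cov_pf, dtype=float).copy()
    dx = float(pose_uwb[0]) - float(pose_pf[0])
    dy = float(pose_uwb[1]) - float(pose_pf[1])
    cov[0, 0] += gain * dx ** 2
    cov[1, 1] += gain * dy ** 2
    return cov
```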
2.5. Pose Fusion
As mentioned above, the UWB position has large uncertainty, which manifests as positioning results jumping around the true value. Therefore, the fusion of PF-based poses and UWB poses is conducted to improve the accuracy, as shown in Algorithm 3.
Firstly, fusion starts with the UWB pose as the initial pose to solve global localization. At every time t, the PF-based localization result is regarded as the predictive pose x_t^pf. In the update stage, the result of ranging-based localization, x_t^uwb and Σ_t^uwb, is used. Due to the high frequency (20 Hz) of ranging-based localization, a sliding window is used to average the pose values, especially the yaw θ_t^uwb.
Algorithm 3 Pose Fusion(x_t^pf, x_t^uwb, Σ_t^uwb).
1: if t - 1 = 0 then
2:     Initialize x̂_t with x_t^uwb
3: else
4:     Predict: x̄_t = x_t^pf + ε
5:     Update: x̂_t according to x_t^uwb, Σ_t^uwb and θ_t^uwb
6: end if
7: return x̂_t
Where ε is the noise compensation, which obeys a Gaussian distribution, and θ_t^uwb is the yaw of the ranging-based localization pose.
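A minimal sketch of the fusion step, assuming a standard Kalman update on the 2D position with the PF pose as the prediction and the UWB position as the measurement:

```python
import numpy as np

def fuse_pose(pose_pf, cov_pf, pose_uwb, cov_uwb, noise_std=0.0, rng=None):
    """Kalman-style sketch of the update line of Algorithm 3 for the 2D position:
    the PF pose acts as the prediction (optionally perturbed by the Gaussian
    noise compensation) and the UWB position as the measurement. The sliding-
    window averaging of the UWB pose and the yaw handling are omitted;
    `noise_std` is an assumed parameterization of the noise compensation."""
    rng = rng or np.random.default_rng()
    cov_pf = np.asarray(cov_pf, dtype=float)[:2, :2]
    cov_uwb = np.asarray(cov_uwb, dtype=float)[:2, :2]
    pred = np.asarray(pose_pf, dtype=float)[:2] + rng.normal(0.0, noise_std, size=2)
    K = cov_pf @ np.linalg.inv(cov_pf + cov_uwb)        # Kalman gain
    pose = pred + K @ (np.asarray(pose_uwb, dtype=float)[:2] - pred)
    cov = (np.eye(2) - K) @ cov_pf
    return pose, cov
```

The gain weighs the two estimates by their covariances, so the fused position leans toward whichever source is currently more certain.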
4. Discussion
In this paper, we presented an indoor localization method for open scenarios with few features. Ranging-based localization provided the initial pose for the first global localization, and then pose fusion was conducted as the basis of normal pose tracking. Moreover, we used PF-based localization to overcome sensor noise. A novel criterion called KNP was introduced into UAPF to evaluate the possibility of robot kidnapping and, together with the covariance of the particle swarm, the stability of localization. Experiments in a real-world situation indicated that UAPF can achieve robot kidnap recovery in less than 2 s with a position error of less than 0.1 m in a hall of 600 m².
In Section 3, we compared our method with AMCL, because it is the state-of-the-art PF-based indoor localization method using lidar and odometry.
Table 1 and Table 2 indicated that the regression function (22) was suitable for the experimental scenario, and Figure 8 showed intuitively how linear regression improves the ranging accuracy. Table 3 and Table 4 reported the time needed to achieve global localization with UAPF and AMCL, respectively. Table 5 compared the accuracy and the time taken to recover from robot kidnapping. As mentioned above, the number of particles used in UAPF was from 500 to 1000, while AMCL used from 5000 to 10,000. In this situation, UAPF could still conduct global localization in less than 3 s on average, much faster than AMCL, illustrating the efficiency of UAPF.
Figure 11c,f showed the results of global localization, and Figure 12a-f showed the process of robot kidnap recovery. The trajectories of the different localization methods were shown in Figure 13, illustrating that UAPF achieves performance comparable to AMCL and is much more stable than the single ranging-based localization method, whose instability restricts further improvement of accuracy.
In the future, the instability of ranging-based localization will be improved, and more sensors such as RGB-D cameras will be added to UAPF to make it a universal localization method. Vision-based localization will play an essential role when the robot is in an NLOS environment and lacks the ranging information transferred by UWB.