1. Introduction
Nowadays, Unmanned Aerial Vehicles (UAVs) represent a popular solution for executing tasks in several markets and applications [
1], such as delivery of goods [
2], surveillance and monitoring [
3], inspection and mapping [
4], precision agriculture [
5], and cinematography [
6]. The usage of flying platforms allows reducing the time and cost of the mission, while guaranteeing high flexibility. This improves mission performance and/or enables missions that would otherwise not be feasible at all. However, the capability of UAVs to carry out their mission, whether autonomously or remotely piloted, strictly depends on their navigation performance, which may need to be very accurate (at least in the post-processing stage) in several applications, such as mapping and photogrammetry.
UAV navigation is usually tackled by fusing inertial and GNSS (global navigation satellite system) measurements, which, owing to their complementary properties, are usually combined in Kalman filters (KF). Inertial measurements consist of three-axis gyroscope and accelerometer observables, retrieved with an inertial measurement unit (IMU). These measurements are affected by different error sources, including a time-varying in-run bias for each channel which, if not correctly estimated, can degrade positioning, velocity, and attitude estimation performance. Residual uncompensated inertial biases may also play a key role in the positioning error growth rate in the absence of reliable GNSS coverage.
The problem of in-run bias estimation has been widely tackled in the open literature. It requires combining gyroscope and accelerometer measurements with additional information, which could be either measurements provided by other sensor sources or information on the actual IMU/platform configuration. Several strategies have been developed in recent years, including static and dynamic inertial bias calibration.
Static calibration is the most used technique, since the assumption of a static configuration is effective in improving the bias estimation process. It requires the platform to be steady for a time interval that can last several minutes (which can pose challenges in some application scenarios). The zero velocity update (ZUPT) [
7,
8] is the most used static technique. It uses, as an additional measurement within the navigation filter, the assumption that the linear velocity is zero. This assumption allows correctly estimating the vertical accelerometer bias and the x and y gyro biases [
9,
10]. Reference [
11] studied the ZUPT problem, demonstrating that in a 15-variable state (3 positions, 3 velocities, 3 attitude angles, 3 accelerometer biases, and 3 gyroscope biases), 6 out of the 15 variables are unobservable. It also defined subspaces of individually observable errors, coupling the variables which depend on each other.
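As an illustration (not part of [11]'s formulation), a minimal MATLAB sketch of how a zero-velocity pseudo-measurement enters a 15-state EKF correction is reported below; the state ordering and the noise value are assumptions made for the example.

```matlab
% Minimal ZUPT correction sketch for a 15-state EKF.
% Assumed error-state order: position (1:3), velocity (4:6),
% attitude (7:9), accelerometer bias (10:12), gyroscope bias (13:15).
P     = eye(15);                        % example error covariance
v_hat = [0.02; -0.01; 0.03];            % predicted velocity, should be zero (m/s)
H  = [zeros(3,3), eye(3), zeros(3,9)];  % pseudo-measurement observes velocity error only
R  = (0.01)^2 * eye(3);                 % assumed pseudo-measurement noise, (m/s)^2
dy = -v_hat;                            % residual: zero velocity minus prediction
K  = P*H' / (H*P*H' + R);               % Kalman gain
dx = K*dy;                              % estimated error state
P  = (eye(15) - K*H) * P;               % covariance update
```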
To obtain a full estimate of all the accelerometer and gyro biases, a common solution is to place the IMU sensor in a set of static positions (attitudes) and solve the full in-run bias estimation problem [
12,
13,
14]. This requires a longer estimation time for in-run bias estimation than the ZUPT solution. The piecewise static position bias calibration concept has been recently extended by [
15], which proposes a solution that exploits IMU rotation along its sensitive axes. Indeed, IMU calibration may benefit from platform maneuvers which improve bias observability.
Due to this property, several studies deal with dynamic bias calibration, exploiting various external sensors as references for bias estimation. In addition, due to bias instability, and depending on mission duration, in-flight bias estimation may be needed even in the case of an accurate initialization. Kalman filters or nonlinear observers, which guarantee global convergence [
16] have been used for fusing IMU information with other sources of measurements. Several authors integrate IMU measurements either with GNSS [
10,
17] or odometry information [
9,
16] to improve bias estimation, and provide bias observability analyses based on platform motion. Bias observers have been developed accounting either for gyro measurements only [
18], or for the full IMU measurement set [
19,
20,
21], where GNSS information has also been used as a positioning reference. Reference [
22] estimates position, velocity, attitude, and gyro biases by fusing IMU measurements with altimeter, heading estimate, and line-of-sight (LOS) estimates given by a camera. A full filter state with 15 variables is used in [
23] for retrieving the full pose of the platform with a camera, demonstrating that the 15-variable set is fully observable when both position and attitude estimates are available. Therefore, in-run bias estimation and IMU calibration can be performed by integrating IMU outputs with GNSS measurements, which provide a position reference, under both dynamic and static configurations, provided IMU-independent attitude information is available.
The authors proposed in [
24,
25] an attitude estimation strategy exploiting cooperative navigation [
26], which uses unit vectors estimated in the geographic and body frames. One or two vehicles, named deputies, have been used to improve the attitude accuracy of a “chief” vehicle, and the LOS vectors in the local frame and body frame have been retrieved with carrier-phase differential GNSS (CDGNSS) and visual-based information, respectively. This information has been used to retrieve an estimate of the chief attitude which is independent of inertial (IMU) and magnetic measurements. A similar approach, consisting of using LOS measurements only, has been used in [
27], demonstrating that at least three LOS directions are needed to provide full observability of the 15-variable state.
Cooperative or networked navigation is intended as the use of networked relative measurements (e.g., range-based, angle-based) with the aim of increasing navigation performance [
28]. This approach has been widely used in the open literature with promising outcomes especially when navigating under non-nominal GNSS coverage [
29,
30]. As an example, a networked system of GNSS ground receivers has been exploited in [
31,
32]. In the cooperative navigation framework, magnetometer bias calibration has been carried out in [
33], but, to the best of the authors’ knowledge, no comprehensive studies concerning the potential and applicability of the concept for in-flight inertial bias estimation have been proposed in the open literature.
This paper analyzes the potential of the cooperative strategies introduced in [
24,
25] towards in-flight estimation of inertial biases, using extensive numerical analyses and experimental results. Compared with recent literature, the main innovative points are:
The paper is organized as follows.
Section 2 introduces the cooperative navigation strategy.
Section 3 introduces the nomenclature used in the paper. Navigation state estimation and its equations are detailed in
Section 4. A numerical analysis of bias estimation performance is presented in
Section 5, while
Section 6 presents experimental results from flight experiments with multirotors. Finally,
Section 7 draws the conclusions of the present work.
3. Nomenclature
Before analyzing the details of the algorithm for state estimation and its equations, this section defines the notation used throughout the manuscript. Bold ($\mathbf{a}$) and italic ($a$) variables indicate vector and scalar quantities, respectively. The projection of a vector $\mathbf{a}$ in the reference frame $i$ is indicated with $\mathbf{a}^i$. This paper uses the NED (north-east-down) frame, the chief's BRF (body reference frame), and the CRF (camera reference frame), which are respectively indicated with the superscripts
n,
b, and
c. Chief BRF is centered in its center of mass (CoM), whereas CRF is centered at camera location. See
Figure 1, where camera and body frame axes are reported for the chief vehicle, assuming
a(
l) is the
l-th axis of the frame
a.
The three components of the vector $\mathbf{a}^i$ in the frame $i$ are expressed as $a^i(1)$, $a^i(2)$, and $a^i(3)$. To simplify the notation when the NED frame is considered, the axis indices reported within the brackets are usually replaced with the letters $n$, $e$, and $d$ to indicate the north, east, and down directions, respectively. Therefore, $\mathbf{a}^n = [a^n(n),\ a^n(e),\ a^n(d)]^T$. A matrix $A$ is reported in capital style. $R_i^j$ indicates the rotation matrix which transforms a vector from the frame $i$ to the frame $j$, such that $\mathbf{a}^j = R_i^j\,\mathbf{a}^i$.
The position can be represented with two conventions:
p indicates the position expressed as geographic coordinates (latitude
l, longitude
λ, and altitude
h), and $\mathbf{r}_{ab}$ indicates the vector going from location $a$ to location $b$, whose unit vector is indicated with $\mathbf{u}_{ab}$. Referring to
Figure 1, the center of mass location of the chief and the
j-th deputy have been indicated with
s and
dj, respectively. The vector measuring their relative position, which is given by the CDGNSS processing, is indicated with $\mathbf{r}_{sd_j}$. The chief's camera location is indicated with $c$, and the vector connecting the camera location with the deputy CoM is $\mathbf{r}_{cd_j}$, whose associated unit vector $\mathbf{u}_{cd_j}$ is measured by the camera. $\mathbf{r}_{sc}$ is the vector going from the chief's center of mass to the origin of the chief's camera frame.
The derivative of a scalar $a$ with respect to a vector $\mathbf{v}^i$ is a 1 × 3 matrix indicated with $\partial a/\partial\mathbf{v}^i$, whose $l$-th component is the derivative of the scalar $a$ with respect to the $l$-th component of the vector $\mathbf{v}^i$. Conversely, the 3 × 3 matrix indicating the derivative of a vector $\mathbf{q}^i$ with respect to a vector $\mathbf{v}^i$ is indicated with $\partial\mathbf{q}^i/\partial\mathbf{v}^i$, and its element at the $l$-th row and $j$-th column is $\partial q^i(l)/\partial v^i(j)$.
4. Cooperative Navigation Filter
The navigation architecture used for estimating the state of the chief vehicle is represented in
Figure 2. It is based on the extended Kalman filter (EKF) described in [
36], and assumes the vehicle state is composed of position
p in geographic coordinates, velocity expressed in NED frame
vn, attitude from NED to BRF parametrized by heading
ψ, pitch
θ and roll
φ angles (defined with a 3-2-1 rotation sequence of Euler angles), and the 3 × 1 vectors including the accelerometer and gyroscope biases, expressed in BRF, i.e., $\mathbf{b}_a^b$ and $\mathbf{b}_g^b$, respectively. The filter propagates and corrects the state's error
δx, which is given by:
$$\delta\mathbf{x} = \left[\delta\mathbf{p}^T,\ \delta\mathbf{v}^{nT},\ \boldsymbol{\rho}^T,\ \delta\mathbf{b}_a^{bT},\ \delta\mathbf{b}_g^{bT}\right]^T \qquad (1)$$
where
ρ represents the attitude error vector expressed in the NED reference frame, as reported in [
36]. Its components are indicated with $\rho(n)$, $\rho(e)$, and $\rho(d)$.
The filter propagates the state and its error with the well-known inertial navigation equations, which use IMU measurements to predict the UAV state and its error at the current step
k, starting from their best estimate at the previous step
k-1. The WGS84 model has been used for predicting the local gravity vector [
7], to have a more accurate estimate of the down component of the accelerometer bias, especially when experimental data are used. Inertial propagation equations are not reported here for the sake of brevity. The interested reader is referred to [
36] for further details.
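For reference, the 3-2-1 (heading-pitch-roll) parametrization recalled above corresponds to the following NED-to-BRF rotation; this MATLAB sketch uses the standard elementary rotations and should be checked against the conventions of [36].

```matlab
function C_nb = euler321_to_dcm(psi, theta, phi)
% NED-to-body (BRF) rotation matrix for a 3-2-1 Euler sequence:
% heading psi about down, pitch theta about intermediate y, roll phi about body x.
C3 = [ cos(psi)   sin(psi)  0;
      -sin(psi)   cos(psi)  0;
       0          0         1];
C2 = [ cos(theta) 0 -sin(theta);
       0          1  0;
       sin(theta) 0  cos(theta)];
C1 = [1  0         0;
      0  cos(phi)  sin(phi);
      0 -sin(phi)  cos(phi)];
C_nb = C1 * C2 * C3;   % R_n^b = C1(phi) C2(theta) C3(psi)
end
```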
Correction steps use both cooperative and non-cooperative measurements (reported in gray in
Figure 2). Non-cooperative measurements consist of the magnetometer and GNSS outputs, which are complemented with cooperative measurements coming from several (
J) deputies.
For each deputy, the cooperative measurement to be used for attitude estimate is given by combining CDGNSS and visual output which are reported in blue and orange in
Figure 2, respectively.
The correction equation relates the measurement residual
δy to the state error through the measurement matrix
H. It can be written as:
$$\delta\mathbf{y} = H\,\delta\mathbf{x} + \mathbf{w} \qquad (2)$$
where
w is the measurement noise associated with the residual, whose covariance matrix is
R.
Equation (2) can be rewritten for the specific filter reported in
Figure 2, as:
$$\begin{bmatrix} \delta\mathbf{y}_{GNSS} \\ \delta y_{MAG} \\ \delta\mathbf{y}_{1} \\ \vdots \\ \delta\mathbf{y}_{J} \end{bmatrix} = \begin{bmatrix} H_{GNSS,p} & 0_{m\times3} & 0_{m\times3} & 0_{m\times6} \\ 0_{1\times3} & 0_{1\times3} & H_{MAG,\rho} & 0_{1\times6} \\ 0_{2\times3} & 0_{2\times3} & H_{1,\rho} & 0_{2\times6} \\ \vdots & \vdots & \vdots & \vdots \\ 0_{2\times3} & 0_{2\times3} & H_{J,\rho} & 0_{2\times6} \end{bmatrix} \delta\mathbf{x} + \mathbf{w} \qquad (3)$$
where
Ha,b is the matrix that connects the measurement
a with the part
b of the state which could be position
p, velocity
v, and attitude
ρ.
Ra is the covariance matrix associated with the measurement
a. $0_{a\times b}$ indicates a matrix composed of all-zero elements, with $a$ rows and $b$ columns.
δyGNSS and
δyMAG are the GNSS and magnetometer residuals.
δyj is the residual associated with the cooperative measurement related to the
j-th deputy, with
j = 1, …,
J, and
Hj and
Rj are their associated measurement and covariance matrices. Detailed derivation of
δyj,
Hj, and
Rj are reported in
Section 4.1.
GNSS pseudorange measurements are tightly integrated within the Kalman filter, and the number of GNSS residuals (m) depends on the number of available satellites. Pseudorange measurements depend only on the chief position; therefore, the GNSS residual couples only with the position error.
The magnetometer residual is a scalar residual on the heading angle, which is coupled only with the attitude part of the state. For the sake of brevity, details about the magnetometer and GNSS residuals and covariance matrices are omitted from this manuscript. For further details, the interested reader is referred to [
29].
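The structure described above can be sketched as follows in MATLAB; the block sizes and the numerical values are placeholders for illustration (m GNSS residuals, one magnetometer residual, and two angular residuals per deputy), not the paper's actual implementation.

```matlab
% Sketch of the stacked correction step implied by Equation (3).
m = 6;  J = 2;  P = eye(15);               % example sizes and error covariance
H_gnss_p  = randn(m,3);                    % GNSS rows couple with position only
H_mag_rho = [0 0 1];                       % magnetometer row couples with attitude only
H_coop    = [zeros(2*J,6), randn(2*J,3), zeros(2*J,6)];  % deputy rows: attitude block
H  = [H_gnss_p,  zeros(m,12);
      zeros(1,6), H_mag_rho, zeros(1,6);
      H_coop];
R  = blkdiag(eye(m), 0.1, 0.01*eye(2*J));  % block-diagonal covariance (placeholder values)
dy = randn(m + 1 + 2*J, 1);                % stacked residuals (placeholder)
K  = P*H' / (H*P*H' + R);                  % Kalman gain
dx = K*dy;                                 % error-state correction
P  = (eye(15) - K*H) * P;                  % covariance update
```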
4.1. Cooperative Measurement Equation
From Equation (2), the measurement equation for the j-th deputy can be written as $\delta\mathbf{y}_j = H_j\,\delta\mathbf{x} + \mathbf{w}_j$, where $\mathbf{w}_j$ is a Gaussian noise with covariance $R_j$. This section derives the terms composing the measurement equation for the cooperative contribution of the j-th deputy. The detailed derivation of δyj, Hj, and Rj is presented hereafter.
The vector between the camera and the deputy's center of mass measured in NED, i.e., $\mathbf{r}_{cd_j}^n$, can be converted to the LOS direction in CRF thanks to the following formula:
$$\mathbf{u}_{cd_j}^c = R_b^c\, R_n^b\, \frac{\mathbf{r}_{cd_j}^n}{|\mathbf{r}_{cd_j}^n|} \qquad (4)$$
where $|\mathbf{a}|$ is the operator yielding the norm of the vector in the brackets. Observing from
Figure 1 that $\mathbf{r}_{cd_j}^n = \mathbf{r}_{sd_j}^n - R_b^n\,\mathbf{r}_{sc}^b$, and assuming $\mathbf{r}_{sc}$ negligible with respect to $\mathbf{r}_{sd_j}$ when computing the norm, Equation (4) can be rewritten as:
$$\mathbf{u}_{cd_j}^c = R_b^c\, R_n^b\, \frac{\mathbf{r}_{sd_j}^n - R_b^n\,\mathbf{r}_{sc}^b}{|\mathbf{r}_{sd_j}^n|} \qquad (5)$$
Indicating with $[\mathbf{a}\,\times]$ the skew-symmetric matrix associated with a vector $\mathbf{a}$, with $\hat{(\cdot)}$ the predicted quantity, and with $\Delta(\cdot)$ the error associated with that quantity, so that the true value is $(\cdot) = \hat{(\cdot)} + \Delta(\cdot)$, Equation (5) becomes:
The BRF to CRF rotation matrix is assumed to be perfectly known; therefore, the estimated camera calibration parameters $R_b^c$ and $\mathbf{r}_{sc}^b$ are assumed to be known without errors. Rearranging Equation (6) so as to find the CDGNSS/vision residual, i.e., Δ$\mathbf{u}^c$, yields:
The CDGNSS/vision residual is a 3 × 1 vector, which includes two unit vectors estimated in CRF and NED. The quantity so obtained has one component dependent on the other two. To avoid dealing with linearly dependent measurements, which would make the associated covariance matrix rank-deficient, i.e., not invertible, a linearly independent measurement vector can be obtained by converting the unit vector residual in Equation (7) into angular residuals, i.e., azimuth (Az) and elevation (El) residuals, so that:
where $\alpha$ could be either the Az or the El angle estimated starting from a unit vector. As an example, for a generic unit vector expressed in CRF, $\mathbf{u}^c$, Az and El are computed from its components, and $\partial\alpha/\partial\mathbf{u}^c$ represents the derivative of the angle $\alpha$ with respect to $\mathbf{u}^c$.
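As an example of such a conversion and of its Jacobian, a MATLAB sketch valid for unit-norm vectors is reported below; the axis convention (component 1 forward, 2 right, 3 down) is an assumption made for illustration and need not coincide with the actual CRF axes.

```matlab
function [az, el, J] = dir2angles(u)
% Azimuth/elevation of a unit LOS vector u (3x1) and the 2x3 Jacobian
% d[az; el]/du, valid for unit-norm u.
r  = sqrt(u(1)^2 + u(2)^2);        % horizontal norm
az = atan2(u(2), u(1));            % azimuth in the 1-2 plane
el = atan2(-u(3), r);              % elevation, positive for u(3) < 0
J  = [-u(2)/r^2,    u(1)/r^2,    0;
       u(1)*u(3)/r, u(2)*u(3)/r, -r];
end
```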
To highlight the source of each measurement residual, the subscripts cam and CDGNSS have been reported in Equation (8). $\mathbf{u}^c_{cam}$ can be obtained directly by converting the pixel information of the deputy in the chief's camera frame, using the pinhole camera model, whereas $\mathbf{u}^c_{CDGNSS}$ is obtained starting from the CDGNSS measured baseline, i.e., $\mathbf{r}_{sd_j}^n$, its associated unit vector, and the knowledge of the camera position with respect to BRF, $\mathbf{r}_{sc}^b$.
The errors $\Delta\mathbf{u}^c_{CDGNSS}$ and $\Delta\mathbf{u}^c_{cam}$ are associated with the CDGNSS and visual estimates, respectively. Converting $\Delta\mathbf{u}^c_{CDGNSS}$ as a function of the baseline error δCDGNSS, and expressing $\Delta\mathbf{u}^c_{cam}$ in terms of the camera angular error (δcam), the right side of Equation (8) becomes:
where δCDGNSS and δcam are respectively the CDGNSS and camera errors. The first represents the vector including the error along each component of the baseline estimated with the CDGNSS technique. Its standard deviation (STD) components in the NED frame are σCDGNSS(n), σCDGNSS(e), and σCDGNSS(d). On the other hand, δcam is the error in the camera identification of the target, whose STD, σcam, coincides with the instantaneous field of view (IFOV). Equation (10) is the measurement equation for the CDGNSS/vision observable.
δyj, Hj, and Rj can be extracted from this equation, taking the left side of the equation as the measurement residual, and the state-dependent and state-independent parts of the right side for the measurement matrix and the covariance matrix, respectively. Therefore:
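Putting the pieces together, a hedged MATLAB sketch of the per-deputy residual construction is reported below (using the dir2angles helper sketched earlier); variable names, example values, and conventions are illustrative assumptions rather than the paper's implementation.

```matlab
% Sketch: cooperative angular residual for deputy j.
b_n    = [80; 50; -10];                  % CDGNSS baseline chief->deputy, NED (example)
r_sc_b = [0.10; 0; -0.05];               % camera lever arm in BRF, from calibration (example)
C_nb   = eye(3);                         % predicted NED->BRF rotation (filter attitude)
C_bc   = eye(3);                         % BRF->CRF rotation, known from calibration
u_cam  = [0.845; 0.527; -0.093];         % LOS unit vector measured by the camera, CRF (example)
v_pred = C_bc * (C_nb*b_n - r_sc_b);     % predicted camera->deputy vector in CRF
u_pred = v_pred / norm(v_pred);          % predicted LOS unit vector
[az_m, el_m] = dir2angles(u_cam / norm(u_cam));
[az_p, el_p] = dir2angles(u_pred);
dy_j = [az_m - az_p; el_m - el_p];       % 2x1 angular residual for deputy j
```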
5. Numerical Analysis
This section assesses the potential of the proposed approach for bias estimation via simulation-based analyses. The necessity of a numerical approach derives from the dependency of the problem on the system dynamics, which makes bias estimation performance dependent not only on cooperative navigation measurements, but also on the time evolution of position, velocity, and attitude, in a fully coupled fashion. Thus, a purely analytical approach such as the one proposed in [
29] for positioning accuracy prediction cannot be applied. A custom camera/IMU/GNSS/magnetometer simulator has been developed for this purpose in MATLAB®.
Results are presented for both the one deputy and two deputies cases. The two deputies case is analyzed first (
Section 5.1) since, in this case, full knowledge of the attitude is available and satisfactory bias estimation results are expected. In the single deputy case (
Section 5.2), the attitude information is not fully available, which makes some of the states unobservable; however, bias estimation by cooperation can be enhanced by providing relative motion among the platforms and/or accelerated dynamics for the chief.
The following sub-sections assume the chief UAV moves along a quasi-straight-line trajectory, which is depicted in
Figure 3. Unless differently specified, the UAV is assumed to proceed with a constant heading, with the nose pointed eastwards. To further highlight the benefit of using cooperative measurements in estimating attitude, the simulated magnetometer estimate is assumed to be biased (as actually happens in typical flight scenarios). The IMU parameters used for simulating the gyroscope and accelerometer outputs are reported in
Table 1. GNSS integration uses standalone measurements, as noted in
Section 4. Results obtained by the cooperative filter are compared with those obtained when cooperative measurements are not used, i.e., with the filter reported in
Section 4 deprived of cooperative measurements. The following sections analyze the IMU bias estimation performance.
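For clarity, a minimal sketch of how biased IMU channels can be simulated is reported below; the sampling rate and the noise/bias values are placeholders, while the values actually used in the analysis are those of Table 1.

```matlab
% Sketch: simulated gyro/accel outputs with constant in-run bias plus white noise.
fs = 100;  dt = 1/fs;  N = 1000;            % assumed IMU rate and sample count
omega_true = zeros(3, N);                   % true angular rate (hovering example)
f_true     = repmat([0; 0; -9.81], 1, N);   % true specific force (leveled example)
b_g = deg2rad(10)/3600 * ones(3,1);         % gyro bias, rad/s (10 deg/h, placeholder)
b_a = 2e-3*9.81 * ones(3,1);                % accelerometer bias, m/s^2 (2 mg, placeholder)
arw = deg2rad(0.2)/60;                      % angular random walk, rad/sqrt(s) (placeholder)
vrw = 0.05/60;                              % velocity random walk, m/s/sqrt(s) (placeholder)
gyro_meas  = omega_true + b_g + arw/sqrt(dt)*randn(3,N);
accel_meas = f_true     + b_a + vrw/sqrt(dt)*randn(3,N);
```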
5.1. Two Deputies
When two deputies are used, the full attitude information can be estimated if the chief and the two deputies are not aligned with each other.
Figure 4 shows the formation geometry, which is defined by the elevation of each deputy with respect to the chief, i.e., μj, its range (rj), the separation between the two deputies on the local horizontal plane (Δχ), and the angle χ between the projection of the chief's forward direction on the local horizontal plane and the direction to the center of the deputy formation, which is positive if defined clockwise about the down direction.
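A small MATLAB sketch of how these parameters can be mapped to deputy positions is given below; the sign conventions are an assumption consistent with the description above.

```matlab
% Sketch: deputy NED positions from the formation parameters (r, mu, chi, dchi).
p_chief   = [0; 0; -20];                          % chief NED position (example)
psi_chief = pi/2;                                 % chief heading, eastward as in the text
r = 100;  mu = 0;  chi = 0;  dchi = deg2rad(70);  % example geometry
azi = psi_chief + chi + [-dchi/2, dchi/2];        % azimuth to each deputy (clockwise from north)
p_dep = zeros(3,2);
for j = 1:2
    los = [cos(mu)*cos(azi(j));                   % north
           cos(mu)*sin(azi(j));                   % east
          -sin(mu)];                              % down (negative for a deputy above the chief)
    p_dep(:,j) = p_chief + r*los;
end
```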
Depending on the geometry, cooperative navigation can be more or less accurate in terms of the estimation of the different attitude angles, which influences the bias estimation process, especially concerning the accelerometers. Reference [
35] demonstrates that, using a formation of two UAVs centered along the UAV forward direction, i.e., the roll axis (χ = 0°), cooperative aiding is more effective on the pitch estimate if the horizontal angle between the two deputies (Δχ) is smaller. On the contrary, when Δχ increases, the roll angle is characterized by a better accuracy. By setting χ = 90°, the behavior inverts, giving a more accurate roll estimate with small Δχ. In this section, the influence of the triplet's formation geometry is analyzed by setting r1 = r2 = r, μ1 = μ2 = μ, and χ = 0, and varying μ and Δχ.
Figure 5 and
Figure 6 report the results obtained when the trajectory depicted in
Figure 3 has been assumed for the chief vehicle, while the relative deputies’ geometry is given by μ = 0°, Δχ = 70°,
r = 100 m.
Figure 5a,b shows the accelerometer and gyroscope biases estimated by the cooperative filter (in black) compared with those estimated without cooperation (blue) and with the simulated biases, i.e., the reference solution, in red. The 3σ bound, derived by estimating the error STD (i.e., σ) with the filter predicted covariance, is also reported in gray. Root mean square (RMS) and maximum errors are reported for the cooperation-aided and non-cooperative filters, removing the first 60 s needed for filter convergence. As concerns the gyroscopes, cooperative navigation measurements allow the filter to converge to the true bias values faster, due to the more accurate heading estimate provided by CDGNSS/vision measurements compared with the magnetometer. As far as the accelerometer biases are concerned, the estimate is dramatically improved using cooperation. In fact, cooperative measurements allow convergence to the reference value, which otherwise would not be achieved.
For the sake of completeness,
Figure 6 shows the attitude errors, highlighting the effectiveness of cooperation especially in the heading estimate, which is debiased thanks to the IMU/magnetometer-independent nature of the CDGNSS/vision measurement. RMS and max errors are reported both for the cooperative filter and for the filter which does not use cooperative measurements.
Figure 7 and
Figure 8 show, respectively, the IMU biases and angular errors resulting when the angular separation between the deputies (i.e., Δχ) is reduced from 70° to 20°. The other parameters have been assumed equal to the previous case. With respect to the previously analyzed case (
Figure 5 and
Figure 6), it can be seen that the pitch error slightly reduces with an increase of the roll error, as predicted by [
35], whereas the accelerometer bias RMS decreases on the first axis and increases along the second axis of the BRF. This behavior can be justified by referring to the derivations of [
11]. Reference [
11] explicitly derives the connection between the north angle error and the east accelerometer bias, and between the east angle error and the north accelerometer bias, grouping the two couples in two of the six unobservable subsets in ZUPT calibration. Without loss of generality, for a quasi-leveled flight with small pitch and roll angles (which holds true in quadrotor flight avoiding aggressive flight conditions), one can extend the dependencies found in [
11] to the BRF. Using the rotation matrix, one can find two couples (i.e., subsets) of linearly dependent errors: the pitch error and the accelerometer bias along the first axis of the BRF, i.e., $b_a^b(1)$, and the roll error and the accelerometer bias along the second axis of the BRF, i.e., $b_a^b(2)$. Therefore, any attempt to improve pitch accuracy (e.g., reducing Δχ from 70° to 20°) will reduce the error of one of the elements of the first subset. Indeed, comparison between
Figure 5 and
Figure 7 shows a reduction of the $b_a^b(1)$ RMS value and an increase of the $b_a^b(2)$ RMS value. This is further highlighted by the increased covariance of $b_a^b(2)$ with respect to the previous case. Conversely, the gyroscope biases vary only slightly with respect to the
Figure 5 case. Indeed, the horizontal gyroscopes' biases mostly depend on the position covariance, while the bias of the gyroscope along the third axis is proportional to the heading error [
11].
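A compact sketch of this coupling, under the quasi-leveled, small-angle assumptions stated above, is the following: with $\hat{\mathbf{f}}^n \approx [0,\ 0,\ -g]^T$, the linearized specific-force errors read
$$\delta f^b(1) \approx g\,\delta\theta + b_a^b(1), \qquad \delta f^b(2) \approx -g\,\delta\phi + b_a^b(2),$$
so that only the two sums are observable from the specific-force residuals, and any accuracy gained on one element of a subset is reflected in its companion.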
When μ increases up to 90°, cooperative measurements have more impact on the horizontal plane angles (pitch and roll) than on the heading. Results obtained using Δχ = 70° and increasing μ up to 60° are reported in
Figure 9 and
Figure 10 for IMU biases and angular errors, respectively.
Comparing these results with those reported in
Figure 5 and
Figure 6 (i.e., with the same value of Δχ, but with a null μ), one can notice an increase of the heading RMS and covariance, with a reduction of the horizontal angles' covariances and RMS. Conversely, an improvement in the estimation of the horizontal accelerometer biases is provided. Additionally, the down gyroscope bias estimation error, i.e., that of $b_g^b(3)$, slightly increases, due to the increase of the heading error and its covariance.
To highlight the benefit of having a large baseline,
Table 2 compares the results obtained with the same formation geometries described before, but with a distance between deputies and chief, i.e.,
r, reduced from 100 to 40 m.
5.2. One Deputy
The relative geometry between the chief and the single deputy (if constant in time) can be defined by referring to
Figure 4, with Δχ = 0. In this case, the position of the deputy UAV coincides with the center of the formation and only χ, μ, and
r are needed to uniquely identify the relative formation geometry.
When only one deputy is available, the measurements provided to the filter, i.e., CDGNSS/vision residuals for attitude, and GNSS observables for position, do not give enough information to ensure full observability of the filter state. As an example, the results obtained by routing the chief UAV along the trajectory reported in
Figure 3, and assuming a deputy with a fixed relative geometry with χ = 0°, μ = 0°, and
r = 100 m (i.e., deputy along the roll direction) are shown in
Figure 11 and
Figure 12, for IMU biases and angular errors, respectively. Since cooperative aiding is effective only in the directions orthogonal to the LOS, the roll angle error increases due to unobservability, as does the error on the accelerometer bias along the second axis, as a consequence of the dependence between these two variables demonstrated in [
11]. However, both unobservable variables remain well within their 3σ bounds.
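The unobservable direction can be sketched as follows: since, to first order (and up to frame and sign conventions), the residual of a unit vector responds to an attitude error through the cross product with the LOS,
$$\Delta\mathbf{u} \approx [\hat{\mathbf{u}}\,\times]\,\boldsymbol{\rho}, \qquad [\hat{\mathbf{u}}\,\times]\,\boldsymbol{\rho} = \mathbf{0} \;\; \text{for} \;\; \boldsymbol{\rho} \parallel \hat{\mathbf{u}},$$
the component of the attitude error parallel to the LOS lies in the null space of $[\hat{\mathbf{u}}\,\times]$; with the deputy along the roll axis, that component is the roll error itself.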
To cope with these observability challenges in the case of a single deputy, different strategies can be proposed. In particular, relative geometry variation and accelerated chief motion are analyzed in the following.
Section 5.2.1 reports the results with improved chief dynamics, whereas
Section 5.2.2 reports the results obtained by making the relative geometry change while the chief is routed along a straight line.
5.2.1. Accelerated Dynamics of the Chief Vehicle
In this section, it is assumed the chief moves along a zig-zag path, whose top view is depicted in
Figure 13. The chief (UAV 1 in the figure) always points east, with a 90° heading angle. The trajectory of the deputy (UAV 2) is also depicted in the figure. The deputy moves along a straight line with a 90° heading angle. The two vehicles fly at a constant altitude of 20 m. Cooperative filter results in terms of IMU biases and angular errors are reported in
Figure 14 and
Figure 15.
Choosing a zero elevation (vehicles flying at the same altitude) allows the cooperative measurements to give a significant contribution to heading angle estimation. Indeed, due to the small heading angle covariance of the CDGNSS/vision measurements, the magnetometer measurements are essentially discarded by the filter, allowing the heading estimate to be debiased. As far as the accelerometer biases are concerned, after the initial excursion (particularly evident for the bias along the first axis), the relative geometry variation and the chief dynamics improve the state observability, providing a very accurate estimation of the accelerometer biases.
5.2.2. Relative Geometry Variation
Relative geometry variation allows the LOS direction to change during the motion, which introduces spatial diversity in the measurements and is useful to tackle the observability challenges which characterize a constant relative geometry. Three different relative geometries have been taken into account in this section, assuming the chief vehicle always flies along a straight line:
1. Chief and deputy vehicles move along the trajectories of UAV 2 and UAV 1 reported in Figure 13, respectively (i.e., they are inverted with respect to Section 5.2.1);
2. The chief flies along the quasi-straight-line path reported in Figure 3 while continuously rotating its heading with a 100 s period, starting from an initial heading angle ψ = 90°. The deputy moves along a trajectory parallel to the chief's, defined with r = 100 m, μ = 12°, and χ = 30° at the initial point of the trajectory;
3. The chief flies along the quasi-straight-line path reported in Figure 3 with a constant heading angle equal to 90°. The deputy is steady and its NED position vector is [−100 m, −20 m, −40 m]^T.
Cases 2 and 3 assume a large camera FOV, which can be achieved with omnidirectional [
38] or multiple-camera systems mounted on the chief platform. For the sake of brevity, only bias results are reported in the following subsections, whereas angular RMS errors are indicated in the text.
Results of case 1 are reported in
Figure 16. Angular RMS errors are 0.05, 0.08, and 0.08 degrees for the heading, pitch, and roll angles, respectively. Accelerometer bias estimation outperforms that obtained when the two UAVs invert their trajectories (presented in the previous section and reported in
Figure 14), demonstrating that, more than ownship dynamics, relative geometry variation plays the significant role in cooperative bias estimation. The first-axis bias still presents large excursions in the first epochs, before convergence is reached once a set of measurements with satisfactory spatial diversity has been acquired.
Results obtained by changing the heading of the chief, i.e., case 2, are reported in
Figure 17. Differently from the other cases, heading rotation negatively impacts the gyroscope bias estimation along the horizontal axes if no cooperative measurements are provided. On the other hand, using cooperation allows both the accelerometer and gyroscope measurements to be debiased. Attitude RMS errors obtained using cooperation are 0.06°, 0.08°, and 0.09° for heading, pitch, and roll, respectively.
As concerns case 3, observability of the full state has been achieved by providing spatial diversity while making the chief UAV fly along the trajectory reported in
Figure 3 and assuming a steady deputy UAV. This scenario can also model the case in which ground GNSS antennas are used as fixed deputies. Results are reported in
Figure 18. This formation geometry provides the least advantage with respect to the solutions presented before, because enough spatial diversity in the measurements is obtained only after a long time (i.e., 200 s). At that time, the relative azimuth variation between the chief and the deputy vehicle is about 30°, which provides sufficient spatial diversity to make the biases converge. Before this time interval, the results present a very inaccurate accelerometer bias estimation; however, the RMS value reduces to [13.0, 7.6, 0.11] × 10⁻³ m/s² if estimated after this interval.
A solution which allows improving the bias prediction performance when a steady deputy is used consists of providing a null elevation between the chief and the deputy, so that the heading direction (which is the most inaccurate, since it is based on biased magnetometer estimates) is always observable with cooperative measurements.
Figure 19 shows the results obtained with a steady deputy having the same horizontal position as in case 3 (
Figure 18), but the same altitude as the chief vehicle, so that the deputy NED position is [−100 m, −20 m, −20 m]^T.
6. Experimental Set-Up and Results
The efficiency of the proposed method for IMU bias estimation has been tested on experimental data, acquired at a model aircraft airfield. The data acquisition setup is composed of two DJI™ M100 drones and a Trimble antenna. The flight has been carried out by remotely piloting the two drones, which are shown in
Figure 20. The drones, named Eagle and Athena, have each been equipped with an onboard computer, a CCD camera, and an additional GNSS receiver with raw data capability. The latter is required due to the impossibility of reading GNSS raw data directly from the DJI autopilot telemetry.
Eagle's onboard computer is an Intel NUC™ with an i7 CPU running Ubuntu 14.04. It carries a PointGrey Flea™ FL3-U3-20E4C-C CCD camera (with a 1600 × 1200 pixel resolution, a maximum frame rate of 59 fps, and an IFOV of about 0.030°) and a uBlox™ LEA-M8T single-frequency, multi-constellation GNSS receiver with raw measurement capabilities.
Athena has been equipped with an onboard computer (an Intel NUC™ with an i5 CPU running Ubuntu 16.04), a CCD camera, i.e., a PointGrey Blackfly™ BFLY-U3-50H5C-C (with a 2448 × 2048 pixel resolution, a maximum frame rate of 7.5 fps, and an IFOV of about 0.022°), and the same GNSS receiver with raw data capability embarked on Eagle, i.e., a uBlox™ LEA-M8T.
uBlox™ receivers have been set with both GPS and Galileo capability, whereas only GPS data were available at the Trimble ground antenna. As
Figure 20 shows, the uBlox™ antenna has been mounted symmetrically to the DJI one on each drone. Both the DJI and uBlox antennas have been placed on a carbon fiber rod higher than the DJI default, to avoid possible interference with the onboard computer.
Data acquisition software capable of retrieving DJI autopilot and IMU, camera, and raw GNSS data has been developed in ROS (robot operating system). Using ROS allows easy time-tagging and synchronization of the acquired data using custom and already developed (DJI™ and Pointgrey™ proprietary) ROS nodes. A custom-made node was developed in C++ to acquire uBlox™ raw data [
39] in a user-readable format. Camera calibration has been performed indoors using the Kalibr software [
40].
The data acquired during the flight campaign have been processed offline within a MATLAB® implementation of the cooperative navigation filter reported in
Section 4, assuming Eagle as the chief vehicle and Athena and the Trimble antenna as the two deputies. Accurate 3D positions of the GNSS satellites have been calculated using the multi-constellation broadcast ephemeris file in a customized version of the RTKLIB software [
41], able to provide multi-constellation satellite positions and pseudoranges corrected for ionospheric and tropospheric errors. CDGNSS baselines have been retrieved with the “kinematic” mode of the RTKLIB software [
41], using GNSS raw data acquired on board the chief and the two deputies. As concerns camera information, several techniques have been developed by the authors in the framework of cooperative detection, e.g., using deep learning [
42]. In this paper, the focus being set on the cooperative filter, this strategy complemented with a supervised approach has been used to acquire the camera data, i.e., the pixels of the deputies’ centers. Camera and CDGNSS STDs have been retrieved from the camera specifications (i.e., the IFOV) and from the resulting STD of the RTKLIB “kinematic” solution, respectively. The IMU parameters needed to define the process noise covariance matrix, i.e., the velocity and angular random walks and the gyroscope and accelerometer bias instabilities, have been derived from an IMU calibration based on Allan variance analysis performed with the Kalibr software [
40].
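For reference, a minimal MATLAB sketch of the overlapping Allan deviation computation underlying such a calibration is reported below (the actual analysis was performed with Kalibr); function and variable names are illustrative.

```matlab
function [tau, adev] = allan_dev(x, fs)
% Overlapping Allan deviation of a raw IMU channel x sampled at fs Hz.
theta  = cumsum(x(:)) / fs;                            % integrated signal
N      = numel(x);
m_list = unique(round(logspace(0, log10(N/3), 50)));   % cluster sizes
tau    = m_list / fs;                                  % averaging times
adev   = zeros(size(tau));
for k = 1:numel(m_list)
    m = m_list(k);
    d = theta(1+2*m:N) - 2*theta(1+m:N-m) + theta(1:N-2*m);
    adev(k) = sqrt( sum(d.^2) / (2 * tau(k)^2 * (N - 2*m)) );
end
end
```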
Figure 21a shows an image taken during the flight where the three platforms (two deputies and one chief) are highlighted. A flight image taken by the chief vehicle including both the deputies is reported in
Figure 21b. Both one deputy and two deputies cases are analyzed.
The two deputies case uses Eagle as the chief UAV, a flying deputy (the Athena UAV), and a ground antenna (Trimble) as a surrogate deputy. The paths of the three “vehicles”, estimated by the uBlox receivers for the two drones and by RTKLIB processing for the Trimble antenna, are reported in
Figure 22.
Figure 22a shows the latitude-longitude coordinates of the paths reported on a satellite image, whilst east-north-up (ENU) coordinates are reported in
Figure 22b, where top and 3D views are given. These paths are relevant to a limited segment (from 334 to 449 s) of the entire dataset acquired during the flight campaign, where both deputies are within the field of view of the chief’s camera.
The one deputy case uses Eagle as the chief vehicle and Athena as the deputy vehicle, exploiting proper dynamics of the two platforms. Specifically, Athena holds an almost steady position whilst Eagle rotates around it, changing its heading with the aim of always keeping the deputy in its camera FOV. The horizontal acceleration of the chief and the variation of the chief-deputy LOS in BRF both provide benefits to inertial bias observability.
Figure 23 reports the trajectory of the two vehicles in latitude-longitude coordinates (
Figure 23a) and in top and 3D view (
Figure 23b). Eagle performs a circle around Athena in the time interval going from 476 to 551 s of the acquired dataset.
To have a reference for the estimated accelerometer and gyroscope biases, a ZUPT filter has been used for the first 70 s of the test, where the Eagle platform has been kept in static conditions. The ZUPT filter uses inertial equations for propagation and corrects the state by informing the filter that zero velocity is experienced. To guarantee observability of the third component of the gyroscope bias and of the heading angle, the ZUPT filter used in this paper also uses the magnetometer measurement in the correction step. However, the horizontal accelerometer biases, $b_a^b(1)$ and $b_a^b(2)$, are unobservable by the ZUPT filter: their estimated covariance remains far higher than the estimated bias values, so they cannot be used as a benchmark to evaluate the effectiveness of cooperation. Therefore, only benchmarked values (i.e., the three gyroscope biases and the down accelerometer bias) are reported in
Figure 24.
Figure 24 shows the IMU biases estimated with the navigation filter reported in
Section 4.
Figure 24a,b depicts the results obtained considering trajectories reported in
Figure 22 (two deputies case) and
Figure 23 (one deputy case), respectively. As in the previous section, results with and without cooperation are reported with black and blue lines. Reference values, obtained as the values to which the ZUPT filter converges, are also reported in red and enclosed within the gray 3σ bound.
In the two deputies case, the chief UAV is always looking at the deputies and both multirotors exhibit a limited motion. This results in a scenario with constant geometry and variable range, with the two deputies having a very small Δχ and a χ near 0°. In the single deputy case, the deputy is always within the chief’s FOV, providing a relative azimuth variation of about 40°.
In both cases, cooperative estimation allows rapidly estimating the gyroscope biases and yields a significant advantage with respect to the non-cooperative filter, especially on the down-axis gyroscope bias estimation, which would otherwise be negatively impacted by the biased magnetometer estimate. The down accelerometer bias oscillates around the true value within the covariance bound for both the cooperative and non-cooperative results.