GPS / MV based Aerial Refueling for UAVs
Marco Mammarella,1 Giampiero Campa,2 Marcello R. Napolitano,3 and Brad Seanor4
Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV 26506-6106
Mario L. Fravolini 5
Department of Electronic and Information Engineering, University of Perugia, 06100 Perugia, Italy
and
Lorenzo Pollini6
Department of Electrical Systems and Automation, University of Pisa, 56126 Pisa, Italy
This paper describes the design of a simulation environment for a GPS / Machine Vision
(MV)-based approach to the problem of Aerial Refueling (AR) for Unmanned Aerial
Vehicles (UAVs) using the USAF refueling method. MV-based algorithms are implemented
within this effort as a smart sensor to detect the relative position and orientation
between the UAV and the tanker. Within this effort, techniques and algorithms for the
visualization of the tanker aircraft in a Virtual Reality (VR) setting, for the acquisition of the
tanker image, for the Feature Extraction (FE) from the acquired image, for the Point
Matching (PM) of the features, and for the tanker-UAV Pose Estimation (PE) have been
developed and extensively tested in closed-loop simulations. Detailed mathematical models of
the tanker and UAV dynamics, refueling boom, turbulence, wind gusts, and tanker's wake
effects, along with the UAV docking control laws and reference path generation, have been
implemented within the simulation environment. Mathematical models of the noise produced
by the GPS, MV, INS, and pressure sensors are also derived. This paper also presents an
Extended Kalman Filter (EKF) used for the sensor fusion between the GPS and MV systems.
Results on the accuracy achieved in the estimation of the relative position are also provided.
Nomenclature
B = center of the 3-Dimensional Window (3DW) placed in the boom system
C = body-fixed UAV camera reference frame
E = earth-fixed reference frame
P = earth-fixed reference frame having the x axis aligned with the planar component of the aircraft velocity vector
p = angular velocity in the x direction in the body reference frame
q = angular velocity in the y direction in the body reference frame
R = receptacle point placed on the UAV
r = angular velocity in the z direction in the body reference frame
T = body-fixed tanker reference frame located at the tanker center of gravity (CG)
T = homogeneous transformation matrix
U = body-fixed UAV reference frame located at the UAV CG
u = horizontal component in images
V = aircraft velocity in the stability axes
v = vertical component in images
x = x direction in a 3D reference frame
y = y direction in a 3D reference frame
z = z direction in a 3D reference frame
α = angle of attack
β = sideslip angle
ψ = angle between the x axes of the body reference frame and the earth reference frame
θ = angle between the y axes of the body reference frame and the earth reference frame
φ = angle between the z axes of the body reference frame and the earth reference frame
μ = mean
σ = standard deviation

1 Ph.D. Student.
2 Research Assistant Professor.
3 Professor.
4 Research Assistant Professor, AIAA Member.
5 Research Assistant Professor.
6 Research Assistant Professor, AIAA Member.

American Institute of Aeronautics and Astronautics
I. Introduction
One of the biggest current limitations of Unmanned Aerial Vehicles (UAVs) is their lack of aerial refueling (AR)
capabilities. The effort described in this paper relates to the US Air Force refueling boom system, with the general
goal of extending the use of this system to the refueling of UAVs 1,2. For this purpose, a key issue is the need for
accurate measurements of the 'tanker-UAV' relative position and orientation from the 'pre-contact' to the
‘contact’ position and during the refueling. Although sensors based on laser, infrared radar, and GPS technologies
are suitable for autonomous docking 3, there might be limitations associated with their use. For example, the use of
UAV GPS signals might not always be possible since the GPS signals may be distorted by the tanker airframe.
Therefore, the use of Machine Vision (MV) technology has been proposed in addition - or as an alternative - to these
technologies 4. Modeling and control issues related to the introduction of a MV position sensing system were
discussed in 5,6 and 7, for the “Probe and Drogue” refueling system. Specific algorithms suitable for aerospace MV
systems were also discussed in 8, within the context of close proximity operations of aerospace vehicles, and in 9,
within the context of autonomous navigation and landing of aircraft. The applications of MV algorithms for the
more general problem of the orientation estimation of a target object are described in 10 and 11.
Within the boom-based approach for the UAVs aerial refueling, the control objective is to guide the UAV within
a defined 3D Window (3DW) below the tanker where the boom operator can then manually proceed to the docking
of the refueling boom followed by the refueling phase. Control issues related to this approach were investigated
in 12, while the development of a GPS-based, operator-in-the-loop simulation environment was discussed in 13 and 14.
A MV-based system to sense the UAV-Tanker relative position and orientation assumes the availability of a
digital camera - installed on the UAV - providing the images of the tanker, which are then processed to solve a pose
estimation problem, leading to the real-time estimates of the relative position and orientation vectors. These vectors
are used for the purpose of guiding the UAV from a “pre-contact” to a “contact” position. Once the UAV reaches
the contact position, the boom operator takes over and manually proceeds to the refueling operation.
For the purpose of an accurate evaluation, the simulation environment has to be detailed and flexible enough to
simulate all the involved subsystems simultaneously, which in turn allows studying the interactions among the
different MV algorithms within the UAV feedback loop.
The main contribution of this paper is the description of a simulation environment for the GPS / MV-based
Aerial Refueling of UAVs. This environment features detailed mathematical models for the tanker, the UAV, the
refueling boom, the wake effects, the atmospheric turbulence, and the GPS, MV, INS and pressure sensors noise.
The simulation interacts with a Virtual Reality (VR) environment by moving visual 3D models of the aircraft in a
virtual world and by acquiring a stream of images from the environment. Images are then processed by a MV sensor
block, which includes algorithms for Feature Extraction (FE), Point Matching (PM), and Pose Estimation (PE). The
position and orientation information coming from the MV and GPS sensors is then processed by an EKF for
sensor fusion purposes and used by the UAV control laws to guide the aircraft during the docking maneuver and to
maintain the UAV within the 3D window during the refueling phase. The paper is organized as follows. The AR
problem is formally described in the next section. Then, the modeling of the tanker, UAV, boom, wake effects and
turbulence are summarized and a description of the 3D graphical modeling of the objects used within the Virtual
Reality subsystem of the simulation environment is given.
The following sections are dedicated to the description of the main components of the Machine Vision system,
respectively the Feature Extraction (FE), the Feature Matching (FM), and the Pose Estimation (PE) algorithms.
The sensor modeling, the EKF sensor fusion system, and the tracking and docking control laws are then considered
in the subsequent sections. Finally, simulation results are presented.
II. The GPS / MV-based AR Problem
A. Reference frames and Notation
The study of the AR problem requires the definition of the following Reference Frames (RFs):
• ERF, or E: earth-fixed reference frame.
• PRF, or P: earth-fixed reference frame having the x axis aligned with the planar component of the
tanker velocity vector.
• TRF or T: body-fixed tanker reference frame located at the tanker center of gravity (CG).
• URF or U: body-fixed UAV reference frame located at the UAV CG.
• CRF or C: body-fixed UAV camera reference frame.
Within this study, geometric points are expressed using homogeneous (4D) coordinates and are indicated with a
capital letter and a left superscript denoting the associated reference frame. Vectors are denoted by two uppercase
letters, indicating the two points at the extremes of the vector. The transformation matrices are (4 x 4) matrices
relating points and vectors expressed in an initial reference frame to points and vectors expressed in a final reference
frame. They are denoted with a capital T with a right subscript indicating the “initial” reference frame and a left
superscript indicating the “final” reference frame.
B. Geometric Formulation of the AR Problem
The objective is to guide the UAV such that its fuel receptacle (point R) is "transferred" to the center of a
3-dimensional window (3DW, also called "Refueling Box", point B) under the tanker. It is assumed that the boom
operator can take control of the refueling operations once the UAV fuel receptacle reaches and remains within this
3DW. It should be emphasized that point B is fixed within the TRF; also, the dimensions of the 3DW δx, δy, δz
are known design parameters. Since the true operational values for the dimension of the 3DW are not available in
the technical literature, the authors assumed some arbitrary values. It is additionally assumed that the tanker and the
UAV can share a short-range data communication link during the docking maneuver. Furthermore, the UAV is
assumed to be equipped with a digital camera along with an on-board computer hosting the MV algorithms
acquiring the images of the tanker. Finally, the 2-D image plane of the MV is defined as the ‘y-z’ plane of the CRF
using the pinhole model.
C. Receptacle-3DW-center vector
The reliability of the AR docking maneuver is strongly dependent on the accuracy of the measurement of the
vector PBR, that is the distance vector between the UAV fuel receptacle and the center of the 3D refueling window,
expressed within the PRF:
${}^{P}BR = {}^{P}T_{T}\,{}^{T}B - {}^{P}T_{U}\,{}^{U}R = {}^{P}T_{T}\,{}^{T}B - {}^{P}T_{T}\,{}^{T}T_{C}\,{}^{C}T_{U}\,{}^{U}R$   (1)

In the above relationship, both ${}^{U}R$ and ${}^{T}B$ are known and constant parameters since the fuel receptacle (point R)
and the 3DW center (point B) are located at fixed and known positions with respect to the UAV and tanker frames
respectively. The transformation matrix CTU represents the position and orientation of the CRF with respect to the
URF; therefore, it is also known and assumed to be constant. The transformation matrix PTT represents the position
and orientation of the tanker respect to PRF, which are measured on the tanker and broadcasted to the UAV through
the data communication link. In particular, if the sideslip angle β of the tanker is negligible then PTT only depends on
the tanker roll and pitch angles. Finally, ${}^{T}T_{C}$ is the inverse of ${}^{C}T_{T}$, which can be evaluated either "directly" - that is,
using the relative position and orientation information provided by the MV system - or "indirectly" - that is, by using
the formula ${}^{C}T_{T} = {}^{C}T_{U}\left({}^{E}T_{U}\right)^{-1}{}^{E}T_{T}$, where the matrices ${}^{E}T_{U}$ and ${}^{E}T_{T}$ can be evaluated using information from the
position (GPS) and orientation (gyros) sensors of the tanker and UAV respectively.
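The composition of homogeneous transformations in Eq. (1) can be sketched as follows; the matrices and point coordinates below are illustrative placeholders, not the actual tanker/UAV geometry.

```python
import math

def rot_z(psi):
    """4x4 homogeneous rotation about the z axis (yaw)."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translation(tx, ty, tz):
    """4x4 homogeneous translation."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def mat_mul(A, B):
    """Multiply a 4x4 matrix by another 4x4 matrix or a 4-element point."""
    if isinstance(B[0], list):
        return [[sum(A[i][k] * B[k][j] for k in range(4))
                 for j in range(len(B[0]))] for i in range(4)]
    return [sum(A[i][k] * B[k] for k in range(4)) for i in range(4)]

def receptacle_to_3dw(P_T_T, B_T, P_T_U, R_U):
    """Eq. (1): PBR = PTT * TB - PTU * UR, in homogeneous coordinates."""
    B_P = mat_mul(P_T_T, B_T)   # 3DW center expressed in PRF
    R_P = mat_mul(P_T_U, R_U)   # receptacle expressed in PRF
    return [B_P[i] - R_P[i] for i in range(3)]  # drop homogeneous coordinate
```

The same helpers compose the "indirect" chain of transformations, since each frame change is just one more matrix product.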
III. Aircraft, Boom and Turbulence modeling
A. Modeling of the tanker and UAV systems
The nonlinear aircraft models of the UAV and tanker have been developed using the conventional modeling
procedures and conventions outlined in 15 and 16. Specifically, a nonlinear model of a Boeing 747 aircraft 17 with
linearized aerodynamics was used for the modeling of the tanker. A similar nonlinear model was used for the
modeling of the UAV. The selected UAV dynamics is relative to a concept aircraft known as "ICE-101" 18. A
conventional state variable modeling procedure was used for both aircraft, leading to the state vector:

$[V, \alpha, \beta, p, q, r, \psi, \theta, \varphi, x, y, z]^{T}$   (2)
where V ,α , β represent the aircraft velocity in the stability axes; p, q, r are the components of the angular
velocity in the body reference frame while ψ ,θ , ϕ , x, y, z represent the aircraft orientation and position with respect
to ERF. First order responses, together with transport delays, angular position, and angular rate limiters have been
used for the modeling of the actuators of the different control surfaces. Steady state rectilinear conditions (Mach =
0.65, H = 6,000 m) are assumed for the refueling. The tanker autopilot system is designed using LQR-based control
laws 19. The design of the UAV control laws is outlined in one of the following sections.
B. Modeling of the boom
A detailed mathematical model of the boom was developed to provide a realistic simulation from the boom
operator point of view. A joystick block for boom maneuvering was also added to the simulation environment.
The dynamic model of the boom has been derived using the Lagrange method 20,21:

$\dfrac{d}{dt}\dfrac{\partial L(q,\dot{q})}{\partial \dot{q}_{i}} - \dfrac{\partial L(q,\dot{q})}{\partial q_{i}} = F_{i}, \quad i = 1,\dots,n$   (3)

where $L(q,\dot{q}) = T(q,\dot{q}) - U(q)$ is the Lagrangian, that is, the difference between the boom kinetic and potential
energy, and q is the vector of Lagrangian coordinates, defining the position and orientation of the boom elements.
Since the inertial and gravitational forces are included in the left-hand side of (3), Fi only represents the active forces
(wind and control forces) acting on the boom. More details regarding the active and passive joints can be found in 35.
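As an illustration of Eq. (3), a single rigid link pivoting at its attachment point (a drastic simplification of the actual multi-joint boom; the mass, length, and torque values are hypothetical) yields pendulum-like dynamics that can be integrated numerically:

```python
import math

def boom_link_accel(theta, torque, m=50.0, l=6.0, g=9.81):
    """Angular acceleration from Eq. (3) for one rigid link: the Lagrange
    equation m*l^2*theta'' + m*g*l*sin(theta) = torque, with theta measured
    from the straight-down position. All parameter values are illustrative."""
    return -(g / l) * math.sin(theta) + torque / (m * l * l)

def simulate(theta0, steps=2000, dt=0.005, torque=0.0):
    """Semi-implicit Euler integration of the single-link dynamics."""
    theta, omega = theta0, 0.0
    history = []
    for _ in range(steps):
        omega += boom_link_accel(theta, torque) * dt
        theta += omega * dt
        history.append(theta)
    return history
```

With zero applied torque the link simply oscillates about the vertical, as expected from the conservative left-hand side of Eq. (3).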
C. Modeling of the atmospheric turbulence and wake effects
The atmospheric turbulence on the probe system and on both the tanker and the UAV aircraft has been modeled
using the Dryden wind turbulence model 16,22 at light/moderate conditions.
An experimental investigation was conducted by Bihrle Applied Research Lab in the Langley Full Scale Tunnel
to collect the data necessary to model the effects of the wake of a KC-135 tanker on the aerodynamics of a similar
scale ICE101 aircraft in a refueling scenario 23,24. The perturbations to the UAV aerodynamic coefficients
CD , CL , Cm , CY , Cl , Cn due to the presence of the tanker were then isolated and made available to WVU
researchers as a collection of 4 different 3D lookup tables, each expressing the tanker-induced forces and moments
on the UAV for a certain range of angle of attack, lateral, and vertical tanker-UAV distance, and for a specific value
of longitudinal tanker-UAV distance. Additional details about the modeling of the atmospheric turbulence and wake
effect during the AR maneuver can be found in 35.
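In simulation, tables of this kind are queried at the current relative position; a minimal trilinear-interpolation sketch (the grid and values below are placeholders, not the wind-tunnel data) is:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b with fraction t in [0, 1]."""
    return a + (b - a) * t

def trilinear(table, axes, query):
    """Trilinear interpolation of a 3D lookup table on a regular grid.
    table[i][j][k] holds the coefficient perturbation at grid point
    (axes[0][i], axes[1][j], axes[2][k]); the query is clamped to the grid."""
    idx, frac = [], []
    for ax, q in zip(axes, query):
        q = min(max(q, ax[0]), ax[-1])       # clamp to the table range
        i = 0
        while i < len(ax) - 2 and ax[i + 1] <= q:
            i += 1
        idx.append(i)
        frac.append((q - ax[i]) / (ax[i + 1] - ax[i]))
    i, j, k = idx
    tx, ty, tz = frac
    # interpolate along z, then y, then x
    c = [[lerp(table[i + di][j + dj][k], table[i + di][j + dj][k + 1], tz)
          for dj in (0, 1)] for di in (0, 1)]
    c0 = lerp(c[0][0], c[0][1], ty)
    c1 = lerp(c[1][0], c[1][1], ty)
    return lerp(c0, c1, tx)
```

One such lookup per perturbed coefficient, evaluated at the current angle of attack and lateral/vertical separation, reproduces the table-driven wake model described above.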
IV. Virtual Reality Scenery and Image Acquisition
The simulation outputs were linked to a Virtual Reality Toolbox® (VRT) 25 interface for providing the typical
scenarios associated with the refueling maneuvers. This interface allows the positions of the UAV, tanker, and
boom within the simulation to drive the position and orientation of the associated objects in the Virtual World (VW).
The VW consists of a VRML file 26 including visual models of the landscape, tanker, UAV, and boom. Several
objects including the tanker, the landscape and different parts of the boom were originally modeled using 3D Studio
and later exported to VRML. Every object was scaled according to its real dimensions. A B747 model was re-scaled
to match the size of a KC-135 tanker while a B2 model was rescaled to match the size of the ICE 101 aircraft. Eight
different viewpoints were made available to the user, including the view from the UAV camera and the view from
the boom operator. The latter allows the simulator to be used as a boom station simulator if so desired. The
simulation main scheme also features a number of graphic user interface (GUI) menus allowing the user to set a
number of simulation parameters including initial conditions of the AR maneuver, level of atmospheric turbulence,
location of the camera on the UAV and its orientation within the UAV body frame and location of the fuel
receptacle on the UAV.
From the VW, images of the tanker as seen from the UAV camera are continuously acquired and processed
during the simulation. Specifically, after the images are acquired, they are scaled and processed by a corner
detection algorithm. The corner detection algorithm finds the 2D coordinates on the image plane of the points
associated with specific physical corners and/or features of the tanker.
V. The Feature Extraction algorithm
The performances of two specific feature extraction algorithms for the detection of corners in the image were
compared in a previous effort 27. The Harris corner detector 28, 29 was selected for this study. This method is based on
the analysis of the matrix of the intensity derivatives, also known as “Plessey operator” 28, which is defined as
follows:

$M = \begin{bmatrix} B_{X}^{2} & B_{XY} \\ B_{YX} & B_{Y}^{2} \end{bmatrix}$   (4)
where B is the gray level intensity (brightness) of each pixel of the image, and BX, BY, BXY, BYX are its directional
derivatives, determined by convolving the image with the corresponding derivatives of a Gaussian kernel. If at a
certain point both eigenvalues of the matrix M take on large values, then a small change in any direction will cause a
substantial change in the gray level, indicating a corner. Since the explicit computation of the eigenvalues is
computationally expensive, a modified version of the Harris detector was used, based on the "cornerness" function C
proposed by Noble 29:

$C = \dfrac{\det(M)}{\mathrm{tr}(M) + \varepsilon}$   (5)

where det(M) and tr(M) are respectively the determinant and trace of M. The small constant ε is used to avoid a
singular denominator in the case of a rank-zero auto-correlation matrix M. In both the Harris detector 28 and its
variation by Noble 29, a local-maximum search is performed as a final step of the algorithm, retaining only the
corners where the value of C is locally maximal.
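A minimal sketch of the Noble cornerness computation of Eq. (5) is shown below; central differences stand in for the Gaussian derivative kernels and a 3x3 box filter stands in for the Gaussian window, so the constants and filters are illustrative rather than the ones used in the actual detector.

```python
import numpy as np

def noble_cornerness(img, eps=1e-6):
    """Noble cornerness C = det(M) / (tr(M) + eps), Eq. (5), at every pixel."""
    By, Bx = np.gradient(img.astype(float))  # derivatives along rows / columns

    def smooth(a):
        """3x3 box filter accumulating the local auto-correlation entries."""
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1) / 9.0
        return out

    Mxx, Myy, Mxy = smooth(Bx * Bx), smooth(By * By), smooth(Bx * By)
    det = Mxx * Myy - Mxy * Mxy
    tr = Mxx + Myy
    return det / (tr + eps)
```

On a synthetic step image, C peaks where two edges meet and stays near zero along straight edges and in flat regions, which is exactly the discrimination the detector relies on.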
VI. Point Matching algorithm
Once the 2D coordinates of the detected features (which in this case are just corners) on the image plane have
been extracted, the problem is to correctly associate each detected feature with its physical feature/corner on the
tanker aircraft, whose position in the TRF (3D coordinates) is assumed to be known.
Within most of the published works on this topic it is implicitly assumed that all the detected features are also
perfectly identified (matched); in other words, the point-matching problem is not specifically addressed. On the
other hand, it should be clear that significant problems may arise when the perfect matching assumption is violated,
thus leading to potential biased estimations of the tanker-UAV relative position and, ultimately, to tracking errors in
the docking of the UAV.
In what follows, the general approach is to match a subset of detected feature positions $[u_j, v_j]$ to a subset of
estimated feature positions $[\hat{u}_j, \hat{v}_j]$.
A. Projection equations
The subset $[\hat{u}_j, \hat{v}_j]$ is the projection on the camera plane of the estimated feature positions P(j) using the standard
"pin-hole" projection model 30. Specifically, according to the "Pin-Hole" model, given a point 'j' with coordinates
${}^{C}P(j) = [{}^{C}x_j, {}^{C}y_j, {}^{C}z_j, 1]^{T}$ in the CRF frame, its projection into the image plane can be calculated using the
projection equation:

$\begin{bmatrix} \hat{u}_j \\ \hat{v}_j \end{bmatrix} = \dfrac{f}{{}^{C}x_{p,j}} \begin{bmatrix} {}^{C}y_{p,j} \\ {}^{C}z_{p,j} \end{bmatrix} = g\left(f, {}^{C}T_{T}(X) \cdot {}^{T}P(j)\right)$   (6)
where f is the camera focal length, TP(j) are the components of the point P(j) in TRF, which are fixed and known
‘a priori’. CTT(X) is the transformation matrix between camera and tanker reference frames, which is a function of
the current position and orientation vector X:
$X = [{}^{C}x_T, {}^{C}y_T, {}^{C}z_T, {}^{C}\psi_T, {}^{C}\theta_T, {}^{C}\varphi_T]^{T}$   (7)
For point matching purposes, the vector X is assumed to be known. In fact, the camera-tanker distance - i.e. the
first three elements of X - can be estimated from the camera-tanker distance at previous time instants, which provides
a good approximation of the current distance (assuming a sufficiently fast sampling rate for the MV system). The
relative orientation between camera and tanker - that is, the last three elements of X - can be obtained from the yaw,
pitch, and roll angle measurements of both UAV and tanker, as provided by the on-board gyros. In a more general
case, an EKF sensor fusion method between the previous MV estimations and the measurements coming from GPS
and gyros could be used as an estimation of the current relative position and orientation vectors. As described in a
previous section the distance and orientation of the camera in the UAV body frame are assumed to be constant and
known. The modeling of the sensors will instead be described in one of the following sections.
B. The ‘Points Matching’ problem
Once the subset $[\hat{u}_j, \hat{v}_j]$ is available, the problem of relating the points extracted from the camera measurements
to the actual features on the tanker can be formalized as the problem of matching the set of points
$P = \{p_1, p_2, \dots, p_m\}$ - where $p_j = [u_j, v_j]$ is the generic 'to be matched' point from the camera - to the set of
points $\hat{P} = \{\hat{p}_1, \hat{p}_2, \dots, \hat{p}_n\}$, where $\hat{p}_j = [\hat{u}_j, \hat{v}_j]$ is the generic point obtained by projecting the known
nominal corners in the camera
plane. In general, a degree of similarity between two data sets is defined in terms of a cost function or a distance
function derived from general principles such as geometric proximity, rigidity, and exclusion. The best matching is then
evaluated as the result of an optimization process exploring the space of the potential solutions 31. A definition of the
point-matching problem as an assignment problem along with an extensive analysis of different matching algorithms
was performed by some of the authors in a previous effort 32, 33. The algorithm implemented within this effort solves
the problem using a heuristic “mutual nearest point” procedure 36 that uses differences in point positions and
differences in feature color (Hue) and “area” characteristics. The function allows the definition of a maximum range
of variation for each dimension; these ranges define a hypercube around each corner of the set P̂. The distance is
actually computed only if the point pj - along with its area and hue values - lies within one of the hypercubes defined
around each point of the set P̂; otherwise, it is automatically set to infinity. Next, the four dimensions have to be
weighted before calculating the Euclidean distance between P̂ and P. Finally, the choice of the matched points is
based on the proximity criterion. Additional details are available in 36.
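A simplified sketch of the mutual-nearest-point idea follows; it uses only image coordinates and a single distance gate, whereas the actual algorithm 36 also weights the hue and area dimensions, so the gate value and the two-dimensional distance are illustrative.

```python
import math

def mutual_nearest_matches(detected, projected, gate=20.0):
    """Match detected image points to projected nominal corners.
    A pair is accepted only if each point is the nearest neighbour of the
    other and their distance lies within the gate, which plays the role of
    the hypercube defined around each corner of the projected set."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def nearest(p, pts):
        return min(range(len(pts)), key=lambda i: dist(p, pts[i]))

    matches = []
    for i, p in enumerate(detected):
        j = nearest(p, projected)
        if dist(p, projected[j]) <= gate and nearest(projected[j], detected) == i:
            matches.append((i, j))
    return matches
```

Spurious detections with no nearby projected corner fall outside every gate and are left unmatched, which mirrors the "set to infinity" rule described above.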
C. Pose Estimation Algorithm
Following the solution of the matching problem, the information in the set of points P must be used to derive the
rigid transformation relating CRF to TRF 37. Within this study, the Lu, Hager and Mjolsness (LHM) PE Algorithm
was implemented 38,33.
The LHM algorithm formulates the PE problem in terms of the minimization of an object-space collinearity
error. Specifically, given the ‘observed, detected, and correctly matched’ point ‘j’ on the camera plane at the time
instant k, with coordinates $[u_j, v_j]$, let $h_j(k)$ be:

$h_j(k) = [u_j \;\; v_j \;\; 1]^{T}$   (8)

Then, an 'object-space collinearity error' vector $e_j$ - at the time instant k - can be defined as:

$e_j(k) = (I - V_j(k))\,{}^{C}T_{T}(X(k))\,{}^{T}P(j)$   (9)

where

$V_j(k) = \begin{bmatrix} \dfrac{h_j(k)\,h_j^{T}(k)}{h_j^{T}(k)\,h_j(k)} & 0 \\ 0 & 1 \end{bmatrix}$   (10)

The PE problem is then formulated as the problem of minimizing the sum of the squared errors:

$E(X(k)) = \sum_{j=1}^{m} \left\| e_j(k) \right\|^{2}$   (11)
The algorithm proceeds by iteratively improving an estimate of the rotation portion of the pose. Next, the
algorithm estimates the associated translation only when a satisfactory estimate of the rotation is found. This is
achieved by using the collinearity equations:
$\begin{bmatrix} \dfrac{\hat{h}_j\,\hat{h}_j^{T}}{\hat{h}_j^{T}\,\hat{h}_j} - I & 0 \\ 0 & 0 \end{bmatrix} {}^{C}T_{T}\,{}^{T}P(j) = 0$   (12)

where

$\hat{h}_j = [\hat{u}_j \;\; \hat{v}_j \;\; 1]^{T}$   (13)

and $[\hat{u}_j, \hat{v}_j]$ is the projection in the camera plane of the point P(j). It has been shown 38 that the LHM algorithm
is globally convergent. Furthermore, empirical results suggest that the algorithm is also very efficient and usually
converges within 5-10 iterations starting from any range of initial conditions.
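The object-space error of Eqs. (9)-(11) can be sketched as follows, using the equivalent 3-vector form of the projection operator in Eq. (10); the feature coordinates are assumed here to be already transformed into the camera frame, and the numbers in the example are illustrative.

```python
import numpy as np

def collinearity_error(h, P_cam):
    """Object-space collinearity error, Eqs. (9)-(10), for one matched point.
    h = [u, v, 1] is the observed image point; P_cam holds the first three
    components of CTT(X(k)) * TP(j), i.e. the feature in the camera frame."""
    h = np.asarray(h, dtype=float)
    P_cam = np.asarray(P_cam, dtype=float)
    V = np.outer(h, h) / (h @ h)          # line-of-sight projection operator
    return (np.eye(3) - V) @ P_cam

def objective(points):
    """Eq. (11): sum of squared collinearity errors over the matched points."""
    return sum(float(np.dot(e, e)) for e in
               (collinearity_error(h, P) for h, P in points))
```

The error vanishes exactly when the transformed feature is collinear with its observed line of sight, which is the property the LHM iteration drives toward zero.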
VII. Sensors Modeling
A. Modeling of the MV Sensor
The MV system can be considered as a smart sensor providing the relative distance between a known object and
the camera. Therefore, a detailed description of the characteristics of its output signals is critical for the use of this
sensor. The measurements provided by the MV are affected by a Gaussian White Noise with non-zero mean, as
demonstrated in 33. A summary of the output characteristics is provided in Table 1. Since the noises are white and
Gaussian, only the means (μ) and the standard deviations (σ) of the errors in the CRF directions (x, y, z) are required
for their complete statistical description.
      x (m)    y (m)    z (m)
μ    -0.090    0.015   -0.069
σ     0.056    0.060    0.065

Table 1: Statistical Parameters of the MV-Based Position Sensor
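Given Table 1, a simulated MV measurement can be generated by adding non-zero-mean white Gaussian noise to the true relative position; a minimal sketch (the function name and interface are illustrative):

```python
import random

# Means and standard deviations (meters) from Table 1, CRF x, y, z directions.
MV_MU = (-0.090, 0.015, -0.069)
MV_SIGMA = (0.056, 0.060, 0.065)

def mv_measurement(true_xyz, rng=random):
    """Corrupt a true relative position with the white Gaussian, non-zero-mean
    noise that characterizes the MV-based position sensor (Table 1)."""
    return tuple(p + rng.gauss(mu, sigma)
                 for p, mu, sigma in zip(true_xyz, MV_MU, MV_SIGMA))
```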
B. Modeling of the INS Sensor
Both aircraft are assumed to be equipped with Inertial Navigation Systems (INS), which are capable of providing
the velocities and attitudes of the aircraft by measuring its linear accelerations and angular rates. Within the
developed simulation environment, ‘realistic’ INS outputs are simulated by adding a White Gaussian Noise (WGN)
to the corresponding entries of the aircraft state vector. To validate this type of modeling, the noise within the
signals acquired by the INS has been analyzed using the normal probability analysis and the Power Spectral Density
(PSD). This allowed assessing whether such noise could be modeled as white and Gaussian.
The flight data used to validate the modeling of the INS noise were taken from a recent experimental project
involving the flight testing of multiple YF-22 research aircraft models 34. The analysis was performed with a
sampling rate of 10 Hz for all the aircraft sensors. The results for the pitch rate q are shown in Figure 1.
The upper portion of Figure 1 shows the normal probability plot – plotted using the Matlab "normplot"
command - of the simulated noise and of the noise provided by the real sensor. The purpose of this plot is to assess
whether the data could come from a normal distribution. In such a case, the plot is perfectly linear. For the noise
related to the pitch rate channel, the part of the noise close to zero follows a linear trend, implying a normal
distribution. Note that due to some outliers, the tails of the curve corresponding to the real sensor do not follow this
trend. However, the fact that the trend is followed within the central part of the plot – which represents the majority
of the data - validates that this noise can be modeled as a Gaussian process in a certain neighborhood of zero.
A PSD analysis also confirms the hypothesis of white noise. In fact, the lower portion of Figure 1 shows that the
spectrum of the noise from the real sensor, although not as flat as the spectrum of the simulated noise (shown as a
dotted line), is still fairly well distributed throughout the frequency range. Thus, both the normal probability and
PSD analysis confirm that the noise on the IMU q channel measurement can be modeled as a white Gaussian
random vector. Similar conclusions can be achieved for the p and r IMU channels.
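The whiteness hypothesis can also be checked numerically: a flat PSD corresponds to a delta-like autocorrelation, so a sequence can be accepted as white when all non-zero-lag correlations are small. The sketch below uses illustrative thresholds, not the ones used in the paper's analysis.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation at a given lag, normalized by the lag-0 value."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
    return ck / c0

def looks_white(x, max_lag=10, threshold=0.05):
    """Accept the sequence as white when every non-zero-lag correlation is
    below the threshold (the autocorrelation analogue of a flat PSD)."""
    return all(abs(autocorr(x, k)) < threshold for k in range(1, max_lag + 1))
```

A first-order colored process is immediately rejected by this test, since its lag-1 correlation stays close to its filter pole.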
Figure 1 – The normal probability and PSD in the pitch rate (q) in Real and Simulated INS
C. Modeling of the Pressure, Nose probe, Gyro, and Heading Sensors
An air-data nose probe - for measuring flow angles and pressure data - was installed on the UAV. This sensor
provides the measurements of the velocity (V), the angle-of-attack (α), and the sideslip angle (β), while the vertical
gyro provides measurements for the aircraft pitch and roll angles (θ and φ). Within this analysis, the heading was
approximated with the angle of the planar velocity in ERF, that is ψ = atan2(Vy, Vx), where atan2 is the 4-quadrant
arctangent function and the velocities are supplied by the GPS unit and are based on carrier-phase wave information.
However, the heading can also be calculated by gyros, magnetic sensors, or by a filtered combination of all the
above methods.
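The heading approximation above can be written directly as:

```python
import math

def heading_from_velocity(vx, vy):
    """Approximate the heading as the direction of the planar velocity in the
    ERF: psi = atan2(Vy, Vx), using the 4-quadrant arctangent."""
    return math.atan2(vy, vx)
```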
Following a similar analysis to the one performed for the INS, the noise on the measurements from the above
sensors was modeled as White and Gaussian Noise (WGN). Table 2 summarizes the results in terms of noise
variances for the different aircraft dynamic variables.
      V       α      β      p        q        r        ψ      θ      φ
σ²  (m/s)²  (rad)² (rad)² (rad/s)² (rad/s)² (rad/s)² (rad)² (rad)² (rad)²
     2e-1    2e-3   2e-3   2e-2     2e-2     2e-2     2e-3   2e-3   2e-3

Table 2: Variance of the Noise of the Sensors
D. Modeling of the GPS Position Sensor
The GPS sensor provides its position (x, y, z) with respect to the ERF. A composition of four different Band
Limited White Noises was used to simulate the GPS noise. Specifically, the four noises have different power and
sample times. Three of these noise signals are added and filtered with a low-pass filter and the resulting signal is
added to the fourth noise and sampled with a zero-order hold. In fact, GPS measurements - when more than 4
satellite signals are received - normally exhibit a "short term" noise with an amplitude within 2 to 3 meters, as well as
"long term" trend deviations and "jumps" due to satellite motion and occlusions. Therefore, while the "short term"
noise has been modeled as a White Gaussian Noise, the trend deviations and jumps have been modeled using
the other 3 lower-frequency, filtered noises.
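A minimal sketch of this four-component model follows; all powers, update rates, and the filter gain are illustrative guesses, since the actual values tuned against the real receiver data are not reported here.

```python
import random

def simulate_gps_noise(n, dt=0.05, rng=None):
    """Four-component GPS noise sketch: three slow band-limited noises are
    summed and low-pass filtered (trend deviations and jumps), then a fast
    white component is added and the result is sampled with a zero-order
    hold at roughly 1 Hz."""
    if rng is None:
        rng = random.Random(0)
    slow = [0.0, 0.0, 0.0]      # the three low-frequency noise states
    periods = (200, 500, 1000)  # update intervals of the slow noises, samples
    powers = (0.5, 1.0, 1.5)    # standard deviations of the slow noises, m
    lp, alpha = 0.0, 0.02       # first-order low-pass filter state and gain
    hold = round(1.0 / dt)      # zero-order-hold interval (one per second)
    held, out = 0.0, []
    for k in range(n):
        for i, (p, w) in enumerate(zip(periods, powers)):
            if k % p == 0:
                slow[i] = rng.gauss(0.0, w)
        lp += alpha * (sum(slow) - lp)
        if k % hold == 0:
            held = lp + rng.gauss(0.0, 0.8)  # "short term" noise, ~2-3 m
        out.append(held)
    return out
```

The output stays constant between hold instants and drifts slowly through the filtered components, qualitatively reproducing the trend-plus-jump behavior described above.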
Figure 2 - Comparison between Real and Simulated GPS signals
Figure 2 shows both the signal from a real GPS receiver (Novatel-OEM4), and the simulated GPS signal.
VIII. The EKF Sensor Fusion system
Due to typical limitations of camera performance, the MV system can provide reliable results only within a
certain limited distance from the tanker. On the other hand, the GPS signal received by the UAV may be shadowed
or distorted by the tanker airframe when the UAV is near or below the tanker. This could lead to losses in accuracy
and reliability of the GPS-based UAV position measurement.
The use of EKF 19 for sensor fusion is well documented in robotics applications for the fusion of inertial, GPS,
and odometer sensors as described in 39. Within this effort, emphasis was placed on the fusion between data from a
MV-based sensor system and data from the INS/GPS system. In general, sensor fusion applications require the
output function yk = h( xk , vk ) of the dynamic system to be adapted to the number of sensors that the filter has to
combine.
In this case, the output function contains the following variables:

$y_k = [V \;\; \alpha \;\; \beta \;\; p \;\; q \;\; r \;\; \psi \;\; \theta \;\; \varphi \;\; x_{GPS} \;\; y_{GPS} \;\; z_{GPS} \;\; x_{MV} \;\; y_{MV} \;\; z_{MV}]$   (14)
where the subscript GPS indicates measurements from the GPS system while the subscript MV indicates
measurements from the MV system.
The EKF formulation assumes that the measurements are affected by white and Gaussian noise 19. Therefore,
the noises affecting the variables xGPS, yGPS, and zGPS were considered to be white and Gaussian, with variances of
0.014 m², 0.013 m², and 0.022 m², respectively. These values were calculated using the MATLAB "var" command
on a large set of data from the GPS sensor, simulated as described in Section VII.D. The MATLAB "mean" command,
applied to the same set of data, provided results under 2% of the range, which validated the zero-mean assumption.
Similarly, for the MV-based position sensor Table 1 indicates that the mean values of the MV position
measurements can be approximated to be zero.
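As a minimal illustration of this validation step, using Python's statistics module in place of the MATLAB “var” and “mean” commands, and a synthetic record generated with the 0.014 m2 variance quoted above rather than the paper's simulated GPS data:

```python
import random
import statistics

# Synthetic stand-in for the simulated x-axis GPS error record;
# the 0.014 m^2 variance is the value quoted in the text.
random.seed(1)
x_err = [random.gauss(0.0, 0.014 ** 0.5) for _ in range(20000)]

var_x = statistics.variance(x_err)   # counterpart of MATLAB's "var"
mean_x = statistics.fmean(x_err)     # counterpart of MATLAB's "mean"
```

A sample variance close to 0.014 m2 together with a mean near zero supports the white, zero-mean Gaussian assumption.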
The EKF scheme requires 3 specific inputs. The first input is the UAV command vector uk, containing the
throttle level and the deflections of the control surfaces. The second input is the complete system output vector
defined in (14), which includes data from the INS/GPS and the MV sensors. The third and last input is the number
of corners used by the PE algorithm, which is critical since the MV system provides reliable estimates of the relative
position vector only if a sufficient number of corners (greater than 6) are properly detected by the ‘Mutual Nearest
Point’ algorithm. Specifically, the entries of Vk relative to the MV position measurements are multiplied by a factor
of 1000 when the number of detected corners is lower than the required amount. Essentially this causes the
exclusion of the MV information from the sensor fusion process.
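The corner-count gating can be sketched as follows. The helper name and the handling of exactly 6 corners are assumptions; the text specifies reliable estimates for more than 6 detected corners and inflation by a factor of 1000 otherwise.

```python
MIN_CORNERS = 6      # corner-count threshold from the text
INFLATION = 1000.0   # factor applied to the MV entries of Vk

def mv_noise_variances(base_variances, n_corners):
    """Return the MV measurement-noise variances for Vk, inflated when too
    few corners are detected so the EKF effectively ignores the MV data."""
    factor = 1.0 if n_corners > MIN_CORNERS else INFLATION
    return [v * factor for v in base_variances]
```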
The output of the EKF is the estimate x̂k of the system’s state vector xk, which contains the 12 aircraft state
variables. Specifically, the last 3 variables of the EKF output are the estimates of the aircraft position in the ERF.
The values of these variables are the result of the sensor fusion between the data supplied by the two different
position sensor systems.
According to the selected state and output variables, the matrix Hk – the Jacobian matrix of the output function –
becomes a matrix with dimension 15×12 containing the derivatives of the outputs with respect to the states.
Similarly, the matrix Vk – the covariance matrix of the output noise – is a matrix of dimensions 15×15, containing
all the noise covariances, including the ones from the GPS and the MV systems.
Figure 3 - Scheme of EKF for sensors fusion
Figure 3 shows the general Simulink scheme of the EKF, including its different component blocks: the
“Linearization” block, which performs the linearization of the nonlinear system; the “Gain Computation” block,
which calculates the covariance estimate propagation, the filter gain, and the covariance estimate update; and the
“Output Update” block, which calculates the state estimate update as described in 19. The tuning of the EKF
is performed as follows. First, the initial state of the filter is set equal to the state of the UAV system at the time
instant when the EKF is switched on (Table 3 shows typical values of such an initial state). The matrix P0 – the
covariance of the initial state – is then set to zero. Next, the matrix Wk – the covariance of the noise of the state
variables – is kept constant and equal to the 12×12 identity matrix. As previously mentioned, the
matrix Vk varies as a function of the corners detected by the MV system. Specifically, if the number of corners is
greater than 6, the matrix Vk is a diagonal matrix containing the 9 values provided in Table 2, the 3 variances of the
GPS measurements (0.014 m2, 0.013 m2, and 0.022 m2 for the x, y, and z directions respectively), and the 3 variances
related to the distances measured by the MV system, provided in Table 1. Whenever the number of detected corners
is less than 6, the 3 variances related to the MV system are multiplied by 1000 so that these measurements are
practically discarded.
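The prediction/update cycle implemented by the “Gain Computation” and “Output Update” blocks can be sketched in generic discrete-time EKF form as follows. Function names and signatures are illustrative: f and h are the state-transition and output maps, F and H their Jacobians, and W and V the process- and measurement-noise covariances.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F, H, W, V):
    """One generic EKF iteration: covariance propagation, gain computation,
    and state/covariance update (see Ref. 19 for the full derivation)."""
    # Predict: propagate the state estimate and its covariance
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + W
    # Update: filter gain, state estimate update, covariance update
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + V            # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```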
Variable:  V    α      β   p   q   r   ψ   θ      φ   xe     ye   H
Value:     205  0.077  0   0   0   0   0   0.077  0   -58.8  0    6068

Table 3: Typical Initial State Vector
IX. UAV Docking Control Laws
The receptacle position in PRF, that is PR, and the UAV center of mass in ERF, that is EU, are two equivalent
ways to represent the UAV position information, since the following relationship holds:

PR = PTE(ψ0) ETU(ψ, θ, φ, EU) UR   (15)

and since UR, the UAV Euler angles, and the tanker heading angle ψ0 are all known.
An augmented state space model - with respect to the model outlined in (2) - was selected for the UAV:

Z = [V, α, β, p, q, r, ψ, θ, φ, PR, ∫t0t PR dt]T   (16)
10
American Institute of Aeronautics and Astronautics
092407
In the above vector the last six states represent, respectively, the three components of the PR point (that is, the
UAV receptacle) in PRF, and their integral over time. The last 3 states were added to facilitate the synthesis of a
controller capable of zero steady-state tracking error.
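The integral-state augmentation described above can be sketched for a generic linear model (an illustrative construction, not the paper's actual matrices): given x' = Ax + Bu and tracked outputs z = Cz x, the augmented state [x; ∫z dt] obeys:

```python
import numpy as np

def augment_with_integrators(A, B, Cz):
    """Append integrators on the tracked outputs z = Cz @ x, so a controller
    on the augmented state can enforce zero steady-state tracking error."""
    n, m = A.shape[0], B.shape[1]
    p = Cz.shape[0]
    Aa = np.block([[A, np.zeros((n, p))],
                   [Cz, np.zeros((p, p))]])   # d/dt (integral of z) = Cz x
    Ba = np.vstack([B, np.zeros((p, m))])
    return Aa, Ba
```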
The ICE101 features 10 control surfaces 18:

U1-5 = [δTHROTTLE, δAMT_R, δAMT_L, δTEE_R, δTEE_L]T   (17)

U6-10 = [δLEF_L, δLEF_R, δPF, δSSD_L, δSSD_R]T   (18)

where AMT stands for All Moving Tip, LEF for Leading Edge Flap, PF for Pitch Flap, SSD for Spoiler Slot
Deflector, and TEE for Trailing Edge Elevon.
Assuming that the tanker is flying at a straight and level flight condition, with a known velocity V0 and heading
angle ψ0, the center of the refueling window PB(t) is subjected to a rectilinear uniform motion, described by:

PB(t) = [PB1(t0) + V0(t − t0), 0, 0]T   (19)

where PB1(t0) is a known initial condition. The following trajectory in the UAV state space:

Zref(t) = [V0, α0, 0, 0, 0, 0, ψ0, α0, 0, PB(t), ∫t0t PB(t) dt]T   (20)
represents a trim point for the first 9 UAV states. The reference input Uref corresponding to the above reference
trajectory was calculated using a Simulink® trim utility. Since the objective of the UAV control laws is to guide the
UAV so that PR (the fuel receptacle) is eventually “transferred” to the point PB, it is reasonable to assume small
perturbations from the flight condition in (20) during the refueling maneuver. Under this assumption, the UAV
dynamics can be modeled as the linear system resulting from the linearization of the UAV equations about the
reference trajectory in (20):
dZ̃/dt = A Z̃ + B Ũ,   Y = C Z̃ = [PR̃B, ∫t0t PR̃B dt]T   (21)
where the “~” denotes deviation from the reference trajectory, the state space matrices A and B describe the
dynamics of the resulting linear system, and C defines a “performance” output vector containing PRB and its integral
over time.
The design of the UAV docking control laws was then performed using a Linear Quadratic Regulator (LQR)
approach 19. The resulting cost function is expressed as:

J = ∫0∞ (YT Q Y + UT R U) dt   (22)
The elements of the matrix R were selected to approximately balance the control authority among the different
control channels, while an iterative procedure was used to select the values of the elements of Q so that a satisfactory
compromise between tracking error, disturbance rejection, and high-frequency bandwidth attenuation could be
reached. The element of Q that weights the integral of the error along the y direction is smaller because it was
noticed that the control system could keep the error at zero more effectively along that direction than along the other
two. The resulting weighting matrices are given by:
Q = diag([10, 10, 10, 0.1, 0.001, 0.1])
R = diag([0.1, 1000, 1000, 1, 1000, 1000, 1000, 1000, 0.1, 0.1])   (23)
and the resulting LQR control law is given by:

Ũ = −K · Z̃   (24)

where the LQR matrix K is obtained by solving an Algebraic Riccati Equation 19. Following the structure of the
state vector, equation (24) can be decomposed into the following terms:

Ũ = −Kd · Z̃1-9 + Kp P̃BR + Ki ∫t0t P̃BR dt   (25)

where the derivative term Kd is applied to the first 9 elements of the state, and the proportional and integral terms
Kp and Ki are applied respectively to PBR (which is obtained as discussed in the previous sections) and its integral
over time.
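A minimal sketch of computing an LQR gain by solving the Algebraic Riccati Equation, here via the classical Hamiltonian-eigenvector method and applied to an illustrative double-integrator model rather than the linearized UAV dynamics:

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR gain K = R^-1 B' P, with P the stabilizing ARE
    solution recovered from the stable eigenspace of the Hamiltonian."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    Ham = np.block([[A, -B @ Rinv @ B.T],
                    [-Q, -A.T]])
    w, V = np.linalg.eig(Ham)
    stable = V[:, w.real < 0]           # eigenvectors of stable eigenvalues
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1)) # stabilizing Riccati solution
    return Rinv @ B.T @ P
```

For the double integrator A = [[0, 1], [0, 0]], B = [[0], [1]] with Q = I and R = 1, this returns the well-known gain K = [1, √3].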
X. The Reference Path Generation System
Once the AR “tracking & docking” scheme is activated, the UAV control system is tasked to generate a suitable
sequence of feasible commands leading to a smooth docking within a defined time. This cannot be achieved by
directly using the control law in (25), which can only be used under the assumption of small deviations from the
reference trajectory. In fact, when the PBR vector takes on large values - as happens when the UAV is at the
pre-contact position and the control system is activated - the proportional term in (25) will generate a large command,
which would drive the system outside the validity range of the small perturbation assumption.
To avoid the above problem, the control law in (25) was modified to include a desired trajectory PBRdes(t):

Ũ = −Kd · Z̃1-9 + Kp (PBR − PBRdes) + Ki ∫t0t (PBR − PBRdes) dt   (26)

where PBRdes(t) is generated when the tracking and docking control system is activated.
Let tf be the desired duration of the docking phase and let PBR(0) denote the distance between the 3DW and the
UAV receptacle at the pre-contact position. The relative velocity between the UAV and the tanker is designed to
start from zero, to reach its maximum value at tf/2, and to return to zero at t = tf, when the UAV reaches the
contact position at the center of the 3D refueling window. Thus, the desired trajectory can be defined through the
following relationship:
PBRdes,i(t) = ai t3 + bi t2 + ci t + di,   i = x, y, z   (27)
The coefficients in the above polynomial are evaluated through imposing the boundary conditions on the initial
and final positions:
PBRdes,i(0) = PBRi(0),   PBRdes,i(tf) = 0,   i = x, y, z   (28)
and the initial and final velocities:
ṖBRdes,i(0) = 0,   ṖBRdes,i(tf) = 0,   i = x, y, z   (29)
The resulting reference trajectory is defined by:

PBRdes,i(t) = PBRdes,i(0) [2(t/tf)3 − 3(t/tf)2 + 1],   0 ≤ t ≤ tf   (30)
Dividing the vertical and lateral components of the reference trajectory by the longitudinal component yields
two constants:

PBRdes,z(t) / PBRdes,x(t) = PBRdes,z(0) / PBRdes,x(0);   PBRdes,y(t) / PBRdes,x(t) = PBRdes,y(0) / PBRdes,x(0)   (31)

which in turn means that the reference trajectory is a straight line, since the lateral and vertical components
are linear functions of the longitudinal one.
Finally, the maximum values of the velocity and acceleration along the trajectory are found to be:

Vmax,i = −3 PBRdes,i(0) / (2 tf),   Accmax,i = ±6 PBRdes,i(0) / tf2,   i = x, y, z   (32)
Typically, the UAV docking from the “pre-contact” to the “contact” position is performed with the UAV
perfectly aligned with the tanker longitudinal axis, resulting in the initial condition PBRdes,y(0) = 0.
XI. Closed Loop Simulations
The analysis of the closed loop simulations was performed to validate the performance of the Aerial Refueling
scheme. In this study, the UAV acquires data from all its onboard sensors, which are modeled as described in
Section VII, and receives data from the tanker, which are pre-filtered for noise reduction purposes. The EKF output
- that is, the result of the sensor fusion between the MV and GPS data - is used in the docking control laws for guiding
the UAV from the ‘pre-contact’ position to the ‘contact’ position and for holding position in the defined 3DW once
the contact position has been reached.
Without any loss of generality, the ‘pre-contact’ position was assumed to be located 50 m behind and 10 m
below the tanker aircraft, while the ‘contact’ position, i.e. the 3DW position, was assumed to be directly below the
tanker, within the reach of the telescopic portion of the refueling boom.
Figure 4 - Comparison of errors along the x-axis between the EKF, GPS and MV systems (log|x err| in meters vs. t in seconds; legend: GPS, MV, EKF)
Note that, due to finite camera resolution and due to the fact that objects appear smaller at larger distances, a
MV-based system cannot provide reliable results when the tanker-UAV distance is too large 5, 14. Thus, the MV-based
results are inaccurate until approximately 30 sec into the simulation.
In Figure 4 the logarithm of the EKF error - defined as the absolute value of the difference between the actual
position and the EKF-based position - is compared with the MV and GPS noises, for the x-axis. It can be noted that
the error of the EKF output is approximately two orders of magnitude smaller than the noises of both the GPS and
MV systems.
The accuracy of the EKF-based estimations is particularly evident in Figure 5, which shows the components of
the EKF error in the estimation of the position of the UAV along the 3 axes.
Figure 6 shows the UAV tracking error during the approach and docking phases. It can be seen that during the
refueling maneuver the tracking error remains within the interval [-0.015 m, 0.010 m], providing a substantial
improvement in tracking performance during the UAV docking phase with respect to previous efforts of the
authors 5, 27, 32, 33, 36, 40. Additional details on the development of the EKF for sensor fusion purposes can be found
in 40.
Figure 5 - Errors in the position using EKF (x, y, z errors in meters, ×10-3 scale, vs. t in seconds)
Figure 6 - Errors in the components of the tracking error (x, y, z errors in meters vs. t in seconds)
XII. Conclusions
This paper describes the features and the components of a simulation environment developed for GPS / MV-based
Autonomous Aerial Refueling for UAVs. Specifically, the UAV and tanker dynamics, the boom system, and the
atmospheric turbulence and wake effects are analyzed. A detailed description of the MV system, as well as of the sensor
modeling, the EKF-based sensor fusion system, and the control system, is provided. A closed-loop simulation study
using the simulation environment specifically designed for the analysis of the GPS / MV-based AR problem was
performed. Results show that the proposed method achieves considerable precision in the estimation of the position
of the UAV as well as in the tracking error.
References
1. Nalepka, J.P., Hinchman, J.L., “Automated Aerial Refueling: Extending the Effectiveness of Unmanned Air Vehicles”, AIAA Modeling and Simulation Technologies Conference and Exhibit, 15-18 August 2005, San Francisco, CA.
2. Jin, Z., Shima, T., Schumacher, C.J., “Optimal Scheduling for Refueling Multiple Autonomous Aerial Vehicles”, IEEE Transactions on Robotics, Vol. 22, No. 4, August 2006.
3. Korbly, R., Sensong, L., “Relative Attitudes for Automatic Docking”, AIAA Journal of Guidance, Control, and Dynamics, Vol. 6, No. 3, 1983, pp. 213-215.
4. Valasek, J., Gunnam, K., Kimmett, J., Tandale, M.D., Junkins, J.L., Hughes, D., “Vision-Based Sensor and Navigation System for Autonomous Air Refueling”, Journal of Guidance, Control, and Dynamics, Vol. 28, No. 5, September-October 2005.
5. Fravolini, M.L., Ficola, A., Campa, G., Napolitano, M.R., Seanor, B., “Modeling and Control Issues for Autonomous Aerial Refueling for UAVs Using a Probe-Drogue Refueling System”, Journal of Aerospace Science and Technology, Vol. 8, No. 7, 2004, pp. 611-618.
6. Pollini, L., Innocenti, M., Mati, R., “Vision Algorithms for Formation Flight and Aerial Refueling with Optimal Marker Labeling”, AIAA Modeling and Simulation Technologies Conference and Exhibit, 15-18 August 2005, San Francisco, CA.
7. Herrnberger, M., Sachs, G., Holzapfel, F., Tostmann, W., Weixler, E., “Simulation Analysis of Autonomous Aerial Refueling Procedures”, AIAA Guidance, Navigation, and Control Conference and Exhibit, 15-18 August 2005, San Francisco, CA.
8. Kelsey, J.M., Byrne, J., Cosgrove, M., Seereeram, S., Mehra, R.K., “Vision-Based Relative Pose Estimation for Autonomous Rendezvous and Docking”, 2006 IEEE Aerospace Conference, Big Sky, MT, March 4-11, 2006.
9. Chatterji, G.B., Menon, P.K., Sridhar, B., “GPS/Machine Vision Navigation System for Aircraft”, IEEE Transactions on Aerospace and Electronic Systems, Vol. 33, No. 3, July 1997, pp. 1012-1025.
10. Defigueiredo, R.J.P., Kehtarnavaz, N., “Model-Based Orientation-Independent 3-D Machine Vision Techniques”, IEEE Transactions on Aerospace and Electronic Systems, Vol. 24, No. 5, September 1988, pp. 597-607.
11. Chandra, S.D.V., “Target Orientation Estimation Using Fourier Energy Spectrum”, IEEE Transactions on Aerospace and Electronic Systems, Vol. 34, No. 3, July 1998, pp. 1009-1012.
12. Ross, S.M., Pachter, M., Jacques, D.R., Kish, B.A., Millman, D.R., “Autonomous Aerial Refueling Based on the Tanker Reference Frame”, 2006 IEEE Aerospace Conference, Big Sky, MT, March 4-11, 2006.
13. Nguyen, B.T., Lin, L.T., “The Use of Flight Simulation and Flight Testing in the Automated Aerial Refueling Program”, AIAA Modeling and Simulation Technologies Conference and Exhibit, 15-18 August 2005, San Francisco, CA.
14. Burns, S.R., Clark, C.S., Ewart, R., “The Automated Aerial Refueling Simulation at the AVTAS Laboratory”, AIAA Modeling and Simulation Technologies Conference and Exhibit, 15-18 August 2005, San Francisco, CA.
15. Etkin, B., Dynamics of Atmospheric Flight, John Wiley & Sons, Inc., 1972.
16. Rauw, M.O., “FDC 1.2 - A Simulink Toolbox for Flight Dynamics and Control Analysis”, Zeist, The Netherlands, 1997, ISBN: 90-807177-1-1, http://www.dutchroll.com/
17. Campa, G., Airlib, The Aircraft Library, 2003, http://www.mathworks.com/matlabcentral/
18. Addington, G.A., Myatt, J.H., “Control-Surface Deflection Effects on the Innovative Control Effectors (ICE 101) Design”, Air Force Report AFRL-VA-WP-TR-2000-3027, June 2000.
19. Stengel, R.F., Optimal Control and Estimation, Dover Publications Inc., New York, 1994.
20. Asada, H.J., Slotine, J.E., Robot Analysis and Control, Wiley, New York, 1986, pp. 15-50.
21. Spong, M.W., Vidyasagar, M., Robot Dynamics and Control, Wiley, New York, 1989, pp. 62-91.
22. Roskam, J., Airplane Flight Dynamics and Automatic Flight Controls - Part II, DARC Corporation, Lawrence, KS, 1994.
23. Blake, W., Gingras, D.R., “Comparison of Predicted and Measured Formation Flight Interference Effects”, Proceedings of the 2001 AIAA Atmospheric Flight Mechanics Conference, AIAA Paper 2001-4136, Montreal, August 2001.
24. Gingras, D.R., Player, J.L., Blake, W., “Static and Dynamic Wind Tunnel Testing of Air Vehicles in Close Proximity”, Proceedings of the 2001 AIAA Atmospheric Flight Mechanics Conference, AIAA Paper 2001-4137, Montreal, Canada, August 2001.
25. Virtual Reality Toolbox User's Guide, HUMUSOFT and The MathWorks Inc., 2001-2006.
26. The VRML Web Repository, Dec. 2002: http://www.web3d.org/x3d/vrml/
27. Vendra, S., Campa, G., Napolitano, M.R., Mammarella, M., Fravolini, M.L., Perhinschi, M., “Addressing Corner Detection Issues for Machine Vision Based UAV Aerial Refueling”, accepted for publication, Machine Vision and Applications, November 2006.
28. Harris, C., Stephens, M., “A Combined Corner and Edge Detector”, Proc. 4th Alvey Vision Conference, Manchester, 1988, pp. 147-151.
29. Noble, A., “Finding Corners”, Image and Vision Computing Journal, Vol. 6, No. 2, 1988, pp. 121-128.
30. Hutchinson, S., Hager, G., Corke, P., “A Tutorial on Visual Servo Control”, IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, 1996, pp. 651-670.
31. Pla, F., Marchant, J.A., “Matching Feature Points in Image Sequences through a Region-Based Method”, Computer Vision and Image Understanding, Vol. 66, No. 3, 1997, pp. 271-285.
32. Fravolini, M.L., Brunori, V., Ficola, A., La Cava, M., Campa, G., “Feature Matching Algorithms for Machine Vision Based Autonomous Aerial Refueling”, Mediterranean Control Conference 2006, June 28-30, 2006, Ancona, Italy.
33. Campa, G., Mammarella, M., Napolitano, M.R., Fravolini, M.L., Pollini, L., Stolarik, B., “A Comparison of Pose Estimation Algorithms for Machine Vision Based Aerial Refueling for UAV”, Mediterranean Control Conference 2006, June 28-30, 2006, Ancona, Italy.
34. Gu, Y., Seanor, B., Campa, G., Napolitano, M.R., Rowe, L., Gururajan, S., Wan, S., “Design and Flight Testing Evaluation of Formation Control Laws”, IEEE Transactions on Control Systems Technology, Vol. 14, No. 6, pp. 1105-1112.
35. Campa, G., Napolitano, M.R., Fravolini, M.L., “A Simulation Environment for Machine Vision Based Aerial Refueling for UAV”, IEEE Transactions on Aerospace and Electronic Systems (to be published), April-May 2008.
36. Mammarella, M., Campa, G., Napolitano, M.R., Fravolini, M.L., Dell’Aquila, R., Brunori, V., Perhinschi, M.G., “Comparison of Point Matching Algorithms for the UAV Aerial Refueling Problem”, Machine Vision and Applications (to be published), 2008.
37. Haralick, R.M., et al., “Pose Estimation from Corresponding Point Data”, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 6, 1989, pp. 1426-1446.
38. Lu, C.P., Hager, G.D., Mjolsness, E., “Fast and Globally Convergent Pose Estimation from Video Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 6, 2000.
39. Panzieri, S., Pascucci, F., Ulivi, G., “An Outdoor Navigation System Using GPS and Inertial Platform”, IEEE/ASME Transactions on Mechatronics, Vol. 7, No. 2, 2002, pp. 134-142.
40. Mammarella, M., Campa, G., Napolitano, M.R., Fravolini, M.L., Perhinschi, M.G., Gu, Y., “Machine Vision / GPS Integration Using EKF for the UAV Aerial Refueling Problem”, IEEE Transactions on Systems, Man, and Cybernetics, 2008 (to be published).