
Generating UAV accurate ortho-mosaicked images using a six-band multispectral camera arrangement

F.J. Mesas-Carrascosa1, J. Torres-Sánchez2, J.M. Peña2, A. García-Ferrer1, I.L. Castillejo-González1 and F. López-Granados2

1 Corresponding author. University of Cordoba, Department of Graphic Engineering and Geomatics, Campus de Rabanales, 14071 Córdoba, Spain. email: fjmesas@uco.es
2 Institute for Sustainable Agriculture, CSIC, P.O. Box 4084, 14080 Córdoba, Spain

Abstract. Site-specific weed management at a very early phenological stage of crop and weed plants requires the ultra-high spatial resolution imagery provided by unmanned aerial vehicles (UAVs) flying at low altitudes. These UAV images cannot cover the whole field, resulting in the need to take a sequence or series of multiple overlapped (end-lap and side-lap) images. As a consequence, a large number of UAV images are acquired for each plot. These multiple overlapped images must be oriented by calculating the bundle adjustment and ortho-rectified through an aerial triangulation procedure to create an accurately geo-referenced ortho-mosaicked image of the entire field.
As the spatial accuracy of the ortho-mosaicked image depends mainly on camera calibration and the percentage of overlap, this paper describes the effect of flight parameters and sensor arrangement on the workflow of the mosaicking process, using a UAV flying at 30 m altitude and a six-band multispectral camera (a miniature camera array composed of a master channel and five further channels or bands). The objective is to develop a procedure to generate an accurate ortho-image. This procedure would then be used on multispectral UAV imagery taken over crops for weed seedling mapping. The main phases of the UAV workflow were: sensor calibration, mission planning, UAV flight and image processing. Regarding the sensor calibration, six different geometric calibrations were calculated, one for the image registered by each band. Regarding mission planning, two end-lap and side-lap settings (60%-30% and 70%-40%) were considered. Calibration and image overlap are related to the flight duration and to the accuracy of the aerial triangulation. Different aerial triangulations were calculated for different scenarios, taking into account several combinations of calibrations and overlap values. The spatial accuracy of the ortho-mosaicked images generated with both overlap values and calibrations was evaluated by the ASPRS (American Society for Photogrammetry and Remote Sensing) test. The results showed that the best flight setting to maintain spatial accuracy in the bundle adjustment was 40-70% overlap, using the master channel and its own calibration parameters for the aerial triangulation. In general, high end-lap and side-lap values improved the accuracy of the photogrammetric block adjustment, although they increased the flight length and decreased the area overflown. In addition, the framework to process and mosaic the images taken by this type of sensor was defined.

Keywords: low flight altitudes, overlapping, sensor calibration, visible and near-infrared spectrum, weed seedling monitoring

1 Introduction
Over the last decade, significant changes have occurred in the field of unmanned aerial vehicles (UAVs), and general interest in them has increased for civil and research purposes such as fire detection and monitoring (Merino et al., 2012), civil protection (Maza et al., 2011) or precision agriculture for site-specific management (García-Ruiz et al., 2013). As a result, UAVs are often used in photogrammetry, since a geo-referenced product can be obtained with reasonable accuracy and any geographic phenomenon can be measured and mapped (Link et al., 2013).
For precision agriculture purposes, site-specific weed management at a very early phenological stage of crop and weed plants requires the ultra-high spatial resolution imagery provided by UAVs flying at very low altitudes. Due to their intrinsic characteristics, these UAV images cannot cover the whole field, resulting in the need to take a sequence or series of multiple overlapped (end-lap and side-lap) images. As a consequence, a large number of UAV images are acquired for each plot. These multiple overlapped images must be oriented by calculating the bundle adjustment and ortho-rectified through an aerial triangulation procedure to create an accurately geo-referenced ortho-mosaicked image of the entire field. However, the spatial quality of imagery derived from UAVs is usually poorer than that of conventional products obtained by satellite or aircraft systems (Zongjian, 2008). These limitations are related to the on-board sensors and the flight settings.
Low-cost sensors such as still point-and-shoot cameras have been shown to provide ultra-high spatial resolution ortho-mosaics (0.74 cm pixel at 30 m flight altitude) (Gómez-Candón et al., 2013). However, other multispectral sensors used on board UAVs can present a lack of vertical adjustment or an unknown interior orientation, which affects the consistency of image spatial accuracy. The interior orientation of this kind of camera can be particularly critical when the sensor is set up as an array. In this situation, an image is taken with different sensors working within the camera at the same time and, as an unavoidable consequence, several unknown interior orientations take part in the process.
The other factor, the flight setting, is directly related to the quality of the photogrammetric processes; that is, high end-lap and side-lap values could improve the accuracy of the photogrammetric block adjustment, although they increase the UAV flight length and decrease the area overflown. In order to optimize the acquisition of imagery from UAVs, it is necessary to determine the quality of the image data and to balance the flight project so as to keep spatial quality steady while limiting flight duration.
Because our group will use an array sensor to complement our research with a UAV equipped with a low-cost camera for detecting early weed plants (Torres-Sánchez et al., 2013a), the objective of this research was to generate accurate UAV ortho-mosaicked images by developing a procedure to define the best flight setting (camera calibration and percentage of overlap) and the way to process images taken by this array sensor. This paper describes the effect of flight parameters and sensor correction on the mosaicking process, using a UAV flying at 30 m altitude and a six-band multispectral camera (TetraCam), in order to generate an accurate ortho-mosaicked image. The proposed corrections would subsequently be used on UAV imagery taken for weed seedling mapping.
2 Materials and methods

2.1 Study site


The work was carried out on an urban parcel on the University of Córdoba campus (southern Spain, coordinates 37.909ºN, 4.728ºW, datum WGS84). The study site had an area of about 0.5 ha and included a street and some unbuilt land. This location was selected for the UAV flights and for developing the procedure because of its abundance of elements such as street lamps, zebra crossings, floor drains or sidewalks, whose coordinates could be recorded and used to evaluate the quality of the ortho-mosaicked images.

2.2 UAV and sensor characteristics


A quadrocopter with a fully carbon airframe was used (UAV md4-1000, microdrones GmbH, Figure 1a; see Torres-Sánchez et al. (2013a) for further UAV details), flying at 30 m altitude. The images were collected with a six-channel multispectral camera (miniature camera array: mini-MCA, Tetracam Inc., Chatsworth, CA, USA; Figure 1b), which is lightweight (700 g). The TetraCam is a rolling shutter camera with six individual channels, each consisting of a sensor with a progressive shutter. Each channel has a focal length of 9.6 mm and a 1.3-megapixel (1,280 x 1,024 pixels) CMOS sensor that stores images on a compact flash card. The camera has user-configurable band-pass filters (Andover Corporation, Salem, NH, USA) of 10 nm full width at half-maximum and centre wavelengths of 450 nm (blue region of the electromagnetic spectrum), 530 nm (green region), 670 and 700 nm (red region), 740 nm (red-edge region) and 780 nm (near-infrared region). The PixelWrench2 software supplied with the camera provides full camera control and image management (TetraCam, 2011), including correction of the vignette effect, alignment of RAW image sets and building of multi-band TIFs, as explained in Torres-Sánchez et al. (2013a).
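As an illustrative cross-check of the sensor geometry, the ground sampling distance (GSD) at 30 m altitude can be estimated from the focal length and the physical pixel size of the CMOS. The 5.2 µm pixel pitch used below is an assumed value for this type of 1.3-megapixel sensor (it is not given in the text), so the figures are only indicative; they are nevertheless consistent with the 1.6 cm ortho-mosaic resolution reported in the Results.

```python
# Rough ground sampling distance (GSD) check for the mini-MCA at 30 m altitude.
# The 5.2 um pixel pitch is an assumed value for the 1.3 MP CMOS (not stated in
# the paper), so treat this only as an order-of-magnitude check.
focal_length_m = 9.6e-3   # focal length of each channel (9.6 mm)
pixel_pitch_m = 5.2e-6    # assumed physical pixel size on the CMOS
altitude_m = 30.0         # flight altitude above ground level

gsd_m = altitude_m * pixel_pitch_m / focal_length_m
print(f"Estimated GSD: {gsd_m * 100:.2f} cm/pixel")   # ~1.6 cm/pixel

# Approximate ground footprint of a single 1,280 x 1,024 pixel image
footprint_long_m = 1280 * gsd_m    # ~20.8 m
footprint_short_m = 1024 * gsd_m   # ~16.6 m
print(f"Approximate image footprint: {footprint_long_m:.1f} m x {footprint_short_m:.1f} m")
```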
The camera channels are arranged in a 2×3 array, one of them called the "master" and the other five called "slaves". The slave channels are labelled from "1" to "5", while the master objective is used as the reference channel to define the global settings used by the slaves. That is, the master channel calculates its own exposure time and this parameter is used by the slave objectives to ensure that all the channels acquire their images simultaneously. Each channel was calibrated with the Brown-Conrady distortion model using Agisoft Lens software and a planar calibration grid of known geometric properties. Imagery of the calibration grid was captured by the camera at multiple angles, and an iterative process estimated the intrinsic and extrinsic camera parameters, taking into account the technical characteristics of the Tetracam mini-MCA. As a result, each band had its own calibration parameters, and the focal length, principal point coordinates, and radial and tangential distortion coefficients were calculated for every band.
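As a minimal sketch of how calibration parameters of the kind listed in Table 1 can be applied, the following function removes Brown-Conrady radial and tangential distortion from a pixel coordinate by fixed-point iteration. The normalization convention (focal lengths and principal point in pixels, distortion applied to normalized image coordinates) is an assumption about how the calibration software reports the parameters, not something stated in the paper.

```python
def undistort_point(x_pix, y_pix, fx, fy, cx, cy, k1, k2, k3, p1, p2, n_iter=5):
    """Remove Brown-Conrady radial/tangential distortion from a pixel coordinate.

    A minimal sketch: focal lengths (fx, fy) and principal point (cx, cy) are
    in pixels; radial coefficients k1-k3 and tangential coefficients p1, p2 are
    applied to normalized image coordinates (assumed convention).
    """
    # Normalize the distorted pixel coordinate
    xd = (x_pix - cx) / fx
    yd = (y_pix - cy) / fy

    # Iteratively invert the distortion model (fixed-point iteration)
    xu, yu = xd, yd
    for _ in range(n_iter):
        r2 = xu * xu + yu * yu
        radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        dx = 2 * p1 * xu * yu + p2 * (r2 + 2 * xu * xu)
        dy = p1 * (r2 + 2 * yu * yu) + 2 * p2 * xu * yu
        xu = (xd - dx) / radial
        yu = (yd - dy) / radial

    # Back to pixel coordinates (undistorted)
    return xu * fx + cx, yu * fy + cy


# Example with values similar to the master channel in Table 1
print(undistort_point(1200.0, 900.0,
                      fx=1877.63, fy=1873.48, cx=673.945, cy=449.137,
                      k1=-0.147974, k2=-0.163914, k3=0.70123,
                      p1=0.000730161, p2=0.00165047))
```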
Fig. 1. a) The quadrocopter UAV, model md4-1000; b) detail of the TetraCam camera.

A GNSS (Global Navigation Satellite System) campaign was carried out with two objectives: the first one was to record a total of 6 reference points to be used in the aerial triangulation, and the second one was to measure the coordinates of 150 ground control points to assess the spatial accuracy of the generated ortho-mosaicked imagery. To reach the maximum spatial accuracy, two receivers were used: one of them was a reference station of the GNSS-RAP network (RAP: Red Andaluza de Posicionamiento) of the Institute of Statistics and Cartography of Andalusia (southern Spain), and the other one was a Leica GS15 GNSS receiver operated as a rover. For the 6 reference points, rapid static positioning was used, and, in order to assess the image spatial quality, the 150 ground control points were recorded using the Stop&Go relative positioning technique by means of the NTRIP protocol (Networked Transport of RTCM, Radio Technical Commission for Maritime Services, corrections via Internet Protocol).
Two mission plans were considered at a flight altitude of 30 m above ground level. The first mission plan was designed with an end-lap of 30% and a side-lap of 60%, whereas the second mission had an end-lap of 40% and a side-lap of 70%. Then, to study the image spatial accuracy of each ortho-mosaicked image, four different scenarios were defined according to the overlap, the channels used in the aerial triangulation and the calibration parameters used in the ortho-rectification and mosaicking. In scenario A, only the master channel was used, considering its own calibration parameters. In scenario B, only the master channel was used, without considering its calibration parameters. Scenario C took into account bands 1 and 2 and the master channel, with the calibration parameters of the master channel, and, finally, scenario D consisted of bands 3 and 4 and the master channel, also with the calibration parameters of the master channel. Images from each scenario were mosaicked using the EnsoMosaic UAV software, which consists of a core of different programs that calculate the aerial triangulation and the digital surface model and generate the ortho-mosaicked imagery. The ortho-mosaic was created using only the 6 reference points, which were manually identified, whereas all the tie-points were automatically recognized.
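To illustrate how the two overlap settings translate into image spacing on the ground, the sketch below derives the along-track photo base and the distance between flight lines from a nadir image footprint. The footprint values reuse the indicative figures estimated earlier, and the assignment of the longer footprint side to the across-track direction is an assumption about the camera mounting; neither value is given in the paper.

```python
def photo_spacing(footprint_along_m, footprint_across_m, end_lap, side_lap):
    """Along-track photo base and across-track strip spacing for given overlaps.

    end_lap and side_lap are fractions (e.g. 0.40 and 0.70). The footprint
    values are illustrative; the paper does not state them.
    """
    base_m = footprint_along_m * (1.0 - end_lap)             # distance between exposures
    strip_spacing_m = footprint_across_m * (1.0 - side_lap)  # distance between laps
    return base_m, strip_spacing_m


# Compare the two mission settings (the pairing of percentages with end-lap and
# side-lap follows the wording of this section).
for end_lap, side_lap in [(0.30, 0.60), (0.40, 0.70)]:
    base_m, spacing_m = photo_spacing(16.6, 20.8, end_lap, side_lap)
    print(f"end-lap {end_lap:.0%}, side-lap {side_lap:.0%}: "
          f"photo base {base_m:.1f} m, strip spacing {spacing_m:.1f} m")
```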
To assess the spatial quality of the generated ortho-mosaicked imagery, the ASPRS (1990) (American Society for Photogrammetry and Remote Sensing) methodology developed as an accuracy standard for large-scale maps (ASLSM) was used. The horizontal accuracy is defined by the root mean squared error (RMSE). This error covers all errors, including those introduced by the compilation and extraction of coordinates from the ortho-mosaicked image. The RMSE is calculated from the differences between the coordinate values derived from the ortho-mosaicked images in every scenario and those determined by GNSS measurements. The ASPRS standard requires a sample of at least 20 points; as stated before, a total of 150 ground points were used to reduce the user risk (Ariza-López et al., 2008). The standard also defines a maximum RMSE related to the scale of the geomatic product; the ASLSM test sets a limiting RMSE of 0.05 m and 0.125 m for the 1:200 and 1:500 scales, respectively.
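A minimal sketch of the horizontal RMSE computation described above follows. The standard can also be applied per axis; here a single planimetric RMSE combining both axes is computed, which matches how a single RMSE value is reported per scenario in the Results (Table 2). The function and variable names are hypothetical.

```python
import math

def horizontal_rmse(measured_xy, reference_xy):
    """Planimetric RMSE between ortho-mosaic coordinates and GNSS check points.

    measured_xy and reference_xy are equal-length sequences of (x, y) tuples
    in metres: coordinates read from the ortho-mosaic and the corresponding
    GNSS-surveyed coordinates of the same ground control points.
    """
    if len(measured_xy) < 20:
        raise ValueError("ASPRS (1990) requires at least 20 check points")
    sq_sum = sum((xm - xr) ** 2 + (ym - yr) ** 2
                 for (xm, ym), (xr, yr) in zip(measured_xy, reference_xy))
    return math.sqrt(sq_sum / len(measured_xy))


# Limiting RMSE values (m) of the ASLSM test for the two scales considered
ASLSM_LIMITS_M = {"1:200": 0.05, "1:500": 0.125}

def passes_aslsm(rmse_m, scale):
    """True if the RMSE satisfies the ASLSM limit for the given map scale."""
    return rmse_m <= ASLSM_LIMITS_M[scale]
```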

3 Results and discussion


The spatial resolution of the ortho-mosaics was 1.6 cm for every scenario and overlap setting. The calibration parameters for each channel of the TetraCam camera are shown in Table 1. The calibration parameters obtained for bands 1, 2, 3, 4 and 5 were not used in any of the four scenarios defined; however, this information will be very useful for our subsequent studies aimed at generating multispectral ortho-mosaicked images. Radial distortion represents the curving effect of the lens: negative displacements radially move points towards the origin of the lens distortion, and positive displacements move points away from it (Park et al., 2009). Tangential distortion arises from the misalignment of the lens.

Table 1. Calibration parameters for the 6-channel multispectral TetraCam camera.

Parameter                    Master*       Band-1*       Band-2*        Band-3*        Band-4*      Band-5*
Focal length (x) [pixel]     1877.63       1869          1863.85        1858.9         1854.26      1865.33
Focal length (y) [pixel]     1873.48       1867.05       1859.59        1854.23        1851.05      1864.17
Principal point (x) [pixel]  673.945       675.197       658.949        673.613        681.251      661.797
Principal point (y) [pixel]  449.137       456.364       443.19         449.641        464.176      462.699
Skew                         2.24492       7.6544        3.40017        2.19382        4.46007      5.0959
Radial K1                    -0.147974     -0.156026     -0.156957      -0.156915      -0.190734    -0.198358
Radial K2                    -0.163914     0.0232547     0.110478       0.098838       0.375953     0.426037
Radial K3                    0.70123       0.0327477     -0.154836      -0.0352014     -1.08931     -0.986604
Tangential P1                0.000730161   0.00158142    -0.000303408   0.000033536    0.00031332   0.00028807
Tangential P2                0.00165047    0.00196331    -7.34136E-05   0.000513933    0.00175363   0.00102959

*Band wavelengths: Master (red-edge): 740 nm; Band-1 (blue): 450 nm; Band-2 (green): 530 nm; Band-3 (red): 670 nm; Band-4 (red): 700 nm; Band-5 (NIR): 780 nm.
Table 2 shows the ASLSM test used to check the accuracy of every ortho-mosaicked image obtained from each flight in all the scenarios, by comparing the (x, y) coordinates from the ortho-mosaicked images with the 150 ground control points. The flights called F1 corresponded to a setting of 30% end-lap and 60% side-lap, while F2 corresponded to a 40%-70% overlap. The ortho-mosaicked images produced with only the master channel and its calibration parameters were labelled "A". Those produced with only the master channel but without taking into account its calibration parameters were labelled "B". The images labelled "C" corresponded to the band combination master-1-2 and, finally, the images labelled "D" corresponded to the bands master-3-4; it was not possible to work with the six bands simultaneously because of software limitations. These two last scenarios also included the master calibration parameters. Two types of errors were identified once the ortho-mosaics had been created and analyzed: 1) errors related to spatial accuracy, that is, random errors generated by the workflow, and 2) outliers produced during the bundle adjustment and digital surface model generation, since these were carried out in an automatic mode. The ortho-mosaicked images obtained from scenarios F2-A and F1-A showed an RMSE of 0.048 and 0.076 m, respectively, below the limits of 0.05 m and 0.125 m defined by the ASLSM test for the 1:200 and 1:500 scales. The RMSE of the other ortho-images was 0.134 m or higher, suggesting that they were only valid for a scale of 1:1000. These results indicated that, at the 1:200 scale, each pixel had an uncertainty of 4.8 cm, that is, pixel coordinates were displaced by 4.8 cm; the ortho-images only valid at coarser scales showed higher RMSE values and larger displacements. Figure 2a shows the ortho-mosaic obtained from scenario F2-A. Using this ortho-mosaic as a reference, outliers in the ortho-mosaics from the other scenarios were identified as examples of locally high errors (Figure 2: 1-4).
Given the pixel size obtained in all the imagery (1.6 cm), the RMSE for F2-A (4.8 cm) was equivalent to 3 pixels. As part of an overall research program to investigate the opportunities and limitations of UAV imagery in mapping early weeds in narrow crop rows (e.g. wheat, crop rows 15-17 cm apart) and wide crop rows (e.g. maize or sunflower, crop rows 70-75 cm apart), an RMSE of 4.8 cm would not break the crop line continuity of the mosaics, and further crop-weed discrimination could potentially be achieved. This is relevant in weed seedling monitoring for site-specific weed management, since the definition of the crop row structure is crucial for the further identification of weeds, which are usually located between crop rows (Peña et al., 2013; Torres-Sánchez et al., 2013b). According to Laliberte et al. (2010), an RMSE of 1 pixel or less is desirable when working with aerial imagery from a piloted aircraft (pixel size usually larger than 50 cm), although such an RMSE is difficult to achieve with UAV imagery. They reported that errors of 1.5 to 2 pixels in the aerial triangulation of imagery with 8 cm spatial resolution could be acceptable for a UAV flying at 214 m above ground level equipped with a low-cost camera for rangeland monitoring. The main differences between that study and ours are that they worked with a low-cost camera (true colour, RGB), a lower spatial resolution and a much higher flight altitude, all of which are crucial parameters for developments using UAVs.
Table 2. Errors (m) of the aerial triangulation for the 30-60% (F1) and 40-70% (F2) overlap settings and the calibration scenarios (A, B, C, D) considered.

          F1-A   F1-B   F1-C   F1-D   F2-A   F2-B   F2-C   F2-D
Minimum   0.000  0.000  0.028  0.000  0.000  0.017  0.000  0.000
Maximum   0.335  0.828  0.760  0.810  0.276  0.823  0.850  0.800
Mean      0.145  0.157  0.166  0.160  0.072  0.219  0.139  0.135
Median    0.124  0.100  0.132  0.106  0.072  0.118  0.096  0.092
RMSE      0.076  0.137  0.134  0.159  0.048  0.229  0.157  0.145

Figure 2. (a) Ortho-mosaic obtained from the flight with 40%-70% overlap using only the master channel. (1), (2), (3) and (4) show outliers in other scenarios considering the 30-60% flight setting and different band combinations.

Comparing the two flight missions, the ortho-mosaicked images from the F2 flights achieved a lower RMSE than those from the F1 flights, which indicates that increasing the end-lap and side-lap improved the spatial accuracy of the mosaicked images. However, this was only valid if the master channel calibration parameters were introduced in the bundle adjustment, since the RMSE of the ortho-images generated for F2-B was similar to that obtained for the ortho-images from F1-B.
Fig. 3. Box and whiskers plots for RMSE values of each scenario: a) 30-60 %
overlapping, b) 40-70% overlapping.

Figure 3 summarizes the box and whiskers plots of the RMSE for both overlap values in every scenario. The ortho-mosaicked images created with the master channel without calibration parameters (B scenarios) showed similar box-plots at both overlaps. Apart from maximum errors (whiskers) of around 0.75 m reached in the B, C and D scenarios, the box-plots for the F1 flights (Fig. 3a) were higher than those for the F2 flights (Fig. 3b), which points out that the 30%-60% overlap produced higher RMSE than the 40%-70% overlap. According to these results, only the ortho-images generated using the 40%-70% overlap and the master channel with its calibration parameters showed tolerable errors. Future research should address the study and removal of the maximum errors (outliers) produced during the mosaicking process.
The progress of UAV-based projects requires a balance between data quality, governed by the end-lap and side-lap, and the duration of the flight. Higher overlap values increased the flight length. Thus, the flight with 30-60% overlap consisted of 4 laps of 16 images each (a total of 64 images), with a flight time of 17 min 2 s, whereas the flight with 40-70% overlap consisted of 5 laps of 21 images each (a total of 105 images), with a flight time of 24 min 49 s. If the UAV is a multi-rotor, the consequence is that one mission may have to be divided into several individual flights due to battery limitations. In this case, it would be necessary to change the UAV battery quickly; if it were necessary to wait for the battery to recharge, the illumination conditions could change in that time interval, which could harm the bundle adjustment and, eventually, affect further image analysis.
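The trade-off described above can be made explicit with the image counts and flight times reported for the two missions; the per-image figures below are simple derived ratios, not additional measurements.

```python
# Image counts and flight times reported in the text for the two overlap settings
flights = {
    "F1 (30-60% overlap)": {"laps": 4, "images_per_lap": 16, "flight_s": 17 * 60 + 2},
    "F2 (40-70% overlap)": {"laps": 5, "images_per_lap": 21, "flight_s": 24 * 60 + 49},
}

for name, f in flights.items():
    n_images = f["laps"] * f["images_per_lap"]
    print(f"{name}: {n_images} images, "
          f"{f['flight_s'] / 60:.1f} min flight time, "
          f"{f['flight_s'] / n_images:.1f} s per image on average")
```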

4 Conclusion
Since the spatial accuracy of UAV ortho-mosaicked images obtained using a rolling shutter camera with six individual objectives depends mainly on camera calibration and the percentage of overlap, this paper describes a procedure to generate fine-scale spatial resolution ortho-mosaics considering the flight parameters and sensor arrangement, using a UAV flying at 30 m altitude. This procedure would subsequently be used on multispectral UAV imagery taken over crops for weed seedling mapping, which requires ultra-high spatial resolution and a very high spatial accuracy of the ortho-mosaicked image. In this study, increasing the end-lap and side-lap from 30-60% to 40-70% roughly doubled the ortho-image spatial accuracy. The higher end-lap and side-lap increased the flight time by around 7 min, and around 40 more images had to be processed. It is therefore important to assess the necessary balance among the intended objective, the costs, the accuracy improvement and the flight duration imposed by UAV energy limitations.
The whole workflow has to be built considering the sensor's architecture. The primary focus of this study was on the preliminary calibrations for sensor correction, and an ortho-mosaicked image was created to illustrate the effects of this sensor correction on the ortho-image spatial accuracy. The results presented recommend calculating the bundle adjustment for the aerial triangulation using only the master channel and its calibration parameters, and later producing the ortho-mosaicked images of the multiple bands with the exterior orientation calculated earlier. The next step would be to create a multispectral ortho-image, including the NIR band, using this multi-channel sensor and to evaluate its spatial accuracy. Our subsequent hypothesis is that errors of around 3 pixels (4.8 cm) would be sufficient for our final objective of weed seedling monitoring in narrow crop rows (rows 15-17 cm apart) and wide crop rows (rows 70-75 cm apart), because crop row continuity would not be broken. This should be studied next.

Acknowledgments
This research was partly financed by the TOAS Project (ref.: PEOPLE-2011-CIG-293991, EC 7th Framework Programme) and the AGL2011-30442-CO2-01 Project (MINECO-FEDER funds). The research of Mr. Torres-Sánchez and Dr. José M. Peña was financed by the FPI Program (MINECO funds) and the RHEA Project (NMP-CP-IP 245986-2, EC 7th Framework Programme), respectively.

References
Ariza-López, F.J., Atkinson-Gordo, A.D., Rodríguez-Avi, J. Acceptance curves for the positional control of geographic databases. Journal of Surveying Engineering 134, 26-32 (2008)
ASPRS. Accuracy standards for large scale maps. Photogrammetric Engineering and Remote Sensing 56, 1068-1070 (1990)
García-Ruiz, F., Sankaran, S., Maja, J.M., Lee, W.S., Rasmussen, J., Ehsani, R. Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees. Computers and Electronics in Agriculture 91, 106-115 (2013)
Gómez-Candón, D., de Castro, A.I., López-Granados, F. Assessing the accuracy of mosaics from unmanned aerial vehicle (UAV) imagery for precision agriculture purposes. Precision Agriculture 15, 44-56 (2014)
Laliberte, A.S., Herrick, J.E., Rango, A., Winters, C. Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (UAV) imagery for rangeland monitoring. Photogrammetric Engineering and Remote Sensing 76, 661-672 (2010)
Link, J., Senner, D., Claupein, W. Developing and evaluating an aerial sensor platform (ASP) to collect multispectral data for deriving management decisions in precision farming. Computers and Electronics in Agriculture 94, 20-28 (2013)
Maza, I., Caballero, F., Capitán, J., Martínez-de-Dios, J., Ollero, A. Experimental results in multi-UAV coordination for disaster management and civil security applications. Journal of Intelligent & Robotic Systems 61, 563-585 (2011)
Merino, L., Caballero, F., Martínez-de-Dios, J.R., Maza, I., Ollero, A. An unmanned aircraft system for automatic forest fire monitoring and measurement. Journal of Intelligent & Robotic Systems 65, 533-548 (2012)
Park, J., Byun, S.C., Lee, B.U. Lens distortion correction using ideal image coordinates. IEEE Transactions on Consumer Electronics 55, 987-991 (2009)
Peña, J.M., Torres-Sánchez, J., de Castro, A.I., López-Granados, F. Generating weed maps in early-season maize fields by using an unmanned aerial vehicle (UAV) and object-based image analysis. PLoS ONE 8, e77151 (2013)
Tetracam. PixelWrench2 FAQ. http://www.tetracam.com/PDFs/PW2%20FAQ.pdf (2011)
Torres-Sánchez, J., Peña, J.M., de Castro, A.I., López-Granados, F. Configuration and specifications of an unmanned aerial vehicle (UAV) for early site specific weed management. PLoS ONE 8, e58210 (2013a)
Torres-Sánchez, J., Peña, J.M., de Castro, A.I., López-Granados, F. Multi-temporal mapping of vegetation fraction in early-season wheat fields using images from UAV. Computers and Electronics in Agriculture, submitted (2013b)
Zhang, C., Kovacs, J. The application of small unmanned aerial systems for precision agriculture: a review. Precision Agriculture 13, 693-712 (2012)
Zongjian, L. UAV for mapping - low altitude photogrammetric survey. International Archives of Photogrammetry and Remote Sensing, Beijing, China 37, 1183-1186 (2008)
