Article

Development of a UAV-LiDAR System with Application to Forest Inventory

School of Geography and Environmental Studies, University of Tasmania, Hobart, TAS 7001, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2012, 4(6), 1519-1543; https://doi.org/10.3390/rs4061519
Submission received: 14 March 2012 / Revised: 14 May 2012 / Accepted: 17 May 2012 / Published: 25 May 2012
(This article belongs to the Special Issue Unmanned Aerial Vehicles (UAVs) based Remote Sensing)

Abstract

We present the development of a low-cost Unmanned Aerial Vehicle-Light Detection and Ranging (UAV-LiDAR) system and an accompanying workflow to produce 3D point clouds. UAV systems provide an unrivalled combination of high temporal and spatial resolution datasets. The TerraLuma UAV-LiDAR system has been developed to take advantage of these properties and in doing so overcome some of the current limitations of the use of this technology within the forestry industry. A modified processing workflow is presented, including a novel trajectory determination algorithm fusing observations from a GPS receiver, an Inertial Measurement Unit (IMU) and a High Definition (HD) video camera. The advantages of this workflow are demonstrated using a rigorous assessment of the spatial accuracy of the final point clouds. It is shown that, due to the inclusion of video, the horizontal accuracy of the final point cloud improves from 0.61 m to 0.34 m (RMS error assessed against ground control). The effect of the very high density point clouds (up to 62 points per m2) produced by the UAV-LiDAR system on the measurement of tree location, height and crown width is also assessed by performing repeat surveys over individual isolated trees. The standard deviation of tree height is shown to reduce from 0.26 m, when using data with a density of 8 points per m2, to 0.15 m when the higher density data are used. Improvements in the uncertainty of the measurement of tree location, from 0.80 m to 0.53 m, and crown width, from 0.69 m to 0.61 m, are also shown.


1. Introduction

1.1. Background

Airborne LiDAR remote sensing has become a powerful tool in the management of modern forest inventories [1]. Ongoing research into the processing and analysis of LiDAR data has allowed for the development of an extensive range of LiDAR data products from which a wide range of forest metrics can be derived [2–4]. Stand metrics and tree-level statistics have provided forest managers with significantly richer information about their forests [5,6]. It is, however, evident that the full potential of airborne LiDAR technology for forest measurement and management is yet to be reached. Prohibitive factors, including high survey costs and short flying seasons in many areas, have limited the ongoing application of multi-temporal studies. As such, the assessment of factors such as forest health, defoliation, and rate of canopy closure is not feasible from the intermittent LiDAR surveys currently utilised by forest managers.
Recently, improvements in small-scale technology have enabled the use of Unmanned Aerial Vehicles (UAVs) as an alternative remote sensing platform offering a distinctive combination of very high resolution data capture at a significantly lower survey cost. Current research into the use of UAVs as a 3D data-capture platform includes archaeological surveys [7,8] and vegetation monitoring [9–11], for example. These studies use image matching and photogrammetric techniques, which allow high density point clouds to be generated from the very high resolution imagery collected by UAVs. These point clouds have also been evaluated for forest monitoring and management by Tao et al. [12] and Dandois and Ellis [13], both showing the advantages of significantly higher density point clouds in comparison to those commonly collected with full-scale LiDAR platforms. The drawback of point clouds generated from imagery is that few points are measured from within the canopy or from the underlying surface in densely vegetated areas. This within-canopy information is vital for many of the techniques which have been developed for deriving stand-level and tree-level metrics from point clouds, such as above ground biomass.
Jaakkola et al. [14] provided the first example of the potential of UAV-borne LiDAR for use in forestry research. With the deployment of a rotor-wing UAV equipped with a number of navigation sensors, in combination with two on-board LiDAR sensors, high-resolution datasets were produced offering improved individual tree-level mapping. Jaakkola et al. [14], and more recently Lin et al. [15], have shown that due to the improved density of a UAV LiDAR point cloud, several metrics can be measured at a finer scale and with higher precision when compared to traditional LiDAR platforms. Because of their high spatial and temporal resolution, together with low operational costs, UAVs can provide a more targeted approach to forest monitoring and allow for multi-temporal surveys such as forest health and canopy closure monitoring. Studies such as these suggest that, through the combination of low-cost and high resolution data capture, UAV platforms are likely to be the next tool of choice for optimising detailed small-area surveys within forests.
Several other UAV platforms have also been developed for the purpose of collecting LiDAR data [14,16–18]. The majority of these examples have been designed for government or military purposes [16,17] or as proof-of-concept platforms to show that LiDAR data can be collected from UAVs [18]. The drawback of such platforms is that their size and budget are significantly larger than what could be considered useful as an operational tool in forest management. A key reason for this is that the derivation of a spatially accurate point cloud requires careful consideration to be given to the determination of aircraft position and orientation. Even small errors in the observation of orientation and position result in substantial on-ground displacements in point measurements [19]. The considerations regarding the positioning and orientation payload for UAV systems are confounded by the payload weight and size restrictions, and a trade-off between accuracy and weight must often be made. This has resulted in the majority of UAV-borne LiDAR systems consisting of larger UAVs capable of carrying heavier payloads [16–18], making them difficult to deploy in forested environments. For example, the platform outlined in Nagai et al. [17] has a take-off weight of 330 kg. The mini-UAV outlined by Jaakkola et al. [14] uses a combination of a tactical grade IMU and laser scanner. This combination, which weighs over 3 kg, pushes the payload limitations of most commercially available vertical take-off and landing (VTOL) UAVs suitable for forest-based research.
Micro-Electromechanical System (MEMS) based IMUs offer an alternative option for positioning and orientation that is both lightweight and low-cost. These IMUs have been deployed for a variety of positioning and orientation tasks, including navigation, obstacle avoidance and land-based mapping [20]. This technology can be used as the primary orientation sensor within a GPS/IMU sensor framework to provide the high rate estimates of position and orientation required for LiDAR mapping. However, due to the high levels of error within MEMS IMUs, and based on reported errors of sensor fusion algorithms using such sensors [21], it can be shown that estimates of orientation and position would be of inadequate accuracy for use on-board a UAV-borne LiDAR system [22]. Several innovative algorithms fusing GPS and MEMS observations have been shown to improve the modelling of the large stochastic drifts within MEMS IMUs and, as a consequence, the accuracy of orientation estimates [23–25]. Furthermore, the augmentation of techniques developed within the fields of photogrammetry and computer vision has contributed to improving the accuracies of MEMS-based navigation systems when used for direct georeferencing [26–28]. These developments suggest that a UAV system consisting of a lightweight MEMS-based IMU along with GPS and visual observations can provide estimates of position and orientation with the accuracy required for mapping forest metrics using UAV-borne LiDAR.

1.2. Objectives

The aim of this paper is to present the development of a UAV-borne LiDAR system using lightweight and low-cost sensors, and to demonstrate its capability of collecting spatially dense, accurate, and repeatable measurements for forest inventory applications. This paper outlines and assesses the accuracy of a modified workflow to produce a UAV-borne LiDAR point cloud. Within this workflow, a technique for accurately georeferencing LiDAR points is presented, which includes a novel inclusion of orientation estimates from HD video using a modified version of the structure from motion (SfM) algorithm outlined in Snavely et al. [29]. The fusion of these orientation observations with observations from the GPS receiver and the MEMS-IMU within a Sigma Point Kalman Smoother is proposed in order to overcome the large orientation errors which occur in GPS/MEMS-IMU based fusion systems. We evaluate the system in terms of absolute spatial accuracy as well as the accuracy of derived forest metrics at the individual tree level.

2. Hardware

A multi-rotor UAV (OktoKopter Droidworx/Mikrokopter AD-8), currently being developed as a remote sensing platform by the TerraLuma research group at the University of Tasmania, is used for this study (Figure 1). Multi-rotor UAVs offer increased stability and decreased vibration in comparison to other platforms. This is a key consideration in the development of a mapping platform, as any source of vibration equates to a source of error within measurements of position and orientation. The OktoKopter has 8 brushless motors which operate at different rotor speeds to achieve directional flight. There is, however, a minimum rotor speed required to achieve flight, which induces vibration. We isolate this vibration by mounting the sensing payload in a rigid frame separated from the OktoKopter airframe by 4 silicone mounts. These mounts have been selected based on the mass of the payload and the minimum frequency vibration expected from the rotors.
The OktoKopter has a standalone control system including an on-board navigation and autopilot system. This allows predefined flight paths to be followed ensuring maximum use of flight time and repeat surveys to be easily performed. Furthermore, the use of a VTOL UAV within forested environments is of high importance as cleared areas for use as runways are often not present. The main limitation of this platform is the small payload capacity and subsequently the reduced flight time. The electric OktoKopter is only capable of flight times between 3–5 min.
The sensor payload is mounted on the UAV through a custom-designed rigid sensor framework with fixed lever-arm offsets and boresight angles between all sensors. The framework also allows for an adequate sky-view for the accurate operation of both the GPS antennas and the GPS-enabled video camera. The primary sensor on-board the UAV is an Ibeo LUX laser scanner. The scanner is designed for automotive purposes; it has a maximum range of 200 m and scans in 4 parallel layers with a transversal beam divergence of 0.8°, allowing complete coverage of the sensor's field of view. The sensor was set to have a scan frequency of 12.5 Hz and an angular resolution of 0.25°. These settings were chosen primarily due to the limitations of the data logging computer. The maximum angular scanning range of the Ibeo LUX is 110°, although this was restricted to ±30°, as large scan angles have been shown to have a significant impact on the derivation of key metrics used for forest investigation [30]. The LUX records ranges and intensities for up to three echoes per pulse, with a range repeatability of 10 cm and a resolution of 4 cm. Although attributes of the scanner such as the wide beam divergence and low range resolution do not make it an ideal mapping sensor, its low power consumption and light weight (approximately 1 kg) allow its use on-board UAV platforms.
The remaining sensors contained within the sensing payload belong to the Positioning and Orientation System (POS). The POS consists of a MEMS IMU (Microstrain 3DM-GX3 35), a dual frequency GPS receiver (Novatel OEMV1-df) with a lightweight antenna (Novatel ANT-A72GA), and a high definition GPS-enabled video camera. The IMU contains orthogonal sets of accelerometers, gyroscopes, and magnetometers as well as an internal GPS receiver allowing all observations to be synchronised to GPS time. The IMU, which weighs only 50 g, is set to observe angular rate and acceleration at a rate of 100 Hz. The key properties of the gyroscopes and accelerometers are summarised in Table 1. The IMU has been factory calibrated; however, to confirm the results, the IMU was calibrated using the methods outlined in [31]. The GPS observations, recorded at a rate of 5 Hz, are differentially post-processed in order to achieve the highest possible accuracy. Finally, the HD video camera records 30 frames per second and has a field of view of 110°. The calibration parameters of this camera have been determined using the method outlined in Bouguet [32]. Data logging and time synchronisation are performed using an on-board miniaturised computer (Gumstix Verdex Pro). All other processing is performed offline. The entire sensor payload weighs 2.4 kg, meeting the requirements for use on-board the OktoKopter platform, which has a maximum payload of 2.8 kg.

3. Methodology

3.1. LiDAR Workflow

The use of airborne LiDAR has received significant research attention and operational use as a source of information for forest scientists since its introduction in the mid-1990s. This has allowed a well-defined best-practice data collection and processing workflow to be established by data providers. The end product of this workflow is a spatially accurate point cloud with each point given an appropriate classification. The processing of LiDAR data captured using a UAV follows a similar workflow, as shown in Figure 2. However, consideration needs to be given to the miniaturised sensors, reduced flying height and time, as well as the increased point density during each of the stages of processing. For example, during the data collection phase, consideration needs to be given to the limited flight time of a UAV and optimal mapping strategies need to be determined. An advantage of the platform used in this study is the on-board autopilot, allowing maximum coverage of the targeted area during each flight.
Similarly, modifications need to be made within the trajectory determination and point cloud generation stages due to the use of miniaturised sensors. The underlying requirement of these stages is the determination of all the variables within the direct georeferencing LiDAR equation at the instant of each pulse, given as follows:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \mathbf{p}_t + R_b^m \left( R_s^b\,\mathbf{r}_s + \mathbf{a}_b \right) \qquad (1)$$
The determination of the aircraft trajectory, including the position $\mathbf{p}_t = [X\ Y\ Z]^T$ and the orientation matrix $R_b^m$, is crucial to the final spatial accuracy of the position of each measured LiDAR point $[x\ y\ z]^T$ in the mapping frame (North, East, Up). System calibration to determine the boresight matrix $R_s^b$ and lever arm $\mathbf{a}_b$ is performed separately, as the reduced accuracy of the laser scanner means traditional techniques such as strip adjustment are not feasible. Further discussion of these issues is provided in Sections 3.2 and 3.3. Furthermore, the four-layer off-nadir scanning properties of the Ibeo LUX laser scanner require the laser scanner observation vector $\mathbf{r}_s$ to be modified, giving Equation (2):
$$\mathbf{r}_s = \begin{bmatrix} \cos\Theta_L & \sin\Theta_L\sin\Theta_E & \sin\Theta_L\cos\Theta_E \\ 0 & \cos\Theta_E & -\sin\Theta_E \\ -\sin\Theta_L & \cos\Theta_L\sin\Theta_E & \cos\Theta_L\cos\Theta_E \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ r \end{bmatrix} \qquad (2)$$
This observation vector includes the addition of a layer angle, $\Theta_L$, as well as the range $r$ and the encoder angle, $\Theta_E$, used within the common LiDAR equation.
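To make the georeferencing computation concrete, the sketch below evaluates Equations (1) and (2) for a single pulse. It is a minimal numpy illustration, assuming the sign reconstruction of Equation (2) given above; the function and variable names are ours, not part of the TerraLuma workflow.

```python
import numpy as np

def scanner_vector(r, theta_e, theta_l):
    """Equation (2): map a range r, encoder angle theta_e and layer angle
    theta_l to a 3D vector in the scanner frame."""
    cl, sl = np.cos(theta_l), np.sin(theta_l)
    ce, se = np.cos(theta_e), np.sin(theta_e)
    R = np.array([[cl,   sl * se,  sl * ce],
                  [0.0,  ce,      -se],
                  [-sl,  cl * se,  cl * ce]])
    return R @ np.array([0.0, 0.0, r])

def georeference_point(p_t, R_bm, R_sb, a_b, r_s):
    """Equation (1): transform a scanner-frame vector into mapping-frame
    (North, East, Up) coordinates using the platform pose at pulse time."""
    return p_t + R_bm @ (R_sb @ r_s + a_b)

# Example call (illustrative values only; axis conventions depend on the
# actual boresight matrix R_sb determined during system calibration):
xyz = georeference_point(p_t=np.array([0.0, 0.0, 50.0]),
                         R_bm=np.eye(3), R_sb=np.eye(3),
                         a_b=np.zeros(3),
                         r_s=scanner_vector(r=48.0, theta_e=0.1, theta_l=0.0))
```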

3.2. Trajectory Determination

In order to resolve the UAV's states (position and orientation) from the observations of the multiple on-board sensors, a state-based estimator is used. The aim of this estimator is to make use of all available information and optimally combine the results. The core algorithm of the state-based estimator used in this work is the Square Root Unscented variant of the Sigma Point Kalman Filter (SPKF) outlined in [33]. This variant of the SPKF was chosen over the traditional Extended Kalman Filter (EKF) as it partially addresses the issues of approximation present in the EKF [34]. Furthermore, the SPKF has been shown to converge faster, thus allowing a greater section of each flight to be used for mapping [23]. This implementation of a SPKF is a straightforward extension of the sigma-point approach to the recursive estimation of a non-linear discrete time system. Error within the system is estimated by propagating sigma points selected from within an a priori measurement noise distribution. These sigma points are applied to the current augmented state through the kinematic model to determine a corresponding set of updated sigma points. A complete overview of the SPKF algorithm can be found in [33] and for brevity is not included in this paper. The application of the SPKF to the determination of vehicle states does, however, require state-based process and observation models to be defined.

3.2.1. Process Model

As in most strap-down navigators, the observations of linear acceleration $\mathbf{a} = [a_x\ a_y\ a_z]$ and rotational velocity $\boldsymbol{\omega} = [\omega_p\ \omega_q\ \omega_r]$ are made in the body frame by the IMU. The biases and noise within these observations are corrected for using the following models:
$$\bar{\mathbf{a}} = \mathbf{a} - \mathbf{a}_b - \mathbf{n}_a \qquad (3)$$
$$\bar{\boldsymbol{\omega}} = \boldsymbol{\omega} - \boldsymbol{\omega}_b - C_n^b\,\boldsymbol{\omega}_c - \mathbf{n}_\omega \qquad (4)$$
where $\bar{\mathbf{a}}$ and $\bar{\boldsymbol{\omega}}$ are the corrected observations and $\mathbf{n}_a$ and $\mathbf{n}_\omega$ are estimated noise terms within the measurements of acceleration ($\mathbf{a}$) and angular rate ($\boldsymbol{\omega}$). $\boldsymbol{\omega}_c$ is the rotational velocity of the Earth for a given longitude and latitude. The time-varying terms $\mathbf{a}_b$ and $\boldsymbol{\omega}_b$ model both the bias and scale factor error terms of the IMU. These terms are modelled as zero-mean Gaussian random variables with the variance set according to the manufacturer's specifications. A quaternion representation of orientation is used to avoid the singularities which can occur in alternative representations of orientation, following Shin [23], Van Der Merwe and Wan [33], and Crassidis [35]. This results in a state vector with 16 elements as follows:
$$\mathbf{x} = \begin{bmatrix} \mathbf{p}_t & \boldsymbol{\upsilon}_t & \mathbf{e}_t & \mathbf{a}_t & \boldsymbol{\omega}_t \end{bmatrix}^T \qquad (5)$$
where $\mathbf{p}_t = [x\ y\ z]^T$ and $\boldsymbol{\upsilon}_t = [\upsilon_x\ \upsilon_y\ \upsilon_z]^T$ are the position and velocity within a fixed mapping frame, and $\mathbf{e}_t = [q_0\ q_1\ q_2\ q_3]^T$ is the attitude quaternion representing the rotation between the body frame and the mapping frame. $\mathbf{a}_t$ and $\boldsymbol{\omega}_t$ are the three-element accelerometer and gyroscope bias vectors. The mechanisation of the corrected IMU observations used as the kinematic process model, transforming the measurements made in the body frame into changes in position and attitude in the mapping frame, follows Van Der Merwe and Wan [33]. The discrete time kinematic equations are given as follows:
$$\mathbf{p}_{t+1} = \mathbf{p}_t + \dot{\mathbf{p}}_t\,dt \qquad (6)$$
$$\boldsymbol{\upsilon}_{t+1} = \boldsymbol{\upsilon}_t + \dot{\boldsymbol{\upsilon}}_t\,dt \qquad (7)$$
$$\mathbf{e}_{t+1} = \exp\left(\tfrac{1}{2}\tilde{\Omega}\right)\mathbf{e}_t \qquad (8)$$
$$\mathbf{a}_{b,t+1} = \mathbf{a}_{b,t} + \mathbf{r}_{w_t}\,dt \qquad (9)$$
$$\boldsymbol{\omega}_{b,t+1} = \boldsymbol{\omega}_{b,t} + \mathbf{w}_{b_t}\,dt \qquad (10)$$
where $\mathbf{r}_{w_t}$ and $\mathbf{w}_{b_t}$ are zero-mean Gaussian noise terms and $\tilde{\Omega}$ is a skew-symmetric matrix composed of the effective rotations in the body frame:
$$\tilde{\Omega} = \begin{bmatrix} 0 & -\bar{\omega}_p\,dt & -\bar{\omega}_q\,dt & -\bar{\omega}_r\,dt \\ \bar{\omega}_p\,dt & 0 & \bar{\omega}_r\,dt & -\bar{\omega}_q\,dt \\ \bar{\omega}_q\,dt & -\bar{\omega}_r\,dt & 0 & \bar{\omega}_p\,dt \\ \bar{\omega}_r\,dt & \bar{\omega}_q\,dt & -\bar{\omega}_p\,dt & 0 \end{bmatrix} \qquad (11)$$
Based on the proofs provided by Gavrilets [36] and Van Der Merwe [37], the matrix exponent and the skew symmetric property can be used in forming a closed form solution as follows:
$$\exp\left(\tfrac{1}{2}\tilde{\Omega}\right)\mathbf{e}_t = \left[\, I\left(\cos(s) + j\,dt\,\lambda\right) + \tfrac{1}{2}\tilde{\Omega}\,\frac{\sin(s)}{s} \,\right]\mathbf{e}_t \qquad (12)$$
where $s = \frac{1}{2}\left\|[\,\bar{\omega}_p\,dt\ \ \bar{\omega}_q\,dt\ \ \bar{\omega}_r\,dt\,]\right\|$. The term $j \cdot dt \cdot \lambda$ serves as a Lagrange multiplier ensuring the unit-norm constraint of the quaternion orientation representation, where $\lambda = 1 - \|\mathbf{e}_t\|^2$ and $j$ is a factor that determines the convergence speed of the numerical error.
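A minimal numpy sketch of the gyroscope correction of Equation (4) and the closed-form attitude update of Equations (8), (11) and (12) is given below. It assumes the sign conventions of the reconstruction above, and is illustrative rather than the authors' implementation; all names are ours.

```python
import numpy as np

def correct_gyro(omega, omega_bias, C_nb, omega_earth):
    """Equation (4): remove the estimated bias and the Earth-rotation rate
    (noise term omitted) from a raw body-frame angular rate observation."""
    return omega - omega_bias - C_nb @ omega_earth

def propagate_attitude(e_t, omega_bar, dt, j=1.0):
    """Equations (8), (11), (12): closed-form quaternion propagation.

    e_t:       current attitude quaternion [q0, q1, q2, q3]
    omega_bar: corrected body-frame angular rates [wp, wq, wr] (rad/s)
    """
    wp, wq, wr = omega_bar * dt                  # effective body-frame rotations
    Omega = np.array([[0.0, -wp, -wq, -wr],      # Equation (11)
                      [wp,  0.0,  wr, -wq],
                      [wq, -wr,  0.0,  wp],
                      [wr,  wq, -wp,  0.0]])
    s = 0.5 * np.sqrt(wp**2 + wq**2 + wr**2)
    lam = 1.0 - np.dot(e_t, e_t)                 # deviation from unit norm
    sinc = 1.0 if s < 1e-12 else np.sin(s) / s
    # Equation (12): the j*dt*lam term slowly pulls |e| back to 1.
    A = np.eye(4) * (np.cos(s) + j * dt * lam) + 0.5 * Omega * sinc
    return A @ e_t
```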

3.2.2. Observation Models

The measurement update step of the Kalman filter uses the current state of the kinematic system together with the independent observations from the GPS receiver and/or the video camera. A cascading filter structure is used, as both the GPS and video observations require some amount of preprocessing.
The GPS receiver measurements are post-processed in Novatel's GrafNav software, which provides position $\mathbf{p}_t^{GPS}$ and velocity $\boldsymbol{\upsilon}_t^{GPS}$ relative to the mapping frame. The antenna reference point is offset from the origin of the body frame by a lever arm $\bar{\mathbf{r}}_{GPS}$. This allows the GPS to also provide an observation of orientation through the lever arm effect [33]. Compensating the GPS observations for the lever arm offset gives the following mathematical model:
$$\mathbf{p}_t^{GPS} = \mathbf{p}_t + C_b^n\,\bar{\mathbf{r}}_{GPS} + \mathbf{n}_{p_t} \qquad (13)$$
$$\boldsymbol{\upsilon}_t^{GPS} = \boldsymbol{\upsilon}_t + C_b^n\left(\boldsymbol{\omega}_t \times \bar{\mathbf{r}}_{GPS}\right) + \mathbf{n}_{\upsilon_t} \qquad (14)$$
where $C_b^n$ represents the direction cosine matrix from the body frame to the mapping frame and is a function of the current quaternion attitude. $\boldsymbol{\omega}_t$ is the true rotational rate of the vehicle, and $\mathbf{n}_{p_t}$ and $\mathbf{n}_{\upsilon_t}$ are the stochastic measurement noise terms.
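The lever-arm model of Equations (13) and (14) can be sketched directly. The cross product form of the velocity term is the standard lever-arm effect and is assumed here; the noise terms are omitted and all names are illustrative.

```python
import numpy as np

def predict_gps(p_t, v_t, C_bn, omega_t, r_gps):
    """Equations (13)-(14): predict the GPS antenna position and velocity
    from the body-frame state and the lever arm r_gps (noise omitted)."""
    p_gps = p_t + C_bn @ r_gps                     # antenna offset rotated into the mapping frame
    v_gps = v_t + C_bn @ np.cross(omega_t, r_gps)  # rotation sweeps the antenna: omega x r
    return p_gps, v_gps
```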
The observations of orientation provided by the video camera are determined using a modification of the structure from motion (SfM) technique. The SfM technique allows the construction of the 3D structure of imaged objects as well as the estimation of the exterior camera orientation by analysing motion signals over time [38]. The technique can be applied to a set of overlapping images to obtain a sparse point cloud for a wide range of objects [29]. In creating this sparse point cloud, the technique optimally estimates the exterior orientation parameters of the camera in relative space through the use of a bundle adjustment.
The 30 frames per second rate of the video camera, in conjunction with the flight dynamics of the UAV, results in relatively short baselines between the capture of video frames. This is problematic for the recovery of 3D structure within a bundle adjustment due to the poor geometry of the ray intersections used. To include a more complete set of video observations, a modified SfM algorithm was developed for use with this UAV platform. The algorithm first uses a standard SfM algorithm on key frames automatically selected based on the GPS velocity observations such that there is a constant overlap between key frames and a solution with strong geometry is achieved. Each frame is then trimmed to remove a border of 200 pixels from the edge to reduce the effects of lens distortion present in the consumer grade camera.
The first stage of the SfM algorithm is then used to identify projections of the same features in space from two or more views using the Scale Invariant Feature Transform (SIFT) technique developed in [39]. SIFT key features are identified in each frame and then matched between frames using an approximate nearest neighbour kd-Tree approach [29]. The approach used in the matching of SIFT key features allows preliminary matches that are invariant to large changes in scale and rotation to be made [39]. This set of feature matches is likely to contain several incorrect correspondences or outliers. An iterative global RANSAC approach following Nistér [40] is used to identify and eliminate these outliers and at the same time estimate the frame to frame homographies. The camera’s exterior orientation estimates can then be derived from these homographies. The resulting set of feature correspondences and exterior orientation estimates can then be used to predict the three-dimensional locations of each feature in relative space. This information is then used along with initial camera locations within a global bundle adjustment to provide optimal estimates of 3D point location and the camera’s exterior orientation within an arbitrary mapping frame.
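The matching and outlier-rejection stage described above can be approximated with off-the-shelf OpenCV primitives: SIFT detection, an approximate nearest-neighbour kd-tree search with Lowe's ratio test, and a RANSAC estimate of relative pose using OpenCV's implementation of Nistér's five-point solver [40]. The sketch below is a simplified stand-in for the authors' modified SfM pipeline (it omits the bundle adjustment), not their code, and all names are illustrative.

```python
import cv2
import numpy as np

def relative_pose(frame1, frame2, K):
    """Match SIFT features between two frames and recover a relative pose.

    frame1, frame2: greyscale images; K: 3x3 intrinsic matrix from a
    prior camera calibration. Returns (R, t) of frame2 relative to frame1.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame1, None)
    kp2, des2 = sift.detectAndCompute(frame2, None)

    # Approximate nearest-neighbour kd-tree matching with Lowe's ratio test.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            matches.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC over the five-point solution rejects outlier correspondences.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```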
Once an initial sparse set of 3D points and exterior orientation estimates are known for each of the key frames, the orientation of the non-key frames, which are selected to create a 5 Hz dataset matching the GPS observation rate, can be determined. A reduced set of points with a strong spatial distribution is selected from each non-key frame based on an initial estimate of that frame's geometry. These points are matched to points within the key frames and a spatial resection is performed to determine the exterior orientation of the non-key frames. The initial alignment of the camera was determined based on ground control targets placed near the take-off and landing area. A minimum of three targets was used for this purpose; however, the use of four targets is preferable in order to provide redundancy and avoid gross errors. If the boresight angles $\mathbf{e}_b^c$ between the camera and body frames are known, the orientation component of each exterior orientation estimate can be transformed into the body frame, giving $\mathbf{q}_c$. This provides the observation model used for each video frame. For consistency the orientation is calculated in quaternion space as follows:
$$\mathbf{q}_c = \mathbf{e}_\phi \otimes \left(\mathbf{e}_b^c \otimes \mathbf{e}_k\right) \qquad (15)$$
where $\otimes$ denotes quaternion multiplication, $\mathbf{e}_\phi$ is the estimate of camera orientation in the camera frame given by the SfM algorithm, and $\mathbf{e}_k$ is the stochastic measurement noise term within the SfM estimates of orientation. This noise is weighted based on the output of the bundle adjustment, and non-key frame observations are attributed a higher noise than the surrounding key frames.
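Interpreting the products in Equation (15) as quaternion multiplications (an assumption on our part; ordering conventions vary between authors), the noise-free camera-to-body transformation can be sketched as follows, with illustrative names throughout.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions stored as [q0, q1, q2, q3]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def camera_to_body(e_phi, e_bc):
    """Equation (15) without the noise term: rotate an SfM camera-frame
    orientation estimate into the body frame via the boresight quaternion."""
    return quat_mult(e_phi, e_bc)
```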

3.2.3. Sigma Point Kalman Smoother

To further improve the accuracy of the system, a smoothing algorithm is used. By applying a smoothing algorithm, an optimal solution is found for the position and orientation of the LiDAR system based on the entire set of flight observations. The smoothing algorithm involves running two independent filters forward and backward in time. The optimal state estimate for each epoch is then found by optimally combining the forward and backward estimates and their error covariances. It is well known that these smoothed estimates, which incorporate all measurements, provide a significantly improved estimate of the vehicle's state; see Shin [23] for an example.
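For the vector-valued states, the forward-backward combination at each epoch can be sketched as a covariance-weighted average of two independent estimates, as below. This is a minimal illustration: the quaternion components require renormalisation and are not treated specially here, and the names are ours.

```python
import numpy as np

def combine_estimates(x_f, P_f, x_b, P_b):
    """Fuse forward and backward filter estimates for one epoch.

    x_f, x_b: state estimates; P_f, P_b: their error covariance matrices.
    The covariance-weighted combination minimises the combined error
    covariance of two independent estimates of the same state.
    """
    P_f_inv = np.linalg.inv(P_f)
    P_b_inv = np.linalg.inv(P_b)
    P_s = np.linalg.inv(P_f_inv + P_b_inv)         # smoothed covariance
    x_s = P_s @ (P_f_inv @ x_f + P_b_inv @ x_b)    # smoothed state
    return x_s, P_s
```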

3.3. Calibration

The calibration of a LiDAR system is an important step in generating an accurate point cloud. The determination of the calibration parameters is made particularly difficult by the use of a system made up of off-the-shelf, low-cost components that introduce significant errors and have poorly defined internal coordinate systems. The lever-arm offsets between the laser scanner and the navigation system ($\mathbf{a}_b$) and between the GPS and the IMU ($\bar{\mathbf{r}}_{GPS}$) have been manually measured to cm-level accuracy. The boresight angles ($R_s^b$) between the laser scanner and the IMU are effectively hidden by the resolution and accuracy of the laser scanner as well as the orientation errors of any determined trajectory. A thorough literature search revealed no method which could repeatably resolve these angles; as such, they have been assumed to be zero.
The determination of the boresight angles ($\mathbf{e}_b^c$) between the camera and the IMU requires careful attention, as any error in these angles will introduce errors into the trajectory determination algorithm. A technique outlined by Hol et al. [41], which makes use of observations of the gravity vector made by both sensors, was employed for this task. This procedure was repeated on 15 separate occasions to ensure that the correct boresight angles were found. Once the calibration parameters have been determined, the repeatability of the individual mounts and the strap-down nature of the system suggest that they will remain constant for future surveys.

3.4. Point Cloud Generation and Accuracy Assessment

The University of Tasmania's farm was used as the study area to assess the accuracy of the point cloud generated by the UAV platform (Figure 3). The 100 × 100 m area provides significant variation in slope as well as an area of significant canopy coverage in the south west. Furthermore, five different planar man-made features are present within the area. This area was surveyed using 8 different flight transects during 4 different flights at an average flying height of 50 m and an approximate horizontal velocity of 3.3 m/s. As previously mentioned, only scan angles between ±30° were used for mapping. Under these conditions the resultant point cloud has a swath width of 57 m and a point density of approximately 50 points per m2. The laser footprint at nadir is 0.69 m along track and 0.07 m across track. This results in significant overlap of laser footprints along track and a gap of approximately 0.06 m across track. The footprint size increases towards the edge of the swath depending on the range and the angle of incidence with the terrain. Each of the four flights was flown using the OktoKopter's on-board autopilot; the flights are summarised in Table 2. All flights were flown in very similar conditions, indicating that the on-board autopilot provides an adequate solution for following predefined flight paths.
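As a quick consistency check on these figures (an illustrative calculation, not taken from the paper), the swath width follows directly from the flying height $h$ and the maximum scan angle $\theta$:

$$w = 2h\tan\theta = 2 \times 50\ \mathrm{m} \times \tan 30^\circ \approx 57.7\ \mathrm{m}$$

which agrees with the 57 m reported above. Similarly, the along track footprint at nadir follows from the 0.8° transversal beam divergence: $50\ \mathrm{m} \times \tan 0.8^\circ \approx 0.70\ \mathrm{m}$, matching the stated 0.69 m.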
Each flight was processed twice to assess the effect of the inclusion or otherwise of SfM observations. To support this assessment, 32 high reflectivity targets were placed across the study site. The 0.3 m2 target size allowed a minimum of 4 direct observations to be made of each target under the targeted flying conditions. The design of these targets allowed them to be identified within the point cloud based on the return pulse width, which is reported by the Ibeo LUX and can be considered analogous to intensity for this study. A threshold at which these targets could be identified was found by examining a histogram of return pulse widths produced for each point cloud. As trees were the only other features within the study area producing pulse widths above this threshold, ground control strikes were identified as ground points (see Section 3.5) with a pulse width above this threshold.
The location of each ground control target as measured by the laser scanner was determined as the mean of all identifiable strike positions. This location was compared to the location of the center point of each target measured using dual frequency differential GPS (±0.02 m horizontal and ±0.05 m vertical) to determine the accuracy of the UAV-borne LiDAR system. Only ground control targets with 5 or more identifiable strikes were used in this comparison, resulting in a small number of targets at the edge of the scan being excluded. The mean difference and standard deviation in the North, East and Up values were calculated for each transect, as well as the absolute horizontal error statistics. Finally, the points obtained over planar surfaces (e.g., building roofs) were extracted from each of the 8 point clouds. After applying a least squares fit of a plane to the extracted data, the standard deviation of the residuals was calculated. This standard deviation gives an indication of the measurement performance of the laser scanner and the internal precision of the vertical measurements within the point cloud.
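The planar-surface precision check can be sketched as an ordinary least squares plane fit. The version below assumes near-horizontal surfaces so that residuals are measured vertically; it is an illustration rather than the authors' exact procedure, and the names are ours.

```python
import numpy as np

def plane_fit_std(points):
    """Fit z = a*x + b*y + c by least squares and return the standard
    deviation of the vertical residuals.

    points: (n, 3) array of mapping-frame coordinates over one planar surface.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return np.std(residuals)
```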

3.5. Individual Tree Metrics

Once generated, the LAS point clouds were classified into ground and non-ground points using the filtering technique available in LAStools [42]. The laser heights above the ground were then calculated by subtracting the ground elevation from the absolute point height. Four individual trees (as shown in Figure 4) were then manually extracted from each of the point clouds in which they were measured. For each tree, the height and location were determined from the highest point above the ground. Crown width was measured as the average of two perpendicular lines through the tree top location. These lines were chosen as the set of two perpendicular cardinal directions that produced the largest value. Using the largest value partially negates the effect of shadowing on the far side of the canopy induced by the low flying height. Finally, crown area was measured as the area of a 2D convex hull around the entire set of points comprising the tree crown. The standard deviation of this set of measurements was then taken as a measure of repeatability in the measurements of individual tree metrics.
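A simplified sketch of these tree-level measurements is given below. For brevity, the search over perpendicular cardinal directions for the crown width is reduced to axis-aligned extents, so it only approximates the scheme described above; scipy is assumed and all names are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def tree_metrics(points):
    """Tree top location, height, crown width and crown area for one tree.

    points: (n, 3) array of (x, y, height-above-ground) for a manually
    delineated tree crown.
    """
    top = points[np.argmax(points[:, 2])]       # highest point above ground
    height = top[2]

    # Axis-aligned stand-in for the two perpendicular crown-width lines.
    width_ns = points[:, 1].max() - points[:, 1].min()
    width_ew = points[:, 0].max() - points[:, 0].min()
    crown_width = 0.5 * (width_ns + width_ew)

    # For a 2D hull, scipy's `volume` attribute is the enclosed area.
    crown_area = ConvexHull(points[:, :2]).volume
    return top[:2], height, crown_width, crown_area
```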
The point clouds were also decimated to 8 points per m2 to simulate the point densities of modern full-scale airborne discrete-return data as used in studies such as Shrestha and Wynne [43], Blanchard et al. [44], and Adams [45]. The decimation procedure is similar to the method outlined in Raber et al. [46]. It involved selecting every second beam from the second and fourth layers of the Ibeo LUX laser scanner. First, second and third returns were kept for every selected beam. The same individual tree metrics were then computed for each of the decimated point clouds for comparison. This comparison solely shows the effect of variable point density and does not account for the effect of the larger footprint diameters produced by full-scale scanners.
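Following that description, the decimation can be sketched as a boolean mask over layer number and beam parity. The per-point fields layer and beam_index are hypothetical, and this is an illustration of the idea rather than the exact procedure of Raber et al. [46].

```python
import numpy as np

def decimate(points, layer, beam_index):
    """Simulate ~8 pt/m2 full-scale data from the UAV point cloud.

    points:     (n, k) per-point records; all returns of a kept beam survive
    layer:      (n,) Ibeo LUX layer number for each point, 1..4
    beam_index: (n,) index of the originating beam within its scan line
    """
    keep = np.isin(layer, [2, 4]) & (beam_index % 2 == 0)
    return points[keep]
```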

4. Results and Discussion

4.1. Trajectory Generation

Observations of the on-board position and orientation sensors were made for the entirety of each flight. The vibration isolation properties of the sensor mount allowed a high signal to noise ratio in the IMU observations to be achieved. These observations did not exceed the maximum range of the gyroscopes and accelerometers at any stage. GPS lock was also maintained by all receivers for each flight. A minimum of 5 satellites was observed by the dual frequency receiver at all times during each flight. This allowed accurate GPS position and velocity observations to be computed for each of the four flights.
The camera to IMU calibration procedure produced results with 99% confidence intervals of 0.11°, 0.17° and 0.18° for pitch, roll and heading. These results are consistent with the results presented in Hol et al. [41] and can be considered to have been resolved to an accuracy suitable for this application. The SfM algorithm was able to orient all of the frames from each of the four flights. However, the accuracy of the observations from images dominated by the building in the middle of the study area was reduced due to the relatively low number of SIFT features detected in these frames (Figure 5(a)). Within a forested environment, areas of bare earth will produce a similar reduction in SIFT features. The inclusion of these frames therefore represents a reasonable determination of the spatial accuracy of a point cloud generated in these environments. A high number of SIFT features were detected in frames that included a significant area of vegetation cover, as demonstrated in Figure 5(b). These results suggest that when used in densely vegetated environments, the SIFT algorithm should provide a reliable number of features for the SfM technique to operate accurately.
Figures 6(a) and 6(b) provide a comparison between the trajectory estimates for Flight 2 produced by the SPKS with and without the inclusion of SfM observations. As it was not possible to determine the true pose of the system, this comparison only allows the discrepancy between the two solutions to be observed. In this case, the orientation estimates are altered by the inclusion of the SfM observations. Considering that an error of only 0.5° in an orientation angle can result in an on-ground error of 0.2 m, these differences are highly significant; whether they represent an improvement is determined by examining the spatial accuracy of the point cloud. There are only comparatively small differences in the estimation of position, which is governed by the GPS observations.

4.2. Point Cloud Properties

Figure 7 shows an example of the point cloud generated for Transect 2a. This point cloud has a point density of 42.2 points per m2. Table 3 gives a summary of the point cloud properties for each of the eight transects. The average point density, including all three returns over every flight, was 43.21 points per m2. This density is significantly higher than that of the data currently used in forestry management and research (8 points per m2) [43]. The number of second and third returns across the entire study area was low; however, in the areas of dense vegetation the percentage of second returns increased up to 24% and third returns up to 4.2%. This increase, along with visual inspection of the point clouds, is sufficient to suggest that points are being measured from within the canopy for use in the modelling of key forest metrics such as leaf area index and above ground biomass. An average area of 11,288 m2 was mapped in each flight, which allows multiple transects to be flown over individual forest plots in a single flight.
The internal precision of the point cloud is highlighted by the standard deviations across planar surfaces. The standard deviation of the residuals from least squares best-fit planes fitted to each surface was found to be 0.04 m, which is within the measurement precision of the Ibeo LUX (0.1 m at 1σ). This is consistent with the results of [14], who reported similar variation with an Ibeo LUX sensor.

4.3. Point Cloud Accuracy

The point clouds generated from the IMU/GPS-only trajectory were first evaluated against the observed ground control targets. The method used to identify the points belonging to ground control allowed an average of 14 points per target to be observed. Across the 8 point clouds, only 3 targets were rejected because they were observed with fewer than five points. All 3 of these rejected targets were only partially observed at the edge of the swath. Table 4(a) shows that the errors are generally within the expected values from the stochastic modelling outlined in Wallace et al. [22]. The standard deviations in the North and East measurements are similar, which suggests that they are not dominated by the effect of the beam divergence properties of the Ibeo LUX laser scanner.
The results from the IMU/GPS/SfM solution shown in Table 4(b) are close to the expected values from stochastic modelling [22]. The horizontal RMS error across all of the 130 measured ground control locations is 0.34 m (expected 0.26 m) and the vertical error is 0.15 m (expected 0.14 m). One source of error which is not included in the stochastic modelling can be attributed to the use of large ground control targets and the systematic sampling properties of the laser scanner. This target size is required in order to ensure that the ground control target is directly measured and that these measurements can be identified. The use of an average position from all identified target strikes relies on either an even point sampling across the target or a significant number of random strikes being recorded. This is potentially biased in the along track direction by the location of scan lines across the target. On average, 19 points were used in the calculation of each ground control target location.
Another effect of the averaging technique used is that the error due to beam divergence is not fully represented in the final RMS errors. Beam divergence, which is high in the Ibeo LUX laser scanner, predominantly affects the horizontal accuracy of the point cloud in the along track direction. The standard deviations in flights one (flown north-south) and two (flown east-west) suggest that some of this effect is captured. The effects of these errors are more evident when examining the high intensity returns within the point clouds, in which the footprints over individual ground control targets can measure up to 0.6 m in the along track direction. This averaging will not occur for trees measured in future surveys; therefore, careful consideration will need to be given to tree metrics measured in the horizontal direction, such as canopy width. These measurements are likely to be exaggerated and may require adjustment before being applied within any further modelling.
In comparison to the IMU/GPS-only solution, the results with the inclusion of SfM observations show significant improvement in the horizontal component. The RMS error for the 130 ground control targets dropped from 0.61 m without SfM observations to 0.34 m with SfM observations. This improvement, combined with the difference in orientation shown in Figure 6(b), suggests that the SfM algorithm has resulted in an improvement in the estimation of orientation. As expected, the vertical errors within the point clouds from both solutions are similar. These results suggest that both of the assessed solutions are suitable for use in forest inventory assessment. However, the SfM observations allow the accuracy to improve to a level comparable to that achieved by modern full-scale systems (based on the values reported in [47]). This improvement will allow for direct comparison and integration of the two datasets. Furthermore, the improvement in accuracy enhances the reliability and suitability of the platform for use in multi-temporal surveys.

4.4. Individual Tree Metrics

At the point densities commonly used for the determination of forest metrics, area-based approaches have been shown to produce comparable results to individual tree methods [48]. Furthermore, any increase in point density will likely improve the individual tree-level results and allow for the derivation of new metrics such as stem quality [45]. The improvement in the repeatability of tree height and tree location, derived through the comparison of the UAV point clouds and a set of decimated versions, is demonstrated in Figure 8. The results shown in Table 5 indicate, through lower standard deviations, that increasing the sampling density of a tree crown increases the likelihood that the top of the tree will be sampled. Furthermore, if the tree top is not directly observed, there will be a significant underestimation of the tree height.
Some slight improvement is also seen in the repeatability of the measurement of canopy width. This measurement, however, is highly affected by the beam divergence as well as the low flying height and the angle of incidence of the laser beam. The combination of low flying height and large angles of incidence produces a shadowing effect which prevents the far edge of the canopy from being measured. The effect of shadowing could be reduced by choosing a survey design that further restricts the angle of incidence to within ±15° over the targeted area. Furthermore, merging point clouds from multiple offset transects will increase the likelihood that the whole crown is sampled. This reduces shadowing and will significantly improve the crown width measurements.

5. Conclusions and Future Work

This paper has outlined the development of a low-cost Unmanned Aerial Vehicle-Light Detection and Ranging (UAV-LiDAR) system. This included the development of a Sigma Point Kalman Smoother (SPKS) that optimally combines observations from a Micro-Electromechanical System (MEMS) based Inertial Measurement Unit (IMU), a GPS receiver, and orientation observations derived from High Definition (HD) video via a Structure from Motion (SfM) algorithm, in order to determine an accurate estimate of aircraft position and orientation. The system was assessed with application to forest inventory to determine the spatial accuracy of the resultant point clouds and the repeatability of the measurement of individual tree height, location and canopy width.
We demonstrated that, with the inclusion of observations of orientation from video, this system is capable of producing point clouds with RMS errors of 0.34 m horizontally and 0.15 m vertically for a nominal flying height of 50 m. This represents a 68% reduction in variance within the horizontal component when compared to point clouds generated with the same SPKS without video observations. This accuracy, combined with the very high resolution point clouds (with densities up to 62 points per m2), allowed measurements of tree height, location, and canopy width to be made with standard deviations as small as 0.05 m for height, 0.44 m for location and 0.25 m for canopy width.
Future research is aimed at performing forest surveys to further evaluate the UAV’s potential within the forestry industry. These surveys will be designed to assess the UAV as a tool which allows forest managers to make more informed decisions on pruning and thinning regimes, monitoring tree health and defoliation, as well as improving the accuracy of allometric forest growth models. To achieve this goal, further research is required to analyse the full effects of the reduced flying heights and increased point densities in comparison to full-scale LiDAR on the derivation of the metrics required to make these assessments.
Although future developments of this system (including hardware and battery technology) will increase flight endurance, the current system has been shown to be capable of flying multiple transects over an individual forest plot in a single flight. These results have confirmed that our UAV-LiDAR system is a suitable platform for the generation of high resolution point clouds for assessing forest structure at the individual tree level.

Acknowledgments

We acknowledge the Winfred Violet Scott Trust for funding support in purchasing the OktoKopter and the laser scanner. The Australian Antarctic Division is acknowledged for providing workshop time in the construction of the mounting system. We also acknowledge the considerable help and technical know-how of Tony Veness in the design and construction of the sensor mount.

References

  1. Hyyppä, J.; Hyyppä, H.; Leckie, D.; Gougeon, F.; Yu, X.; Maltamo, M. Review of methods of small-footprint airborne laser scanning for extracting forest inventory data in boreal forests. Int. J. Remote Sens 2008, 29, 1339–1366. [Google Scholar]
  2. Lefsky, M.; Cohen, W.; Parker, G.; Harding, D. Lidar remote sensing for ecosystem studies. Bioscience 2002, 52, 19–30. [Google Scholar]
  3. Akay, A.E.; Ouz, H.; Karas, I.R.; Aruga, K. Using LiDAR technology in forestry activities. Environ. Monit. Assess 2009, 151, 117–125. [Google Scholar]
  4. Erdody, T.L.; Moskal, L.M. Fusion of LiDAR and imagery for estimating forest canopy fuels. Remote Sens. Environ 2010, 114, 725–737. [Google Scholar]
  5. Morsdorf, F.; Nichol, C.; Malthus, T.; Woodhouse, I.H. Assessing forest structural and physiological information content of multi-spectral LiDAR waveforms by radiative transfer modelling. Remote Sens. Environ 2009, 113, 2152–2163. [Google Scholar]
  6. Lim, K.; Treitz, P.; Wulder, M.; St-Onge, B.; Flood, M. LiDAR remote sensing of forest structure. Prog. Phys. Geog 2003, 27, 88–106. [Google Scholar]
  7. Barazzetti, L.; Remondino, F.; Scaioni, M. Automation in 3D reconstruction: Results on different kinds of close-range blocks. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2010, 38, 55–61. [Google Scholar]
  8. Chiabrando, F.; Nex, F.; Piatti, D.; Rinaudo, F. UAV and RPV systems for photogrammetric surveys in archaelogical areas: Two tests in the Piedmont region (Italy). J. Archaeol. Sci 2011, 38, 697–710. [Google Scholar]
  9. Sugiura, R.; Noguchi, N.; Ishii, K. Remote-sensing technology for vegetation monitoring using an unmanned helicopter. Biosyst. Eng 2005, 90, 369–379. [Google Scholar]
  10. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral remote sensing from unmanned aircraft: Image processing workflows and applications for rangeland environments. Remote Sens 2011, 3, 2529–2551. [Google Scholar]
  11. Hunt, E., Jr; Hively, W.; Fujikawa, S.; Linden, D.; Daughtry, C.; McCarty, G. Acquisition of NIR-Green-Blue digital photographs from unmanned aircraft for crop monitoring. Remote Sens 2010, 2, 290–305. [Google Scholar]
  12. Tao, W.; Lei, Y.; Mooney, P. Dense Point Cloud Extraction from UAV Captured Images in Forest Area. Proceedings of the 2011 IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services, Fuzhou, China, 29 June–1 July 2011; pp. 389–392.
  13. Dandois, J.P.; Ellis, E.C. Remote sensing of vegetation structure using computer vision. Remote Sens 2010, 2, 1157–1176. [Google Scholar]
  14. Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm 2010, 65, 514–522. [Google Scholar]
  15. Lin, Y.; Hyyppä, J.; Jaakkola, A. Mini-UAV-borne LIDAR for fine-scale mapping. IEEE Geosci. Remote Sens. Lett 2011, 8, 426–430. [Google Scholar]
  16. Choi, K.; Lee, I.; Hong, J.; Oh, T.; Shin, S. Developing a UAV-based rapid mapping system for emergency response. Proc. SPIE 2009, 7332, 733209. [Google Scholar]
  17. Nagai, M.; Shibasaki, R.; Kumagai, H.; Ahmed, A. UAV-borne 3-D mapping system by multisensor integration. IEEE T. Geosci. Remote 2009, 47, 701–708. [Google Scholar]
  18. Miller, R.; Amidi, O. 3-D Site Mapping with the CMU Autonomous Helicopter. Proceedings of the 5th International Conference on Intelligent Autonomous Systems, Sapporo, Japan, June 1998.
  19. Glennie, C. Rigorous 3D error analysis of kinematic scanning LIDAR systems. J. Appl. Geodes 2007, 1, 147–157. [Google Scholar]
  20. Schwarz, K.; El-Sheimy, N. Mobile Mapping Systems: State of the Art and Future Trends. Int. Arch. Photogr. Remote Sens. Spat. Inf. Sci 2004, 35, 10. [Google Scholar]
  21. El-Sheimy, N. Emerging MEMS IMU and Its Impact on Mapping Applications; Photogrammetric Week: Stuttgart, Germany, 7–11 September 2009; pp. 203–216. [Google Scholar]
  22. Wallace, L.; Lucieer, A.; Turner, D.; Watson, C. Error assessment and mitigation for hyper-temporal UAV-borne LiDAR surveys of forest inventory. Proceedings of Silvilaser 2011, Hobart, Australia, 16–20 October 2011.
  23. Shin, E. An Unscented Kalman Filter for In-Motion Alignment of Low-Cost IMUs. Proceedings of Position Location and Navigation Symposium, Huntsville, AL, USA, 26–29 April 2004; pp. 273–279.
  24. El-Sheimy, N.; Chiang, K.W.; Noureldin, A. The Utilization of Artificial Neural Networks for Multisensor System Integration in Navigation and Positioning Instruments. IEEE T. Instrum. Meas 2006, 55, 1606–1615. [Google Scholar]
  25. Chiang, K.W.; Chang, H.W.; Li, C.Y.; Huang, Y.W. An artificial neural network embedded position and orientation determination algorithm for low cost MEMS INS/GPS integrated sensors. Sensors 2009, 9, 2586–2610. [Google Scholar]
  26. Bryson, M.; Sukkarieh, S. A Comparison of Feature and Pose-Based Mapping Using Vision, Inertial and GPS on a UAV. Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 4256–4262.
  27. Andersen, E.; Taylor, C. Improving MAV Pose Estimation Using Visual Information. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3745–3750.
  28. Gajdamowicz, K.; Öhman, D.; Horemuz, M. Mapping and 3D Modelling of Urban Environment Based on Lidar, Gps/Imu and Image Data. Proceeding of 5th International Symposium on Mobile Mapping Technology, Padova, Italy, 28–31 May 2007.
  29. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: exploring photo collections in 3D. ACM Trans. Graph 2006, 25, 835–846. [Google Scholar]
  30. Morsdorf, F.; Kötz, B.; Meier, E.; Itten, K.; Allgöwer, B. Estimation of LAI and fractional cover from small footprint airborne laser scanning data based on gap fraction. Remote Sens. Environ 2006, 104, 50–61. [Google Scholar]
  31. Zhang, H.; Wu, Y.; Wu, W.; Wu, M.; Hu, X. Improved multi-position calibration for inertial measurement units. Meas. Sci. Technol 2010, 21, 15107–15117. [Google Scholar]
  32. Bouguet, J.Y. Camera Calibration Toolbox for Matlab. 2010. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 12 January 2012).
  33. Van Der Merwe, R.; Wan, E. Sigma-Point Kalman Filters for Integrated Navigation. In Proceedings of the 60th Annual Meeting of the Institute of Navigation (ION), Dayton, OH, USA, 7–9 June 2004; pp. 641–654.
  34. Wan, E.; van der Merwe, R. The Unscented Kalman Filter for Nonlinear Estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium, Lake Louise, AB, Canada, 1–4 October 2000; pp. 153–158.
  35. Crassidis, J. Sigma-point Kalman filtering for integrated GPS and inertial navigation. IEEE T. Aero. Elec. Sys 2006, 42, 750–756. [Google Scholar]
  36. Gavrilets, V. Autonomous Aerobatic Maneuvering of Miniature Helicopters. PhD Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2003.
  37. Van Der Merwe, R. Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models. PhD Thesis, Oregon Health and Science University, Portland, OR, USA, 2004. [Google Scholar]
  38. Dellaert, F.; Seitz, S.M.; Thorpe, C.E.; Thrun, S. Structure from Motion without Correspondence. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Hilton Head, SC, USA, 13–15 June 2000; pp. 2557–2564.
  39. Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis 2004, 60, 91–110. [Google Scholar]
  40. Nistér, D. An efficient solution to the five-point relative pose problem. IEEE T. Pattern. Anal 2004, 26, 756–770. [Google Scholar]
  41. Hol, J.D.; Schon, T.B.; Gustafsson, F. Modeling and calibration of inertial and vision sensors. Int. J. Rob. Res 2010, 29, 231–244. [Google Scholar]
  42. Isenberg, M. LAStools—Efficient Tools for LiDAR Processing. 2011. Available online: http://www.cs.unc.edu/~isenburg/lastools/ (accessed on 15 December 2011).
  43. Shrestha, R.; Wynne, R.H. Estimating biophysical parameters of individual trees in an urban environment using small footprint discrete-return imaging lidar. Remote Sens 2012, 4, 484–508. [Google Scholar]
  44. Blanchard, S.D.; Jakubowski, M.K.; Kelly, M. Object-based image analysis of downed logs in disturbed forested landscapes using lidar. Remote Sens 2011, 3, 2420–2439. [Google Scholar]
  45. Adams, T. Remotely Sensed Crown Structure as an Indicator of Wood Quality. A Comparison of Metrics from Aerial and Terrestrial Laser Scanning. Proceedings of Silvilaser 2011, Hobart, Australia, 16–20 October 2011.
  46. Raber, G.; Jensen, J.; Hodgson, M.; Tullis, J.; Davis, B.; Berglund, J. Impact of lidar nominal post-spacing on DEM accuracy and flood zone delineation. Photogramm. Eng. Remote Sensing 2007, 73, 793–804. [Google Scholar]
  47. Goulden, T.; Hopkinson, C. The forward propagation of integrated system component errors within airborne lidar data. Photogramm. Eng. Remote Sensing 2010, 76, 589–601. [Google Scholar]
  48. Yu, X.; Hyyppä, J.; Holopainen, M.; Vastaranta, M. Comparison of area-based and individual tree-based methods for predicting plot-level forest attributes. Remote Sens 2010, 2, 1481–1495. [Google Scholar]
Figure 1. The multi-rotor OktoKopter UAV platform with the vibration isolated sensor frame, carrying the laser scanner, MEMS based IMU, GPS receiver and antenna, and video camera.
Figure 2. The modified LiDAR workflow used to produce a point cloud from the UAV system. Significant modifications, which account for the miniaturised sensors and increased resolution when producing a point cloud from data captured on-board a mini-UAV, are highlighted in red.
Figure 3. The test field (red) used for both the calibration and the accuracy assessment of the UAV platform, showing the flight paths (yellow).
Figure 4. Locations of the individual trees manually delineated from the point cloud for use in determining the repeatability of individual tree metric measurements using the UAV-borne LiDAR data.
Figure 5. The detection of SIFT features (yellow crosses) within (a) a frame dominated by the building in the center of the study area, in which 1,714 SIFT features were found, and (b) a frame dominated by natural features, including trees, in which 2,959 features were found.
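As an illustration of the per-frame feature detection shown in Figure 5, the following minimal sketch (not the authors' implementation) detects SIFT keypoints in each frame of a video; it assumes the opencv-python package, and the video filename is hypothetical.

```python
import cv2

# A minimal sketch of per-frame SIFT detection as in Figure 5.
# Assumptions: opencv-python >= 4.4 is installed and "flight_video.mp4"
# is a hypothetical stand-in for the on-board HD video file.
cap = cv2.VideoCapture("flight_video.mp4")
sift = cv2.SIFT_create()

ok, frame = cap.read()
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # SIFT operates on intensity
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    print(f"{len(keypoints)} SIFT features detected in this frame")
    ok, frame = cap.read()
cap.release()
```

Feature counts on the order of those in the caption (1,714 and 2,959) indicate ample texture for frame-to-frame matching in both built and vegetated scenes.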
Figure 6. The differences in position (a) and orientation (b) as estimated by the Sigma Point Kalman Smoother with and without the inclusion of observations of orientation generated by the SfM algorithm. A difference in orientation of 0.5° can result in a 0.2 m difference in the position of a measured point at the nominal flying height used in this trial.
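The caption's rule of thumb follows from a small-angle argument: a rigid rotation of the platform by an angle Δθ displaces a return at slant range r by roughly r·Δθ (in radians). A quick check of the orders of magnitude, where the slant ranges are illustrative assumptions rather than values reported in the paper:

```python
import math

# Small-angle check of how an attitude error displaces a LiDAR return:
# displacement ~= slant range * angular error in radians. The ranges
# below are illustrative assumptions, not values from the paper.
def point_displacement(slant_range_m: float, error_deg: float) -> float:
    return slant_range_m * math.radians(error_deg)

for r in (23.0, 46.0):
    print(f"0.5 deg error at {r:.0f} m range -> {point_displacement(r, 0.5):.2f} m")
# 0.5 deg at ~23 m slant range -> ~0.20 m, the magnitude quoted in Figure 6.
```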
Figure 7. An example point cloud produced by the SfM trajectory determination algorithm. This particular point cloud has an average point density of 40 points per m² and covers 4,877 m².
Figure 8. Example point clouds of Tree 1. Measurements of tree location and height from each of the four flights are shown with the (a) profile and (b) footprint of the original point cloud from flight 4, and the (c) profile and (d) footprint of the decimated point cloud from flight 4. Flight 1a is blue, 1b green, 2b yellow, 3b black, 4a red, and 4b purple.
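Figure 8(c,d) and Table 5(b) rely on point clouds decimated to a fixed density. Purely as a sketch of the idea (the paper's exact thinning procedure is not reproduced here), density can be capped by randomly retaining points within each 1 m grid cell; `points` is assumed to be an (N, 3) array of x, y, z coordinates in metres.

```python
import numpy as np

# Sketch of grid-based thinning: keep at most `target_per_m2` randomly
# chosen returns in every 1 m x 1 m cell. Assumes `points` is an (N, 3)
# array of x, y, z in metres; this is not the authors' procedure.
def decimate(points: np.ndarray, target_per_m2: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    cells = np.floor(points[:, :2]).astype(np.int64)   # 1 m grid cell indices
    order = np.lexsort((cells[:, 1], cells[:, 0]))     # group points by cell
    sorted_cells = cells[order]
    # Positions where the cell id changes mark group boundaries.
    boundaries = np.flatnonzero(np.any(np.diff(sorted_cells, axis=0) != 0, axis=1)) + 1
    keep = []
    for group in np.split(order, boundaries):
        if len(group) > target_per_m2:
            group = rng.choice(group, size=target_per_m2, replace=False)
        keep.extend(group.tolist())
    return points[np.sort(keep)]
```

Thinning the 36–63 points per m² clouds of Table 3 in this way yields the 8 points per m² datasets used for Table 5(b).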
Table 1. Properties of the MEMS-based Microstrain 3DM-GX3-35 IMU.

|                | Gyroscopes | Accelerometers |
|----------------|------------|----------------|
| Range          | 50°/s      | 1.7 g          |
| Non-linearity  | 0.2%       | 0.2%           |
| Bias Stability | 0.2°/s     | 0.003 g        |
Table 2. Summary of the four flights flown over the study area.

| Flight | Flight Time (s) | Mean Height (m) | Mean Horizontal Velocity (m/s) | Primary Heading (deg) | Key Frames/s |
|--------|-----------------|-----------------|--------------------------------|-----------------------|--------------|
| 1      | 161             | 48              | 3.77                           | 180                   | 2.3          |
| 2      | 137             | 54              | 3.27                           | 90                    | 1.8          |
| 3      | 130             | 46              | 3.16                           | 135                   | 1.9          |
| 4      | 195             | 44              | 3.33                           | 225                   | 1.8          |
Table 3. Properties of the eight generated point clouds.

| Transect | Area (m²) | Point Density (points/m²) | 2nd Returns (%) | 3rd Returns (%) |
|----------|-----------|---------------------------|-----------------|-----------------|
| 1a       | 5,931     | 38.7                      | 4.09            | 0.35            |
| 1b       | 6,288     | 37.2                      | 5.88            | 0.53            |
| 2a       | 5,586     | 62.6                      | 3.46            | 0.54            |
| 2b       | 4,922     | 42.2                      | 13.90           | 2.05            |
| 3a       | 5,459     | 35.9                      | 15.01           | 2.02            |
| 3b       | 4,986     | 36.5                      | 15.41           | 2.37            |
| 4a       | 6,176     | 40.6                      | 2.92            | 0.25            |
| 4b       | 5,811     | 52.0                      | 2.63            | 0.17            |
Table 4. Mean, standard deviation (σ), and RMS error of the differences between ground control target locations measured by the LiDAR system (a) without and (b) with SfM observations and the locations measured by a differential GPS survey. The expected values derived from the stochastic modelling of Wallace et al. [22] are also included.

(a) Without SfM observations

| Flight   | No. Targets | East Mean (m) | East σ (m) | North Mean (m) | North σ (m) | Up Mean (m) | Up σ (m) | Hor. RMSE (m) | Vert. RMSE (m) |
|----------|-------------|---------------|------------|----------------|-------------|-------------|----------|---------------|----------------|
| 1a       | 16          | −0.11         | 0.23       | 0.26           | 0.21        | 0.11        | 0.23     | 0.57          | 0.24           |
| 1b       | 14          | 0.22          | 0.40       | −0.13          | 0.31        | −0.02       | 0.07     | 0.62          | 0.07           |
| 2a       | 17          | 0.08          | 0.59       | −0.06          | 0.33        | 0.09        | 0.22     | 0.66          | 0.23           |
| 2b       | 16          | −0.13         | 0.42       | −0.10          | 0.21        | 0.01        | 0.05     | 0.48          | 0.05           |
| 3a       | 16          | −0.17         | 0.55       | 0.18           | 0.34        | 0.27        | 0.09     | 0.85          | 0.29           |
| 3b       | 17          | −0.04         | 0.14       | 0.30           | 0.24        | 0.07        | 0.10     | 0.40          | 0.12           |
| 4a       | 15          | −0.06         | 0.50       | −0.05          | 0.54        | 0.00        | 0.17     | 0.70          | 0.16           |
| 4b       | 18          | −0.13         | 0.37       | 0.15           | 0.42        | −0.00       | 0.20     | 0.58          | 0.19           |
| all      | 130         | −0.03         | 0.41       | 0.07           | 0.42        | 0.06        | 0.18     | 0.60          | 0.19           |
| Expected |             |               |            |                |             |             |          | 0.54          | 0.16           |

(b) With SfM observations

| Flight   | No. Targets | East Mean (m) | East σ (m) | North Mean (m) | North σ (m) | Up Mean (m) | Up σ (m) | Hor. RMSE (m) | Vert. RMSE (m) |
|----------|-------------|---------------|------------|----------------|-------------|-------------|----------|---------------|----------------|
| 1a       | 17          | 0.02          | 0.11       | −0.12          | 0.27        | −0.02       | 0.14     | 0.38          | 0.15           |
| 1b       | 14          | 0.13          | 0.15       | −0.05          | 0.24        | 0.09        | 0.08     | 0.41          | 0.12           |
| 2a       | 17          | −0.03         | 0.25       | 0.03           | 0.06        | 0.04        | 0.14     | 0.35          | 0.17           |
| 2b       | 16          | −0.04         | 0.12       | −0.06          | 0.07        | −0.00       | 0.03     | 0.21          | 0.03           |
| 3a       | 16          | −0.19         | 0.32       | 0.08           | 0.08        | 0.05        | 0.13     | 0.34          | 0.16           |
| 3b       | 17          | 0.09          | 0.12       | 0.17           | 0.22        | 0.11        | 0.14     | 0.31          | 0.17           |
| 4a       | 15          | 0.02          | 0.30       | −0.17          | 0.24        | −0.04       | 0.09     | 0.41          | 0.10           |
| 4b       | 18          | −0.03         | 0.22       | 0.02           | 0.26        | −0.06       | 0.15     | 0.33          | 0.16           |
| all      | 130         | 0.01          | 0.17       | −0.01          | 0.21        | 0.03        | 0.12     | 0.32          | 0.14           |
| Expected |             |               |            |                |             |             |          | 0.26          | 0.15           |
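Statistics of the kind tabulated in Table 4 can be computed from per-target coordinate differences. A minimal sketch (not the authors' code), assuming `diffs` is an (N, 3) array of (east, north, up) differences in metres between LiDAR-derived target positions and the differential GPS survey:

```python
import numpy as np

# Per-axis accuracy statistics in the spirit of Table 4. Assumes `diffs`
# is an (N, 3) array of (east, north, up) differences in metres between
# LiDAR-measured targets and the differential GPS survey.
def accuracy_stats(diffs: np.ndarray) -> dict:
    mean = diffs.mean(axis=0)            # per-axis mean difference
    sigma = diffs.std(axis=0, ddof=1)    # per-axis sample standard deviation
    rmse_hor = np.sqrt(np.mean(diffs[:, 0] ** 2 + diffs[:, 1] ** 2))
    rmse_vert = np.sqrt(np.mean(diffs[:, 2] ** 2))
    return {"mean_enu": mean, "sigma_enu": sigma,
            "rmse_horizontal_m": float(rmse_hor),
            "rmse_vertical_m": float(rmse_vert)}
```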
Table 5. Mean and standard deviation (σ) of individual tree metrics derived from (a) the full density point clouds and (b) the point clouds decimated to 8 points per m².

(a) Full density point clouds

| Tree | Measurement Count | Location σ (m) | Height Mean (m) | Height σ (m) | Width Mean (m) | Width σ (m) |
|------|-------------------|----------------|-----------------|--------------|----------------|-------------|
| 1    | 6                 | 0.44           | 10.94           | 0.05         | 8.26           | 0.25        |
| 2    | 4                 | 0.59           | 11.49           | 0.13         | 11.53          | 0.35        |
| 3    | 3                 | 0.55           | 12.71           | 0.11         | 12.93          | 1.02        |
| 4    | 5                 | 0.56           | 14.65           | 0.25         | 14.39          | 0.54        |

(b) Point clouds decimated to 8 points per m²

| Tree | Measurement Count | Location σ (m) | Height Mean (m) | Height σ (m) | Width Mean (m) | Width σ (m) |
|------|-------------------|----------------|-----------------|--------------|----------------|-------------|
| 1    | 6                 | 0.47           | 10.85           | 0.10         | 7.77           | 0.32        |
| 2    | 4                 | 0.73           | 11.25           | 0.25         | 11.21          | 0.54        |
| 3    | 3                 | 0.74           | 12.56           | 0.35         | 12.92          | 1.18        |
| 4    | 5                 | 1.14           | 14.47           | 0.31         | 14.10          | 0.36        |
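For the repeat-survey comparison in Table 5, each manually delineated tree yields a location, a height, and a crown width per flight. A minimal sketch under simple assumptions (tree top taken as the highest return, crown width as the largest horizontal extent; the paper's exact definitions may differ):

```python
import numpy as np

# Per-tree metrics in the spirit of Table 5. Assumptions (possibly
# differing from the paper's definitions): `pts` is an (N, 3) array of
# x, y, z returns for one delineated tree, and `ground_z` is the local
# ground elevation in the same vertical datum.
def tree_metrics(pts: np.ndarray, ground_z: float) -> dict:
    top = pts[np.argmax(pts[:, 2])]                       # highest return = tree top
    extent = pts[:, :2].max(axis=0) - pts[:, :2].min(axis=0)
    return {"location_xy": (float(top[0]), float(top[1])),
            "height_m": float(top[2] - ground_z),
            "crown_width_m": float(extent.max())}
```

Applying such a measurement to each repeat flight over a tree and taking the spread of the results gives the standard deviations reported in Table 5.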
