Review

Practical Guidelines for Performing UAV Mapping Flights with Snapshot Sensors

Department of Plants and Crops, UAV Research Centre, Ghent University, Coupure Links 653 Bl A, 9000 Ghent, Belgium
Remote Sens. 2025, 17(4), 606; https://doi.org/10.3390/rs17040606
Submission received: 29 November 2024 / Revised: 6 February 2025 / Accepted: 7 February 2025 / Published: 10 February 2025
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Uncrewed aerial vehicles (UAVs) have transformed remote sensing, offering unparalleled flexibility and spatial resolution across diverse applications. Many of these applications rely on mapping flights using snapshot imaging sensors for creating 3D models of the area or for generating orthomosaics from RGB, multispectral, hyperspectral, or thermal cameras. Based on a literature review, this paper provides comprehensive guidelines and best practices for executing such mapping flights. It addresses critical aspects of flight preparation and flight execution. Key considerations in flight preparation include sensor selection, flight height and ground sampling distance (GSD), flight speed, overlap settings, flight pattern, flight direction, and viewing angle; considerations in flight execution include on-site preparations (GCPs, camera settings, sensor calibration, and reference targets) as well as on-site conditions (weather and time of flight). In all these steps, high-resolution and high-quality data acquisition needs to be balanced with feasibility constraints such as flight time, data volume, and post-flight processing time. For reflectance and thermal measurements, bidirectional reflectance distribution function (BRDF) effects also influence the appropriate settings. The formulated guidelines are based on literature consensus. However, the paper also identifies knowledge gaps in mapping flight settings, particularly regarding viewing angle patterns, flight direction, and thermal imaging in general. The guidelines aim to advance the harmonization of UAV mapping practices, promoting reproducibility and enhanced data quality across diverse applications.

Graphical Abstract

1. Introduction

Uncrewed aerial vehicles (UAVs, or drones) represent one of the most important new technologies of the last decade. The civil drone market was estimated at $4.3 billion USD in 2024 [1], serving amateurs and professional photographers alike. Beyond recreational and commercial applications, drone technology has unlocked unprecedented possibilities in remote sensing. UAVs offer unmatched spatial resolution and flexibility in terms of the coverage area, spatial resolution, timing, and sensor configurations. In many sectors and scientific research fields, UAVs have become a standard remote sensing tool. The most typical approach in such fields involves conducting photogrammetry flights using snapshot 2D imaging sensors; in this context, photogrammetry should be understood in its broader sense, i.e., as the “science and technology of making measurements using photographs” [2]. The purpose of these flights can be to generate accurate 3D point clouds or 3D models, or to generate orthomosaics of (high resolution) RGB data, reflectance data (multispectral and snapshot hyperspectral cameras) or thermal data (thermal cameras). Applications include diverse areas such as precision agriculture [3], forestry [4], ecology [5,6], cultural heritage and archaeology [7,8], geosciences and glaciology [9,10,11], among others.
It takes more than being a good UAV pilot to conduct a successful mapping flight. Nowadays, good general pilot training is widely available and is often compulsory to obtain pilot licenses. While this training is essential, it rarely delves into mapping-specific techniques. In this article, this gap is addressed by providing practical guidelines for conducting mapping flights with snapshot imaging sensors. However, most of the guidelines are adaptable to other types of sensors, such as line-scanning or LiDAR devices.
The UAV mapping process typically involves three key steps: (i) off-site flight preparation and initial flight map planning, (ii) on-site preparation and execution, and (iii) data processing (Figure 1). Flight preparation begins with determining the objectives, as well as evaluating whether and when flights can legally and safely occur over the target area. This involves evaluating legal limitations, weather conditions, and the pilot’s readiness (from legal, health, and practical perspectives). These checks are essential for any UAV operation and should be part of any comprehensive UAV training program. Therefore, these will not be discussed in detail in this article.
The second aspect of flight preparation involves choosing the appropriate sensor and platform and planning the flight pattern. This step requires balancing coverage area and detail levels. Key parameters include flight altitude, overlap, speed, flight direction, overall flight pattern, and viewing angle. These factors are discussed in detail in Section 3.
Step two involves on-site preparation and execution, covered in Section 4. This includes checking appropriate weather conditions and time of flight as well as on-site pre-flight preparations, such as ground control points, camera settings, and reference targets.
Finally, the dataset produced consists of individual images, often with metadata. These can be processed into a 3D point cloud, orthomosaic and a digital elevation model (DEM, including vegetation and buildings) or digital surface model (DSM, excluding buildings and vegetation) with structure-from-motion (SfM) software. These outputs serve as inputs for further analysis. Data processing has been extensively covered in prior work [8,12,13] and is beyond the scope of this article.
Several studies have explored UAV mapping techniques:
  • O’Connor et al. [14] examined camera settings and their impact on image and orthomosaic quality, focusing on geosciences.
  • Roth et al. [15] developed mapping software and provided a strong mathematical foundation.
  • Assmann et al. [16] offered general flight guidelines, particularly for high latitudes and multispectral sensors.
  • Tmusic et al. [17] presented general flight planning guidelines, including data processing and quality control, though with less focus on flight-specific details.
This article builds upon and expands these works. It covers a broader range of sensors, applications, and geographic contexts while incorporating recent insights from literature. The paper also uncovers remaining uncertainties and knowledge gaps. To facilitate accessibility, I largely avoid extensive mathematical explanations but encourage readers seeking deeper insights to consult O’Connor et al. [14] and Roth et al. [15].
Proper flight planning and execution requires a basic understanding of the influence of sun-sensor geometry on the measured signal. Therefore, Section 2 first provides a basic introduction of sun-sensor geometry effects.

2. Sun-Sensor Geometry and BRDF

The sun’s position is defined by two parameters (Figure 2): the solar zenith angle indicates how high the sun is in the sky, representing the vertical angle, and the solar azimuth angle represents the sun’s heading (horizontal angle), measured as the angle between the sun’s projection on the Earth’s surface and the north axis. Similarly, the sensor’s perspective of an object can be described by the sensor zenith angle and the sensor azimuth angle (Figure 2). The difference between the solar and the sensor azimuth angle is referred to as the relative azimuth angle.
Figure 3a shows a meadow captured with a UAV-mounted RGB camera under varying viewing and relative azimuth angles. Despite being taken in consistent weather conditions and within a short timeframe, significant differences in brightness levels are evident across the images. Photos taken with the sun behind the camera (backscatter) appear significantly brighter than those where the sensor faces the sun (forward scatter).
This phenomenon can be visualized graphically, as demonstrated in Figure 3b, which plots the reflectance in the green wavelength for a tropical forest across various relative azimuth and sensor viewing angles. Such plots model the influence of sun-sensor geometry on reflectance and are referred to as bidirectional reflectance distribution function (BRDF) plots. In the BRDF plot, a prominent bright region—known as the hot spot—is observed in the backscatter direction where the sensor zenith angle closely aligns with the solar zenith angle. Within the hot spot region, even minor variations in viewing angles can cause substantial changes in reflectance. For instance, Li et al. [18] reported reflectance differences of up to 38% in satellite (GOES) data for viewing angle shifts as small as 2.5°.
Figure 3c,d presents BRDF plots for a simulated canopy, largely corroborating the patterns observed in the tropical forest example. These plots confirm that BRDF effects occur across wavelengths, here illustrated for the red and near-infrared bands, but also show that the magnitude and shape of the anisotropy are wavelength specific. As a consequence, vegetation indices derived from these wavelengths, such as the normalized difference vegetation index (NDVI), are also influenced by sun-sensor geometry, albeit to a lesser degree. Figure 3e shows that the simulated NDVI exhibits a pattern inverse to that of the individual wavelength bands; specifically, NDVI values are lower in the hotspot region. This phenomenon has been corroborated by observational studies [19,20].
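To make the geometry concrete, the short sketch below computes the angular separation between the direction from the target towards the sensor and the direction from the target towards the sun; when this separation approaches zero, the corresponding pixel lies in or near the hotspot region described above. This is a minimal illustration written for this text, not code from the cited studies, and the 10° warning threshold is an arbitrary assumption used only for the example.

```python
import math

def angular_separation(solar_zenith, solar_azimuth, view_zenith, view_azimuth):
    """Angle (degrees) between the target-to-sun and target-to-sensor directions.

    All inputs in degrees. The hotspot (backscatter peak) occurs where this
    separation approaches zero, i.e., the sensor looks along the solar direction.
    """
    sz, sa = math.radians(solar_zenith), math.radians(solar_azimuth)
    vz, va = math.radians(view_zenith), math.radians(view_azimuth)
    # Cosine of the angle between two unit vectors given in zenith/azimuth form
    cos_sep = (math.cos(sz) * math.cos(vz)
               + math.sin(sz) * math.sin(vz) * math.cos(sa - va))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# Example: sun at 30 deg zenith and 150 deg azimuth; a pixel viewed 25 deg
# off-nadir roughly in the backscatter direction -> close to the hotspot.
sep = angular_separation(30.0, 150.0, 25.0, 145.0)
print(f"Angular separation from hotspot: {sep:.1f} deg")
if sep < 10.0:   # illustrative threshold, not a value from the literature
    print("Warning: this viewing geometry is close to the hotspot region.")
```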
Figure 3. BRDF influence on spectral reflectance. (a) Images obtained with a UAV from a meadow from different sensor zenith and azimuth angles (Canon S110 camera, on a Vulcan hexacopter with an AV200 gimbal (PhotoHigher, Wellington, New Zealand), obtained on 28 July 2015 over a meadow near Richmond, NSW, Australia (lat: 33.611° S, lon: 150.732° E)). (b) Empirical BRDF in the green wavelength over a tropical forest (Robson Creek, Queensland, Australia (lat: 17.118° S, lon: 145.630° E), obtained with the same UAV and camera on 16 August 2015, from [21]). (c–e) Simulations of reflectance in the red (c) and near-infrared (d) spectrum and for NDVI (e) (SCOPE; for a vegetation height of 1 m, LAI of 2, chlorophyll content of 40 μg/cm2, and a fixed solar zenith angle of 30°).
Although thermal BRDF effects have received comparatively less attention [22], the anisotropy observed in thermal imagery is similar to that in reflectance measurements, including the presence of a hotspot region. This is particularly of concern for vegetation [22,23,24] and urban studies [25] and is often compounded by camera vignetting effects [26].
The influence of sun-viewing geometry on thermal and reflectance measurements has far-reaching consequences. UAV flights conducted over the same area at varying times of the day—or even at the same time across different seasons—can produce inconsistent reflectance data due to changes in the solar zenith angle. Additionally, each UAV image pixel corresponds to a slightly different viewing and azimuth angle, adding further to the complexity [27,28,29]. Indeed, even with a perfectly nadir-facing sensor, a standard camera with a 50° diagonal field of view (FOV) will have viewing angles as wide as 25° at its edges, significantly influencing reflectance measurements [30].
To address these challenges, several studies have proposed empirical corrections for sun-sensor geometry [19,22,23,30,31]. Despite these advancements, it remains strongly recommended to avoid capturing the hotspot in UAV imagery [19]. Effective flight planning should consider this by carefully selecting flight time, viewing angles, horizontal and vertical overlaps, and flight direction.
Anisotropic effects are most pronounced towards the edges of the orthomosaic, where image overlap is typically lower (Figure 11; [20]). These effects can be mitigated by including a sufficiently large buffer zone around the field or area of interest during mission planning.

3. Flight Planning

The first step in the process is to clearly define the main objectives of the flight mission:
  • Is the primary goal to generate an accurate point cloud or DSM, or is it to produce an orthomosaic? What are the desired horizontal and vertical accuracy levels?
  • What spatial resolution (ground sampling distance (GSD), see Section 3.2) is required?
  • What spectral information and resolution are necessary?
Additionally, some practical considerations must be addressed before flight planning begins:
  • Area characteristics: What is the size, shape, and terrain (relief) of the area to be covered? Are there any legal restrictions or specific flight permits required? Where are potential take-off and landing points, and what obstacles might be present in the area?
  • Timing: When can the flight be conducted? How much time is available for both the mapping flight and for generating the required outputs?
  • Weather Conditions: What weather conditions are necessary for the mission’s objectives, what are the weather limitations for safe flights, and what is the actual weather forecast?
Finally, you need to assess the available equipment and tools:
  • What type of UAV(s) and sensor(s) are accessible, and what are their specifications and limitations?
  • Which flight application will be used for planning and executing the mission?
Only by thoroughly addressing these considerations can you ensure effective planning and execution of the flight mission.
Of the many flight applications available for performing UAV flights, only a limited number are developed for UAV mapping. Waypoint functionality alone is not sufficient; a mapping app should enable users to cover an area of interest. Each of the mapping apps is typically compatible with only a limited number of UAV models. These apps can be installed on either a separate smartphone or tablet connected to the controller or on a device embedded within the controller. Some apps are free; others are not.
Only UAVs capable of waypoint flight can be used for mapping flights. However, not all UAVs with waypoint flight capabilities are compatible with the available apps, and there is often a delay before new UAV models are supported. When purchasing a new UAV for mapping purposes, it is critical to verify app compatibility. This is particularly important for newer, compact UAVs aimed at the hobby photography market.
Most flight mapping apps offer user-friendly and reliable functionality, with a similar workflow across platforms. This has made mapping flights significantly more accessible and user-friendly, reaching a broader community. First, the area of interest or the linear feature to be covered can be drawn by the user, and the UAV and sensor specified. Then, the app will automatically suggest a pattern of parallel flight lines, as well as a range of parameters. These include GSD, flight altitude, horizontal and vertical overlap, flight speed, direction, and viewing angles. While these parameter settings can be generated automatically by the app, the user can change them according to the flight objectives. The proper configuration of these parameters is essential, as it directly impacts the quality of the orthomosaic [15].
This section provides guidelines for setting these parameters based on the earlier defined specific objectives and constraints of the flight mission, emphasizing the balance between detail and quality on one hand and feasibility on the other. Feasibility involves considerations such as sensor capabilities, flight time, processing time, and data volume. A step-by-step workflow is illustrated in Figure 4; Table 1 summarizes the main recommendations for different sensors at each step.

3.1. Selection of Sensors and Lenses

The first stage of flight preparation involves selecting the appropriate sensor type (i.e., RGB, multispectral, hyperspectral, thermal) or sensor combination, as well as the UAV platform. This decision ultimately depends on the specific application and research objectives. Several reviews, such as those by Aasen et al. [30] and Maes and Steppe [3], provide comprehensive overviews of the advantages and disadvantages of each sensor type. In general, the highest spatial resolution is achieved with RGB cameras and the highest spectral resolution with multi- and hyperspectral sensors. Note that guidelines for hyperspectral sensors included in this review are not for line-scanning or push-broom scanners but for snapshot hyperspectral sensors, such as, e.g., the Cubert Firefly, Cubert Ultris X20, or imec hyperspectral payload [32].
Particularly for RGB cameras, the sensor resolution is considered the most important camera quality parameter by many. However, it is only one aspect influencing the overall camera and image quality. Additional factors affecting the quality include image stabilization (in-body and/or lens based), focal length, focus speed and focus precision, chromatic aberration, color rendering, image vignetting, and distortion [15]. The lens/sensor combination significantly influences these parameters.
Lens distortion is a critical factor because it impacts the structure-from-motion process. This can be corrected via software self-calibration during processing or through pre-calibration by imaging a 2D planar reference pattern from different angles (preferred for RGB and thermal cameras—[33,34]). Lens vignetting is less significant for photogrammetry since it can be corrected during post-processing. Structure-from-motion software typically uses the central part of images, especially in standard nadir flights. However, sufficient overlap (Section 3.3) is required to avoid vignetting impacting orthomosaic quality.
The focal length is usually expressed in mm and determines, along with the sensor size, the camera’s field of view, where a larger value corresponds to a narrower field of view. The selection of the best focal length depends on the objectives:
  • In general, ultra-wide focal length (<20 mm) should be avoided due to significant distortion issues [35].
  • Wide lenses (20–40 mm) generally show superior photogrammetry results [35,36]. With terrestrial laser scans as reference, Denter et al. [35] compared various lenses for reconstructing a 3D forest scene and found that 21 mm and 35 mm lenses performed best, as they provided a better lateral view of tree crowns and trunks. Similar results were reported for thermal cameras [37]. On the other hand, the broad range of viewing angles captured within a single image can lead to bidirectional reflectance distribution function (BRDF) issues [27,28] (Section 2), requiring higher overlap (Section 3.3).
  • Longer focal lengths (e.g., 50–100 mm) produced poorer photogrammetry results than wide-angle cameras in the mentioned study [35], but on the other hand, show less distortion and enable lower GSDs for resolutions in the sub-cm or sub-mm range (Section 3.2).
Please note that the focal length recommendations above are intended for full-frame sensors. For crop sensors or other sensor sizes, adjust the focal length to its full-frame equivalent; e.g., APS-C sensors have a crop factor of 1.6, so a 20 mm lens on an APS-C camera would have an equivalent focal length of 32 mm.
Overall, surprisingly little attention is given to camera and lens quality specifications and requirements for UAV photogrammetry. O’Connor et al. [14] argued that many studies fail to report camera specifications and settings in sufficient detail. For an overview of RGB UAV camera considerations, see O’Connor et al. [14] and Roth et al. [15]. Camera settings are discussed in Section 4.4.

3.2. Ground Sampling Distance and Flight Height

3.2.1. GSD and Flight Height

One of the most important mapping flight parameters to consider is the required spatial resolution or ground sampling distance (GSD). Although these terms are often used interchangeably, they have distinct meanings. GSD refers to the distance between the centers of two adjacent pixels, while spatial resolution describes the smallest feature that can be detected. Spatial resolution should also not be confused with camera or sensor resolution, which represents the total number of pixels in a sensor, typically expressed in megapixels.
The GSD can be mathematically expressed as [14,15]:
GSD = (Flight height × Sensor width) / (Focal length × Image width)    (1)
From this equation, it follows that GSD is determined by camera properties and flight height.
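As a quick worked illustration of Equation (1), the sketch below computes the GSD for a hypothetical camera and, conversely, the flight height needed to reach a target GSD. The sensor values are illustrative assumptions (a generic 20 MP camera with a 13.2 mm wide sensor and an 8.8 mm lens), not the specification of a particular product.

```python
def gsd_cm(flight_height_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance (cm/pixel) following Equation (1)."""
    return (flight_height_m * 100.0 * sensor_width_mm) / (focal_length_mm * image_width_px)

def flight_height_for_gsd(target_gsd_cm, sensor_width_mm, focal_length_mm, image_width_px):
    """Equation (1) inverted: flight height (m) needed to reach a target GSD."""
    return (target_gsd_cm / 100.0) * focal_length_mm * image_width_px / sensor_width_mm

# Hypothetical 20 MP camera: 13.2 mm wide sensor, 5472 px wide, 8.8 mm lens
print(gsd_cm(100.0, 13.2, 8.8, 5472))               # ~2.7 cm/pixel at 100 m flight height
print(flight_height_for_gsd(1.0, 13.2, 8.8, 5472))  # ~36.5 m needed for 1 cm/pixel
```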
(Ultra-) high-resolution RGB: The relationship in Equation (1) imposes limits on the minimal achievable GSD for a particular sensor/lens combination. Most flight apps set a minimum flight height (e.g., 12 m in DJI apps), effectively determining the smallest achievable GSD. However, lower flight altitudes can be programmed manually using other apps or waypoint flight settings, though even then, a minimum flight height must be maintained. This limitation is particularly important to prevent collisions with vegetation or structures, as well as, for larger UAVs, to mitigate the effects of downwash. Downwash not only creates unstable flight conditions but can also disturb the canopy or stir up dust and soil, affecting image quality [38].
Nevertheless, recent camera and UAV advancements have significantly improved achievable GSD, with sub-millimeter resolution now possible. For example, Van De Vijver et al. [39] achieved a GSD of 0.32 mm, referring to this as “ultra-high resolution”. Ultra-high and high resolution have become increasingly relevant, especially in agricultural research, where applications include weed detection (e.g., [40,41]), crop emergence monitoring [42,43], ear, fruit, or flower counting [44,45,46], or plant disease detection [39,47,48]. In ecological studies, applications of high or ultra-high-resolution data include flower counting or identification [49,50,51] and tree species mapping [52].
The impact of GSD on data usability is illustrated by RGB imagery of weeds in a cornfield, shown in Figure 5. At 1 mm, 2 mm, and possibly 5 mm, individual weed species can be identified. At a resolution of 1 cm or 2 cm, weeds between crop rows can still be detected as weeds, but the detail is insufficient for species identification. At 5 cm resolution, the image becomes too coarse for effective weed mapping.
While low GSD offers numerous advantages, it also presents several challenges and limitations. Achieving (ultra) high-resolution RGB data is associated with significant resource and logistical costs. It requires advanced equipment, including superior cameras, longer lenses, and heavier UAV platforms. These technological demands translate into greater financial investment.
Additionally, the impact of GSD on flight time can be striking. This is a result of longer overall flight paths due to the increased number of flight lines required (Section 3.3) in combination with slower flight speeds (Section 3.4). Figure 6b illustrates the relationship between GSD and flight time for a hypothetical area of 1 hectare (100 m × 100 m, Figure 6a) using a standard multispectral camera. At a GSD of 5 cm, the area can be surveyed in 3 min and 18 s, requiring 90 images and approximately 4.52 GB of storage. Reducing the GSD to 1 cm, however, increases the flight time to nearly 37 min, with 2217 images totaling 50.88 GB. Halving the GSD increases flight time by a factor of approximately 3.55.
Decreasing the GSD also leads to an exponential rise in processing time [53] and in the size of the final orthomosaic. Halving the GSD results in a fourfold increase in the size of the orthomosaic. For example, in the RGB weed dataset illustrated in Figure 6, an 8-bit RGB orthomosaic of a 1-hectare area requires 14.3 MB at 5 cm resolution. This increases to 1.50 GB at 5 mm, 9.51 GB at 2 mm, and 37.7 GB at 1 mm GSD.
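The quadratic growth of the orthomosaic with decreasing GSD follows directly from the pixel count: halving the GSD quadruples the number of pixels covering the same area. The minimal sketch below assumes an uncompressed 8-bit, 3-band orthomosaic, so real file sizes will differ somewhat with compression, overviews, and metadata.

```python
def orthomosaic_size_gb(area_ha, gsd_m, bands=3, bytes_per_band=1):
    """Approximate uncompressed orthomosaic size (GB) for a given area and GSD."""
    area_m2 = area_ha * 10_000.0
    n_pixels = area_m2 / (gsd_m ** 2)
    return n_pixels * bands * bytes_per_band / 1e9

for gsd in (0.05, 0.01, 0.005, 0.002, 0.001):   # 5 cm down to 1 mm
    print(f"GSD {gsd*100:>4.1f} cm -> ~{orthomosaic_size_gb(1.0, gsd):.2f} GB per hectare")
```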
3D model: In general, flying lower (i.e., lower GSD) favors the accuracy of the 3D reconstruction. Seifert et al. [53] found a hyperbolic increase in the number of tie points, and hence, in the quality of the 3D model, in the structure-from-motion process with a decrease in flight height. As such, high-resolution imagery, acquired at lower flight height, generally results in better 3D reconstruction [54,55,56]. In fact, in a review by Deliry et al. [57], flight height was identified as the most important factor influencing DEM quality, along with the use of GCPs (Section 4.3). On the other hand, flying too low can lead to a doming effect (systematic bending of the DEM towards the edges of the reconstructed area due to error accumulation), affecting the accuracy of the height estimates of the DEM [58].
For multispectral, hyperspectral, and thermal imagery, sensor resolution is typically much lower, resulting in a coarser minimal GSD. Despite this limitation, achieving a low GSD (high resolution) can still be highly beneficial, as it enhances structural detail capture [59], reduces the occurrence of mixed pixels, and can improve the quality of the orthomosaic [54].
However, although some studies found that higher resolution performed best for estimating vegetation characteristics in multispectral imagery (e.g., Zhu et al. [60] for the estimation of biomass in wheat), this is certainly not universally applicable [54,61,62]. Yin et al. [63], for instance, found that multispectral images with a GSD of 2.1 cm were more suitable for predicting SPAD values in winter wheat than those with a finer GSD of 1.1 cm. Finally, extended flight times make it more likely for weather conditions to change during the operation, potentially compromising the consistency and reliability of the collected data.
This all underscores the critical importance of carefully balancing the desired level of detail and accuracy against the practical and financial constraints of data collection.

3.2.2. Terrain Following

Terrain following is a flight mode in which the flight height is adjusted to the relief (DTM) of the area (Figure 7). In some apps, this is referred to as above-ground level (AGL) flight, as opposed to the standard flight height, which is defined relative to the take-off point. Terrain following ensures a consistent distance between the UAV and the ground surface, providing a uniform GSD, minimizing out-of-focus issues, and enhancing safety—especially in mountainous regions [64]. Direct comparisons of terrain following against a constant flight height confirmed the higher accuracies for the 3D reconstruction and orthomosaic [65,66].
True terrain-following can be achieved by equipping a UAV with a laser altimeter, enabling it to maintain a set distance from the ground. While this method is supported by flight apps such as UgCS 5, it is primarily designed for very low-altitude flights over areas with little or no vegetation and is not yet widely used in mapping applications. Newer UAV models offer real-time terrain-following capabilities using obstacle-avoidance cameras, but this feature can be limited to a minimum flight height, such as 80 m for the DJI Mavic 3 Enterprise series.
For other cases, most mapping flight apps now include terrain-following options based on the UAV GNSS location and a DSM. Recent flight apps, such as UgCS 4.3 or higher and DJI Pilot 2, have integrated DSMs, although these often have limited resolution (e.g., 30 m for DJI Pilot 2 using the ASTER GDEM). If higher resolution is needed, users can upload their own DSMs, though this process is less user-friendly and requires additional preparation.
Despite the relatively recent introduction of the terrain-following option, terrain following is likely to become standard in the near future. In mountainous areas or regions with steep slopes, terrain following is highly recommended [67]. In other areas, it can enhance data quality but is not strictly necessary.
If working in hilly terrain and if terrain-following is not available, it is advisable to take off from a higher point in the landscape whenever possible. This approach improves safety by providing better oversight and reducing the risk of collisions with upslope obstacles, such as trees or buildings. It also helps ensure sufficient overlap in the imagery. When taking off from a higher point is not feasible, a sufficient safety margin in flight height should be maintained, which may require accepting a higher GSD. Adjustments to overlap settings (Section 3.3) should also be taken into account for variations in terrain elevation.

3.3. Overlap: Balancing Flight Time and Data Quality

After selecting the camera and determining the GSD, the next crucial parameter to establish is the overlap. Overlap refers to the shared area between two images. Vertical overlap pertains to consecutive images captured along the same flight line, while horizontal overlap relates to images captured on adjacent parallel lines (Figure 8a).
Vertical overlap is determined by the frequency of image capture and the flight speed, considering the vertical field of view of the camera and the flight height. Mathematically, it can be expressed as:
Vertical overlap = 100 × (Image length − Vertical distance) / Image length    (2)
Here, the “Image length (m)” represents the projected vertical length of an image on the ground, and “Vertical distance (m)” is the spacing between consecutive image captures.
Horizontal overlap, on the other hand, is defined by the horizontal distance between two adjacent flight lines and is calculated using the formula:
Horizontal overlap = 100 × (Image width − Horizontal distance) / Image width    (3)
where “Image width” (m) is the projected horizontal width of the image, and “Horizontal distance” (m) represents the separation between adjacent flight paths.
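In practice, mapping apps invert Equations (2) and (3): given the desired overlaps, the ground footprint of an image (which follows from flight height, sensor size, and focal length, as in Equation (1)) determines the spacing between consecutive captures and between flight lines. A minimal sketch of that inversion, using the same illustrative camera values as above:

```python
def image_footprint_m(flight_height_m, sensor_dim_mm, focal_length_mm):
    """Projected ground length (m) of one image dimension."""
    return flight_height_m * sensor_dim_mm / focal_length_mm

def capture_spacing_m(footprint_m, overlap_pct):
    """Spacing between captures or flight lines for a given overlap, per Eqs. (2)-(3)."""
    return footprint_m * (1.0 - overlap_pct / 100.0)

# Hypothetical camera (13.2 x 8.8 mm sensor, 8.8 mm lens) at 50 m flight height
fh = 50.0
width_m  = image_footprint_m(fh, 13.2, 8.8)   # across-track footprint, ~75 m
length_m = image_footprint_m(fh, 8.8, 8.8)    # along-track footprint, ~50 m
print(f"Line spacing for 70% horizontal overlap: {capture_spacing_m(width_m, 70):.1f} m")
print(f"Capture interval for 80% vertical overlap: {capture_spacing_m(length_m, 80):.1f} m")
```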
Both vertical and horizontal overlaps are essential for accurate image alignment and mosaicking in the structure-from-motion (SfM) process. In theory, each point should be covered by at least two images for reconstruction, but in practice, a much higher number is needed; in addition to the number of points, the angular range at which the points are seen is important too [68]. Insufficient overlap can cause image alignment issues, such as unaligned cameras leading to gaps in the DEM and orthomosaic, or poorly aligned camera groups generating unrealistic 3D point clouds [34]. Higher overlap exponentially increases the number of cameras capturing each point as well as the angular range (Figure 9b,c), which enhances the number of key points (features) between images, minimizes geometric errors, and improves the accuracy of the point cloud, DEM, and orthomosaic. Among horizontal and vertical overlap, vertical overlap is generally the more significant factor influencing orthomosaic quality [54,69]. High vertical overlap (>80%) is now achievable with most sensors, thanks to advancements in image capture frequency.
However, as with flight height, increasing overlap comes at a cost. Greater overlap leads to a hyperbolic rise in the number of images collected, resulting in longer flight times (Figure 9a). This, in turn, translates into extended data processing durations and increased resource demands [53,70]. Balancing overlap settings with operational efficiency is therefore essential to optimize both data quality and practical feasibility [71].
As such, horizontal and vertical overlap are pivotal parameters to set for UAV imaging, and numerous studies have explored optimal overlap settings. However, these studies often reach differing conclusions, which is not surprising given that overlap and its effects are closely tied to the field of view (or viewing angle) of the sensors. Overlap also is tied to GSD, with lower GSDs (higher resolution) requiring lower overlap [72]. Furthermore, the specific objective defines the optimal overlap.
RGB: As a general guideline, for generating a geometrically accurate orthomosaic, relatively low overlap values—60% vertical and 50% horizontal—are typically sufficient [34,73].
3D model: Similarly, overlaps of 70% vertical and 50% horizontal are generally adequate for creating digital terrain models when the terrain is not very complex [70,74]. For more complex terrain, higher overlap is recommended, although these are hard to quantify—Jiménez-Jiménez et al. [72] recommended between 70–90% vertical and 60–80% horizontal overlap, depending on the flight height.
The required overlap for assessing the 3D structure and canopy properties of orchard trees or forests has been studied intensively [54,69,75,76]; studies considering processing efficiency have noted that the marginal quality improvements from extremely high overlap often do not justify the much longer processing times [69,76]. A general consensus is that for reliable 3D structure reconstruction of trees and forest canopies, vertical overlap should be at least 80% [54,75], while horizontal overlap should be at least 70% [53,54,69]. To map the canopy floor as well, higher overlaps of up to 90% may be necessary [75,77].
An important consideration in forested or orchard areas is that overlap is typically calculated from the take-off or “home point” level. However, the critical overlap is required at the top of the canopy. Most mapping apps do not automatically adjust for this, so users must input higher overlap values to ensure adequate coverage at canopy height. This adjustment can be calculated using the formula:
OLadj = 100 − (100 − OLtarget) × (FH − VH) / FH    (4)
where OLtarget is the targeted overlap (horizontal or vertical) at the tree top level, OLadj the adjusted overlap (input in the app, overlap at ground level), FH the flight height, and VH the vegetation height. This correction becomes more pronounced at lower flight heights and for taller vegetation (Figure 10). For instance, if the targeted overlap at the canopy top is 80%, the adjusted overlap at ground level may need to reach 90% for lower flight altitudes and higher vegetation.
Notably, many studies discussing overlap appear not to have accounted for this correction, which could additionally explain the inconsistencies in their overlap recommendations. To facilitate this adjustment, an Excel-based calculator has been provided as Supplementary Materials. In sloped areas, if the terrain-follow feature is not activated, Equation (4) can also be applied. Again, a practical solution in such cases is to take off and land from the highest part of the area whenever feasible.
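For readers who prefer code over the provided spreadsheet, the adjustment of Equation (4) is a one-line calculation; the sketch below is my own minimal equivalent, not the supplementary calculator itself.

```python
def adjusted_overlap(target_overlap_pct, flight_height_m, vegetation_height_m):
    """Overlap to enter in the flight app (at ground level) so that the overlap
    at the canopy top equals the target value, following Equation (4)."""
    if vegetation_height_m >= flight_height_m:
        raise ValueError("Vegetation height must be below the flight height.")
    return 100.0 - (100.0 - target_overlap_pct) * (flight_height_m - vegetation_height_m) / flight_height_m

# 80% overlap wanted at the top of 20 m tall trees, flying 60 m above ground:
print(adjusted_overlap(80.0, 60.0, 20.0))   # ~86.7% must be set in the app
```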
Multispectral and hyperspectral: For multispectral and hyperspectral imaging, avoiding bidirectional reflectance distribution function (BRDF) effects is critical (Section 2). Higher overlaps reduce the area sampled per image during orthomosaic creation, resulting in a relatively homogeneous orthomosaic (Figure 11). The need for higher overlap depends on the field of view (focal length) of the camera; wider fields of view necessitate greater overlap. A general rule of thumb for typical wide-angle multispectral cameras is to aim for at least 75% vertical and horizontal overlap, and 80% where feasible. If the sun is high in the sky (low solar zenith angle) and the hotspot appears near the edges of the images, higher overlaps are required to minimize its impact on the orthomosaic.
Thermal: For thermal cameras, the literature is less extensive. However, both BRDF effects and image vignetting can introduce artifacts in thermal orthomosaics, particularly when overlap is insufficient [26,78]. Most radiometric cameras include non-uniformity correction (NUC) to address vignetting issues, but NUC alone may not fully resolve the issue [26,79]. To minimize these effects, an overlap of 80% in both vertical and horizontal directions is recommended for thermal cameras [80].

3.4. Flight Speed

Flight speed directly influences the area that can be covered during a UAV operation. The maximum allowable flight speed is primarily determined by legal flight speed regulations, the vertical overlap, and the frequency of image capture [15]. However, other factors can necessitate a reduction in this speed to ensure image quality and safety.
Higher flight speeds can increase the UAV’s pitch angle [54]. If the camera is not mounted on a gimbal, this tilt can degrade image quality by causing uneven capture angles.
However, the most critical consideration is motion blur, which occurs when the UAV (and, with it, the camera) moves while the aperture is open. Motion blur reduces image sharpness and detail, potentially impacting the photogrammetric process [81].
Motion blur is typically expressed as a percentage of the ground sampling distance. While early recommendations suggested keeping motion blur below 150% [14], Roth et al. [15] advocated for stricter limits, recommending it be kept below 50%.
(Ultra-)high-resolution RGB: Motion blur is especially pertinent for (ultra) high-resolution RGB imagery. For instance, at a flight speed of 2 m/s and a shutter speed of 1/500th of a second, the UAV moves 4 mm during exposure, which can introduce noticeable blur.
Modern cameras often incorporate image stabilization in their sensors or lenses, effectively mitigating motion blur. Additionally, adjusting camera settings, such as increasing shutter speed and compensating with lower aperture or higher ISO (see Section 4.4), can help reduce blur.
Thermal: Thermal cameras are particularly sensitive to motion blur, despite their generally lower GSD [82]. The microbolometer sensors used in thermal cameras operate differently from photon detectors. Incoming radiation heats the amorphous silicon membrane of each pixel, altering its resistance, which is measured to produce the thermal image. This process has a time lag, defined by the camera’s time constant (typically 7–12 ms). As a rule of thumb, it requires about three to five times the time constant for a measurement to reach a steady state and obtain unblurred images. In practice, this is equivalent to a photon camera with a “shutter speed” of around 0.021 s (7 ms × 3 cycles). Even at a modest flight speed of 3 m/s, this results in significant motion blur of approximately 6.3 cm. Despite references to motion blur challenges in thermal UAV imagery [37], there is a lack of studies on the effects of flight speed on thermal image quality or specific recommendations for maximum speed.
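The motion-blur reasoning above can be captured in a few lines. The sketch below expresses blur as a percentage of the GSD for a conventional camera (blur = flight speed × exposure time) and, for a microbolometer, uses an effective "exposure" of roughly three times its time constant, following the rule of thumb above. The 4 mm RGB GSD and 6 cm thermal GSD used in the examples are my own illustrative assumptions.

```python
def motion_blur_pct(flight_speed_ms, exposure_s, gsd_m):
    """Motion blur during one exposure, expressed as a percentage of the GSD."""
    blur_m = flight_speed_ms * exposure_s
    return 100.0 * blur_m / gsd_m

# RGB example: 2 m/s, 1/500 s shutter, assumed 4 mm GSD -> 100% of GSD
# (well above the 50% limit advocated by Roth et al.)
print(motion_blur_pct(2.0, 1 / 500, 0.004))

# Thermal example: 3 m/s, effective exposure ~3 x 7 ms time constant, assumed 6 cm GSD
print(motion_blur_pct(3.0, 3 * 0.007, 0.06))   # ~105% of GSD
```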

3.5. Flight Pattern and Flight Direction

3.5.1. Grid Flight Pattern

Grid flight patterns, where a second set of parallel lines is flown perpendicular to the standard set (Figure 8b), have shown potential benefits in specific scenarios.
3D model: Grid patterns are particularly beneficial for generating accurate point clouds and 3D models, as they allow for lower flight overlap while increasing precision [54,74,83,84]. They also reduce doming [72].
RGB, Multispectral and hyperspectral and Thermal: Assmann et al. [16] suggested that grid patterns may reduce anisotropic effects in multispectral imagery, especially at the edges. However, compelling evidence for this effect remains lacking. Due to the increased flight time, memory requirements, and processing demands, grid patterns are best reserved for applications that require precise DEMs or DSMs, rather than for general orthomosaic mapping tasks.

3.5.2. Flight Direction

The flight direction of the standard parallel flight lines is typically set by the flight app to minimize the flight time. However, users can modify the direction, and certain factors may influence this choice.
One key consideration is wind speed. In windy conditions, flying into a headwind increases energy consumption and can thus reduce the achievable flight time [85]. However, it is unclear whether flying with a constant sidewind (perpendicular to the wind) is more energy efficient than alternating between tail- and headwinds.
Multispectral and hyperspectral: A more important factor is the sun’s azimuth angle, which affects both the performance of the camera and the incoming light sensor for multispectral systems. Studies have shown that the incoming light sensors of MicaSense DLS2 [19,86] and Parrot Sequoia [87,88,89] are sensitive to their orientation relative to the sun [28]. Jafarbiglu and Pourreza [19] therefore recommended flying perpendicular to the sun direction for the MicaSense DLS2 sensor in order to minimize these fluctuations. A good—yet untested—alternative can be to maintain a fixed flight direction (heading) throughout the flight.
Flights should also be planned to minimize anisotropic effects caused by the sun-sensor geometry (Section 2); hotspots in particular should be avoided [19,20]. However, it is not clear whether the flight directions should also be perpendicular to the sun—in fact, as the horizontal field of view of most cameras is typically larger than the vertical field of view, flying parallel to the sun’s azimuth may be more effective.
For reflectance measurements over aquatic systems, special attention is needed to minimize skyglint and sunglint. Mobley [90] recommended flying at a relative azimuth angle of 135° with a viewing angle of 40° to reduce skyglint.

3.6. Viewing Angle

In standard mapping flights, the camera is typically oriented in a nadir-looking position, with its central axis (i.e., the central pixel) perpendicular to the ground. However, most flight apps now offer the option for additional oblique photography, where images can be taken from multiple flight directions and angles.
The different viewing angle capture options available to a pilot are illustrated in Figure 12. They range from capturing a limited number of oblique images from a single direction [91] (e.g., the “Elevation Optimization” option in the DJI Pilot 2 software), which minimally impacts flight and processing time, to performing parallel flight lines in four directions (the “Oblique” option), which increases flight time and memory requirements by a factor of five compared to standard nadir flights. Alternative flight patterns requiring less time and memory have been explored [92] but are not immediately available in the flight apps.
3D model: The primary advantage of oblique photography is its ability to improve the quality of the DEM [93,94]. Oblique imagery is especially beneficial in urban areas or for terrains with highly variable relief, as it provides more comprehensive coverage of building walls or of the terrain’s vertical and sloping features [65,67,92,95,96,97,98]. Additionally, it can mitigate doming [96,99], although using well-distributed GCPs (Section 4.3) is more efficient in doing so [100]. This is also the reason why DJI Pilot 2 incorporates a diagonal flight line with oblique imagery at the end of a nadir flight as a standard setting (the “Elevation Optimization” option). The optimal flight pattern and intensity of oblique measurements deserve further attention.
RGB, Multispectral and hyperspectral and Thermal: For applications where 3D point clouds or detailed shape reconstructions are not the primary focus, oblique imagery is generally unnecessary [100], although the mentioned “Elevation optimization” option can be added without substantially affecting flight or processing time.
Oblique multispectral imagery can be very useful when examining the anisotropy of soils [101] or vegetation [102,103,104,105]. By capturing surface features under varying angles, oblique photography can provide insights into directional patterns of reflectance or structure that are not apparent in nadir imagery.

3.7. Line of Sight Limitation: How Far Can You See a UAV?

In most countries, the legal flight distance for UAVs is restricted to the visual line of sight (VLOS). This means the UAV must remain within the pilot’s visible range during the mission. In some jurisdictions, this range can be extended by using UAV spotters, who assist the pilot in maintaining visual contact with the UAV.
So how far can you see a UAV? The detectability of a UAV depends on its size, contrast against the background, and the observer’s visual acuity. Li et al. [106,107] investigated this by determining the probability of detecting a DJI Phantom 4 and DJI Mavic Air at various distances. Using a detection probability of 50% as the threshold, they established the maximum visual distance for a DJI Phantom 4 at 245 m [106] and for the DJI Mavic Air at 307 m [107], corresponding to a visual angle of 0.065° in both studies. Based on these results, the visual line of sight (VLOS) can be estimated using the formula:
VLOS = 881.4 × H    (5)
with H the height of the system (m).
EASA, the European Union Aviation Safety Agency responsible for UAV guidelines and legislation, distinguishes two threshold distances, the detection line of sight (DLOS) and the attitude line of sight (ALOS) [108]. The DLOS is the maximum distance at which another aircraft can be visually detected in time to execute an avoidance maneuver and is simply given as 30% of the ground visibility. The ALOS is the distance at which the pilot can discern the UAV’s position and heading and is for a multicopter calculated as
ALOS = 327 × CD + 20    (6)
with CD the characteristic dimension of the UAV (maximum dimension in m, in this case the diagonal size).
Table 2 presents a comparison of visual line-of-sight distances calculated using Equations (5) and (6) for a selection of commonly used UAVs. The two methods show strong agreement (R2 = 0.99), with the EASA formula generally yielding slightly longer distances for smaller UAVs and shorter distances for larger ones.
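A small sketch, assuming Equations (5) and (6) exactly as given above, compares the two estimates for a hypothetical UAV; the dimensions used are illustrative and not manufacturer specifications.

```python
def vlos_m(system_height_m):
    """Visual line of sight (m) from Equation (5), based on Li et al."""
    return 881.4 * system_height_m

def alos_m(characteristic_dimension_m):
    """Attitude line of sight (m) for a multicopter from Equation (6), per EASA."""
    return 327.0 * characteristic_dimension_m + 20.0

# Hypothetical quadcopter: ~0.30 m tall, ~0.38 m diagonal (characteristic dimension)
print(f"VLOS estimate: {vlos_m(0.30):.0f} m")   # ~264 m
print(f"ALOS estimate: {alos_m(0.38):.0f} m")   # ~144 m
```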
Apart from the UAV’s dimensions and the general visibility conditions, obstacles such as trees, buildings, and other structures between the observer and the UAV can affect the visual line of sight. These obstructions may block the view entirely or partially, depending on their size and density, limiting the effective range of VLOS. Additionally, the background contrast plays a crucial role in detecting a UAV. A UAV is much easier to spot when it is flying against a bright background, such as a clear sky, compared to darker backgrounds like dense forests, mountains, or tall buildings. The ability to detect the UAV can therefore vary greatly depending on the landscape and environmental conditions in the flight area.

4. Flight Execution: Ensuring Safe Flights at Best Quality

This section provides an overview of the different aspects to take into account during the flight execution. Table 3 provides a summary for the different applications and camera types.

4.1. Weather Conditions and Their Impact on UAV Mapping Flights

It goes without saying that, as for any UAV flight, the UAV-specific safety restrictions must be respected at all times when performing mapping flights—i.e., pilots should check the maximum wind velocity, Kp index, chance of rain, etc., for every flight. In addition to these general UAV safety-based limitations, weather conditions also influence the quality of the data product, which is the focus of this section.

4.1.1. Illumination

The most critical weather condition for UAV-based remote sensing is the illumination.
3D model: For 3D reconstruction with RGB cameras, overcast conditions are preferable due to the absence of dark shadows [55,109,110]. Sunny conditions still give good results in most studies and are required for mapping snow depths [111,112]. Variable illumination is best avoided throughout the flight [110].
RGB: For orthomosaic generation, flights are best conducted under uninterrupted sunny conditions, which enable very fast shutter speeds, helping to avoid motion blur (see Section 4.4). Slightly overcast conditions are also suitable, as they reduce harsh shadows. However, variable lighting conditions will appear as inconsistencies in the orthomosaic.
For deep learning of high-resolution RGB mapping, studies have shown that diverse weather conditions (sunny, overcast, variable) do not significantly impact model performance for applications such as weed or disease detection [40,47,113]. Including varied conditions in training datasets is recommended to build robust models.
Multispectral and hyperspectral: For multispectral and hyperspectral reflectance measurements, illumination is of greater concern. Ideally, you want to fly in fully sunny conditions. If this is not possible, the extent to which overcast or changing conditions affect data quality depends on the sensor type and intended application.
Overcast conditions reduce available light, leading to noisier data, particularly in hyperspectral imaging [114,115]. Further, under overcast conditions, diffuse radiation increases, which has an impact on the reflectance. However, these effects are relatively small [86], particularly when vegetation indices are calculated from the reflectance data [116,117].
Reflectance is defined as the portion of the incoming energy (irradiance) that is reflected per wavelength; hence, variation in irradiance during the flight has a strong effect and must be corrected for [89]. While most multispectral cameras include on-board irradiance sensors, these are, as mentioned in Section 3.5, not always reliable.
Alternative irradiance correction methods exist. Radiometric block adjustment, developed for UAVs by Honkavaara and her team [28,118], compares adjacent images to correct for illumination changes and BRDF effects. This proved to be more effective than using the on-board irradiance radiometer [28,119,120]. The Retinex method separates reflectance and illumination components via Gaussian filters and has been successful for estimating chlorophyll in soybean [121] and LAI in rice [122] in varying conditions. Kizel et al. [123] and Qin et al. [124] offer further alternatives for correcting illumination effects. A comparison of these methods and their implementation in structure-from-motion software deserves further attention.
To conclude, overcast and variable weather can be corrected for, so when illumination conditions are suboptimal, it is better to collect multispectral data than not to fly at all. A good strategy is to perform an initial flight under the suboptimal conditions and to repeat the flight later if the illumination improves, capturing higher-quality data.
Thermal: Thermal imaging is highly sensitive to weather conditions. Sunny conditions are key for thermal imaging in most applications. While cloudy conditions reduce emissivity-related errors [125,126], they also lower contrast and increase noise, complicating the calculation of indices like the crop water stress index [127,128]. Changes in illumination are challenging to correct in thermal data. Techniques like radiometric block adjustment may offer potential solutions, though they remain underexplored in thermal contexts.

4.1.2. Wind Speed and Air Temperature

Strong wind significantly reduces flight time and can reduce UAV stability, affecting DEM and orthomosaic quality [75,85,112].
3D model: Over vegetation, strong and variable wind speed affects canopy reconstruction, reducing the accuracy in tree height estimates [75,109,129]. Snow depth and extent mapping did not work well in wind speeds above 10 m/s, mainly due to platform instability [112].
Multispectral and hyperspectral: Strong and variable wind can also affect reflectance by altering the leaf angle distribution, although this effect appears limited; no studies were found focusing specifically on the impact of wind speed on vegetation reflectance measurements. Aquatic monitoring can suffer from sunglint issues caused by wind-induced waves [130,131].
Thermal: Thermal imaging is particularly vulnerable to wind and temperature changes. Increased wind reduces the resistance to heat transport and hence the surface temperature [125]. In general, temperature contrasts are larger at low wind speeds. Over vegetation, variable wind speeds can lead to within-flight errors of up to 3.9 °C [132] and are difficult to correct for.
Further, fluctuations in air temperature, often caused by changes in irradiation and wind, introduce errors in thermal measurements. Corrections using high-frequency air temperature measurements can mitigate these effects [34,126,133].

4.2. Time of the Flight

The time of day is mainly important due to changes in solar zenith angle. Solar noon, the time corresponding to the highest solar position, is often preferred, as it is characterized by maximum and stable solar intensity, as well as minimal shadows.
3D model and RGB: When flying in sunny conditions, solar noon remains the most favorable time due to the reduced impact of shadows, particularly when focusing on vertical structures [54]. However, in cases where shadowing is less critical, the time of flight appears less influential for DEM construction [109,129]. Slade et al. [109] therefore suggested prioritizing flights for mapping vegetation structure during low wind speeds, which often occur earlier in the day, over adhering strictly to solar noon conditions. As mentioned in Section 4.1.1, when mapping snow cover and snow depth, sunny conditions are important, and acquisitions around solar noon are preferred [111,112].
Multispectral and hyperspectral: The classic remote sensing textbooks generally recommend performing reflectance measurements around solar noon (e.g., [134,135]). In aquatic research, sunglint is also lowest then [136]. Solar elevation significantly influences both reflectance and vegetation indices, such as the normalized difference vegetation index (NDVI). Research has shown that NDVI exhibits a distinct daily pattern, with the lowest values typically recorded around solar noon [20,137]. For repeated flights throughout a season, it is advisable to conduct missions consistently at the same hour or, even better, at the same solar zenith angle. It is also crucial to avoid capturing images that include the hotspot. Particularly in tropical regions during the summer, flying at solar noon may exacerbate this issue, making it better to schedule flights at different times [19]—defining the time window between the sun being too high (causing possible hotspot issues) and too low (excessive shadows when the sun angle is below 30° [138]). An online tool (https://digitalag.ucdavis.edu/decision-support-tools/when2fly, accessed on 5 February 2025) developed by Jafarbiglu and Pourreza [19] provides guidance on the optimal times to fly for a given sensor, location, and date, offering valuable support for flight planning. Although this tool might be a bit too strict—hotspot effects in the edges of the images can be mitigated by increasing image overlap (Section 3.3)—it is a helpful resource for ensuring data quality.
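To plan repeated flights at a consistent solar zenith angle, the solar elevation can be approximated from date, time, and location with standard formulas. The sketch below uses a common textbook approximation (solar declination plus hour angle) and flags times when the sun is below 30° elevation; it ignores the equation of time and atmospheric refraction, so it is indicative only and not a replacement for the When2Fly tool mentioned above.

```python
import math

def solar_elevation_deg(latitude_deg, day_of_year, solar_time_h):
    """Approximate solar elevation (deg) from latitude, day of year, and local solar time.

    Simplified textbook formula: ignores the equation of time and refraction.
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_time_h - 12.0)   # degrees, 0 at solar noon
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# Ghent (51.05 N) on day 180 (late June): check a few candidate flight times
for t in (7, 10, 13, 16, 19):
    elev = solar_elevation_deg(51.05, 180, t)
    flag = "" if elev >= 30 else "  <- sun below 30 deg, long shadows"
    print(f"{t:02d}:00 solar time -> elevation {elev:5.1f} deg{flag}")
```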
Thermal: Thermal remote sensing also benefits from solar noon measurements due to minimized shading [125,139,140].
On a general note, the specific purpose of the UAV flight should also guide the choice of optimal flight time. For example, early morning flights are ideal for thermal imagery aimed at wildlife detection because of the pronounced temperature contrast between animals and their surroundings at this time [141]. This early contrast also enhances species-level differentiation [142]. Mapping wildflowers in meadows benefits from the reduced shadowing achieved during solar noon flights [143], but this timing may exclude flowers that bloom exclusively in the morning.

4.3. Ground Control Points and Precision GNSS Technology

Ground control points (GCPs) are crucial elements in UAV remote sensing and are used to enhance camera alignment and georeferencing in the structure-from-motion (SfM) process [144]. These points have known geographic coordinates and are designed for easy identification in images. Typically, UAV GCPs are rectangular panels with a 2 × 2 black-and-white checkerboard pattern, of which the center point serves as the reference. For thermal cameras, GCPs made with dark plastic (warm) and aluminum foil (appearing cold) areas are recommended for better visibility [145,146].
While GCPs play a vital role in refining camera alignment, their use is labor intensive and time consuming. Deploying GCPs involves distributing them across the survey area, recording their locations, and collecting them post-flight. For repeated surveys, permanent GCPs, such as plastic markers fixed with pegs, are more practical. The manual matching of GCPs in SfM software further adds to the effort.
The necessity, number, and distribution of GCPs depend on the project requirements and study area. Research findings emphasize several key points:
  • Utility of GCPs: GCPs significantly improve georeferencing accuracy [147], even if oblique cameras and grid lines are used [98]. In fact, the use of GCPs is the most important factor improving DEM or DSM accuracy, together with flight height [57].
  • Required amount: A minimum of five GCPs is required for successful georeferencing [95,148]. For larger areas or areas with complex terrain, additional GCPs are needed [56,149,150], in particular to attain high vertical accuracy [71,151]. The optimal GCP density ultimately depends on the desired accuracy and the complexity of the relief [95]; in general, a meta-analysis showed that accuracy improves up to about 15 GCPs, after which additional GCPs are only sensible in highly complex or very large areas [57].
  • Optimal spatial distribution: The spatial arrangement of GCPs is as critical as their quantity [56,147,150,151]. They should cover the entire survey area, ideally in a stratified arrangement or along the area edges, with a few placed inside the area [151,152]. In addition, they are best distributed across the different height classes of the terrain to be covered [71]. For a minimal setup of five GCPs, a quincunx (die-like) arrangement is recommended [147]; a minimal layout sketch is given after this list. GCPs near edges should be positioned to ensure they are captured by multiple camera views, and GCPs should not be placed too close to each other, as this can complicate manual matching in SfM software, potentially degrading referencing accuracy.
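As a simple illustration of the quincunx layout recommended above, the following sketch computes five GCP positions for a flat, rectangular survey block in local planar coordinates. The 10% edge inset is an arbitrary assumption, and real layouts should additionally account for terrain height classes and visibility in multiple images.

```python
def quincunx_gcps(x_min, y_min, x_max, y_max, inset=0.1):
    """Five GCP positions in a quincunx (die-face) arrangement: four points
    inset from the corners plus one in the centre of a rectangular block.
    'inset' is the fraction of the block width/height kept from the edges."""
    dx, dy = x_max - x_min, y_max - y_min
    corners = [(x_min + inset * dx, y_min + inset * dy),
               (x_max - inset * dx, y_min + inset * dy),
               (x_min + inset * dx, y_max - inset * dy),
               (x_max - inset * dx, y_max - inset * dy)]
    centre = (x_min + 0.5 * dx, y_min + 0.5 * dy)
    return corners + [centre]

# Example: a 200 m x 150 m survey block in local planar coordinates
for easting, northing in quincunx_gcps(0, 0, 200, 150):
    print(f"GCP at E = {easting:.1f} m, N = {northing:.1f} m")
```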
As mentioned, it is crucial to know the exact location of each GCP. High-precision GNSS systems, such as RTK (real-time kinematic), are used to measure these positions accurately. The same technology is now also available on UAV systems, providing centimeter-level accuracy for the UAV position during flight (enhancing flight reproducibility and safety) and for image geotagging. However, even with RTK positioning, GCPs are still required, albeit in lower numbers, for accurate mapping [112,153,154], particularly to reduce the vertical error [155,156].
PPK (post-processed kinematic) is an alternative to RTK. It is less integrated into UAV systems and thus less user-friendly, and it can only be used to obtain highly accurate image locations (not to improve the UAV position during flight). However, it is slightly less noise-sensitive than RTK and therefore reaches higher precision [157,158]. Still, as with RTK, PPK alone does not perform as well as a workflow with GCPs, even one without any precision positioning technology [159].
In conclusion, for mapping purposes, GCPs remain the best option. RTK or PPK positioning can be recommended, as they reduce the number of required GCPs and can improve flight safety, but they should not be considered a necessity. Where GCPs cannot be used, the best alternative is either to use PPK [158,160] or to combine RTK with oblique imagery [161].

4.4. Camera Set-Up and Camera Settings

3D model and RGB: Configuring camera settings correctly is crucial to obtaining high-quality data. Even for DEM purposes, it is key to avoid over- and underexposure of the RGB images, as this affects the point cloud reconstruction [162,163]. For RGB cameras, the challenge is to prioritize between a fast shutter speed (avoiding motion blur), a high F-value (sharper images; larger depth of field), and a low ISO value (low noise levels); see O’Connor et al. [14] and Roth et al. [15] for a more in-depth discussion. In general, a fast shutter speed is the most important parameter. For (ultra-) high-resolution RGB flights, the shutter speed should be 1/1000 s or faster. If the camera and/or lens has excellent image stabilization, this requirement can be relaxed somewhat. ISO is the second most important parameter: it is essential to determine the maximum acceptable ISO level for the specific camera [15], with full-frame sensors normally showing much lower noise at a given ISO; this level is furthermore case specific, as some applications tolerate a certain level of noise.
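The shutter speed requirement can be made explicit: the ground distance travelled during the exposure equals the flight speed multiplied by the exposure time, and this blur is commonly kept below a fraction of the GSD. The following minimal sketch, in which the half-pixel blur tolerance is an assumption rather than a value from the cited literature, computes the slowest acceptable shutter speed from flight speed and GSD.

```python
def max_exposure_time(flight_speed_ms, gsd_m, max_blur_px=0.5):
    """Longest acceptable exposure time (s) so that the forward motion blur
    stays below max_blur_px pixels, given flight speed (m/s) and GSD (m/pixel)."""
    return max_blur_px * gsd_m / flight_speed_ms

# Example: 5 m/s flight speed, 1 cm GSD, tolerating half a pixel of blur
t_max = max_exposure_time(5.0, 0.01)
print(f"Shutter speed should be 1/{round(1 / t_max)} s or faster")  # 1/1000 s
```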
In sunny and constant lighting conditions, setting the camera manually based on its intensity histogram yields the best results [15]. Under variable lighting, using automatic ISO settings ensures proper exposure across images. Over vegetated areas or when mapping snow depth or extent, underexposing by 0.5–1 stop prevents bright objects from becoming overexposed. White balance should also match the conditions: set it manually when illumination is consistent (overcast or sunny) and only use auto white balance for changing environments. Capturing images in RAW format enhances dynamic range and facilitates post-flight white balance adjustments, though this requires more memory, can reduce capture frequency, and adds post-processing steps.
For focus settings, integrated cameras usually offer three options: continuous autofocus (AFC in DJI apps), where the focus is adjusted for every single image; still autofocus, where the camera focuses when triggered and then remains fixed; and manual focus, where the user sets a fixed focus distance using a slider on the screen. Cameras that are not fully integrated offer the same three options, although manual focus then requires setting the focus on the lens itself before the flight.
The optimal choice of focus setting depends on flight and camera specifications. At higher altitudes with minimal distance variation and wide lenses, still autofocus or manual focus with fixed settings works best. For lower-altitude flights, with variable distances between sensors and objects or with narrow lenses, continuous autofocus is preferable, especially if terrain following is not employed.
Multispectral and hyperspectral: Multispectral cameras typically have fixed focus at infinity due to their wide lenses and usually operate in auto-exposure mode. However, recent research suggests that fixed exposure settings reduce radiometric errors, a potential improvement for future applications [164,165]. Multispectral cameras also benefit from a stabilization period, in which the camera is turned on about 15 min before the flight [88].
Thermal: For thermal cameras, a non-uniformity correction (NUC), also known as flat-field correction, addresses sensor drift and vignetting; it causes the characteristic clicking sound of these cameras. While factory recommendations suggest regular NUCs during flights, this process temporarily interrupts data collection. Studies have indicated that thermal drift during typical flights is minimal [166], leading some experts to recommend performing a single NUC before the flight and disabling automatic NUC afterward [145]. When possible, triggering NUCs during non-critical moments, such as at the end of flight lines, minimizes data gaps. If unsure, leaving automatic NUC enabled while ensuring sufficient image overlap is a safe choice.
Thermal cameras require temperature stabilization, which involves turning on the sensor well before the flight. This should be done at least 15 min beforehand, or longer if possible, as recommended by multiple studies [26,133,166,167].

4.5. Reference Measurements and Reference Targets

Reference targets play a critical role in ensuring accurate data processing in UAV-based remote sensing, particularly for reflectance and thermal measurements.
Multispectral and hyperspectral: For reflectance measurements, the empirical line method (ELM) is widely regarded as the standard approach due to its user-friendly application and precision [30,86]. This method involves imaging grey reference panels with known reflectance values in the field to establish a linear relationship between the sensor’s digital number and the known reflectance of the panels and subsequently applying this calibration to the entire dataset [86].
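In its simplest form, the ELM is a per-band linear least-squares fit between the digital numbers extracted over the panels and their known reflectance, which is then applied to every pixel of that band. The following minimal sketch illustrates this with purely illustrative panel values (not taken from the cited studies).

```python
import numpy as np

# Known panel reflectance for one band and the mean digital numbers (DN)
# extracted over each panel in the imagery (illustrative values only)
panel_reflectance = np.array([0.01, 0.09, 0.18, 0.70])
panel_dn = np.array([310.0, 2100.0, 4150.0, 15800.0])

# Empirical line for this band: reflectance = gain * DN + offset
gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)

def dn_to_reflectance(dn):
    """Apply the fitted empirical line to an array of digital numbers."""
    return gain * np.asarray(dn, dtype=float) + offset

# Convert a single-band image (rows x columns of DN values) to reflectance
band = np.random.randint(200, 16000, size=(100, 100))
reflectance = dn_to_reflectance(band)
```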
A few commercial multispectral systems, such as the MicaSense and Sequoia cameras, include a single small grey panel that needs to be imaged before and after each flight by holding the UAV and its sensor directly above it. Several commercial SfM packages process these reference panel images automatically, converting the data to reflectance, although their algorithms are essentially a black box, leading to small differences in reflectance estimates among the packages [86]. In the field, carefully following the manufacturer’s guidelines is crucial; in particular, avoid casting a shadow on the grey panel or on the incoming light sensor while holding the UAV.
Using this single grey panel is practical but lacks the capability to correct for atmospheric scattering and absorption. The use of a set of larger grey panels, visible from the operational flight height, is recommended in humid conditions (high air temperature or high relative humidity) or for flights at relatively high altitude (roughly > 50 m) [86,88]. The panel size must be sufficient to include “pure” pixels from the panel centre [117]. These panels should have near-Lambertian properties, and their reflectance across all wavelengths must be known. In the case of self-made reference panels, it is best to measure their reflectance in the field with a spectrometer around the time of the flight [86]. From personal experience, the set of grey panels for terrestrial vegetation or aquatic applications should include:
  • Very dark panel: as dark as possible, ideally about 1% reflectance. A very dark panel should be included because the reflectance of vegetation and water in most of the visible spectrum is very low (2–4%), and ideally the darkest reference panel should have a still lower reflectance.
  • Medium dark panels: dark grey (8–10%) and medium grey (15–20%). Panels in this range are important because of their relevance in the visible region, but also because some multispectral cameras tend to saturate over brighter panels placed in otherwise darker surroundings (such as vegetation or soil), particularly in the visible bands. Including this range of panels therefore still allows the ELM to be applied for all channels.
  • Bright grey panel: 60–75% reflectance, to cover brighter targets and, in particular, to correctly estimate vegetation reflectance in the near-infrared.
Optimal placement of these panels is important; they should be located centrally within the target area, in an open space to minimize the effects of in-scattering from surrounding vegetation or structures [168]. Placing several sets of reference panels strategically over the area is even better, as this can help in correcting for variable irradiance [169].
Alternative approaches to the ELM, such as the use of atmospheric radiative transfer models, are particularly effective for hyperspectral data but require more complex post-processing workflows [30,170].
Some additional measurements should be considered. Although other correction methods exist (cf. Section 4.1), measuring the incoming light is the preferred way to correct multispectral or hyperspectral imagery, particularly when conditions are variable. As explained earlier, most multispectral cameras have an incoming light sensor; for hyperspectral cameras, irradiance spectrometers can be used, either on board the UAV or at ground level [30].
RGB: While the ELM is extensively used for multispectral and hyperspectral imagery, RGB sensors can also benefit from grey panels to normalize reflectance data, given the non-linear (typically exponential or cubic) relationship between digital number (when stored as JPEG) and reflectance [121].
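For JPEG imagery, one hedged way to implement such a normalization is to fit the non-linear DN-reflectance relationship in log-log space (i.e., a power-law model, loosely reflecting the gamma encoding of JPEG). The panel values below are illustrative, and other model forms (e.g., cubic polynomials) may fit specific cameras better.

```python
import numpy as np

# Known panel reflectance (one band) and mean 8-bit JPEG digital numbers
# extracted over each panel (illustrative values only)
panel_reflectance = np.array([0.09, 0.18, 0.40, 0.70])
panel_dn = np.array([70.0, 105.0, 160.0, 205.0])

# Fit reflectance = a * DN**b by linear regression in log-log space
b, log_a = np.polyfit(np.log(panel_dn), np.log(panel_reflectance), deg=1)
a = np.exp(log_a)

def jpeg_dn_to_reflectance(dn):
    """Apply the fitted power-law model to JPEG digital numbers."""
    return a * np.asarray(dn, dtype=float) ** b

print(jpeg_dn_to_reflectance([70.0, 160.0, 205.0]))  # roughly reproduces the panel values
```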
Thermal: Several applications of thermal cameras focus primarily on identifying hot or cold spot anomalies (e.g., photovoltaic panel monitoring [171,172], wildlife detection [141], …), in which case an absolute temperature measurement is not essential. However, when correct surface temperature retrieval is key, such as in vegetation monitoring [125,126], several kinds of reference measurements should be considered:
  • Camera accuracy correction: Microbolometer sensors have limited absolute accuracy, roughly between ±1 and ±5 K. Cold and warm reference panels with known temperatures can be used to linearly correct the brightness temperature of the image [26,146,173,174], similar to the ELM for reflectance measurements. Typically, (ice-cold) water is used, or very bright (low temperature) and dark (high temperature) reference panels. Han et al. [175] constructed special temperature-controlled reference targets. Note that it is crucial to retrieve the exact emissivity of each panel and to correct for this emissivity during the workflow. On top of this, vignetting in thermal cameras can create temperature differences of up to several degrees between the edges and the center of the image [26]. The non-uniformity correction (NUC, Section 4.4) is not sufficient for some models [26], in which case the vignetting can be quantified by taking a thermal image of a uniform blackbody [26,176]. However, this is not strictly required, provided that sufficiently high horizontal and vertical overlaps are foreseen.
  • Atmospheric correction: Between the object and the camera, thermal radiation from the object is partially absorbed by the atmosphere, while atmospheric scattering contributes additional signals [34] (Figure 13); see Heinemann et al. [126] for an atmospheric correction protocol. They found that not correcting for atmospheric conditions leads to an error of 0.1–0.4 K. Atmospheric correction requires air temperature and relative humidity to be measured (Section 4.1).
  • Correction for emissivity and incoming radiation: The thermal radiation leaving an object (Lout) is determined by its emissivity ε, the longwave incoming radiation Lin, and the surface temperature Ts, the variable of interest (Figure 13) [125]. Heinemann et al. [126] reported errors of 0.5–2.9 K in sunny conditions, depending on the emissivity of the object, when not correcting for emissivity and longwave incoming radiation. Both emissivity and longwave incoming radiation need to be known with sufficient accuracy. Emissivity is the most sensitive variable [125]. Unfortunately, it cannot be measured directly but needs to be estimated, either by image segmentation combined with look-up tables or through an NDVI approach (see [126] for the protocol). The longwave incoming radiation Lin can be measured as the imaged temperature of a reference panel covered with crumpled aluminum foil [34,126]; this approach is economical and user-friendly and should always be included in thermal measurements. A minimal computational sketch of this correction is given after this list.
  • Normalization of atmospheric conditions: For research on drought stress or evapotranspiration of terrestrial ecosystems, surface temperature is usually expressed as a thermal index, similar to the vegetation indices for reflectance measurements [125]. The most common index, the crop water stress index (CWSI) [177,178], uses the lowest and highest temperature that the vegetation can attain under the given conditions. These reference temperatures should not be confused with the cold and warm panels used for the thermal accuracy correction, since it is crucial that they correspond to temperatures of the actual vegetation [179,180]. A common approach is to use a wet cloth as the cold reference, as it evaporates at a maximal rate and essentially provides the wet bulb temperature [146,181,182]. However, it does not accurately represent canopy conditions [180,183]. Maes et al. [180] showed that artificial leaves made of cotton, kept wet by constantly absorbing water from a reservoir, give a more precise estimate, but the scalability of this method to field level remains to be explored.
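As an illustration of the last two correction steps, the following minimal sketch retrieves the surface temperature by inverting the longwave radiation balance (Lout = ε·σ·Ts^4 + (1 - ε)·Lin, with Lin estimated from the aluminum-foil panel) and then normalizes it into the CWSI. The emissivity and reference temperatures used are illustrative assumptions, and the atmospheric correction is assumed to have been applied beforehand.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def surface_temperature(t_brightness_k, t_foil_k, emissivity):
    """Invert Lout = eps * sigma * Ts**4 + (1 - eps) * Lin for the surface
    temperature Ts. t_brightness_k is the (atmospherically corrected) brightness
    temperature of the object (K); t_foil_k is the apparent temperature of the
    crumpled-aluminium panel (K), used here to estimate Lin."""
    l_out = SIGMA * t_brightness_k ** 4
    l_in = SIGMA * t_foil_k ** 4
    return ((l_out - (1.0 - emissivity) * l_in) / (emissivity * SIGMA)) ** 0.25

def cwsi(t_surface_k, t_wet_k, t_dry_k):
    """Crop water stress index: 0 = fully transpiring canopy, 1 = non-transpiring."""
    return (t_surface_k - t_wet_k) / (t_dry_k - t_wet_k)

# Illustrative values only (K)
ts = surface_temperature(t_brightness_k=302.0, t_foil_k=290.0, emissivity=0.98)
print(round(ts, 2), round(cwsi(ts, t_wet_k=298.0, t_dry_k=310.0), 2))
```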

5. Discussion

5.1. Remaining Limitations and Knowledge Gaps

This article outlined recommendations and best practices for flight planning and execution across a range of UAV-mounted sensors. However, even with high-quality data, SfM software can struggle to align images correctly and to generate accurate DEMs and orthomosaics when the photographed objects are moving or when the dataset is very homogeneous and uniform [13,34,162,163]; in these cases, direct georeferencing must be used instead. An example of moving objects is measurements over open water, where SfM typically fails [131,136]. In windy conditions, vegetation, and forests in particular, can cause similar issues [13]. In such cases, it is recommended to increase flight height and overlap so that non-moving objects are more likely to be included, at the cost of GSD [13]. Fresh snow is an example of a very homogeneous surface where image alignment often fails [56,150]. In addition, very low-resolution datasets combined with low contrast, as is typical for thermal cameras [34,184], are also more likely to cause alignment issues [162], requiring direct georeferencing [185,186]. Nevertheless, good SfM performance has been reported even in quite demanding situations, such as over bare ice or older snow [150], and most issues indeed seem related to suboptimal flight and camera settings or flight conditions.
Table 1 provides an overview of the optimal flight settings based on Section 3. While each parameter was discussed separately, these settings are to some extent interdependent, which can complicate achieving the correct configuration. For example, overlap is a critical parameter: insufficient overlap degrades data quality, whereas excessive overlap increases flight time, memory usage, and processing requirements. However, defining the optimal horizontal or vertical overlap is challenging, as it depends on factors such as flight height (lower flight heights, and hence lower GSD, allow for lower overlap), camera focal length, and terrain complexity; the sketch below illustrates how overlap follows from the flight geometry. Additionally, the use of grid flight patterns or the inclusion of oblique measurements can reduce overlap requirements to some extent.
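To make this interdependence concrete, the following minimal sketch derives forward and side overlap from flight height, sensor geometry, exposure spacing, and flight line spacing for a nadir-looking camera over flat terrain. It is an independent illustration (the Supplementary Excel file provides the full calculation), and the sensor and spacing values are illustrative.

```python
def footprint(flight_height_m, sensor_dim_mm, focal_length_mm):
    """Ground footprint (m) of one image dimension for a nadir camera over flat terrain."""
    return flight_height_m * sensor_dim_mm / focal_length_mm

def overlaps(flight_height_m, focal_length_mm, sensor_width_mm, sensor_height_mm,
             line_spacing_m, trigger_distance_m):
    """Forward and side overlap (fractions), assuming the short sensor side
    points along track and the long side across track."""
    along_track = footprint(flight_height_m, sensor_height_mm, focal_length_mm)
    across_track = footprint(flight_height_m, sensor_width_mm, focal_length_mm)
    forward = 1.0 - trigger_distance_m / along_track
    side = 1.0 - line_spacing_m / across_track
    return forward, side

# Example: 60 m flight height, 13.2 x 8.8 mm sensor, 8.8 mm lens,
# 20 m between flight lines and 7 m between exposures (illustrative values)
fwd, side = overlaps(60, 8.8, 13.2, 8.8, line_spacing_m=20, trigger_distance_m=7)
print(f"forward overlap = {fwd:.0%}, side overlap = {side:.0%}")  # ~88% and ~78%
```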
Research consistently shows that 3D reconstruction quality improves when using grid patterns or oblique measurements. However, there remains uncertainty about which method is more effective or whether both should be employed; each option involves trade-offs in terms of flight time, memory usage, and processing demands. Findings by Dai et al. [95,187] suggest that incorporating oblique imagery into grid patterns provides added value, though further research is needed to confirm these results. Furthermore, while the literature agrees on the benefits of oblique imagery for mitigating the doming effect and improving 3D representation, there is no consensus on the intensity or pattern of oblique imagery required, which also deserves future attention.
Several other knowledge gaps were identified. One prominent area for further investigation is the optimal flight direction relative to the sun’s zenith angle as well as the dominant wind speed, which remains insufficiently explored. For thermal imaging in particular, there is a noticeable scarcity of research addressing the optimization of flight parameters. Unresolved questions for thermal imaging persist regarding the effects of flight altitude and speed on image quality, the determination of optimal overlap, strategies for correcting variations in atmospheric and weather conditions, and the influence of anisotropy (BRDF) effects.

5.2. Towards a Harmonized Mapping Protocol?

Despite these knowledge gaps, the need for harmonization and standardization in UAV mapping protocols is evident. Standardized approaches are critical for producing repeatable datasets and developing transferable machine learning models. Each application requires specific parameter settings, particularly concerning ground sampling distance (GSD) and flight height, which limits the creation of universally applicable protocols. Nonetheless, adhering to the recommendations summarized in Table 1 and Table 3 can facilitate standardized approaches tailored to the specific applications. Figure 14 gives an overall and visual summary of these tables. Inspiration for protocols can be drawn from initiatives such as the UAV field operation guidelines of TERN Australia, which provide structured methodologies for UAV-based data collection (https://www.tern.org.au/data-collection-protocols/; accessed on 5 February 2025).
Even with rigorous protocols, weather conditions inevitably affect the resulting DEM and orthomosaic. These external influences, along with the associated camera settings, are underreported in current literature. To ensure datasets are interpretable and comparable, proper annotation of metadata is essential. Metadata should comprehensively document flight parameters, camera settings, and meteorological conditions during the flight. Developing and adopting a standardized framework for metadata annotation is critical for advancing the field. Such a framework would not only improve data comparability but also foster a shared understanding of UAV-derived datasets within the research community.
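By way of illustration only, and not as a proposed standard, a machine-readable metadata record for a single mapping flight could look as follows; all field names and values are hypothetical examples.

```python
import json

# Illustrative (non-standardized) metadata record for a single mapping flight;
# all field names and values are examples only, not an agreed community schema.
flight_metadata = {
    "site": "example_field_A",
    "date_utc": "2024-07-14",
    "start_time_utc": "11:05",
    "platform_and_sensor": {"platform": "quadcopter", "sensor_type": "multispectral"},
    "flight_parameters": {
        "flight_height_m": 60,
        "gsd_cm": 4.2,
        "forward_overlap_pct": 80,
        "side_overlap_pct": 70,
        "flight_speed_ms": 5.0,
        "flight_pattern": "single grid, nadir",
    },
    "camera_settings": {"exposure_mode": "auto", "focus": "fixed (infinity)"},
    "ground_reference": {"n_gcps": 8, "reflectance_panels_pct": [1, 9, 18, 70]},
    "weather": {"sky": "clear", "wind_ms": 3.1, "air_temp_c": 24.5, "rh_pct": 55},
}

with open("flight_metadata.json", "w") as f:
    json.dump(flight_metadata, f, indent=2)
```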

5.3. Is There an Alternative for the Tedious Flight Mapping and Processing?

While each pixel is captured multiple times from various viewing angles, standard blending techniques typically retain only the most nadir-looking observation for constructing the orthomosaic. As such, the high overlap in UAV mapping flights generates significant “redundant” information. To address this inefficiency, several methods have been proposed that either make better use of the captured data or that reduce the volume of data collected.
One approach to maximizing the utility of the dataset is to use alternative blending modes during orthomosaic creation. For instance, the average or weighted average blending mode (e.g., “orthomosaic” mode) has been explored as an alternative to the standard (“disable”) mode [115,145]. However, the interpretation of this product is not necessarily more straightforward [145], nor does it necessarily improve the quality of the orthomosaic [78].
A more complex approach involves moving beyond the traditional orthomosaic by incorporating individual directional observations directly into the analysis. Roosjen et al. [105] applied SfM software to extract observations along with their relative azimuth and sensor zenith angles, then used them as input for a radiative transfer model (PROSAIL) to derive plant parameters. This method exemplifies how raw, angular-specific data can provide additional insights, albeit at the cost of greater computational complexity and specialized expertise.
An alternative, simpler strategy with potentially broader applicability was recently introduced by Heim et al. [29]. They developed machine learning models to predict leaf chlorophyll content and leaf area index (LAI) in maize. The study revealed that using all observations from all viewing angles as input led to poorer model performance compared to models trained on standard orthomosaic data. However, selecting observations from a restricted range of sensor zenith and relative azimuth angles significantly enhanced the models’ predictive accuracy relative to the standard orthomosaic.
Reducing data collection volume can be achieved by revising flight planning strategies. Mueller et al. [96] demonstrated that alternative flight trajectories, such as spiral or loop patterns, reduced flight time while yielding better DEM quality compared to traditional mapping with parallel flight lines and nadir-looking cameras. However, this finding was limited to relatively flat, square areas, and further research is required to explore its applicability to diverse terrain types and for data acquisition beyond DEMs, such as reflectance or thermal orthomosaics.
A more transformative alternative is to replace traditional mapping flights with direct sampling flights. The individual images are georeferenced based on the sensor’s (and UAV’s) position and attitude at the time of capture and are then analyzed individually. This approach bypasses the SfM processing. Direct sampling has been utilized for some time, for instance, when the structure-from-motion software fails (see Section 5.1) or for high-resolution RGB data [39,47].
However, consumer-grade miniature UAVs could give this approach new impetus. These small UAVs, equipped with high-resolution cameras, can fly at very low altitudes to capture ultra-high-resolution images [113,188] for a fraction of the cost of a standard ultra-high-resolution UAV system. Their minimal size avoids the downwash disturbances associated with professional UAVs operating at low altitudes (Section 3.2). Many contemporary models also include waypoint flight capabilities, enabling them to be programmed for systematic flights akin to mapping missions, but at much lower altitudes and with no or very low overlap between images [189]. This approach can dramatically reduce flight time, data volume, and post-processing efforts. Additionally, when equipped with RTK GNSS, direct and highly precise georeferencing of the images is feasible [189].
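As a rough illustration of such direct georeferencing, the following sketch approximates the map coordinates of the four corners of a nadir image from its RTK-geotagged centre, flight yaw, and ground footprint. Lens distortion, camera tilt, and terrain relief are ignored, the yaw sign convention is simplified, and all values are illustrative.

```python
import math

def image_corner_coords(center_e, center_n, yaw_deg, footprint_w_m, footprint_h_m):
    """Approximate map coordinates of the four corners of a nadir image from its
    RTK-geotagged centre (easting/northing), the flight yaw and the ground
    footprint; lens distortion, camera tilt and terrain relief are ignored."""
    half_w, half_h = footprint_w_m / 2.0, footprint_h_m / 2.0
    yaw = math.radians(yaw_deg)
    corners = []
    for dx, dy in [(-half_w, half_h), (half_w, half_h), (half_w, -half_h), (-half_w, -half_h)]:
        # Rotate the camera-frame offset into the map frame
        east = center_e + dx * math.cos(yaw) - dy * math.sin(yaw)
        north = center_n + dx * math.sin(yaw) + dy * math.cos(yaw)
        corners.append((round(east, 2), round(north, 2)))
    return corners

# Example: image centred at E = 541230.5 m, N = 5650012.2 m, 35 degree yaw,
# 9 m x 6 m ground footprint (illustrative values)
print(image_corner_coords(541230.5, 5650012.2, 35.0, 9.0, 6.0))
```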
Direct sampling missions for RGB data collection are not yet integrated into standard UAV flight software and present certain safety challenges, such as low-altitude flights and line-of-sight constraints. Nonetheless, they might represent a promising and competitive alternative for high-resolution RGB data collection in the near future. For multispectral/hyperspectral or thermal data, direct sampling seems to have less potential, given the influence of BRDF and/or vignetting on the image quality.

6. Conclusions

Based on recent research findings, this document provides detailed guidelines for planning and executing UAV mapping flights. The flight planning section addresses aspects such as sensor selection, GSD (ground sampling distance), flight height, overlap, flight direction, and flight speed. It presents the most appropriate settings of these parameters for flights focusing on 3D acquisition (3D point clouds, digital elevation models), high-resolution RGB, reflectance (multispectral and hyperspectral snapshot cameras), and thermal orthomosaics. Several settings are interdependent, which complicates defining optimal settings. Furthermore, knowledge gaps remain, particularly in setting the optimal overlap, the comparison of grid flights versus oblique imagery, and the intensity of oblique imagery required. Nevertheless, standard settings per application are possible and can harmonize UAV data collection.
For the same application areas, general guidelines are also provided for flight execution, including weather conditions, time of flight, camera settings, the use of GCPs, and the required reference measurements.
By promoting standardized methodologies and best practices, these guidelines aim to enhance the accuracy and reliability of UAV-based mapping. Future research should focus on refining thermal imaging workflows, reducing redundancy in data collection, and exploring innovative flight strategies to further improve efficiency and scalability across applications.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs17040606/s1, File S1: Excel file with calculation of overlap.

Funding

W.H.M. did not receive special funding for this review.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The author would like to thank four anonymous reviewers and Lucas Chojnacki for their constructive feedback.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Statista. Drones—Worldwide. Available online: https://www.statista.com/outlook/cmo/consumer-electronics/drones/worldwide (accessed on 6 November 2024).
  2. Collier, P. Photogrammetry/Aerial Photography. In International Encyclopedia of Human Geography; Kitchin, R., Thrift, N., Eds.; Elsevier: Oxford, UK, 2009; pp. 151–156. [Google Scholar]
  3. Maes, W.H.; Steppe, K. Perspectives for remote sensing with Unmanned Aerial Vehicles in precision agriculture. Trends Plant Sci. 2019, 24, 152–164. [Google Scholar] [CrossRef] [PubMed]
  4. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  5. Koh, L.P.; Wich, S.A. Dawn of Drone Ecology: Low-Cost Autonomous Aerial Vehicles for Conservation. Trop. Conserv. Sci. 2012, 5, 121–132. [Google Scholar] [CrossRef]
  6. Sun, Z.; Wang, X.; Wang, Z.; Yang, L.; Xie, Y.; Huang, Y. UAVs as remote sensing platforms in plant ecology: Review of applications and challenges. J. Plant Ecol. 2021, 14, 1003–1023. [Google Scholar] [CrossRef]
  7. Mesas-Carrascosa, F.-J.; Notario García, M.D.; Meroño de Larriva, J.E.; García-Ferrer, A. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas. Sensors 2016, 16, 1838. [Google Scholar] [CrossRef]
  8. Pepe, M.; Alfio, V.S.; Costantino, D. UAV Platforms and the SfM-MVS Approach in the 3D Surveys and Modelling: A Review in the Cultural Heritage Field. Appl. Sci. 2022, 12, 12886. [Google Scholar] [CrossRef]
  9. Bhardwaj, A.; Sam, L.; Akanksha; Martín-Torres, F.J.; Kumar, R. UAVs as remote sensing platform in glaciology: Present applications and future prospects. Remote Sens. Environ. 2016, 175, 196–204. [Google Scholar] [CrossRef]
  10. Park, S.; Choi, Y. Applications of Unmanned Aerial Vehicles in Mining from Exploration to Reclamation: A Review. Minerals 2020, 10, 663. [Google Scholar] [CrossRef]
  11. Śledź, S.; Ewertowski, M.W.; Piekarczyk, J. Applications of unmanned aerial vehicle (UAV) surveys and Structure from Motion photogrammetry in glacial and periglacial geomorphology. Geomorphology 2021, 378, 107620. [Google Scholar] [CrossRef]
  12. Jiang, S.; Jiang, C.; Jiang, W.S. Efficient structure from motion for large-scale UAV images: A review and a comparison of SfM tools. ISPRS J. Photogramm. Remote Sens. 2020, 167, 230–251. [Google Scholar] [CrossRef]
  13. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef]
  14. O’Connor, J.; Smith, M.J.; James, M.R. Cameras and settings for aerial surveys in the geosciences: Optimising image data. Prog. Phys. Geogr. Earth Environ. 2017, 41, 325–344. [Google Scholar] [CrossRef]
  15. Roth, L.; Hund, A.; Aasen, H. PhenoFly Planning Tool: Flight planning for high-resolution optical remote sensing with unmanned areal systems. Plant Methods 2018, 14, 116. [Google Scholar] [CrossRef]
  16. Assmann, J.J.; Kerby, J.T.; Cunliffe, A.M.; Myers-Smith, I.H. Vegetation monitoring using multispectral sensors—Best practices and lessons learned from high latitudes. J. Unmanned Veh. Syst. 2019, 7, 54–75. [Google Scholar] [CrossRef]
  17. Tmušić, G.; Manfreda, S.; Aasen, H.; James, M.R.; Gonçalves, G.; Ben-Dor, E.; Brook, A.; Polinova, M.; Arranz, J.J.; Mészáros, J.; et al. Current Practices in UAS-based Environmental Monitoring. Remote Sens. 2020, 12, 1001. [Google Scholar] [CrossRef]
  18. Li, Z.; Roy, D.P.; Zhang, H.K. The incidence and magnitude of the hot-spot bidirectional reflectance distribution function (BRDF) signature in GOES-16 Advanced Baseline Imager (ABI) 10 and 15 min reflectance over north America. Remote Sens. Environ. 2021, 265, 112638. [Google Scholar] [CrossRef]
  19. Jafarbiglu, H.; Pourreza, A. Impact of sun-view geometry on canopy spectral reflectance variability. ISPRS J. Photogramm. Remote Sens. 2023, 196, 270–286. [Google Scholar] [CrossRef]
  20. Stow, D.; Nichol, C.J.; Wade, T.; Assmann, J.J.; Simpson, G.; Helfter, C. Illumination Geometry and Flying Height Influence Surface Reflectance and NDVI Derived from Multispectral UAS Imagery. Drones 2019, 3, 55. [Google Scholar] [CrossRef]
  21. Bovend’aerde, L. An Empirical BRDF Model for the Queensland Rainforests. Master’s Thesis, Ghent University, Ghent, Belgium, 2016. [Google Scholar]
  22. Bian, Z.; Roujean, J.-L.; Cao, B.; Du, Y.; Li, H.; Gamet, P.; Fang, J.; Xiao, Q.; Liu, Q. Modeling the directional anisotropy of fine-scale TIR emissions over tree and crop canopies based on UAV measurements. Remote Sens. Environ. 2021, 252, 112150. [Google Scholar] [CrossRef]
  23. Bian, Z.; Roujean, J.L.; Lagouarde, J.P.; Cao, B.; Li, H.; Du, Y.; Liu, Q.; Xiao, Q.; Liu, Q. A semi-empirical approach for modeling the vegetation thermal infrared directional anisotropy of canopies based on using vegetation indices. ISPRS J. Photogramm. Remote Sens. 2020, 160, 136–148. [Google Scholar] [CrossRef]
  24. Lagouarde, J.P.; Dayau, S.; Moreau, P.; Guyon, D. Directional Anisotropy of Brightness Surface Temperature Over Vineyards: Case Study Over the Medoc Region (SW France). IEEE Geosci. Remote Sens. Lett. 2014, 11, 574–578. [Google Scholar] [CrossRef]
  25. Jiang, L.; Zhan, W.; Tu, L.; Dong, P.; Wang, S.; Li, L.; Wang, C.; Wang, C. Diurnal variations in directional brightness temperature over urban areas through a multi-angle UAV experiment. Build. Environ. 2022, 222, 109408. [Google Scholar] [CrossRef]
  26. Kelly, J.; Kljun, N.; Olsson, P.-O.; Mihai, L.; Liljeblad, B.; Weslien, P.; Klemedtsson, L.; Eklundh, L. Challenges and Best Practices for Deriving Temperature Data from an Uncalibrated UAV Thermal Infrared Camera. Remote Sens. 2019, 11, 567. [Google Scholar] [CrossRef]
  27. Stark, B.; Zhao, T.; Chen, Y. An analysis of the effect of the bidirectional reflectance distribution function on remote sensing imagery accuracy from Small Unmanned Aircraft Systems. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 1342–1350. [Google Scholar]
  28. Honkavaara, E.; Khoramshahi, E. Radiometric Correction of Close-Range Spectral Image Blocks Captured Using an Unmanned Aerial Vehicle with a Radiometric Block Adjustment. Remote Sens. 2018, 10, 256. [Google Scholar] [CrossRef]
  29. Heim, R.H.J.; Okole, N.; Steppe, K.; Van Labeke, M.-C.; Geedicke, I.; Maes, W.H. An applied framework to unlocking multi-angular UAV reflectance data: A case study for classification of plant parameters in maize (Zea mays). Precis. Agric. 2024, 25, 1751–1775. [Google Scholar] [CrossRef]
  30. Aasen, H.; Honkavaara, E.; Lucieer, A.; Zarco-Tejada, P. Quantitative remote sensing at ultra-high resolution with uav spectroscopy: A review of sensor technology, measurement procedures, and data correction workflows. Remote Sens. 2018, 10, 1091. [Google Scholar] [CrossRef]
  31. Tu, Y.-H.; Phinn, S.; Johansen, K.; Robson, A. Assessing Radiometric Correction Approaches for Multi-Spectral UAS Imagery for Horticultural Applications. Remote Sens. 2018, 10, 1684. [Google Scholar] [CrossRef]
  32. Li, Y.Q.; Masschelein, B.; Vandebriel, R.; Vanmeerbeeck, G.; Luong, H.; Maes, W.; Van Beek, J.; Pauly, K.; Jayapala, M.; Charle, W.; et al. Compact VNIR snapshot multispectral airborne system and integration with drone system. In Proceedings of the Conference on Photonic Instrumentation Engineering IX Part of SPIE Photonics West OPTO Conference, San Francisco, CA, USA, 22 January–24 February 2022. [Google Scholar]
  33. Griffiths, D.; Burningham, H. Comparison of pre- and self-calibrated camera calibration models for UAS-derived nadir imagery for a SfM application. Prog. Phys. Geogr. Earth Environ. 2019, 43, 215–235. [Google Scholar] [CrossRef]
  34. Maes, W.; Huete, A.; Steppe, K. Optimizing the processing of UAV-based thermal imagery. Remote Sens. 2017, 9, 476. [Google Scholar] [CrossRef]
  35. Denter, M.; Frey, J.; Kattenborn, T.; Weinacker, H.; Seifert, T.; Koch, B. Assessment of camera focal length influence on canopy reconstruction quality. ISPRS Open J. Photogramm. Remote Sens. 2022, 6, 100025. [Google Scholar] [CrossRef]
  36. Kraus, K. Photogrammetry: Geometry from Images and Laser Scans; Walter de Gruyter: Berlin, Germany, 2011. [Google Scholar]
  37. Sangha, H.S.; Sharda, A.; Koch, L.; Prabhakar, P.; Wang, G. Impact of camera focal length and sUAS flying altitude on spatial crop canopy temperature evaluation. Comput. Electron. Agric. 2020, 172, 105344. [Google Scholar] [CrossRef]
  38. Zhu, Y.; Guo, Q.; Tang, Y.; Zhu, X.; He, Y.; Huang, H.; Luo, S. CFD simulation and measurement of the downwash airflow of a quadrotor plant protection UAV during operation. Comput. Electron. Agric. 2022, 201, 107286. [Google Scholar] [CrossRef]
  39. Van De Vijver, R.; Mertens, K.; Heungens, K.; Nuyttens, D.; Wieme, J.; Maes, W.H.; Van Beek, J.; Somers, B.; Saeys, W. Ultra-High-Resolution UAV-Based Detection of Alternaria solani Infections in Potato Fields. Remote Sens. 2022, 14, 6232. [Google Scholar] [CrossRef]
  40. Rasmussen, J.; Nielsen, J.; Streibig, J.C.; Jensen, J.E.; Pedersen, K.S.; Olsen, S.I. Pre-harvest weed mapping of Cirsium arvense in wheat and barley with off-the-shelf UAVs. Precis. Agric. 2019, 20, 983–999. [Google Scholar] [CrossRef]
  41. Gao, J.; Liao, W.; Nuyttens, D.; Lootens, P.; Vangeyte, J.; Pižurica, A.; He, Y.; Pieters, J.G. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 43–53. [Google Scholar] [CrossRef]
  42. Barreto, A.; Lottes, P.; Ispizua Yamati, F.R.; Baumgarten, S.; Wolf, N.A.; Stachniss, C.; Mahlein, A.-K.; Paulus, S. Automatic UAV-based counting of seedlings in sugar-beet field and extension to maize and strawberry. Comput. Electron. Agric. 2021, 191, 106493. [Google Scholar] [CrossRef]
  43. García-Martínez, H.; Flores-Magdaleno, H.; Khalil-Gardezi, A.; Ascencio-Hernández, R.; Tijerina-Chávez, L.; Vázquez-Peña, M.A.; Mancilla-Villa, O.R. Digital Count of Corn Plants Using Images Taken by Unmanned Aerial Vehicles and Cross Correlation of Templates. Agronomy 2020, 10, 469. [Google Scholar] [CrossRef]
  44. Petti, D.; Li, C.Y. Weakly-supervised learning to automatically count cotton flowers from aerial imagery. Comput. Electron. Agric. 2022, 194, 106734. [Google Scholar] [CrossRef]
  45. Xu, X.; Li, H.; Yin, F.; Xi, L.; Qiao, H.; Ma, Z.; Shen, S.; Jiang, B.; Ma, X. Wheat ear counting using K-means clustering segmentation and convolutional neural network. Plant Methods 2020, 16, 106. [Google Scholar] [CrossRef]
  46. Fernandez-Gallego, J.A.; Lootens, P.; Borra-Serrano, I.; Derycke, V.; Haesaert, G.; Roldán-Ruiz, I.; Araus, J.L.; Kefauver, S.C. Automatic wheat ear counting using machine learning based on RGB UAV imagery. Plant J. 2020, 103, 1603–1613. [Google Scholar] [CrossRef]
  47. Wieme, J.; Leroux, S.; Cool, S.R.; Van Beek, J.; Pieters, J.G.; Maes, W.H. Ultra-high-resolution UAV-imaging and supervised deep learning for accurate detection of Alternaria solani in potato fields. Front. Plant Sci. 2024, 15, 1206998. [Google Scholar] [CrossRef] [PubMed]
  48. Kontogiannis, S.; Konstantinidou, M.; Tsioukas, V.; Pikridas, C. A Cloud-Based Deep Learning Framework for Downy Mildew Detection in Viticulture Using Real-Time Image Acquisition from Embedded Devices and Drones. Information 2024, 15, 178. [Google Scholar] [CrossRef]
  49. Carl, C.; Landgraf, D.; Van der Maaten-Theunissen, M.; Biber, P.; Pretzsch, H. Robinia pseudoacacia L. Flower Analyzed by Using An Unmanned Aerial Vehicle (UAV). Remote Sens. 2017, 9, 1091. [Google Scholar] [CrossRef]
  50. Gallmann, J.; Schüpbach, B.; Jacot, K.; Albrecht, M.; Winizki, J.; Kirchgessner, N.; Aasen, H. Flower Mapping in Grasslands With Drones and Deep Learning. Front. Plant Sci. 2022, 12, 774965. [Google Scholar] [CrossRef]
  51. Gröschler, K.-C.; Muhuri, A.; Roy, S.K.; Oppelt, N. Monitoring the Population Development of Indicator Plants in High Nature Value Grassland Using Machine Learning and Drone Data. Drones 2023, 7, 644. [Google Scholar] [CrossRef]
  52. Pu, R. Mapping Tree Species Using Advanced Remote Sensing Technologies: A State-of-the-Art Review and Perspective. J. Remote Sens. 2021, 2021, 9812624. [Google Scholar] [CrossRef]
  53. Seifert, E.; Seifert, S.; Vogt, H.; Drew, D.; van Aardt, J.; Kunneke, A.; Seifert, T. Influence of Drone Altitude, Image Overlap, and Optical Sensor Resolution on Multi-View Reconstruction of Forest Images. Remote Sens. 2019, 11, 1252. [Google Scholar] [CrossRef]
  54. Tu, Y.-H.; Phinn, S.; Johansen, K.; Robson, A.; Wu, D. Optimising drone flight planning for measuring horticultural tree crop structure. ISPRS J. Photogramm. Remote Sens. 2020, 160, 83–96. [Google Scholar] [CrossRef]
  55. Leitão, J.P.; Moy de Vitry, M.; Scheidegger, A.; Rieckermann, J. Assessing the quality of digital elevation models obtained from mini unmanned aerial vehicles for overland flow modelling in urban areas. Hydrol. Earth Syst. Sci. 2016, 20, 1637–1653. [Google Scholar] [CrossRef]
  56. Lee, S.; Park, J.; Choi, E.; Kim, D. Factors Influencing the Accuracy of Shallow Snow Depth Measured Using UAV-Based Photogrammetry. Remote Sens. 2021, 13, 828. [Google Scholar] [CrossRef]
  57. Deliry, S.I.; Avdan, U. Accuracy of Unmanned Aerial Systems Photogrammetry and Structure from Motion in Surveying and Mapping: A Review. J. Indian Soc. Remote Sens. 2021, 49, 1997–2017. [Google Scholar] [CrossRef]
  58. Yurtseven, H. Comparison of GNSS-, TLS- and Different Altitude UAV-Generated Datasets on the Basis of Spatial Differences. ISPRS Int. J. Geo-Inf. 2019, 8, 175. [Google Scholar] [CrossRef]
  59. Jay, S.; Baret, F.; Dutartre, D.; Malatesta, G.; Héno, S.; Comar, A.; Weiss, M.; Maupas, F. Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sens. Environ. 2018, 231, 110898. [Google Scholar] [CrossRef]
  60. Zhu, W.; Rezaei, E.E.; Nouri, H.; Sun, Z.; Li, J.; Yu, D.; Siebert, S. UAV Flight Height Impacts on Wheat Biomass Estimation via Machine and Deep Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 7471–7485. [Google Scholar] [CrossRef]
  61. Avtar, R.; Suab, S.A.; Syukur, M.S.; Korom, A.; Umarhadi, D.A.; Yunus, A.P. Assessing the Influence of UAV Altitude on Extracted Biophysical Parameters of Young Oil Palm. Remote Sens. 2020, 12, 3030. [Google Scholar] [CrossRef]
  62. Johansen, K.; Raharjo, T.; McCabe, M.F. Using Multi-Spectral UAV Imagery to Extract Tree Crop Structural Properties and Assess Pruning Effects. Remote Sens. 2018, 10, 854. [Google Scholar] [CrossRef]
  63. Yin, Q.; Zhang, Y.; Li, W.; Wang, J.; Wang, W.; Ahmad, I.; Zhou, G.; Huo, Z. Estimation of Winter Wheat SPAD Values Based on UAV Multispectral Remote Sensing. Remote Sens. 2023, 15, 3595. [Google Scholar] [CrossRef]
  64. Singh, C.H.; Mishra, V.; Jain, K. High-resolution mapping of forested hills using real-time UAV terrain following. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, X-1/W1-2023, 665–671. [Google Scholar] [CrossRef]
  65. Agüera-Vega, F.; Ferrer-González, E.; Martínez-Carricondo, P.; Sánchez-Hermosilla, J.; Carvajal-Ramírez, F. Influence of the Inclusion of Off-Nadir Images on UAV-Photogrammetry Projects from Nadir Images and AGL (Above Ground Level) or AMSL (Above Mean Sea Level) Flights. Drones 2024, 8, 662. [Google Scholar] [CrossRef]
  66. Zhao, H.; Zhang, B.; Hu, W.; Liu, J.; Li, D.; Liu, Y.; Yang, H.; Pan, J.; Xu, L. Adaptable Flight Line Planning for Airborne Photogrammetry Using DEM. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6206–6218. [Google Scholar] [CrossRef]
  67. Kozmus Trajkovski, K.; Grigillo, D.; Petrovič, D. Optimization of UAV Flight Missions in Steep Terrain. Remote Sens. 2020, 12, 1293. [Google Scholar] [CrossRef]
  68. Smith, M.W.; Carrivick, J.L.; Quincey, D.J. Structure from motion photogrammetry in physical geography. Prog. Phys. Geogr. Earth Environ. 2016, 40, 247–275. [Google Scholar] [CrossRef]
  69. Domingo, D.; Ørka, H.O.; Næsset, E.; Kachamba, D.; Gobakken, T. Effects of UAV Image Resolution, Camera Type, and Image Overlap on Accuracy of Biomass Predictions in a Tropical Woodland. Remote Sens. 2019, 11, 948. [Google Scholar] [CrossRef]
  70. Lopes Bento, N.; Araújo E Silva Ferraz, G.; Alexandre Pena Barata, R.; Santos Santana, L.; Diennevan Souza Barbosa, B.; Conti, L.; Becciolini, V.; Rossi, G. Overlap influence in images obtained by an unmanned aerial vehicle on a digital terrain model of altimetric precision. Eur. J. Remote Sens. 2022, 55, 263–276. [Google Scholar] [CrossRef]
  71. Gonçalves, G.; Gonçalves, D.; Gómez-Gutiérrez, Á.; Andriolo, U.; Pérez-Alvárez, J.A. 3D Reconstruction of Coastal Cliffs from Fixed-Wing and Multi-Rotor UAS: Impact of SfM-MVS Processing Parameters, Image Redundancy and Acquisition Geometry. Remote Sens. 2021, 13, 1222. [Google Scholar] [CrossRef]
  72. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.d.J.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. [Google Scholar] [CrossRef]
  73. Flores-de-Santiago, F.; Valderrama-Landeros, L.; Rodríguez-Sobreyra, R.; Flores-Verdugo, F. Assessing the effect of flight altitude and overlap on orthoimage generation for UAV estimates of coastal wetlands. J. Coast. Conserv. 2020, 24, 35. [Google Scholar] [CrossRef]
  74. Chaudhry, M.H.; Ahmad, A.; Gulzar, Q. Impact of UAV Surveying Parameters on Mixed Urban Landuse Surface Modelling. ISPRS Int. J. Geo-Inf. 2020, 9, 656. [Google Scholar] [CrossRef]
  75. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef]
  76. Torres-Sánchez, J.; López-Granados, F.; Borra-Serrano, I.; Peña, J.M.J.P.A. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards. Precis. Agric. 2018, 19, 115–133. [Google Scholar] [CrossRef]
  77. Frey, J.; Kovach, K.; Stemmler, S.; Koch, B. UAV Photogrammetry of Forests as a Vulnerable Process. A Sensitivity Analysis for a Structure from Motion RGB-Image Pipeline. Remote Sens. 2018, 10, 912. [Google Scholar] [CrossRef]
  78. Malbéteau, Y.; Johansen, K.; Aragon, B.; Al-Mashhawari, S.K.; McCabe, M.F. Overcoming the Challenges of Thermal Infrared Orthomosaics Using a Swath-Based Approach to Correct for Dynamic Temperature and Wind Effects. Remote Sens. 2021, 13, 3255. [Google Scholar] [CrossRef]
  79. Olbrycht, R.; Więcek, B. New approach to thermal drift correction in microbolometer thermal cameras. Quant. InfraRed Thermogr. J. 2015, 12, 184–195. [Google Scholar] [CrossRef]
  80. Boesch, R. Thermal remote sensing with UAV-based workflows. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 41–46. [Google Scholar] [CrossRef]
  81. Sieberth, T.; Wackrow, R.; Chandler, J.H. Motion blur disturbs—The influence of motion-blurred images in photogrammetry. Photogramm. Rec. 2014, 29, 434–453. [Google Scholar] [CrossRef]
  82. Lee, K.; Ban, Y.; Kim, C. Motion Blur Kernel Rendering Using an Inertial Sensor: Interpreting the Mechanism of a Thermal Detector. Sensors 2022, 22, 1893. [Google Scholar] [CrossRef]
  83. Ahmed, S.; El-Shazly, A.; Abed, F.; Ahmed, W. The Influence of Flight Direction and Camera Orientation on the Quality Products of UAV-Based SfM-Photogrammetry. Appl. Sci. 2022, 12, 10492. [Google Scholar] [CrossRef]
  84. Mora-Felix, Z.D.; Sanhouse-Garcia, A.J.; Bustos-Terrones, Y.A.; Loaiza, J.G.; Monjardin-Armenta, S.A.; Rangel-Peraza, J.G. Effect of photogrammetric RPAS flight parameters on plani-altimetric accuracy of DTM. Open Geosci. 2020, 12, 1017–1035. [Google Scholar] [CrossRef]
  85. Beigi, P.; Rajabi, M.S.; Aghakhani, S. An Overview of Drone Energy Consumption Factors and Models. In Handbook of Smart Energy Systems; Fathi, M., Zio, E., Pardalos, P.M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 1–20. [Google Scholar]
  86. Daniels, L.; Eeckhout, E.; Wieme, J.; Dejaegher, Y.; Audenaert, K.; Maes, W.H. Identifying the Optimal Radiometric Calibration Method for UAV-Based Multispectral Imaging. Remote Sens. 2023, 15, 2909. [Google Scholar] [CrossRef]
  87. Fawcett, D.; Anderson, K. Investigating Impacts of Calibration Methodology and Irradiance Variations on Lightweight Drone-Based Sensor Derived Surface Reflectance Products; SPIE: Bellingham, WA, USA, 2019; Volume 11149. [Google Scholar]
  88. Olsson, P.-O.; Vivekar, A.; Adler, K.; Garcia Millan, V.E.; Koc, A.; Alamrani, M.; Eklundh, L. Radiometric Correction of Multispectral UAS Images: Evaluating the Accuracy of the Parrot Sequoia Camera and Sunshine Sensor. Remote Sens. 2021, 13, 577. [Google Scholar] [CrossRef]
  89. Zhu, H.; Huang, Y.; An, Z.; Zhang, H.; Han, Y.; Zhao, Z.; Li, F.; Zhang, C.; Hou, C. Assessing radiometric calibration methods for multispectral UAV imagery and the influence of illumination, flight altitude and flight time on reflectance, vegetation index and inversion of winter wheat AGB and LAI. Comput. Electron. Agric. 2024, 219, 108821. [Google Scholar] [CrossRef]
  90. Mobley, C.D. Estimation of the remote-sensing reflectance from above-surface measurements. Appl. Opt. 1999, 38, 7442–7455. [Google Scholar] [CrossRef] [PubMed]
  91. Bi, R.; Gan, S.; Yuan, X.; Li, R.; Gao, S.; Luo, W.; Hu, L. Studies on Three-Dimensional (3D) Accuracy Optimization and Repeatability of UAV in Complex Pit-Rim Landforms As Assisted by Oblique Imaging and RTK Positioning. Sensors 2021, 21, 8109. [Google Scholar] [CrossRef] [PubMed]
  92. Nesbit, P.R.; Hugenholtz, C.H. Enhancing UAV–SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images. Remote Sens. 2019, 11, 239. [Google Scholar] [CrossRef]
  93. Jiang, S.; Jiang, W.; Huang, W.; Yang, L. UAV-based oblique photogrammetry for outdoor data acquisition and offsite visual inspection of transmission line. Remote Sens. 2017, 9, 278. [Google Scholar] [CrossRef]
  94. Lin, Y.; Jiang, M.; Yao, Y.; Zhang, L.; Lin, J. Use of UAV oblique imaging for the detection of individual trees in residential environments. Urban For. Urban Green. 2015, 14, 404–412. [Google Scholar] [CrossRef]
  95. Dai, W.; Zheng, G.; Antoniazza, G.; Zhao, F.; Chen, K.; Lu, W.; Lane, S.N. Improving UAV-SfM photogrammetry for modelling high-relief terrain: Image collection strategies and ground control quantity. Earth Surf. Process. Landf. 2023, 48, 2884–2899. [Google Scholar] [CrossRef]
  96. Mueller, M.M.; Dietenberger, S.; Nestler, M.; Hese, S.; Ziemer, J.; Bachmann, F.; Leiber, J.; Dubois, C.; Thiel, C. Novel UAV Flight Designs for Accuracy Optimization of Structure from Motion Data Products. Remote Sens. 2023, 15, 4308. [Google Scholar] [CrossRef]
  97. Sadeq, H.A. Accuracy assessment using different UAV image overlaps. J. Unmanned Veh. Syst. 2019, 7, 175–193. [Google Scholar] [CrossRef]
  98. Rossi, P.; Mancini, F.; Dubbini, M.; Mazzone, F.; Capra, A. Combining nadir and oblique UAV imagery to reconstruct quarry topography: Methodology and feasibility analysis. Eur. J. Remote Sens. 2017, 50, 211–221. [Google Scholar] [CrossRef]
  99. James, M.R.; Robson, S.; d’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66. [Google Scholar] [CrossRef]
  100. Meinen, B.U.; Robinson, D.T. Mapping erosion and deposition in an agricultural landscape: Optimization of UAV image acquisition schemes for SfM-MVS. Remote Sens. Environ. 2020, 239, 111666. [Google Scholar] [CrossRef]
  101. Li, L.; Mu, X.; Qi, J.; Pisek, J.; Roosjen, P.; Yan, G.; Huang, H.; Liu, S.; Baret, F. Characterizing reflectance anisotropy of background soil in open-canopy plantations using UAV-based multiangular images. ISPRS J. Photogramm. Remote Sens. 2021, 177, 263–278. [Google Scholar] [CrossRef]
  102. Deng, L.; Chen, Y.; Zhao, Y.; Zhu, L.; Gong, H.-L.; Guo, L.-J.; Zou, H.-Y. An approach for reflectance anisotropy retrieval from UAV-based oblique photogrammetry hyperspectral imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102442. [Google Scholar] [CrossRef]
  103. Aasen, H.; Burkart, A.; Bolten, A.; Bareth, G. Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance. ISPRS J. Photogramm. Remote Sens. 2015, 108, 245–259. [Google Scholar] [CrossRef]
  104. Burkart, A.; Aasen, H.; Alonso, L.; Menz, G.; Bareth, G.; Rascher, U. Angular dependency of hyperspectral measurements over wheat characterized by a novel UAV based goniometer. Remote Sens. 2015, 7, 725–746. [Google Scholar] [CrossRef]
  105. Roosjen, P.P.J.; Brede, B.; Suomalainen, J.M.; Bartholomeus, H.M.; Kooistra, L.; Clevers, J.G.P.W. Improved estimation of leaf area index and leaf chlorophyll content of a potato crop using multi-angle spectral data—Potential of unmanned aerial vehicle imagery. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 14–26. [Google Scholar] [CrossRef]
  106. Li, K.W.; Jia, H.; Peng, L.; Gan, L. Line-of-sight in operating a small unmanned aerial vehicle: How far can a quadcopter fly in line-of-sight? Appl. Ergon. 2019, 81, 102898. [Google Scholar] [CrossRef]
  107. Li, K.W.; Sun, C.; Li, N. Distance and Visual Angle of Line-of-Sight of a Small Drone. Appl. Sci. 2020, 10, 5501. [Google Scholar] [CrossRef]
  108. EASA. Guidelines for UAS Operations in the Open and Specific Category—Ref to Regulation (EU) 2019/947; EASA: Cologne, Germany, 2024. [Google Scholar]
  109. Slade, G.; Anderson, K.; Graham, H.A.; Cunliffe, A.M. Repeated drone photogrammetry surveys demonstrate that reconstructed canopy heights are sensitive to wind speed but relatively insensitive to illumination conditions. Int. J. Remote Sens. 2024, 28, 24–41. [Google Scholar] [CrossRef]
  110. Denka Durgan, S.; Zhang, C.; Duecaster, A. Evaluation and enhancement of unmanned aircraft system photogrammetric data quality for coastal wetlands. GIScience Remote Sens. 2020, 57, 865–881. [Google Scholar] [CrossRef]
  111. Revuelto, J.; Alonso-Gonzalez, E.; Vidaller-Gayan, I.; Lacroix, E.; Izagirre, E.; Rodríguez-López, G.; López-Moreno, J.I. Intercomparison of UAV platforms for mapping snow depth distribution in complex alpine terrain. Cold Reg. Sci. Technol. 2021, 190, 103344. [Google Scholar] [CrossRef]
  112. Harder, P.; Schirmer, M.; Pomeroy, J.; Helgason, W. Accuracy of snow depth estimation in mountain and prairie environments by an unmanned aerial vehicle. Cryosphere 2016, 10, 2559–2571. [Google Scholar] [CrossRef]
  113. Tetila, E.C.; Machado, B.B.; Astolfi, G.; Belete, N.A.d.S.; Amorim, W.P.; Roel, A.R.; Pistori, H. Detection and classification of soybean pests using deep learning with UAV images. Comput. Electron. Agric. 2020, 179, 105836. [Google Scholar] [CrossRef]
  114. Wang, S.; Baum, A.; Zarco-Tejada, P.J.; Dam-Hansen, C.; Thorseth, A.; Bauer-Gottwein, P.; Bandini, F.; Garcia, M. Unmanned Aerial System multispectral mapping for low and variable solar irradiance conditions: Potential of tensor decomposition. ISPRS J. Photogramm. Remote Sens. 2019, 155, 58–71. [Google Scholar] [CrossRef]
  115. Aasen, H.; Bolten, A. Multi-temporal high-resolution imaging spectroscopy with hyperspectral 2D imagers–From theory to application. Remote Sens. Environ. 2018, 205, 374–389. [Google Scholar] [CrossRef]
  116. Fawcett, D.; Bennie, J.; Anderson, K. Monitoring spring phenology of individual tree crowns using drone-acquired NDVI data. Remote Sens. Ecol. Conserv. 2021, 7, 227–244. [Google Scholar] [CrossRef]
  117. Cao, S.; Danielson, B.; Clare, S.; Koenig, S.; Campos-Vargas, C.; Sanchez-Azofeifa, A. Radiometric calibration assessments for UAS-borne multispectral cameras: Laboratory and field protocols. ISPRS J. Photogramm. Remote Sens. 2019, 149, 132–145. [Google Scholar] [CrossRef]
  118. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing and Assessment of Spectrometric, Stereoscopic Imagery Collected Using a Lightweight UAV Spectral Camera for Precision Agriculture. Remote Sens. 2013, 5, 5006–5039. [Google Scholar] [CrossRef]
  119. Miyoshi, G.T.; Imai, N.N.; Tommaselli, A.M.G.; Honkavaara, E.; Näsi, R.; Moriya, É.A.S. Radiometric block adjustment of hyperspectral image blocks in the Brazilian environment. Int. J. Remote Sens. 2018, 39, 4910–4930. [Google Scholar] [CrossRef]
  120. Wang, Y.; Yang, Z.; Khan, H.A.; Kootstra, G. Improving Radiometric Block Adjustment for UAV Multispectral Imagery under Variable Illumination Conditions. Remote Sens. 2024, 16, 3019. [Google Scholar] [CrossRef]
  121. Wang, Y.; Yang, Z.; Kootstra, G.; Khan, H.A. The impact of variable illumination on vegetation indices and evaluation of illumination correction methods on chlorophyll content estimation using UAV imagery. Plant Methods 2023, 19, 51. [Google Scholar] [CrossRef] [PubMed]
  122. Sun, B.; Li, Y.; Huang, J.; Cao, Z.; Peng, X. Impacts of Variable Illumination and Image Background on Rice LAI Estimation Based on UAV RGB-Derived Color Indices. Appl. Sci. 2024, 14, 3214. [Google Scholar] [CrossRef]
  123. Kizel, F.; Benediktsson, J.A.; Bruzzone, L.; Pedersen, G.B.M.; Vilmundardóttir, O.K.; Falco, N. Simultaneous and Constrained Calibration of Multiple Hyperspectral Images Through a New Generalized Empirical Line Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2047–2058. [Google Scholar] [CrossRef]
  124. Qin, Z.; Li, X.; Gu, Y. An Illumination Estimation and Compensation Method for Radiometric Correction of UAV Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  125. Maes, W.H.; Steppe, K. Estimating evapotranspiration and drought stress with ground-based thermal remote sensing in agriculture: A review. J. Exp. Bot. 2012, 63, 4671–4712. [Google Scholar] [CrossRef]
  126. Heinemann, S.; Siegmann, B.; Thonfeld, F.; Muro, J.; Jedmowski, C.; Kemna, A.; Kraska, T.; Muller, O.; Schultz, J.; Udelhoven, T.; et al. Land Surface Temperature Retrieval for Agricultural Areas Using a Novel UAV Platform Equipped with a Thermal Infrared and Multispectral Sensor. Remote Sens. 2020, 12, 1075. [Google Scholar] [CrossRef]
  127. King, B.A.; Tarkalson, D.D.; Sharma, V.; Bjorneberg, D.L. Thermal Crop Water Stress Index Base Line Temperatures for Sugarbeet in Arid Western U.S. Agric. Water Manag. 2021, 243, 106459. [Google Scholar] [CrossRef]
  128. Ekinzog, E.K.; Schlerf, M.; Kraft, M.; Werner, F.; Riedel, A.; Rock, G.; Mallick, K. Revisiting crop water stress index based on potato field experiments in Northern Germany. Agric. Water Manag. 2022, 269, 107664. [Google Scholar] [CrossRef]
  129. Cunliffe, A.M.; Anderson, K.; Boschetti, F.; Brazier, R.E.; Graham, H.A.; Myers-Smith, I.H.; Astor, T.; Boer, M.M.; Calvo, L.G.; Clark, P.E.; et al. Global application of an unoccupied aerial vehicle photogrammetry protocol for predicting aboveground biomass in non-forest ecosystems. Remote Sens. Ecol. Conserv. 2022, 8, 57–71. [Google Scholar] [CrossRef]
  130. Mount, R. Acquisition of through-water aerial survey images. Photogramm. Eng. Remote Sens. 2005, 71, 1407–1415. [Google Scholar] [CrossRef]
  131. De Keukelaere, L.; Moelans, R.; Knaeps, E.; Sterckx, S.; Reusen, I.; De Munck, D.; Simis, S.G.H.; Constantinescu, A.M.; Scrieciu, A.; Katsouras, G.; et al. Airborne Drones for Water Quality Mapping in Inland, Transitional and Coastal Waters—MapEO Water Data Processing and Validation. Remote Sens. 2023, 15, 1345. [Google Scholar] [CrossRef]
  132. Elfarkh, J.; Johansen, K.; Angulo, V.; Camargo, O.L.; McCabe, M.F. Quantifying Within-Flight Variation in Land Surface Temperature from a UAV-Based Thermal Infrared Camera. Drones 2023, 7, 617. [Google Scholar] [CrossRef]
  133. Jin, R.; Zhao, L.; Ren, P.; Wu, H.; Zhong, X.; Gao, M.; Nie, Z. An Enhanced Model for Obtaining At-Sensor Brightness Temperature for UAVs Incorporating Meteorological Features and Its Application in Urban Thermal Environment. Sustain. Cities Soc. 2024, 118, 105987. [Google Scholar] [CrossRef]
  134. Gao, J. Quantitative Remote Sensing: Fundamentals and Environmental Applications; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar]
  135. McCoy, R.M. Field Methods in Remote Sensing; Guilford Publications: New York, NY, USA, 2005. [Google Scholar]
  136. Román, A.; Heredia, S.; Windle, A.E.; Tovar-Sánchez, A.; Navarro, G. Enhancing Georeferencing and Mosaicking Techniques over Water Surfaces with High-Resolution Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2024, 16, 290. [Google Scholar] [CrossRef]
  137. Jiang, R.; Wang, P.; Xu, Y.; Zhou, Z.; Luo, X.; Lan, Y.; Zhao, G.; Sanchez-Azofeifa, A.; Laakso, K. Assessing the Operation Parameters of a Low-altitude UAV for the Collection of NDVI Values Over a Paddy Rice Field. Remote Sens. 2020, 12, 1850. [Google Scholar] [CrossRef]
  138. Pepe, M.; Fregonese, L.; Scaioni, M. Planning airborne photogrammetry and remote-sensing missions with modern platforms and sensors. Eur. J. Remote Sens. 2018, 51, 412–436. [Google Scholar] [CrossRef]
  139. García-Tejero, I.F.; Costa, J.M.; Egipto, R.; Durán-Zuazo, V.H.; Lima, R.S.N.; Lopes, C.M.; Chaves, M.M. Thermal data to monitor crop-water status in irrigated Mediterranean viticulture. Agric. Water Manag. 2016, 176, 80–90. [Google Scholar] [CrossRef]
  140. Pou, A.; Diago, M.P.; Medrano, H.; Baluja, J.; Tardaguila, J. Validation of thermal indices for water status identification in grapevine. Agric. Water Manag. 2014, 134, 60–72. [Google Scholar] [CrossRef]
  141. Mirka, B.; Stow, D.A.; Paulus, G.; Loerch, A.C.; Coulter, L.L.; An, L.; Lewison, R.L.; Pflüger, L.S. Evaluation of thermal infrared imaging from uninhabited aerial vehicles for arboreal wildlife surveillance. Environ. Monit. Assess. 2022, 194, 512. [Google Scholar] [CrossRef]
  142. Whitworth, A.; Pinto, C.; Ortiz, J.; Flatt, E.; Silman, M. Flight speed and time of day heavily influence rainforest canopy wildlife counts from drone-mounted thermal camera surveys. Biodivers. Conserv. 2022, 31, 3179–3195. [Google Scholar] [CrossRef]
  143. Sângeorzan, D.D.; Păcurar, F.; Reif, A.; Weinacker, H.; Rușdea, E.; Vaida, I.; Rotar, I. Detection and Quantification of Arnica montana L. Inflorescences in Grassland Ecosystems Using Convolutional Neural Networks and Drone-Based Remote Sensing. Remote Sens. 2024, 16, 2012. [Google Scholar] [CrossRef]
  144. Dering, G.M.; Micklethwaite, S.; Thiele, S.T.; Vollgger, S.A.; Cruden, A.R. Review of drones, photogrammetry and emerging sensor technology for the study of dykes: Best practises and future potential. J. Volcanol. Geotherm. Res. 2019, 373, 148–166. [Google Scholar] [CrossRef]
  145. Perich, G.; Hund, A.; Anderegg, J.; Roth, L.; Boer, M.P.; Walter, A.; Liebisch, F.; Aasen, H. Assessment of Multi-Image Unmanned Aerial Vehicle Based High-Throughput Field Phenotyping of Canopy Temperature. Front. Plant Sci. 2020, 11, 150. [Google Scholar] [CrossRef]
  146. Messina, G.; Modica, G. Applications of UAV Thermal Imagery in Precision Agriculture: State of the Art and Future Research Outlook. Remote Sens. 2020, 12, 1491. [Google Scholar] [CrossRef]
  147. Awasthi, B.; Karki, S.; Regmi, P.; Dhami, D.S.; Thapa, S.; Panday, U.S. Analyzing the Effect of Distribution Pattern and Number of GCPs on Overall Accuracy of UAV Photogrammetric Results. In Proceedings of UASG 2019; Springer: Cham, Switzerland, 2020; pp. 339–354. [Google Scholar]
  148. Stöcker, C.; Nex, F.; Koeva, M.; Gerke, M. High-Quality UAV-Based Orthophotos for Cadastral Mapping: Guidance for Optimal Flight Configurations. Remote Sens. 2020, 12, 3625. [Google Scholar] [CrossRef]
  149. Yu, J.J.; Kim, D.W.; Lee, E.J.; Son, S.W. Determining the Optimal Number of Ground Control Points for Varying Study Sites through Accuracy Evaluation of Unmanned Aerial System-Based 3D Point Clouds and Digital Surface Models. Drones 2020, 4, 49. [Google Scholar] [CrossRef]
  150. Gindraux, S.; Boesch, R.; Farinotti, D. Accuracy Assessment of Digital Surface Models from Unmanned Aerial Vehicles’ Imagery on Glaciers. Remote Sens. 2017, 9, 186. [Google Scholar] [CrossRef]
  151. Cabo, C.; Sanz-Ablanedo, E.; Roca-Pardiñas, J.; Ordóñez, C. Influence of the Number and Spatial Distribution of Ground Control Points in the Accuracy of UAV-SfM DEMs: An Approach Based on Generalized Additive Models. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10618–10627. [Google Scholar] [CrossRef]
  152. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Mesas-Carrascosa, F.-J.; García-Ferrer, A.; Pérez-Porras, F.-J. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 1–10. [Google Scholar] [CrossRef]
  153. Forlani, G.; Dall’Asta, E.; Diotri, F.; Cella, U.M.d.; Roncella, R.; Santise, M. Quality Assessment of DSMs Produced from UAV Flights Georeferenced with On-Board RTK Positioning. Remote Sens. 2018, 10, 311. [Google Scholar] [CrossRef]
  154. Bolkas, D. Assessment of GCP Number and Separation Distance for Small UAS Surveys with and without GNSS-PPK Positioning. J. Surv. Eng. 2019, 145, 04019007. [Google Scholar] [CrossRef]
  155. Nota, E.W.; Nijland, W.; de Haas, T. Improving UAV-SfM time-series accuracy by co-alignment and contributions of ground control or RTK positioning. Int. J. Appl. Earth Obs. Geoinf. 2022, 109, 102772. [Google Scholar] [CrossRef]
  156. Hugenholtz, C.; Brown, O.; Walker, J.; Barchyn, T.; Nesbit, P.; Kucharczyk, M.; Myshak, S. Spatial Accuracy of UAV-Derived Orthoimagery and Topography: Comparing Photogrammetric Models Processed with Direct Geo-Referencing and Ground Control Points. GEOMATICA 2016, 70, 21–30. [Google Scholar] [CrossRef]
  157. Cledat, E.; Jospin, L.V.; Cucci, D.A.; Skaloud, J. Mapping quality prediction for RTK/PPK-equipped micro-drones operating in complex natural environment. ISPRS J. Photogramm. Remote Sens. 2020, 167, 24–38. [Google Scholar] [CrossRef]
  158. Salas López, R.; Terrones Murga, R.E.; Silva-López, J.O.; Rojas-Briceño, N.B.; Gómez Fernández, D.; Oliva-Cruz, M.; Taddia, Y. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications Based on UAS-GNSS for High Andean Urban Environments. Drones 2022, 6, 388. [Google Scholar] [CrossRef]
  159. Žabota, B.; Kobal, M. Accuracy Assessment of UAV-Photogrammetric-Derived Products Using PPK and GCPs in Challenging Terrains: In Search of Optimized Rockfall Mapping. Remote Sens. 2021, 13, 3812. [Google Scholar] [CrossRef]
  160. Famiglietti, N.A.; Cecere, G.; Grasso, C.; Memmolo, A.; Vicari, A. A Test on the Potential of a Low Cost Unmanned Aerial Vehicle RTK/PPK Solution for Precision Positioning. Sensors 2021, 21, 3882. [Google Scholar] [CrossRef]
  161. Štroner, M.; Urban, R.; Seidl, J.; Reindl, T.; Brouček, J. Photogrammetry Using UAV-Mounted GNSS RTK: Georeferencing Strategies without GCPs. Remote Sens. 2021, 13, 1336. [Google Scholar] [CrossRef]
  162. Eltner, A.; Kaiser, A.; Castillo, C.; Rock, G.; Neugirg, F.; Abellán, A. Image-based surface reconstruction in geomorphometry—Merits, limits and developments. Earth Surf. Dynam. 2016, 4, 359–389. [Google Scholar] [CrossRef]
  163. Berra, E.F.; Peppa, M.V. Advances and Challenges of UAV SFM MVS Photogrammetry and Remote Sensing: Short Review. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 22–26 March 2020; pp. 533–538. [Google Scholar]
  164. Bagnall, G.C.; Thomasson, J.A.; Yang, C.; Wang, T.; Han, X.; Sima, C.; Chang, A. Uncrewed aerial vehicle radiometric calibration: A comparison of autoexposure and fixed-exposure images. Plant Phenome J. 2023, 6, e20082. [Google Scholar] [CrossRef]
  165. Swaminathan, V.; Thomasson, J.A.; Hardin, R.G.; Rajan, N.; Raman, R. Selection of appropriate multispectral camera exposure settings and radiometric calibration methods for applications in phenotyping and precision agriculture. Plant Phenome J. 2024, 7, e70000. [Google Scholar] [CrossRef]
  166. Yuan, W.; Hua, W. A Case Study of Vignetting Nonuniformity in UAV-Based Uncooled Thermal Cameras. Drones 2022, 6, 394. [Google Scholar] [CrossRef]
  167. Berni, J.A.J.; Zarco-Tejada, P.J.; Suarez, L.; Fereres, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738. [Google Scholar] [CrossRef]
  168. Arroyo-Mora, J.P.; Kalacska, M.; Soffer, R.J.; Lucanus, O. Comparison of Calibration Panels from Field Spectroscopy and UAV Hyperspectral Imagery Acquired Under Diffuse Illumination. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 60–63. [Google Scholar]
  169. Wang, Y.; Kootstra, G.; Yang, Z.; Khan, H.A. UAV multispectral remote sensing for agriculture: A comparative study of radiometric correction methods under varying illumination conditions. Biosyst. Eng. 2024, 248, 240–254. [Google Scholar] [CrossRef]
  170. Cao, H.; Gu, X.; Sun, Y.; Gao, H.; Tao, Z.; Shi, S. Comparing, validating and improving the performance of reflectance obtention method for UAV-Remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102391. [Google Scholar] [CrossRef]
  171. Vlaminck, M.; Heidbuchel, R.; Philips, W.; Luong, H. Region-Based CNN for Anomaly Detection in PV Power Plants Using Aerial Imagery. Sensors 2022, 22, 1244. [Google Scholar] [CrossRef]
  172. Quater, P.B.; Grimaccia, F.; Leva, S.; Mussetta, M.; Aghaei, M. Light Unmanned Aerial Vehicles (UAVs) for Cooperative Inspection of PV Plants. IEEE J. Photovolt. 2014, 4, 1107–1113. [Google Scholar] [CrossRef]
  173. Gadhwal, M.; Sharda, A.; Sangha, H.S.; Merwe, D.V.d. Spatial corn canopy temperature extraction: How focal length and sUAS flying altitude influence thermal infrared sensing accuracy. Comput. Electron. Agric. 2023, 209, 107812. [Google Scholar] [CrossRef]
174. Gómez-Candón, D.; Virlet, N.; Labbé, S.; Jolivot, A.; Regnard, J.-L. Field phenotyping of water stress at tree scale by UAV-sensed imagery: New insights for thermal acquisition and calibration. Precis. Agric. 2016, 17, 786–800. [Google Scholar] [CrossRef]
  175. Han, X.; Thomasson, J.A.; Swaminathan, V.; Wang, T.; Siegfried, J.; Raman, R.; Rajan, N.; Neely, H. Field-Based Calibration of Unmanned Aerial Vehicle Thermal Infrared Imagery with Temperature-Controlled References. Sensors 2020, 20, 7098. [Google Scholar] [CrossRef] [PubMed]
  176. Aragon, B.; Johansen, K.; Parkes, S.; Malbeteau, Y.; Al-Mashharawi, S.; Al-Amoudi, T.; Andrade, C.F.; Turner, D.; Lucieer, A.; McCabe, M.F. A Calibration Procedure for Field and UAV-Based Uncooled Thermal Infrared Instruments. Sensors 2020, 20, 3316. [Google Scholar] [CrossRef] [PubMed]
  177. Idso, S.B.; Jackson, R.D.; Pinter, P.J.; Reginato, R.J.; Hatfield, J.L. Normalizing the stress-degree-day parameter for environmental variability. Agric. Meteorol. 1981, 24, 45–55. [Google Scholar] [CrossRef]
  178. Jackson, R.D.; Idso, S.B.; Reginato, R.J.; Pinter, P.J. Canopy temperature as a crop water-stress indicator. Water Resour. Res. 1981, 17, 1133–1138. [Google Scholar] [CrossRef]
  179. Maes, W.H.; Achten, W.M.J.; Reubens, B.; Muys, B. Monitoring stomatal conductance of Jatropha curcas seedlings under different levels of water shortage with infrared thermography. Agric. For. Meteorol. 2011, 151, 554–564. [Google Scholar] [CrossRef]
  180. Maes, W.H.; Baert, A.; Huete, A.R.; Minchin, P.E.H.; Snelgar, W.P.; Steppe, K. A new wet reference target method for continuous infrared thermography of vegetations. Agric. For. Meteorol. 2016, 226–227, 119–131. [Google Scholar] [CrossRef]
  181. Meron, M.; Tsipris, J.; Charitt, D. Remote mapping of crop water status to assess spatial variability of crop stress. In Precision Agriculture, Proceedings of the 4th European Conference on Precision Agriculture, Berlin, Germany, 15 June 2003; Stafford, J., Werner, A., Eds.; Academic Publishers: Wageningen, The Netherlands, 2003; pp. 405–410. [Google Scholar]
  182. Möller, M.; Alchanatis, V.; Cohen, Y.; Meron, M.; Tsipris, J.; Naor, A.; Ostrovsky, V.; Sprintsin, M.; Cohen, S. Use of thermal and visible imagery for estimating crop water status of irrigated grapevine. J. Exp. Bot. 2007, 58, 827–838. [Google Scholar] [CrossRef]
  183. Prashar, A.; Jones, H. Infra-Red Thermography as a High-Throughput Tool for Field Phenotyping. Agronomy 2014, 4, 397. [Google Scholar] [CrossRef]
  184. Ribeiro-Gomes, K.; Hernandez-Lopez, D.; Ortega, J.F.; Ballesteros, R.; Poblete, T.; Moreno, M.A. Uncooled thermal camera calibration and optimization of the photogrammetry process for UAV applications in agriculture. Sensors 2017, 17, 2173. [Google Scholar] [CrossRef]
  185. Maes, W.H.; Huete, A.; Avino, M.; Boer, M.; Dehaan, R.; Pendall, E.; Griebel, A.; Steppe, K. Can UAV-based infrared thermography be used to study plant-parasite interactions between mistletoe and eucalypt trees? Remote Sens. 2018, 10, 2062. [Google Scholar] [CrossRef]
  186. Deery, D.M.; Rebetzke, G.J.; Jimenez-Berni, J.A.; James, R.A.; Condon, A.G.; Bovill, W.D.; Hutchinson, P.; Scarrow, J.; Davy, R.; Furbank, R.T. Methodology for High-Throughput Field Phenotyping of Canopy Temperature Using Airborne Thermography. Front. Plant Sci. 2016, 7, 1808. [Google Scholar] [CrossRef] [PubMed]
  187. Dai, W.; Qiu, R.; Wang, B.; Lu, W.; Zheng, G.; Amankwah, S.O.Y.; Wang, G. Enhancing UAV-SfM Photogrammetry for Terrain Modeling from the Perspective of Spatial Structure of Errors. Remote Sens. 2023, 15, 4305. [Google Scholar] [CrossRef]
  188. Tang, Z.; Wang, M.; Schirrmann, M.; Dammer, K.-H.; Li, X.; Brueggeman, R.; Sankaran, S.; Carter, A.H.; Pumphrey, M.O.; Hu, Y.; et al. Affordable High Throughput Field Detection of Wheat Stripe Rust Using Deep Learning with Semi-Automated Image Labeling. Comput. Electron. Agric. 2023, 207, 107709. [Google Scholar] [CrossRef]
  189. Raymaekers, D.; Delalieux, S. UAV-Based Remote Sensing: Improve efficiency through sampling missions. In Proceedings of the UAV-Based Remote Sensing Methods for Monitoring Vegetation, Köln, Germany, 30 September–1 October 2024. [Google Scholar]
Figure 1. Overview of the UAV mapping process. This review focuses on the areas in bold and green.
Figure 2. Schematic overview of the solar and sensor viewing angles.
Figure 4. General workflow for flight planning, with an indication of the most important considerations in each step.
Figure 5. The effect of ground sampling distance (GSD) on image quality, in this case for weed detection in a corn field. Image taken on 14/07/2022 in Bottelare, Belgium (lat: 50.959° N, lon: 3.767° E), with a Sony α7R IV camera equipped with an 85 mm lens, flown at 18 m altitude on a DJI M600 Pro UAV. A small section of the orthomosaic, created in Agisoft Metashape, is shown. The original GSD was 0.85 mm; the orthomosaic was then downscaled and exported at different GSDs in Agisoft Metashape.
Figure 6. (a) The 1 ha field for which the simulation was performed. (b) The effect of GSD on the estimated flight time and on the number of images required for mapping this area, calculated for a multispectral camera (MicaSense RedEdge-MX Dual). The simulation was performed in the DJI Pilot app, with horizontal and vertical overlap set at 80% and the maximum flight speed set at 5 m s−1.
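As an illustration of the trade-off shown in Figure 6b, the minimal sketch below estimates flight height, image count and flight time from a target GSD for a 100 m × 100 m field. The sensor constants correspond to the manufacturer-reported values of a MicaSense RedEdge-MX band (5.4 mm focal length, 3.75 µm pixel pitch, 1280 × 960 pixels); the strip-counting logic is a simplifying assumption that ignores turns and the camera's capture-rate limit, so the output will only approximate what a planner such as DJI Pilot reports.

```python
import math

# Assumed sensor constants for one MicaSense RedEdge-MX band
# (5.4 mm lens, 3.75 um pixel pitch, 1280 x 960 pixels).
FOCAL_M = 5.4e-3
PIXEL_M = 3.75e-6
N_ACROSS, N_ALONG = 1280, 960

def plan(gsd_m, area_w=100.0, area_l=100.0, overlap=0.80, speed=5.0):
    """Rough mapping-mission estimate for a rectangular area (dimensions in m)."""
    height = gsd_m * FOCAL_M / PIXEL_M            # flight height needed for the target GSD
    footprint_w = N_ACROSS * gsd_m                # across-track footprint (m)
    footprint_l = N_ALONG * gsd_m                 # along-track footprint (m)
    line_spacing = (1 - overlap) * footprint_w    # distance between flight lines
    trigger_dist = (1 - overlap) * footprint_l    # distance between image captures
    n_lines = math.ceil(area_w / line_spacing) + 1
    n_per_line = math.ceil(area_l / trigger_dist) + 1
    n_images = n_lines * n_per_line
    flight_time_min = (n_lines * area_l / speed) / 60  # turns and accelerations ignored
    return height, n_images, flight_time_min

for gsd_cm in (1, 2, 3, 5):
    h, n, t = plan(gsd_cm / 100)
    print(f"GSD {gsd_cm} cm -> height {h:5.1f} m, ~{n:4d} images, ~{t:4.1f} min")
```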
Figure 7. Illustration of the terrain-following option (b) relative to the standard flight height option (a). The colors in (b) represent the actual altitude of the UAV above sea level, in m. Screenshot from DJI Pilot 2, here for a Mavic 3E (RGB) camera with 70% overlap and a GSD of 2.7 cm.
Figure 8. (a) Schematic figure of a standard (parallel) mapping mission over a target area (orange line) with the planned locations for image capture (dots) illustrating the vertical and horizontal overlap. (b) The same area, but now covered with a grid flight pattern (Section 3.5.1).
Figure 9. (a) The number of images collected for mapping a 1 ha area (100 m × 100 m field, see Figure 6) with a MicaSense RedEdge-MX multispectral camera as a function of the vertical and horizontal overlap. The number of images was estimated in DJI Pilot. (b) Simulated number of cameras viewing each point for the same range of overlaps and the same camera. (c) Simulated coordinates of the cameras (in m) viewing the center point (black +, at relative coordinates (0,0)) for different overlaps (identical horizontal and vertical overlap, see color scale) for the same camera, flown at 50 m flight height.
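The quantities in Figure 9b,c can be approximated with a brute-force simulation: place nadir cameras on a regular grid whose spacing follows from the overlap and the image footprint, and count the cameras whose footprint contains the point directly below the grid centre. The field-of-view values below are the manufacturer-reported values for the MicaSense RedEdge-MX (about 47.2° × 35.4°); assuming perfectly nadir images, a flat scene and a camera located exactly above the point of interest is a simplification of the actual simulation.

```python
import math

def cameras_seeing_center(overlap, height=50.0, hfov_deg=47.2, vfov_deg=35.4):
    """Count nadir cameras whose footprint contains the point below the grid centre."""
    foot_x = 2 * height * math.tan(math.radians(hfov_deg) / 2)  # across-track footprint (m)
    foot_y = 2 * height * math.tan(math.radians(vfov_deg) / 2)  # along-track footprint (m)
    dx = (1 - overlap) * foot_x   # spacing between flight lines
    dy = (1 - overlap) * foot_y   # spacing between captures along a line
    count = 0
    n = 60  # search window, large enough for very high overlaps
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            cx, cy = i * dx, j * dy          # camera position relative to the point
            if abs(cx) <= foot_x / 2 and abs(cy) <= foot_y / 2:
                count += 1
    return count

for ov in (0.6, 0.7, 0.8, 0.9):
    print(f"overlap {ov:.0%}: ~{cameras_seeing_center(ov)} cameras see the centre point")
```

Note that the count grows roughly as 1/((1 − overlap)²), which is why the image load in Figure 9a rises so steeply at high overlap settings.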
Figure 10. Adjusted overlap (the overlap that needs to be entered in the flight app) as a function of flight height and vegetation height, when the target overlap is 80%.
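The curves in Figure 10 follow from simple flight geometry: because the image footprint shrinks towards the top of the vegetation, the overlap entered in the flight app (defined at ground level) must be larger than the overlap targeted at the canopy top. The sketch below uses the relation o_input = 1 − (1 − o_target)·(H − h_veg)/H, which is a geometric reconstruction rather than an equation quoted from the main text, so it should be checked against the figure before use.

```python
def adjusted_overlap(target_overlap, flight_height, vegetation_height):
    """Overlap to enter in the flight app so that the overlap at the top of the
    canopy still equals target_overlap (assumes the app defines overlap at
    ground level and that images are nadir)."""
    if vegetation_height >= flight_height:
        raise ValueError("vegetation height must be below the flight height")
    scale = (flight_height - vegetation_height) / flight_height
    return 1 - (1 - target_overlap) * scale

# Example: target 80% overlap at the canopy top
for h in (20, 40, 60, 80, 100):          # flight height above ground (m)
    for veg in (1, 5, 10):               # vegetation height (m)
        print(f"H={h:3d} m, veg={veg:2d} m -> set overlap to "
              f"{adjusted_overlap(0.80, h, veg):.0%}")
```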
Figure 11. Orthomosaic ((a) full field; (b) detail) of a flight with 80% overlap in the horizontal and vertical direction. The yellow lines indicate the area taken from each single image. Note the regular pattern in the core of the orthomosaic, whereas the edges typically take larger areas from a single image, increasing the risk of anisotropic effects. Screenshot from Agisoft Metashape of a multispectral dataset (MicaSense RedEdge-MX Dual) acquired on 07/10/2024 over a potato field in Bottelare, Belgium (lat: 50.9612°N, lon: 3.7677°E), at a flight height of 32 m.
Figure 12. Illustration of the different viewing angle options available: (a) standard nadir option; (b) limited number of oblique images from a single direction ("Elevation Optimization"); and (c–f) oblique mapping under four different viewing angles. Screenshot from the DJI Pilot 2 app, here for a Zenmuse P1 RGB camera (50 mm lens) on a DJI M350, with 65% horizontal and vertical overlap and a GSD of 0.22 cm.
Figure 13. Schematic overview of the corrections of thermal measurements: the atmospheric correction (Latm, τ) and the additional correction for emissivity (ε) and longwave incoming radiation (Lin, W m−2) needed to retrieve the surface temperature (Ts, K). (Lsensor = at-sensor radiance, W m−2; Latm = upwelling atmospheric radiance, W m−2; τ = atmospheric transmittance (-); σ = Stefan–Boltzmann constant = 5.67 × 10−8 W m−2 K−4).
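The corrections in Figure 13 amount to inverting Lsensor = τ(ε·σ·Ts^4 + (1 − ε)·Lin) + Latm for the surface temperature. The snippet below is a minimal, broadband illustration using the figure's symbols (all radiant quantities in W m−2); it is not the calibration chain of any specific thermal camera, which typically works with band-effective radiances or look-up tables, and the example input values are purely illustrative.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def surface_temperature(l_sensor, tau, l_atm, emissivity, l_in):
    """Invert Lsensor = tau * (eps * sigma * Ts**4 + (1 - eps) * Lin) + Latm
    for the surface temperature Ts (K). All radiant quantities in W m-2."""
    l_surface = (l_sensor - l_atm) / tau                 # radiation leaving the surface
    l_emitted = l_surface - (1 - emissivity) * l_in      # remove reflected sky radiation
    return (l_emitted / (emissivity * SIGMA)) ** 0.25

# Example with plausible (illustrative) values for a clear summer day:
ts = surface_temperature(l_sensor=470.0, tau=0.97, l_atm=12.0,
                         emissivity=0.98, l_in=380.0)
print(f"Retrieved surface temperature: {ts:.1f} K ({ts - 273.15:.1f} °C)")
```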
Figure 14. Overall summary of flight settings and flight conditions for the different applications. * More for larger or complex terrains.
Table 1. Summary of guidelines for flight planning for the different sensors/applications. The flight speed is difficult to quantify, as it depends on the platform, sensor and flight height; normal implies that no specific adjustments are needed, and slow that the flight speed needs to be reduced significantly to keep motion blur below 50% (Section 3.4).
Setting | 3D Model: Terrain | 3D Model: Canopy | Orthomosaic: RGB | Orthomosaic: Reflectance (Multi-/Hyperspectral) | Orthomosaic: Thermal
Overlap | >70 V, >50 H * | >80 V, >70 H ** | >60 V, >50 H | >80 V, >80 H | >80 V, >80 H
Flight speed | Normal | Normal | Slow | Normal | Slow
Grid pattern? | Yes | No | No | No | No
Flight direction | Standard | Standard | Standard | Perpendicular to sun *** | Standard (?)
Viewing angle | Include oblique | Nadir | Nadir | Nadir | Nadir
* For complex terrain, this can be higher (70–90% V; 60–80% H, depending on flight height and whether grid patterns and/or oblique imagery are added). ** Can be higher (90%) when soil pixels should be included in forest ecosystems. *** Depending on the incoming light sensor; extra attention needed to avoid the hot spot; different for aquatic remote sensing.
Table 2. Maximum distance (m) a multicopter UAV can be flown respecting the visual line of sight (VLOS) restrictions according to two different approaches.
UAV | Height (m) | Diagonal Size (m) | Maximum Distance (Equation (5)) | Maximum Distance (Equation (6))
DJI Mini 4 | 0.064 | 0.213 | 56 | 90
DJI Mavic 3 | 0.107 | 0.381 | 94 | 145
DJI Phantom | 0.28 | 0.59 | 245 | 213
DJI M350 | 0.43 | 0.895 | 379 | 313
DJI M600 | 0.759 | 1.669 | 669 | 566
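The right-hand column of Table 2 can be closely reproduced with the rule of thumb commonly cited from EASA guidance material for multirotors, in which the attainable VLOS distance scales with the characteristic dimension (CD) of the aircraft as roughly D ≈ 327 × CD + 20 m. Whether this is exactly Equation (6) of the main text is an assumption of the sketch below, so it is given only as an illustration.

```python
def vlos_distance_multirotor(characteristic_dimension_m):
    """Rule-of-thumb attainable VLOS distance (m) for multirotors,
    D ~ 327 * CD + 20, as commonly cited from EASA guidance material.
    Treated here as an approximation of Equation (6) in the main text."""
    return 327 * characteristic_dimension_m + 20

drones = {                       # diagonal size (m), taken from Table 2
    "DJI Mini 4": 0.213,
    "DJI Mavic 3": 0.381,
    "DJI Phantom": 0.59,
    "DJI M350": 0.895,
    "DJI M600": 1.669,
}
for name, cd in drones.items():
    print(f"{name:12s}: ~{vlos_distance_multirotor(cd):.0f} m")
```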
Table 3. Overview of the different factors that need to be taken into account when executing flights.
Factor | 3D Model: Terrain | 3D Model: Canopy | Orthomosaic: RGB | Orthomosaic: Reflectance (Multi-/Hyperspectral) | Orthomosaic: Thermal
Illumination | Best overcast, sunny possible; preferably not variable * | Best overcast, sunny possible; preferably not variable * | Preferably not variable | Preferably sunny, but not required | Sunny
Wind speed | Not relevant | Low | Best low, but can be higher | Best low, but can be higher | Low
Time of flight | Less relevant; if sunny conditions, best around solar noon | Less relevant; if sunny conditions, best around solar noon | Less relevant | Solar noon (but avoid hot spot) | Solar noon (but avoid hot spot)
GCP? | Yes | Yes | Yes | Yes | Yes
Reference targets | Not relevant | Not relevant | Grey panel(s) recommended | Single or multiple grey panels * | Aluminium foil-covered panel + temperature panels (+ extreme temperature panels)
* Sunny for mapping snow depths.