Article

Photogrammetric Measurement of Grassland Fire Spread: Techniques and Challenges with Low-Cost Unmanned Aerial Vehicles

1 Department of Surveying, Faculty of Civil Engineering, Slovak Technical University, 810 05 Bratislava, Slovakia
2 Department of Theoretical Geodesy and Geoinformatics, Faculty of Civil Engineering, Slovak Technical University, 810 05 Bratislava, Slovakia
3 Department of Mathematics and Descriptive Geometry, Faculty of Civil Engineering, Slovak Technical University, 810 05 Bratislava, Slovakia
* Author to whom correspondence should be addressed.
Drones 2024, 8(7), 282; https://doi.org/10.3390/drones8070282
Submission received: 22 May 2024 / Revised: 12 June 2024 / Accepted: 18 June 2024 / Published: 22 June 2024
(This article belongs to the Special Issue Unconventional Drone-Based Surveying 2nd Edition)

Abstract

The spread of natural fires is a complex issue, as its mathematical modeling needs to consider many parameters. Therefore, the results of such modeling always need to be validated by comparison with experimental measurements under real-world conditions. Remote sensing with the support of satellite or aerial sensors has long been used for this purpose. In this article, we focused on data collection with an unmanned aerial vehicle (UAV), which was used both for creating a digital surface model and for dynamic monitoring of the spread of controlled grassland fires in the visible spectrum. We subsequently tested the impact of various processing settings on the accuracy of the digital elevation model (DEM) and orthophotos, which are commonly used as a basis for analyzing fire spread. For the DEM generated from images taken during the final flight after the fire, deviations did not exceed 0.1 m compared to the reference model from LiDAR. Scale errors in the model georeferenced using only approximate WGS84 exterior orientation parameters did not exceed a relative accuracy of 1:500, and possible deformations of the DEM of up to 0.5 m in height had a minimal impact on determining the rate of fire spread, even with oblique images taken at an angle of 45°. The results of the experiments highlight the advantages of using low-cost SfM photogrammetry and provide an overview of potential issues encountered in measuring and performing photogrammetric processing of fire spread.

1. Introduction

With the increasing frequency of prolonged droughts, wildfires pose a threat to people, animals, property, and the environment. This is a global problem that we will increasingly encounter due to climate change [1,2,3]. Knowledge of how fires spread in different environments can, therefore, be helpful, especially for rescue teams and firefighters, who can adjust their operational scenarios accordingly [4]. As natural fires begin to occur in areas where they did not previously exist [5], it is appropriate to analyze fire behavior through mathematical modeling, which should consider as many aspects of this complex issue as possible. Among the most fundamental factors influencing the rate of fire spread are fuel flammability and wind strength, although wind direction in combination with the terrain slope should not be overlooked [6,7]. Mathematical models of natural fires can be broadly divided into two groups: physical models, based, for example, on fluid dynamics modeling, and empirical models, which try to reduce the computational demands of physical models by using empirical laws of fire development [8]. In the last two decades, various approaches have been taken to model fires, whether based on Bayesian modeling principles [9], logistic regression [10], fuzzy systems [11], or maximum entropy [12].
Empirical models can be developed and validated based on data obtained through remote sensing techniques, occasionally including ground-based photogrammetry. In the case of extensive fires, satellite sensors are mainly utilized [13], but when higher resolutions are required, multispectral sensors on aircraft [14,15] or unmanned aerial vehicles (UAVs) can also be employed [16,17]. The main advantages of UAVs are the high variability of possible solutions, freedom in camera positions and orientations relative to the fire, lower costs compared to manned aircraft, a higher level of detail, and overall safety in monitoring the progress or aftermath of natural disasters [18]. Disadvantages include the risk of collision with low-flying aircraft, which may be involved, for example, in firefighting, and a short flight time, often limited to a maximum of 45 min [19]. Image data from UAVs have also been used in the empirical modeling of fire spread using the “by evolving surface curve” approach [20,21], but these publications addressed only the mathematical aspect of the problem and not the collection and processing of the underlying data, which brings several challenges, especially for large image blocks.
If images from UAVs are to be utilized for validating mathematical models, they need to be processed photogrammetrically. The processing method strongly depends on the type of collected data and the camera network configuration. If only monoscopic images are available, they must be accurately georeferenced, and a digital terrain model must be available for the orthorectification of images and for correcting the position of the fire front extracted from them [22]. Stereoscopic recordings from at least two known camera positions do not require prior terrain knowledge; the entire 3D reconstruction of the environment and the fire can be performed directly from a fixed pair of stereo cameras [23], but the accuracy of 3D processing significantly decreases with increasing distance from the measured object [24]. A system of multiple UAVs flying in designated formations can provide greater rigidity to the entire camera network [19]. Since the visible spectrum may be compromised, for example, by smoke obscuring the view of the fire front, existing approaches often rely on the use of specialized UAV systems developed specifically for this purpose, which include infrared sensors (thermal imaging) [25,26]. In terms of georeferencing, it is advantageous if these systems are equipped with real-time kinematic (RTK) or post-processing kinematic (PPK) technology [27] to avoid relying on ground control points (GCPs), which are generally difficult to measure during a fire [28,29]. However, due to the high specificity of these systems, it is unclear to what extent more affordable UAVs, especially those without thermal imaging, can be used in measuring fire development.
Images captured in the visible spectrum can be effectively processed using computer vision techniques supported by Structure from Motion (SfM) and Multi-View Stereo (MVS) to create a relatively accurate and detailed point cloud. The origins of SfM can be traced back more than 40 years [30], but its usage was boosted when the Scale Invariant Feature Transform (SIFT) [31] was proposed to fully automate the image-matching part of SfM. In recent years, new approaches based on deep learning have extended the capability of SIFT-like detectors and descriptors to be significantly more robust to extreme illumination changes, difficult radiometric changes, and extreme viewpoints [32]. Owing to the effective combination of these algorithms, it is possible to fully automatically orient images, even from non-metric cameras [33]. Various photogrammetric software packages on the market utilize different SfM strategies—incremental [34], hierarchical [35], and global [36]. Some of the most well-known packages include Metashape Professional by Agisoft LLC (St. Petersburg, Russia) (agisoft.com, accessed on 20 February 2024), RealityCapture by Capturing Reality (Bratislava, Slovakia) (capturingreality.com, accessed on 20 February 2024), Pix4Dmapper by Pix4D (Prilly, Switzerland) (pix4d.com, accessed on 20 February 2024), iTwin Capture Modeler by Bentley Systems (Exton, PA, USA) (bentley.com, accessed on 20 February 2024), and 3DF Zephyr by 3Dflow (Verona, Italy) (3dflow.net, accessed on 20 February 2024). These software packages always include advanced MVS algorithms, which allow for the detailed reconstruction of a wide range of scanned surfaces [37], albeit with varying quality of results [38]. This quality can be significantly influenced not only by the surface texture but also by the available lighting conditions [39]. The best results with this method are achieved on surfaces with a clear granular texture, and since MVS algorithms are based on the principles of stereo photogrammetry, the accuracy of 3D reconstruction heavily depends on the baseline ratio. Although algorithms can reliably identify matches between images with very short baselines, the intersection angle of the rays determining the depth is then too small, and the depth accuracy significantly deteriorates [40]. Therefore, it is necessary to find a suitable compromise in the position and orientation of the cameras relative to the surface being scanned during imaging. Images taken from nearly identical positions can reduce the quality of the resulting point clouds due to small intersection angles and, consequently, low depth reliability. Short baselines can also cause the failure of relative image orientation during bundle adjustment [41], and small intersection angles can lead to bundle instability, reduce the reliability of the 3D scene reconstruction, or even prevent the convergence of the iterative solution [42,43].
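The baseline versus depth-precision trade-off described above can be illustrated with the standard normal-case stereo error model. The following minimal sketch is not taken from the study; the focal length, matching precision, and slant distance are assumed values chosen only to show the trend.

```python
# Illustrative sketch of the baseline vs. depth-precision trade-off in MVS,
# using the standard normal-case stereo error model:
#   sigma_Z ~ (Z^2 / (B * f_px)) * sigma_px
# Z = object distance, B = baseline, f_px = focal length in pixels,
# sigma_px = image-matching precision. All numbers below are assumptions.

def depth_precision(z_m: float, baseline_m: float, f_px: float, sigma_px: float) -> float:
    """Approximate 1-sigma depth uncertainty (m) for the stereo normal case."""
    return (z_m ** 2) / (baseline_m * f_px) * sigma_px

f_px = 4000.0      # assumed focal length expressed in pixels
sigma_px = 0.5     # assumed matching precision in pixels
z = 170.0          # assumed slant distance from camera to ground in metres

for b in (1.0, 5.0, 20.0, 50.0):  # candidate baselines in metres
    print(f"B = {b:5.1f} m -> sigma_Z ~ {depth_precision(z, b, f_px, sigma_px):.2f} m")
```

With these assumed values, a 1 m baseline yields a depth uncertainty of several metres, while a 50 m baseline reduces it to the centimetre level, which is the behavior the camera network configuration has to balance against reliable matching.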
A separate challenge for photogrammetric processing is the multi-epoch analysis of dynamic phenomena, such as landslides [44], mining activities [45], and natural disasters [46]. Photogrammetry is primarily used for measuring static objects, where motion typically occurs only between the camera and an otherwise static scene. When the position or shape of an object within the scene changes, the method of so-called 4D or time-lapse photogrammetry [47,48] needs to be employed, which requires simultaneous imaging of the scene with two or more cameras. Essentially, this does not necessarily mean recording a video; what is crucial is synchronizing image recordings collected at a predefined frequency, such as in time-lapse or interval shooting mode. In monitoring fires, this approach allows not only for modeling the spread of the burnt ground surface but also for capturing the 3D shape of the flames themselves [25]. However, UAVs equipped with two cameras are more expensive, so in this article, we decided to present the possibility of 4D photogrammetric measurement with a low-cost off-the-shelf UAV equipped with only one camera, which also distinguishes our study from other works.
In the scientific literature, the term 4D photogrammetry is often used even in cases where a dynamic phenomenon is not monitored continuously but where two or more epochs are compared over a certain time interval [49,50,51]. Simultaneous photogrammetric measurement using a single camera, which serves both for 3D surface modeling and for monitoring a dynamic phenomenon, presents so many challenges that we decided not to focus on the mathematical modeling of the spread of fire. Instead, we concentrated on the processing, evaluation, and statistical analysis of a series of images acquired by the UAV in a specific configuration as part of a case study on mapping the spread of fire. The specific configurations include both the use and non-use of ground control points during the bundle adjustment of the camera network, as well as the use of approximate exterior orientation elements from UAVs without RTK/PPK systems for georeferencing the image block.
The basic photogrammetric processing aimed at creating the orthophotos necessary for monitoring fire spread is outlined in Section 3. The analysis of the results and further tested variants, which suggest the usability of UAVs for SfM processing without GCPs or RTK/PPK systems, is presented in more detail in Section 4.

2. Materials and Methods

The measurements were conducted in the Lešť area south of the town of Zvolen in central Slovakia (Figure 1) during a training session of firefighters, who ensured the safety of the experiment and ignited the fire itself. To prevent the uncontrolled spread of the grassland fire, a control line was first burned around the specific location (Figure 2).
The UAV measurement was conducted in March 2019. The daytime temperature reached 8 °C, and a westerly wind blew at variable speeds (1–5 m/s). For image data collection, a DJI Mavic 2 Pro UAV was used, with the camera parameters listed in Table 1. Since the UAV did not have an RTK or PPK system, georeferencing was performed using 8 GCPs (Figure 2, left), marked with black and white targets, whose coordinates were determined using Trimble R6 GNSS equipment by the RTN (real-time network) measurement method in the SKPOS observation service with a spatial accuracy of ±0.05 m in the ETRS89 coordinate system, followed by transformation into the state coordinate system S-JTSK (EPSG: 5514) and the Bpv height system. We used a relatively high number of GCPs to increase control over the entire experiment and to allow further testing of the photogrammetric processing. GCPs were not distributed throughout the whole field, as we did not plan to fly the UAV further than 500 m from the takeoff point, located at the southern end of the location at point 201. This distance limit was also defined in the UAV settings. The flight altitude was set to the maximum legislatively permitted height of 120 m above ground level.
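For readers who want to reproduce a comparable horizontal coordinate conversion with open-source tools, the sketch below uses the pyproj library (not the software used in this study) to convert ETRS89 geographic coordinates to S-JTSK / Krovak East North (EPSG:5514). The coordinates are made up, the achievable accuracy depends on the PROJ transformation grids installed, and converting ellipsoidal heights to the Bpv system would require an additional vertical transformation.

```python
# Sketch: ETRS89 geographic coordinates (EPSG:4258) -> S-JTSK / Krovak East
# North (EPSG:5514) with pyproj. Not the workflow used in the paper; accuracy
# depends on the PROJ transformation grids available, and Bpv heights would
# need a separate vertical transformation (geoid model/grid).
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4258", "EPSG:5514", always_xy=True)

lon, lat = 19.25, 48.45   # made-up ETRS89 coordinates near the test site
x, y = transformer.transform(lon, lat)
print(f"S-JTSK (EPSG:5514): X = {x:.2f} m, Y = {y:.2f} m")
```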
The collection of image data consisted of 3 flights, performed manually with an oblique axis of capture (45° from nadir). It is important to emphasize that the priority of the entire operation was not photogrammetric measurement but firefighter training, in which we participated as observers, collecting experimental data alongside. The simulated fires initially had difficulty igniting and were started in various locations and grassy areas. There was not enough time to plan and execute a separate automated flight with nadir images in multiple strips with predefined overlap at this specific location, so the flight was carried out manually immediately after placing all the targets. During the first flight, the meadow was already burning. The following are the details of the 3 flights, which were conducted consecutively during a single 20-min mission:
  • The first flight (Figure 3) was conducted during attempts to ignite the meadow and waiting for suitable wind conditions—the aim was to create a cohesive block of images to which images from the subsequent monitoring flight could be later aligned.
  • The second flight (Figure 4) monitored the development of the fire itself at an interval of 2 s between images while the UAV gradually moved along the fireline—the goal was to automatically align the images with the first flight and to project them onto a model from the third flight.
  • The third flight (Figure 5) was conducted immediately after the fire to map the entire burned area and create a 3D model of the surface without grassland cover.
In Figure 3, the image block from the first flight partially consisted of two strips. This was because images were also captured on the way back toward the ignition point, with the aim of increasing the robustness of the camera network. All three flights were conducted consecutively without interruption. During this 20 min mission, approximately 8 hectares of grassland burned. The oblique angle of capture (45°) was chosen in all cases because it is often advantageous to capture fires slightly from the side during monitoring, which can partially eliminate areas obscured by smoke and provide a better overview of the overall situation. The aim was also to verify the suitability of using only oblique images in generating a digital surface model, onto which images from the monitoring flight were subsequently projected. In addition to the primary goal of creating orthophotos for fire spread analysis, several other tests were conducted (Section 4), the results of which indicate the potential use of low-cost UAV systems in this area, even in the absence of GCPs.

3. Photogrammetric Processing

The main experiment was photogrammetrically processed using the Structure from Motion (SfM) principles in Agisoft Metashape Professional software, version 2.1.0, by Agisoft LLC, St. Petersburg, Russia. For creating orthophotos, the processing procedure consisted of steps graphically depicted in Figure 6.
During this main photogrammetric processing, ground control points were always used to ensure increased geometric accuracy of the image blocks. However, aside from creating orthophotos, we were also interested in the impact of excluding GCPs from the processing, as GCPs may not be available when processing archival records. For this reason, we decided to conduct several tests, discussed in Section 4, Analysis of the results.
Further details on the individual steps of the main experiment required for producing orthophotos are as follows:
  • Processing of images from the 3rd flight—significant features in the surface texture were automatically detected based on the SIFT algorithm, matched using the nearest neighbor approach, and the matches were then geometrically verified with epipolar geometry using random sample consensus (RANSAC) [52] (a generic sketch of this detection, matching, and verification step is given after this list). The model coordinates of tie points and the elements of the interior and relative orientation of images were adjusted during bundle adjustment, resulting in a reconstructed 3D scene. Georeferencing was also performed as part of SfM using manually measured GCPs. Detailed surface reconstruction based on MVS algorithms was then performed, and a 3D model was generated (Figure 7).
  • Processing of images from the 1st flight—identical to the 3rd flight, with the difference that fixed elements of the interior orientation obtained from the camera calibration on images from the 3rd flight were used during image orientation. This step was necessary due to the reduced reliability of the results from the 1st flight caused by high grass cover—similar problems are also described in [53].
  • Processing of images from the 2nd flight—the UAV moved alongside the fire line, but its movement was not continuous, and it sometimes remained stationary in one place. Here, a significant problem arose with the series of images taken from one position during monitoring. For reliable processing using SfM, baselines must exist between adjacent images, meaning that the projection center of the camera changes its position in space; otherwise, the angle of intersection of the determining rays at a given point becomes too small, and the scene reconstruction is unstable. In itself, this need not be a problem; in photogrammetry, it is often recommended to take several images from one camera position for a higher degree of redundancy. However, when there are too many such images, in combination with the RANSAC algorithm, the image orientation may fail. Therefore, it was necessary to divide the images from the 2nd flight into 13 subgroups, within which the images were oriented separately (Table 2). Subgroup 13 contained only images where the camera changed its position (dynamic flight), and there was no problem orienting them directly with the images from the 1st flight. Groups 1 to 12 contained the images taken from almost static UAV positions (Figure 8). To each of groups 1 to 12, the closest moving images from group 13 (according to overlap) were added to ensure that the respective scene had reconstructible 3D geometry. If the reconstruction failed in one calculation, it was repeated until RANSAC found a solution for the relative orientation of the images that satisfied the largest number of already matched tie points, which in some cases required up to 5 repeated calculations.
  • Sequential relative orientation of partial image blocks 1 to 12 with block 13 connected to the 1st flight. Subsequently, all blocks were merged using the “Merge chunks” function in the Agisoft Metashape software into one common image block, which contained 40 images from the 1st flight (mapping) and 290 images from the 2nd flight (monitoring). This was possible since all image blocks were in the same reference coordinate system.
  • Import of the 3D model from the 3rd flight into the merged image block from the 1st and 2nd flights.
  • Creation and export of orthoimages from the 2nd flight (monitoring) based on the 3D model from the 3rd flight (Figure 9).
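As referenced in the first item of the list above, the following minimal sketch illustrates the generic detection, matching, and verification sequence (SIFT keypoints, nearest-neighbor matching with a ratio test, and RANSAC-based epipolar verification) using OpenCV. It is not the internal Metashape implementation, and the image file names are placeholders.

```python
# Generic sketch of feature detection, nearest-neighbour matching, and
# RANSAC-based epipolar verification with OpenCV; analogous in principle to
# what SfM software performs internally, not the actual Metashape pipeline.
import cv2
import numpy as np

img1 = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("frame_0002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

# Geometric verification of the matches via the fundamental matrix and RANSAC
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(inlier_mask.sum())} of {len(good)} matches survive epipolar verification")
```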
During the orientation of the images, the conventional settings recommended by the software developers were used (Table 3). The high accuracy setting means that the images were input into the keypoint detection process at their original resolution. The results of the image orientation and georeferencing of the individual projects are presented in Table 4. It is worth noting that the reference coordinates of the GCPs were included in the bundle adjustment during the orientation of all image blocks, which helped to prevent potential deformations of the image blocks.
The next step was the orthorectification of 290 images from the monitoring flight based on the 3D surface model (Figure 9). These were exported automatically from the Agisoft Metashape software environment.
The exported orthoimages were subsequently vectorized and used to create a comprehensive overview of the fire spread—for clarity, at a sparser interval of 8 s (Figure 10). As the visibility of the fire front was complicated by dense smoke in many orthoimages, vectorization was performed manually using the CAD software MicroStation V8i SELECTseries 10, version 08.11.09.910 by Bentley Systems, Exton, PA, United States.
From Figure 9 and Figure 10, it is evident that during the movement of the UAV, some parts of the fire momentarily moved out of the camera’s field of view. The goal was always to orient the camera towards the fire so that approximately half of the image captured the meadow without smoke, enabling the images to be oriented relative to each other (Figure 11). However, for the purposes of the experiment, this deficiency was negligible. The distribution of the curves also reveals locations along the road where firefighters gradually established additional fire points to ignite the entire meadow progressively.
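From two successively vectorized fire fronts, such as those in Figure 10, a local rate of spread can be estimated by measuring how far the front advanced between epochs. The sketch below illustrates this with the shapely library; the front geometries and the 8 s interval are invented for illustration, and the vectorization in this study was performed in MicroStation, not in Python.

```python
# Sketch: local rate-of-spread estimate from two successive vectorized fire
# fronts. Front geometries and the 8 s interval are illustrative only.
from shapely.geometry import LineString
import numpy as np

front_t0 = LineString([(0, 0), (10, 1), (20, 0)])      # made-up front at time t
front_t1 = LineString([(0, 4), (10, 5.5), (20, 4)])    # made-up front at t + 8 s
dt = 8.0                                               # seconds between fronts

# Sample the earlier front and measure the shortest distance to the later one;
# this approximates the local displacement of the front between epochs.
samples = [front_t0.interpolate(d) for d in np.linspace(0, front_t0.length, 50)]
rates = np.array([front_t1.distance(p) for p in samples]) / dt
print(f"rate of spread: mean {rates.mean():.2f} m/s, max {rates.max():.2f} m/s")
```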

4. Analysis of the Results

The accuracy of the resulting orthoimages is influenced by several factors. One of the most important factors is the accuracy of determining the elements of the interior, relative, and exterior orientation of images because this ultimately affects the accuracy of generating digital surface models and orthoimages.
However, the accuracy of detecting the fire boundary in the orthoimages should not be overlooked either. Since only one camera is used in the proposed approach to monitor the fire’s development, it is not possible to model the shape of the flames, and they are incorrectly projected onto the reference model due to the central projection. Ideally, the fire front should be evaluated at its points of contact with the terrain; however, these are rarely clearly visible, because smoke and flames are pushed forward by the wind and often lie low above the terrain. Observing these contact points would require a very low UAV flight altitude, bringing safety complications. Therefore, for the reasons stated above, we vectorized the rear part of the fire boundary (Figure 10), that is, the edge of the area that had already been burned.
The individual aspects are explained in more detail in the following subsections.

4.1. The Reliability of Image Orientation

The reliability of the relative and exterior orientation of images depends on both the configuration of the camera network (levels of overlap and intersection angles between determining rays) and the accuracy of measuring GCPs on the images and in the reference coordinate system. However, the accuracy of relative orientation is also closely related to camera self-calibration and thus the determination of interior orientation elements. This can be complicated, especially when capturing surfaces with inappropriate texture, including grassy vegetation. During the detection and pairing of features, incorrect correspondences may be assigned, ultimately leading to unreliable camera calibration and relative orientation of images. This was also evident in the processing of the first aerial survey, as shown in Table 5.
Extreme differences in the elements of interior orientation (Table 5—especially cx, cy) also manifested themselves in deformations of the image block from the first aerial survey when GCPs were intentionally excluded from the bundle adjustment (BA) (Table 6) in order to show the impact of incorrect matches in the grass texture (as a test variant). Compared to flight 3, significantly larger residuals were obtained after the 3D affine transformation into the reference coordinate system when the GCPs were inactive during BA, even though the camera network configuration was similar in both flights. This indicates significant deformations of the camera network after the relative orientation of the images. Therefore, all projects aiming to produce orthoimages (Table 4) were computed with a pre-calibrated camera and fixed interior orientation elements obtained from the processing of the third flight with active GCPs during BA. Burnt grass without stems, which would otherwise change their appearance in the image with changes in perspective, evidently provides a suitable texture for processing using SfM, which justifies collecting data for 3D modeling after the fire.
The accuracy of determining the elements of the exterior orientation of images in the final project used for orthoimage creation (Table 4—Flight 1 + Flight 2) can be evaluated based on the results of the bundle adjustment and the reprojection errors on the GCPs. GCPs were measured on all images on which they were visible—for example, point No. 203 was measured on 106 images. Most images contained at least three GCPs simultaneously. The reprojection error on the GCPs did not exceed 1.7 pixels on any image, with an average pixel size on the ground (GSD), considering the oblique imaging axis, of 0.045 m.
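The mean GSD quoted above can be reproduced with a simple back-of-the-envelope calculation. In the sketch below, the sensor width, image width, and focal length are the commonly published specifications of the DJI Mavic 2 Pro camera and are stated here as assumptions rather than values taken from Table 1; the oblique GSD is evaluated only at the image centre, whereas the 0.045 m figure is an average over the whole oblique frame.

```python
# Back-of-the-envelope GSD estimate for an oblique image. Camera parameters are
# the commonly published DJI Mavic 2 Pro specifications (assumptions here).
import math

sensor_width_mm = 13.2     # 1-inch CMOS (assumed)
image_width_px = 5472      # 20 MP (assumed)
focal_mm = 10.26           # ~28 mm full-frame equivalent (assumed)
flight_height_m = 120.0    # above ground level
tilt_deg = 45.0            # camera axis tilt from nadir

pixel_pitch_mm = sensor_width_mm / image_width_px
gsd_nadir = pixel_pitch_mm / focal_mm * flight_height_m           # ~0.028 m
slant_range = flight_height_m / math.cos(math.radians(tilt_deg))  # ~170 m
gsd_center_oblique = pixel_pitch_mm / focal_mm * slant_range      # ~0.040 m

print(f"nadir GSD ~ {gsd_nadir:.3f} m, oblique image-centre GSD ~ {gsd_center_oblique:.3f} m")
```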

4.2. The Reliability of the Digital Surface Model

The accuracy of image orientation directly affects the accuracy of the digital surface model. The images from the final flight after the fire with active GCPs during bundle adjustment were used to create the final 3D model used for orthorectification of images. However, we were interested in the impact that the orientation of images with inactive GCPs during BA and different camera network configurations would have on the model’s accuracy. The absence of the need to establish GCPs would open the door to using archival and amateur records from low-cost UAVs without RTK and PPK. For this purpose, three models were generated:
  • From the final flight with active GCPs during BA (Figure 12 on the left),
  • From the final flight with inactive GCPs during BA (Figure 12 on the right and Figure 13),
  • From the side monitoring flight with inactive GCPs during BA (Figure 14).
For the last two models, the approximate elements of the exterior orientation from the EXIF data were also removed so that the entire relative orientation solution relied solely on automatically detected tie points in the images. To verify the accuracy of the 3D models, they were compared with classified point clouds obtained by airborne laser scanning (ALS), the results of which are freely available for the entire territory of the Slovak Republic (zbgis.skgeodesy.sk, accessed on 10 August 2023). The provider of these data is the Geodesy, Cartography, and Cadastre Authority of the Slovak Republic (ÚGKK SR). The reference data from aerial laser scanning were collected in March 2019, during the non-growing season. The vertical accuracy of the point cloud is better than 0.1 m (www.geoportal.sk/sk/zbgis/lls/, accessed on 29 February 2024). The comparison was conducted in CloudCompare, version 2.13, provided under a general public license (www.cloudcompare.org, accessed on 15 September 2023), based on the calculation of Hausdorff distances in the Z-axis direction.
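The same kind of vertical comparison can also be scripted outside CloudCompare. The sketch below matches each photogrammetric point to its horizontally nearest ALS ground point and differences the heights; the file names are placeholders, and both point clouds are assumed to be plain XYZ text files in the same coordinate system.

```python
# Sketch of a vertical (Z) comparison between a photogrammetric point cloud and
# an ALS reference, analogous to the CloudCompare evaluation; arrays of shape
# (N, 3) are assumed, and the file names are placeholders.
import numpy as np
from scipy.spatial import cKDTree

sfm = np.loadtxt("sfm_cloud.xyz")     # placeholder: photogrammetric points
als = np.loadtxt("als_ground.xyz")    # placeholder: ALS bare-ground points

# For each SfM point, find the horizontally (XY) nearest ALS ground point
tree = cKDTree(als[:, :2])
_, idx = tree.query(sfm[:, :2], k=1)

dz = sfm[:, 2] - als[idx, 2]          # signed vertical deviation in metres
print(f"mean dZ = {dz.mean():.3f} m, RMS = {np.sqrt((dz**2).mean()):.3f} m, "
      f"95th percentile |dZ| = {np.percentile(np.abs(dz), 95):.3f} m")
```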
The bare ground class from the ALS point cloud served as the reference model. The red differences in the northeastern parts of the territory in both models in Figure 12 are caused by unburned vegetation beyond the control line. From the comparison, it is evident that the accuracy of the 3D model used for the orthorectification of images (Figure 12 on the left) was at a level no worse than ±0.1 m in height. Considering that the target resolution of the orthoimages was also at the level of 0.1 m, we can consider the achieved model accuracy to be sufficient. In the model with inactive GCPs during BA (Figure 12 on the right), height deviations of the model were up to 0.5 m, and even that was only in areas with a low image overlap in the southern part of the territory (Figure 13).
Archival records of fires from UAVs usually do not include separate flights for terrain modeling. Therefore, we attempted to reconstruct the surface of the meadow solely from the side flight originally intended for dynamic fire monitoring—specifically, flight No. 2 (blocks 1 to 13) in Table 2. During image alignment, we did not use the fixed elements of interior orientation obtained from the final flight, as was the case in the previous scenarios. Since the resulting point cloud logically exhibited the largest errors in areas covered by variable flames and smoke, outliers were filtered out of the point cloud based on reliability parameters before generating the 3D model. The resulting 3D model showed significantly larger deformations compared to the final flight; hence, the color range of deviations in Figure 14 is enlarged to ±1.0 m. In the northern part, which lay outside the GCPs, deviations even exceeded 1 m. As the side flight ended near point 205, the northern part was modeled from images with very short baselines (Figure 8—blocks 9, 10, 11, and 12), and inaccuracies in the relative image orientation may have been more pronounced here. However, in the main part of the area of interest defined by the GCPs, the height deviations of the model did not significantly exceed 0.5 m despite the smoke and flames.
However, in addition to various deformations of image blocks and models, the absence of GCPs and an RTK/PPK system can also lead to changes in scale. The accuracy of the GNSS receivers implemented in low-cost UAVs is typically approx. 1 m, which is sufficient for stabilizing the UAV above the terrain. From the perspective of photogrammetric processing, the influence of the exterior orientation accuracy on the scale of the model decreases with the extent of the image block. For example, if the image block has a length of 500 m, a 1 m error corresponds to a relative error of 1:500, which may be acceptable, especially for slower fire spread rates, such as in our case (~2 m/s). We experimentally verified this assumption by georeferencing the final project (with all images) using only the approximate elements of exterior orientation taken directly from the UAV and then comparing control lengths measured between GCPs on the output orthophotomosaic (Figure 15). The influence of the scale change was even smaller, reaching only approximately one-thousandth of the original length in the longitudinal direction of the area.
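The length check described above amounts to comparing distances between GCP pairs measured on the orthophotomosaic with their reference values from GNSS. A minimal sketch is shown below; the point pairs and distances are invented for illustration and are not the values from Figure 15, although they are chosen to reflect the roughly one-thousandth scale effect reported.

```python
# Sketch of the scale check: compare distances between GCP pairs measured on the
# orthophotomosaic with the reference distances from GNSS. Values are made up.
reference_m = {("201", "205"): 480.00, ("202", "204"): 310.00}   # GNSS lengths
measured_m  = {("201", "205"): 479.52, ("202", "204"): 309.70}   # orthophoto lengths

for pair, ref in reference_m.items():
    diff = measured_m[pair] - ref
    print(f"{pair[0]}-{pair[1]}: dL = {diff:+.2f} m, relative error 1:{abs(ref / diff):.0f}")
```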
For practical purposes, it is essential to note that a 3D model generated from an image block without GCPs, georeferenced solely on the basis of the approximate GNSS coordinates of the projection centers from the EXIF data of the images, may be sufficient for determining relative changes between images. Moreover, if a current digital surface model from ALS is also available for the area, the photogrammetric modeling step may be omitted entirely, and the images can be projected directly onto the ALS model.

4.3. The Reliability of Orthoimages

Due to the oblique camera axis at a 45° angle, height deviations of 0.5 m achieved in the previous section would lead to an erroneous projection with a horizontal shift of the texture by approximately 0.5 m as well. The impact of this deficiency on determining the rate of fire spread is illustrated in Figure 16.
If the deformation of the 3D model were vertically negative, meaning that the surface model was located below the actual terrain, the central projection would cause an artificial increase in the determined speed. The acceptability of the achieved error would then depend not only on the level of deformation of the model but also on the distance from the camera’s projection center to the reference surface onto which the textural information is projected. At a flight height of 120 m and a maximum vertical deformation of the 3D model of 0.5 m, basic trigonometry shows that the resulting difference in the determined speed is negligible and reaches approximately the ratio between the vertical deformation of the model and the flight height above the terrain (a ratio of 0.5 m/120 m corresponds to an error in speed determination of approximately 0.4%).
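The reasoning above can be restated numerically under the same assumptions (45° camera tilt, 120 m flight height, 0.5 m vertical model deformation):

```python
# Numeric restatement of the error reasoning above (45 deg tilt, 120 m height,
# 0.5 m vertical model deformation); illustrative only.
import math

flight_height_m = 120.0
tilt_deg = 45.0
dz_model_m = 0.5

# Horizontal shift of the projected texture caused by the vertical model error
horizontal_shift_m = dz_model_m * math.tan(math.radians(tilt_deg))   # ~0.5 m

# If the deformation is roughly constant between consecutive epochs, the shift
# largely cancels; the residual relative effect on the derived spread rate is
# of the order of the vertical deformation divided by the flight height.
relative_speed_error = dz_model_m / flight_height_m
print(f"texture shift ~ {horizontal_shift_m:.2f} m, "
      f"speed error ~ {relative_speed_error:.2%}")
```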
The demonstration of the impact of model deformations on the orthorectification of images achieved in our experiment is illustrated in Figure 17 and Figure 18.

5. Discussion and Conclusions

In this study, we focused on the photogrammetric collection and SfM processing of images in the visible spectrum obtained using UAV for the purpose of reconstructing the spread of grassland fire. One advantage of the proposed approach is the use of only one low-cost off-the-shelf UAV and simultaneous data collection not only for mapping of the burned area but also for monitoring the development of the fire itself.
One of the main complications of photogrammetric measurement and processing is the georeferencing of the collected data. In experimental measurements under controlled conditions, the use of GCPs is straightforward, and in our case, it allowed for greater control over the results of the measurements. Currently, GCP measurement during a fire in uncontrolled conditions can be fully replaced by using UAVs with RTK or PPK technology, which provide a position accuracy of the camera projection centers at the level of 0.05 m. However, if the goal were only to monitor the speed of fire spread, the approximate elements of the exterior orientation from the UAV’s onboard system without RTK/PPK could also be sufficient for georeferencing. Although these elements have an absolute position accuracy of about 1 m, for larger image blocks, e.g., 500 m in length, such accuracy would cause a relative change in the scale of the entire block of at most 1:500, which may be acceptable because other factors, especially the flame detection accuracy, have a more significant impact on the accuracy of determining the fire front speed.
A different situation arises when processing archival amateur recordings. These recordings are often in the form of videos rather than images, so even approximate elements of the exterior orientation from EXIF/XMP data would not be available. This complication can only be addressed by subsequent GCP measurements in the field after the fire, with precise selection of measured elements that can be clearly identified in the images (such as buildings, solitary tree trunks, etc.). Another complication could be the lower resolution of images from archival videos. One way to test this effect on the data obtained during this study would be to resample the original 20 MP images to a lower resolution (e.g., full HD) and crop them to a 16:9 ratio (1920 × 1080). However, adding another experiment would go beyond the originally intended scope of this study, and we will consider it in future work.
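If such a test were carried out, the resampling step itself would be straightforward, for example with the Pillow library as in the sketch below; the file name is a placeholder, and the 5472 × 3648 px input size of the original 20 MP images is assumed.

```python
# Sketch of the resampling test suggested above: downscale a 20 MP image and
# centre-crop it to 16:9 full HD (1920x1080) with Pillow. Paths are placeholders.
from PIL import Image

img = Image.open("DJI_0001.JPG")                 # assumed 5472 x 3648 px input
scale = 1920 / img.width
img_small = img.resize((1920, round(img.height * scale)), Image.LANCZOS)

# Centre-crop the excess height to obtain 1920 x 1080
top = (img_small.height - 1080) // 2
img_fullhd = img_small.crop((0, top, 1920, top + 1080))
img_fullhd.save("DJI_0001_fullhd.jpg", quality=92)
```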
In an ideal scenario, the fire front should be monitored at the point where it meets the terrain. However, obtaining such a camera perspective is usually impossible. The UAV would need to fly directly ahead of the fire and at a relatively low altitude above the terrain to avoid smoke and flames, which could compromise monitoring safety. Additionally, evaluating the flames themselves is not suitable, as they have a spatial shape and would be incorrectly projected due to the central projection. Furthermore, the size of the flames is variable over time. Instead, it is preferable to analyze the already burned area behind the flames, where there is a relatively high contrast between the burnt surface and the fire. However, visibility in this area may also be impaired due to smoke, which can only be addressed by using thermal cameras operating in the infrared spectrum with wavelengths of 1.4–3 μm, 3–5 μm, or 8–14 μm [54]. Processing infrared images using SfM introduces additional complications, especially in automatic keypoint detection, as these images often have a significantly lower resolution than conventional visible-spectrum images [55]. In this case, it is, therefore, advisable to use a combination of sensors—use visible-spectrum images for image alignment and subsequently project infrared images with a lower resolution, collected simultaneously from the same UAV, onto the 3D model.
An example of the use of photogrammetric measurements in verifying mathematical models of fire spread can be found in the Supplementary Materials in the form of a previously unpublished video. Specifically, it covers 80 s of the fire, from which we had images at 5 s intervals. In each image, the boundary of the fire was manually segmented (red curve). Based on these segmented curves, we searched for optimal values of several parameters of our model, such as the wind direction and the influence of curvature, terrain slope, and wind speed. The optimal values were then used in the fire reconstruction (blue curve). Contour lines are indicated in green, and the blue arrow on the top right indicates the current wind direction for the given interval.
Nevertheless, the aim of this study was not to analyze the fire spread itself; that task should be handled by fire experts and mathematicians specialized in this field. However, since the reliability of their conclusions directly depends on the reliability of the real-world data collected in the field, our goal was primarily to highlight the photogrammetric aspect of the experiments. From the presented data, it is evident that despite various complications, low-cost SfM photogrammetry can provide interesting results for the validation or correction of mathematical models of fire spread, even with images in the visible spectrum.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/drones8070282/s1, Video S1: Example of fire propagation modeling.

Author Contributions

Conceptualization, M.M. and M.F.; methodology, M.M. and M.F.; software, T.L.; validation, M.F., M.A. and K.M.; formal analysis, M.F.; investigation, M.M.; resources, M.F.; data curation, M.F. and T.L.; writing—original draft preparation, M.M.; writing—review and editing, M.F., M.A. and K.M.; visualization, M.M. and M.A.; supervision, M.F.; project administration, M.F.; funding acquisition, M.F. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Scientific Grant Agency of the Slovak Republic under the grant 1/0618/23.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Materials, and further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to express their gratitude to the Fire and Rescue Corps of Banská Bystrica under the leadership of Commander Roman Čunderlík and to Andrea Majlingová from the Technical University of Zvolen for organizing the action with controlled fire and inviting the authors to its documentation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gill, A.M.; Stephens, S.L.; Cary, G.J. The worldwide “wildfire” problem. Ecol. Appl. 2013, 23, 438–454. [Google Scholar] [CrossRef] [PubMed]
  2. Pechony, O.; Shindell, D.T. Driving forces of global wildfires over the past millennium and the forthcoming century. Proc. Natl. Acad. Sci. USA 2010, 107, 19167–19170. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, Z.; Chappellaz, J.; Park, K.; Mak, J.E. Large variations in Southern Hemisphere biomass burning during the last 650 years. Science 2010, 330, 1663–1666. [Google Scholar] [CrossRef] [PubMed]
  4. Van Hees, P. Validation and verification of fire models for fire safety engineering. Procedia Eng. 2013, 62, 154–168. [Google Scholar] [CrossRef]
  5. Artés, T.; Oom, D.; De Rigo, D.; Durrant, T.H.; Maianti, P.; Libertà, G.; San-Miguel-Ayanz, J. A global wildfire dataset for the analysis of fire regimes and fire behaviour. Sci. Data 2019, 6, 296. [Google Scholar] [CrossRef] [PubMed]
  6. Lopes, A.M.G.; Sousa, A.C.M.; Viegas, D.X. Numerical simulation of turbulent flow and fire propagation in complex topography. Numer. Heat Transf. Part A Appl. 1995, 27, 229–253. [Google Scholar] [CrossRef]
  7. Boboulos, M.; Purvis, M.R.I. Wind and slope effects on ROS during the fire propagation in East-Mediterranean pine forest litter. Fire Saf. J. 2009, 44, 764–769. [Google Scholar] [CrossRef]
  8. Sullivan, A.L. Wildland surface fire spread modelling, 1990–2007. 1: Physical and quasi-physical models. Int. J. Wildland Fire 2009, 18, 349–368. [Google Scholar] [CrossRef]
  9. Dickson, B.G.; Prather, J.W.; Xu, Y.; Hampton, H.M.; Aumack, E.N.; Sisk, T.D. Mapping the probability of large fire occurrence in northern Arizona, USA. Landsc. Ecol. 2006, 21, 747–761. [Google Scholar] [CrossRef]
  10. Syphard, A.D.; Radeloff, V.C.; Keuler, N.S.; Taylor, R.S.; Hawbaker, T.J.; Stewart, S.I.; Clayton, M.K. Predicting spatial patterns of fire on a southern California landscape. Int. J. Wildland Fire 2008, 17, 602–613. [Google Scholar] [CrossRef]
  11. Semeraro, T.; Mastroleo, G.; Aretano, R.; Facchinetti, G.; Zurlini, G.; Petrosillo, I. GIS Fuzzy Expert System for the assessment of ecosystems vulnerability to fire in managing Mediterranean natural protected areas. J. Environ. Manag. 2016, 168, 94–103. [Google Scholar] [CrossRef] [PubMed]
  12. West, A.M.; Kumar, S.; Jarnevich, C.S. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA. Clim. Change 2016, 134, 565–577. [Google Scholar] [CrossRef]
  13. Li, Z.; Nadon, S.; Cihlar, J.; Stocks, B. Satellite-based mapping of Canadian boreal forest fires: Evaluation and comparison of algorithms. Int. J. Remote Sens. 2000, 21, 3071–3082. [Google Scholar] [CrossRef]
  14. Li, Y.; Vodacek, A.; Kremens, R.L.; Ononye, A.; Tang, C. A hybrid contextual approach to wildland fire detection using multispectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2115–2126. [Google Scholar] [CrossRef]
  15. Stow, D.A.; Riggan, P.J.; Storey, E.J.; Coulter, L.L. Measuring fire spread rates from repeat pass airborne thermal infrared imagery. Remote Sens. Lett. 2014, 5, 803–812. [Google Scholar] [CrossRef]
  16. Ambrosia, V.G.; Wegener, S.S.; Sullivan, D.V.; Buechel, S.W.; Dunagan, S.E.; Brass, J.A.; Stoneburner, J.; Schoenung, S.M. Demonstrating UAV-acquired real-time thermal data over fires. Photogramm. Eng. Remote Sens. 2003, 69, 391–402. [Google Scholar] [CrossRef]
  17. Sherstjuk, V.; Zharikova, M.; Sokol, I. Forest fire-fighting monitoring system based on UAV team and remote sensing. In Proceedings of the 2018 IEEE 38th International Conference on Electronics and Nanotechnology (ELNANO), Kyiv, Ukraine, 24–26 April 2018; pp. 663–668. [Google Scholar] [CrossRef]
  18. Gomez, C.; Purdie, H. UAV-based photogrammetry and geocomputing for hazards and disaster risk monitoring—A review. Geoenviron. Disasters 2016, 3, 23. [Google Scholar] [CrossRef]
  19. Afghah, F.; Razi, A.; Chakareski, J.; Ashdown, J. Wildfire monitoring in remote areas using autonomous unmanned aerial vehicles. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 835–840. [Google Scholar] [CrossRef]
  20. Ambroz, M.; Balažovjech, M.; Medľa, M.; Mikula, K. Numerical modeling of wildland surface fire propagation by evolving surface curves. Adv. Comput. Math. 2019, 45, 1067–1103. [Google Scholar] [CrossRef]
  21. Ambroz, M.; Mikula, K.; Fraštia, M.; Marčiš, M. Parameter estimation for the forest fire propagation model. Tatra Mt. Math. Publ. 2020, 75, 1–22. [Google Scholar] [CrossRef]
  22. Martinez-de Dios, J.R.; Arrue, B.C.; Ollero, A.; Merino, L.; Gómez-Rodríguez, F. Computer vision techniques for forest fire perception. Image Vis. Comput. 2008, 26, 550–562. [Google Scholar] [CrossRef]
  23. Toulouse, T.; Rossi, L.; Akhloufi, M.A.; Pieri, A.; Maldague, X. A multimodal 3D framework for fire characteristics estimation. Meas. Sci. Technol. 2018, 29, 025404. [Google Scholar] [CrossRef]
  24. Förstner, W.; Wrobel, B.P. Photogrammetric Computer Vision; Springer: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  25. Ciullo, V.; Rossi, L.; Pieri, A. Experimental fire measurement with UAV multimodal stereovision. Remote Sens. 2020, 12, 3546. [Google Scholar] [CrossRef]
  26. Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency detection and deep learning-based wildfire identification in UAV imagery. Sensors 2018, 18, 712. [Google Scholar] [CrossRef] [PubMed]
  27. Tomaštík, J.; Mokroš, M.; Surový, P.; Grznárová, A.; Merganič, J. UAV RTK/PPK method—An optimal solution for mapping inaccessible forested areas? Remote Sens. 2019, 11, 721. [Google Scholar] [CrossRef]
  28. Giordan, D.; Hayakawa, Y.; Nex, F.; Remondino, F.; Tarolli, P. The use of remotely piloted aircraft systems (RPASs) for natural hazards monitoring and management. Nat. Hazards Earth Syst. Sci. 2018, 18, 1079–1096. [Google Scholar] [CrossRef]
  29. Štroner, M.; Urban, R.; Seidl, J.; Reindl, T.; Brouček, J. Photogrammetry using UAV-mounted GNSS RTK: Georeferencing strategies without GCPs. Remote Sens. 2021, 13, 1336. [Google Scholar] [CrossRef]
  30. Ullman, S. The interpretation of structure from motion. Proc. R. Soc. Lond. B Biol. Sci. 1979, 203, 405–426. [Google Scholar] [CrossRef]
  31. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar]
  32. Morelli, L.; Ioli, F.; Maiwald, F.; Mazzacca, G.; Menna, F.; Remondino, F. Deep-Image-Matching: An open-source toolbox for multi-view image matching of complex geomorphological scenarios. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2024, 48, 309–316. [Google Scholar] [CrossRef]
  33. Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar] [CrossRef]
  34. Agarwal, S.; Furukawa, Y.; Snavely, N.; Simon, I.; Curless, B.; Seitz, S.M.; Szeliski, R. Building Rome in a day. Commun. ACM 2011, 54, 105–112. [Google Scholar] [CrossRef]
  35. Gao, X.S.; Hou, X.R.; Tang, J.; Cheng, H.F. Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 930–943. [Google Scholar] [CrossRef]
  36. Crandall, D.; Owens, A.; Snavely, N.; Huttenlocher, D. Discrete-continuous optimization for large-scale structure from motion. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 3001–3008. [Google Scholar] [CrossRef]
  37. Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O. Use of image based modelling for documentation of intricately shaped objects. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 327–334. [Google Scholar] [CrossRef]
  38. Qureshi, A.H.; Alaloul, W.S.; Murtiyoso, A.; Saad, S.; Manzoor, B. Comparison of Photogrammetry Tools Considering Rebar Progress Recognition. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 141–146. [Google Scholar] [CrossRef]
  39. Pukanská, K.; Bartoš, K.; Bella, P.; Sabová, J. Comparison of non-contact surveying technologies for modelling underground morphological structures. Acta Montan. Slovaca 2017, 22, 246. [Google Scholar]
  40. Haala, N. Multiray photogrammetry and dense image matching. In Photogrammetric Week; Fritsch, E.D., Ed.; VDE Verlag: Heidelberg, Germany, 2011; Volume 11, pp. 185–195. [Google Scholar]
  41. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; Springer: Berlin/Heidelberg, Germany, 2002; pp. 298–372. [Google Scholar] [CrossRef]
  42. Schneider, J.; Schindler, F.; Läbe, T.; Förstner, W. Bundle adjustment for multi-camera systems with points at infinity. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 3, 75–80. [Google Scholar] [CrossRef]
  43. Börlin, N.; Grussenmeyer, P. Experiments with metadata-derived initial values and linesearch bundle adjustment in architectural photogrammetry. In Proceedings of the XXIV International CIPA Symposium, Strasbourg, France, 2–6 September 2013; Copernicus Publications: Enschede, The Netherlands, 2013; Volume 2, pp. 43–48. [Google Scholar] [CrossRef]
  44. Blanch, X.; Eltner, A.; Guinau, M.; Abellan, A. Multi-Epoch and Multi-Imagery (MEMI) photogrammetric workflow for enhanced change detection using time-lapse cameras. Remote Sens. 2021, 13, 1460. [Google Scholar] [CrossRef]
  45. Pukanská, K.; Bartoš, K.; Bakoň, M.; Papčo, J.; Kubica, L.; Barlák, J.; Rovňák, M.; Kseňak, Ľ.; Zelenakova, M.; Savchyn, I.; et al. Multi-sensor and multi-temporal approach in monitoring of deformation zone with permanent monitoring solution and management of environmental changes: A case study of Solotvyno salt mine, Ukraine. Front. Earth Sci. 2023, 11, 1167672. [Google Scholar] [CrossRef]
  46. McRae, R.H.; Sharples, J.J.; Wilkes, S.R.; Walker, A. An Australian pyro-tornadogenesis event. Nat. Hazards 2013, 65, 1801–1811. [Google Scholar] [CrossRef]
  47. Eltner, A.; Kaiser, A.; Abellan, A.; Schindewolf, M. Time lapse structure-from-motion photogrammetry for continuous geomorphic monitoring. Earth Surf. Process. Landf. 2017, 42, 2240–2253. [Google Scholar] [CrossRef]
  48. Ioli, F.; Bruno, E.; Calzolari, D.; Galbiati, M.; Mannocchi, A.; Manzoni, P.; Martini, M.; Bianchi, A.; Cina, A.; De Michele, C.; et al. A replicable open-source multi-camera system for low-cost 4d glacier monitoring. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 137–144. [Google Scholar] [CrossRef]
  49. Cucchiaro, S.; Cavalli, M.; Vericat, D.; Crema, S.; Llena, M.; Beinat, A.; Marchi, L.; Cazorzi, F. Monitoring topographic changes through 4D-structure-from-motion photogrammetry: Application to a debris-flow channel. Environ. Earth Sci. 2018, 77, 632. [Google Scholar] [CrossRef]
  50. Pacheco-Ruiz, R.; Adams, J.; Pedrotti, F. 4D modelling of low visibility Underwater Archaeological excavations using multi-source photogrammetry in the Bulgarian Black Sea. J. Archaeol. Sci. 2018, 100, 120–129. [Google Scholar] [CrossRef]
  51. Sherwood, C.R.; Warrick, J.A.; Hill, A.D.; Ritchie, A.C.; Andrews, B.D.; Plant, N.G. Rapid, remote assessment of Hurricane Matthew impacts using four-dimensional structure-from-motion photogrammetry. J. Coast. Res. 2018, 34, 1303–1316. [Google Scholar] [CrossRef]
  52. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  53. Harwin, S.; Lucieer, A.; Osborn, J. The impact of the calibration method on the accuracy of point clouds derived using unmanned aerial vehicle multi-view stereopsis. Remote Sens. 2015, 7, 11933–11953. [Google Scholar] [CrossRef]
  54. Gouverneur, B.; Verstockt, S.; Pauwels, E.J.E.M.; Han, J.; de Zeeuw, P.M.; Vermeiren, J. Archeological treasures protection based on early forest wildfire multi-band imaging detection system. In Electro-Optical and Infrared Systems: Technology and Applications IX; SPIE: Bellingham, WA, USA, 2012; Volume 8541, pp. 104–119. [Google Scholar] [CrossRef]
  55. Dlesk, A.; Vach, K.; Pavelka, K. Photogrammetric co-processing of thermal infrared images and RGB images. Sensors 2022, 22, 1655. [Google Scholar] [CrossRef]
Figure 1. Location of the testing site in central Slovakia (left) and details of the meadow used for the experimental controlled fire in highlighted yellow region with red point corresponding to the displayed ETRS89 coordinates (right) (source: google.com/maps, accessed on 25 February 2024).
Figure 2. Burned control line from the eastern side of the specific location—wider view including numbers of GCPs and a yellow box (left) to which the detailed image from UAV (right) pertains.
Figure 3. Flight at the beginning of the fire ignition—camera network configuration (left) and an example of an oblique image (right).
Figure 4. Camera network from dynamic monitoring of fire development at a 2 s interval (red images)—the images were aligned towards the original image block from the first flight (blue images).
Figure 5. Image block created immediately after the completion of dynamic monitoring (left) and an example of an oblique image from the UAV (right).
Figure 6. Graphic representation of the workflow of the basic experiment.
Figure 7. Digital surface model from the 3rd aerial survey after the burning of the entire meadow, containing approximately 0.5 million triangles.
Figure 8. Overview of static subsets 1–12 within the image block from the 2nd flight (monitoring—red color).
Figure 9. Example of orthorectified images at intervals of 60 s, showing the change in camera perspective during dynamic fire monitoring.
Figure 10. Result of manual vectorization of fire spread at an 8 s interval. In the background is an orthophoto mosaic of the burned meadow from the last flight.
Figure 11. Example of the distribution of tie points used for orienting the selected image. Blue points represent those for which a corresponding pair was found in subsequent images, while gray points represent those for which no match was found.
Figure 12. Vertical deviations of the photogrammetric 3D model of the burned meadow from ALS point cloud—GCPs active (left) and inactive (right) during bundle adjustment (BA). Gray color represents values outside the range of ±0.5 m.
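The comparison underlying Figure 12 can be illustrated with a minimal sketch. It assumes that the photogrammetric model and the ALS reference have already been interpolated onto the same regular grid (a step not detailed here) and simply differences the two surfaces, reporting cells outside the ±0.5 m range separately, analogous to the gray areas in the figure.

```python
# A minimal sketch (not the exact workflow of the study) of the DEM-vs-ALS
# comparison behind Figure 12. It assumes both surfaces were already
# interpolated onto the same regular grid: numpy arrays of identical shape,
# heights in metres, NaN where no data exist.
import numpy as np

def vertical_deviations(dem_photo: np.ndarray, dem_als: np.ndarray,
                        clip: float = 0.5):
    """Return per-cell deviations plus mean, RMSE, and the share of valid
    cells outside +/- clip metres (shown in gray in Figure 12)."""
    dz = dem_photo - dem_als
    valid = ~np.isnan(dz)
    outside = valid & (np.abs(dz) > clip)
    stats = {
        "mean [m]": float(dz[valid].mean()),
        "rmse [m]": float(np.sqrt((dz[valid] ** 2).mean())),
        "outside range [%]": 100.0 * outside.sum() / valid.sum(),
    }
    return dz, stats
```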
Figure 13. Confidence of points in the point cloud used for generating the 3D model, determined from image overlap (the number of depth maps used for point reconstruction). Blue represents the highest reliability and red the lowest.
Figure 14. Vertical deviations of the photogrammetric 3D model created solely from the side monitoring flight compared to the ALS point cloud. The gray color represents values outside the range of ±1.0 m.
Figure 15. Illustration of length errors (yellow values) on the orthophotomosaic georeferenced using approximate onboard coordinates of the projection centers in the WGS84 coordinate system. All lengths (green values) between GCPs (white numbers) were shorter than their reference values.
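The length check shown in Figure 15 amounts to comparing GCP-to-GCP distances read from the orthophotomosaic with reference distances between the surveyed GCPs and expressing each difference as a relative error (1:N). A minimal sketch with placeholder distances, not the values from the experiment, is given below.

```python
# A minimal sketch of the length check in Figure 15: relative error of
# GCP-to-GCP distances measured on the orthophotomosaic against reference
# distances between the surveyed GCPs. The numbers below are placeholders,
# not values from the experiment.
measured  = [148.62, 96.31, 201.77]   # lengths read from the orthophoto [m]
reference = [148.90, 96.50, 202.15]   # reference lengths between GCPs [m]

for d_o, d_r in zip(measured, reference):
    rel = abs(d_o - d_r) / d_r                       # relative length error
    print(f"{d_r:7.2f} m -> {d_o - d_r:+.2f} m, relative accuracy ~1:{1 / rel:.0f}")
```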
Figure 16. Visualization of how orthoimages generated at 2 s intervals from vertically displaced 3D models affect the accuracy of determining relative changes, from which the rate of fire spread is calculated.
Figure 17. Selected UAV image (left) and its orthorectified version (right). The green rectangle in the right image indicates the area near point 204, which is detailed in Figure 18.
Figure 18. Detail of the orthoimage rectified based on the 3D model from the post-fire flight (left) and the deformed model generated from the side monitoring flight (right). Differences in the position of the green markers against the background texture are visible in the images. Deviations reached a maximum of 0.3 m, which corresponds to the vertical deviation of the model in the corresponding area in Figure 14.
Table 1. Parameters of the Hasselblad L1D-20c camera integrated into the DJI Mavic 2 Pro.

Camera Parameter | Value
Image sensor size | 13.13 × 8.76 mm
Image sensor resolution | 5472 × 3648 pixels
Pixel size | 2.4 μm
Focal length | 10.26 mm
Shutter type | Electronic rolling shutter
Aperture | f/2.8 to f/11
Table 2. Overview of image blocks.

Flight | Subset | Images in Block | Static/Moving Camera | Purpose
1 (before fire) | - | 40 | Moving | Main image block
2 | 1 | 14 | Static | Monitoring
2 | 2 | 9 | Static | Monitoring
2 | 3 | 37 | Static | Monitoring
2 | 4 | 28 | Static | Monitoring
2 | 5 | 10 | Static | Monitoring
2 | 6 | 18 | Static | Monitoring
2 | 7 | 31 | Static | Monitoring
2 | 8 | 13 | Static | Monitoring
2 | 9 | 6 | Static | Monitoring
2 | 10 | 5 | Static | Monitoring
2 | 11 | 38 | Static | Monitoring
2 | 12 | 6 | Static | Monitoring
2 | 13 | 76 | Moving | Monitoring
3 (after fire) | - | 28 | Moving | 3D model
Table 3. Settings for image alignment process in Agisoft Metashape Professional.

Accuracy | Generic Preselection | Key Point Limit | Tie Point Limit
High | Yes | 40,000 per image | 10,000 per image
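For reproducibility, the settings in Table 3 can also be applied through Metashape's Python scripting interface. The following is a minimal sketch, assuming a recent version of the API in which the GUI accuracy level "High" corresponds to downscale=1; the image and project paths are placeholders.

```python
# A minimal sketch of applying the Table 3 alignment settings through the
# Agisoft Metashape Professional Python API (recent versions, where the GUI
# "High" accuracy corresponds to downscale=1); paths are placeholders.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["/path/to/flight1/IMG_0001.JPG"])  # placeholder image list

chunk.matchPhotos(downscale=1,                # "High" accuracy
                  generic_preselection=True,
                  reference_preselection=False,
                  keypoint_limit=40000,
                  tiepoint_limit=10000)
chunk.alignCameras()
doc.save("/path/to/project.psx")
```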
Table 4. Statistics from the photogrammetric processing of image blocks.

Flight | Images | Tie Points | Tie Point RMS Reprojection Error [pix] | GCPs X RMSE [m] | GCPs Y RMSE [m] | GCPs Z RMSE [m] | GCPs RMS Reprojection Error [pix]
1 (before fire) | 40 | 112,627 | 0.75 | 0.08 | 0.11 | 0.10 | 0.57
1 + 2 (monitor.) | 331 | 165,025 | 1.23 | 0.03 | 0.05 | 0.03 | 0.42
3 (after fire) | 28 | 64,666 | 0.94 | 0.02 | 0.04 | 0.05 | 0.36
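The GCP RMSE columns in Table 4 (and in Table 6 below) are root-mean-square differences between the GCP coordinates estimated in the bundle adjustment and their reference values. A minimal sketch of this computation is shown below; the example residuals are placeholders, not values from the experiment.

```python
# A minimal sketch of how per-axis GCP RMSE values such as those in Tables 4
# and 6 are obtained: root mean square of the differences between GCP
# coordinates estimated in the bundle adjustment and their reference values.
import numpy as np

def gcp_rmse(estimated: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """estimated, reference: (n_points, 3) arrays of X, Y, Z in metres.
    Returns the RMSE per coordinate axis."""
    residuals = estimated - reference
    return np.sqrt(np.mean(residuals ** 2, axis=0))

# Hypothetical check with three GCPs (residuals in metres are placeholders):
est = np.array([[0.02, -0.05, 0.03], [0.01, 0.04, -0.06], [-0.03, 0.02, 0.05]])
print(gcp_rmse(est, np.zeros_like(est)))   # -> RMSE per axis in metres
```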
Table 5. Interior orientation elements for flight 3 and flight 1 without GCPs. The focal length f, as well as the image coordinates of the principal point cx, cy, can be converted from pixels to mm using the known pixel size on the sensor (2.4 μm). The coefficients of radial distortion K1, K2, K3 and the tangential distortion P1, P2 of the Brown distortion model are dimensionless.

Parameter | Flight 1 | Flight 3 | Difference [%]
f [pix] | 4939.29 | 4298.16 | −14.9
cx [pix] | −55.58 | 22.17 | 350.7
cy [pix] | −691.08 | 12.56 | 5603.1
K1 | 0.000441 | −0.008228 | 105.4
K2 | 0.017576 | 0.036370 | 51.7
K3 | −0.023303 | −0.043687 | 46.7
P1 | 0.002480 | 0.001253 | −97.9
P2 | −0.001990 | −0.002049 | 2.9
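As the caption of Table 5 notes, the focal length and principal point can be converted from pixels to millimetres using the 2.4 μm pixel size. A short worked example: the flight 3 value of 4298.16 px corresponds to about 10.32 mm, close to the nominal 10.26 mm in Table 1, whereas the flight 1 solution without GCPs (4939.29 px, roughly 11.85 mm) illustrates how poorly the self-calibration is determined without ground control.

```python
# Worked conversion of the Table 5 focal lengths from pixels to millimetres
# using the 2.4 μm (0.0024 mm) pixel size, as described in the table caption.
PIXEL_SIZE_MM = 0.0024

for label, f_pix in [("flight 1 (no GCPs)", 4939.29), ("flight 3", 4298.16)]:
    print(f"{label}: f = {f_pix} px = {f_pix * PIXEL_SIZE_MM:.2f} mm")
# flight 1 (no GCPs): f = 4939.29 px = 11.85 mm
# flight 3:           f = 4298.16 px = 10.32 mm  (nominal value in Table 1: 10.26 mm)
```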
Table 6. Statistics from experimental photogrammetric processing with various image orientation settings.

Flight | GCPs in BA | Precalib. Camera | Images | Tie Points | Tie Point RMS Reprojection Error [pix] | GCPs X RMSE [m] | GCPs Y RMSE [m] | GCPs Z RMSE [m] | GCPs RMS Reprojection Error [pix]
1 (before fire) | NO | NO | 40 | 111,123 | 0.73 | 0.38 | 0.27 | 0.35 | 0.43
1 (before fire) | YES | NO | 40 | 112,738 | 0.75 | 0.08 | 0.11 | 0.10 | 0.57
1 + 2 (monitor.) | NO | YES | 331 | 237,373 | 0.85 | 0.22 | 0.26 | 0.05 | 0.62
3 (after fire) | NO | NO | 28 | 63,934 | 0.50 | 0.20 | 0.14 | 0.10 | 0.32
3 (after fire) | YES | NO | 28 | 64,666 | 0.94 | 0.02 | 0.04 | 0.05 | 0.36