 
 
Article

Combining Drone LiDAR and Virtual Reality Geovisualizations towards a Cartographic Approach to Visualize Flooding Scenarios

by Ermioni Eirini Papadopoulou and Apostolos Papakonstantinou *
Department of Civil Engineering and Geomatics, School of Engineering and Technology, Cyprus University of Technology, Lemesos 3603, Cyprus
*
Author to whom correspondence should be addressed.
Drones 2024, 8(8), 398; https://doi.org/10.3390/drones8080398
Submission received: 9 July 2024 / Revised: 11 August 2024 / Accepted: 13 August 2024 / Published: 15 August 2024

Abstract
This study aims to create virtual reality (VR) geovisualizations using 3D point clouds obtained from airborne LiDAR technology. These visualizations were used to map the current state of river channels and tributaries in the Thessalian Plain, Greece, following severe flooding in the summer of 2023. The study area examined in this paper comprises the embankments enclosing the tributaries of the Pineios River in the Thessalian Plain region, specifically between the cities of Karditsa and Trikala in mainland Greece. This area was significantly affected in the summer of 2023, when the region’s rivers flooded, destroying urban elements and crops. The extent of the impact across the entire Thessalian Plain made managing the event highly challenging for the authorities. High-resolution 3D mapping and VR geovisualization of the embankments encasing the main rivers and tributaries of the Thessalian Plain provide essential information for planning the area’s restoration and designing prevention and mitigation measures for similar disasters. The proposed methodology consists of four stages. The first and second stages present the design of the data acquisition process with airborne LiDAR, aiming at high-resolution 3D mapping of the sites. The third stage focuses on data processing, point cloud classification, and thematic information creation. The fourth stage addresses the development of the VR application. The VR application allows users to immerse themselves in the study area and to observe and interact with the existing state of the embankments in high resolution. Additionally, users can interact with the 3D point cloud, where thematic information is displayed describing the classification of the 3D cloud, the altitude, and the RGB color. Additional thematic information in vector form, providing qualitative characteristics, is also illustrated in the virtual space. Furthermore, six different scenarios were visualized in 3D space using the VR application. These scenarios, built on digital twins of the current anti-flood infrastructure, depict floods at varying water levels. Overall, this study explores the efficient visualization of thematic information in 3D virtual space, with the goal of providing an innovative VR tool for managing the impact of various catastrophic flood scenarios on anthropogenic infrastructure, livestock, and ecological capital.

1. Introduction

Light Detection and Ranging (LiDAR) technology in geospatial applications has seen significant advancements over the past two decades, providing precise and detailed data necessary for domains such as environmental monitoring, urban planning, and disaster management [1]. LiDAR is a remote sensing method that uses light in the form of a pulsed laser to measure variable distances to the Earth’s surface. This technology, particularly when integrated with mobile and airborne platforms, has transformed the capabilities for 3D mapping and geovisualization [2].
Airborne LiDAR has become a critical tool for capturing high-resolution 3D representations of the Earth’s surface [3]. By emitting laser pulses from an aircraft and measuring the time it takes for these pulses to return after hitting objects on the ground, LiDAR generates precise point cloud data [4]. These data are invaluable for creating digital elevation models (DEMs) and other 3D representations essential for flood modeling and risk assessment [5,6,7,8]. The ability of LiDAR to penetrate vegetation and provide accurate ground measurements makes it especially useful in densely vegetated floodplains where traditional surveying methods may be less effective [9,10].
In recent years, the integration of LiDAR with other technologies has expanded its application scope [11,12,13]. Combining LiDAR data with hydrological and hydraulic models enhances the accuracy of flood predictions and risk assessments. These integrations allow for the simulation of flood events under various scenarios, improving the understanding of flood behavior and aiding in developing effective mitigation strategies [14,15,16,17]. The detailed topographic information obtained from LiDAR is crucial for identifying flood-prone areas and planning infrastructure to minimize flood impacts.
Virtual reality (VR) technology represents a significant advancement in the geovisualization of geospatial data [18,19,20,21]. VR provides immersive environments where users can interact with 3D models of landscapes and infrastructure [22,23,24]. This interaction is particularly beneficial for visualizing flood scenarios and assessing their potential impacts. VR allows stakeholders, including scientists, policymakers, and the public, to explore complex data intuitively and engagingly [25,26,27]. It also enhances communication and collaborative decision-making by providing a more accessible representation of flood risks compared to traditional maps and models [28,29,30].
This study aims to develop VR geovisualizations utilizing 3D point clouds collected through airborne LiDAR technology. This approach is intended for the 3D mapping of the current state of river channels and tributaries within the Thessalian Plain following catastrophic flooding in the summer of 2023. This study specifically focuses on the levees of the rivers and tributaries in the Thessalian Plain, mainland Greece, an area significantly impacted by flooding that destroyed urban elements and crops. High-resolution 3D mapping and VR geovisualization of the embankments surrounding the main rivers and tributaries provide vital information for planning restoration processes and designing prevention and mitigation measures for future disasters. Integrating LiDAR data into VR platforms involves creating realistic 3D models augmented with real-time flood simulation data, allowing users to visualize potential flood scenarios dynamically. This immersive experience helps comprehend complex flood dynamics and assess the potential impact on communities and infrastructure.
This study contributes significantly to the field of cartography and 3D geovisualization. It explores the efficient transfer of thematic information into the 3D virtual space, providing an innovative tool for managing the impacts of natural disasters, specifically flooding. The high-resolution 3D mapping and VR geovisualizations could offer critical information for planning restoration processes and designing prevention and mitigation measures for similar disasters in the future. The proposed approach represents a notable advancement in integrating geospatial technologies with immersive VR applications, enhancing the ability to visualize and analyze environmental impacts in a detailed and interactive manner. This study addresses the significant challenge of flood risk management by leveraging airborne LiDAR for high-precision 3D mapping and employing VR for immersive geovisualizations of six flood scenarios. These technologies collectively enhance the ability to predict, prepare for, and mitigate the impacts of flood events, ultimately contributing to more resilient communities and better-informed decision-making processes.

2. Materials and Methods

This study employs a comprehensive methodology involving four main stages: (i) LiDAR flight planning, (ii) data acquisition, (iii) data processing, and (iv) geovisualization (Figure 1). The initial stage involves planning the LiDAR flights. The Zenmuse L1 sensor was utilized, with a linear flight path and a front overlap of 85%. The flights were conducted at an altitude of 70 m, ensuring a point density exceeding 300 points per square meter. Multiple returns (three and five) were captured, enhancing the accuracy and detail of the collected information. In the data acquisition stage, a total of 15 flights were conducted. These flights yielded over 3000 photos and point clouds with a total point count of 50,462,349. The total mapped area covers 35 km in length and 100 m in width.
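The relationship between these flight parameters and the achieved point density can be sanity-checked with a rough estimate based on pulse rate, flight speed, and swath width. The Python sketch below illustrates the calculation; all numeric values other than the 70 m altitude are illustrative assumptions, not the Zenmuse L1’s exact specification.

```python
import math

# Back-of-the-envelope LiDAR point-density check (all values except the
# 70 m altitude are illustrative assumptions, not the L1's specification).
pulse_rate_hz = 240_000   # assumed effective pulse repetition rate
avg_returns = 1.5         # assumed mean returns per pulse (multi-return)
speed_m_s = 5.0           # assumed flight speed
altitude_m = 70.0         # nominal flight altitude from this section
fov_deg = 70.0            # assumed across-track field of view

# Swath width on the ground, then points deposited per square meter.
swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
density = pulse_rate_hz * avg_returns / (speed_m_s * swath_m)
print(f"swath ≈ {swath_m:.0f} m, density ≈ {density:.0f} pts/m²")
```

With these assumed values the estimate comfortably exceeds the 300 pts/m² target, which is the kind of pre-flight check such a formula supports.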
The processing stage begins with the classification of the 3D point cloud into two primary classes: ground and non-ground. The classified point cloud is then rasterized to create a digital terrain model (DTM). From the DTM, contours are generated to represent elevation changes. Concurrently, image-based 3D modeling techniques are applied to produce detailed 3D models and orthomosaics. These models are used for 3D comparisons and flood analysis, providing insights into terrain variations and potential flood zones. The final phase involves geovisualization, where the processed data are transformed into 3D web maps with thematic layers. These layers include topography, classes, elevation, density, and contours, offering a multi-faceted terrain view.
A VR application has also been developed, allowing for interactive experiences such as pop-ups and teleportation within the virtual environment. The completed visualizations and analyses are published online, making them accessible for further research and public viewing. This methodology proposes a detailed and accurate representation of the mapped area, combining advanced LiDAR technology with sophisticated data processing and visualization techniques.

2.1. Study Area

The Thessalian Plain, located in central Greece, is one of the country’s most significant agricultural regions, known for its fertile soil and extensive farming activities (Figure 2). The plain is bordered by the Pindos Mountain range to the west and the Aegean Sea to the east, with the Pineios River crossing the area, providing essential water resources for agriculture and local communities.
In the summer of 2023, the Thessalian Plain experienced an unprecedented flood due to extreme weather conditions exacerbated by climate change. Heavy and prolonged rainfall from severe thunderstorms led to the Pineios River and its tributaries overflowing their banks. The flood caused widespread devastation across the region, impacting urban and rural areas. The rainfall intensity during the event was recorded at levels significantly higher than the historical average. Meteorological data indicated that some areas received over 300 mm of rain from 3 September 2023 to 7 September 2023, overwhelming the existing drainage infrastructure [31]. The Pineios River had a dramatic rise in water levels, with its peak flow reaching approximately 1950 m3/s, leading to severe flooding across the Thessalian Plain. The flooding affected over 720 km2 of the Thessalian Plain, including major towns and villages, leading to extensive damage to homes, infrastructure, and agricultural lands [32]. The rapid accumulation of water led to flash floods, particularly in low-lying areas of the plain. The Pineios River, already characterized by its meandering nature and dynamic fluvial processes, saw an unprecedented rise in water levels. The geomorphological features of the river, including its wide floodplain and alluvial deposits, contributed to the rapid spread of floodwaters [33].
The flood severely damaged the local economy, particularly in the agricultural sector. Thousands of hectares of crops, including cotton, corn, and wheat, were destroyed. Livestock losses were also substantial, with many animals drowning or perishing due to a lack of access to dry land and food [34]. The floodwaters inundated homes, businesses, and public infrastructure, leading to an estimated economic loss of several hundred million euros [34]. Environmentally, the floodwaters carried pollutants and sediments, contaminating water supplies and leading to soil erosion. The ecosystem disruption also affected local wildlife, with many species losing their habitats due to the extensive flooding. The sediment transport and deposition processes inherent to the river’s geomorphology played a significant role in redistributing nutrients and contaminants across the floodplain.
An estimated 28,000 people were affected in the impacted areas [35]. The Greek government, along with international aid organizations, provided immediate relief in the form of food, water, medical supplies, and temporary shelter. Long-term recovery efforts focused on rebuilding infrastructure, restoring agricultural productivity, and improving flood defenses.

2.2. Fieldwork and Equipment

The DJI Matrice 300 (M300) is a versatile and robust aerial platform for various applications. Equipped with advanced sensors, up to 55 min of flight time, and various safety features, the M300 sets a high standard for unmanned aerial vehicle (UAV) performance. In this study, the drone was coupled with the Zenmuse L1 LiDAR sensor, turning the M300 into a powerful tool for high-precision aerial surveying and mapping. The Zenmuse L1 integrates a Livox LiDAR module, a high-accuracy IMU, and a camera with a 1-inch CMOS sensor, providing real-time 3D data with centimeter-level accuracy. The L1 also features an integrated RGB camera with a 20-megapixel resolution and a 24 mm focal length lens.
This combination allows for efficient data collection in complex environments, making it ideal for topographic mapping, forestry management, and infrastructure inspection applications. Integrating the L1 sensor with the M300 UAV ensures seamless data capture and processing, enhancing the accuracy and efficiency of geospatial data acquisition.

2.3. Data Acquisition

This paper focuses on the embankments surrounding the tributaries of the Pineios River in the Thessalian Plain region, specifically between the cities of Karditsa and Trikala in mainland Greece. In the summer of 2023, this area experienced significant damage as the region’s rivers flooded, destroying urban infrastructure and crops. The widespread impact of the flood across the entire Thessalian Plain posed significant challenges for the authorities in managing the event. The survey area encompassed four distinct regions within the Thessalian Plain (Figure 2). The initial mapping of the study area necessitated the development of seven different flight plans.
The data acquisition campaigns were conducted through a series of flights from 24 to 27 January 2024, covering approximately 36 km along linear infrastructures such as river channels and tributaries. To ensure efficient data acquisition, the flights were organized into four corridors. The distribution of the data acquisition flights is as follows: (i) 24 January 2024: 3 flights (Corridor A1); (ii) 25 January 2024: 8 flights (Corridors A1 and A2); (iii) 26 January 2024: 6 flights (Corridor A3); and (iv) 27 January 2024: 3 flights (Corridor A4). The flights were conducted between 08:00 and 13:00 each day. From 24 to 26 January, the weather conditions remained consistent, characterized by relative cloudiness, low humidity, wind speeds of 1–2 Bf, and temperatures ranging from 3 to 6 °C. The details of these flights are outlined in Table 1 below. Throughout all flight plans, the selected flight altitude was consistently maintained at 80 m above the takeoff point. This altitude was chosen to achieve the necessary point cloud density of greater than 350 pts/m2.
In Figure 3, a linear flight plan is presented. This study analyzes and presents the results of the flood analysis conducted in this area. All the flood impact geovisualizations pertain to this specific area.
Data acquisition using unmanned aerial vehicles (UAVs) and airborne LiDAR technology presents a highly efficient method for capturing detailed geographic information, especially in areas where access is physically challenging. The embankments surrounding the tributaries of the Pineios River in the Thessalian Plain region are long linear structures. They have heights ranging from approximately 2 to 5 m and lengths spanning from 7 kilometers to 12.2 kilometers across the entire study area. Challenges in this method often include navigating through difficult terrains such as dense forests, mountainous areas, or environmentally sensitive zones where ground access is restricted or impossible. Furthermore, weather conditions, regulations, and sensor calibration can affect UAV operations and data accuracy.

2.4. Processing

2.4.1. Point Cloud Classification

Data processing classified the points collected with airborne LiDAR into two classes: (i) ground and (ii) non-ground. This process, leveraging the Zenmuse L1 LiDAR sensor and point cloud classification techniques within ArcGIS Pro v3.0.0, involves several key steps to classify LiDAR point clouds accurately. Upon acquiring the LiDAR point cloud data, the initial preprocessing phase focuses on enhancing data quality. Noise reduction techniques are applied to filter out erroneous points, ensuring a cleaner dataset. Additionally, normalization processes standardize the data, facilitating consistent analysis and interpretation.
Feature extraction is a critical stage where relevant attributes are derived from the point cloud. These features, including elevation, intensity, and point density, provide valuable information for distinguishing ground surfaces from vegetation. By capturing the geometric and spatial characteristics of points, feature extraction lays the foundation for accurate classification.
The subsequent step involves creating a labeled dataset for training the deep learning model. Manual annotation of a point cloud subset assigns labels such as ‘ground’ and ‘vegetation’ to points, enabling supervised learning (Figure 4). This annotated dataset serves as the basis for training a convolutional neural network (CNN) within ArcGIS Pro.
The CNN architecture is designed to learn and classify points effectively based on their extracted features. The model learns to differentiate between ground and vegetation points through iterative training using the labeled dataset. Data augmentation and regularization are employed to enhance model performance and prevent overfitting. Integration of the trained CNN into ArcGIS Pro facilitates seamless application to the entire point cloud dataset. The software processes each point, classifying it as either ground or vegetation based on learned patterns. Post-processing steps involve validation and refinement to ensure the accuracy and reliability of the classification results. Carried out with deep learning in ArcGIS Pro, the classification of the L1 point cloud into ground and vegetation thus follows a systematic and rigorous methodology. Leveraging advanced computational techniques and geospatial tools, this approach enables precise analysis and interpretation of LiDAR data for various environmental and urban applications.
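To illustrate the end product of this classification step, the following minimal Python sketch splits an already classified LAS file into ground and non-ground points using the standard ASPRS ground class code (2). The file name is a placeholder, and the snippet reads the classification result; it is not the ArcGIS Pro deep-learning workflow itself.

```python
import laspy
import numpy as np

# Split a classified LAS file into ground / non-ground points.
# Class code 2 is the standard ASPRS "ground" code; the file name
# is a placeholder.
las = laspy.read("corridor_a3.las")
classification = np.asarray(las.classification)
ground_mask = classification == 2

xyz = np.vstack([las.x, las.y, las.z]).T   # N x 3 coordinate array
ground_xyz = xyz[ground_mask]
non_ground_xyz = xyz[~ground_mask]

print(f"ground: {ground_mask.sum():,} pts, "
      f"non-ground: {(~ground_mask).sum():,} pts")
```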

2.4.2. Image-Based 3D Modeling

According to the methodology, the next stage involved data processing to generate a 3D point cloud, 3D models, and high-resolution orthomosaics of the river channels. Photogrammetric and image-based 3D modeling methods were employed to produce cartographic outcomes. The image-based 3D modeling processing steps were consistent across all datasets for each recording date.
Initially, an expert photo interpreter conducted the Very High-Resolution Image (VHRI) quality control visually and subsequently applied the Image Quality Index (IQI) algorithm. Images exhibiting blurriness, shaking, overexposure, or containing portions of the horizon were excluded through visual inspection. VHRIs with IQI values outside the 0.5–1 range were rejected from the subsequent processing steps.
The VHRIs deemed suitable for photogrammetric processing were imported into Agisoft Metashape v1.7.4 software, where image alignment was performed [36]. The alignment process employed Structure-from-Motion (SfM) algorithms, including Scale Invariant Feature Transform (SIFT) and Random Sample Consensus (RANSAC). This process resulted in a sparse point cloud, which was then densified using the Multi-View Stereo (MVS) algorithm. The resulting dense 3D point cloud served as the foundation for creating a 3D mesh. Spatial interpolation connected the points into a Triangulated Irregular Network (TIN), forming a single 3D mesh. This mesh was then textured with photorealistic textures, producing a textured 3D model.
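The feature-matching stage underlying this alignment can be illustrated with OpenCV, which exposes SIFT and RANSAC directly. The sketch below matches two overlapping images and rejects outlier correspondences with RANSAC on the fundamental matrix; the image names are placeholders, and Metashape’s internal implementation may differ.

```python
import cv2
import numpy as np

# Match two overlapping survey images with SIFT, then reject outliers
# with RANSAC on the fundamental matrix (image names are placeholders).
img1 = cv2.imread("vhri_0001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("vhri_0002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(inlier_mask.sum())} geometrically consistent matches "
      f"of {len(good)} candidates")
```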
Subsequently, a Digital Surface Model (DSM) was generated to describe the area’s elevation, followed by orthorectification of the image pixels to create the orthomosaic. This processing was applied to the data collected across eight recording dates for each of the three different scales. The outcomes of the image-based 3D modeling processing utilized for developing the virtual reality (VR) application included 3D models and orthophoto maps.

2.4.3. Flood Analysis

Flood impact analysis is a critical process in mitigating and managing flood risks. Using ArcGIS Pro, this analysis can be conducted effectively with high-resolution digital terrain models (DTMs). This section outlines the process of flood impact analysis using a DTM with a spatial resolution of 10 cm, with elevations ranging from a minimum of 121 m to a maximum of 133 m.
First, the high-resolution DTM is imported into ArcGIS Pro. The 10 cm spatial resolution allows for detailed surface modeling, which is crucial for accurately mapping flood extents and depths. The height range of 121 m to 133 m will be used to identify areas at different flood risks. The next step is preprocessing the DTM. This includes filtering out noise and ensuring the data accurately represent the terrain surface. Hydrological tools within ArcGIS Pro, such as “Fill” and “Flow Direction”, are then utilized to prepare the DTM for flood simulation. The “Fill” tool corrects small imperfections, such as depressions or pits, that could impact flow accumulation modeling. After preprocessing, flood scenario parameters are defined. These parameters include rainfall intensity, duration, and potential water levels. Using the “Raster Calculator”, potential flood extents are modeled based on these parameters. This tool calculates water depth by subtracting the DTM from the expected water surface elevation during a flood event (Figure 5).
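The water-depth calculation described above amounts to a single raster subtraction. A minimal Python equivalent of the Raster Calculator step is sketched below; the file paths and the 122 m base water level are illustrative assumptions.

```python
import numpy as np
import rasterio

# Water depth = water surface elevation (WSE) minus terrain elevation,
# masked to inundated cells. Paths and the 122 m base level are
# illustrative assumptions.
with rasterio.open("dtm_10cm.tif") as src:
    dtm = src.read(1).astype("float32")
    profile = src.profile

wse = 122.0 + 1.0               # base river level + 1 m flood scenario
depth = np.where(wse > dtm, wse - dtm, np.nan)   # NaN marks dry cells

profile.update(dtype="float32", nodata=np.nan)
with rasterio.open("flood_depth_1m.tif", "w", **profile) as dst:
    dst.write(depth.astype("float32"), 1)
```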
The results are then analyzed to identify critical areas that are most vulnerable to flooding. This includes evaluating the spatial distribution of flood depths and assessing the impact on the flooded area’s anthropogenic infrastructure, livestock, and ecological capital. The high-resolution data ensure that even small-scale features are accounted for, providing a comprehensive understanding of flood risks.

3. Results

3.1. 2D and 3D Results

The 2D and 3D derivatives generated from the study area using the proposed methodology are as follows: (i) 3D dense point cloud, (ii) textured 3D model, (iii) orthomosaic, (iv) digital terrain model (DTM), (v) Digital Surface Model (DSM), and (vi) contour lines (.shp files, 1 m interval). All 2D and 3D cartographic products were in the same coordinate system, WGS84.
For every point cloud created, ground and non-ground classification was performed. Furthermore, every 3D point cloud includes (i) intensity values, (ii) return values, (iii) reflectance, (iv) height, (v) RGB, and (vi) classification. The merged 3D point cloud was exported in two reference systems, WGS 84 and UTM 34N, while all the individual 3D point clouds were exported in the UTM 34N reference system. Table 2 presents the spatial derivative results.
Figure 6a,b present the 3D point cloud representation (.las) of riverbank areas along the Pineios River. Using the Zenmuse L1 sensor, the initial step involved cleaning the point cloud to remove extraneous terrain and outliers while preserving the targeted area. The LiDAR sensor used to acquire point clouds has a vertical resolution of approximately 3 cm. The DSM for the study area, derived from high-resolution image processing, spans the entire mapped area at a 20 cm spatial resolution. The lowest recorded elevation, 122 m, lies at the riverbank base where the water level resides, while the highest elevation is 153 m. Figure 6c,d display typical examples from the DSM at the same 20 cm spatial resolution. The DTM, generated from analyzing the classified 3D dense point cloud, covers the entire mapped Area 3 with a spatial resolution of 20 cm. At 121.75 m, the lowest elevation in the area lies at the base of the embankments, while the highest elevation recorded is 137.08 m. Figure 6e,f provide illustrative instances from the DTM, displaying the diversity of elevations captured in the dataset.
The orthomosaic created provides a detailed view of the entirety of Area 2, measuring a total length of 7080 m and a width of 100 m. The raster file has a spatial resolution of 5 cm/pixel, comprises 120,711 × 48,073 pixels, and covers an area of 1.01 km2. Figure 6g,h present distinctive examples from various sections of the studied area, demonstrating parts of the orthomosaic and its detail. The contour lines were created at a 1 m interval from the DTM and underwent additional refinement through a smoothing process applied at 3 m intervals. Thorough quality control was applied to eliminate inaccuracies in the lines. These contours describe elevations spanning from 122 m to 133 m throughout the whole area. Figure 6i,j present illustrative images of the contour lines designed for the studied area.

3.2. Flood Impact Analysis Results

The results of the flood impact analysis in the Thessalian Plain underscore a gradual yet significant increase in flooding severity as water levels rise along the river canal. This detailed analysis reveals that even a minor increase of 1 m in water level results in noticeable changes in flood dynamics, signaling the onset of significant impacts on the area (Figure 7a). During this initial stage, low-lying areas near the riverbanks begin to experience flooding, although this flooding occurs on a limited scale, affecting approximately 12 hectares of land. This early flooding indicates the potential for more severe impacts if water levels continue to rise, emphasizing the importance of close monitoring and early intervention.
As the water level rises to 2 m, the impact of flooding becomes more pronounced. At this stage, the flood-affected area expands to about 35 hectares. Certain parts of the area begin to be covered by floodwaters more extensively (Figure 7b). Notably, roads crossing the river are among the first infrastructure elements to be affected, leading to disruptions in vital transportation links. With each subsequent meter increase in water level, the extent of the flooding becomes more severe. When the water level reaches 3 m, floodwaters encroach further inland, affecting a broader landscape area, now covering 42 hectares (Figure 7c). Relative to the 2 m scenario, the flooded area grows by 7 ha, and the impact at this stage is particularly significant. Small hills along the riverbanks become submerged. An important observation from this scenario is the asymmetric expansion of floodwaters. The lower elevation on the eastern side of the river results in a more extensive inundation compared to the higher elevation on the western side. This differential impact highlights the critical role of topography in shaping flood dynamics, with lower elevation areas being more vulnerable to flooding. This topographical influence is particularly notable in urban areas on the eastern side of the river, where substantial rises in the water surface level can pose significant risks to infrastructure and human lives. From the results of this scenario, it is evident that the flooded area has expanded to the road crossing the river, creating a barrier to transportation and isolating residents. The urban area’s vulnerability underscores the need for targeted flood protection measures, such as levees and floodwalls, and the importance of integrating flood risk assessments into urban planning and development processes.
As water levels rise to 4 m, the flood-affected area expands, covering 44 hectares (Figure 7d). The flood extent at this stage remains relatively stable compared to the previous level. However, further rises to 5 and 6 m result in an even greater flood coverage. At 5 m (Figure 7e), the covered area remains at 44 hectares, but at 6 m, the floodwaters expand further to cover 56 hectares (Figure 7f). These advanced stages of flooding pose substantial risks to communities and infrastructure, particularly in the eastern, low-lying regions. The increased flood extent towards the east exacerbates the vulnerability of these areas, underscoring the importance of comprehensive flood management strategies. Table 3 further illustrates the impact of rising water levels on the covered area.
Table 3 depicts the progressive increase in flood-affected areas corresponding to each meter rise in water levels. Notably, the transition from 1 m to 2 m sees a significant jump in the affected area, emphasizing the non-linear nature of flood expansion and its escalating impact on the region.
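Given per-scenario depth rasters like the one sketched in Section 2.4.3, the areas reported in Table 3 reduce to counting inundated cells. A sketch, assuming 10 cm cells and placeholder file names:

```python
import numpy as np
import rasterio

CELL_AREA_M2 = 0.10 * 0.10      # 10 cm x 10 cm DTM cells

def flooded_hectares(depth: np.ndarray) -> float:
    """Count inundated (non-NaN) cells and convert to hectares."""
    return np.count_nonzero(~np.isnan(depth)) * CELL_AREA_M2 / 10_000.0

for rise in range(1, 7):        # the six scenarios, +1 m to +6 m
    with rasterio.open(f"flood_depth_{rise}m.tif") as src:  # placeholders
        print(f"+{rise} m: {flooded_hectares(src.read(1)):.0f} ha")
```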

4. Geovisualization

4.1. 3D Mesh vs. 3D Point Cloud

For the 3D geovisualization of the results, both 2D and 3D outcomes were utilized. The 3D cartographic derivatives emerged from processing 3D point clouds and 3D mesh models. The cloud-to-mesh (C2M) distance method in CloudCompare software v2.7.0 was used to compare the 3D point clouds and 3D mesh models. This method calculates the distance from each point in the cloud to the nearest surface of the mesh, effectively quantifying the difference between the two representations and highlighting areas where the mesh deviates from the true geometry captured by the point cloud. More specifically, the method was applied with a maximum distance of 1 m between the surfaces of the 3D model and the points of the 3D point cloud. Figure 8 presents the results of the C2M distance method. Large distances (1 m) are shown in red and are observed in the upper parts of the trees. This is because vegetation, especially leaves and tree branches, does not form detailed geometries in a mesh.
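Conceptually, this comparison reduces to a nearest-neighbour search. The sketch below approximates the C2M result by querying a k-d tree built from points densely sampled on the mesh surface; the inputs are placeholder N×3 arrays, and CloudCompare’s exact point-to-triangle computation is more precise.

```python
import numpy as np
from scipy.spatial import cKDTree

# Nearest-neighbour approximation of the C2M check: distances from each
# LiDAR point to points densely sampled on the mesh surface (placeholder
# N x 3 arrays; CloudCompare's exact point-to-triangle distance differs).
cloud = np.load("lidar_points.npy")
mesh_samples = np.load("mesh_surface_samples.npy")

dist, _ = cKDTree(mesh_samples).query(cloud, k=1)
print(f"max deviation: {dist.max():.2f} m; "
      f"{100 * np.mean(dist >= 1.0):.1f}% of points deviate by ≥ 1 m")
```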
A 3D mesh model consists of vertices, edges, and faces, forming a continuous surface representing the terrain and vegetation. Although 3D mesh models have been mainly used for VR geovisualization in recent years, they have some limitations [37]. Creating a mesh model from point cloud data can introduce artifacts, especially in areas with dense vegetation. This simplification leads to a loss of geometric accuracy, as the mesh surfaces do not accurately represent the true geometry of the area. This makes the 3D mesh model unsuitable for accurately visualizing the vegetation structure in the Thessalian Plain.
In contrast, a 3D point cloud captures the terrain as a collection of numerous discrete points with precise X, Y, and Z coordinates. This method offers unparalleled geometric accuracy, which is crucial for representing detailed vegetation structures and complex terrain features. Additionally, each point in a 3D point cloud can carry valuable additional information such as the number of returns, intensity value, and class. This information enhances the analysis and visualization capabilities, providing a richer and more detailed dataset. Point clouds allow for a more realistic and accurate visualization, which is particularly important for VR geovisualizations.
Given the goal of achieving the most detailed and accurate VR geovisualization of the river channel in the Thessalian Plain, the decision to continue with the 3D point cloud, supported by the cloud-to-mesh distance analysis performed in CloudCompare, was motivated by its superior geometric precision and the richness of additional data it provides. The point cloud’s high level of detail and supplementary information ensures an immersive and realistic experience, which is crucial for comprehensive VR geovisualization tasks.

4.2. 3D Mapping

The geovisualization of the results consists of two primary stages: (a) 3D web mapping and (b) the development of the VR application. During the first stage of 3D web mapping, the visual variables of the results in the 3D space were investigated. Specifically, the cartographic results used for creating the geovisualizations included (i) classified 3D point clouds, (ii) DTM, (iii) WSE (water surface elevation), (iv) contour lines, (v) roads, (vi) overflow points, (vii) water level points, and (viii) hazardous areas. Initially, all results were shared on the web through the ArcGIS Online application. Subsequently, the results were transferred to a 3D web scene where their symbology could be customized. The features used to develop the geovisualizations comprise different data types: (a) vector, (b) raster, and (c) point clouds. Each requires a different visualization approach, as these data types do not share common visual variables.
For the 3D point cloud, the size of the points in relation to the cartographic scale and the thematic information of the points were studied. In a 3D scene environment, the point size affects the extent to which the recorded information is referenced; as the point size increases, there is a risk of overlap. The point sizes examined ranged from 1 to 6 pixels (Figure 9). A size of 3 pixels per point was selected to avoid information loss and overlap (Figure 9c). The thematic information visualized at the 3D point cloud level included the RGB value, elevation, and classification of the points. Color was chosen as the visual variable to depict these thematic layers successfully. As shown in Figure 9, points with higher elevations are displayed in bright red, while points with lower elevations are shown in blue. Two different approaches were used to visualize the classification of points. The first involved removing points classified as vegetation, leaving only ground class points and allowing the user to observe the area without riparian vegetation. The second provided the ability to observe points belonging to both classes, differentiating them by color.
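The elevation colouring described here is a simple normalization of heights onto a colour ramp. A sketch follows, assuming a matplotlib colormap as a stand-in for the web scene’s blue-to-red ramp (the scene’s exact ramp is not specified, and the input array is a placeholder):

```python
import numpy as np
from matplotlib import colormaps

# Normalize elevations to [0, 1] and map them onto a blue-to-red ramp
# (placeholder input; the web scene's exact color ramp is not specified).
z = np.load("points_z.npy")
norm = (z - z.min()) / (z.max() - z.min())
rgb = colormaps["turbo"](norm)[:, :3]   # N x 3 RGB values for rendering
```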
The 3D visualization of raster data was primarily accomplished by utilizing their elevation information for extrusion. Additionally, for the DTM, a color gradient was applied to the elevation at 1 m intervals. At the same time, a specially designed transparent texture was assigned to the WSE for river water surfaces, with a wave-like animation property to enhance photorealism (Figure 10).
Although initially created in 2D form, vector data subsequently acquired 3D properties through the 3D scene. Specifically, linear elements such as contour lines were adapted to the DTM, and elevation information was obtained based on their value. The 3D visualization facilitated easy distinction of the rise in water surface levels relative to the area’s relief, providing quantitative elevation information. Furthermore, the thickness of the contour lines was set to 10 cm to ensure they were discernible in the 3D space.

4.3. VR Web Application

The data processing was conducted using ESRI’s ArcGIS Pro software. The processed results were then uploaded to ArcGIS Online, where the 3D web scene was used for the development of the VR application. The thematic information layers, visualized in 2D and 3D dimensions, were also uploaded to the same environment. Finally, the application was adapted to a VR environment using ESRI’s online plugin VR 360, which enables the distribution of the application over the internet.
The study area covered in this work spans a geographical extent of 8 km, necessitating its visualization at two different viewing levels. For the development of VR geovisualizations, two scenarios were created: a fly-mode (or drone-mode) scenario and a pedestrian-mode scenario. Each scenario serves a different purpose and addresses different needs.
Fly-mode scenario: The first viewing level caters to the need for an overall observation and identification of high-risk overflow points, allowing users to observe the entire mapped area (Figure 10). In this approach, the user is placed in two flight positions, (i) at 50 m and (ii) at 500 m above ground, denoting a 1:50 and 1:500 cartographic scale, respectively. The scale is calculated based on the camera parameters used in this study. Furthermore, when observing a flooded area or flood scenarios, viewing the affected area at a specific height is crucial to avoid physical obstacles and barriers. This approach provides greater freedom when navigating the scene, allowing for a comprehensive understanding of the flood results.
Additionally, to facilitate the identification of high-risk overflow points, indicator markers (red flags) were added to affected areas or parts of the study area that were vulnerable to the flood. These are anthropogenic infrastructures, livestock, and ecological capital near the riverbanks, such as roads crossing the river, bridges, settlements in proximity, and agricultural or livestock facilities. This drone-level view is crucial for a comprehensive understanding of the region and strategic planning.
Pedestrian-mode scenario: The second viewing level (pedestrian mode) places the user at street level (1:1 scale) and allows them to explore the area up close, observing the condition of the levees. The viewing positions were selected based on their high-risk status due to river overflow. These points were numbered sequentially from east to west. Users of the application can move from one viewing point to the next or select their desired point from the bookmark bar integrated into the application using the VR headset controllers (Figure 11).
Users can change the thematic information they wish to observe at specific viewing points, such as the classification of point cloud points. Additionally, users can choose to observe the river’s water level rise incrementally by 1 m and see how this increase affects the area. The application was developed using ArcGIS Online and shared on the web to ensure easy accessibility. More specifically, the application can be used from any mobile device, while VR viewing is supported by Valve Index, HTC Vive, and Meta Quest VR headsets.

5. Discussion

VR geovisualizations using 3D point clouds obtained from airborne LiDAR data have offered substantial insights into the condition of river channels and tributaries within the Thessalian Plain following severe flooding in the summer of 2023. This study primarily aimed to utilize advanced data acquisition and visualization geospatial technologies to efficiently illustrate, enhance understanding, and facilitate the management of the impacts of various flooding scenarios in the study area.
One of the main challenges was acquiring high-resolution data in a flood-affected area. Accessing certain regions was difficult due to water damage and debris. However, drone-based LiDAR technology helped overcome some of these obstacles, enabling precise data acquisition from the air. The data processing phase presented additional challenges, especially in classifying point clouds and generating detailed thematic information. The methodology included several stages, beginning with flight plans for comprehensive 3D mapping using airborne LiDAR. The subsequent data processing focused on reducing noise and normalizing the data to ensure high-quality datasets. Classifying point clouds into ground and non-ground points using deep learning techniques was critical. This improved the accuracy of terrain models and facilitated the extraction of essential features such as elevation and point density, which are crucial for flood risk assessment and management. The authors used pre-existing deep learning tools to classify the LiDAR point cloud within ArcGIS Pro. Using airborne LiDAR and deep learning techniques in ArcGIS Pro offers significant advantages over traditional methods in point cloud classification accuracy and efficiency. Conventional methods for point cloud classification often involve manual techniques and basic algorithms such as thresholding, clustering (e.g., k-means), and rule-based methods [38,39,40]. These methods are being increasingly supplemented or replaced by advanced techniques like deep learning models in software such as ArcGIS Pro, which provides high precision by accurately differentiating ground and non-ground points and analyzing features like elevation, intensity, and point density [41].
The use of spatial data and their VR geovisualization implementation can vary. Rydvanskiy and Hedley [42] use ArcGIS CityEngine but then transfer the data to Unity, resulting in a loss of geographic position and coordinates in VR space. This has been a recurring issue with VR visualizations of geographic data. Additionally, the authors have already tested [20,21] approaches that use spatial data embedded in the VR space, but the final application does not have geographic coordinates.
Other studies [18,43,44] use scaled 3D geovisualizations in the virtual space by depicting 3D models on a virtual table. In the proposed approach, the users are embedded into the virtual space to scales of 1:1 and 1:50, allowing for their immersion into the digital twin of the study area to be enhanced by thematic information of flooding scenarios.
In VR, it is common to adjust the complexity or level of detail (LoD) of a 3D model based on its distance from the viewer or its importance in the scene. This is crucial because it helps optimize performance by reducing the number of polygons rendered for objects that are far away or less important, while maintaining high detail for objects closer or more critical to the user’s experience. When a DTM or DSM needs to be presented, highly detailed earth models are simplified, ensuring that the user experience remains immersive without overloading the system; a generalized approach to the 3D models is used [45,46,47]. By integrating a LiDAR point cloud into the virtual space to maintain a higher LoD at any scale, the proposed approach ensures a more realistic representation of spatial information without losing detail from the 3D visualization of the Earth’s surface.
Developing the VR application significantly enhanced data usability and accessibility by transforming complex geospatial datasets (LiDAR point clouds) into interactive, immersive visualizations. Users can intuitively explore and manipulate 3D point clouds and models, gaining a better understanding of spatial relationships and flood extents in various scenarios. High-resolution DTMs provide detailed and accurate flood risk analysis. The 3D virtual space makes advanced geospatial tools accessible to non-experts, and collaborative features allow multiple users to work together, enhancing the spatial knowledge of the flood phenomenon and thus facilitating planning and decision-making processes. This capability is particularly useful for planning restoration processes and designing mitigation measures, providing a detailed and realistic representation of current conditions.
However, implementing the VR application had limitations. High computational power and specialized VR equipment were needed, restricting its widespread use. Additionally, the accuracy of VR geovisualizations depends heavily on the quality of the initial LiDAR data and the effectiveness of processing algorithms. Inaccuracies in these stages can lead to misrepresentations in the virtual environment. By adopting specific strategies, the authors aim to address potential inaccuracies and enhance the reliability of VR geovisualizations for flood modeling and other geospatial applications. More specifically, to reduce potential inaccuracies in the VR geovisualizations due to the quality of initial LiDAR data or processing algorithms, the authors proposed the following strategies: (1) Data Validation and Calibration: Implementing rigorous validation and calibration processes to ensure the accuracy of the initial LiDAR data before its integration into the VR environment. This includes cross-referencing LiDAR data with other high-precision in-situ measurements. (2) Error Analysis and Correction: Conducting detailed error analysis to identify and correct inaccuracies in the generated DTMs. This may include manual corrections in areas where automated processes fail to deliver the desired accuracy. Concerning VR geovisualizations, the authors propose using user opinions and an iterative improvement approach. This involves incorporating feedback from users interacting with the VR geovisualizations to identify and address any inaccuracies. An iterative development approach allows for continuous improvements based on real-world usage and expert input.
Integrating drone LiDAR data with VR geovisualizations offers numerous benefits for flood impact analysis and management but also presents several challenges. Despite the challenges, drone LiDAR data and thematic information in a VR virtual space can create a valuable tool for managing natural disasters and planning future flood events. This can involve depicting flood scenarios and their impact on infrastructure, livestock, and natural resources.
Compared to traditional 2D maps and static 3D models, VR provides a more engaging and effective tool for flood risk management and urban planning. The combination of drone LiDAR and VR geovisualizations offers significant advantages for flood management and risk assessment. It enables precise and detailed mapping through advanced LiDAR technology, which is crucial for accurate flood modeling and risk assessment. This detailed mapping improves the identification of flood-prone areas and infrastructure planning to minimize flood impacts. VR integration allows for immersive visualizations, enhancing stakeholder engagement and decision-making. Stakeholders can interact with 3D flood scenarios in real time, improving their understanding and response strategies. This integrated approach supports the development of more effective flood mitigation and emergency response plans.

6. Conclusions

This study presents a significant advancement in flood risk management and disaster mitigation by integrating airborne LiDAR data with VR geovisualizations, focusing on the Thessalian Plain in Greece, which suffered extensive damage due to severe flooding in the summer of 2023. The methodological approach involved high-resolution 3D mapping of river channels and tributaries, utilizing airborne LiDAR technology for precise data collection and VR for immersive geospatial visualization. Airborne LiDAR facilitated the collection of detailed 3D point clouds, enabling an accurate representation of the terrain and vegetation, which proved invaluable in creating digital elevation and terrain models (DEM-DTM) and other 3D derivatives necessary for flood modeling and risk assessment. By integrating these datasets into a VR platform, this study provided an innovative visualization approach for stakeholders to interact with and understand complex flood scenarios and their effects more intuitively.
Key findings from this study include the efficiency and precision of airborne LiDAR in data acquisition, even in challenging and flood-affected areas, and the effectiveness of advanced classification techniques, including deep learning algorithms, in ensuring accurate differentiation between ground and non-ground points. The VR application offered an immersive experience, allowing users to visualize flood-impacted areas dynamically, observe potential future flood scenarios, and assess the impact on infrastructure and communities, which is crucial for planning restoration processes and designing mitigation measures.
The methodology involved meticulous LiDAR flight planning with the DJI Matrice 300 drone equipped with the Zenmuse L1 LiDAR sensor, achieving a high point density exceeding 300 points per square meter. Data processing included noise reduction, normalization, and classification into ground and non-ground points using deep learning techniques. This resulted in high-quality 3D models and orthomosaics used for flood analysis and VR development. The VR application allowed users to explore the affected areas in both fly and pedestrian modes, observing the impact of incremental water level rises and accessing thematic information interactively.
Integrating airborne LiDAR data with VR geovisualizations represents a promising approach to flood risk management, providing detailed, accurate, and interactive visualizations and tools for disaster preparedness and response. Future work should focus on improving data collection methods in challenging terrains, enhancing the accuracy of point cloud classification, and adding more thematic information into VR, creating a new cartographic approach. Furthermore, the methodology described in this study faces several challenges and limitations in practical flood management and planning applications. The technology demands specialized knowledge and training for stakeholders, who must continually update their skills to utilize VR visualizations effectively. There is potential to enhance and standardize the data visualized in a VR environment, particularly their spatial resolution and accuracy relative to cartographic scale, which necessitates further research. Developing a data acquisition protocol would help standardize the UAV LiDAR data used in this cartographic approach to VR geovisualizations.
This study contributes to cartography and geospatial analysis by demonstrating the practical application of advanced technologies in managing natural disasters and planning future flood events. The findings underscore the importance of high-resolution data and innovative visualization tools in enhancing our understanding of environmental challenges, ultimately contributing to more resilient communities and better-informed decision-making processes in the face of natural disasters.
The proposed methodology is versatile and can be applied to various urban applications beyond flood management, aiding in urban planning, environmental monitoring, disaster management, infrastructure inspection, and agricultural management. This approach’s high-resolution 3D mapping and advanced analytical capabilities make it a powerful tool for multiple applications.

Author Contributions

Conceptualization, E.E.P. and A.P.; methodology, E.E.P. and A.P.; software, E.E.P.; writing—original draft preparation, E.E.P. and A.P.; writing—review and editing, E.E.P. and A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the ORIENTATE project, grant number EX200163.

Data Availability Statement

The authors thank HVA International BV for generously providing the license to use the LiDAR data for this publication.

Acknowledgments

The authors acknowledge the ORIENTATE project (DrOne sensors and immeRsive tEchNologies towards the Thematic MApping of SpaTiotemporal PhEnomena) and CUT internal funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. White, J.C.; Stepper, C.; Tompalski, P.; Coops, N.C.; Wulder, M.A. Comparing ALS and Image-Based Point Cloud Metrics and Modelled Forest Inventory Attributes in a Complex Coastal Forest Environment. Forests 2015, 6, 3704–3732. [Google Scholar] [CrossRef]
  2. Romanoni, A.; Fiorenti, D.; Matteucci, M. Mesh-Based 3D Textured Urban Mapping. arXiv 2017, arXiv:1708.05543. [Google Scholar]
  3. Javanmardi, M.; Javanmardi, E.; Gu, Y.; Kamijo, S. Towards High-Definition 3D Urban Mapping: Road Feature-Based Registration of Mobile Mapping Systems and Aerial Imagery. Remote Sens. 2017, 9, 975. [Google Scholar] [CrossRef]
  4. Mohammadzadeh, A.; Valadan Zoej, M.J. A State of Art on Airborne Lidar Application in Hydrology and Oceanography: A Comprehensive Overview. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 315–317. [Google Scholar]
  5. Wedajo, G.K. LiDAR DEM Data for Flood Mapping and Assessment; Opportunities and Challenges: A Review. J. Remote Sens. GIS 2017, 6, 2015–2018. [Google Scholar] [CrossRef]
  6. Vassilaki, D.I.; Stamos, A.A. TanDEM-X DEM: Comparative Performance Review Employing LIDAR Data and DSMs. ISPRS J. Photogramm. Remote Sens. 2020, 160, 33–50. [Google Scholar] [CrossRef]
  7. Štular, B.; Lozić, E.; Eichert, S. Airborne LiDAR-Derived Digital Elevation Model for Archaeology. Remote Sens. 2021, 13, 1855. [Google Scholar] [CrossRef]
  8. Chen, Z.; Li, J.; Yang, B. A Strip Adjustment Method of Uav-Borne Lidar Point Cloud Based on Dem Features for Mountainous Area. Sensors 2021, 21, 2782. [Google Scholar] [CrossRef] [PubMed]
  9. Yoshida, K.; Pan, S.; Taniguchi, J.; Nishiyama, S.; Kojima, T.; Islam, M.T. Airborne LiDAR-Assisted Deep Learning Methodology for Riparian Land Cover Classification Using Aerial Photographs and Its Application for Flood Modelling. J. Hydroinform. 2022, 24, 179–201. [Google Scholar] [CrossRef]
  10. Münzinger, M.; Prechtel, N.; Behnisch, M. Mapping the Urban Forest in Detail: From LiDAR Point Clouds to 3D Tree Models. Urban For. Urban Green. 2022, 74, 127637. [Google Scholar] [CrossRef]
  11. He, Y.; Xu, G.; Kaufmann, H.; Wang, J.; Ma, H.; Liu, T. Integration of InSAR and LiDAR Technologies for a Detailed Urban Subsidence and Hazard Assessment in Shenzhen, China. Remote Sens. 2021, 13, 2366. [Google Scholar] [CrossRef]
  12. Abdelaziz, N.; El-Rabbany, A. Deep Learning-Aided Inertial/Visual/LiDAR Integration for GNSS-Challenging Environments. Sensors 2023, 23, 6019. [Google Scholar] [CrossRef]
  13. Ilci, V.; Toth, C. High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation. Sensors 2020, 20, 899. [Google Scholar] [CrossRef]
  14. Breen, M.J.; Kebede, A.S.; König, C.S. Assessing Coupled Human-Flood Interactions Using LiDAR Geostatistics and Neighbourhood Analyses. Geomat. Nat. Hazards Risk 2024, 15, 2361812. [Google Scholar] [CrossRef]
  15. Podhorányi, M.; Unucka, J.; Bobál’, P.; Říhová, V. Effects of LIDAR DEM Resolution in Hydrodynamic Modelling: Model Sensitivity for Cross-Sections. Int. J. Digit. Earth 2013, 6, 3–27. [Google Scholar] [CrossRef]
  16. Muhadi, N.A.; Abdullah, A.F.; Bejo, S.K.; Mahadi, M.R.; Mijic, A. The Use of LiDAR-Derived DEM in Flood Applications: A Review. Remote Sens. 2020, 12, 2308. [Google Scholar] [CrossRef]
  17. Wu, Y.; Peng, F.; Peng, Y.; Kong, X.; Liang, H.; Li, Q. Dynamic 3D Simulation of Flood Risk Based on the Integration of Spatio-Temporal GIS and Hydrodynamic Models. ISPRS Int. J. Geo-Inf. 2019, 8, 520. [Google Scholar] [CrossRef]
  18. Kalacska, M.; Arroyo-Mora, J.P.; Lucanus, O. Comparing Uas Lidar and Structure-from-Motion Photogrammetry for Peatland Mapping and Virtual Reality (Vr) Visualization. Drones 2021, 5, 36. [Google Scholar] [CrossRef]
  19. Yang, Y.; Jenny, B.; Dwyer, T.; Marriott, K.; Chen, H.; Cordeil, M. Maps and Globes in Virtual Reality. Comput. Graph. Forum 2018, 37, 427–438. [Google Scholar] [CrossRef]
  20. Lütjens, M.; Kersten, T.; Dorschel, B.; Tschirschwitz, F. Virtual Reality in Cartography: Immersive 3D Visualization of the Arctic Clyde Inlet (Canada) Using Digital Elevation Models and Bathymetric Data. Multimodal Technol. Interact. 2019, 3, 9. [Google Scholar] [CrossRef]
  21. Papadopoulou, E.-E.; Papakonstantinou, A.; Kapogianni, N.-A.; Zouros, N.; Soulakellis, N. VR Multiscale Geovisualization Based on UAS Multitemporal Data: The Case of Geological Monuments. Remote Sens. 2022, 14, 4259. [Google Scholar] [CrossRef]
  22. Virtanen, J.P.; Julin, A.; Handolin, H.; Rantanen, T.; Maksimainen, M.; Hyyppä, J.; Hyyppä, H. Interactive Geo-Information in Virtual Reality—Observations and Future Challenges. In Proceedings of the 3rd BIM/GIS Integration Workshop and 15th 3D GeoInfo Conference, London, UK, 7–11 September 2020; Volume 44, pp. 159–165. [Google Scholar] [CrossRef]
  23. Havenith, H.-B. 3D Landslide Models in VR. In Understanding and Reducing Landslide Disaster Risk: Volume 4 Testing, Modeling and Risk Assessment; Tiwari, B., Sassa, K., Bobrowsky, P.T., Takara, K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 195–204. ISBN 978-3-030-60706-7. [Google Scholar]
  24. Hruby, F.; Sánchez, L.F.Á.; Ressl, R.; Escobar-Briones, E.G. An Empirical Study on Spatial Presence in Immersive Geo-Environments. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 155–163. [Google Scholar] [CrossRef]
  25. Froehlich, M.; Azhar, S. Investigating Virtual Reality Headset Applications in Construction. In Proceedings of the 52nd ASC Annual International Conference, Provo, UT, USA, 13–16 April 2016; pp. 13–16. [Google Scholar]
  26. Du, J.; Zhu, Q.; Shi, Y.; Wang, Q.; Lin, Y.; Zhao, D. Cognition Digital Twins for Personalized Information Systems of Smart Cities: Proof of Concept. J. Manag. Eng. 2020, 36, 04019052. [Google Scholar] [CrossRef]
  27. Zhu, Y.; Li, N. Virtual and Augmented Reality Technologies for Emergency Management in the Built Environments: A State-of-the-Art Review. J. Saf. Sci. Resil. 2021, 2, 1–10. [Google Scholar] [CrossRef]
  28. Minucci, G.; Molinari, D.; Gemini, G.; Pezzoli, S. Enhancing Flood Risk Maps by a Participatory and Collaborative Design Process. Int. J. Disaster Risk Reduct. 2020, 50, 101747. [Google Scholar] [CrossRef]
  29. Sanders, B.F.; Schubert, J.E.; Goodrich, K.A.; Houston, D.; Feldman, D.L.; Basolo, V.; Luke, A.; Boudreau, D.; Karlin, B.; Cheung, W.; et al. Collaborative Modeling With Fine-Resolution Data Enhances Flood Awareness, Minimizes Differences in Flood Perception, and Produces Actionable Flood Maps. Earth’s Future 2020, 8, e2019EF001391. [Google Scholar] [CrossRef]
  30. Sermet, Y.; Demir, I. GeospatialVR: A Web-Based Virtual Reality Framework for Collaborative Environmental Simulations. Comput. Geosci. 2022, 159, 105010. [Google Scholar] [CrossRef]
31. He, K.; Yang, Q.; Shen, X.; Dimitriou, E.; Mentzafou, A.; Papadaki, C.; Stoumboudi, M.; Anagnostou, E.N. Brief Communication: Storm Daniel Flood Impact in Greece 2023: Mapping Crop and Livestock Exposure from SAR. Nat. Hazards Earth Syst. Sci. 2024, 24, 2375–2382. [Google Scholar] [CrossRef]
  32. Dimitriou, E.; Efstratiadis, A.; Zotou, I.; Papadopoulos, A.; Iliopoulou, T.; Sakki, G.K.; Mazi, K.; Rozos, E.; Koukouvinos, A.; Koussis, A.D.; et al. Post-Analysis of Daniel Extreme Flood Event in Thessaly, Central Greece: Practical Lessons and the Value of State-of-the-Art Water-Monitoring Networks. Water 2024, 16, 980. [Google Scholar] [CrossRef]
  33. Elhag, M.; Yilmaz, N. Insights of Remote Sensing Data to Surmount Rainfall/Runoff Data Limitations of the Downstream Catchment of Pineios River, Greece. Environ. Earth Sci. 2021, 80, 35. [Google Scholar] [CrossRef]
  34. Adamopoulos, I.; Frantzana, A.; Syrou, N. Climate Crises Associated with Epidemiological, Environmental, and Ecosystem Effects of a Storm: Flooding, Landslides, and Damage to Urban and Rural Areas (Extreme Weather Events of Storm Daniel in Thessaly, Greece). Med. Sci. Forum 2024, 25, 7. [Google Scholar] [CrossRef]
  35. HVA. First Report Regarding Post-Disaster Remediation of 2023 Thessaly Flooding. 2023; pp. 1–33. Available online: https://www.government.gov.gr/wp-content/uploads/2023/11/HVA-Fact-Finding-Mission-Report-on-Thessaly-Post-Disaster-Remediation.pdf (accessed on 20 June 2024).
36. Agisoft LLC. Agisoft Metashape Professional Edition, Version 1.7.4; Agisoft LLC: St. Petersburg, Russia, 2021. [Google Scholar]
  37. Hruby, F.; Ressl, R.; de la Borbolla del Valle, G. Geovisualization with Immersive Virtual Environments in Theory and Practice. Int. J. Digit. Earth 2019, 12, 123–136. [Google Scholar] [CrossRef]
  38. Diab, A.; Kashef, R.; Shaker, A. Deep Learning for LiDAR Point Cloud Classification in Remote Sensing. Sensors 2022, 22, 7868. [Google Scholar] [CrossRef] [PubMed]
  39. Yastikli, N.; Cetin, Z. Classification of Raw LiDAR Point Cloud Using Point-Based Methods with Spatial Features for 3D Building Reconstruction. Arab. J. Geosci. 2021, 14, 146. [Google Scholar] [CrossRef]
  40. Zhang, Z.; Zhang, L.; Tong, X.; Guo, B.; Zhang, L.; Xing, X. Discriminative-Dictionary-Learning-Based Multilevel Point-Cluster Features for ALS Point-Cloud Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7309–7322. [Google Scholar] [CrossRef]
41. Esri Inc. ArcGIS Pro, Version 3.0.0; Desktop GIS software; Esri Inc.: Redlands, CA, USA, 2022. [Google Scholar]
  42. Rydvanskiy, R.; Hedley, N. Mixed Reality Flood Visualizations: Reflections on Development and Usability of Current Systems. ISPRS Int. J. Geo-Inf. 2021, 10, 82. [Google Scholar] [CrossRef]
  43. Lochhead, I.; Hedley, N. Designing Virtual Spaces for Immersive Visual Analytics. KN—J. Cartogr. Geogr. Inf. 2021, 71, 223–240. [Google Scholar] [CrossRef]
  44. Dong, W.; Yang, T.; Liao, H.; Meng, L. How Does Map Use Differ in Virtual Reality and Desktop-Based Environments? Int. J. Digit. Earth 2020, 13, 1484–1503. [Google Scholar] [CrossRef]
  45. Haynes, P.; Hehl-Lange, S.; Lange, E. Mobile Augmented Reality for Flood Visualisation. Environ. Model. Softw. 2018, 109, 380–389. [Google Scholar] [CrossRef]
  46. Keil, J.; Edler, D.; Schmitt, T.; Dickmann, F. Creating Immersive Virtual Environments Based on Open Geospatial Data and Game Engines. KN—J. Cartogr. Geogr. Inf. 2021, 71, 53–65. [Google Scholar] [CrossRef]
  47. Ma, Y.; Wright, J.; Gopal, S.; Phillips, N. Seeing the Invisible: From Imagined to Virtual Urban Landscapes. Cities 2020, 98, 102559. [Google Scholar] [CrossRef]
Figure 1. The methodological workflow followed to conduct the study.
Figure 2. Study area. The blue color indicates the embankments enclosing the tributaries of the Pineios River between the cities of Karditsa and Trikala. The inset maps indicate the study area in red.
Figure 3. Linear flight plan for data acquisition of the study area.
Figure 4. Examples of ‘Ground’ and ‘No ground’ classes from the manually annotated subset of the point cloud are depicted in gray and brown colors, respectively.
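The ground/no-ground split shown in Figure 4 was produced from a manually annotated training subset. As a rough point of comparison, a morphological ground filter can produce a similar two-class separation automatically. The sketch below uses PDAL's SMRF filter with hypothetical file names and generic parameters; it is an illustrative alternative, not the classification procedure used in this study.

```python
# Illustrative ground/no-ground separation of an airborne LiDAR point cloud
# using PDAL's SMRF (Simple Morphological Filter). This is a generic sketch,
# not the classifier trained on the manually annotated subset described in
# the paper; the input/output file names are hypothetical.
import json
import pdal

pipeline_def = {
    "pipeline": [
        "embankments.las",              # hypothetical input .las file
        {
            "type": "filters.smrf",     # morphological ground filter
            "slope": 0.15,              # terrain slope tolerance
            "window": 18.0,             # maximum window size (m)
            "threshold": 0.5,           # elevation threshold (m)
        },
        {
            # SMRF writes ASPRS classification codes (2 = ground);
            # the writer preserves them in the output file.
            "type": "writers.las",
            "filename": "embankments_classified.las",
        },
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
n_points = pipeline.execute()           # runs the pipeline, returns point count
print(f"Classified {n_points} points")
```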
Figure 5. Three-dimensional visualization of the study area under a 3 m WSF flood scenario.
Figure 6. Visualizations of the 2D and 3D results of the study area: (a) a 3D point cloud of the wider area, (b) a 3D point cloud of a representative part, (c) a DSM of the wider area, (d) a DSM of a representative part, (e) a DTM of the wider area, (f) an orthomosaic of a representative part, (g) an orthomosaic of the wider area, (h) a DSM of a representative part, (i) contour lines of the wider area, and (j) contour lines overlaid on a DTM of a representative part of the study area.
Figure 7. Two-dimensional visualization of the flood scenarios at (a) 1 m, (b) 2 m, (c) 3 m, (d) 4 m, (e) 5 m, and (f) 6 m of flood height.
Figure 8. Cloud-to-mesh distance comparison. Distances greater than 1 m are depicted in red.
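A cloud-to-mesh comparison like the one in Figure 8 can be reproduced with standard open-source tooling. The following minimal sketch uses Open3D's raycasting scene to compute unsigned point-to-mesh distances and flag those above the 1 m threshold; the file names are hypothetical, and the exact software used to produce the figure is not assumed here.

```python
# Minimal cloud-to-mesh distance check, assuming Open3D and hypothetical
# file names. Points farther than 1 m from the mesh would be flagged
# (shown in red in Figure 8).
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")        # hypothetical point cloud
mesh = o3d.io.read_triangle_mesh("surface.ply")   # hypothetical mesh

# Build a raycasting scene to query unsigned point-to-mesh distances.
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))

query = o3d.core.Tensor(np.asarray(pcd.points), dtype=o3d.core.Dtype.Float32)
distances = scene.compute_distance(query).numpy()

outliers = distances > 1.0                        # 1 m threshold from Figure 8
print(f"{outliers.sum()} of {len(distances)} points exceed 1 m")
```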
Figure 9. Visualization of point size variation from 1 to 6 pixels: (a) 1 pixel per point, (b) 2 pixels, (c) 3 pixels, (d) 4 pixels, (e) 5 pixels, and (f) 6 pixels. To prevent information loss and overlap, a size of 3 pixels per point is recommended.
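The comparison in Figure 9 concerns a single rendering parameter: how many screen pixels each point occupies. As a desktop illustration only (the VR application itself is a separate build), the sketch below sets the recommended 3-pixel point size in Open3D; the input file name is hypothetical.

```python
# Sketch of varying the on-screen point size, assuming Open3D as the
# renderer. Only the point_size parameter is being illustrated; the
# file name is hypothetical.
import open3d as o3d

pcd = o3d.io.read_point_cloud("embankments.ply")  # hypothetical file

vis = o3d.visualization.Visualizer()
vis.create_window()
vis.add_geometry(pcd)
vis.get_render_option().point_size = 3.0          # 3 px per point, the
                                                  # recommended setting
vis.run()
vis.destroy_window()
```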
Figure 10. VR geovisualization of a flooded road crossing the river in a fly-mode (or drone-mode) approach: (a) a user at 50 m above the ground and (b) a user at 500 m above the ground. The red flag marks the affected part, signifying danger in the 2 m flood scenario.
Figure 11. Three-dimensional visualization of the study area in the VR space using pedestrian mode: (a) the ground-classified point cloud in RGB, with a POI depicted by the red flag; (b) the ground and no-ground classified point cloud in green and brown colors.
Table 1. Data acquisition details.

Area | Date | Total Flights | Flight Height | Point Density | GSD | Front Overlap | Length
1 | 24 January 2024 | 7 | 80 m | 393 pts/m² | 2.18 cm/pix | 80% | 12,199 m
2 | 25 January 2024 | 4 | 80 m | 393 pts/m² | 2.18 cm/pix | 80% | 7,080 m
3 | 25–26 January 2024 | 5 | 80 m | 393 pts/m² | 2.18 cm/pix | 80% | 8,377 m
4 | 26–27 January 2024 | 4 | 80 m | 393 pts/m² | 2.18 cm/pix | 80% | 8,055 m
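The GSD column in Table 1 follows directly from the flight height and the camera geometry. The sketch below reproduces a value of the same order using assumed 1-inch-class sensor constants; the actual sensor specification is not stated here, so the numbers are illustrative only.

```python
# Back-of-the-envelope GSD check for the parameters in Table 1. The camera
# constants below are assumptions for illustration, not the actual sensor
# specification used in the survey.
def gsd_cm_per_px(flight_height_m: float,
                  sensor_width_mm: float,
                  focal_length_mm: float,
                  image_width_px: int) -> float:
    """Ground sampling distance in cm/pixel for a nadir-looking camera."""
    return (flight_height_m * 100.0 * sensor_width_mm) / (
        focal_length_mm * image_width_px
    )

# Hypothetical 1-inch-class sensor: 13.2 mm wide, 8.8 mm lens, 5472 px.
# At the 80 m flight height of Table 1 this gives roughly 2.19 cm/pix,
# close to the tabulated 2.18 cm/pix.
print(round(gsd_cm_per_px(80.0, 13.2, 8.8, 5472), 2))
```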
Table 2. Specifications of the spatial derivatives generated from airborne LiDAR data over the study area.

Type | Resolution/Density | Coordinate System | Format
3D point cloud | 525 pts/m² | WGS 84 (4326) and UTM 34N | .las
DTM | 20 cm/pix | WGS 84 (4326) | .tiff
DSM | 20 cm/pix | WGS 84 (4326) | .tiff
Orthomosaic | 5 cm/pix | WGS 84 (4326) | .tiff
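Table 2 lists products in both WGS 84 geographic coordinates and UTM zone 34N. A conversion between the two, assuming the WGS 84-based UTM 34N code EPSG:32634, can be sketched with pyproj as follows; the sample coordinates are hypothetical.

```python
# Sketch of converting coordinates between the two reference systems in
# Table 2: WGS 84 geographic (EPSG:4326) and UTM zone 34N (assumed here
# to be the WGS 84-based code EPSG:32634).
from pyproj import Transformer

# always_xy=True keeps (lon, lat) axis order for the EPSG:4326 input.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32634", always_xy=True)

lon, lat = 21.9, 39.5          # hypothetical point on the Thessalian Plain
easting, northing = transformer.transform(lon, lat)
print(f"UTM 34N: {easting:.1f} E, {northing:.1f} N")
```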
Table 3. Flooded area cover for each of the six scenarios realized.

Scenario | Water Surface Level (m) | Cover Area (ha) | Overflow Point of Interest (POI) | Difference in Cover Area (ha)
Scenario 1 | 1 | 12 | – | –
Scenario 2 | 2 | 35 | Eastern embankment | 23
Scenario 3 | 3 | 42 | Crossroads | 13
Scenario 4 | 4 | 44 | Provincial roads | 2
Scenario 5 | 5 | 44 | Provincial roads | 0
Scenario 6 | 6 | 56 | Western embankment | 12
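The cover areas in Table 3 grow with the water surface level because more terrain falls below the water surface. A simple "bathtub" thresholding of the DTM illustrates the idea; this sketch is not the exact procedure used to generate the scenarios, and the file name and base water level are hypothetical.

```python
# Minimal "bathtub" flood-extent sketch: threshold a DTM against a water
# surface level and convert the flooded cell count to hectares. Illustrative
# only; the file name and base river level are hypothetical assumptions.
import numpy as np
import rasterio

with rasterio.open("dtm.tiff") as src:          # 20 cm/pix DTM (Table 2)
    dtm = src.read(1)
    # Cell area in m² from the raster's affine transform.
    cell_area_m2 = abs(src.transform.a * src.transform.e)

base_level_m = 100.0                            # hypothetical river datum
for wsl in range(1, 7):                         # 1 m ... 6 m scenarios
    flooded = dtm <= base_level_m + wsl         # cells below the water surface
    area_ha = flooded.sum() * cell_area_m2 / 10_000.0
    print(f"Scenario {wsl}: {wsl} m WSL -> {area_ha:.0f} ha")
```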