Article

Quantification of Forest Regeneration on Forest Inventory Sample Plots Using Point Clouds from Personal Laser Scanning

1 Department of Forest and Soil Sciences, Institute of Forest Growth, University of Natural Resources and Life Sciences, Vienna (BOKU), 1190 Vienna, Austria
2 Department of Forest and Soil Sciences, Institute of Forest Ecology, University of Natural Resources and Life Sciences, Vienna (BOKU), 1190 Vienna, Austria
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(2), 269; https://doi.org/10.3390/rs17020269
Submission received: 14 November 2024 / Revised: 6 January 2025 / Accepted: 8 January 2025 / Published: 14 January 2025
(This article belongs to the Section Forest Remote Sensing)

Abstract

The presence of sufficient natural regeneration in mature forests is regarded as a pivotal criterion for their future stability, ensuring seamless reforestation following final harvesting operations or forest calamities. Consequently, forest regeneration is typically quantified as part of forest inventories to monitor its occurrence and development over time. Light detection and ranging (LiDAR) technology, particularly ground-based LiDAR, has emerged as a powerful tool for assessing typical forest inventory parameters, providing high-resolution, three-dimensional data on the forest structure. Therefore, it is logical to attempt a LiDAR-based quantification of forest regeneration, which could greatly enhance area-wide monitoring, further supporting sustainable forest management through data-driven decision making. However, examples in the literature are relatively sparse, with most relevant studies focusing on an indirect quantification of understory density from airborne LiDAR data (ALS). The objective of this study is to develop an accurate and reliable method for estimating regeneration coverage from data obtained through personal laser scanning (PLS). To this end, 19 forest inventory plots were scanned with both a personal and a high-resolution terrestrial laser scanner (TLS) for reference purposes. The voxelated point clouds obtained from the personal laser scanner were converted into raster images, providing either the canopy height, the total number of filled voxels (containing at least one LiDAR point), or the ratio of filled voxels to the total number of voxels. Local maxima in these raster images, assumed to be likely to contain tree saplings, were then used as seed points for a raster-based tree segmentation, which was employed to derive the final regeneration coverage estimate. The results showed that the estimates differed from the reference in a range of approximately −10 to +10 percentage points, with an average deviation of around 0 percentage points. In contrast, visually estimated regeneration coverages on the same forest plots deviated from the reference by between −20 and +30 percentage points, approximately −2 percentage points on average. These findings highlight the potential of PLS data for automated forest regeneration quantification, which could be further expanded to include a broader range of data collected during LiDAR-based forest inventory campaigns.

1. Introduction

The presence of adequate natural regeneration is essential for maintaining forest structure, ensuring stability, and reducing vulnerability to disturbances. Monitoring and quantifying forest regeneration, particularly in uneven-aged forests, thus plays a crucial role in sustainable forest management and planning [1]. This is especially important for protective forests, which mitigate the effects of natural hazards such as avalanches, rockfalls, and landslides, thereby safeguarding human life, infrastructure, and agricultural land. In regions like Austria, where approximately 30% of the forested area is classified as protective forest, the growing infrastructural density and overaging stands underscore the need for efficient, large-scale monitoring approaches for assessing forest regeneration [2,3].
Nevertheless, the number of studies on the quantification of regeneration using remote sensing technologies is still limited. To date, the majority of studies addressing the aforementioned topic are based on airborne laser scanning (ALS) data [4,5,6,7,8]. Although this form of active remote sensing provides a detailed view of the canopy at a high spatial resolution [9] and enables, to some extent, the depiction of understory structures, the exact quantification of sub-canopy forest regeneration is still challenging [8,10]. Amiri et al. [4], for example, estimated the regeneration coverage from full-waveform ALS point clouds. Their approach resulted in an underestimation of the regeneration coverage by approximately 30% in comparison to the reference [4].
Naturally, ground-based LiDAR exhibits higher precision regarding the near-ground layer, since the scanning system is itself situated below the canopy cover. An example of terrestrial laser scanning (TLS) applied to the quantification of forest regeneration was given by Heinzel and Ginzler [10]. The authors initially identified tree stems from TLS point clouds, calculated their diameters, and then segmented individual trees. By classifying stems into mature trees, unestablished regeneration, and established regeneration (the latter being defined as trees taller than 130 cm and with a DBH ≤ 12 cm), they estimated regeneration coverage within their study plots. The authors encountered the greatest challenge in the form of occlusion effects caused by clusters of densely growing coniferous tree regeneration, which are often covered with branches that reach to the ground and become impenetrable to the laser beams [10]. In another study, Brolly et al. [11] developed and tested an algorithm for the mapping of juvenile trees between heights of 3 and 6 m. Although they achieved accuracies comparable to detecting mature trees, stem density played a major role in their outcomes. Both studies highlight a central issue with TLS: while the data quality near the ground is excellent, occlusions remain problematic, and capturing comprehensive coverage often requires multiple scans from different positions, increasing labor and costs.
Personal laser scanning (PLS) systems offer a promising advancement. With PLS, data are collected on the move, potentially reducing occlusions and providing a more complete point cloud of the understory. Although PLS typically has a lower point density than TLS, it still delivers highly detailed measurements of understory structures and can be more cost-effective, since scanning occurs continuously and does not require setting up multiple fixed stations [12]. To the best of our knowledge, however, there is no study addressing the quantification of forest regeneration from PLS data, which is why the results of the following study were compared to those of studies that were based on other data sources.
Upon closer examination of the limited number of studies addressing the topic of regeneration quantification from LiDAR point clouds, the inherent difficulties of this task become apparent. Most of these studies either define regeneration as consisting of relatively tall trees, taller than 1 m, or they remove LiDAR points below a certain height, which might originate from grass or herbs. By doing so, some major difficulties associated with the detection and quantification of small regeneration trees from PLS data are avoided. As mentioned by Balenović et al. [13], the small size of the targeted trees, together with the comparably low ranging accuracy of PLS systems, generally involves higher noise levels in the data, resulting in reduced tree detection accuracy. Since this obstacle was reported for trees with a DBH of up to 10 cm, it is even more relevant for tree saplings, which is one major reason for the above-mentioned removal of LiDAR points below certain heights and the limited number of studies addressing this topic.
In our study, and in contrast to other existing approaches, regeneration is defined through the presence of woody plants in their juvenile stage, with heights of at least 0.1 m; when taller than 1.3 m, their DBH must not exceed 5 cm. We developed and tested a novel algorithm for estimating the coverage of these regeneration trees from PLS point clouds on 19 forest inventory plots. To achieve this goal, different fundamental algorithms for the detection of treetops and the segmentation of crowns were combined. In more detail, a variable window filter followed by a watershed segmentation was applied to different sets of raster images, each derived from the voxelated point cloud of the regeneration. By combining height models, voxel numbers, and voxel densities, the regeneration coverage of each plot was calculated in five different variants of the core algorithms. To assess the possibilities and limitations of the new algorithm, the resulting regeneration coverages were compared to reference coverages, which were manually derived from high-resolution LiDAR data obtained with a RIEGL VZ-600i stationary TLS system. Furthermore, a comparison with visually estimated regeneration coverages was performed to demonstrate the differences between the method that was traditionally used in forest inventory and our LiDAR-based approach.
The primary goals of this study were as follows: (i) to develop an algorithm to estimate the forest regeneration coverage from PLS point clouds; (ii) to compare the results with reliable and objective reference data; and (iii) to evaluate our results against traditional expert estimates of regeneration coverage.

2. Materials and Methods

2.1. Study Area, Data Collection, and Data Preparation

This study was conducted in the BOKU University training forest near the village of Forchtenstein (Lower Austria), which is part of the Austrian Federal Forest Service. Forest vegetation was scanned on 19 forest inventory plots using two different laser scanners (Figure 1): a handheld personal laser scanning (PLS) system, the GeoSLAM Zeb Horizon (GeoSLAM Ltd., Nottingham, UK), and a RIEGL VZ-600i TLS system (RIEGL Laser Measurement Systems GmbH, Horn, Austria). The PLS system provides a point detection rate of 300,000 points per second at a maximum range of 100 m. The PLS system is highly mobile due to its low weight, its high battery capacity, and the implemented SLAM (simultaneous localization and mapping) technology. When scanning with the PLS system, the walking-path instructions given by Gollob et al. [14] were followed: the scanning operator started at the plot center and headed north, circled the plot center within a radius of 20 m, crossed the center twice, and ended the scan process at the starting point. This procedure ensured a high coverage of the entire sample plot area and a complete 3D scan of the vegetation from the ground to the treetops [14].
The RIEGL VZ-600i TLS has a pulse repetition rate of up to 2.2 MHz at a resolution of 6 mm at a distance of 10 m. The vertical and horizontal resolutions can be scaled by the user and were both set to 0.034°. Due to its stationary operating principle, the RIEGL VZ-600i TLS has limitations when used in forest inventory practice. However, the RIEGL TLS provides higher precision and a higher point density and enables accurate colorization of the resulting point clouds; therefore, it was used in multi-scan mode to collect accurate reference data (see Section 2.2). Depending on the height and density of the ground vegetation, between 4 and 6 scanning positions were distributed symmetrically around the plot center at distances of approximately 4 m. This scan alignment was chosen to maximize the completeness of the point clouds by minimizing shadowing effects.

2.2. Reference Data

The reference data of the regeneration coverage were assessed via two methods. First, the coverage rates were visually estimated by experienced experts; this is still common practice in vegetation science and forest management planning. To mitigate potential observer bias, the visual assessments were continually calibrated against the schematic illustrations of vegetation coverage classes defined by Braun-Blanquet [15], and observations were made by three operators independently of one another. Second, the colorized high-resolution point clouds obtained by the RIEGL VZ-600i TLS were manually cropped to derive the exact area covered by the forest regeneration plants. The cropping of the point clouds was performed using the clipping tool in the CloudCompare software [16].
To ensure that only trees with the predefined size (height taller than 0.1 m and DBH less than 5 cm) were included in this process, the precise positions of these trees were mapped in the field using Version 23.7 of the ITS GeoAce Survey software (ITS Geo Solutions GmbH, Jena, Germany), which implements visual SLAM positioning and augmented reality technologies, running on an Apple iPad Pro (Apple Inc., Cupertino, CA, USA) (Figure 2). The thereby-acquired tree positions, visualized as points in CloudCompare, were used as visual markers to facilitate and improve the identification and clipping of the regenerating trees from the RIEGL point clouds.
The resulting 2D hulls were imported as shapefiles into the workspace of the R software [17] to calculate the reference regeneration coverage with polygon operation functionalities provided by the sf package [18]. This approach provided the most accurate reference data; accordingly, both the visually estimated and the LiDAR-derived regeneration coverages could eventually be evaluated.
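As an illustration, the following R sketch shows how such a reference coverage could be computed with the sf package, assuming the clipped regeneration hulls of one plot have been exported as a polygon shapefile and the plot is a circle around the plot center; the file name, plot radius, and center coordinates are placeholders, not values from the study.

```r
# Sketch of the reference-coverage calculation with sf; file name, plot radius,
# and plot-center coordinates are placeholders.
library(sf)

hulls  <- st_read("plot_01_regeneration_hulls.shp")    # clipped 2D hulls (CloudCompare export)
center <- st_sfc(st_point(c(0, 0)), crs = st_crs(hulls))
plot_area <- st_buffer(center, dist = 10)              # circular sample plot, assumed 10 m radius

# Dissolve overlapping hulls and clip them to the plot boundary
regen <- st_intersection(st_union(hulls), plot_area)

# Reference regeneration coverage in percent of the plot area
coverage_pct <- 100 * as.numeric(st_area(regen)) / as.numeric(st_area(plot_area))
```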

2.3. Calculation of Regeneration Coverage from LiDAR Data

2.3.1. General Approach

Although different methods were tested in the subsequent steps, the core method for identifying treetops and delineating crown areas remained consistent. The differences among the tested methods lay mainly in their choice of input data and the sequence in which certain functions were applied. Our fundamental approach was inspired by established methodologies that detect mature trees from ALS-derived digital surface models (DSMs) [19]. Similarly, after converting the PLS point clouds of dense regeneration clusters into DSMs, algorithms for the detection of treetops as local maxima in the surface model can be applied to localize the areas that likely contain regeneration trees. We are well aware that, in heterogeneous natural forests, peaks in such a small-scale DSM might also originate from other objects or plants. Therefore, the described methodology was tested with different models, abandoning the strict reliance on height information alone. Figure 3 schematically illustrates the workflow of the general approach, which is described in the following.
Prior to all further steps, the raw PLS data were pre-processed with the GeoSLAM Hub (GeoSLAM Ltd., Nottingham, UK), and the resulting point cloud data were then exported in a LAS file format [20]. The methodology presented by Tockner et al. [21] was applied to the vegetation points to achieve segmentation of the individual overstory trees (DBH > 5 cm) via a voxel-based region growing algorithm. The segmented overstory trees were removed from the point cloud, and only the remaining ground and ground vegetation points were further analyzed. However, prior to the automatic analysis of the smaller tree regeneration, noise was filtered using statistical outlier removal (SOR) [22]. Points originating from the laser hits on the pole, which marked the center of each inventory plot, were cut out manually using CloudCompare software [16].
The resulting point clouds served as input data for the quantification of regeneration. In the first step, a classification of ground and non-ground points was performed. This was accomplished through the use of a cloth simulation filter (CSF) from the lidR package [22].
In the second step, the ground points were used to generate a digital terrain model (DTM) that was required for the subsequent normalization of the vegetation point cloud using the normalize_height function [22].
To achieve a more uniform distribution of the measurement points and reduce bias in the point density [23], the normalized point cloud was voxelized in the third step, using the voxelize_points function. Each voxel containing at least one LiDAR point was considered “filled”, and its 3D coordinates were used to create a new voxel cloud. Without voxelization, the point density would vary depending on the PLS sensor’s position during scanning and the density of the scanned vegetation, possibly falsifying the results [23]. A grid search optimization identified the optimal parameter settings for the edge length of the cubic voxels (voxres) as well as the parameters of the CSF function, class threshold (classthr), and cloth resolution (clothres) (see Section 2.4).
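A minimal R sketch of steps 1–3 with the lidR package is given below; the parameter values for the noise filter, CSF, and voxel size are placeholders (the final values of classthr, clothres, and voxres were determined by the grid search described in Section 2.4), and the input file name is hypothetical.

```r
# Sketch of steps 1-3 with lidR; file name and parameter values are placeholders.
library(lidR)

las <- readLAS("plot_01_understory.las")   # PLS points after overstory removal

# Noise filtering with statistical outlier removal (SOR)
las <- classify_noise(las, sor(k = 10, m = 3))
las <- filter_poi(las, Classification != LASNOISE)

# Step 1: ground classification with the cloth simulation filter (CSF)
las <- classify_ground(las, csf(class_threshold = 0.2, cloth_resolution = 0.1))

# Step 2: DTM from the ground points and height normalization
dtm <- rasterize_terrain(las, res = 0.1, algorithm = tin())
las <- normalize_height(las, dtm)

# Step 3: voxelization of the normalized vegetation points
veg    <- filter_poi(las, Classification != 2L, Z >= 0)
voxres <- 0.02                             # voxel edge length in m
voxels <- voxelize_points(veg, res = voxres)
```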
In the fourth step, these vegetation voxels were projected onto the XY-plane with the rasterize function from the terra package [24], where the pixel size was set to the size of the voxels (voxres). Accordingly, three different raster images were created for each plot: the Surface Raster (representing the height of the highest filled voxel), the Voxel Count Raster (representing the number of filled voxels), and the Voxel Density Raster (representing the ratio of filled voxels to the total possible voxel stack above each cell). To smooth the resulting raster images and facilitate their interpretation, a mean filter [24] with a kernel size of 7 pixels was then applied to these raster images, meaning that each pixel was assigned the average of the values of a 7 × 7 pixel area around it. This particular kernel size was chosen after visual inspection of the raster images resulting from filtering with different kernel sizes. However, since the kernel size did not seem to have a great influence and the mean filter was only used for an optical smoothing of the raster images, the optimization of this parameter was disregarded.
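The rasterization of step 4 could look like the following terra sketch; the raster template, the handling of the voxel data, and the exact formulation of the Voxel Density Raster are assumptions based on the descriptions above, not the study's implementation.

```r
# Sketch of step 4 with terra; 'voxels' and 'voxres' come from the previous
# sketch, and the density formulation is one plausible interpretation.
library(terra)

vox   <- as.data.frame(voxels@data)        # X, Y, Z coordinates of the filled voxels
vox$n <- 1                                 # helper column for counting voxels per cell

template <- rast(xmin = min(vox$X), xmax = max(vox$X),
                 ymin = min(vox$Y), ymax = max(vox$Y),
                 resolution = voxres)
pts <- vect(vox, geom = c("X", "Y"))

# Surface Raster: height of the highest filled voxel per cell
surface_r <- rasterize(pts, template, field = "Z", fun = "max")

# Voxel Count Raster: number of filled voxels per cell
count_r <- rasterize(pts, template, field = "n", fun = "sum")

# Voxel Density Raster: filled voxels relative to the possible voxel stack per cell
density_r <- count_r / (surface_r / voxres)

# Optional smoothing with a 7 x 7 mean filter
surface_s <- focal(surface_r, w = 7, fun = mean, na.rm = TRUE)
```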
In the fifth step, the areas likely to contain a regeneration tree were identified using either the Surface, Voxel Count, or Voxel Density Raster, depending on the chosen method (see Section 2.3.2). For this purpose, the vwf function from the ForestTools [25] package was used to find local maxima within a variable search radius. Depending on which raster was filtered, the variable search radius was defined by one of three window functions. Equations (1)–(3) show the basic setup of these functions as chosen in the course of this study, with r_sw being the radius of the search window and x_surface, x_voxel, and x_density being the pixel values of the Surface, Voxel Count, and Voxel Density Raster, respectively. In cases where this pixel value represented a local maximum within the search window, it was marked as a treetop, and its XY coordinates were saved for the next steps.
$$r_{sw} = \frac{x_{surface}}{ws_{fac}} + 0.1 \quad (1)$$
$$r_{sw} = \frac{x_{voxel} \times voxres}{ws_{fac}} + 0.1 \quad (2)$$
$$r_{sw} = \frac{x_{density}}{ws_{fac}} + 0.1 \quad (3)$$
The parameter wsfac defines the search window radius relative to the tree height [26] (or other pixel values). Together with the above-mentioned parameters voxres, classthr, and clothres, wsfac was also optimized via a grid search optimization (see Section 2.4).
To enable a comparison with the reference data, which were acquired via an inventory of saplings higher than 0.1 m, only treetops higher than 0.1 m were included in the following workflow. Trees exceeding the upper DBH threshold of 5 cm were already removed during the tree segmentation of overstory trees (see Section 2.1).
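For example, treetop detection on the smoothed Surface Raster could be sketched as follows with ForestTools, using the window function of Equation (1) with wsfac = 1.5 and the 0.1 m height threshold; a recent ForestTools version that accepts terra rasters is assumed, and all object names other than the vwf arguments are illustrative.

```r
# Sketch of step 5 with ForestTools: local maxima within a variable window.
library(ForestTools)

wsfac   <- 1.5
win_fun <- function(x) x / wsfac + 0.1     # search radius in m, Equation (1)

# Keep only local maxima above the lower height threshold of 0.1 m
treetops <- vwf(surface_s, winFun = win_fun, minHeight = 0.1)
```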
In step 6, the mcws (marker-controlled watershed segmentation) function [25] was used to delineate crown areas, starting from the detected treetops. Naturally, the function requires the positions of the treetops together with a CHM—or another raster image—from which to segment the tree crowns. A threshold, above which a tree crown is delineated, had to be chosen. For all methods working on the Surface or Voxel Count Raster, the threshold was set to 0.1 m or 0.1/voxres voxels, respectively, assuming that smaller detected trees do not yet have crowns of noteworthy size. The corresponding threshold for the Voxel Density Raster (segmin), which does not give any height information, was optimized in the grid search optimization (see Section 2.4). The final output of the sixth step is a set of polygons representing the crown areas of the delineated trees, from which the regeneration coverage could finally be calculated.
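A corresponding sketch of step 6 on the Surface Raster is given below; it assumes a recent ForestTools version in which mcws with format = "polygons" returns polygons coercible to sf, and it reuses the (assumed) plot polygon from the reference-coverage sketch.

```r
# Sketch of step 6: marker-controlled watershed segmentation from the detected
# treetops, followed by the coverage calculation; 'plot_area' is the assumed
# plot polygon from the reference-coverage sketch.
library(ForestTools)
library(sf)

crowns <- mcws(treetops, surface_s, minHeight = 0.1, format = "polygons")

crown_area   <- sum(as.numeric(st_area(st_as_sf(crowns))))
coverage_pct <- 100 * crown_area / as.numeric(st_area(plot_area))
```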
Where the established methods usually employ the same canopy height model for both treetop detection and crown segmentation [19], this study explored various combinations of input data and parameters. Subsequent sections discuss these evaluations and identify the most effective approaches for quantifying regeneration coverage from PLS data.

2.3.2. Methods for Estimating Regeneration Coverage

Figure A20 in Appendix B schematically illustrates the workflow of the different methods tested in the course of this study, which is described in the following. Additionally, Table 1 gives a step-by-step overview of the workflow for each method together with the corresponding parameter settings, which is described further in Section 2.4.
In method M1, both the treetop detection (step 5) and crown segmentation (step 6) processes rely entirely on the Surface Raster. This method closely resembles standard procedures used for ALS data, where a canopy height model (CHM) is typically employed. Here, the Surface Raster serves as a small-scale CHM for regeneration layers, with treetops identified as local height maxima and crowns delineated directly from these peaks.
Method M2 uses only the Voxel Count Raster for both treetop detection and crown segmentation. The idea is that higher voxel counts indicate vertical structures, such as stems, more reliably than the Surface Raster, which can be easily influenced by grass and other ground vegetation. While this approach does not reflect actual crown shapes as directly as the one using the Surface Raster, it can highlight likely tree positions more accurately.
Method M3 combines the strengths of M1 and M2. First, treetops are identified from the Voxel Count Raster, which is better at pinpointing where stems are likely to be. Next, crown segmentation is performed using the Surface Raster. This two-step combination seeks to leverage both the reliable stem detection from voxel counts and the more realistic crown shapes derived from the surface information.
Method M4 refines this approach by adding an intermediate step. After initially detecting peaks and segmenting areas with higher voxel counts in the Voxel Count Raster, these areas are used as masks on the Surface Raster. Within these masked areas, the highest point on the Surface Raster is selected as a new treetop position (step 7 in Table 1), supposedly enhancing the final crown segmentation applied on the Surface Raster (step 8 in Table 1). This step aims to fine-tune crown segmentation, improving the fit between identified treetops and the actual surface structure.
The fifth and last tested method (M5) performs treetop detection and crown delineation on the Voxel Density Raster. This method originated from the same idea as M2, namely that, in areas of the point cloud containing a sapling stem, the number of voxels should be higher than in the surrounding areas. However, when taking the absolute voxel count as segmentation criterion, high grass could easily be mistaken for a tree. Additionally, when applying M1 to M4, the crowns are segmented starting from the treetop until the pixel values fall below a preset height threshold. Applying a static threshold based only on height information falls short of differentiating between the initially segmented tree and adjacent grass patches, which can lead to major misclassifications and negatively impact the final result. Therefore, M5 is an attempt to incorporate the ratio of filled voxels to the total number of voxels as a new segmentation criterion (see Section 2.3.1). For the treetop detection with the vwf function (step 5), the minimum value (topmin) that a pixel representing a local maximum must exhibit to be considered a treetop was set to 0.4 after the grid search optimization (see Section 2.4). This approach is based on the assumption that peaks in the Voxel Density Raster which exhibit a high proportion of filled voxels are likely to represent sapling stems. The crown delineation, starting from these detected peaks, was then conducted down to the threshold (segmin) of 0.1. Thus, instead of assigning every pixel above a certain height or a certain number of voxels to a tree crown, the algorithm incorporates the proportion of filled voxels as segmentation criterion, supposedly enabling a more distinct delineation of trees and other vegetation elements.
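Under the same assumptions as the previous sketches, M5 could be outlined as follows; vwf's minimum-value argument is reused here for topmin and mcws's threshold for segmin, and 'density_r' is the assumed Voxel Density Raster from the rasterization sketch.

```r
# Sketch of M5: treetop detection and crown delineation on the Voxel Density
# Raster; vwf's minHeight argument is reused as the minimum pixel (density) value.
library(ForestTools)

topmin <- 0.4   # minimum density of a local maximum to count as a treetop
segmin <- 0.1   # density at which crown segmentation stops

win_fun_density <- function(x) x / 1.5 + 0.1     # Equation (3) with wsfac = 1.5

tops_m5   <- vwf(density_r, winFun = win_fun_density, minHeight = topmin)
crowns_m5 <- mcws(tops_m5, density_r, minHeight = segmin, format = "polygons")
```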

2.4. Comparison of Parameter Settings

We examined the parameters influencing ground classification, point cloud voxelization, and tree detection, and their impact on the final results for all methods (M1–M5). To this end, a grid search optimization (brute force algorithm) was performed across different parameter combinations.
One of the optimized parameters was the size of the cubic voxels (voxres) into which the LiDAR points were converted in step 3. During the optimization, this size was varied between 1 and 5 cm in steps of 1 cm. The variable wsfac of the window size function was also varied between 1 and 5, but in steps of 0.5. However, since the latter did not influence the results in any of the applied methods, wsfac was simply set to 1.5 after an initial grid search.
Accurate ground classification is critical, especially at such a small scale, since it strongly influences the quality of tree identification and delineation. For this reason, the parameters of the cloth simulation filter (CSF) algorithm used for ground classification were given special attention. The CSF algorithm, described by Zhang et al. [27], simulates a rigid cloth draped over the inverted point cloud. By analyzing how the cloth interacts with the surface, it determines which points represent the ground. The cloth resolution (clothres) controls the resolution of the initial cloth grid. Three values for clothres (0.1, 0.3, and 0.5) were tested. The class threshold (classthr) defines how close a point must be to the cloth to be considered ground. Three values for classthr (0.10, 0.15, and 0.20) were tested as well. The other CSF parameters (rigidness, time step, and the number of iterations) were left at their default settings.
For M5, the thresholds for tree detection and segmentation, topmin and segmin, needed to be optimized. The topmin threshold (ranging from 0.4 to 0.9 in increments of 0.1) defines how large a local maximum in voxel density must be to count as a treetop. The segmin threshold (ranging from 0 to 0.7 in increments of 0.1) defines where crown segmentation ends. Naturally, only parameter combinations with topmin being larger than segmin were taken into account.
Consequently, 45 possible parameter combinations were tested for M1 to M4 and 1710 for M5. The optimal parameter sets were identified by minimizing the mean absolute error (MAE) between the calculated regeneration coverage and the reference coverage from the RIEGL point clouds. The MAE was calculated as outlined in Equation (4):
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right| \quad (4)$$
where $y_i$ is the reference regeneration coverage, $\hat{y}_i$ is the estimated regeneration coverage, and $n$ is the number of plots.
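The grid search itself can be outlined as in the following sketch; estimate_coverage() is a hypothetical wrapper around steps 1–6 for a single plot and parameter set, and reference_coverage is assumed to hold the per-plot reference values.

```r
# Sketch of the grid search over voxres, classthr, and clothres (45 combinations
# for M1-M4); estimate_coverage() and reference_coverage are assumptions.
param_grid <- expand.grid(
  voxres   = seq(0.01, 0.05, by = 0.01),
  classthr = c(0.10, 0.15, 0.20),
  clothres = c(0.1, 0.3, 0.5)
)

mae <- function(ref, est) mean(abs(ref - est))   # Equation (4)

param_grid$MAE <- apply(param_grid, 1, function(p) {
  est <- sapply(seq_along(reference_coverage), function(i)
    estimate_coverage(plot = i, voxres = p["voxres"],
                      classthr = p["classthr"], clothres = p["clothres"]))
  mae(reference_coverage, est)
})

best <- param_grid[which.min(param_grid$MAE), ]  # lowest-MAE parameter set
```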

2.5. Comparison of Methods

After identifying the optimal parameter combinations for each of the five tested methods (M1–M5) using the approach described above, a more detailed evaluation of their performance became possible. By calculating the regeneration coverage under these optimal settings, the best achievable results for each method were determined, allowing for a fair and direct comparison.
To account for the bias and detect any systematic errors that may have occurred, the mean deviation (MD) was used for the final comparison of the optimized methods. It was calculated as outlined in Equation (5):
$$MD = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right) \quad (5)$$
where $y_i$ is the reference regeneration coverage at sample plot $i$, $\hat{y}_i$ is the estimated regeneration coverage at the same plot, and $n$ is the total number of sample plots.
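As a sketch, with the same assumed per-plot vectors as in the grid search example:

```r
# Mean deviation (Equation (5)) between reference and estimated coverages;
# 'estimated_coverage' is an assumed per-plot vector analogous to the MAE sketch.
md <- function(ref, est) mean(ref - est)
md(reference_coverage, estimated_coverage)
```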

3. Results

3.1. Optimization of Parameters

As described in Section 2.4, the MAE of the calculated regeneration coverages was analyzed in terms of different parameter combinations (voxel resolution—voxres; class threshold—classthr; cloth resolution—clothres; and, for M5, the thresholds for the local maxima and segmented crowns, topmin and segmin). The thereby-identified best match for M1, i.e., the lowest MAE, was obtained with a voxel resolution of 0.01 m, a class threshold of 0.2 m, and a cloth resolution of 0.3 m. While the optimal voxel resolution was the same, the M3 and M4 methods performed best with a class threshold of 0.15 m and a cloth resolution of 0.5 m. Method M2 reached its lowest MAE of 2.59 percentage points (pp) with a voxel resolution of 0.03 m, a class threshold of 0.2 m, and a cloth resolution of 0.1 m. However, it should be noted that all parameter combinations with a voxel resolution of 0.03 m led to low MAE values when applied with method M2 (Figure 4). Applying M5, the lowest MAE of 3.87 pp was reached with a voxel resolution of 0.02 m, a class threshold of 0.2 m, and a cloth resolution of 0.1 m. The minimum value for the detection of local maxima (topmin) from the Voxel Density Raster was optimal at 0.4, with an optimal segmentation threshold segmin of 0.1. Generally, the voxel resolution and class threshold had a larger impact on the final results, whereas the cloth resolution had only marginal influence, and a change in the window size parameter wsfac had almost no impact at all. The latter became apparent after an initial grid search, which is why the influence of the window size parameter was not investigated further.
Apart from M1, all the methods reached average deviations around 0 pp when applying certain parameter combinations (Figure 4). Note that, in Figure 4, only the optimized values of the parameters topmin and segmin (topmin = 0.4, segmin = 0.1) are included in the illustrated results of M5, in order to enable a balanced comparison with the other methods.
Figure 5 shows how the mean absolute error (MAE) and the mean deviation from the reference coverage (MD) are affected by the parameters topmin and segmin when the otherwise-optimized parameters (voxres = 0.02 m, classthr = 0.2 m, clothres = 0.1 m) are applied to method M5. The lowest MAE was achieved with topmin = 0.4 and segmin = 0.1. Notably, increasing the treetop detection threshold topmin above 0.5 led to underestimations of the regeneration coverage, as more treetops are overlooked. Similarly, higher segmentation threshold (segmin) values reduced the segmented crown area, further increasing the likelihood of underestimation.

3.2. Comparison of Methods

3.2.1. Comparison Between Modeling Methods

Comparing the deviations of the modeled regeneration coverages from the reference across the different models, M2–M5 reached deviations of around 0 pp. Across all possible parameter combinations, the mean deviation (MD) for M2 ranged between −28.8 and 34.4 pp, reaching 4.7 pp on average. The deviations achieved with M5 ranged between −28.9 and 58.3 pp, thus also revealing an average deviation of approximately 0 pp for certain parameter combinations (Figure 6).
Of course, when the wrong parameter settings are applied, these deviations can reach quite high levels, which would not be satisfactory for practical applications. However, the deviations are reduced drastically when the parameter settings optimized as described in the previous section are applied (see Figure 4 and Figure 5). Under these settings, the final deviations achieved with M2 were distributed more evenly around 0 and ranged between −7.4 and 6.5 pp, with an average deviation of −0.3 pp across all plots (Figure 7). M5 exhibited a slightly larger bias but still yielded good results, ranging from −9.1 to 12.3 pp, with an average overestimation of 2.4 pp under the optimized parameter settings. The methods based on the Surface Raster (M1, M3, and M4) yielded adequate results as well, albeit more biased than those achieved with M2 or M5.

3.2.2. Comparison with Visual Estimation

As described in Section 2.2, a visual estimation of the regeneration coverage in compliance with the Braun-Blanquet method was also performed. Depending on the operator, the resulting deviations ranged from −18.5 to 29.9 pp, with an average deviation of −0.1 pp for Operator 1, −2.7 pp for Operator 2, and −2 pp for Operator 3 (Figure 7). The outcomes of the visual estimations for each individual operator are illustrated in Figure 8, alongside those from M2 and M5, in order to illustrate the impact of subjectivity on the reproducibility of results.
To enable a plot-wise comparison between the different approaches, Figure 9 shows the regeneration coverages of the best methods (M2 and M5), together with the visually estimated and averaged regeneration coverages, as well as the reference. All estimations exhibited a strong correlation with the latter, as evidenced by their coefficients of determination, amounting to 0.94 for M2, 0.85 for M5, and 0.66 for the visual estimates. In Appendix A, the raster images of all individual plots together with the modeled and referenced regeneration cover for all tested methods can be investigated in more detail.

4. Discussion

4.1. Definition of Forest Regeneration and Target Variables

The majority of remote sensing methods for quantifying understory vegetation rely on ALS data. Their effectiveness depends largely on the proportion of laser beams passing through the canopy [5,28]. As a result, many studies focus on indirect indicators of regeneration—such as last-return reflections [29], intensity values [30,31], or height distribution metrics [6,31]—rather than attempting a direct quantification.
A selection of studies that attempt a direct quantification of regeneration is listed in Table 2, containing studies based on TLS [10,11], ALS [4,7,32] and photogrammetry [33]. To the best of our knowledge, the two studies based on TLS [10,11] data are the only ones achieving this objective with ground-based LiDAR.
A key challenge in comparing such research lies in differing definitions of forest regeneration. Without a standardized definition, comparing results becomes difficult. In our work, we chose to focus on trees taller than 0.1 m and with a DBH below 5 cm for the following reasons. The DBH threshold was predetermined by the tree segmentation algorithm [21] used for delineating and removing the overstory trees. Trees above this threshold are reliably detected and removed anyway [21], thus leaving only trees with a DBH below 5 cm undetected. The lower height threshold of 0.1 m, compliant with that of the Austrian national forest inventory [34], was necessary to exclude seedlings, which would not be depicted well enough in the point cloud even to detect them visually, much less automatically. Moreover, seedlings, not having passed their most vulnerable phase of development, are commonly not the focus of regeneration inventories [35].
Another source of inconsistency in regeneration studies is the target variable. Some aim to determine regeneration coverage, while others count the number of saplings. We focused on coverage, a key criterion in forest management and planning that requires less time and effort than counting individual stems [36]. Furthermore, the influence of obstructed tree stems on the accuracy of single tree detection is less critical when measuring coverage [10]. Even if some treetops are missed due to dense clustering or an inappropriately large search window, the overall coverage will still be represented correctly, as crown segmentation merges overlooked trees into larger contiguous areas. This also explains why the window size function for treetop detection, and consequently also the parameter wsfac, did not greatly influence the final results.
Taking into account the above-mentioned considerations, it becomes obvious why studies that are geared towards quantifying the number of saplings usually focus on rather large regeneration trees with heights above 1 m (see Table 2). Since an accurate quantification of the number of saplings requires a high point density and minimal obstruction of the stems during scanning [10,11], this objective appears unfeasible for very small trees.

4.2. Differentiation from Other Vegetational and Non-Vegetational Elements

Another reason for considering larger saplings as regeneration in ALS studies is that they are easier to distinguish from other ground vegetation and non-vegetational elements. By simply cropping the vegetation point cloud at a certain height threshold—beneath which, grass, herbs, and shrubs are believed to dominate—the exclusion of the latter is ensured [4,7,10], simultaneously rendering the detection of small saplings impossible.
By applying the lower threshold of 0.1 m, we could not entirely exclude the influence of grass and shrubs, which reach heights well above this threshold on many plots. Accordingly, one of the biggest disadvantages of our approach is its inability to reliably distinguish between regeneration trees, shrubs, dense grass, or large clumps of grass. To the best of our knowledge, no method of LiDAR-based regeneration quantification has yet succeeded in solving this problem. Although the distinction of grass, shrubs, and trees from aerial images through their texture, spectral features, or color information is common practice, these methods are mainly applied for large-scale land cover classifications [37,38]. Some studies, using high-resolution images acquired with unmanned aerial vehicles (UAVs), also present techniques for the precise detection of individual tree saplings [39,40,41]. However, these approaches were developed and tested on images of tree nurseries or artificial regeneration, with homogeneously distributed saplings standing out against a distinct, contrasting background. Moreover, these image-based approaches require freestanding regeneration without overstory trees that would obstruct the view of the saplings from above.
Although our approach likewise cannot be completely relied upon to differentiate between tree saplings, non-vegetational elements, and herbaceous vegetation, the application of the voxel-based approaches (M2 and M5) was able to minimize the negative influences of grass on the tree detection. While grass can easily lead to peaks in the Surface Raster, it is less likely to do so in the Voxel Count or Voxel Density Raster. These advantages become especially apparent from a closer inspection of Plot 7 (Figure 10). While the eastern part of the plot was covered by a dense cluster of regenerating spruce, the southern and western parts were vegetated with tall tufts of reed grass (Figure 10a). Methods based on the Surface Raster, represented by M1 in Figure 10b, segmented these areas as tree crowns due to the detected peaks and high pixel values resulting from point hits on the inflorescences of the grass. The application of M5, however, completely prevented this misclassification (Figure 10c), with the areas of the Voxel Density Raster containing reed grass exhibiting voxel densities far below the preset segmentation threshold of 0.1.
It should be noted that the reliability of this strategy is limited by the somewhat-restricted diversity of the data used. A certain amount of inaccuracy cannot be avoided when relying solely on 3D voxels for the detection of saplings and their differentiation from other plants and non-vegetational elements. A more reliable distinction might be reached by incorporating additional data, as was performed in previous studies, using, for example, intensity values [31] or spectral signatures [33]. Despite the incorporation of such additional data, these studies still could not solve the problem of differentiating between tree saplings and other elements, especially coarse woody debris [31,33].
Considering these findings, it is also important to highlight that the simplicity achieved by restricting the data to the 3D coordinates of the LiDAR points also presents certain advantages. By avoiding the incorporation of, for example, color information or reflectance values, the presented approach is universally applicable, regardless of the tree species or the light conditions during scanning.

4.3. Choice of Modeling Method

From the comparison of deviations presented in Section 3.2.1 (Figure 8), it becomes obvious that the methods that use surface-based crown segmentation are less suited for the estimation of regeneration coverage. Method M2, on the other hand, which performs tree detection as well as crown segmentation solely on the Voxel Count Raster, yields more accurate results, with deviations ranging between −7.4 and 6.5 pp. Similarly, method M5, based solely on the Voxel Density Raster, overestimates the regeneration cover by only 2.4 pp on average.
These findings align with those of Hershey et al. [42], who also encountered difficulties with canopy height models and watershed methods. They observed that subtle canopy contours and multiple height peaks per crown caused overestimations of tree numbers. They switched to a voxel-based approach, identifying vertical structures by counting the number of voxels and points within a specific height bin. Although our primary goal is not counting individual trees but rather identifying likely tree positions for crown delineation, our voxel-based methods (M2 and M5) follow a similar logic. Instead of relying on surface peaks that can be influenced by grass and other vegetation, these methods focus on voxel counts or densities, which more reliably indicate tree locations.
Surface-based methods not only struggle with tree detection but also with the subsequent crown segmentation. Starting from detected peaks, a surface-based segmentation spreads across all connected non-ground points above 0.1 m, assigning them to a tree crown. When dense grass taller than 0.1 m is present near saplings, this leads to a significant overestimation of regeneration coverage. In contrast, voxel-based methods handle such scenarios better, as the thin blades of grass produce fewer voxels (M2) or lower voxel densities (M5), making it easier to distinguish them from tree saplings.
However, it is important to note that voxel density and counts depend on the quality and resolution of the point cloud data. The parameter settings optimized for our scanning setup may not directly apply to different scanners or scanning patterns. Future applications of these voxel-based approaches may need parameter adjustments to accommodate varying data acquisition conditions.

4.4. Comparison with Visual Estimation of Regeneration Coverage

Visual estimates of regeneration coverage can be practical and efficient, especially when quick evaluations are needed. In our study, experienced observers achieved results that aligned closely with the reference data. However, the accuracy and consistency of the visual estimates depend heavily on the observer’s experience and objectivity. Past research has shown that different operators can produce highly inconsistent results, particularly in areas with higher regeneration cover [43]. These findings correspond with the results presented in Figure 8, which similarly revealed inconsistent results between operators, especially for plots with higher regeneration coverages.
In practical forest inventories, regeneration coverage and ground vegetation are often recorded in broad classes or categories [44]. Although less precise, this categorization is generally deemed acceptable since it maintains data integrity at larger scales. Comparing the class sizes obtained from such approaches, like the Braun-Blanquet scale [44], with the comparably smaller deviations found in our LiDAR-based estimates, highlights the reliability of the latter.
Nevertheless, automated LiDAR-based methods cannot replace thorough field inventories if the goal is to determine the absolute number of saplings [33]. Most national forest inventories require sapling counts [45], but they use small plots [46], often with counting limits [34,45], to manage the exhaustive workload. These methods, though seemingly accurate, lose precision when aggregated over larger areas due to high local variability. In contrast, management-oriented inventories generally rely on coverage classes rather than exact sapling counts, since coverage information is sufficient for planning activities such as regeneration harvests and thinning [36]. These coverage classes are easier to obtain, less time-consuming, and still useful for operational decisions.
In this regard, LiDAR-based estimations of regeneration coverage provide objective, consistent results, which are also repeatable for comparisons between measurement periods. Similarly, Heinzel and Ginzler [10] also highlighted that visual estimation of regeneration coverage is prone to failure, even if conducted by experienced operators. However, for new methodologies to be accepted and their practical usability evaluated, their comparison to acknowledged, long-term practices is indispensable [10]; this is why the results of our study were also compared to visual estimates in addition to the high-precision reference data.
From an economic perspective, the total time requirement for conducting the whole workflow, including PLS data acquisition (approx. 10 min [14]), overstory tree segmentation (20–60 min [21]), and regeneration quantification (1.6 s) amounts to roughly 50 min per plot—around 16 h in total for all 19 plots. While this might sound time-consuming, it is important to remember that the described workflow is not limited to estimating regeneration coverage alone. This full digital forest inventory process also provides detailed information on tree heights, diameters, standing volume, and more, without requiring any additional data acquisition. For a fair comparison, the LiDAR-based workflow should be weighed against a similarly comprehensive traditional inventory approach, in which every individual tree’s DBH and height would also need to be measured.

5. Conclusions

Given the inevitable discrepancies in experience levels and the unavoidable impact of subjectivity, it is not uncommon for visual estimates of regeneration cover to diverge significantly between operators and measurement periods. These shortcomings highlight the advantages of objective remote sensing techniques, which provide a reliable and reproducible means of data acquisition.
This study explores the challenges and methodologies involved in quantifying forest regeneration using LiDAR data. Voxel-based approaches (M2 and M5) proved superior to surface-based crown segmentation methods, demonstrating reduced bias in estimating regeneration coverage, especially in heterogeneous forest plots with dense grass or other obstructions. These methods minimize the influence of non-vegetation elements by focusing on vertical structures with higher voxel counts or densities. Despite these advantages, the voxel-based approaches still encounter difficulties in distinguishing between tree saplings and other vegetation such as shrubs and grass. Incorporating additional data, such as color information or intensity values, could improve the reliability of these distinctions. However, our method retains simplicity and universal applicability by relying solely on 3D LiDAR point coordinates.
Despite these simplifications, the proposed approach yielded promising results that were in good agreement with the reference data. While the quality of the visual estimates depends on the experience and subjective judgement of the operators, our method provides an objective and reproducible assessment of forest regeneration. Moreover, no data acquisition additional to that of a LiDAR-based forest inventory is required, since all necessary information is derived from the forest point clouds and the methodology can be integrated into existing software routines. Consequently, the presented approach could enhance digital, laser-based forest inventories by incorporating forest regeneration as an additional metric.

Author Contributions

Conceptualization, S.W., C.G., T.R. and A.N.; methodology, S.W., C.G., T.R. and A.N.; software, S.W.; validation, C.G., T.R., A.T., L.M., V.S., T.O.-G., H.S. and A.N.; formal analysis, C.G., R.K., T.R., A.T., L.M., V.S., T.O.-G., H.S. and A.N.; investigation, S.W., C.G., T.R., A.T. and A.N.; resources, A.N.; data curation, S.W. and R.K.; writing—original draft preparation, S.W.; writing—review and editing, C.G., T.R., A.T., L.M., V.S., T.O.-G., H.S. and A.N.; visualization, S.W.; supervision, C.G., T.R., H.S. and A.N.; project administration, A.N. and T.R.; funding acquisition, A.N., C.G. and T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the project Invent-PLS and was financed by the Austrian Federal Ministry of Finance via the Austrian Research Promotion Agency (FFG) under project number 899975 and eCall number 47418931. S. Witzmann’s work was completely financed by Invent-PLS.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Illustration of the calculated crown areas of Plot 1 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A2. Illustration of the calculated crown areas of Plot 2 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A3. Illustration of the calculated crown areas of Plot 3 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A4. Illustration of the calculated crown areas of Plot 4 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A5. Illustration of the calculated crown areas of Plot 6 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A6. Illustration of the calculated crown areas of Plot 7 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A7. Illustration of the calculated crown areas of Plot 8 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A7. Illustration of the calculated crown areas of Plot 8 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a7
Figure A8. Illustration of the calculated crown areas of Plot 9 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A8. Illustration of the calculated crown areas of Plot 9 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a8
Figure A9. Illustration of the calculated crown areas of Plot 10 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A9. Illustration of the calculated crown areas of Plot 10 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a9
Figure A10. Illustration of the calculated crown areas of Plot 11 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A10. Illustration of the calculated crown areas of Plot 11 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a10
Figure A11. Illustration of the calculated crown areas of Plot 12 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A11. Illustration of the calculated crown areas of Plot 12 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a11
Figure A12. Illustration of the calculated crown areas of Plot 13 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A12. Illustration of the calculated crown areas of Plot 13 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a12
Figure A13. Illustration of the calculated crown areas of Plot 14 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A13. Illustration of the calculated crown areas of Plot 14 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a13
Figure A14. Illustration of the calculated crown areas of Plot 15 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A14. Illustration of the calculated crown areas of Plot 15 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a14
Figure A15. Illustration of the calculated crown areas of Plot 16 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A15. Illustration of the calculated crown areas of Plot 16 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a15
Figure A16. Illustration of the calculated crown areas of Plot 17 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A16. Illustration of the calculated crown areas of Plot 17 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a16
Figure A17. Illustration of the calculated crown areas of Plot 18 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A17. Illustration of the calculated crown areas of Plot 18 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a17
Figure A18. Illustration of the calculated crown areas of Plot 19 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A18. Illustration of the calculated crown areas of Plot 19 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a18
Figure A19. Illustration of the calculated crown areas of Plot 20 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Figure A19. Illustration of the calculated crown areas of Plot 20 for each method. The raster image in the background is the one on which the final crown segmentation was performed. The red lines represent the reference crown areas; the blue lines represent the calculated ones. The corresponding regeneration coverages (RCs) are also given to enable their comparison. The colors of the pixels and color scales represent height in meters (for M1, M3 and M4), voxel count (M2) or voxel density (M5), respectively.
Remotesensing 17 00269 g0a19

Appendix B

Figure A20. Schematic illustration of methods M1, M2, M3, and M5, starting from the treetop detection step.

References

1. Miina, J.; Eerikäinen, K.; Hasenauer, H. Modeling Forest Regeneration. In Sustainable Forest Management: Growth Models for Europe; Springer: Berlin/Heidelberg, Germany, 2006; pp. 93–109. ISBN 3540260986.
2. Makino, Y.; Rudolf-Miklau, F. The Protective Functions of Forests in a Changing Climate—European Experience; Forestry, W., Ed.; FAO and Austrian Federal Ministry for Agriculture, Regions and Tourism: Rome, Italy, 2021; ISBN 9789251343173.
3. Motta, R.; Haudemand, J.C. Protective Forests and Silvicultural Stability. Mt. Res. Dev. 2000, 20, 180–187.
4. Amiri, N.; Yao, W.; Heurich, M.; Krzystek, P.; Skidmore, A.K. Estimation of Regeneration Coverage in a Temperate Forest by 3D Segmentation Using Airborne Laser Scanning Data. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 252–262.
5. Du, L.; Pang, Y. Identifying Regenerated Saplings by Stratifying Forest Overstory Using Airborne LiDAR Data. Plant Phenomics 2024, 6, 0145.
6. Korpela, I.; Hovi, A.; Morsdorf, F. Understory Trees in Airborne LiDAR Data—Selective Mapping Due to Transmission Losses and Echo-Triggering Mechanisms. Remote Sens. Environ. 2012, 119, 92–104.
7. Hamraz, H.; Contreras, M.A.; Zhang, J. Vertical Stratification of Forest Canopy for Segmentation of Understory Trees within Small-Footprint Airborne LiDAR Point Clouds. ISPRS J. Photogramm. Remote Sens. 2017, 130, 385–392.
8. Jarron, L.R.; Coops, N.C.; MacKenzie, W.H.; Tompalski, P.; Dykstra, P. Detection of Sub-Canopy Forest Structure Using Airborne LiDAR. Remote Sens. Environ. 2020, 244, 111770.
9. Dobrowolska, D.; Piasecka; Kuberski; Stereńczak, K. Canopy Gap Characteristics and Regeneration Patterns in the Białowieża Forest Based on Remote Sensing Data and Field Measurements. For. Ecol. Manag. 2022, 511, 120123.
10. Heinzel, J.; Ginzler, C. A Single-Tree Processing Framework Using Terrestrial Laser Scanning Data for Detecting Forest Regeneration. Remote Sens. 2019, 11, 60.
11. Brolly, G.; Király, G.; Czimber, K. Mapping Forest Regeneration from Terrestrial Laser Scans. Acta Silv. Lignaria Hung. 2013, 9, 135–146.
12. Lin, Y.; Holopainen, M.; Kankare, V.; Hyyppa, J. Validation of Mobile Laser Scanning for Understory Tree Characterization in Urban Forest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3167–3173.
13. Balenović, I.; Liang, X.; Jurjević, L.; Hyyppä, J.; Seletković, A.; Kukko, A. Hand-Held Personal Laser Scanning—Current Status and Perspectives for Forest Inventory Application. Croat. J. For. Eng. 2020, 42, 165–183.
14. Gollob, C.; Ritter, T.; Nothdurft, A. Forest Inventory with Long Range and High-Speed Personal Laser Scanning (PLS) and Simultaneous Localization and Mapping (SLAM) Technology. Remote Sens. 2020, 12, 1509.
15. Gehlker, H. Eine Hilfstafel zur Schätzung von Deckungsgrad und Artmächtigkeit. Mitt. Flor.-Soz. Arbeitsgem. N. F. 1977, 19, 427–429.
16. Girardeau-Montaut, D. CloudCompare: 3D Point Cloud and Mesh Processing Software. 2017. Available online: https://www.danielgm.net/cc/ (accessed on 10 June 2023).
17. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2020. Available online: https://www.R-project.org/ (accessed on 13 January 2024).
18. Pebesma, E. Simple Features for R: Standardized Support for Spatial Vector Data. R J. 2018, 10, 439–446.
19. Koch, B.; Heyder, U.; Weinacker, H. Detection of Individual Tree Crowns in Airborne Lidar Data. Photogramm. Eng. Remote Sens. 2006, 72, 357–363.
20. LAS Specification 1.4—R14. The American Society for Photogrammetry & Remote Sensing (ASPRS). 2019. Available online: https://github.com/ASPRSorg/LAS (accessed on 24 July 2024).
21. Tockner, A.; Gollob, C.; Kraßnitzer, R.; Ritter, T.; Nothdurft, A. Automatic Tree Crown Segmentation Using Dense Forest Point Clouds from Personal Laser Scanning (PLS). Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 103025.
22. Roussel, J.-R.; Auty, D.; Coops, N.C.; Tompalski, P.; Goodbody, T.R.H.; Sánchez Meador, A.; Bourdon, J.-F.; de Boissieu, F.; Achim, A. lidR: An R Package for Analysis of Airborne Laser Scanning (ALS) Data. Remote Sens. Environ. 2020, 251, 112061.
23. Mathes, T.; Seidel, D.; Häberle, K.H.; Pretzsch, H.; Annighöfer, P. What Are We Missing? Occlusion in Laser Scanning Point Clouds and Its Impact on the Detection of Single-Tree Morphologies and Stand Structural Variables. Remote Sens. 2023, 15, 450.
24. Hijmans, R.J. terra: Spatial Data Analysis. R Package Version 1.7-83. 2023. Available online: https://github.com/rspatial/terra (accessed on 22 July 2024).
25. Plowright, A. ForestTools: Tools for Analyzing Remote Sensing Forest Data. 2023. Available online: https://github.com/andrew-plowright/ForestTools (accessed on 22 July 2024).
26. Popescu, S.C.; Wynne, R.H. Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height. Photogramm. Eng. Remote Sens. 2004, 70, 589–604.
27. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501.
28. Kükenbrink, D.; Schneider, F.D.; Leiterer, R.; Schaepman, M.E.; Morsdorf, F. Quantification of Hidden Canopy Volume of Airborne Laser Scanning Data Using a Voxel Traversal Algorithm. Remote Sens. Environ. 2017, 194, 424–436.
29. Hill, R.A.; Broughton, R.K. Mapping the Understorey of Deciduous Woodland from Leaf-on and Leaf-off Airborne LiDAR Data: A Case Study in Lowland Britain. ISPRS J. Photogramm. Remote Sens. 2009, 64, 223–233.
30. Morsdorf, F.; Mårell, A.; Koetz, B.; Cassagne, N.; Pimont, F.; Rigolot, E.; Allgöwer, B. Discrimination of Vegetation Strata in a Multi-Layered Mediterranean Forest Ecosystem Using Height and Intensity Information Derived from Airborne Laser Scanning. Remote Sens. Environ. 2010, 114, 1403–1415.
31. Wing, B.M.; Ritchie, M.W.; Boston, K.; Cohen, W.B.; Gitelman, A.; Olsen, M.J. Prediction of Understory Vegetation Cover with Airborne Lidar in an Interior Ponderosa Pine Forest. Remote Sens. Environ. 2012, 124, 730–741.
32. Duncanson, L.I.; Cook, B.D.; Hurtt, G.C.; Dubayah, R.O. An Efficient, Multi-Layered Crown Delineation Algorithm for Mapping Individual Tree Structure across Multiple Ecosystems. Remote Sens. Environ. 2014, 154, 378–386.
33. Röder, M.; Latifi, H.; Hill, S.; Wild, J.; Svoboda, M.; Brůna, J.; Macek, M.; Nováková, M.H.; Gülch, E.; Heurich, M. Application of Optical Unmanned Aerial Vehicle-Based Imagery for the Inventory of Natural Regeneration and Standing Deadwood in Post-Disturbed Spruce Forests. Int. J. Remote Sens. 2018, 39, 5288–5309.
34. Hauk, E.; Niese, G.; Schadauer, K. Instruktion für die Feldarbeit der Österreichischen Waldinventur 2016. Available online: https://www.bfw.gv.at/instruktion-feldarbeit-oesterreichische-waldinventur/ (accessed on 11 August 2024).
35. Chirici, G.; Winter, S.; McRoberts, R.E. National Forest Inventories: Contributions for Forest Biodiversity Assessments; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 9789400704817.
36. Nikolova, P.S.; Leuch, B.A.; Frehner, M.; Wohlgemuth, T.; Brang, P. Indicators of Forest Regeneration and Their Application. Schweiz. Z. Forstwes. 2024, 175, 108–115.
37. Qian, Y.; Zhou, W.; Nytch, C.J.; Han, L.; Li, Z. A New Index to Differentiate Tree and Grass Based on High Resolution Image and Object-Based Methods. Urban For. Urban Green. 2020, 53, 126661.
38. Ayhan, B.; Kwan, C. Tree, Shrub, and Grass Classification Using Only RGB Images. Remote Sens. 2020, 12, 1333.
39. Moharram, D.; Yuan, X.; Li, D. Tree Seedlings Detection and Counting Using a Deep Learning Algorithm. Appl. Sci. 2023, 13, 895.
40. Jayathunga, S.; Pearse, G.D.; Watt, M.S. Unsupervised Methodology for Large-Scale Tree Seedling Mapping in Diverse Forestry Settings Using UAV-Based RGB Imagery. Remote Sens. 2023, 15, 5276.
41. Pearse, G.D.; Tan, A.Y.S.; Watt, M.S.; Franz, M.O.; Dash, J.P. Detecting and Mapping Tree Seedlings in UAV Imagery Using Convolutional Neural Networks and Field-Verified Data. ISPRS J. Photogramm. Remote Sens. 2020, 168, 156–169.
42. Hershey, J.L.; McDill, M.E.; Miller, D.A.; Holderman, B.; Michael, J.H. A Voxel-Based Individual Tree Stem Detection Method Using Airborne LiDAR in Mature Northeastern U.S. Forests. Remote Sens. 2022, 14, 806.
43. van Hees, W.W.S.; Mead, B.R. Ocular Estimates of Understory Vegetation Structure in a Closed Picea glauca/Betula papyrifera Forest. J. Veg. Sci. 2000, 11, 195–200.
44. Schmidt, W. Die Vegetationskundliche Untersuchung von Dauerprobeflächen. Mitt. Flor.-Soz. Arbeitsgem. N. F. 1974, 17, 103–106.
45. Tomppo, E.; Gschwantner, T.; Lawrence, M.; McRoberts, R.E. National Forest Inventories: Pathways for Common Reporting; Springer: Dordrecht, The Netherlands, 2010; pp. 1–612.
46. Storch, F.; Dormann, C.F.; Bauhus, J. Quantifying Forest Structural Diversity Based on Large-Scale Inventory Data: A New Approach to Support Biodiversity Monitoring. For. Ecosyst. 2018, 5, 34.
Figure 1. GeoSLAM Zeb Horizon (left) and RIEGL VZ-600i (right) during fieldwork.
Figure 2. Screenshot from the GeoAce App (ITS Geo Solutions GmbH, Jena, Germany) during fieldwork. The green crosses mark the positions of marked trees, while the red dot marks the plot center. The positions of the surveyed trees were visualized in CloudCompare to avoid the clipping of trees smaller or larger than the predefined threshold.
Figure 3. Schematic illustration of the general workflow.
Figure 4. Quality measures for regeneration quantification as functions of the voxel resolution, class threshold, and cloth resolution. The different class thresholds and cloth resolutions are represented by different colors and line types, respectively.
Figure 5. Quality measures for regeneration quantification as functions of the thresholds for tree detection and segmentation. The different detection thresholds are represented by different colors, as described in the legend.
Figure 6. Comparison of deviations achieved with M1–M5 across all parameter combinations.
Figure 7. Comparison of deviations achieved with M1–M5 using optimized parameter combinations (grey) and deviations of the visual estimations (green); Op 1 to Op 3 denote the three different operators.
Figure 8. Comparison of regeneration coverages. The results from the visual estimates are depicted in grey and those from the best LiDAR-based methods (M2 and M5) in blue and green, respectively.
Figure 9. Plot-wise depiction of estimated and reference regeneration coverages. The visual estimates (grey bars) are averaged across the estimates of all three operators; the error bars indicate the highest and lowest individual estimates. The reference coverages are depicted in black, and the coverages derived from M2 and M5 in blue and green, respectively.
Figure 10. Depiction of Plot 7. (a) shows the point cloud of this plot. The red lines in (b,c) represent the outlines of the manually cropped reference tree crowns. The blue lines represent the outlines of the tree crowns as segmented with M2 (b) and M5 (c), respectively. The colors of the pixels and color scales in (b,c) represent height (in meters) and voxel density, respectively.
Table 1. Step-by-step workflow of methods M1–M5 with parameter settings. The parameter values listed for each step are those chosen after their optimization (see Section 2.4).
Step 1: Ground classification (lidR/classify_ground()). The point cloud, with grown trees already cropped out, is classified into ground and non-ground points. Relevant parameters: class threshold/cloth resolution. Settings: M1: 0.2 m/0.3 m; M2: 0.2 m/0.1 m; M3: 0.15 m/0.5 m; M4: 0.15 m/0.5 m; M5: 0.2 m/0.1 m.
Step 2: Point cloud normalization (lidR/normalize_height()). Based on the DTM, derived from the classified ground points, the point cloud is normalized. Applies to M1–M5; no relevant parameters.
Step 3: Voxelization (lidR/voxelize_points()). The normalized vegetation point cloud (without ground points) is converted into cubic voxels. Relevant parameter: voxel resolution. Settings: M1: 0.01 m; M2: 0.03 m; M3: 0.01 m; M4: 0.01 m; M5: 0.02 m.
Step 4: Rasterization (terra/rasterize()). The resulting vegetation voxel cloud is rasterized to obtain a 2D raster image. Depending on the applied method, different properties of the voxel cloud are used as input for the pixel values: M1: surface height; M2: voxel count; M3: surface height and voxel count; M4: surface height and voxel count; M5: voxel density.
Step 5: Maxima detection (ForestTools/vwf()). From the resulting raster images, local maxima, assumed to represent likely treetop positions, are detected. A minimum value for the maxima (top_min; height, voxel count, or voxel density, depending on the raster image) has to be chosen. Input raster and setting: M1: Surface Raster, 0.1; M2: Voxel Count Raster, 3 + 1/3; M3: Voxel Count Raster, 10; M4: Voxel Count Raster, 10; M5: Voxel Density Raster, 0.4.
Step 6: Crown segmentation (ForestTools/mcws()). Based on one of the raster images computed in step 4, tree crowns are segmented, starting from the treetops detected in step 5. A segmentation threshold (seg_min; height, voxel count, or voxel density, depending on the raster image), above which crown areas are segmented, has to be chosen. Input raster and setting: M1: Surface Raster, 0.1; M2: Voxel Count Raster, 3 + 1/3; M3: Surface Raster, 0.1; M4: Voxel Count Raster, 10; M5: Voxel Density Raster, 0.1.
Step 7: Maxima detection, repeated (ForestTools/vwf()). For method M4 only, the treetop detection is repeated, this time on the Surface Raster and only within the polygons of the crown areas segmented in step 6. Minimum value (top_min): 0.1.
Step 8: Crown segmentation, repeated (ForestTools/mcws()). A final crown segmentation for M4 is applied using the treetops detected in step 7, again on the Surface Raster. Segmentation threshold (seg_min): 0.1.
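To make the tabulated workflow easier to follow, the R sketch below strings together the package functions named in Table 1 for one variant (M1, the surface-raster method), using the optimized parameter values listed above. It is only a rough approximation under stated assumptions, not the authors' code: the input file name, the 0.25 m pixel size, the tin() interpolation, the variable-window function passed to vwf(), and the use of lidR::pixel_metrics() in place of terra::rasterize() are all illustrative choices.
```r
# Minimal sketch of the Table 1 workflow for method M1 (surface raster),
# assuming lidR >= 4.0 and a ForestTools version that accepts terra SpatRaster
# input. File name, pixel size, interpolation algorithm, and the variable
# window function are illustrative assumptions, not values from the study.
library(lidR)
library(ForestTools)

las <- readLAS("plot_regeneration_points.laz")  # hypothetical input file with
                                                # grown trees already cropped out

# Step 1: ground classification via cloth simulation (M1: 0.2 m / 0.3 m)
las <- classify_ground(las, csf(class_threshold = 0.2, cloth_resolution = 0.3))

# Step 2: height normalization (interpolation algorithm assumed, not from Table 1)
las <- normalize_height(las, tin())

# Step 3: voxelization of the vegetation points (M1: 0.01 m voxels)
veg <- filter_poi(las, Classification != 2L)    # drop ground points (class 2)
vox <- voxelize_points(veg, res = 0.01)

# Step 4: rasterization to a surface (maximum height) raster; Table 1 lists
# terra/rasterize(), here lidR::pixel_metrics() is used as a compact stand-in
surf <- pixel_metrics(vox, ~max(Z), res = 0.25) # 0.25 m pixel size assumed

# Step 5: treetop detection with a variable window filter (top_min = 0.1)
win  <- function(x) 0.5 + 0.05 * x              # assumed window-size function
tops <- vwf(surf, winFun = win, minHeight = 0.1)

# Step 6: marker-controlled watershed crown segmentation (seg_min = 0.1)
crowns <- mcws(treetops = tops, CHM = surf, minHeight = 0.1, format = "polygons")
```
The voxel-based variants (M2, M5) would differ only in step 4, where the pixel value is the number of filled voxels per column or the ratio of filled to total voxels instead of the surface height.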
Table 2. Comparison of studies with similar objectives based on direct object detection.
This study: platform: PLS; target variable: regeneration coverage; method: treetop detection and crown delineation on the Voxel Density Raster; size of detected trees: 0.1 m (height)–0.05 m (DBH); reference data: delineation of crowns from the TLS point cloud; MAE: 2.59 pp (M2) and 3.87 pp (M5).
[11]: platform: TLS; target variable: number of trees; method: reconstruction of stems from a voxelized point cloud by aggregation of stem fragments (deciduous trees); size of detected trees: 3–6 m (height); reference data: manual cropping of trees from the point cloud; tree detection accuracy: 87.9–90.2% and 85.8%.
[10]: platform: TLS; target variable: regeneration coverage; method: detection of tree stems, individual tree segmentation, and classification into established and unestablished regeneration based on DBH and height; reference data: visual estimation; MAE: 2.23 pp for trees of 0.5–1.3 m (height) and 8.08 pp for trees of 1.3 m (height)–0.12 m (DBH).
[4]: platform: ALS; target variable: regeneration coverage; method: mean shift clustering and normalized cut algorithm; size of detected trees: 1–5 m (height); reference data: visual estimation.
[33]: platform: UAV imagery; target variable: number of trees; method: treetop detection and crown delineation on a DSM with incorporation of spectral features to avoid misclassifications; size of detected trees: >0.2 m (height); reference data: trees counted manually in the field; tree detection accuracy: 24.1%.
[7]: platform: ALS; target variable: number of trees; method: segmentation of canopy layers with iterative vertical stratification; size of detected trees: min. 4 m (height); reference data: matching of crown apex with field-observed stem location; tree detection accuracy: 86%.
[32]: platform: ALS; target variable: number of trees; method: watershed-based delineation of the canopy height model; size of detected trees: min. 2 m (height); reference data: matching of delineated crowns with field-observed stem location; tree detection accuracy: 21%.
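The MAE values for this study are expressed in percentage points (pp) of regeneration coverage. As a reminder of how such a coverage value and its deviation from a reference can be derived from segmented crown polygons, a minimal sf-based sketch follows; the toy crown polygons, the projected CRS, and the 7 m circular plot are illustrative stand-ins rather than values taken from the study.
```r
# Minimal sketch: regeneration coverage (RC) from crown polygons and its
# deviation from a reference coverage, expressed in percentage points (pp).
# The toy crown polygons, the CRS, and the 7 m plot radius are assumptions.
library(sf)

plot_center <- st_sfc(st_point(c(0, 0)), crs = 31287)  # assumed projected CRS
plot_area   <- st_buffer(plot_center, dist = 7)        # assumed 7 m plot radius

# toy stand-ins for segmented (PLS) and manually delineated (TLS) crown polygons
crowns_pls <- st_buffer(st_sfc(st_point(c(1, 1)), st_point(c(-2, 3)), crs = 31287), 1.0)
crowns_tls <- st_buffer(st_sfc(st_point(c(1, 1)), st_point(c(-2, 3)), crs = 31287), 1.2)

coverage_pct <- function(crowns, plot_geom) {
  # dissolve overlapping crowns, clip them to the plot, express as % of plot area
  covered <- st_intersection(st_union(crowns), plot_geom)
  if (length(covered) == 0) return(0)
  100 * as.numeric(st_area(covered)) / as.numeric(st_area(plot_geom))
}

rc_est <- coverage_pct(crowns_pls, plot_area)  # LiDAR-based estimate
rc_ref <- coverage_pct(crowns_tls, plot_area)  # reference coverage
rc_est - rc_ref                                # deviation in percentage points
```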
