Search Results (5,293)

Search Parameters:
Keywords = point cloud

15 pages, 2842 KiB  
Article
Incremental SFM 3D Reconstruction Based on Deep Learning
by Lei Liu, Congzheng Wang, Chuncheng Feng, Wanqi Gong, Lingyi Zhang, Libin Liao and Chang Feng
Electronics 2024, 13(14), 2850; https://doi.org/10.3390/electronics13142850 (registering DOI) - 19 Jul 2024
Abstract
In recent years, with the rapid development of unmanned aerial vehicle (UAV) technology, multi-view 3D reconstruction has once again become a hot spot in computer vision. Incremental Structure From Motion (SFM) is currently the most prevalent reconstruction pipeline, but it still faces challenges in reconstruction efficiency, accuracy, and feature matching. In this paper, we use deep learning algorithms for feature matching to obtain more accurate matching point pairs. Moreover, we adopt an improved Gauss–Newton (GN) method, which not only avoids numerical divergence but also accelerates bundle adjustment (BA). Then, the sparse point cloud reconstructed by SFM and the original images are used as the input of a depth estimation network to predict the depth map of each image. Finally, the depth maps are fused to complete the reconstruction of dense point clouds. Experimental verification shows that the reconstructed dense point clouds have rich details and clear textures, and that the integrity, overall accuracy, and reconstruction efficiency of the point clouds are improved. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Deep Learning and Its Applications)
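The abstract does not spell out the "improved Gauss–Newton" step, but its stated goals (avoiding numerical divergence while accelerating bundle adjustment) are what a damped, Levenberg-style Gauss–Newton update provides. A minimal sketch on a toy least-squares problem; the model, data, and damping constant are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def damped_gauss_newton(x, y, p0, lam=1e-3, iters=50):
    """Fit y ~ a*exp(b*x) by damped Gauss-Newton.

    The damping term lam*I keeps the normal matrix well conditioned,
    which is what prevents the divergence plain GN can suffer.
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        a, b = p
        r = y - a * np.exp(b * x)                    # residual vector
        # Jacobian of the residual with respect to (a, b)
        J = np.column_stack([-np.exp(b * x), -a * x * np.exp(b * x)])
        H = J.T @ J + lam * np.eye(2)                # damped normal matrix
        p = p - np.linalg.solve(H, J.T @ r)          # GN update step
    return p

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(0.5 * x)                            # noiseless synthetic data
a_hat, b_hat = damped_gauss_newton(x, y, p0=[1.0, 0.0])
```

In real bundle adjustment the parameter vector holds camera poses and 3-D points and the Jacobian is large and sparse, but the update has the same shape.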

22 pages, 13840 KiB  
Article
Tree Canopy Volume Extraction Fusing ALS and TLS Based on Improved PointNeXt
by Hao Sun, Qiaolin Ye, Qiao Chen, Liyong Fu, Zhongqi Xu and Chunhua Hu
Remote Sens. 2024, 16(14), 2641; https://doi.org/10.3390/rs16142641 (registering DOI) - 19 Jul 2024
Abstract
Canopy volume is a crucial biological parameter for assessing tree growth, accurately estimating forest Above-Ground Biomass (AGB), and evaluating ecosystem stability. Airborne Laser Scanning (ALS) and Terrestrial Laser Scanning (TLS) are advanced precision mapping technologies that capture highly accurate point clouds for forest digitization studies. Despite advances in calculating canopy volume, challenges remain in accurately extracting the canopy and removing gaps. This study proposes a canopy volume extraction method based on an improved PointNeXt model, fusing ALS and TLS point cloud data. In this work, the improved PointNeXt is first utilized to extract the canopy, enhancing extraction accuracy and mitigating under-segmentation and over-segmentation issues. To effectively calculate canopy volume, the canopy is divided into multiple levels, each projected onto the xOy plane. Then, an improved Mean Shift algorithm, combined with a KdTree, is employed to remove gaps and obtain the parts of the real canopy. Subsequently, a convex hull algorithm is utilized to calculate the area of each part, and the sum of the areas of all parts multiplied by their heights yields the canopy volume. The proposed method’s performance is tested on a dataset comprising poplar, willow, and cherry trees. The improved PointNeXt model achieves a mean intersection over union (mIoU) of 98.19% on the test set, outperforming the original PointNeXt by 1%. Regarding canopy volume, the algorithm’s Root Mean Square Error (RMSE) is 0.18 m3, and predicted canopy volumes correlate strongly with the reference measurements, with an R-Square (R2) value of 0.92. Therefore, the proposed method effectively and efficiently acquires canopy volume, providing a stable and accurate technical reference for forest biomass statistics. Full article
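The layered projection-and-hull computation described above can be sketched directly; the layer count and synthetic data are illustrative, and the paper's Mean Shift/KdTree gap-removal step is omitted here:

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_volume(points, n_layers=5):
    """Slice a canopy point cloud into horizontal levels, project each
    level onto the xOy plane, and sum convex-hull area times layer height."""
    z = points[:, 2]
    z_min, z_max = z.min(), z.max()
    dz = (z_max - z_min) / n_layers
    volume = 0.0
    for i in range(n_layers):
        lo = z_min + i * dz
        layer = points[(z >= lo) & (z <= lo + dz)][:, :2]   # project to xOy
        if len(layer) >= 3:                # a 2-D hull needs >= 3 points
            volume += ConvexHull(layer).volume * dz   # .volume is 2-D area
    return volume

# Synthetic check: unit-square cross-sections stacked over unit height
pts = np.array([[xc, yc, k / 4.0]
                for k in range(5)
                for xc, yc in [(0, 0), (0, 1), (1, 0), (1, 1)]])
vol = canopy_volume(pts)
```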

29 pages, 7422 KiB  
Article
Continuous Online Semantic Implicit Representation for Autonomous Ground Robot Navigation in Unstructured Environments
by Quentin Serdel, Julien Marzat and Julien Moras
Robotics 2024, 13(7), 108; https://doi.org/10.3390/robotics13070108 - 18 Jul 2024
Viewed by 48
Abstract
While mobile ground robots now have the physical capacity to travel in unstructured, challenging environments such as extraterrestrial surfaces or devastated terrains, their safe and efficient autonomous navigation has yet to be improved before entrusting them with complex unsupervised missions in such conditions. Recent advances in machine learning applied to semantic scene understanding and environment representations, coupled with modern embedded computational means and sensors, hold promising potential in this matter. This paper therefore introduces the combination of semantic understanding, continuous implicit environment representation and smooth informed path-planning in a new method named COSMAu-Nav. It is specifically dedicated to autonomous ground robot navigation in unstructured environments and adaptable for embedded, real-time usage without requiring any form of telecommunication. Data clustering and Gaussian processes are employed to perform online regression of the environment topography, occupancy and terrain traversability from 3D semantic point clouds while providing uncertainty modeling. The continuous and differentiable properties of Gaussian processes allow gradient-based optimisation to be used for smooth local path-planning with respect to the terrain properties. The proposed pipeline has been evaluated and compared with two reference 3D semantic mapping methods in terms of quality of representation under localisation and semantic segmentation uncertainty using a Gazebo simulation derived from the 3DRMS dataset. Its computational requirements have been evaluated using the Rellis-3D real-world dataset. It has been implemented on a real ground robot and successfully employed for its autonomous navigation in a previously unknown outdoor environment. Full article
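Gaussian-process regression of terrain height with an uncertainty estimate, as used above, fits in a few lines; the squared-exponential kernel, length scale, and noise level here are generic assumptions rather than the authors' choices:

```python
import numpy as np

def gp_regress(X, y, Xq, length_scale=1.0, noise=1e-4):
    """Predict terrain height (mean) and variance at query points Xq
    from observed (x, y) -> z samples, using an RBF-kernel GP."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)

    K = kernel(X, X) + noise * np.eye(len(X))    # regularised Gram matrix
    Ks = kernel(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)            # posterior mean
    # Posterior variance (diagonal): k(x*, x*) - k* K^-1 k*^T
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# 3x3 grid of height samples; query back at a training location
X = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
mean, var = gp_regress(X, y, X[:1])
```

Because the RBF kernel is smooth, the posterior mean is differentiable, which is what enables the gradient-based local path-planning the paper describes.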

14 pages, 4193 KiB  
Article
Latent Space Representations for Marker-Less Realtime Hand–Eye Calibration
by Juan Camilo Martínez-Franco, Ariel Rojas-Álvarez, Alejandra Tabares, David Álvarez-Martínez and César Augusto Marín-Moreno
Sensors 2024, 24(14), 4662; https://doi.org/10.3390/s24144662 - 18 Jul 2024
Viewed by 141
Abstract
Marker-less hand–eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computation requirements, present difficulties for this purpose due to their incomplete correspondence of projected coordinates. In this work, we introduce a hand–eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects such as the links of robotic manipulators perform poorly in the orientation domain, but this can be overcome through the analysis of the latent space vectors constructed in the autoencoding process. This technique is computationally inexpensive and can be run in real time in markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and the captured point clouds to measure translation and orientation errors and compare the results to a baseline based on traditional checkerboard markers. Full article
(This article belongs to the Section Sensors and Robotics)
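The evaluation registers the predicted point cloud against the captured one to measure translation and orientation errors. The standard closed-form tool for such a rigid alignment, once correspondences are fixed, is the SVD-based Kabsch solution; a sketch with synthetic correspondences (the test rotation and translation are made up):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimising ||R @ p_i + t - q_i|| over
    corresponding rows of P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(0)
P = rng.random((20, 3))
angle = np.pi / 6                                # assumed 30-degree rotation
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
R_est, t_est = kabsch(P, Q)
```

The orientation error between two rotations can then be read off as `arccos((trace(R_est @ R_true.T) - 1) / 2)`.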

24 pages, 11966 KiB  
Article
Evaluation of Denoising and Voxelization Algorithms on 3D Point Clouds
by Sara Gonizzi Barsanti, Marco Raoul Marini, Saverio Giulio Malatesta and Adriana Rossi
Remote Sens. 2024, 16(14), 2632; https://doi.org/10.3390/rs16142632 - 18 Jul 2024
Viewed by 119
Abstract
Proper documentation is fundamental to providing structural health monitoring, damage identification and failure assessment for Cultural Heritage (CH). Three-dimensional models from photogrammetric and laser scanning surveys usually provide 3D point clouds that can be converted into meshes. The point clouds usually contain noise due to different causes: non-cooperative materials or surfaces, bad lighting, complex geometry and low accuracy of the instruments utilized. Point cloud denoising has become one of the hot topics of 3D geometric data processing: noise is removed to recover the ground-truth point cloud and smooth the ideal surface. These cleaned point clouds can be converted into volumes with different algorithms, suitable for different uses, mainly structural analysis. This paper analyses the geometric accuracy of algorithms available for the conversion of 3D point clouds into volumetric models that can be used for structural analyses through the FEA process. The process is evaluated, highlighting the problems and difficulties that lead to poor reconstruction of volumes from denoised point clouds due to the geometric complexity of the objects. Full article
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud II)
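A minimal voxelization step of the kind evaluated above, bucketing points into a regular grid and replacing each occupied voxel by the centroid of its points, can be written directly; the voxel size is an arbitrary example value:

```python
import numpy as np
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points falling in one voxel by their centroid."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))   # voxel index
        buckets[key].append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],     # same voxel as the first point
                [1.1, 1.1, 1.1]])    # its own voxel
out = voxel_downsample(pts, voxel_size=1.0)
```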

20 pages, 39702 KiB  
Article
Spatial Information Enhancement with Multi-Scale Feature Aggregation for Long-Range Object and Small Reflective Area Object Detection from Point Cloud
by Hanwen Li, Huamin Tao, Qiuqun Deng, Shanzhu Xiao and Jianxiong Zhou
Remote Sens. 2024, 16(14), 2631; https://doi.org/10.3390/rs16142631 - 18 Jul 2024
Viewed by 113
Abstract
Accurate and comprehensive 3D object detection is important for perception systems in autonomous driving. Nevertheless, contemporary mainstream methods tend to perform more effectively on large objects in regions close to the LiDAR, leaving long-range objects and small objects relatively unexplored. The divergent point pattern of LiDAR, in which point density falls as distance increases, leads to a non-uniform point distribution that is ill-suited to discretized volumetric feature extraction. To address this challenge, we propose the Foreground Voxel Proposal (FVP) module, which effectively locates and generates voxels at the foreground of objects. The outputs are subsequently merged to mitigate the difference in point cloud density and complete the object shape. Furthermore, the susceptibility of small objects to occlusion results in the loss of feature space. To overcome this, we propose the Multi-Scale Feature Integration Network (MsFIN), which captures contextual information at different ranges. The outputs of these features are then integrated through a transformer-based cascade framework to supplement the object feature space. Extensive experiments demonstrate that our network achieves remarkable results: our approach improves on the SECOND baseline by 8.56% AP for the Car detection task at distances of more than 20 m and by 9.38% AP for the Cyclist detection task. Full article

15 pages, 12837 KiB  
Article
Prediction of Grazing Incidence Focusing Mirror Imaging Quality Based on Accurate Modelling of the Surface Shape Accuracy for the Whole Assembly Process
by Erbo Li, Zhijing Zhang, Chaojiang Li, Fuchang Zuo, Zhiwu Mei and Taiyu Su
Appl. Sci. 2024, 14(14), 6242; https://doi.org/10.3390/app14146242 - 18 Jul 2024
Viewed by 211
Abstract
The key indicator of a grazing incidence focusing mirror’s imaging quality is its angular resolution, which is significantly influenced by its surface shape distribution error. In this paper, we propose a method for predicting grazing incidence focusing mirror imaging quality based on accurate modelling of the surface shape accuracy for the whole assembly process. Firstly, the three-dimensional surface shape distribution error of the inner surface of the focusing mirror is reconstructed from measured point cloud data, and the changes in the surface shape induced by suspension gravity and the adhesive curing shrinkage force are obtained through simulation. An accurate geometric digital twin model based on the characterisation of its surface shape accuracy is then established. Finally, a study on the quantitative prediction of the angular resolution of its imaging quality is performed. The results show that the surface shape error before assembly has the greatest influence on the imaging quality; the difference in angular resolution between the two suspension methods under the influence of gravity is approximately 2.1″, and the angular resolution decreases by about 4.2″ due to adhesive curing. This method can provide effective support for the prediction of the imaging quality of grazing incidence focusing mirrors. Full article

14 pages, 2596 KiB  
Article
Occurrence of Wetness on the Fruit Surface Modeled Using Spatio-Temporal Temperature Data from Sweet Cherry Tree Canopies
by Nicolas Tapia-Zapata, Andreas Winkler and Manuela Zude-Sasse
Horticulturae 2024, 10(7), 757; https://doi.org/10.3390/horticulturae10070757 - 17 Jul 2024
Viewed by 304
Abstract
Typically, fruit cracking in sweet cherry is associated with the occurrence of free water at the fruit surface level due to direct (rain and fog) and indirect (cold exposure and dew) mechanisms. Recent advances in close range remote sensing have enabled the monitoring of the temperature distribution with high spatial resolution based on light detection and ranging (LiDAR) and thermal imaging. The fusion of LiDAR-derived geometric 3D point clouds and merged thermal data provides spatially resolved temperature data at the fruit level as LiDAR 4D point clouds. This paper aimed to investigate the thermal behavior of sweet cherry canopies using this new method with emphasis on the surface temperature of fruit around the dew point. Sweet cherry trees were stored in a cold chamber (6 °C) and subsequently scanned at different time intervals at room temperature. A total of 62 sweet cherry LiDAR 4D point clouds were identified. The estimated temperature distribution was validated by means of manual reference readings (n = 40), where average R2 values of 0.70 and 0.94 were found for ideal and real scenarios, respectively. The canopy density was estimated using the ratio of the number of LiDAR points of fruit related to the canopy. The occurrence of wetness on the surface of sweet cherry was visually assessed and compared to an estimated dew point (Ydew) index. At mean Ydew of 1.17, no wetness was observed on the fruit surface. The canopy density ratio had a marginal impact on the thermal kinetics and the occurrence of wetness on the surface of sweet cherry in the slender spindle tree architecture. The modelling of fruit surface wetness based on estimated fruit temperature distribution can support ecophysiological studies on tree architectures considering resilience against climate change and in studies on physiological disorders of fruit. Full article
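The paper's Ydew index is defined in the full text, not in the abstract, but the underlying physics, wetness forming once the fruit surface cools to or below the dew point of the surrounding air, can be illustrated with the standard Magnus approximation (the coefficients are the common a = 17.62, b = 243.12 °C pair; treat the thresholding as an assumption, not the paper's exact criterion):

```python
import math

def dew_point_c(t_air_c, rel_humidity_pct):
    """Dew point (deg C) from air temperature and relative humidity,
    via the Magnus approximation."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * t_air_c / (b + t_air_c)
    return b * gamma / (a - gamma)

def surface_wet(t_surface_c, t_air_c, rel_humidity_pct):
    """Condensation is expected once the surface is at or below dew point."""
    return t_surface_c <= dew_point_c(t_air_c, rel_humidity_pct)

# A fruit at 5 degC moved from cold storage into 20 degC / 50% RH room air
wet = surface_wet(5.0, 20.0, 50.0)
```

This mirrors the experiment's setup: cherries stored at 6 °C and scanned at room temperature, where surfaces below the room air's dew point collect condensation.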

22 pages, 913 KiB  
Review
A Comparative Literature Review of Machine Learning and Image Processing Techniques Used for Scaling and Grading of Wood Logs
by Yohann Jacob Sandvik, Cecilia Marie Futsæther, Kristian Hovde Liland and Oliver Tomic
Forests 2024, 15(7), 1243; https://doi.org/10.3390/f15071243 - 17 Jul 2024
Viewed by 269
Abstract
This literature review assesses the efficacy of image-processing techniques and machine-learning models in computer vision for wood log grading and scaling. Four searches were conducted in four scientific databases, yielding a total of 1288 results, which were narrowed down to 33 relevant studies. The studies were categorized according to their goals, including log end grading, log side grading, individual log scaling, log pile scaling, and log segmentation, and were compared based on the input used, choice of model, model performance, and level of autonomy. This review found a preference for images over point cloud representations for logs, and cameras were found to have become more popular over time than laser scanners, possibly because stereovision cameras have taken over from laser scanners for sampling point cloud datasets. It identified three primary model types: classical image-processing algorithms, deep learning models, and other machine learning models. However, comparing performance across studies proved challenging due to varying goals and metrics. Deep learning models showed better performance in the log pile scaling and log segmentation goal categories. Classical image-processing algorithms were used consistently throughout the review period, deep learning models gained prominence in 2018, and other machine learning models appeared in studies published between 2010 and 2018. Full article

23 pages, 24773 KiB  
Article
Design and Experiment of Ordinary Tea Profiling Harvesting Device Based on Light Detection and Ranging Perception
by Xiaolong Huan, Min Wu, Xianbing Bian, Jiangming Jia, Chenchen Kang, Chuanyu Wu, Runmao Zhao and Jianneng Chen
Agriculture 2024, 14(7), 1147; https://doi.org/10.3390/agriculture14071147 - 15 Jul 2024
Viewed by 256
Abstract
Due to the complex shape of the tea tree canopy and the large undulation of a tea garden terrain, the quality of fresh tea leaves harvested by existing tea harvesting machines is poor. This study proposed a tea canopy surface profiling method based on 2D LiDAR perception and investigated the extraction and fitting methods of canopy point clouds. Meanwhile, a tea profiling harvester prototype was developed and field tests were conducted. The tea profiling harvesting device adopted a scheme of sectional arrangement of multiple groups of profiling tea harvesting units, and each unit sensed the height information of its own bottom canopy area through 2D LiDAR. A cross-platform communication network was established, enabling point cloud fitting of tea plant surfaces and accurate estimation of cutter profiling height through the RANSAC algorithm. Additionally, a sensing control system with multiple execution units was developed using rapid control prototype technology. The results of field tests showed that the bud leaf integrity rate was 84.64%, the impurity rate was 5.94%, the missing collection rate was 0.30%, and the missing harvesting rate was 0.68%. Furthermore, 89.57% of the harvested tea could be processed into commercial tea, with 88.34% consisting of young tea shoots with one bud and three leaves or fewer. All of these results demonstrated that the proposed device effectively meets the technical standards for machine-harvested tea and the requirements of standard tea processing techniques. Moreover, compared to other commercial tea harvesters, the proposed tea profiling harvesting device demonstrated improved performance in harvesting fresh tea leaves. Full article
(This article belongs to the Special Issue Sensor-Based Precision Agriculture)
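The RANSAC-based fitting used to estimate each unit's profiling height can be sketched with a plane model; the paper fits the tea canopy surface from 2D LiDAR profiles, so the plane model, iteration count, and inlier threshold here are illustrative assumptions:

```python
import numpy as np

def ransac_plane(points, n_iter=200, thresh=0.05, seed=0):
    """Fit a plane n . p + d = 0 to points, tolerating outliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

rng = np.random.default_rng(1)
canopy = np.column_stack([rng.random((100, 2)) * 2, np.zeros(100)])   # z = 0
stray = np.column_stack([rng.random((10, 2)) * 2, 1 + rng.random(10)])
(n_est, d_est), inliers = ransac_plane(np.vstack([canopy, stray]))
```

The fitted surface height then drives the profiling height of the corresponding cutter unit.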

21 pages, 11155 KiB  
Article
Integrating NoSQL, Hilbert Curve, and R*-Tree to Efficiently Manage Mobile LiDAR Point Cloud Data
by Yuqi Yang, Xiaoqing Zuo, Kang Zhao and Yongfa Li
ISPRS Int. J. Geo-Inf. 2024, 13(7), 253; https://doi.org/10.3390/ijgi13070253 - 14 Jul 2024
Viewed by 302
Abstract
The widespread use of Light Detection and Ranging (LiDAR) technology has led to a surge in three-dimensional point cloud data, although it also poses challenges in terms of data storage and indexing. Efficient storage and management of LiDAR data are prerequisites for data processing and analysis in various LiDAR-based scientific applications. Traditional relational database management systems and centralized file storage struggle to meet the storage, scaling, and specific query requirements of massive point cloud data. NoSQL databases, however, known for their scalability, speed, and cost-effectiveness, provide a viable solution. In this study, a 3D point cloud indexing strategy for mobile LiDAR point cloud data that integrates Hilbert curves, R*-trees, and B+-trees is proposed to support MongoDB-based point cloud storage and querying through the following steps: (1) partitioning the point cloud using an adaptive space partitioning strategy to improve I/O efficiency and ensure data locality; (2) encoding partitions using Hilbert curves to construct a global index; (3) constructing local indexes (R*-trees) for each point cloud partition so that MongoDB can natively support indexing of point cloud data; and (4) designing a MongoDB-oriented storage structure based on this hierarchical indexing structure. We evaluated the efficacy of chunked point cloud storage with MongoDB for spatial querying and found that, compared to many mainstream point cloud indexing strategies and database systems, the proposed strategy provides faster data encoding, index construction and retrieval, and a more scalable storage structure to support efficient point cloud spatial query processing. Full article
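Step (2), encoding spatial partitions with a Hilbert curve so that nearby cells receive nearby index values, uses the classic bit-manipulation mapping; a 2-D version is shown here for clarity (the paper works with 3-D mobile LiDAR partitions, and the MongoDB-specific storage layout is omitted):

```python
def hilbert_index(order, x, y):
    """Map grid cell (x, y) on a 2^order x 2^order grid to its distance
    along the Hilbert curve (classic xy-to-d bit transformation)."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s >>= 1
    return d

# A first-order curve visits (0,0), (0,1), (1,1), (1,0) in that order
trace = [hilbert_index(1, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]]
```

A partition's document key (or B+-tree key) can then be derived from this index, so that range scans touch spatially coherent partitions.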

11 pages, 13628 KiB  
Communication
A Semi-Automatic Approach for Tree Crown Competition Indices Assessment from UAV LiDAR
by Nicola Puletti, Matteo Guasti, Simone Innocenti, Lorenzo Cesaretti and Ugo Chiavetta
Remote Sens. 2024, 16(14), 2576; https://doi.org/10.3390/rs16142576 - 13 Jul 2024
Viewed by 380
Abstract
Understanding the spatial heterogeneity of forest structure is crucial for comprehending ecosystem dynamics and promoting sustainable forest management. Unmanned aerial vehicle (UAV) LiDAR technology provides a promising method to capture detailed three-dimensional (3D) information about forest canopies, aiding in management and silvicultural practices. This study investigates the heterogeneity of forest structure in broadleaf forests using UAV LiDAR data, with a particular focus on tree crown features and the information they add beyond stem diameters. We explored a non-conventional method that emphasizes crown competition by employing a nearest neighbor selection technique based on metrics derived from UAV point cloud profiles at the tree level, rather than the traditional DBH (diameter at breast height) spatial arrangement. About 300 vegetation elements within 10 plots collected in a managed beech forest were used as reference data. We demonstrate that crown-based approaches, which are feasible with UAV LiDAR data at a reasonable cost and time, significantly enhance the understanding of forest heterogeneity, adding new information content for managers. Our findings underscore the utility of UAV LiDAR in characterizing the complexity and variability of forest structure at high resolution, offering valuable insights for carbon accounting and sustainable forest management. Full article
(This article belongs to the Special Issue Novel Applications of UAV Imagery for Forest Science)

15 pages, 5365 KiB  
Article
Extraction of Arbors from Terrestrial Laser Scanning Data Based on Trunk Axis Fitting
by Song Liu, Yuncheng Deng, Jianpeng Zhang, Jinliang Wang and Di Duan
Forests 2024, 15(7), 1217; https://doi.org/10.3390/f15071217 - 13 Jul 2024
Viewed by 349
Abstract
Accurate arbor extraction is an important element of forest surveys. However, the presence of shrubs can interfere with the extraction of arbors. Addressing the low accuracy and weak generalizability of existing Terrestrial Laser Scanning (TLS) arbor point cloud extraction methods, this study proposes a trunk axis fitting (TAF) method for arbor extraction. After separating the point cloud data into upper and lower parts, slicing, clustering, fitting circles, obtaining the main central axis, and filtering by distance, the canopy point clouds are merged with the extracted trunk point clouds to precisely separate arbors from shrubs. The advantage of the proposed TAF method is that it is not affected by point cloud density or the degree of trunk curvature. This study focuses on a natural forest plot in Shangri-La City, Yunnan Province, and a plantation plot in Kunming City, using manually extracted data from a standardized sample dataset to test the accuracy of the TAF method and validate its feasibility. The results showed that the TAF method has high extraction accuracy and can effectively avoid the loss of trunk points caused by growth curvature. The experimental accuracy for both plots reached over 99%. This study can provide technical support for arbor parameter extraction and scientific guidance for forest resource investigation and forest management decision-making. Full article
(This article belongs to the Special Issue Airborne and Terrestrial Laser Scanning in Forests)
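The slice-and-fit-circles stage of TAF can be illustrated with the standard algebraic (Kåsa) circle fit, which recovers each slice's centre for the subsequent axis fit; the synthetic slice below is an illustration, not the paper's data:

```python
import math
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2-D slice points.

    Expands (x-cx)^2 + (y-cy)^2 = r^2 into the linear system
    2*cx*x + 2*cy*y + c = x^2 + y^2  with  c = r^2 - cx^2 - cy^2.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = math.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Points sampled on a trunk slice of radius 0.5 centred at (1, 2)
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
slice_pts = np.column_stack([1 + 0.5 * np.cos(theta),
                             2 + 0.5 * np.sin(theta)])
cx, cy, r = fit_circle(slice_pts)
```

Fitting one circle per slice and regressing a line through the centres gives the trunk's central axis.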

21 pages, 15971 KiB  
Article
Low-Overlap Bullet Point Cloud Registration Algorithm Based on Line Feature Detection
by Qiwen Zhang, Zhiya Mu, Xin He, Zhonghui Wei, Ruidong Hao, Yi Liao and Hongyang Wang
Appl. Sci. 2024, 14(14), 6105; https://doi.org/10.3390/app14146105 - 12 Jul 2024
Viewed by 381
Abstract
A bullet point cloud registration algorithm with a low overlap rate based on line feature detection is proposed to solve the difficulty and low efficiency of registering point clouds sampled from the bullet model with little overlap. In this paper, voxel downsampling is used to remove some noise points and outliers from the bullet point cloud and to resample it to a specified resolution, reducing the computational cost. The bullet point cloud is transformed to a better initial position by fitting the central axis using the geometric features of the bullet. Then, the direction vector of the bullet’s linear features is obtained by using an icosahedral fitting discrete Hough transform, simplifying the parameter space of the search transformation. Finally, the optimal rotation angle is searched for in the parameter space using an improved Cuckoo algorithm to register the bullet point cloud with a low overlap rate. Simulation and experimental results show that the proposed method can accurately register bullet point clouds of different densities with low overlap rates. Compared with the commonly used ICP, GICP, and TRICP algorithms, the registration error of the proposed algorithm is reduced by 92.68% on average at an overlap rate of 52.85%, by 98.87% at 41.36%, by 99.52% at 33.02%, and by 98.89% at 22.75%. Full article
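The initial-alignment step, fitting the bullet's central axis from its geometry, is commonly approximated by the point cloud's principal (largest-variance) direction; this PCA sketch stands in for the paper's axis-fitting procedure, which the abstract does not detail:

```python
import numpy as np

def principal_axis(points):
    """Centroid and dominant direction of a point cloud via PCA."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues ascending
    return centroid, eigvecs[:, -1]           # largest-variance direction

# Elongated synthetic cloud along (1, 1, 0)/sqrt(2) with small wobble
t = np.linspace(0.0, 10.0, 200)
direction = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
wobble = 0.05 * np.column_stack([np.sin(7 * t), -np.sin(7 * t), np.cos(7 * t)])
cloud = t[:, None] * direction + wobble
centroid, axis = principal_axis(cloud)
```

Aligning this axis between the two scans yields the better initial position referred to above, before the Hough and Cuckoo-search refinement.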

21 pages, 15760 KiB  
Article
Deep Learning-Based Digital Surface Model Reconstruction of ZY-3 Satellite Imagery
by Yanbin Zhao, Yang Liu, Shuang Gao, Guohua Liu, Zhiqiang Wan and Denghui Hu
Remote Sens. 2024, 16(14), 2567; https://doi.org/10.3390/rs16142567 - 12 Jul 2024
Viewed by 340
Abstract
This study introduces a novel satellite image digital surface model (DSM) reconstruction framework grounded in deep learning methodology. The proposed framework effectively utilizes a rational polynomial camera (RPC) model to establish the mapping relationship between image coordinates and geographic coordinates. Given the expansive coverage and abundant ground object data inherent in satellite images, we designed a lightweight deep network model. This model facilitates both coarse and fine estimation of a height map through two distinct stages. Our approach harnesses shallow and deep image information via a feature extraction module, subsequently employing RPC Warping to construct feature volumes for various angles. We employ variance as a similarity metric to achieve image matching and derive the fused cost volume. Following this, we aggregate cost information across different scales and height directions using a regularization module. This process yields the confidence level of the current height plane, which is then regressed to predict the height map. Once the height map from stage 1 is obtained, we gauge the prediction’s uncertainty based on the variance in the probability distribution in the height direction. This allows us to adjust the height estimation range according to this uncertainty, thereby enabling precise height value prediction in stage 2. After conducting geometric consistency detection filtering of fine height maps from diverse viewpoints, we generate 3D point clouds through the inverse projection of RPC models. Finally, we resample these 3D point clouds to produce high-precision DSM products. By analyzing the results of our method’s height map predictions and comparing them with existing deep learning-based reconstruction methods, we assess the DSM reconstruction performance of our proposed framework. 
The experimental findings underscore the robustness of our method against discontinuous regions, occlusions, uneven illumination areas in satellite imagery, and weak texture regions during height map generation. Furthermore, the reconstructed digital surface model (DSM) surpasses existing solutions in terms of completeness and root mean square error metrics while concurrently reducing the model parameters by 42.93%. This optimization markedly diminishes memory usage, thereby conserving both software and hardware resources as well as system overhead. Such savings pave the way for a more efficient system design and development process. Full article
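The variance similarity metric used to fuse the per-view feature volumes has a compact form: warped features that agree across views produce low variance, marking a likely correct height hypothesis. A sketch with random stand-in feature volumes (the shapes are illustrative; the real volumes come from RPC warping of CNN features):

```python
import numpy as np

def variance_cost_volume(feature_volumes):
    """Fuse per-view feature volumes of shape (V, C, D, H, W) into one
    cost volume (D, H, W): variance across views, averaged over channels."""
    return feature_volumes.var(axis=0).mean(axis=0)

rng = np.random.default_rng(0)
V, C, D, H, W = 4, 8, 6, 5, 5                 # views, channels, heights, h, w
feats = rng.standard_normal((V, C, D, H, W))
feats[:, :, 2] = 1.0          # all views agree perfectly at height plane 2
cost = variance_cost_volume(feats)
```

The height plane with the lowest cost at each pixel is the one the regularisation module turns into a confidence and, finally, a height estimate.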
