Search Results (226)

Search Parameters:
Keywords = LiDAR-SLAM

20 pages, 7483 KiB  
Article
An Enhanced LiDAR-Based SLAM Framework: Improving NDT Odometry with Efficient Feature Extraction and Loop Closure Detection
by Yan Ren, Zhendong Shen, Wanquan Liu and Xinyu Chen
Processes 2025, 13(1), 272; https://doi.org/10.3390/pr13010272 (registering DOI) - 19 Jan 2025
Viewed by 300
Abstract
Simultaneous localization and mapping (SLAM) is crucial for autonomous driving, drone navigation, and robot localization, relying on efficient point cloud registration and loop closure detection. Traditional Normal Distributions Transform (NDT) odometry frameworks provide robust solutions but struggle with real-time performance due to the high computational complexity of processing large-scale point clouds. This paper introduces an improved NDT-based LiDAR odometry framework to address these challenges. The proposed method enhances computational efficiency and registration accuracy by introducing a unified feature point cloud framework that integrates planar and edge features, enabling more accurate and efficient inter-frame matching. To further improve loop closure detection, a parallel hybrid approach combining Radius Search and Scan Context is developed, which significantly enhances robustness and accuracy. Additionally, feature-based point cloud registration is seamlessly integrated with full cloud mapping in global optimization, ensuring high-precision pose estimation and detailed environmental reconstruction. Experiments on both public datasets and real-world environments validate the effectiveness of the proposed framework. Compared with traditional NDT, our method improves trajectory estimation accuracy by 35.59% with loop closure detection and by over 35% without it. The average registration time is reduced by 66.7%, memory usage is decreased by 23.16%, and CPU usage drops by 19.25%. These results surpass those of existing SLAM systems, such as LOAM. The proposed method demonstrates superior robustness, enabling reliable pose estimation and map construction in dynamic, complex settings. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
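The core of the NDT registration referenced above is a voxelized Gaussian model of the point cloud. The snippet below is a minimal NumPy sketch of that voxel model only (voxel size and the minimum-point threshold are illustrative assumptions, not the authors' settings); the paper's full framework additionally performs feature extraction, matching, and loop closure.

```python
# Minimal sketch of the normal-distributions voxel model underlying NDT registration.
import numpy as np

def build_ndt_voxels(points: np.ndarray, voxel_size: float = 1.0):
    """Group points into voxels and fit a Gaussian (mean, covariance) per voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)
    model = {}
    for key, pts in voxels.items():
        pts = np.asarray(pts)
        if len(pts) < 5:                          # too few points for a stable covariance
            continue
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)    # regularize degenerate (planar) voxels
        model[key] = (mu, cov)
    return model

# toy cloud: 10,000 random points in a 20 m cube, 2 m voxels
cloud = np.random.rand(10000, 3) * 20.0
ndt_model = build_ndt_voxels(cloud, voxel_size=2.0)
print(len(ndt_model), "voxels with fitted Gaussians")
```
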
13 pages, 3742 KiB  
Article
NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest
by Adam Korycki, Cory Yeaton, Gregory S. Gilbert, Colleen Josephson and Steve McGuire
Forests 2025, 16(1), 173; https://doi.org/10.3390/f16010173 - 17 Jan 2025
Viewed by 227
Abstract
Forest mapping provides critical observational data needed to understand the dynamics of forest environments. Notably, tree diameter at breast height (DBH) is a metric used to estimate forest biomass and carbon dioxide (CO2) sequestration. Manual methods of forest mapping are labor intensive and time consuming, a bottleneck for large-scale mapping efforts. Automated mapping relies on acquiring dense forest reconstructions, typically in the form of point clouds. Terrestrial laser scanning (TLS) and mobile laser scanning (MLS) generate point clouds using expensive LiDAR sensing and have been used successfully to estimate tree diameter. Neural radiance fields (NeRFs) are an emergent technology enabling photorealistic, vision-based reconstruction by training a neural network on a sparse set of input views. In this paper, we present a comparison of MLS and NeRF forest reconstructions for the purpose of trunk diameter estimation in a mixed-evergreen Redwood forest. In addition, we propose an improved DBH-estimation method using convex-hull modeling. Using this approach, we achieved 1.68 cm RMSE (2.81%), which consistently outperformed standard cylinder modeling approaches. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Forestry: 2nd Edition)
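As a rough illustration of the convex-hull DBH idea mentioned in the abstract, the sketch below estimates a diameter from the hull perimeter of a breast-height trunk slice (synthetic data; slice extraction, units, and thresholds are assumptions, not the authors' pipeline).

```python
# Hedged sketch: convex-hull-based DBH estimate from a 2D trunk cross-section.
import numpy as np
from scipy.spatial import ConvexHull

def dbh_from_slice(slice_xy: np.ndarray) -> float:
    """Estimate trunk diameter from the 2D convex hull of a breast-height slice.

    For 2D input, scipy's ConvexHull.area is the hull perimeter; a circle with the
    same perimeter has diameter = perimeter / pi.
    """
    return ConvexHull(slice_xy).area / np.pi

# toy slice: noisy ring of points on a ~30 cm diameter trunk
theta = np.random.rand(500) * 2 * np.pi
r = 0.15 + np.random.normal(0.0, 0.005, 500)
slice_xy = np.c_[r * np.cos(theta), r * np.sin(theta)]
print(f"estimated DBH: {dbh_from_slice(slice_xy) * 100:.1f} cm")
```
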
14 pages, 6078 KiB  
Data Descriptor
The EDI Multi-Modal Simultaneous Localization and Mapping Dataset (EDI-SLAM)
by Peteris Racinskis, Gustavs Krasnikovs, Janis Arents and Modris Greitans
Data 2025, 10(1), 5; https://doi.org/10.3390/data10010005 - 7 Jan 2025
Viewed by 463
Abstract
This paper accompanies the initial public release of the EDI multi-modal SLAM dataset, a collection of long tracks recorded with a portable sensor package. These include two global shutter RGB camera feeds, LiDAR scans, as well as inertial and GNSS data from an RTK-enabled IMU-GNSS positioning module—both as satellite fixes and internally fused interpolated pose estimates. The tracks are formatted as ROS1 and ROS2 bags, with separately available calibration and ground truth data. In addition to the filtered positioning module outputs, a second form of sparse ground truth pose annotation is provided using independently surveyed visual fiducial markers as a reference. This enables the meaningful evaluation of systems that directly incorporate data from the positioning module into their localization estimates, and serves as an alternative when the GNSS reference is disrupted by intermittent signals or multipath scattering. In this paper, we describe the methods used to collect the dataset, its contents, and its intended use. Full article
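Since the tracks are distributed as ROS1 and ROS2 bags, a typical starting point is simply iterating over a bag's messages. The sketch below uses the standard ROS1 `rosbag` Python API; the file and topic names are hypothetical placeholders, not the dataset's actual naming.

```python
import rosbag  # ROS1 Python API; requires a ROS1 environment

# File and topic names below are hypothetical placeholders.
with rosbag.Bag("edi_slam_track.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["/lidar/points", "/imu/data"]):
        print(f"{t.to_sec():.3f}  {topic}  {type(msg).__name__}")
```
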
24 pages, 7653 KiB  
Article
Design and Experiment of Electric Uncrewed Transport Vehicle for Solanaceous Vegetables in Greenhouse
by Chunsong Guan, Weisong Zhao, Binxing Xu, Zhichao Cui, Yating Yang and Yan Gong
Agriculture 2025, 15(2), 118; https://doi.org/10.3390/agriculture15020118 - 7 Jan 2025
Viewed by 400
Abstract
Although some rudimentary handling vehicles are employed in the labor-intensive harvesting and transportation of greenhouse vegetables, research on intelligent uncrewed transport vehicles remains limited. Herein, an uncrewed transport vehicle was designed for greenhouse solanaceous vegetable harvesting. Its overall structure and path planning were tailored to the greenhouse environment, with specially designed components, including the electric crawler chassis, unloading mechanism, and control system. A SLAM system based on the fusion of LiDAR and inertial navigation ensures precise positioning and navigation, aided by a global path planner using an A* algorithm and a local virtual environment constructed from 3D scanning. Multi-sensor fusion localization, path planning, and control enable autonomous operation. Experimental studies demonstrated that it can automatically move, pause, steer, and unload along predefined trajectories. The driving capacity and range of the electric chassis meet the design specifications, and its walking speeds approach the set speeds (<5% error). Under various loads, the vehicle closely follows the target path with very small tracking errors. Initial test points showed high localization accuracy, with maximum longitudinal and lateral deviations of 9.5 cm and 6.7 cm, while the average lateral deviation of the remaining points was below 5 cm. These findings contribute to the advancement of uncrewed transportation technology and equipment in greenhouse applications. Full article
(This article belongs to the Special Issue New Energy-Powered Agricultural Machinery and Equipment)
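As context for the A*-based global planner mentioned above, here is a minimal 4-connected grid A* sketch (Manhattan heuristic, unit step cost; the grid encoding is an assumption and unrelated to the authors' implementation).

```python
# Minimal grid A* path planner sketch.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = occupied; start/goal: (row, col) tuples."""
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, None)]          # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                              # already expanded
            continue
        came_from[node] = parent
        if node == goal:                                   # reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, (nr, nc), node))
    return None

# toy 3x3 map with a wall across most of the middle row
print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```
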
60 pages, 45657 KiB  
Review
Remote Wind Turbine Inspections: Exploring the Potential of Multimodal Drones
by Ahmed Omara, Adel Nasser, Ahmad Alsayed and Mostafa R. A. Nabawy
Drones 2025, 9(1), 4; https://doi.org/10.3390/drones9010004 - 24 Dec 2024
Viewed by 999
Abstract
With the ever-increasing demand for harvesting wind energy, the inspection of its associated infrastructure, particularly turbines, has become essential to ensure continued and sustainable operations. Because these inspections are hazardous to human operators, time-consuming, and expensive, drone solutions offer a more effective alternative. However, drones also come with their own issues, such as communication, maintenance and the personnel needed to operate them. A multimodal approach to this problem thus has the potential to provide a combined solution where a single platform can perform all inspection operations required for wind turbine structures. This paper reviews the current approaches and technologies used in wind turbine inspections together with a multitude of multimodal designs that are surveyed to assess their potential for this application. Rotor-based designs demonstrate simpler and more efficient means to conduct such missions, whereas bio-inspired designs allow greater flexibility and more accurate locomotion. Whilst each of these design categories comes with different trade-offs, both should be considered for an effective hybrid design to create a more optimal system. Finally, the use of sensor fusion within techniques such as GPS and LiDAR SLAM enables high navigation performance while simultaneously utilising these sensors to conduct the inspection tasks. Full article
25 pages, 17064 KiB  
Article
An Environment Recognition Algorithm for Staircase Climbing Robots
by Yanjie Liu, Yanlong Wei, Chao Wang and Heng Wu
Remote Sens. 2024, 16(24), 4718; https://doi.org/10.3390/rs16244718 - 17 Dec 2024
Viewed by 520
Abstract
For deformed wheel-based staircase-climbing robots, the accuracy of staircase step geometry perception and scene mapping are critical factors in determining whether the robot can successfully ascend the stairs and continue its task. Currently, while there are LiDAR-based algorithms that focus either on step geometry detection or scene mapping, few comprehensive algorithms exist that address both step geometry perception and scene mapping for staircases. Moreover, significant errors in step geometry estimation and low mapping accuracy can hinder the ability of deformed wheel-based mobile robots to climb stairs, negatively impacting the efficiency and success rate of task execution. To solve the above problems, we propose an effective LiDAR-Inertial-based point cloud detection method for staircases. First, we preprocess the staircase point cloud, using the Statistical Outlier Removal algorithm to remove outliers from the staircase scene and combining the LiDAR's vertical angular resolution and spatial geometric relationships to segment the ground. Then, we post-process the point cloud map obtained from LiDAR SLAM, extract the staircase point cloud, project and fit it with the Ceres optimizer, and solve for dimensional information such as step depth and height in combination with mean filtering. Finally, we validate the effectiveness of the proposed method through multiple sets of SLAM and size-detection experiments in different real staircase scenarios. Full article
(This article belongs to the Special Issue Advanced AI Technology in Remote Sensing)
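A hedged sketch of the Statistical Outlier Removal step named in the abstract, using Open3D's built-in filter on synthetic data (the nb_neighbors and std_ratio values are illustrative assumptions, not the paper's settings).

```python
# Statistical outlier removal on a toy point cloud with Open3D.
import numpy as np
import open3d as o3d

# toy scan: a dense block of points plus a handful of gross outliers
pts = np.random.rand(5000, 3)
pts[:20] += 10.0
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))

# Drop points whose mean distance to their 20 nearest neighbors deviates by
# more than 2 standard deviations from the global average.
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(len(pcd.points), "->", len(filtered.points), "points after filtering")
```
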
30 pages, 12451 KiB  
Article
A Method Coupling NDT and VGICP for Registering UAV-LiDAR and LiDAR-SLAM Point Clouds in Plantation Forest Plots
by Fan Wang, Jiawei Wang, Yun Wu, Zhijie Xue, Xin Tan, Yueyuan Yang and Simei Lin
Forests 2024, 15(12), 2186; https://doi.org/10.3390/f15122186 - 12 Dec 2024
Viewed by 546
Abstract
The combination of UAV-LiDAR and LiDAR-SLAM (Simultaneous Localization and Mapping) technology can overcome the scanning limitations of different platforms and obtain comprehensive 3D structural information of forest stands. To address the challenges of the traditional registration algorithms, such as high initial value requirements and susceptibility to local optima, in this paper, we propose a high-precision, robust, NDT-VGICP registration method that integrates voxel features to register UAV-LiDAR and LiDAR-SLAM point clouds at the forest stand scale. First, the point clouds are voxelized, and their normal vectors and normal distribution models are computed, then the initial transformation matrix is quickly estimated based on the point pair distribution characteristics to achieve preliminary alignment. Second, high-dimensional feature weighting is introduced, and the iterative closest point (ICP) algorithm is used to optimize the distance between the matching point pairs, adjusting the transformation matrix to reduce the registration errors iteratively. Finally, the algorithm converges when the iterative conditions are met, yielding an optimal transformation matrix and achieving precise point cloud registration. The results show that the algorithm performs well in Chinese fir forest stands of different age groups (average RMSE—horizontal: 4.27 cm; vertical: 3.86 cm) and achieves high accuracy in single-tree crown vertex detection and tree height estimation (average F-score: 0.90; R2 for tree height estimation: 0.88). This study demonstrates that the NDT-VGICP algorithm can effectively fuse and collaboratively apply multi-platform LiDAR data, providing a methodological reference for accurately quantifying individual tree parameters and efficiently monitoring 3D forest stand structures. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
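VGICP itself is not part of Open3D, but the coarse-to-fine registration flow described above can be illustrated with a plain voxel-downsample plus point-to-plane ICP refinement, shown below as a simplified stand-in (voxel size and correspondence distance are assumptions). In the paper's setting, the NDT-style initial estimate would supply the starting transform before the VGICP-style refinement.

```python
# Simplified registration stand-in: downsample, estimate normals, refine with ICP.
import numpy as np
import open3d as o3d

def register_clouds(source, target, voxel=0.3, max_dist=1.0):
    """Downsample both clouds, estimate normals, refine with point-to-plane ICP."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pc in (src, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 matrix mapping the source into the target frame
```
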
21 pages, 17557 KiB  
Article
Lidar Simultaneous Localization and Mapping Algorithm for Dynamic Scenes
by Peng Ji, Qingsong Xu and Yifan Zhao
World Electr. Veh. J. 2024, 15(12), 567; https://doi.org/10.3390/wevj15120567 - 7 Dec 2024
Viewed by 1010
Abstract
To address the issue of significant point cloud ghosting in the construction of high-precision point cloud maps by low-speed intelligent mobile vehicles due to the presence of numerous dynamic obstacles in the environment, which affects the accuracy of map construction, this paper proposes a LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithm tailored for dynamic scenes. The algorithm employs a tightly coupled SLAM framework integrating LiDAR and inertial measurement unit (IMU). In the process of dynamic obstacle removal, the point cloud data is first gridded. To more comprehensively represent the point cloud information, the point cloud within the perception area is linearly discretized by height to obtain the distribution of the point cloud at different height layers, which is then encoded to construct a linear discretized height descriptor for dynamic region extraction. To preserve more static feature points without altering the original point cloud, the Random Sample Consensus (RANSAC) ground fitting algorithm is employed to fit and segment the ground point cloud within the dynamic regions, followed by the removal of dynamic obstacles. Finally, accurate point cloud poses are obtained through static feature matching. The proposed algorithm has been validated using open-source datasets and self-collected campus datasets. The results demonstrate that the algorithm improves dynamic point cloud removal accuracy by 12.3% compared to the ERASOR algorithm and enhances overall mapping and localization accuracy by 8.3% compared to the LIO-SAM algorithm, thereby providing a reliable environmental description for intelligent mobile vehicles. Full article
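The RANSAC ground-fitting step mentioned in the abstract can be sketched with Open3D's segment_plane on synthetic data, as below (thresholds are assumptions; the paper applies the fit inside detected dynamic regions rather than to the whole cloud).

```python
# RANSAC ground-plane segmentation sketch with Open3D.
import numpy as np
import open3d as o3d

# toy scene: a flat "ground" slab plus points scattered above it
ground_pts = np.random.rand(8000, 3) * np.array([20.0, 20.0, 0.05])
object_pts = np.random.rand(2000, 3) * np.array([20.0, 20.0, 3.0])
pcd = o3d.geometry.PointCloud(
    o3d.utility.Vector3dVector(np.vstack([ground_pts, object_pts])))

# RANSAC plane fit: points within 10 cm of the best plane are treated as ground.
plane, inliers = pcd.segment_plane(distance_threshold=0.1,
                                   ransac_n=3, num_iterations=200)
ground = pcd.select_by_index(inliers)
non_ground = pcd.select_by_index(inliers, invert=True)
print("plane [a, b, c, d]:", np.round(plane, 3), "| ground points:", len(ground.points))
```
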
24 pages, 47033 KiB  
Article
Hybrid Denoising Algorithm for Architectural Point Clouds Acquired with SLAM Systems
by Antonella Ambrosino, Alessandro Di Benedetto and Margherita Fiani
Remote Sens. 2024, 16(23), 4559; https://doi.org/10.3390/rs16234559 - 5 Dec 2024
Viewed by 697
Abstract
The sudden development of systems capable of rapidly acquiring dense point clouds has underscored the importance of data processing and pre-processing prior to modeling. This work presents the implementation of a denoising algorithm for point clouds acquired with LiDAR SLAM systems, aimed at optimizing data processing and the reconstruction of surveyed object geometries for graphical rendering and modeling. Implemented in a MATLAB environment, the algorithm utilizes an approximate modeling of a reference surface with Poisson’s model and a statistical analysis of the distances between the original point cloud and the reconstructed surface. Tested on point clouds from historically significant buildings with complex geometries scanned with three different SLAM systems, the results demonstrate a satisfactory reduction in point density to approximately one third of the original. The filtering process effectively removed about 50% of the points while preserving essential details, facilitating improved restitution and modeling of architectural and structural elements. This approach serves as a valuable tool for noise removal in SLAM-derived datasets, enhancing the accuracy of architectural surveying and heritage documentation. Full article
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)
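A hedged sketch of the surface-referenced filtering idea: fit an approximate Poisson surface, measure each point's distance to it, and keep points within a statistical threshold (the Poisson depth and the mean + k·σ cutoff are assumptions; the paper's MATLAB implementation differs in detail).

```python
# Distance-to-reference-surface denoising sketch with Open3D.
import numpy as np
import open3d as o3d

def denoise_against_surface(pcd, k=2.0):
    """Keep points whose distance to an approximate Poisson surface is < mean + k*std."""
    pcd.estimate_normals()
    pcd.orient_normals_consistent_tangent_plane(30)
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    reference = mesh.sample_points_uniformly(number_of_points=len(pcd.points))
    d = np.asarray(pcd.compute_point_cloud_distance(reference))
    keep = np.where(d < d.mean() + k * d.std())[0]
    return pcd.select_by_index(keep)

# toy input: a noisy unit sphere
pts = np.random.randn(3000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += np.random.normal(0.0, 0.01, pts.shape)
cloud = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
print(len(denoise_against_surface(cloud).points), "of", len(cloud.points), "points kept")
```
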
18 pages, 8489 KiB  
Article
Tightly Coupled SLAM Algorithm Based on Similarity Detection Using LiDAR-IMU Sensor Fusion for Autonomous Navigation
by Jiahui Zheng, Yi Wang and Yadong Men
World Electr. Veh. J. 2024, 15(12), 558; https://doi.org/10.3390/wevj15120558 - 2 Dec 2024
Viewed by 741
Abstract
In recent years, the rise of unmanned technology has made Simultaneous Localization and Mapping (SLAM) algorithms a focal point of research in the field of robotics. SLAM algorithms are primarily categorized into visual SLAM and laser SLAM, based on the type of external sensors employed. Laser SLAM algorithms have become essential in robotics and autonomous driving due to their insensitivity to lighting conditions, precise distance measurements, and ease of generating navigation maps. Throughout the development of SLAM technology, numerous effective algorithms have been introduced. However, existing algorithms still encounter challenges, such as localization errors and suboptimal utilization of sensor data. To address these issues, this paper proposes a tightly coupled SLAM algorithm based on similarity detection. The algorithm integrates Inertial Measurement Unit (IMU) and LiDAR odometry modules, employs a tightly coupled processing approach for sensor data, and utilizes curvature feature optimization extraction methods to enhance the accuracy and robustness of inter-frame matching. Additionally, the algorithm incorporates a local keyframe sliding window method and introduces a similarity detection mechanism, which reduces the real-time computational load and improves efficiency. Experimental results demonstrate that the algorithm achieves superior performance, with reduced positioning errors and enhanced global consistency, in tests conducted on the KITTI dataset. The accuracy of the estimated trajectory relative to the ground truth is evaluated using metrics such as ATE (absolute trajectory error) and RMSE (root mean square error). Full article
(This article belongs to the Special Issue Motion Planning and Control of Autonomous Vehicles)
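For context on the curvature-based feature extraction mentioned above, the following NumPy sketch computes a LOAM-style local smoothness score along one ordered scan ring (the window size and normalization are assumptions; high scores suggest edge points, low scores planar points).

```python
# LOAM-style curvature (local smoothness) score along a single scan ring.
import numpy as np

def scan_line_curvature(points: np.ndarray, half_window: int = 5) -> np.ndarray:
    """points: (N, 3) array ordered along a single LiDAR scan ring."""
    n = len(points)
    curvature = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        window = points[i - half_window:i + half_window + 1]
        diff = window.sum(axis=0) - (2 * half_window + 1) * points[i]
        curvature[i] = np.linalg.norm(diff) / (np.linalg.norm(points[i]) + 1e-9)
    return curvature  # high values -> edge candidates, low values -> planar candidates

# toy ring: a straight segment meeting a corner shows up as a curvature spike
line = np.c_[np.linspace(1, 2, 50), np.zeros(50), np.zeros(50)]
bend = np.c_[np.full(50, 2.0), np.linspace(0, 1, 50), np.zeros(50)]
print(np.nanargmax(scan_line_curvature(np.vstack([line, bend]))))  # index near the corner
```
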
15 pages, 6614 KiB  
Article
Advancing Forest Plot Surveys: A Comparative Study of Visual vs. LiDAR SLAM Technologies
by Tianshuo Guan, Yuchen Shen, Yuankai Wang, Peidong Zhang, Rui Wang and Fei Yan
Forests 2024, 15(12), 2083; https://doi.org/10.3390/f15122083 - 26 Nov 2024
Viewed by 677
Abstract
Forest plot surveys are vital for monitoring forest resource growth, contributing to their sustainable development. The accuracy and efficiency of these surveys are paramount, making technological advancements such as Simultaneous Localization and Mapping (SLAM) crucial. This study investigates the application of SLAM technology, utilizing LiDAR (Light Detection and Ranging) and monocular cameras, to enhance forestry plot surveys. Conducted in three 32 × 32 m plots within the Tibet Autonomous Region of China, the research compares the efficacy of LiDAR-based and visual SLAM algorithms in estimating tree parameters such as diameter at breast height (DBH), tree height, and position, alongside their adaptability to forest environments. The findings revealed that both types of algorithms achieved high precision in DBH estimation, with LiDAR SLAM presenting a root mean square error (RMSE) range of 1.4 to 1.96 cm and visual SLAM showing a slightly higher precision, with an RMSE of 0.72 to 0.85 cm. In terms of tree position accuracy, all three methods produced usable tree location measurements. LiDAR SLAM accurately represents the relative positions of trees, while the traditional and visual SLAM systems exhibit slight positional offsets for individual trees. However, discrepancies arose in tree height estimation accuracy, where visual SLAM exhibited a bias range from −0.55 to 0.19 m and an RMSE of 1.36 to 2.34 m, while LiDAR SLAM had a broader bias range and higher RMSE, especially for trees over 25 m, attributed to scanning angle limitations and branch occlusion. Moreover, the study highlights the comprehensive point cloud data generated by LiDAR SLAM, useful for calculating extensive tree parameters such as volume and carbon storage and Tree Information Modeling (TIM) through digital twin technology. In contrast, the sparser data from visual SLAM limits its use to basic parameter estimation. These insights underscore the effectiveness and precision of SLAM-based approaches in forestry plot surveys while also indicating distinct advantages and suitability of each method to different forest environments. The findings advocate for tailored survey strategies, aligning with specific forest conditions and requirements, enhancing the application of SLAM technology in forestry management and conservation efforts. Full article
(This article belongs to the Special Issue Integrated Measurements for Precision Forestry)
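The bias and RMSE figures quoted above follow the usual definitions; a generic metric sketch (not the authors' code):

```python
# Bias and RMSE for paired estimates vs. reference measurements.
import numpy as np

def bias_and_rmse(estimates, references):
    """Return (mean signed error, root mean square error) for paired measurements."""
    err = np.asarray(estimates, dtype=float) - np.asarray(references, dtype=float)
    return err.mean(), np.sqrt(np.mean(err ** 2))

# toy check: DBH estimates (cm) against tape measurements (cm)
print(bias_and_rmse([30.2, 24.9, 41.3], [30.0, 25.5, 40.0]))
```
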
32 pages, 4267 KiB  
Review
Advancements in Sensor Fusion for Underwater SLAM: A Review on Enhanced Navigation and Environmental Perception
by Fomekong Fomekong Rachel Merveille, Baozhu Jia, Zhizun Xu and Bissih Fred
Sensors 2024, 24(23), 7490; https://doi.org/10.3390/s24237490 - 24 Nov 2024
Viewed by 908
Abstract
Underwater simultaneous localization and mapping (SLAM) has significant challenges due to the complexities of underwater environments, marked by limited visibility, variable conditions, and restricted global positioning system (GPS) availability. This study provides a comprehensive analysis of sensor fusion techniques in underwater SLAM, highlighting the amalgamation of proprioceptive and exteroceptive sensors to improve UUV navigational accuracy and system resilience. Essential sensor applications, including inertial measurement units (IMUs), Doppler velocity logs (DVLs), cameras, sonar, and LiDAR (light detection and ranging), are examined for their contributions to navigation and perception. Fusion methodologies, such as Kalman filters, particle filters, and graph-based SLAM, are evaluated for their benefits, limitations, and computational demands. Additionally, innovative technologies like quantum sensors and AI-driven filtering techniques are examined for their potential to enhance SLAM precision and adaptability. Case studies demonstrate practical applications, analyzing the compromises between accuracy, computational requirements, and adaptability to environmental changes. This paper proceeds to emphasize future directions, stressing the need for advanced filtering and machine learning to address sensor drift, noise, and environmental unpredictability, hence improving autonomous underwater navigation through reliable sensor fusion. Full article
(This article belongs to the Section Navigation and Positioning)
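Among the fusion methodologies surveyed, the Kalman filter is the simplest; below is a minimal scalar measurement-update sketch (fusing two hypothetical depth readings; all numbers are illustrative and not drawn from the review).

```python
# Scalar Kalman measurement update as a minimal sensor-fusion example.
def kf_update(x, P, z, R):
    """State x with variance P, measurement z with variance R."""
    K = P / (P + R)                 # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 100.0                   # vague prior on depth (m)
for z, R in [(10.2, 0.5), (9.8, 0.2)]:   # e.g. pressure sensor, then DVL-derived depth
    x, P = kf_update(x, P, z, R)
print(round(x, 2), round(P, 3))     # fused depth estimate and its variance
```
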
14 pages, 4975 KiB  
Article
Assessment of Tree Species Classification by Decision Tree Algorithm Using Multiwavelength Airborne Polarimetric LiDAR Data
by Zhong Hu and Songxin Tan
Electronics 2024, 13(22), 4534; https://doi.org/10.3390/electronics13224534 - 19 Nov 2024
Viewed by 846
Abstract
Polarimetric measurement has been proven to be of great importance in various applications, including remote sensing in agriculture and forest. Polarimetric full waveform LiDAR is a relatively new yet valuable active remote sensing tool. This instrument offers the full waveform data and polarimetric information simultaneously. Current studies have primarily used commercial non-polarimetric LiDAR for tree species classification, either at the dominant species level or at the individual tree level. Many classification approaches combine multiple features, such as tree height, stand width, and crown shape, without utilizing polarimetric information. In this work, a customized Multiwavelength Airborne Polarimetric LiDAR (MAPL) system was developed for field tree measurements. The MAPL is a unique system with unparalleled capabilities in vegetation remote sensing. It features four receiving channels at dual wavelengths and dual polarization: near infrared (NIR) co-polarization, NIR cross-polarization, green (GN) co-polarization, and GN cross-polarization, respectively. Data were collected from several tree species, including coniferous trees (blue spruce, ponderosa pine, and Austrian pine) and deciduous trees (ash and maple). The goal was to improve the target identification ability and detection accuracy. A machine learning (ML) approach, specifically a decision tree, was developed to classify tree species based on the peak reflectance values of the MAPL waveforms. The results indicate a re-substitution error of 3.23% and a k-fold loss error of 5.03% for the 2106 tree samples used in this study. The decision tree method proved to be both accurate and effective, and the classification of new observation data can be performed using the previously trained decision tree, as suggested by both error values. Future research will focus on incorporating additional LiDAR data features, exploring more advanced ML methods, and expanding to other vegetation classification applications. Furthermore, the MAPL data can be fused with data from other sensors to provide augmented reality applications, such as Simultaneous Localization and Mapping (SLAM) and Bird’s Eye View (BEV). Its polarimetric capability will enable target characterization beyond shape and distance. Full article
(This article belongs to the Special Issue Image Analysis Using LiDAR Data)
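A hedged sketch of decision-tree classification on four peak-reflectance features, mirroring the dual-wavelength, dual-polarization channels described above (scikit-learn on synthetic data; the class means, species labels, and tree depth are assumptions, not the MAPL measurements).

```python
# Decision-tree species classification on synthetic peak-reflectance features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Columns: NIR co-pol, NIR cross-pol, GN co-pol, GN cross-pol peak reflectance
X = np.vstack([rng.normal(loc=m, scale=0.05, size=(200, 4))
               for m in ([0.6, 0.2, 0.4, 0.1],
                         [0.5, 0.3, 0.3, 0.2],
                         [0.7, 0.1, 0.5, 0.1])])
y = np.repeat(["spruce", "pine", "maple"], 200)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
print("re-substitution accuracy:", clf.score(X, y))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```
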
6 pages, 195 KiB  
Proceeding Paper
Assessment of SLAM Methods Applied in Monochromatic Environments
by Rudolf Krecht and Áron Ballagi
Eng. Proc. 2024, 79(1), 51; https://doi.org/10.3390/engproc2024079051 - 6 Nov 2024
Viewed by 301
Abstract
One of the most significant challenges in sustainable autonomous mobile robot and vehicle development is the perception of stochastic environments. Various environmental perception methods have been proposed to address these challenges; however, these methods often lack general applicability. Many of these methods rely on environmental feature extraction, which can fail in specific scenarios, such as monochromatic environments. This article aims to evaluate existing SLAM (Simultaneous Localization and Mapping) methods that utilize camera or combined camera and LiDAR input data in predominantly monochromatic environments. Additionally, this study seeks to identify performance issues in such applications. Full article
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2024)
16 pages, 11298 KiB  
Article
Scene Measurement Method Based on Fusion of Image Sequence and Improved LiDAR SLAM
by Dongtai Liang, Donghui Li, Kui Yang, Wenxue Hu, Xuwen Chen and Zhangwei Chen
Electronics 2024, 13(21), 4250; https://doi.org/10.3390/electronics13214250 - 30 Oct 2024
Viewed by 826
Abstract
To address the issue that sparse point cloud maps constructed by SLAM cannot provide detailed information about measured objects, and image sequence-based measurement methods have problems with large data volume and cumulative errors, this paper proposes a scene measurement method that integrates image sequences with an improved LiDAR SLAM. By introducing plane features, the positioning accuracy of LiDAR SLAM is enhanced, and real-time odometry poses are generated. Simultaneously, the system captures image sequences of the measured object using synchronized cameras, and NeRF is used for 3D reconstruction. Time synchronization and data registration between the LiDAR and camera data frames with identical timestamps are achieved. Finally, the least squares method and ICP algorithm are employed to compute the scale factor s and transformation matrices R and t between different point clouds from LiDAR and NeRF reconstruction. Precise measurement of the objects can then be performed. Experimental results demonstrate that this method significantly improves measurement accuracy, with an average error within 10 mm and 1°, providing a robust and reliable solution for scene measurement. Full article
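The scale-and-rigid alignment described above (solving for s, R, and t between the NeRF and LiDAR clouds) has a standard closed form for corresponding points, the Umeyama solution; the sketch below illustrates it on synthetic correspondences and is not the authors' implementation (in practice the correspondences would come from ICP).

```python
# Umeyama similarity alignment: recover scale s, rotation R, translation t.
import numpy as np

def umeyama(src, dst):
    """Find s, R, t minimizing || dst - (s * R @ src + t) || over corresponding rows."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))      # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:      # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    return s, R, mu_d - s * R @ mu_s

# quick self-check with a known similarity transform
src = np.random.rand(100, 3)
R_true = np.linalg.qr(np.random.randn(3, 3))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
s_true, t_true = 2.5, np.array([1.0, -2.0, 0.5])
dst = s_true * src @ R_true.T + t_true
s, R, t = umeyama(src, dst)
print(round(s, 3), np.allclose(R, R_true, atol=1e-6), np.round(t, 3))
```
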