
Search Results (5,682)

Search Parameters:
Keywords = lidar

20 pages, 1293 KiB  
Article
Active Remote Sensing Assessment of Biomass Productivity and Canopy Structure of Short-Rotation Coppice American Sycamore (Platanus occidentalis L.)
by Omoyemeh Jennifer Ukachukwu, Lindsey Smart, Justyna Jeziorska, Helena Mitasova and John S. King
Remote Sens. 2024, 16(14), 2589; https://doi.org/10.3390/rs16142589 - 15 Jul 2024
Abstract
The short-rotation coppice (SRC) culture of trees provides a sustainable form of renewable biomass energy, while simultaneously sequestering carbon and contributing to the regional carbon feedstock balance. To understand the role of SRC in carbon feedstock balances, field inventories with selective destructive tree sampling are commonly used to estimate aboveground biomass (AGB) and canopy structure dynamics. However, these methods are resource intensive and spatially limited. To address these constraints, we examined the utility of publicly available airborne Light Detection and Ranging (LiDAR) data and easily accessible imagery from Unmanned Aerial Systems (UASs) to estimate the AGB and canopy structure of an American sycamore SRC in the piedmont region of North Carolina, USA. We compared LiDAR-derived AGB estimates to field estimates from 2015, and UAS-derived AGB estimates to field estimates from 2022, across four planting densities (10,000, 5000, 2500, and 1250 trees per hectare (tph)). The results showed significant effects of planting density treatments on LiDAR- and UAS-derived canopy metrics and significant relationships between these canopy metrics and AGB. In the 10,000 tph planting density, the field-estimated AGB in 2015 (7.00 ± 1.56 Mg ha−1) and LiDAR-derived AGB (7.19 ± 0.13 Mg ha−1) were comparable. On the other hand, the UAS-derived AGB was overestimated in the 10,000 tph planting density and underestimated in the 1250 tph density compared to the 2022 field-estimated AGB. This study demonstrates that the remote sensing-derived estimates are within an acceptable level of error for biomass estimation when compared to precise field estimates, thereby showing the potential for increasing the use of accessible remote-sensing technology to estimate the AGB of SRC plantations. Full article
(This article belongs to the Section Forest Remote Sensing)
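The canopy-metrics-to-AGB workflow described in this abstract can be sketched in a few lines. This is an illustrative toy, not the authors' model: the choice of metrics and the regression coefficients below are invented placeholders, and a real workflow would fit the coefficients to destructive field samples.

```python
def canopy_metrics(heights, cover_threshold=1.37):
    """Mean height, ~95th-percentile height, and canopy cover fraction
    from normalized LiDAR return heights (meters above ground)."""
    hs = sorted(heights)
    n = len(hs)
    mean_h = sum(hs) / n
    p95 = hs[min(n - 1, int(round(0.95 * (n - 1))))]
    cover = sum(1 for h in hs if h > cover_threshold) / n
    return mean_h, p95, cover

def agb_estimate(mean_h, p95, cover, b0=-1.0, b1=0.8, b2=0.6, b3=5.0):
    """Toy linear model: AGB = b0 + b1*mean_h + b2*p95 + b3*cover (Mg/ha).
    Coefficients are placeholders, not fitted values from the study."""
    return b0 + b1 * mean_h + b2 * p95 + b3 * cover

# invented plot-level return heights for one SRC plot
heights = [0.2, 1.5, 2.8, 3.1, 3.4, 4.0, 4.2, 4.5, 5.0, 5.3]
m, p, c = canopy_metrics(heights)
agb = agb_estimate(m, p, c)
```

In practice the metrics would be computed per plot or grid cell from the normalized point cloud, and the model validated against field AGB as the study does.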
23 pages, 24773 KiB  
Article
Design and Experiment of Ordinary Tea Profiling Harvesting Device Based on Light Detection and Ranging Perception
by Xiaolong Huan, Min Wu, Xianbing Bian, Jiangming Jia, Chenchen Kang, Chuanyu Wu, Runmao Zhao and Jianneng Chen
Agriculture 2024, 14(7), 1147; https://doi.org/10.3390/agriculture14071147 - 15 Jul 2024
Abstract
Due to the complex shape of the tea tree canopy and the large undulation of a tea garden terrain, the quality of fresh tea leaves harvested by existing tea harvesting machines is poor. This study proposed a tea canopy surface profiling method based on 2D LiDAR perception and investigated the extraction and fitting methods of canopy point clouds. Meanwhile, a tea profiling harvester prototype was developed and field tests were conducted. The tea profiling harvesting device adopted a scheme of sectional arrangement of multiple groups of profiling tea harvesting units, and each unit sensed the height information of its own bottom canopy area through 2D LiDAR. A cross-platform communication network was established, enabling point cloud fitting of tea plant surfaces and accurate estimation of cutter profiling height through the RANSAC algorithm. Additionally, a sensing control system with multiple execution units was developed using rapid control prototype technology. The results of field tests showed that the bud leaf integrity rate was 84.64%, the impurity rate was 5.94%, the missing collection rate was 0.30%, and the missing harvesting rate was 0.68%. Furthermore, 89.57% of the harvested tea could be processed into commercial tea, with 88.34% consisting of young tea shoots with one bud and three leaves or fewer. All of these results demonstrated that the proposed device effectively meets the technical standards for machine-harvested tea and the requirements of standard tea processing techniques. Moreover, compared to other commercial tea harvesters, the proposed tea profiling harvesting device demonstrated improved performance in harvesting fresh tea leaves. Full article
(This article belongs to the Special Issue Sensor-Based Precision Agriculture)
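The abstract names RANSAC for fitting the canopy surface from 2D LiDAR profiles. A minimal sketch of that idea in 2D (a robust line fit with outlier rejection; the tolerance, iteration count, and sample points are invented, and the authors' implementation fits canopy surfaces rather than this toy profile):

```python
import random

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit y = a*x + b to 2D profile points, robust to outlier returns."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair, cannot express as y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (a, b)
    return best, best_inliers

# flat-ish canopy at ~1.2 m with two low outlier returns (e.g. canopy gaps)
pts = [(x / 10, 1.2) for x in range(10)] + [(0.35, 0.3), (0.75, 0.2)]
(a, b), n_in = ransac_line(pts, seed=1)
```

The fitted line height at the cutter's x-position would then drive the profiling height of the harvesting unit.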
21 pages, 11155 KiB  
Article
Integrating NoSQL, Hilbert Curve, and R*-Tree to Efficiently Manage Mobile LiDAR Point Cloud Data
by Yuqi Yang, Xiaoqing Zuo, Kang Zhao and Yongfa Li
ISPRS Int. J. Geo-Inf. 2024, 13(7), 253; https://doi.org/10.3390/ijgi13070253 - 14 Jul 2024
Abstract
The widespread use of Light Detection and Ranging (LiDAR) technology has led to a surge in three-dimensional point cloud data, although it also poses challenges in terms of data storage and indexing. Efficient storage and management of LiDAR data are prerequisites for data processing and analysis for various LiDAR-based scientific applications. Traditional relational database management systems and centralized file storage struggle to meet the storage, scaling, and specific query requirements of massive point cloud data. However, NoSQL databases, known for their scalability, speed, and cost-effectiveness, provide a viable solution. In this study, a 3D point cloud indexing strategy for mobile LiDAR point cloud data that integrates Hilbert curves, R*-trees, and B+-trees was proposed to support MongoDB-based point cloud storage and querying from the following aspects: (1) partitioning the point cloud using an adaptive space partitioning strategy to improve the I/O efficiency and ensure data locality; (2) encoding partitions using Hilbert curves to construct global indices; (3) constructing local indexes (R*-trees) for each point cloud partition so that MongoDB can natively support indexing of point cloud data; and (4) a MongoDB-oriented storage structure design based on a hierarchical indexing structure. We evaluated the efficacy of chunked point cloud data storage with MongoDB for spatial querying and found that the proposed storage strategy provides higher data encoding, index construction, and retrieval speeds, and more scalable storage structures to support efficient point cloud spatial query processing compared to many mainstream point cloud indexing strategies and database systems. Full article
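The Hilbert-curve encoding behind the global index (step 2) can be illustrated with the standard iterative 2D index computation. This is the textbook algorithm, not the authors' code; a real pipeline would first quantize partition centroids to an n × n grid (n a power of two) before computing keys:

```python
def xy2d(n, x, y):
    """Map grid cell (x, y) in an n x n grid (n a power of two) to its
    1-D Hilbert-curve index, via the standard iterative algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect so the next level is oriented canonically
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# sort partition cells by Hilbert key so spatially close cells get close keys
cells = [(3, 0), (0, 0), (1, 1), (2, 3)]
ordered = sorted(cells, key=lambda c: xy2d(4, *c))
```

Because consecutive Hilbert indices are always adjacent cells, range scans over the key preserve spatial locality far better than row-major ordering, which is why it suits a B+-tree-backed store like MongoDB's default index.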
30 pages, 10784 KiB  
Article
Phenology and Plant Functional Type Link Optical Properties of Vegetation Canopies to Patterns of Vertical Vegetation Complexity
by Duncan Jurayj, Rebecca Bowers and Jessica V. Fayne
Remote Sens. 2024, 16(14), 2577; https://doi.org/10.3390/rs16142577 - 13 Jul 2024
Abstract
Vegetation vertical complexity influences biodiversity and ecosystem productivity. Rapid warming in the boreal region is altering patterns of vertical complexity. LiDAR sensors offer novel structural metrics for quantifying these changes, but their spatiotemporal limitations and their need for ecological context complicate their application and interpretation. Satellite variables can estimate LiDAR metrics, but retrievals of vegetation structure using optical reflectance can lack interpretability and accuracy. We compare vertical complexity from the airborne LiDAR Land Vegetation and Ice Sensor (LVIS) in boreal Canada and Alaska to plant functional type, optical, and phenological variables. We show that spring onset and green season length from satellite phenology algorithms are more strongly correlated with vegetation vertical complexity (R = 0.43–0.63) than optical reflectance (R = 0.03–0.43). Median annual temperature explained patterns of vegetation vertical complexity (R = 0.45), but only when paired with plant functional type data. Random forest models effectively learned patterns of vegetation vertical complexity using plant functional type and phenological variables, but the validation performance depended on the validation methodology (R2 = 0.50–0.80). In correlating satellite phenology, plant functional type, and vegetation vertical complexity, we propose new methods of retrieving vertical complexity with satellite data. Full article
11 pages, 13630 KiB  
Communication
A Semi-Automatic Approach for Tree Crown Competition Indices Assessment From UAV LiDAR
by Nicola Puletti, Matteo Guasti, Simone Innocenti, Lorenzo Cesaretti and Ugo Chiavetta
Remote Sens. 2024, 16(14), 2576; https://doi.org/10.3390/rs16142576 - 13 Jul 2024
Abstract
Understanding the spatial heterogeneity of forest structure is crucial for comprehending ecosystem dynamics and promoting sustainable forest management. Unmanned aerial vehicle (UAV) LiDAR technology provides a promising method to capture detailed three-dimensional (3D) information about forest canopies, aiding in management and silvicultural practices. This study investigates the heterogeneity of forest structure in broadleaf forests using UAV LiDAR data, with a particular focus on tree crown features and their different information content compared to diameters. We explored a non-conventionally used method that emphasizes crown competition by employing a nearest neighbor selection technique based on metrics derived from UAV point cloud profiles at the tree level, rather than traditional DBH (diameter at breast height) spatial arrangement. About 300 vegetation elements within 10 plots collected in a managed beech forest were used as reference data. We demonstrate that crown-based approaches, which are feasible with UAV LiDAR data at a reasonable cost and time, significantly enhance the understanding of forest heterogeneity, adding new information content for managers. Our findings underscore the utility of UAV LiDAR in characterizing the complexity and variability of forest structure at high resolution, offering valuable insights for carbon accounting and sustainable forest management. Full article
(This article belongs to the Special Issue Novel Applications of UAV Imagery for Forest Science)
21 pages, 5659 KiB  
Article
Estimating Brazilian Amazon Canopy Height Using Landsat Reflectance Products in a Random Forest Model with Lidar as Reference Data
by Pedro V. C. Oliveira, Hankui K. Zhang and Xiaoyang Zhang
Remote Sens. 2024, 16(14), 2571; https://doi.org/10.3390/rs16142571 - 13 Jul 2024
Abstract
Landsat data have been used to derive forest canopy structure, height, and volume using machine learning models, i.e., giving computers the ability to learn from data and make decisions and predictions without being explicitly programmed, with training data provided by ground measurement or airborne lidar. This study explored the potential use of Landsat reflectance and airborne lidar data as training data to estimate canopy heights in the Brazilian Amazon forest and examined the impacts of Landsat reflectance products at different process levels and sample spatial autocorrelation on random forest modeling. Specifically, this study assessed the accuracy of canopy height predictions from random forest regression models impacted by three different Landsat 8 reflectance product inputs (i.e., USGS level 1 top of atmosphere reflectance, USGS level 2 surface reflectance, and NASA nadir bidirectional reflectance distribution function (BRDF) adjusted reflectance (NBAR)), sample sizes, training/test split strategies, and geographic coordinates. In the establishment of random forest regression models, the dependent variable (i.e., the response variable) was the dominant canopy heights at a 90 m resolution derived from airborne lidar data, while the independent variables (i.e., the predictor variables) were the temporal metrics extracted from each Landsat reflectance product. The results indicated that the choice of Landsat reflectance products had an impact on model accuracy, with NBAR data yielding more trustworthy results than the other products despite having higher RMSE values. Training and test split strategy also affected the derived model accuracy metrics, with the random sample split (randomly distributed training and test samples) showing inflated accuracy compared to the spatial split (training and test samples spatially set apart). Such inflation was induced by the spatial autocorrelation that existed between training and test data in the random split. The inclusion of geographic coordinates as independent variables improved model accuracy in the random split strategy but not in the spatial split, where training and test samples had different geographic coordinate ranges. The study highlighted the importance of data processing levels and the training and test split methods in random forest modeling of canopy height. Full article
(This article belongs to the Special Issue Lidar for Forest Parameters Retrieval)
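The accuracy inflation the authors attribute to spatial autocorrelation is easy to see on a toy grid: under a random-style split every test sample ends up next to a training sample, while a spatially blocked split holds out a contiguous region. The grid and distances below are invented for illustration only:

```python
def nearest_train_dist(test, train):
    """For each test sample, Euclidean distance to its nearest training sample."""
    return [min(((tx - sx) ** 2 + (ty - sy) ** 2) ** 0.5 for sx, sy in train)
            for tx, ty in test]

grid = [(x, y) for x in range(10) for y in range(10)]  # 10 x 10 sample locations

# random-style split: alternating samples -> every test cell borders a train cell
train_r = [p for i, p in enumerate(grid) if i % 2 == 0]
test_r = [p for i, p in enumerate(grid) if i % 2 == 1]

# spatial split: one contiguous half of the area held out for testing
train_s = [p for p in grid if p[0] < 5]
test_s = [p for p in grid if p[0] >= 5]

avg_r = sum(nearest_train_dist(test_r, train_r)) / len(test_r)
avg_s = sum(nearest_train_dist(test_s, train_s)) / len(test_s)
```

Because autocorrelated response values decay with distance, the short test-to-train distances of the random split let the model "memorize" its neighbors, which is exactly the inflated-accuracy effect the study reports.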
19 pages, 2500 KiB  
Article
Real-Time Multimodal 3D Object Detection with Transformers
by Hengsong Liu and Tongle Duan
World Electr. Veh. J. 2024, 15(7), 307; https://doi.org/10.3390/wevj15070307 - 12 Jul 2024
Abstract
The accuracy and real-time performance of 3D object detection are key factors limiting its widespread application. While cameras capture detailed color and texture features, they lack depth information compared to LiDAR. Multimodal detection combining both can improve results but incurs significant computational overhead, affecting real-time performance. To address these challenges, this paper presents a real-time multimodal fusion model called Fast Transfusion that combines the benefits of LiDAR and camera sensors and reduces the computational burden of their fusion. Specifically, our Fast Transfusion method uses QConv (Quick Convolution) to replace the convolutional backbones used in other models. QConv concentrates the convolution operations at the feature map center, where the most information resides, to expedite inference. It also utilizes deformable convolution to better match the actual shapes of detected objects, enhancing accuracy. The model also incorporates an EH Decoder (Efficient and Hybrid Decoder), which decouples multiscale fusion into intra-scale interaction and cross-scale fusion, efficiently decoding and integrating features extracted from multimodal data. Furthermore, our proposed semi-dynamic query selection refines the initialization of object queries. On the KITTI 3D object detection dataset, our proposed approach reduced the inference time by 36 ms and improved 3D AP by 1.81% compared to state-of-the-art methods. Full article
23 pages, 19174 KiB  
Article
Unmanned Aerial Vehicle Landing on Rugged Terrain by On-Board LIDAR–Camera Positioning System
by Cheng Zou, Yezhen Sun and Linghua Kong
Appl. Sci. 2024, 14(14), 6079; https://doi.org/10.3390/app14146079 - 12 Jul 2024
Abstract
Safely landing unmanned aerial vehicles (UAVs) in unknown, GPS-denied environments is challenging but crucial. In most cases, traditional landing methods are not suitable, especially under complex terrain conditions with insufficient map information. This report proposes an innovative multi-stage UAV landing framework involving (i) point cloud and image fusion positioning, (ii) terrain analysis, and (iii) neural network semantic recognition to optimize landing site selection. In the first step, 3D point cloud and image data are fused to attain a comprehensive perception of the environment. In the second step, an energy cost function considering texture and flatness is employed to identify potential landing sites based on energy scores. To navigate the complexities of classification for precise landings, the results are stratified by the difficulty of various UAV landing scenarios. In the third step, a network model is applied to analyze UAV landing site options by integrating the ResNet50 network with a convolutional block attention module. Experimental results indicate a reduction in computational load and improved landing site identification accuracy. The developed framework fuses multi-modal data to enhance the safety and feasibility of UAV landings in complex environments. Full article
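The energy-cost step (ii) can be sketched as a weighted sum of per-cell flatness and texture scores, with the lowest-energy cell chosen as the candidate site. The weights, score ranges, and cells below are invented for illustration and are not the paper's formulation:

```python
def energy(flatness, texture, w_flat=0.7, w_tex=0.3):
    """Toy energy cost: lower is better; rough terrain (high flatness score)
    or strong texture variation raises the cost. Weights are placeholders."""
    return w_flat * flatness + w_tex * texture

# candidate cells: (id, roughness/flatness score, texture score), both in [0, 1]
cells = [("A", 0.9, 0.2), ("B", 0.1, 0.3), ("C", 0.2, 0.1)]
best = min(cells, key=lambda c: energy(c[1], c[2]))
```

In the actual framework the surviving low-energy candidates are then passed to the semantic network (step iii) for final classification.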
17 pages, 25206 KiB  
Article
The Use of an Unmanned Aerial Vehicle (UAV) for First-Failure Landslide Detection
by Michele Mercuri, Deborah Biondino, Mariantonietta Ciurleo, Gino Cofone, Massimo Conforti, Giovanni Gullà, Maria Carmela Stellato and Luigi Borrelli
GeoHazards 2024, 5(3), 683-699; https://doi.org/10.3390/geohazards5030035 - 12 Jul 2024
Abstract
The use of unmanned aerial vehicles (UAVs) can significantly assist landslide detection and characterization in different geological contexts at a detailed scale. This study investigated the role of UAVs in detecting a first-failure landslide occurring in Calabria, South Italy, and involving weathered granitoid rocks. After the landslide event, which caused the interruption of State Road 107, a UAV flight was carried out to identify landslide boundaries and morphological features in areas that are difficult to access safely. The landslide was classified as flow-type, with a total length of 240 m, a maximum width of 70 m, and a maximum depth of about 6.5 m. The comparison of the DTMs generated from UAV data with previously available LiDAR data indicated significant topographic changes across the landslide area. A minimum negative value of −6.3 m suggested material removal at the landslide source area. An approximate value of −2 m in the transportation area signified bed erosion and displacement of material as the landslide moved downslope. A maximum positive value of 4.2 m was found in the deposition area. The landslide volume was estimated to be about 6000 m3. These findings demonstrated the effectiveness of UAVs for landslide detection, showing their potential as valuable tools in planning further studies for a detailed landslide characterization and for defining the most appropriate risk mitigation measures. Full article
(This article belongs to the Topic Natural Hazards and Disaster Risks Reduction, 2nd Volume)
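The DTM-comparison arithmetic behind erosion and deposition figures like these is a cell-by-cell elevation difference scaled by cell area (a "DTM of difference"). A small sketch with made-up grids, not the study's data:

```python
def dod_volumes(dtm_before, dtm_after, cell_area):
    """DTM of difference: per-cell elevation change times cell area,
    split into erosion (material removed) and deposition (material added), m3."""
    erosion = deposition = 0.0
    for row_b, row_a in zip(dtm_before, dtm_after):
        for zb, za in zip(row_b, row_a):
            dz = za - zb
            if dz < 0:
                erosion += -dz * cell_area
            else:
                deposition += dz * cell_area
    return erosion, deposition

# invented 2 x 2 grids of elevations (m); 2 m cells -> cell_area = 4 m2
before = [[10.0, 10.0], [10.0, 10.0]]
after = [[8.0, 9.5], [10.0, 11.0]]
ero, dep = dod_volumes(before, after, cell_area=4.0)
```

Summing the negative cells over the source and transport zones is how a total mobilized volume (here, the ~6000 m3 figure) would be obtained; uncertainty in DTM co-registration sets the detection threshold in practice.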
15 pages, 25320 KiB  
Article
The Impact of Historic Underground Buildings on Land Use
by Tsung-Chiang Wu and Wei-Cheng Lu
Land 2024, 13(7), 1046; https://doi.org/10.3390/land13071046 - 12 Jul 2024
Abstract
During the Second Taiwan Strait Crisis from 1958 to 1979, a large number of underground tunnels were dug to meet the needs of the war on the island of Kinmen, which is located between Taiwan and China, to provide defense, refuge, and transportation of materials. However, the tunnels caused many problems during the post-war development of the island. For example, there are problems of property ownership between underground and aboveground objects, and difficulties in infrastructure construction. Therefore, it is necessary to clarify the relationship between assets of historical significance and value and aboveground objects to ensure that cultural assets are adequately protected and properly planned. In this study, the 3D point cloud model of underground tunnels and the ground surface will be integrated by ground-based lidar technology and analyzed by overlapping with cadastral maps and urban planning maps to obtain accurate spatial relationships. The point cloud data measurements can be used to obtain the location and depth of the tunnels, which can be used as a reference for land disputes, urban planning, engineering design, and preservation or restoration plans for cultural assets. Full article
18 pages, 11067 KiB  
Article
Enhancing Deep Learning-Based Segmentation Accuracy through Intensity Rendering and 3D Point Interpolation Techniques to Mitigate Sensor Variability
by Myeong-Jun Kim, Suyeon Kim, Banghyon Lee and Jungha Kim
Sensors 2024, 24(14), 4475; https://doi.org/10.3390/s24144475 - 11 Jul 2024
Abstract
In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements. Full article
(This article belongs to the Special Issue Sensors for Intelligent Vehicles and Autonomous Driving)
34 pages, 14681 KiB  
Article
Performance Evaluation and Optimization of 3D Models from Low-Cost 3D Scanning Technologies for Virtual Reality and Metaverse E-Commerce
by Rubén Grande, Javier Albusac, David Vallejo, Carlos Glez-Morcillo and José Jesús Castro-Schez
Appl. Sci. 2024, 14(14), 6037; https://doi.org/10.3390/app14146037 - 10 Jul 2024
Abstract
Virtual Reality (VR) is and will be a key driver in the evolution of e-commerce, providing an immersive and gamified shopping experience. However, for VR shopping spaces to become a reality, retailers’ product catalogues must first be digitised into 3D models. While this may be a simple task for retail giants, it can be a major obstacle for small retailers, whose human and financial resources are often more limited, making them less competitive. Therefore, this paper presents an analysis of low-cost scanning technologies for small business owners to digitise their products and make them available on VR shopping platforms, with the aim of helping improve the competitiveness of small businesses through VR and Artificial Intelligence (AI). The technologies considered are photogrammetry, LiDAR sensors, and NeRF. In addition to investigating which technology provides the best visual quality of 3D models based on metrics and quantitative results, these models must also offer good performance in commercial VR headsets. In this way, we also analyse the performance of such models when running on Meta Quest 2, Quest Pro and Quest 3 headsets (Reality Labs, CA, USA) to determine their feasibility and provide use cases for each type of model from a scalability point of view. Finally, our work describes a model optimisation process that reduces the polygon count and texture size of high-poly models, converting them into more performance-friendly versions without significantly compromising visual quality. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
21 pages, 8476 KiB  
Article
Enhanced Strapdown Inertial Navigation System (SINS)/LiDAR Tightly Integrated Simultaneous Localization and Mapping (SLAM) for Urban Structural Feature Weaken Occasions in Vehicular Platform
by Xu Xu, Lianwu Guan, Yanbin Gao, Yufei Chen and Zhejun Liu
Remote Sens. 2024, 16(14), 2527; https://doi.org/10.3390/rs16142527 - 10 Jul 2024
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) offers robustness against illumination changes, but the inherent sparsity of LiDAR point clouds poses challenges for continuous tracking and navigation, especially in feature-deprived scenarios. This paper proposes a novel LiDAR/SINS tightly integrated SLAM algorithm designed to address the localization challenges in urban environments characterized by sparse structural features. Firstly, the method extracts edge points from the LiDAR point cloud using a traditional segmentation method and clusters them to form distinctive edge lines. Then, a rotation-invariant feature—line distance—is calculated from the edge line properties, inspired by the traditional tightly integrated navigation system. This line distance is utilized as the observation in a Kalman filter that is integrated into a tightly coupled LiDAR/SINS system. This system tracks the same edge lines across multiple frames for filtering and correction instead of tracking points or LiDAR odometry results. Meanwhile, for loop closure, the method modifies the common SCANCONTEXT algorithm by designating all bins that do not reach the maximum height as special loop keys, which reduces false matches. Finally, experimental validation conducted in urban environments with sparse structural features demonstrated a 17% improvement in positioning accuracy when compared to conventional point-based methods. Full article
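Why a point-to-line distance makes a good rotation-invariant observation can be shown in two dimensions (the paper works with 3D edge lines; the points and the rotation check below are illustrative only): the perpendicular distance from a point to a line is unchanged under a rigid rotation applied to both.

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b,
    via the 2D cross-product magnitude divided by the segment length."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    cross = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return cross / math.hypot(bx - ax, by - ay)

def rot(p, th):
    """Rotate a 2D point by angle th about the origin."""
    c, s = math.cos(th), math.sin(th)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

d0 = point_line_dist((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))
th = 0.7  # arbitrary rotation
d1 = point_line_dist(rot((0.0, 1.0), th), rot((0.0, 0.0), th), rot((1.0, 0.0), th))
```

Because the observation does not change when the whole scene is rotated, attitude error in the SINS propagation shows up cleanly in the Kalman innovation rather than corrupting the feature itself.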
18 pages, 7108 KiB  
Article
Inversion of Soybean Net Photosynthetic Rate Based on UAV Multi-Source Remote Sensing and Machine Learning
by Zhen Lu, Wenbo Yao, Shuangkang Pei, Yuwei Lu, Heng Liang, Dong Xu, Haiyan Li, Lejun Yu, Yonggang Zhou and Qian Liu
Agronomy 2024, 14(7), 1493; https://doi.org/10.3390/agronomy14071493 - 10 Jul 2024
Abstract
Net photosynthetic rate (Pn) is a common indicator used to measure the efficiency of photosynthesis and growth conditions of plants. In this study, soybeans under different moisture gradients were selected as the research objects. Fourteen vegetation indices (VIS) and five canopy structure characteristics (CSC) (plant height (PH), volume (V), canopy cover (CC), canopy length (L), and canopy width (W)) were obtained using an unmanned aerial vehicle (UAV) equipped with three different sensors (visible, multispectral, and LiDAR) at five growth stages of soybeans. Soybean Pn was simultaneously measured manually in the field. The variability of soybean Pn under different conditions and the trend change of CSC under different moisture gradients were analysed. VIS, CSC, and their combinations were used as input features, and four machine learning algorithms (multiple linear regression, random forest, Extreme gradient-boosting tree regression, and ridge regression) were used to perform soybean Pn inversion. The results showed that, compared with the inversion model using VIS or CSC as features alone, the inversion model using the combination of VIS and CSC features showed a significant improvement in the inversion accuracy at all five stages. The highest accuracy (R2 = 0.86, RMSE = 1.73 µmol m−2 s−1, RPD = 2.63) was achieved 63 days after sowing (DAS63). Full article
14 pages, 11264 KiB  
Article
Robust BEV 3D Object Detection for Vehicles with Tire Blow-Out
by Dongsheng Yang, Xiaojie Fan, Wei Dong, Chaosheng Huang and Jun Li
Sensors 2024, 24(14), 4446; https://doi.org/10.3390/s24144446 - 9 Jul 2024
Abstract
The bird’s-eye view (BEV) method, which is a vision-centric representation-based perception task, is essential and promising for future Autonomous Vehicle perception. It is fusion-friendly and intuitive, supports end-to-end optimization, and is cheaper than LiDAR. The performance of existing BEV methods, however, deteriorates in a tire blow-out situation. This is because they rely heavily on accurate camera calibration, which may be invalidated by noisy camera parameters during a blow-out. Therefore, it is extremely unsafe to use existing BEV methods in a tire blow-out situation. In this paper, we propose a geometry-guided auto-resizable kernel transformer (GARKT) method, which is designed especially for vehicles with tire blow-out. Specifically, we establish a camera deviation model for vehicles with tire blow-out. Then we use the geometric priors to attain the prior position in perspective view with auto-resizable kernels. The resizable perception areas are encoded and flattened to generate the BEV representation. GARKT achieves a nuScenes detection score (NDS) of 0.439 on a newly created blow-out dataset based on nuScenes. The NDS remains 0.431 even when the tire is completely flat, which is much more robust compared to other transformer-based BEV methods. Moreover, the GARKT method has almost real-time computing speed, with about 20.5 fps on one GPU. Full article
(This article belongs to the Section Vehicular Sensing)