Search Results (1,341)

Search Parameters:
Keywords = LiDAR sensor

21 pages, 4887 KiB  
Article
Driving Assistance System with Obstacle Avoidance for Electric Wheelchairs
by Esranur Erturk, Soonkyum Kim and Dongyoung Lee
Sensors 2024, 24(14), 4644; https://doi.org/10.3390/s24144644 - 17 Jul 2024
Viewed by 204
Abstract
A system has been developed to convert manual wheelchairs into electric wheelchairs, providing assistance to users through the implemented algorithm, which ensures safe driving and obstacle avoidance. While manual wheelchairs are typically controlled indoors based on user preferences, they do not guarantee safe driving in areas outside the user’s field of vision. The proposed model utilizes the dynamic window approach specifically designed for wheelchair use, allowing for obstacle avoidance. This method evaluates potential movements within a defined velocity space to calculate the optimal path, providing seamless and safe driving assistance in real time. This innovative approach enhances user assistance and safety by integrating state-of-the-art algorithms developed using the dynamic window approach alongside advanced sensor technology. With the assistance of LiDAR sensors, the system perceives the wheelchair’s surroundings, generating real-time speed values within the algorithm framework to ensure secure driving. The model’s ability to adapt to indoor environments and its robust performance in real-world scenarios underscore its potential for widespread application. The system has undergone various tests, which conclusively show that it aids users in avoiding obstacles and ensures safe driving. These tests demonstrate significant improvements in maneuverability and user safety, highlighting a noteworthy advancement in assistive technology for individuals with limited mobility. Full article
(This article belongs to the Section Sensors Development)
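
As a rough illustration of the dynamic window approach the abstract describes, the sketch below samples admissible velocity commands, forward-simulates each one against LiDAR-detected obstacle points, and scores the resulting trajectories. All parameter values, weights, and names are illustrative assumptions, not the authors' wheelchair implementation.

```python
import numpy as np

def dwa_step(pose, v, w, goal, obstacles,
             v_max=1.0, w_max=1.5, a_max=0.5, aw_max=1.0,
             dt=0.1, horizon=2.0, robot_radius=0.4):
    """One dynamic-window step: sample (v, w) pairs reachable within one
    control cycle, roll out short trajectories, and score them by goal
    heading, obstacle clearance, and speed. `obstacles` is an (M, 2) array
    of LiDAR points in the robot's planning frame."""
    best, best_cmd = -np.inf, (0.0, 0.0)
    for v_c in np.linspace(max(0.0, v - a_max * dt), min(v_max, v + a_max * dt), 7):
        for w_c in np.linspace(max(-w_max, w - aw_max * dt), min(w_max, w + aw_max * dt), 15):
            x, y, th = pose
            traj = []
            for _ in range(int(horizon / dt)):      # forward-simulate the candidate command
                th += w_c * dt
                x += v_c * np.cos(th) * dt
                y += v_c * np.sin(th) * dt
                traj.append((x, y))
            traj = np.asarray(traj)
            clearance = np.min(np.linalg.norm(obstacles[None, :, :] - traj[:, None, :], axis=2))
            if clearance < robot_radius:             # trajectory collides: discard
                continue
            heading = -np.linalg.norm(np.array(goal) - traj[-1])   # closer to goal is better
            score = heading + 0.5 * min(clearance, 2.0) + 0.2 * v_c
            if score > best:
                best, best_cmd = score, (v_c, w_c)
    return best_cmd   # (linear, angular) command for this control cycle
```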

30 pages, 10784 KiB  
Article
Phenology and Plant Functional Type Link Optical Properties of Vegetation Canopies to Patterns of Vertical Vegetation Complexity
by Duncan Jurayj, Rebecca Bowers and Jessica V. Fayne
Remote Sens. 2024, 16(14), 2577; https://doi.org/10.3390/rs16142577 - 13 Jul 2024
Viewed by 411
Abstract
Vegetation vertical complexity influences biodiversity and ecosystem productivity. Rapid warming in the boreal region is altering patterns of vertical complexity. LiDAR sensors offer novel structural metrics for quantifying these changes, but their spatiotemporal limitations and their need for ecological context complicate their application and interpretation. Satellite variables can estimate LiDAR metrics, but retrievals of vegetation structure using optical reflectance can lack interpretability and accuracy. We compare vertical complexity from the airborne LiDAR Land Vegetation and Ice Sensor (LVIS) in boreal Canada and Alaska to plant functional type, optical, and phenological variables. We show that spring onset and green season length from satellite phenology algorithms are more strongly correlated with vegetation vertical complexity (R = 0.43–0.63) than optical reflectance (R = 0.03–0.43). Median annual temperature explained patterns of vegetation vertical complexity (R = 0.45), but only when paired with plant functional type data. Random forest models effectively learned patterns of vegetation vertical complexity using plant functional type and phenological variables, but the validation performance depended on the validation methodology (R2 = 0.50–0.80). In correlating satellite phenology, plant functional type, and vegetation vertical complexity, we propose new methods of retrieving vertical complexity with satellite data. Full article

19 pages, 2503 KiB  
Article
Real-Time Multimodal 3D Object Detection with Transformers
by Hengsong Liu and Tongle Duan
World Electr. Veh. J. 2024, 15(7), 307; https://doi.org/10.3390/wevj15070307 - 12 Jul 2024
Viewed by 346
Abstract
The accuracy and real-time performance of 3D object detection are key factors limiting its widespread application. While cameras capture detailed color and texture features, they lack depth information compared to LiDAR. Multimodal detection combining both can improve results but incurs significant computational overhead, affecting real-time performance. To address these challenges, this paper presents a real-time multimodal fusion model called Fast Transfusion that combines the benefits of LiDAR and camera sensors and reduces the computational burden of their fusion. Specifically, our Fast Transfusion method uses QConv (Quick Convolution) in place of the convolutional backbones used by other models. QConv concentrates the convolution operations at the feature map center, where the most information resides, to expedite inference. It also utilizes deformable convolution to better match the actual shapes of detected objects, enhancing accuracy. The model also incorporates an EH Decoder (Efficient and Hybrid Decoder), which decouples multiscale fusion into intra-scale interaction and cross-scale fusion, efficiently decoding and integrating features extracted from multimodal data. Furthermore, our proposed semi-dynamic query selection refines the initialization of object queries. On the KITTI 3D object detection dataset, our proposed approach reduced the inference time by 36 ms and improved 3D AP by 1.81% compared to state-of-the-art methods. Full article

18 pages, 11067 KiB  
Article
Enhancing Deep Learning-Based Segmentation Accuracy through Intensity Rendering and 3D Point Interpolation Techniques to Mitigate Sensor Variability
by Myeong-Jun Kim, Suyeon Kim, Banghyon Lee and Jungha Kim
Sensors 2024, 24(14), 4475; https://doi.org/10.3390/s24144475 - 11 Jul 2024
Viewed by 291
Abstract
In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements. Full article
(This article belongs to the Special Issue Sensors for Intelligent Vehicles and Autonomous Driving)
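
The paper's exact rendering and interpolation procedures are not reproduced here; the following is a minimal sketch of the general idea only: rescaling the intensity channel of one sensor to match the statistics of the training sensor, and synthesizing virtual points between adjacent scan rings to mimic a denser beam pattern. All names and parameters are assumptions.

```python
import numpy as np

def render_intensity(points, target_mean, target_std):
    """Illustrative intensity rendering: rescale the source sensor's intensity
    channel so its distribution matches the training sensor's statistics.
    `points` is an (N, 4) array of [x, y, z, intensity] rows."""
    xyz, inten = points[:, :3], points[:, 3]
    inten = (inten - inten.mean()) / (inten.std() + 1e-6)
    inten = inten * target_std + target_mean
    return np.hstack([xyz, inten[:, None]])

def interpolate_rings(ring_a, ring_b):
    """Illustrative 3D point interpolation: synthesize a virtual ring halfway
    between two vertical scan rings, assuming both have been resampled to the
    same number of azimuth bins."""
    n = min(len(ring_a), len(ring_b))
    return 0.5 * (ring_a[:n] + ring_b[:n])
```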

34 pages, 14681 KiB  
Article
Performance Evaluation and Optimization of 3D Models from Low-Cost 3D Scanning Technologies for Virtual Reality and Metaverse E-Commerce
by Rubén Grande, Javier Albusac, David Vallejo, Carlos Glez-Morcillo and José Jesús Castro-Schez
Appl. Sci. 2024, 14(14), 6037; https://doi.org/10.3390/app14146037 - 10 Jul 2024
Viewed by 456
Abstract
Virtual Reality (VR) is and will be a key driver in the evolution of e-commerce, providing an immersive and gamified shopping experience. However, for VR shopping spaces to become a reality, retailers’ product catalogues must first be digitised into 3D models. While this may be a simple task for retail giants, it can be a major obstacle for small retailers, whose human and financial resources are often more limited, making them less competitive. Therefore, this paper presents an analysis of low-cost scanning technologies for small business owners to digitise their products and make them available on VR shopping platforms, with the aim of helping improve the competitiveness of small businesses through VR and Artificial Intelligence (AI). The technologies considered are photogrammetry, LiDAR sensors, and NeRF. In addition to investigating which technology provides the best visual quality of 3D models based on metrics and quantitative results, these models must also offer good performance in commercial VR headsets. In this way, we also analyse the performance of such models when running on Meta Quest 2, Quest Pro and Quest 3 headsets (Reality Labs, CA, USA) to determine their feasibility and provide use cases for each type of model from a scalability point of view. Finally, our work describes a model optimisation process that reduces the polygon count and texture size of high-poly models, converting them into more performance-friendly versions without significantly compromising visual quality. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
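
A minimal sketch of the kind of optimisation pass described, decimating a scanned mesh and downscaling its texture with off-the-shelf tools (Open3D and Pillow). File paths, extensions, and target budgets are placeholders, not values from the paper.

```python
import open3d as o3d
from PIL import Image

def optimise_asset(mesh_path, texture_path, target_triangles=20000, max_texture=1024):
    """Decimate a high-poly scanned product model and shrink its texture so it
    renders comfortably on standalone VR headsets (illustrative targets)."""
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    simplified = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_triangles)   # quadric-error decimation
    simplified.compute_vertex_normals()
    o3d.io.write_triangle_mesh(mesh_path.replace(".obj", "_lowpoly.obj"), simplified)

    tex = Image.open(texture_path)
    tex.thumbnail((max_texture, max_texture))           # preserves aspect ratio
    tex.save(texture_path.replace(".png", "_small.png"))
```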

18 pages, 7108 KiB  
Article
Inversion of Soybean Net Photosynthetic Rate Based on UAV Multi-Source Remote Sensing and Machine Learning
by Zhen Lu, Wenbo Yao, Shuangkang Pei, Yuwei Lu, Heng Liang, Dong Xu, Haiyan Li, Lejun Yu, Yonggang Zhou and Qian Liu
Agronomy 2024, 14(7), 1493; https://doi.org/10.3390/agronomy14071493 - 10 Jul 2024
Viewed by 278
Abstract
Net photosynthetic rate (Pn) is a common indicator used to measure the efficiency of photosynthesis and growth conditions of plants. In this study, soybeans under different moisture gradients were selected as the research objects. Fourteen vegetation indices (VIS) and five canopy structure characteristics (CSC) (plant height (PH), volume (V), canopy cover (CC), canopy length (L), and canopy width (W)) were obtained using an unmanned aerial vehicle (UAV) equipped with three different sensors (visible, multispectral, and LiDAR) at five growth stages of soybeans. Soybean Pn was simultaneously measured manually in the field. The variability of soybean Pn under different conditions and the trend change of CSC under different moisture gradients were analysed. VIS, CSC, and their combinations were used as input features, and four machine learning algorithms (multiple linear regression, random forest, extreme gradient-boosting tree regression, and ridge regression) were used to perform soybean Pn inversion. The results showed that, compared with the inversion model using VIS or CSC as features alone, the inversion model using the combination of VIS and CSC features showed a significant improvement in the inversion accuracy at all five stages. The highest accuracy (R2 = 0.86, RMSE = 1.73 µmol m−2 s−1, RPD = 2.63) was achieved 63 days after sowing (DAS63). Full article
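
A minimal sketch of the feature-combination idea using scikit-learn, with randomly generated stand-ins for the VIS and CSC feature tables; the paper's actual data, hyperparameters, and validation scheme are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical arrays: rows are soybean plots, columns are the 14 vegetation
# indices (VIS) and 5 canopy structure characteristics (CSC); pn holds the
# field-measured net photosynthetic rate.
rng = np.random.default_rng(0)
vis = rng.random((120, 14))
csc = rng.random((120, 5))
pn = rng.random(120)

features = np.hstack([vis, csc])   # the VIS + CSC combination evaluated in the paper
model = RandomForestRegressor(n_estimators=300, random_state=0)
r2_scores = cross_val_score(model, features, pn, cv=5, scoring="r2")
print(f"mean cross-validated R2: {r2_scores.mean():.2f}")
```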

16 pages, 3438 KiB  
Article
Fruit Position, Light Exposure and Fruit Surface Temperature Affect Colour Expression in a Dark-Red Apple Cultivar
by Madeleine Peavey, Alessio Scalisi, Muhammad S. Islam and Ian Goodwin
Horticulturae 2024, 10(7), 725; https://doi.org/10.3390/horticulturae10070725 - 9 Jul 2024
Viewed by 638
Abstract
This study aimed to evaluate the effects of fruit position, light exposure and fruit surface temperature (FST) on apple fruit colour development and fruit quality at harvest, including sunburn damage severity. This was achieved by undertaking two experiments in a high-density planting of the dark-red apple ANABP 01 in Tatura, Australia. In the 2020–2021 growing season an experiment was conducted to draw relationships between fruit position and fruit quality parameters. Here, sample fruit position and level of light exposure were respectively determined using a static LiDAR system and a portable quantum photosynthetically active radiation (PAR) sensor. At harvest the sample fruit were analysed for percentage red colour coverage, objective colour parameters (L*, a*, b*, hue angle and chroma), sunburn damage, fruit diameter (FD), soluble solids concentration (SSC), flesh firmness (FF) and starch pattern index (SPI). A second experiment was conducted in the 2021–2022 growing season and focused on how fruit shading, light exposure and the removal of ultraviolet (UV) radiation affected the FST, colour development and harvest fruit quality. Five treatments were distributed among sample fruit: fully shaded with aluminium umbrellas, shaded for one month and then exposed to sunlight until harvest, exposed for one month and then shaded until harvest, covered with a longpass UV filter and a control treatment. The development of colour in this dark-red apple cultivar was highly responsive to aspects of fruit position, and the intensity and quality of light exposure. The best-coloured fruit were exposed to higher quantities of PAR, exposed to both PAR and UV radiation simultaneously and located higher in the tree canopy. Fruit that were fully exposed to PAR and achieved better colour development also displayed higher FST and sunburn damage severity. Full article
(This article belongs to the Section Fruit Production Systems)

15 pages, 6273 KiB  
Article
Deriving Verified Vehicle Trajectories from LiDAR Sensor Data to Evaluate Traffic Signal Performance
by Enrique D. Saldivar-Carranza and Darcy M. Bullock
Future Transp. 2024, 4(3), 765-779; https://doi.org/10.3390/futuretransp4030036 - 9 Jul 2024
Viewed by 450
Abstract
Advances and cost reductions in Light Detection and Ranging (LiDAR) sensor technology have allowed for their implementation in detecting vehicles, cyclists, and pedestrians at signalized intersections. Most LiDAR use cases have focused on safety analyses using its high-fidelity tracking capabilities. This study presents a methodology to transform LiDAR data into localized, verified, and linear-referenced trajectories to derive Purdue Probe Diagrams (PPDs). The following four performance measures are then derived from the PPDs: arrivals on green (AOG), split failures (SF), downstream blockage (DSB), and control delay level of service (LOS). Noise is filtered for each detected vehicle by iteratively projecting each sample’s future location and keeping the subsequent sample that is close enough to the estimated destination. Then, a far side is defined for the analyzed intersection’s movement to linearly reference the sampled trajectories and to remove those that do not cross through that point. The technique is demonstrated by using over one hour of LiDAR data at an intersection in Utah to derive PPDs. Signal performance is then estimated from these PPDs. The results are compared to those obtained from comparable PPDs derived from connected vehicle (CV) trajectory data. The generated PPDs from both data sources are similar, with relatively modest differences of 1% AOG and a 1.39 s/veh control delay. Practitioners can use the presented methodology to estimate trajectory-based traffic signal performance measures from their deployed LiDAR sensors. The paper concludes by recommending that unfiltered LiDAR data be used for deriving PPDs and that detection zones be extended to cover the largest observed queues to improve performance estimation reliability. Full article
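
The projection-based noise filter lends itself to a short sketch. The version below assumes each sample carries a position and velocity estimate; the threshold and time step are illustrative, not the authors' values.

```python
import numpy as np

def filter_track(samples, dt=0.1, max_jump=2.0):
    """Keep only trajectory samples consistent with a projected next position.
    `samples` is an (N, 4) array of [x, y, vx, vy] rows for one vehicle;
    each accepted sample must land within `max_jump` metres of the position
    predicted from the previously accepted sample."""
    kept = [samples[0]]
    for s in samples[1:]:
        x, y, vx, vy = kept[-1]
        predicted = np.array([x + vx * dt, y + vy * dt])
        if np.linalg.norm(s[:2] - predicted) <= max_jump:
            kept.append(s)
    return np.asarray(kept)
```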

20 pages, 1958 KiB  
Article
Integrating LiDAR Sensor Data into Microsimulation Model Calibration for Proactive Safety Analysis
by Morris Igene, Qiyang Luo, Keshav Jimee, Mohammad Soltanirad, Tamer Bataineh and Hongchao Liu
Sensors 2024, 24(13), 4393; https://doi.org/10.3390/s24134393 - 6 Jul 2024
Viewed by 688
Abstract
Studies have shown that vehicle trajectory data are effective for calibrating microsimulation models. Light Detection and Ranging (LiDAR) technology offers high-resolution 3D data, allowing for detailed mapping of the surrounding environment, including road geometry, roadside infrastructures, and moving objects such as vehicles, cyclists, and pedestrians. Unlike other traditional methods of trajectory data collection, LiDAR’s high-speed data processing, fine angular resolution, high measurement accuracy, and high performance in adverse weather and low-light conditions make it well suited for applications requiring real-time response, such as autonomous vehicles. This research presents a comprehensive framework for integrating LiDAR sensor data into simulation models and their accurate calibration strategies for proactive safety analysis. Vehicle trajectory data were extracted from LiDAR point clouds collected at six urban signalized intersections in Lubbock, Texas, in the USA. Each study intersection was modeled with PTV VISSIM and calibrated to replicate the observed field scenarios. The Directed Brute Force method was used to calibrate two car-following and two lane-change parameters of the Wiedemann 1999 model in VISSIM, resulting in an average accuracy of 92.7%. Rear-end conflicts extracted from the calibrated models combined with a ten-year historical crash dataset were fitted into a Negative Binomial (NB) model to estimate the model’s parameters. At all six intersections, rear-end conflict count is a statistically significant predictor (p-value < 0.05) of observed rear-end crash frequency. The outcome of this study provides a framework for the combined use of LiDAR-based vehicle trajectory data, microsimulation, and surrogate safety assessment tools to transportation professionals. This integration allows for more accurate and proactive safety evaluations, which are essential for designing safer transportation systems, effective traffic control strategies, and predicting future congestion problems. Full article
(This article belongs to the Special Issue Vehicle Sensing and Dynamic Control)
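
A minimal sketch of fitting a Negative Binomial model with statsmodels, using made-up conflict and crash counts purely to show the mechanics; the dispersion parameter is fixed here, whereas a full analysis would estimate it from the data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-intersection data: simulated rear-end conflict counts and
# observed ten-year rear-end crash frequencies (not the paper's values).
conflicts = np.array([120, 85, 210, 60, 150, 95])
crashes = np.array([14, 9, 27, 5, 18, 11])

X = sm.add_constant(conflicts)
nb_model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial()).fit()
print(nb_model.summary())   # the p-value on the conflict coefficient tests its significance
```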

24 pages, 13355 KiB  
Article
Enhanced Object Detection in Autonomous Vehicles through LiDAR—Camera Sensor Fusion
by Zhongmou Dai, Zhiwei Guan, Qiang Chen, Yi Xu and Fengyi Sun
World Electr. Veh. J. 2024, 15(7), 297; https://doi.org/10.3390/wevj15070297 - 3 Jul 2024
Viewed by 625
Abstract
To realize accurate environment perception, which is the technological key to enabling autonomous vehicles to interact with their external environments, it is primarily necessary to solve the issues of object detection and tracking in the vehicle-movement process. Multi-sensor fusion has become an essential process in efforts to overcome the shortcomings of individual sensor types and improve the efficiency and reliability of autonomous vehicles. This paper puts forward moving object detection and tracking methods based on LiDAR—camera fusion. Operating based on the calibration of the camera and LiDAR technology, this paper uses YOLO and PointPillars network models to perform object detection based on image and point cloud data. Then, a target box intersection-over-union (IoU) matching strategy, based on center-point distance probability and the improved Dempster–Shafer (D–S) theory, is used to perform class confidence fusion to obtain the final fusion detection result. In the process of moving object tracking, the DeepSORT algorithm is improved to address the issue of identity switching resulting from dynamic objects re-emerging after occlusion. An unscented Kalman filter is utilized to accurately predict the motion state of nonlinear objects, and object motion information is added to the IoU matching module to improve the matching accuracy in the data association process. Through self-collected data verification, the performances of fusion detection and tracking are judged to be significantly better than those of a single sensor. The evaluation indexes of the improved DeepSORT algorithm are 66% for MOTA and 79% for MOTP, which are, respectively, 10% and 5% higher than those of the original DeepSORT algorithm. The improved DeepSORT algorithm effectively solves the problem of tracking instability caused by the occlusion of moving objects. Full article
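
The centre-distance-weighted IoU matching step can be sketched as follows. The Gaussian weighting and its sigma are assumptions for illustration, and the Dempster-Shafer confidence fusion stage is omitted.

```python
import numpy as np

def iou(box_a, box_b):
    """Axis-aligned IoU for [x1, y1, x2, y2] boxes in image coordinates."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-9)

def match_score(cam_box, lidar_box, sigma=50.0):
    """Illustrative camera/LiDAR box association score: IoU weighted by a
    Gaussian probability of the centre-point distance (sigma in pixels is an
    assumed parameter, not taken from the paper)."""
    ca = np.array([(cam_box[0] + cam_box[2]) / 2, (cam_box[1] + cam_box[3]) / 2])
    cb = np.array([(lidar_box[0] + lidar_box[2]) / 2, (lidar_box[1] + lidar_box[3]) / 2])
    p_centre = np.exp(-np.linalg.norm(ca - cb) ** 2 / (2 * sigma ** 2))
    return iou(cam_box, lidar_box) * p_centre
```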

13 pages, 17412 KiB  
Article
Geometric Fidelity Requirements for Meshes in Automotive Lidar Simulation
by Christopher Goodin, Marc N. Moore, Daniel W. Carruth, Zachary Aspin and John Kaniarz
Virtual Worlds 2024, 3(3), 270-282; https://doi.org/10.3390/virtualworlds3030014 - 3 Jul 2024
Viewed by 361
Abstract
The perception of vegetation is a critical aspect of off-road autonomous navigation, and consequently a critical aspect of the simulation of autonomous ground vehicles (AGVs). Representing vegetation with triangular meshes requires detailed geometric modeling that captures the intricacies of small branches and leaves. In this work, we propose to answer the question, “What degree of geometric fidelity is required to realistically simulate lidar in AGV simulations?” To answer this question, we present an analysis that determines the required geometric fidelity of digital scenes and assets used in the simulation of AGVs. Focusing on vegetation, we use a comparison of the real and simulated perceived distribution of leaf orientation angles in lidar point clouds to determine the number of triangles required to reliably reproduce realistic results. By comparing real lidar scans of vegetation to simulated lidar scans of vegetation with a variety of geometric fidelities, we find that digital tree models (meshes) need to have a minimum triangle density of >1600 triangles per cubic meter in order to accurately reproduce the geometric properties of lidar scans of real vegetation, with a recommended triangle density of 11,000 triangles per cubic meter for best performance. Furthermore, by comparing these experiments to past work investigating the same question for cameras, we develop a general “rule-of-thumb” for vegetation mesh fidelity in AGV sensor simulation. Full article
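
To relate a tree mesh to the paper's triangle-density thresholds, one can compute triangles per cubic metre. The sketch below uses the mesh's axis-aligned bounding box as a stand-in for canopy volume, which is an assumption of this example rather than the paper's definition.

```python
import numpy as np

def triangle_density(vertices, faces):
    """Triangles per cubic metre of a tree mesh, using the axis-aligned
    bounding-box volume of its vertices as a simple canopy-volume proxy.
    `vertices` is (N, 3) in metres, `faces` is (M, 3) vertex indices."""
    extent = vertices.max(axis=0) - vertices.min(axis=0)
    volume = float(np.prod(extent))
    return len(faces) / volume

# Rule-of-thumb from the paper: >1600 triangles/m^3 minimum,
# ~11,000 triangles/m^3 recommended for lidar simulation.
```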

31 pages, 2478 KiB  
Article
Precise Adverse Weather Characterization by Deep-Learning-Based Noise Processing in Automotive LiDAR Sensors
by Marcel Kettelgerdes, Nicolas Sarmiento, Hüseyin Erdogan, Bernhard Wunderle and Gordon Elger
Remote Sens. 2024, 16(13), 2407; https://doi.org/10.3390/rs16132407 - 30 Jun 2024
Viewed by 663
Abstract
With current advances in automated driving, optical sensors like cameras and LiDARs are playing an increasingly important role in modern driver assistance systems. However, these sensors face challenges from adverse weather effects like fog and precipitation, which significantly degrade sensor performance due to scattering effects in their optical paths. Consequently, major efforts are being made to understand, model, and mitigate these effects. In this work, the reverse research question is investigated, demonstrating that these measurement effects can be exploited to predict occurring weather conditions by using state-of-the-art deep learning mechanisms. In order to do so, a variety of models have been developed and trained on a recorded multiseason dataset and benchmarked with respect to performance, model size, and required computational resources, showing that especially modern vision transformers achieve remarkable results in distinguishing up to 15 precipitation classes with an accuracy of 84.41% and predicting the corresponding precipitation rate with a mean absolute error of less than 0.47 mm/h, solely based on measurement noise. Therefore, this research may contribute to a cost-effective solution for characterizing precipitation with a commercial Flash LiDAR sensor, which can be implemented as a lightweight vehicle software feature to issue advanced driver warnings, adapt driving dynamics, or serve as a data quality measure for adaptive data preprocessing and fusion. Full article

24 pages, 6484 KiB  
Article
The Effectiveness of UWB-Based Indoor Positioning Systems for the Navigation of Visually Impaired Individuals
by Maria Rosiak, Mateusz Kawulok and Michał Maćkowski
Appl. Sci. 2024, 14(13), 5646; https://doi.org/10.3390/app14135646 - 28 Jun 2024
Viewed by 589
Abstract
UWB has been in existence for several years, but it was only a few years ago that it transitioned from a specialized niche to more mainstream applications. Recent market data indicate a rapid increase in the popularity of UWB in consumer products, such as smartphones and smart home devices, as well as automotive and industrial real-time location systems. The challenge of achieving accurate positioning in indoor environments arises from various factors such as distance, location, beacon density, dynamic surroundings, and the density and type of obstacles. This research used MFi-certified UWB beacon chipsets and integrated them with a mobile application dedicated to iOS by implementing the near interaction accessory protocol. The analysis covers both static and dynamic cases. Thanks to the acquisition of measurements, two main candidates for indoor localization infrastructure were analyzed and compared in terms of accuracy, namely UWB and LIDAR, with the latter used as a reference system. The problem of achieving accurate positioning in various applications and environments was analyzed, and future solutions were proposed. The results show that the achieved accuracy is sufficient for tracking individuals and may serve as guidelines for achievable accuracy or may provide a basis for further research into a complex sensor fusion-based navigation system. This research provides several findings. Firstly, in dynamic conditions, LIDAR measurements showed higher accuracy than UWB beacons. Secondly, integrating data from multiple sensors could enhance localization accuracy in non-line-of-sight scenarios. Lastly, advancements in UWB technology may expand the availability of competitive hardware, facilitating a thorough evaluation of its accuracy and effectiveness in practical systems. These insights may be particularly useful in designing navigation systems for blind individuals in buildings. Full article

23 pages, 9558 KiB  
Data Descriptor
A Point Cloud Dataset of Vehicles Passing through a Toll Station for Use in Training Classification Algorithms
by Alexander Campo-Ramírez, Eduardo F. Caicedo-Bravo and Eval B. Bacca-Cortes
Data 2024, 9(7), 87; https://doi.org/10.3390/data9070087 - 27 Jun 2024
Viewed by 371
Abstract
This work presents a point cloud dataset of vehicles passing through a toll station in Colombia to be used to train artificial vision and computational intelligence algorithms. This article details the process of creating the dataset, covering initial data acquisition, range information preprocessing, point cloud validation, and vehicle labeling. Additionally, a detailed description of the structure and content of the dataset is provided, along with some potential applications of its use. The dataset consists of 36,026 total objects divided into 6 classes: 31,432 cars, campers, vans and 2-axle trucks with a single tire on the rear axle, 452 minibuses with a single tire on the rear axle, 1158 buses, 1179 2-axle small trucks, 797 2-axle large trucks, and 1008 trucks with 3 or more axles. The point clouds were captured using a LiDAR sensor and Doppler effect speed sensors. The dataset can be used to train and evaluate algorithms for range data processing, vehicle classification, vehicle counting, and traffic flow analysis. The dataset can also be used to develop new applications for intelligent transportation systems. Full article

15 pages, 4809 KiB  
Article
LiDAR Point Cloud Super-Resolution Reconstruction Based on Point Cloud Weighted Fusion Algorithm of Improved RANSAC and Reciprocal Distance
by Xiaoping Yang, Ping Ni, Zhenhua Li and Guanghui Liu
Electronics 2024, 13(13), 2521; https://doi.org/10.3390/electronics13132521 - 27 Jun 2024
Viewed by 354
Abstract
This paper proposes a point-by-point weighted fusion algorithm based on an improved random sample consensus (RANSAC) and inverse distance weighting to address the issue of low-resolution point cloud data obtained from light detection and ranging (LiDAR) sensors and single technologies. By fusing low-resolution point clouds with higher-resolution point clouds at the data level, the algorithm generates high-resolution point clouds, achieving super-resolution reconstruction of LiDAR point clouds. This method effectively reduces noise in the higher-resolution point clouds while preserving the structure of the low-resolution point clouds, ensuring that the semantic information of the generated high-resolution point clouds remains consistent with that of the low-resolution point clouds. Specifically, the algorithm constructs a K-d tree using the low-resolution point cloud to perform a nearest neighbor search, establishing the correspondence between the low-resolution and higher-resolution point clouds. Next, the improved RANSAC algorithm is employed for point cloud alignment, and inverse distance weighting is used for point-by-point weighted fusion, ultimately yielding the high-resolution point cloud. The experimental results demonstrate that the proposed point cloud super-resolution reconstruction method outperforms other methods across various metrics. Notably, it reduces the Chamfer Distance (CD) metric by 0.49 and 0.29 and improves the Precision metric by 7.75% and 4.47%, respectively, compared to two other methods. Full article
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)
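
A minimal sketch of the K-d tree lookup and inverse-distance-weighted blending described in the abstract; the improved-RANSAC alignment step is assumed to have been applied beforehand, and the neighbour count and distance power are illustrative values.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_fuse(low_res, high_res, k=4, power=2.0):
    """Point-by-point fusion sketch: for every low-resolution point, find its
    k nearest higher-resolution neighbours with a K-d tree and blend them with
    inverse-distance weights. Both clouds are (N, 3) arrays already aligned in
    the same frame."""
    tree = cKDTree(high_res)
    dists, idx = tree.query(low_res, k=k)
    weights = 1.0 / (dists ** power + 1e-9)
    weights /= weights.sum(axis=1, keepdims=True)
    fused = (high_res[idx] * weights[..., None]).sum(axis=1)
    return np.vstack([low_res, fused])   # densified, noise-suppressed cloud
```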
