
Search Results (90)

Search Parameters:
Keywords = line-of-sight identification

26 pages, 6355 KiB  
Article
Improving Non-Line-of-Sight Identification in Cellular Positioning Systems Using a Deep Autoencoding and Generative Adversarial Network Model
by Yanbiao Gao, Zhongliang Deng, Yuqi Huo and Wenyan Chen
Sensors 2024, 24(19), 6494; https://doi.org/10.3390/s24196494 - 9 Oct 2024
Viewed by 761
Abstract
Positioning service is a critical technology that bridges the physical world with digital information, significantly enhancing efficiency and convenience in life and work. The evolution of 5G technology has proven that positioning services are integral components of current and future cellular networks. However, positioning accuracy is hindered by non-line-of-sight (NLoS) propagation, which severely affects the measurements of angles and delays. In this study, we introduced a deep autoencoding channel transform-generative adversarial network model that utilizes line-of-sight (LoS) samples as a single-category training set to fully extract the latent features of LoS, ultimately employing a discriminator as an NLoS identifier. We validated the proposed model in 5G indoor and indoor factory (dense clutter, low base station) scenarios by assessing its generalization capability across different scenarios. The results indicate that, compared to the state-of-the-art method, the proposed model markedly diminished the utilization of device resources and achieved a 2.15% higher area under the curve while reducing computing time by 12.6%. This approach holds promise for deployment in future positioning terminals to achieve superior localization precision, catering to commercial and industrial Internet of Things applications.
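The one-class idea behind this model — train only on LoS samples, then flag anything the learned representation cannot explain — can be sketched without the full GAN. Below is a minimal stand-in (not the authors' network): a linear "autoencoder" fitted by SVD on synthetic LoS-only features, with NLoS flagged when the reconstruction error exceeds a threshold derived from the LoS training set. The features, dimensions, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D channel features (e.g. delay spread, first-path power),
# constructed so that LoS samples lie close to a 1-D subspace.
mix = np.array([[1.0, 0.9], [0.0, 0.1]])
los_train = rng.normal(size=(500, 2)) @ mix

# "Autoencoder" stand-in: encode onto the top principal component, decode back.
mean = los_train.mean(axis=0)
_, _, vt = np.linalg.svd(los_train - mean, full_matrices=False)
pc = vt[0]                                  # learned 1-D latent basis

def recon_error(x):
    """L2 reconstruction error after the 1-D encode/decode round trip."""
    centered = x - mean
    recon = np.outer(centered @ pc, pc)
    return np.linalg.norm(centered - recon, axis=1)

# One-class decision rule: threshold at the 99th percentile of LoS error.
thresh = np.percentile(recon_error(los_train), 99)

los_test = rng.normal(size=(200, 2)) @ mix
nlos_test = rng.normal(size=(200, 2)) + np.array([3.0, -3.0])  # off-subspace

los_flags = recon_error(los_test) > thresh     # should rarely fire
nlos_flags = recon_error(nlos_test) > thresh   # should almost always fire
```

The paper's discriminator plays the same role as the reconstruction-error threshold here: it scores how "LoS-like" a sample is, with no NLoS examples needed at training time.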

26 pages, 542 KiB  
Review
WiFi-Based Human Identification with Machine Learning: A Comprehensive Survey
by Manal Mosharaf, Jae B. Kwak and Wooyeol Choi
Sensors 2024, 24(19), 6413; https://doi.org/10.3390/s24196413 - 3 Oct 2024
Viewed by 803
Abstract
In the modern world of human–computer interaction, notable advancements in human identification have been achieved across fields like healthcare, academia, security, etc. Despite these advancements, challenges remain, particularly in scenarios with poor lighting, occlusion, or non-line-of-sight. To overcome these limitations, the utilization of radio frequency (RF) wireless signals, particularly wireless fidelity (WiFi), has been considered an innovative solution in recent research studies. By analyzing WiFi signal fluctuations caused by human presence, researchers have developed machine learning (ML) models that significantly improve identification accuracy. This paper conducts a comprehensive survey of recent advances and practical implementations of WiFi-based human identification. Furthermore, it covers the ML models used for human identification, system overviews, and detailed WiFi-based human identification methods. It also includes system evaluation, discussion, and future trends related to human identification. Finally, we conclude by examining the limitations of the research and discussing how researchers can shift their attention toward shaping the future trajectory of human identification through wireless signals.

34 pages, 23658 KiB  
Article
Deep Learning-Based Nonparametric Identification and Path Planning for Autonomous Underwater Vehicles
by Bin Mei, Chenyu Li, Dongdong Liu and Jie Zhang
J. Mar. Sci. Eng. 2024, 12(9), 1683; https://doi.org/10.3390/jmse12091683 - 22 Sep 2024
Viewed by 607
Abstract
As the nonlinear and coupling characteristics of autonomous underwater vehicles (AUVs) make motion modeling challenging, a nonparametric identification method is proposed based on dung beetle optimization (DBO) and deep temporal convolutional networks (DTCNs). First, an improved wavelet threshold is used to select the optimal threshold and wavelet basis functions, and the raw model test data are denoised. Second, bidirectional temporal convolutional networks, a bidirectional gated recurrent unit, and an attention mechanism are used to build the nonlinear nonparametric model of AUV motion, with the hyperparameters optimized by the DBO. Finally, lazy-search-based path planning and line-of-sight-based path following control are applied to the proposed AUV model. The simulation shows that the prediction accuracy of the DBO-DTCN is better than that of other artificial intelligence methods and mechanistic models, and path following of the AUV is feasible. The methods proposed in this paper provide an effective strategy for AUV modeling, search, and rescue cruising.
(This article belongs to the Section Ocean Engineering)

21 pages, 7239 KiB  
Article
UVIO: Adaptive Kalman Filtering UWB-Aided Visual-Inertial SLAM System for Complex Indoor Environments
by Junxi Li, Shouwen Wang, Jiahui Hao, Biao Ma and Henry K. Chu
Remote Sens. 2024, 16(17), 3245; https://doi.org/10.3390/rs16173245 - 1 Sep 2024
Viewed by 1044
Abstract
Precise positioning in an indoor environment is a challenging task because it is difficult to receive a strong and reliable global positioning system (GPS) signal. Among existing wireless indoor positioning methods, ultra-wideband (UWB) has become more popular because of its low energy consumption and high interference immunity. Nevertheless, factors such as indoor non-line-of-sight (NLOS) obstructions can still lead to large errors or fluctuations in the measurement data. In this paper, we propose a fusion method based on ultra-wideband (UWB), an inertial measurement unit (IMU), and visual simultaneous localization and mapping (V-SLAM) to achieve high accuracy and robustness in tracking a mobile robot in a complex indoor environment. Specifically, we first focus on identifying and correcting line-of-sight (LOS) and non-line-of-sight (NLOS) UWB signals. The distance evaluated from UWB is processed by an adaptive Kalman filter with IMU signals for pose estimation, where a new noise covariance matrix using the received signal strength indicator (RSSI) and estimation of precision (EOP) is proposed to reduce the effect of NLOS. After that, the corrected UWB estimation is tightly integrated with the IMU and visual SLAM through factor graph optimization (FGO) to further refine the pose estimation. The experimental results show that, compared with single or dual positioning systems, the proposed fusion method provides significant improvements in positioning accuracy in a complex indoor environment.
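The adaptive-covariance step can be illustrated with a toy 1-D filter: when an RSSI cue suggests NLOS, the measurement variance R is inflated so the biased range barely moves the estimate. This is a sketch of the general principle only, not the paper's RSSI/EOP covariance model; the threshold, noise levels, and bias are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_filter(meas, rssi, adaptive):
    """1-D random-walk Kalman filter over UWB ranges to a static target.

    With adaptive=True, the measurement variance R is inflated whenever the
    RSSI falls below an (invented) NLOS threshold, so biased NLOS ranges
    barely move the estimate.
    """
    x, p = meas[0], 1.0                       # state (range) and its variance
    q, r_los, r_nlos = 1e-5, 0.1**2, 5.0**2
    for z, s in zip(meas, rssi):
        p += q                                # predict (near-static target)
        r = r_nlos if (adaptive and s < -95.0) else r_los
        k = p / (p + r)                       # Kalman gain
        x += k * (z - x)                      # measurement update
        p *= 1.0 - k
    return x

true_range = 10.0
n = 200
nlos = rng.random(n) < 0.3                    # 30% of epochs are NLOS
meas = true_range + rng.normal(0.0, 0.1, n) + np.where(nlos, 1.5, 0.0)
rssi = np.where(nlos, -100.0, -85.0) + rng.normal(0.0, 1.0, n)  # toy dBm model

err_adaptive = abs(run_filter(meas, rssi, adaptive=True) - true_range)
err_naive = abs(run_filter(meas, rssi, adaptive=False) - true_range)
```

Without adaptation, the estimate converges toward the bias-contaminated mean; with R inflation, the NLOS epochs are effectively down-weighted.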
(This article belongs to the Section Engineering Remote Sensing)

19 pages, 5459 KiB  
Article
An Improved ELOS Guidance Law for Path Following of Underactuated Unmanned Surface Vehicles
by Shipeng Wu, Hui Ye, Wei Liu, Xiaofei Yang, Ziqing Liu and Hao Zhang
Sensors 2024, 24(16), 5384; https://doi.org/10.3390/s24165384 - 20 Aug 2024
Viewed by 630
Abstract
To address the difficulty of handling the time-varying sideslip angle of an underactuated unmanned surface vehicle (USV), this paper proposes a line-of-sight (LOS) guidance law based on an improved extended state observer (ESO). A reduced-order ESO is introduced to identify the sideslip angle caused by environmental disturbance, ensuring a fast and accurate estimation of the sideslip angle. This enables the USV to follow the reference path with high precision despite external disturbances from wind, waves, and currents. These unknown disturbances are modeled as drift, which the modified ESO-based LOS guidance law compensates for using the ESO. In the guidance subsystem incorporating the reduced-order state observer, the observer estimation and tracking errors are proven to be uniformly ultimately bounded. Simulation and experimental results are presented to validate the effectiveness of the proposed method. The simulation and comparison results demonstrate that the proposed ELOS guidance can help a USV track different types of paths quickly and smoothly, and the experimental results confirm the feasibility of the method.
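The core mechanism — a reduced-order observer that estimates an unknown drift term from a measured state and feeds it back for compensation — can be sketched on a scalar toy plant. This is not the paper's USV model or guidance law; plant, gains, and disturbance are assumed for illustration.

```python
# Plant: xdot = u + d, where d is an unknown disturbance standing in for the
# sideslip-induced drift; only x is measured. Reduced-order ESO:
#   xi_dot = -L*xi - L*(u + L*x),   d_hat = xi + L*x
# which gives d_hat_dot = -L*(d_hat - d), i.e. exponential convergence.
L = 10.0              # observer bandwidth (assumed tuning)
dt, T = 1e-3, 5.0
d_true = 0.3          # unknown constant "sideslip" disturbance

x, xi = 0.0, 0.0      # plant state and observer internal state
for _ in range(int(T / dt)):
    d_hat = xi + L * x
    u = -x - d_hat                            # compensate the estimate
    xi += dt * (-L * xi - L * (u + L * x))    # observer update
    x += dt * (u + d_true)                    # plant update (Euler)
d_hat = xi + L * x
```

Because the estimation error obeys its own stable dynamics, the drift estimate converges regardless of the control input, which is what lets the guidance law cancel the sideslip term.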
(This article belongs to the Special Issue Vehicle Sensing and Dynamic Control)

19 pages, 9956 KiB  
Article
Optimized Radio Frequency Footprint Identification Based on UAV Telemetry Radios
by Yuan Tian, Hong Wen, Jiaxin Zhou, Zhiqiang Duan and Tao Li
Sensors 2024, 24(16), 5099; https://doi.org/10.3390/s24165099 - 6 Aug 2024
Viewed by 746
Abstract
With the widespread use of unmanned aerial vehicles (UAVs), the detection and identification of UAVs are a vital security issue for the safety of airspace and ground facilities in no-fly zones. Telemetry radios are important wireless communication devices for UAVs, especially in the beyond-visual-line-of-sight (BVLOS) operating mode. This work focuses on a UAV identification approach using transient signals from UAV telemetry radios instead of the signals from UAV controllers on which previous research depended. In our novel UAV radio frequency (RF) identification system framework based on telemetry radio signals, the ECα algorithm is optimized to detect the starting point of the UAV transient signal, and the detection accuracy at different signal-to-noise ratios (SNRs) is evaluated. In the training stage, a Convolutional Neural Network (CNN) model is trained to extract features from the raw I/Q data of transient signals with different waveforms; its architecture and hyperparameters are analyzed and optimized. In the identification stage, the extracted transient signals are clustered with the Self-Organizing Map (SOM) algorithm, and a Clustering Signals Joint Identification (CSJI) algorithm is proposed to improve the accuracy of RF fingerprint identification. To evaluate the performance of the proposed approach, we design a testbed including two UAVs as the flight platform, a Universal Software Radio Peripheral (USRP) as the receiver, and 20 telemetry radios of the same model as identification targets. Indoor test results show that the optimized identification approach achieves an average accuracy of 92.3% at 30 dB, whereas the identification accuracy of SVM and KNN is 69.7% and 74.5%, respectively, under the same SNR condition. Extensive experiments are conducted outdoors to demonstrate the feasibility of this approach.
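Transient start-point detection, the first stage of this pipeline, can be illustrated with a generic sliding-window energy detector. This is not the ECα algorithm itself (whose exact criterion is defined in the paper), only the common underlying idea: declare the onset where short-term energy first exceeds a multiple of the estimated noise floor. Signal, window length, and threshold factor are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic capture: noise floor, then a transient switching on at sample 600.
n, start_true = 1000, 600
sig = rng.normal(0.0, 0.05, n)
sig[start_true:] += np.sin(0.3 * np.arange(n - start_true))

# Short-term energy in a sliding window; declare the transient start at the
# first window whose energy exceeds a multiple of the estimated noise floor.
win = 20
energy = np.convolve(sig**2, np.ones(win) / win, mode="valid")
floor = np.median(energy[:200])          # noise-floor estimate from the head
start_est = int(np.argmax(energy > 10.0 * floor))
```

The detector locates the onset to within one window length; refining this boundary is precisely what motivates optimizing the start-point criterion before RF fingerprinting.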
(This article belongs to the Section Remote Sensors)

8 pages, 1254 KiB  
Proceeding Paper
Performance Aspects of Retrodirective RFID Tags
by Theodoros N. F. Kaifas
Eng. Proc. 2024, 70(1), 19; https://doi.org/10.3390/engproc2024070019 - 1 Aug 2024
Viewed by 352
Abstract
Although RFID (radio-frequency identification) tags do not require a direct line of sight, their operational range is often limited. Indeed, in the case of passive RFID tags, the interrogating signal from the transmitter needs to reach the tag's radio transponder and trigger a nearly omnidirectional scattered signal to be harvested by the receiver. This two-way channel (from Tx to the tag and back to Rx) exhibits increased attenuation not only due to the doubled distance (in case Tx and Rx are collocated) but also due to the uncontrolled (i.e., unfocused) backscattering. In this work, we propose a way to control the backscattered radiation and focus the produced beam towards the reader (the Tx-Rx device). One can utilize the concept of retrodirective arrays to directly control the direction of departure of the backscatter link, maximizing the scattered power towards the reader and thus increasing the operational range of the tag. This of course means that the tag must be equipped with at least two radiating elements. Thus, retrodirective RFID array tags are introduced to increase the operating range with minimal cost and complexity, since 90° hybrids are used to achieve proper backscattering. To evaluate the proposed passive tag array, performance aspects are addressed. Specifically, we examine the bit error rate with respect to the signal-to-noise ratio for the retrodirective tag, the single-antenna case, the broadside array, and the spatial-diversity array. The results prove that the proposed tag allows for a significant increase in the operational range.

20 pages, 24513 KiB  
Article
Study on Optimization Method for InSAR Baseline Considering Changes in Vegetation Coverage
by Junqi Guo, Wenfei Xi, Zhiquan Yang, Guangcai Huang, Bo Xiao, Tingting Jin, Wenyu Hong, Fuyu Gui and Yijie Ma
Sensors 2024, 24(15), 4783; https://doi.org/10.3390/s24154783 - 23 Jul 2024
Viewed by 762
Abstract
Time-series Interferometric Synthetic Aperture Radar (InSAR) technology, renowned for its high precision, wide coverage, and all-weather capability, has become an essential tool for Earth observation. However, the quality of the interferometric baseline network significantly influences the monitoring accuracy of InSAR technology. Therefore, optimizing the interferometric baseline is crucial for enhancing InSAR's monitoring accuracy. Surface vegetation changes can disrupt the coherence between SAR images, introducing incoherent noise into interferograms and reducing InSAR's monitoring accuracy. To address this issue, we propose and validate an optimization method for the InSAR baseline that considers changes in vegetation coverage (OM-InSAR-BCCVC) in the Yuanmou dry-hot valley. Initially, based on the imaging times of SAR image pairs, we categorize all interferometric image pairs into those captured during months of high vegetation coverage and those from months of low vegetation coverage, and we remove the image pairs with coherence coefficients below the category average. Using the Small Baseline Subset InSAR (SBAS-InSAR) technique, we retrieve surface deformation information in the Yuanmou dry-hot valley. Landslide identification is subsequently verified using optical remote sensing images. The results show that significant seasonal changes in vegetation coverage in the Yuanmou dry-hot valley lead to noticeable seasonal variations in InSAR coherence, with the lowest coherence in July, August, and September, and the highest in January, February, and December. The average coherence threshold method is limited in this context, resulting in discontinuities in the interferometric baseline network. Compared with methods without baseline optimization, the interferometric map ratio improved by 17.5% overall after applying the OM-InSAR-BCCVC method, and the overall inversion error RMSE decreased by 0.5 rad. From January 2021 to May 2023, the radar line-of-sight (LOS) surface deformation rate in the Yuanmou dry-hot valley, obtained after atmospheric correction by GACOS, baseline optimization, and geometric distortion region masking, ranged from −73.87 mm/year to 127.35 mm/year. We identified fifteen landslides and potential landslide sites, primarily located in the northern part of the Yuanmou dry-hot valley, with maximum subsidence exceeding 100 mm at two notable points. The OM-InSAR-BCCVC method effectively reduces incoherent noise caused by vegetation coverage changes, thereby improving the monitoring accuracy of InSAR.
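The pair-selection rule described above (drop any interferometric pair whose coherence falls below the average of its own vegetation-coverage category, rather than a single global threshold) reduces to a few lines. The months, coherence values, and seasonal split below are invented for illustration:

```python
# Interferometric pairs: (id, acquisition month, mean coherence).
pairs = [
    ("p1", 1, 0.82), ("p2", 2, 0.78), ("p3", 12, 0.55),   # low-vegetation months
    ("p4", 7, 0.35), ("p5", 8, 0.48), ("p6", 9, 0.25),    # high-vegetation months
]

HIGH_VEG_MONTHS = {5, 6, 7, 8, 9, 10}   # assumed seasonal split

def select_pairs(pairs):
    """Keep only pairs whose coherence meets their own category's average."""
    groups = {True: [], False: []}
    for p in pairs:
        groups[p[1] in HIGH_VEG_MONTHS].append(p)
    kept = []
    for members in groups.values():
        if not members:
            continue
        avg = sum(p[2] for p in members) / len(members)
        kept.extend(p for p in members if p[2] >= avg)
    return sorted(p[0] for p in kept)
```

Grouping by season first means a decent rainy-season pair is judged against other rainy-season pairs, so the network is not emptied of high-vegetation months by one global average.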

20 pages, 3808 KiB  
Article
A Post-Processing Multipath/NLoS Bias Estimation Method Based on DBSCAN
by Yihan Guo, Simone Zocca, Paolo Dabove and Fabio Dovis
Sensors 2024, 24(8), 2611; https://doi.org/10.3390/s24082611 - 19 Apr 2024
Viewed by 821
Abstract
Positioning based on Global Navigation Satellite Systems (GNSSs) in urban environments always suffers from multipath and Non-Line-of-Sight (NLoS) effects. In such conditions, the GNSS pseudorange measurements can be affected by biases disrupting the GNSS-based applications. Many efforts have been devoted to detecting and mitigating the effects of multipath/NLoS, but the identification and classification of such events are still challenging. This research proposes a method for the post-processing estimation of pseudorange biases resulting from multipath/NLoS effects. Providing estimated pseudorange biases due to multipath/NLoS effects serves two main purposes. Firstly, machine learning-based techniques can leverage accurately estimated pseudorange biases as training data to detect and mitigate multipath/NLoS effects. Secondly, these accurately estimated pseudorange biases can serve as a benchmark for evaluating the effectiveness of the methods proposed to detect multipath/NLoS effects. The estimation is achieved by extracting the multipath/NLoS biases from pseudoranges using a clustering algorithm named Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The performance is demonstrated using two real-world data collections in multipath/NLoS scenarios for both static and dynamic conditions. Since there is no ground truth for the pseudorange biases due to the multipath/NLoS scenarios, the proposed method is validated based on the positioning performance. Positioning solutions are computed by subtracting the estimated biases from the raw pseudoranges and comparing them to the ground truth.
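The clustering step can be sketched on a toy 1-D set of pseudorange residuals: DBSCAN groups the dense LoS residuals near zero and the dense multipath/NLoS residuals around a common offset, while isolated outliers are labeled noise; each cluster's mean is then a bias estimate. The minimal DBSCAN below and the residual values are illustrative, not the paper's real GNSS processing.

```python
def dbscan_1d(xs, eps, min_pts):
    """Minimal DBSCAN for 1-D data; returns one label per point (-1 = noise)."""
    labels = [None] * len(xs)
    cluster = -1
    for i in range(len(xs)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(xs)) if abs(xs[j] - xs[i]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1                      # noise (may be claimed later)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neigh)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster             # border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jneigh = [k for k in range(len(xs)) if abs(xs[k] - xs[j]) <= eps]
            if len(jneigh) >= min_pts:          # core point: expand cluster
                seeds.extend(jneigh)
    return labels

# Invented pseudorange residuals (m): a LoS cluster near 0, a multipath/NLoS
# cluster near +8, and one stray outlier.
residuals = [0.1, -0.2, 0.0, 0.15, -0.1, 7.9, 8.1, 8.0, 8.2, 25.0]
labels = dbscan_1d(residuals, eps=0.5, min_pts=3)

clusters = {}
for r, lab in zip(residuals, labels):
    if lab >= 0:
        clusters.setdefault(lab, []).append(r)
bias_per_cluster = {lab: sum(v) / len(v) for lab, v in clusters.items()}
```

Density-based clustering needs no preset number of clusters, which suits this problem: the number of distinct multipath regimes in a data collection is not known in advance.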

18 pages, 6117 KiB  
Article
Research on a Visual/Ultra-Wideband Tightly Coupled Fusion Localization Algorithm
by Pin Jiang, Chen Hu, Tingting Wang, Ke Lv, Tingfeng Guo, Jinxuan Jiang and Wenwu Hu
Sensors 2024, 24(5), 1710; https://doi.org/10.3390/s24051710 - 6 Mar 2024
Cited by 1 | Viewed by 1098
Abstract
In the autonomous navigation of mobile robots, precise positioning is crucial. In forest environments with weak satellite signals or in sites disturbed by complex environments, satellite positioning accuracy has difficulty meeting the requirements of autonomous navigation for robots. This article proposes a visual SLAM/UWB tightly coupled localization method and designs a UWB non-line-of-sight error identification method using the displacement increment of the visual odometry. It utilizes the displacement increment of the visual output and UWB ranging information as measurement values and applies the extended Kalman filter algorithm for data fusion. This study used the constructed experimental platform to collect images and ultra-wideband ranging data in outdoor environments and experimentally validated the combined positioning method. The experimental results show that the algorithm outperforms individual UWB or loosely coupled combined positioning methods in terms of positioning accuracy. It effectively eliminates non-line-of-sight errors in UWB, improving the accuracy and stability of the combined positioning system.
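The identification idea — cross-check each UWB range change against the displacement increment reported by visual odometry — can be sketched as a simple consistency test. The tolerance and data below are invented, and this omits the geometry and EKF machinery of the actual method:

```python
# Consistency check in the spirit of the paper's NLOS identification: a UWB
# range cannot change between epochs by more than the distance the robot
# actually travelled, as reported by the visual odometry increment.
def flag_nlos(uwb_ranges, vo_increments, tol=0.3):
    """Return one NLOS flag per epoch (first epoch is trusted)."""
    flags = [False]
    for k in range(1, len(uwb_ranges)):
        jump = abs(uwb_ranges[k] - uwb_ranges[k - 1])
        flags.append(jump > vo_increments[k] + tol)
    return flags

# Invented data: robot closes on the anchor ~0.1 m per epoch; epoch 3 takes a
# +2.1 m NLOS hit, so both the jump into and out of it are inconsistent.
uwb = [5.0, 4.9, 4.8, 6.9, 4.6]
vo = [0.0, 0.1, 0.1, 0.1, 0.1]
flags = flag_nlos(uwb, vo)
```

Flagged ranges can then be down-weighted or dropped in the fusion filter rather than allowed to corrupt the state estimate.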
(This article belongs to the Section Navigation and Positioning)

29 pages, 3153 KiB  
Article
Ultra-Wideband Ranging Error Mitigation with Novel Channel Impulse Response Feature Parameters and Two-Step Non-Line-of-Sight Identification
by Hongchao Yang, Yunjia Wang, Shenglei Xu, Jingxue Bi, Haonan Jia and Cheekiat Seow
Sensors 2024, 24(5), 1703; https://doi.org/10.3390/s24051703 - 6 Mar 2024
Cited by 1 | Viewed by 1304
Abstract
The effective identification and mitigation of non-line-of-sight (NLOS) ranging errors are essential for achieving high-precision positioning and navigation with ultra-wideband (UWB) technology in harsh indoor environments. In this paper, an efficient UWB ranging-error mitigation strategy that uses novel channel impulse response parameters, based on the results of a two-step NLOS identification composed of a decision tree and a feedforward neural network, is proposed to realize indoor localization. NLOS ranging errors are classified into three types, and corresponding mitigation strategies and recall mechanisms are developed, which are also extended to partial line-of-sight (LOS) errors. Extensive experiments involving three obstacles (humans, walls, and glass) and two sites show an average NLOS identification accuracy of 95.05%, with LOS/NLOS recall rates of 95.72%/94.15%. The mitigated LOS errors are reduced by 50.4%, while the average improvement in the accuracy of the three types of NLOS ranging errors is 61.8%, reaching up to 76.84%. Overall, this method achieves a reduction in LOS and NLOS ranging errors of 25.19% and 69.85%, respectively, resulting in a 54.46% enhancement in positioning accuracy. This performance surpasses that of state-of-the-art techniques, such as the convolutional neural network (CNN), long short-term memory–extended Kalman filter (LSTM-EKF), least-squares–support vector machine (LS-SVM), and k-nearest neighbor (K-NN) algorithms.
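The structure of a two-step identifier — a cheap first stage that settles clear-cut cases, with only the ambiguous band reaching a finer second stage — can be sketched as a cascade. Everything below is a stand-in: the thresholds, the two channel-impulse-response features, and the linear second-stage score are invented, not the paper's trained decision tree and feedforward network.

```python
# Two-step cascade in the spirit of the paper's identifier.
def identify(first_path_power_db, rms_delay_ns):
    # Step 1 (decision-tree stand-in): strong first path => clearly LOS;
    # very weak first path => clearly NLOS.
    if first_path_power_db > -85.0:
        return "LOS"
    if first_path_power_db < -95.0:
        return "NLOS"
    # Step 2 (network stand-in): linear score over two CIR features,
    # applied only to the ambiguous band.
    score = 0.1 * (first_path_power_db + 90.0) - 0.02 * (rms_delay_ns - 30.0)
    return "LOS" if score > 0.0 else "NLOS"
```

The cascade keeps the expensive model off the easy cases, which is what makes two-step identification attractive on embedded positioning hardware.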

30 pages, 16286 KiB  
Article
Implementing and Testing a U-Space System: Lessons Learnt
by Miguel-Ángel Fas-Millán, Andreas Pick, Daniel González del Río, Alejandro Paniagua Tineo and Rubén García García
Aerospace 2024, 11(3), 178; https://doi.org/10.3390/aerospace11030178 - 23 Feb 2024
Viewed by 4322
Abstract
Within the framework of the European Union's Horizon 2020 research and innovation program, one of the main goals of the Labyrinth project was to develop and test the Conflict Management services of a U-space-based Unmanned Traffic Management (UTM) system. The U-space concept of operations (ConOps) provides a high-level description of the architecture, requirements and functionalities of these systems, but the implementer has a certain degree of freedom in aspects like the techniques used or some policies and procedures. The current document describes some of those implementation decisions. The prototype included part of the services defined by the ConOps, namely e-identification, Tracking, Geo-awareness, Drone Aeronautical Information Management, Geo-fence Provision, Operation Plan Preparation/Optimization, Operation Plan Processing, Strategic Conflict Resolution, Tactical Conflict Resolution, Emergency Management, Monitoring, Traffic Information and Legal Recording. Moreover, a Web app interface was developed for the operator/pilot. The system was tested in simulations and real visual line of sight (VLOS) and beyond VLOS (BVLOS) flights, with both vertical take-off and landing (VTOL) and fixed-wing platforms, while assisting final users interested in incorporating drones to support their tasks. The development and testing of the environment provided lessons at different levels: functionalities, compatibility, procedures, information, usability, ground control station (GCS) integration and aircrew roles.
(This article belongs to the Special Issue UAV Path Planning and Navigation)

37 pages, 5728 KiB  
Article
Dynamic Identification Method for Potential Threat Vehicles beyond Line of Sight in Expressway Scenarios
by Fumin Zou, Chenxi Xia, Feng Guo, Xinjian Cai, Qiqin Cai, Guanghao Luo and Ting Ye
Appl. Sci. 2023, 13(23), 12899; https://doi.org/10.3390/app132312899 - 1 Dec 2023
Viewed by 1076
Abstract
Due to the limited line of sight of the perception systems of intelligent driving vehicles (cameras, radar, body sensors, etc.), which can only perceive threats within a limited range, potential threats beyond the line of sight cannot be fed back to the driver. Therefore, this article proposes a beyond-line-of-sight safety perception method for intelligent driving. This method can improve driving safety, enabling drivers to perceive potential threats from vehicles in the rear areas beyond the line of sight earlier and make decisions in advance. Firstly, electronic toll collection (ETC) transaction data are preprocessed to construct a vehicle trajectory speed dataset; then, the wavelet transform (WT) is used to decompose and reconstruct the speed dataset, and the light gradient boosting machine (LightGBM) is adopted to train on and learn the features of the vehicle section speed. On this basis, we also consider vehicle type, traffic flow, and other characteristics, and construct a quantitative method to identify potential threat vehicles (PTVs) based on a fuzzy set to realize dynamic safety assessment of vehicles, so as to effectively detect PTVs within the over-the-horizon range behind the driver. We simulated an expressway scenario using an ETC simulation platform to evaluate the detection of over-the-horizon PTVs. The simulation results indicate that the method can accurately detect PTVs of different types under different road scenarios with an identification accuracy of 97.66%, which verifies the effectiveness of the method. This result provides important theoretical and practical support for intelligent driving safety assistance in vehicle–road collaboration scenarios.
(This article belongs to the Special Issue Vehicle Safety and Crash Avoidance)

26 pages, 9013 KiB  
Article
Indoor Human Action Recognition Based on Dual Kinect V2 and Improved Ensemble Learning Method
by Ruixiang Kan, Hongbing Qiu, Xin Liu, Peng Zhang, Yan Wang, Mengxiang Huang and Mei Wang
Sensors 2023, 23(21), 8921; https://doi.org/10.3390/s23218921 - 2 Nov 2023
Cited by 1 | Viewed by 1453
Abstract
Indoor human action recognition, essential across various applications, faces significant challenges such as orientation constraints and identification limitations, particularly in systems reliant on non-contact devices. Self-occlusions and non-line-of-sight (NLOS) situations are important representatives among them. To address these challenges, this paper presents a novel system utilizing dual Kinect V2 sensors, enhanced by an advanced Transmission Control Protocol (TCP) and sophisticated ensemble learning techniques, tailor-made to handle self-occlusions and NLOS situations. Our main contributions are as follows: (1) a data-adaptive adjustment mechanism, anchored on localization outcomes, to mitigate self-occlusion in dynamic orientations; (2) the adoption of sophisticated ensemble learning techniques, including a Chirp acoustic signal identification method based on an optimized fuzzy c-means-AdaBoost algorithm, for improving positioning accuracy in NLOS contexts; and (3) an amalgamation of the Random Forest model and the bat algorithm, providing innovative action identification strategies for intricate scenarios. We conduct extensive experiments, and our results show that the proposed system improves human action recognition precision by a substantial 30.25%, surpassing the benchmarks set by current state-of-the-art works.
(This article belongs to the Section Intelligent Sensors)

2558 KiB  
Proceeding Paper
Realism-Oriented Design, Verification, and Validation of Novel Robust Navigation Solutions
by Sorin Andrei Negru, Patrick Geragersian, Ivan Petrunin, Raphael Grech and Guy Buesnel
Eng. Proc. 2023, 54(1), 57; https://doi.org/10.3390/ENC2023-15424 - 29 Oct 2023
Cited by 1 | Viewed by 658
Abstract
Urban environments are characterized by a set of conditions that degrade Position, Navigation and Timing (PNT) signals, such as multipath and non-line-of-sight (NLOS) effects, negatively affecting position and navigation integrity during Uncrewed Aerial Vehicle (UAV) operations. Before the deployment of such uncrewed aerial platforms, a realistic simulation set-up is required, which should facilitate the identification and mitigation of the performance degradation that may appear during the actual mission. This paper presents a case study of the development of a robust Artificial Intelligence (AI)-based multi-sensor fusion framework using a federated architecture. The dataset for this development, comprising the outputs of a Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and a monocular camera, is generated in a high-fidelity simulation framework. The simulation framework is built around Spirent's GSS7000 simulator, software tools from Spirent (SimGEN and SimSENSOR) and OKTAL-SE (Sim3D), where the realism of the vision sensor data generation is provided by a photorealistic environment generated using the AirSim software with the aid of Unreal Engine. To verify and validate the fusion framework, a hardware-in-the-loop (HIL) set-up has been implemented using the Pixhawk controller. The results obtained demonstrate that the presented HIL set-up is an essential component of a more robust navigation solution development framework, providing resilience under conditions of GNSS outages.
(This article belongs to the Proceedings of European Navigation Conference ENC 2023)
