Search Results (2,033)

Search Parameters:
Keywords = single-sensor data

18 pages, 9438 KiB  
Article
High-Throughput and Accurate 3D Scanning of Cattle Using Time-of-Flight Sensors and Deep Learning
by Gbenga Omotara, Seyed Mohamad Ali Tousi, Jared Decker, Derek Brake and G. N. DeSouza
Sensors 2024, 24(16), 5275; https://doi.org/10.3390/s24165275 - 14 Aug 2024
Abstract
We introduce a high-throughput 3D scanning system designed to accurately measure cattle phenotypes. This scanner employs an array of depth sensors, i.e., time-of-flight (ToF) sensors, each controlled by dedicated embedded devices. The sensors generate high-fidelity 3D point clouds, which are automatically stitched using a point cloud segmentation approach through deep learning. The deep learner combines raw RGB and depth data to identify correspondences between the multiple 3D point clouds, thus creating a single and accurate mesh that reconstructs the cattle geometry on the fly. In order to evaluate the performance of our system, we implemented a two-fold validation process. Initially, we quantitatively tested the scanner for its ability to determine accurate volume and surface area measurements in a controlled environment featuring known objects. Next, we explored the impact and need for multi-device synchronization when scanning moving targets (cattle). Finally, we performed qualitative and quantitative measurements on cattle. The experimental results demonstrate that the proposed system is capable of producing high-quality meshes of untamed cattle with accurate volume and surface area measurements for livestock studies. Full article
(This article belongs to the Section Physical Sensors)
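The volume and surface-area validation this abstract describes relies on closed-mesh geometry. As an illustration (numpy only, not the authors' code), the volume of a closed, outward-oriented triangle mesh follows from summing signed tetrahedra against the origin, and the area from summing triangle areas:

```python
import numpy as np

def mesh_volume_area(vertices, faces):
    """Volume (signed-tetrahedron sum) and surface area of a closed,
    consistently outward-oriented triangle mesh."""
    v = np.asarray(vertices, dtype=float)
    tri = v[np.asarray(faces)]                      # (n_faces, 3, 3)
    v0, v1, v2 = tri[:, 0], tri[:, 1], tri[:, 2]
    # Signed volume of tetrahedron (origin, v0, v1, v2), summed over faces.
    volume = np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0
    # Triangle area = half the norm of the edge cross product.
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
    return abs(volume), area

# Unit right tetrahedron: volume 1/6, surface area 1.5 + sqrt(3)/2.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 3, 2), (0, 1, 3), (1, 2, 3)]  # outward winding
vol, area = mesh_volume_area(verts, faces)
```

The known-object check in the abstract amounts to comparing such computed values against ground-truth volumes and areas.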

19 pages, 9250 KiB  
Article
Multi-Agent Deep Reinforcement Learning Based Dynamic Task Offloading in a Device-to-Device Mobile-Edge Computing Network to Minimize Average Task Delay with Deadline Constraints
by Huaiwen He, Xiangdong Yang, Xin Mi, Hong Shen and Xuefeng Liao
Sensors 2024, 24(16), 5141; https://doi.org/10.3390/s24165141 - 8 Aug 2024
Abstract
Device-to-device (D2D) is a pivotal technology in the next generation of communication, allowing for direct task offloading between mobile devices (MDs) to improve the efficient utilization of idle resources. This paper proposes a novel algorithm for dynamic task offloading between the active MDs and the idle MDs in a D2D–MEC (mobile edge computing) system by deploying multi-agent deep reinforcement learning (DRL) to minimize the long-term average delay of delay-sensitive tasks under deadline constraints. Our core innovation is a dynamic partitioning scheme for idle and active devices in the D2D–MEC system, accounting for stochastic task arrivals and multi-time-slot task execution, which has been insufficiently explored in the existing literature. We adopt a queue-based system to formulate a dynamic task offloading optimization problem. To address the challenges of large action space and the coupling of actions across time slots, we model the problem as a Markov decision process (MDP) and perform multi-agent DRL through multi-agent proximal policy optimization (MAPPO). We employ a centralized training with decentralized execution (CTDE) framework to enable each MD to make offloading decisions solely based on its local system state. Extensive simulations demonstrate the efficiency and fast convergence of our algorithm. In comparison to the existing sub-optimal results deploying single-agent DRL, our algorithm reduces the average task completion delay by 11.0% and the ratio of dropped tasks by 17.0%. Our proposed algorithm is particularly pertinent to sensor networks, where mobile devices equipped with sensors generate a substantial volume of data that requires timely processing to ensure quality of experience (QoE) and meet the service-level agreements (SLAs) of delay-sensitive applications. Full article
(This article belongs to the Section Communications)
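As a rough, self-contained illustration of the delay/drop trade-off the abstract optimizes (not the MAPPO algorithm itself; the arrival probability, service time, and deadline below are invented), a slotted-time queue with deadline-based dropping can be simulated as:

```python
import random

def simulate_queue(n_slots=10_000, p_arrival=0.3, service_slots=2,
                   deadline=10, seed=0):
    """Slotted-time FIFO queue: a task is dropped if it would finish
    after its deadline; otherwise its completion delay is recorded."""
    rng = random.Random(seed)
    backlog = 0                         # slots of queued work
    delays, dropped = [], 0
    for _ in range(n_slots):
        if rng.random() < p_arrival:
            finish = backlog + service_slots    # delay if accepted now
            if finish > deadline:
                dropped += 1
            else:
                delays.append(finish)
                backlog += service_slots
        backlog = max(0, backlog - 1)           # one slot of service
    total = len(delays) + dropped
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    drop_ratio = dropped / total if total else 0.0
    return avg_delay, drop_ratio

avg_delay, drop_ratio = simulate_queue()
```

An offloading policy such as the one in the paper effectively reshapes the arrival and service processes to push both metrics down.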

17 pages, 5683 KiB  
Article
Enhancing Lambda Measurement in Hydrogen-Fueled SI Engines through Virtual Sensor Implementation
by Federico Ricci, Massimiliano Avana and Francesco Mariani
Energies 2024, 17(16), 3932; https://doi.org/10.3390/en17163932 - 8 Aug 2024
Abstract
The automotive industry is increasingly challenged to develop cleaner, more efficient solutions to comply with stringent emission standards. Hydrogen (H2)-powered internal combustion engines (ICEs) offer a promising alternative, with the potential to reduce carbon-based emissions and improve efficiency. However, hydrogen combustion presents two main challenges related to the calibration process: emissions control and measurement of the air excess coefficient (λ). Traditional lambda sensors struggle with hydrogen’s combustion dynamics, leading to potential inefficiencies and increased pollutant emissions. Consequently, the determination of engine performance could also be compromised. This study explores the feasibility of using machine learning (ML) to replace physical lambda sensors with virtual ones in hydrogen-fueled ICEs. The research was conducted on a single-cylinder spark-ignition (SI) engine, collecting data across a range of air excess coefficients from 1.6 to 3.0. An advanced hybrid model combining long short-term memory (LSTM) networks and convolutional neural networks (CNNs) was developed and fine-tuned to accurately predict the air–fuel ratio; its predictive performance was compared to that obtained with the backpropagation (BP) architecture. The optimal configuration was identified through iterative experimentation, focusing on the neuron count, number of hidden layers, and input variables. The results demonstrate that the LSTM + 1DCNN model successfully converged without overfitting; it also showed better prediction ability in terms of accuracy and robustness when compared with the backpropagation approach. Full article
(This article belongs to the Section I2: Energy and Combustion Science)

18 pages, 2738 KiB  
Article
PSA-FL-CDM: A Novel Federated Learning-Based Consensus Model for Post-Stroke Assessment
by Najmeh Razfar, Rasha Kashef and Farah Mohammadi
Sensors 2024, 24(16), 5095; https://doi.org/10.3390/s24165095 - 6 Aug 2024
Abstract
The rapid development of Internet of Things (IoT) technologies and the potential benefits of employing the vast datasets generated by IoT devices, including wearable sensors and camera systems, have ushered in a new era of opportunities for enhancing smart rehabilitation in various healthcare systems. Maintaining patient privacy is paramount in healthcare while providing smart insights and recommendations. This study proposed the adoption of federated learning to develop a scalable AI model for post-stroke assessment while protecting patients’ privacy. This research compares the centralized (PSA-MNMF) model performance with the proposed scalable federated PSA-FL-CDM model for sensor- and camera-based datasets. The computational time indicates that the federated PSA-FL-CDM model significantly reduces the execution time and attains comparable performance while preserving the patient’s privacy. Impact Statement—This research introduces groundbreaking contributions to stroke assessment by successfully implementing federated learning for the first time in this domain and applying consensus models in each node. It enables collaborative model training among multiple nodes or clients while ensuring the privacy of raw data. The study explores eight different clustering methods independently on each node, revolutionizing data organization based on similarities in stroke assessment. Additionally, the research applies the centralized PSA-MNMF consensus clustering technique to each client, resulting in more accurate and robust clustering solutions. By utilizing the FedAvg federated learning algorithm strategy, locally trained models are combined to create a global model that captures the collective knowledge of all participants. Comparative performance measurements and computational time analyses are conducted, facilitating a fair evaluation between centralized and federated learning models in stroke assessment.
Moreover, the research extends beyond a single type of database by conducting experiments on two distinct datasets, wearable and camera-based, broadening the understanding of the proposed methods across different data modalities. These contributions develop stroke assessment methodologies, enabling efficient collaboration and accurate consensus clustering models and maintaining data privacy. Full article
(This article belongs to the Special Issue IoT-Based Smart Environments, Applications and Tools)
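The FedAvg strategy named in the abstract reduces to a data-size-weighted average of each client's locally trained parameters; a minimal numpy sketch (illustrative shapes and values, not the paper's implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Combine per-client parameter lists into one global model.

    client_weights: one list of numpy arrays per client (same shapes
    across clients); client_sizes: local sample counts per client.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()            # weight by local data size
    n_layers = len(client_weights[0])
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(n_layers)
    ]

# Two clients, one "layer" each; client 0 holds twice the data.
global_w = fedavg(
    [[np.array([0.0, 3.0])], [np.array([3.0, 0.0])]],
    client_sizes=[200, 100],
)
```

Only parameters cross the network here, which is how the raw sensor and camera data stay on each node.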

15 pages, 7882 KiB  
Article
The Prediction and Evaluation of Surface Quality during the Milling of Blade-Root Grooves Based on a Long Short-Term Memory Network and Signal Fusion
by Jing Ni, Kai Chen, Zhen Meng, Zuji Li, Ruizhi Li and Weiguang Liu
Sensors 2024, 24(15), 5055; https://doi.org/10.3390/s24155055 - 5 Aug 2024
Abstract
The surface quality of milled blade-root grooves in industrial turbine blades significantly influences their mechanical properties. The surface texture reveals the interaction between the tool and the workpiece during the machining process, which plays a key role in determining the surface quality. In addition, there is a significant correlation between acoustic vibration signals and surface texture features. However, current research on surface quality is still relatively limited, and most considers only a single signal. In this paper, 160 sets of industrial field data were collected by multiple sensors to study the surface quality of a blade-root groove. A surface texture feature prediction method based on acoustic vibration signal fusion is proposed to evaluate the surface quality. Fast Fourier transform (FFT) is used to process the signal, and the clean and smooth features are extracted by combining wavelet denoising and multivariate smoothing denoising. At the same time, based on the gray-level co-occurrence matrix, the surface texture image features of different angles of the blade-root groove are extracted to describe the texture features. The fused acoustic vibration signal features are input, and the texture features are output to establish a texture feature prediction model. After predicting the texture features, the surface quality is evaluated by setting a threshold value. The threshold is selected based on all sample data, and the final judgment accuracy is 90%. Full article
(This article belongs to the Section Sensor Networks)
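The gray-level co-occurrence matrix (GLCM) underlying the texture features above can be sketched in plain numpy (single horizontal offset, symmetric counts; an illustration of the descriptor, not the paper's pipeline):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one
    pixel offset (dx, dy). img must contain ints in [0, levels)."""
    img = np.asarray(img)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y, x], img[y + dy, x + dx]
            m[a, b] += 1
            m[b, a] += 1                    # symmetric counts
    return m / m.sum()

def texture_features(p):
    """Two classic Haralick-style features of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, homogeneity

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
contrast, homogeneity = texture_features(p)
```

Such scalar texture features, computed at several offsets and angles, are the kind of regression targets a signal-based predictor can be trained against.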

19 pages, 10716 KiB  
Article
Crop Water Status Analysis from Complex Agricultural Data Using UMAP-Based Local Biplot
by Jenniffer Carolina Triana-Martinez, Andrés Marino Álvarez-Meza, Julian Gil-González, Tom De Swaef and Jose A. Fernandez-Gallego
Remote Sens. 2024, 16(15), 2854; https://doi.org/10.3390/rs16152854 - 4 Aug 2024
Abstract
To optimize growth and management, precision agriculture relies on a deep understanding of agricultural dynamics, particularly crop water status analysis. Leveraging unmanned aerial vehicles, we can efficiently acquire high-resolution spatiotemporal samples by utilizing remote sensors. However, non-linear relationships among data features, localized within specific subgroups, frequently emerge in agricultural data. Interpreting these complex patterns requires sophisticated analysis due to the presence of noise, high variability, and non-stationarity behavior in the collected samples. Here, we introduce Local Biplot, a methodological framework tailored for discerning meaningful data patterns in non-stationary contexts for precision agriculture. Local Biplot relies on the well-known uniform manifold approximation and projection (UMAP) method and local affine transformations to codify non-stationary and non-linear data patterns while maintaining interpretability. This lets us find important clusters for transformation and projection within a single global axis pair. Hence, our framework encompasses variable and observational contributions within individual clusters. At the same time, we provide a relevance analysis strategy to help explain why those clusters exist, facilitating the understanding of data dynamics while favoring interpretability. We demonstrated our method’s capabilities through experiments on both synthetic and real-world datasets, covering scenarios involving grass and rice crops. Moreover, we use random forest and linear regression models to predict water status variables from our Local Biplot-based feature ranking and clusters. Our findings revealed enhanced clustering and prediction capability while emphasizing the importance of input features in precision agriculture. As a result, Local Biplot is a useful tool to visualize, analyze, and compare the intricate underlying patterns and internal structures of complex agricultural datasets. Full article
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
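Local Biplot generalizes the classical biplot from a single linear projection to UMAP embeddings with per-cluster affine maps. As a dependency-free sketch, the classical (global, linear) biplot it builds on comes straight from an SVD of the centered data:

```python
import numpy as np

def biplot_coords(X, k=2):
    """Classical biplot: observation scores and variable loadings from
    the SVD of a column-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]          # observations in the k-D plane
    loadings = Vt[:k].T                # one arrow per input variable
    explained = s[:k] ** 2 / np.sum(s ** 2)
    return scores, loadings, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))           # 50 samples, 5 spectral features
scores, loadings, explained = biplot_coords(X, k=2)
```

Plotting scores as points and loadings as arrows on the same axes gives the joint observation/variable view that the paper's local variant reproduces per cluster.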

16 pages, 2033 KiB  
Article
Deciphering Optimal Radar Ensemble for Advancing Sleep Posture Prediction through Multiview Convolutional Neural Network (MVCNN) Approach Using Spatial Radio Echo Map (SREM)
by Derek Ka-Hei Lai, Andy Yiu-Chau Tam, Bryan Pak-Hei So, Andy Chi-Ho Chan, Li-Wen Zha, Duo Wai-Chi Wong and James Chung-Wai Cheung
Sensors 2024, 24(15), 5016; https://doi.org/10.3390/s24155016 - 2 Aug 2024
Abstract
Assessing sleep posture, a critical component in sleep tests, is crucial for understanding an individual’s sleep quality and identifying potential sleep disorders. However, monitoring sleep posture has traditionally posed significant challenges due to factors such as low light conditions and obstructions like blankets. The use of radar technology could be a potential solution. The objective of this study is to identify the optimal quantity and placement of radar sensors to achieve accurate sleep posture estimation. We invited 70 participants to assume nine different sleep postures under blankets of varying thicknesses. This was conducted in a setting equipped with a baseline of eight radars—three positioned at the headboard and five along the side. We proposed a novel technique for generating radar maps, Spatial Radio Echo Map (SREM), designed specifically for data fusion across multiple radars. Sleep posture estimation was conducted using a Multiview Convolutional Neural Network (MVCNN), which serves as the overarching framework for the comparative evaluation of various deep feature extractors, including ResNet-50, EfficientNet-50, DenseNet-121, PHResNet-50, Attention-50, and Swin Transformer. Among these, DenseNet-121 achieved the highest accuracy, scoring 0.534 and 0.804 for nine-class coarse- and four-class fine-grained classification, respectively. This led to further analysis on the optimal ensemble of radars. For the radars positioned at the head, a single left-located radar proved both essential and sufficient, achieving an accuracy of 0.809. When only one central head radar was used, omitting the central side radar and retaining only the three upper-body radars resulted in accuracies of 0.779 and 0.753, respectively. This study established the foundation for determining the optimal sensor configuration in this application, while also exploring the trade-offs between accuracy and the use of fewer sensors. Full article

24 pages, 478 KiB  
Article
Energy Consumption Modeling for Heterogeneous Internet of Things Wireless Sensor Network Devices: Entire Modes and Operation Cycles Considerations
by Canek Portillo, Jorge Martinez-Bauset, Vicent Pla and Vicente Casares-Giner
Telecom 2024, 5(3), 723-746; https://doi.org/10.3390/telecom5030036 - 2 Aug 2024
Abstract
Wireless sensor networks (WSNs) and sensing devices are considered to be core components of the Internet of Things (IoT). The performance modeling of IoT–WSN is of key importance to better understand, deploy, and manage this technology. As sensor nodes are battery-constrained, a fundamental issue in WSN is energy consumption. Additional issues also arise in heterogeneous scenarios due to the coexistence of sensor nodes with different features. In these scenarios, the modeling process becomes more challenging as an efficient orchestration of the sensor nodes must be achieved to guarantee a successful operation in terms of medium access, synchronization, and energy conservation. We propose a novel methodology to determine the energy consumed by sensor nodes deploying a recently proposed synchronous duty-cycled MAC protocol named Priority Sink Access MAC (PSA-MAC). We model the operation of a WSN with two classes of sensor devices by a pair of two-dimensional Discrete-Time Markov Chains (2D-DTMC), determine their stationary probability distribution, and propose new expressions to compute the energy consumption based solely on the obtained stationary probability distribution. This new approach is more systematic and accurate than previously proposed ones. The new methodology to determine energy consumption takes into account different specific features of the PSA-MAC protocol, such as: (i) the synchronization among sensor nodes; (ii) the normal and awake operation cycles to ensure synchronization among sensor nodes and energy conservation; (iii) the two periods that compose a full operation cycle: the data and sleep periods; (iv) two transmission schemes, SPT (single packet transmission) and APT (aggregated packet transmission); (v) the support of multiple sensor node classes; and (vi) the support of different priority assignments per class of sensor nodes.
The accuracy of the proposed methodology has been validated by an independent discrete-event-based simulation model, showing that very precise results are obtained. Full article
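The computation this methodology rests on, a DTMC stationary distribution and an expectation of per-state cost over it, can be sketched at toy scale. The two-state active/sleep node below, with invented transition probabilities and power draws, stands in for the paper's far richer 2D-DTMC:

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of a DTMC: solve pi P = pi with the
    normalization constraint sum(pi) = 1 as a least-squares system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy two-state node: state 0 = active (30 mW), state 1 = sleep (0.1 mW).
P = np.array([[0.2, 0.8],
              [0.1, 0.9]])
pi = stationary(P)
power_mw = np.array([30.0, 0.1])
avg_power = float(pi @ power_mw)    # expected power per slot
```

The paper's expressions follow this pattern: once the stationary probabilities of the protocol states are known, energy consumption is an expectation of per-state energy costs.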

23 pages, 22622 KiB  
Article
CMFPNet: A Cross-Modal Multidimensional Frequency Perception Network for Extracting Offshore Aquaculture Areas from MSI and SAR Images
by Haomiao Yu, Fangxiong Wang, Yingzi Hou, Junfu Wang, Jianfeng Zhu and Zhenqi Cui
Remote Sens. 2024, 16(15), 2825; https://doi.org/10.3390/rs16152825 - 1 Aug 2024
Abstract
The accurate extraction and monitoring of offshore aquaculture areas are crucial for the marine economy, environmental management, and sustainable development. Existing methods relying on unimodal remote sensing images are limited by natural conditions and sensor characteristics. To address this issue, we integrated multispectral imaging (MSI) and synthetic aperture radar imaging (SAR) to overcome the limitations of single-modal images. We propose a cross-modal multidimensional frequency perception network (CMFPNet) to enhance classification and extraction accuracy. CMFPNet includes a local–global perception block (LGPB) for combining local and global semantic information and a multidimensional adaptive frequency filtering attention block (MAFFAB) that dynamically filters frequency-domain information that is beneficial for aquaculture area recognition. We constructed six typical offshore aquaculture datasets and compared CMFPNet with other models. The quantitative results showed that CMFPNet outperformed the existing methods in terms of classifying and extracting floating raft aquaculture (FRA) and cage aquaculture (CA), achieving mean intersection over union (mIoU), mean F1 score (mF1), and mean Kappa coefficient (mKappa) values of 87.66%, 93.41%, and 92.59%, respectively. Moreover, CMFPNet has low model complexity and successfully achieves a good balance between performance and the number of required parameters. Qualitative results indicate significant reductions in missed detections, false detections, and adhesion phenomena. Overall, CMFPNet demonstrates great potential for accurately extracting large-scale offshore aquaculture areas, providing effective data support for marine planning and environmental protection. Our code is available in the Data Availability Statement section. Full article
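As a crude, hand-rolled stand-in for the learned frequency-domain filtering in MAFFAB (illustrative only; the fixed cutoff below is arbitrary, where the network learns which frequencies to keep), FFT-based filtering of a 2D array looks like:

```python
import numpy as np

def lowpass_fft(img, keep=0.25):
    """Keep only the lowest `keep` fraction of spatial frequencies,
    zeroing the rest, then transform back to the spatial domain."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(f, dtype=bool)
    ch, cw = h // 2, w // 2                 # DC sits at the center
    rh = max(1, int(h * keep / 2))
    rw = max(1, int(w * keep / 2))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, f, 0))))

img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
smooth = lowpass_fft(img)
```

By Parseval's theorem, zeroing frequency bins can only remove energy, so the filtered image is a smoothed version of the input; an attention block instead learns a soft, data-dependent mask.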

25 pages, 2861 KiB  
Article
Simplification of Mobility Tests and Data Processing to Increase Applicability of Wearable Sensors as Diagnostic Tools for Parkinson’s Disease
by Rana M. Khalil, Lisa M. Shulman, Ann L. Gruber-Baldini, Sunita Shakya, Rebecca Fenderson, Maxwell Van Hoven, Jeffrey M. Hausdorff, Rainer von Coelln and Michael P. Cummings
Sensors 2024, 24(15), 4983; https://doi.org/10.3390/s24154983 - 1 Aug 2024
Abstract
Quantitative mobility analysis using wearable sensors, while promising as a diagnostic tool for Parkinson’s disease (PD), is not commonly applied in clinical settings. Major obstacles include uncertainty regarding the best protocol for instrumented mobility testing and subsequent data processing, as well as the added workload and complexity of this multi-step process. To simplify sensor-based mobility testing in diagnosing PD, we analyzed data from 262 PD participants and 50 controls performing several motor tasks wearing a sensor on their lower back containing a triaxial accelerometer and a triaxial gyroscope. Using ensembles of heterogeneous machine learning models incorporating a range of classifiers trained on a set of sensor features, we show that our models effectively differentiate between participants with PD and controls, both for mixed-stage PD (92.6% accuracy) and a group selected for mild PD only (89.4% accuracy). Omitting algorithmic segmentation of complex mobility tasks decreased the diagnostic accuracy of our models, as did the inclusion of kinesiological features. Feature importance analysis revealed that Timed Up and Go (TUG) tasks contributed the highest-yield predictive features, with only minor decreases in accuracy for models based on cognitive TUG as a single mobility task. Our machine learning approach facilitates major simplification of instrumented mobility testing without compromising predictive performance. Full article
(This article belongs to the Special Issue Combining Machine Learning and Sensors in Human Movement Biomechanics)
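The hard-voting step of a heterogeneous ensemble can be sketched as follows (toy 0/1 predictions; the study's ensembles combine a richer range of trained classifiers):

```python
import numpy as np

def majority_vote(predictions):
    """Hard-voting ensemble: predictions is (n_models, n_samples) of
    0/1 labels; returns the per-sample majority label (ties -> 1)."""
    preds = np.asarray(predictions)
    votes = preds.sum(axis=0)
    return (votes * 2 >= preds.shape[0]).astype(int)

preds = [
    [1, 0, 1, 1],   # e.g., a tree-based model
    [1, 0, 0, 1],   # e.g., a linear model
    [0, 0, 1, 1],   # e.g., a distance-based model
]
ensemble = majority_vote(preds)   # -> [1, 0, 1, 1]
```

Combining dissimilar classifiers this way tends to cancel their uncorrelated errors, which is the usual motivation for heterogeneous ensembles.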

19 pages, 43879 KiB  
Article
3D Data Processing and Entropy Reduction for Reconstruction from Low-Resolution Spatial Coordinate Clouds in a Technical Vision System
by Ivan Y. Alba Corpus, Wendy Flores-Fuentes, Oleg Sergiyenko, Julio C. Rodríguez-Quiñonez, Jesús E. Miranda-Vega, Wendy Garcia-González and José A. Núñez-López
Entropy 2024, 26(8), 646; https://doi.org/10.3390/e26080646 - 30 Jul 2024
Abstract
This paper proposes an advancement in the application of a Technical Vision System (TVS), which integrates a laser scanning mechanism with a single light sensor to measure 3D spatial coordinates. In this application, the system is used to scan and digitalize objects using a rotating table to explore the potential of the system for 3D scanning at reduced resolutions. The experiments undertaken searched for optimal scanning windows and used statistical data filtering techniques and regression models to find a method to generate a 3D scan that was still recognizable with the least amount of 3D points, balancing the number of points scanned and time, while at the same time reducing effects caused by the particularities of the TVS, such as noise and entropy in the form of natural distortion in the resulting scans. The evaluation of the experimentation results uses 3D point registration methods, joining multiple faces from the original volume scanned by the TVS and aligning it to the ground truth model point clouds, which are based on a commercial 3D camera to verify that the reconstructed 3D model retains substantial detail from the original object. This research finds it is possible to reconstruct sufficiently detailed 3D models obtained from the TVS, which contain coarsely scanned data or scans that initially lack high definition or are too noisy. Full article
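The rigid point-registration step used in the evaluation can be illustrated with the Kabsch/SVD method on corresponding points (a textbook building block rather than the authors' full pipeline; the rotation and translation below are synthetic):

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (corresponding 3D points), via the Kabsch/SVD method."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against reflections
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(cloud, moved)
```

Full registration methods such as ICP iterate this closed-form alignment while re-estimating correspondences, which is what joining multiple scanned faces to a ground-truth cloud requires.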

23 pages, 3898 KiB  
Article
Enhanced Classification of Human Fall and Sit Motions Using Ultra-Wideband Radar and Hidden Markov Models
by Thottempudi Pardhu, Vijay Kumar, Andreas Kanavos, Vassilis C. Gerogiannis and Biswaranjan Acharya
Mathematics 2024, 12(15), 2314; https://doi.org/10.3390/math12152314 - 24 Jul 2024
Abstract
In this study, we address the challenge of accurately classifying human movements in complex environments using sensor data. We analyze both video and radar data to tackle this problem. From video sequences, we extract temporal characteristics using techniques such as motion history images (MHI) and Hu moments, which capture the dynamic aspects of movement. Radar data are processed through principal component analysis (PCA) to identify unique detection signatures. We refine these features using k-means clustering and employ them to train hidden Markov models (HMMs). These models are tailored to distinguish between distinct movements, specifically focusing on differentiating sitting from falling motions. Our experimental findings reveal that integrating video-derived and radar-derived features significantly improves the accuracy of motion classification. Specifically, the combined approach enhanced the precision of detecting sitting motions by over 10% compared to using single-modality data. This integrated method not only boosts classification accuracy but also extends the practical applicability of motion detection systems in diverse real-world scenarios, such as healthcare monitoring and emergency response systems. Full article
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)
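The final classification stage, scoring an observation sequence under per-motion HMMs, can be sketched with the forward algorithm in numpy (the two-state models and discretized symbol sequence below are invented for illustration; the paper's symbols come from k-means over video and radar features):

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm (scaling avoids underflow)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy models over 2 hidden states and 3 discretized feature symbols.
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit_sit  = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])  # favors 0/1
emit_fall = np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]])  # favors 2/1
obs = [2, 2, 1, 2, 2]   # symbol 2 dominates, so "fall" should score higher
label = max(["sit", "fall"],
            key=lambda m: forward_loglik(
                obs, start, trans, emit_sit if m == "sit" else emit_fall))
```

Classification then simply picks the motion model with the highest sequence log-likelihood.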

14 pages, 5519 KiB  
Article
Optimized Ammonia-Sensing Electrode with CeO2/rGO Nano-Composite Coating Synthesized by Focused Laser Ablation in Liquid
by Mengqi Shi and Hiroyuki Wada
Nanomaterials 2024, 14(15), 1238; https://doi.org/10.3390/nano14151238 - 23 Jul 2024
Abstract
This study investigated the synthesis of cerium oxide (CeO2) nanoparticles (NPs) and composites with reduced graphene oxide (rGO) for the enhanced electrochemical sensing of ammonia. CeO2 NPs were prepared by the focused laser ablation in liquid (LAL) method, which enabled the production of high-purity, spherical nanoparticles with a uniform dispersion and sizes under 50 nm in a short time. The effects of varying irradiation fluence and time on the nanoparticle size, production yield, and dispersion were systematically studied. The synthesized CeO2 NPs were doped with rGO to form CeO2/rGO composites, which were drop-cast to modify the glassy carbon electrodes (GCE). The CeO2/rGO-GCE electrodes exhibited superior electrochemical properties compared with single-component electrodes, which demonstrated the significant potential for ammonia detection, especially at a 4 J/cm2 fluence. The CeO2/rGO composites showed uniformly dispersed CeO2 NPs between the rGO sheets, which enhanced the conductivity, as confirmed by SEM, EDS mapping, and XRD analysis. Cyclic voltammetry data demonstrated superior electrochemical activity of the CeO2/rGO composite electrodes, with the 2rGO/1CeO2 ratio showing the highest current response and sensitivity. The CV response to varying ammonia concentrations exhibited a linear relationship, indicating the electrode’s capability for accurate quantification. These findings highlight the effectiveness of focused laser ablation in enhancing nanoparticle synthesis and the promising synergistic effects of CeO2 and rGO in developing high-performance electrochemical sensors. Full article
(This article belongs to the Special Issue Laser-Based Nano Fabrication and Nano Lithography: Second Edition)
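The linear CV response to ammonia concentration reported above implies a straightforward least-squares calibration: fit peak current against concentration, then invert the fit to quantify unknown samples. A minimal sketch of that workflow follows; the concentration and peak-current values are illustrative placeholders, not data from the article:

```python
import numpy as np

# Hypothetical calibration data: ammonia concentration (mM) vs. CV peak current (µA).
# The article reports a linear relationship; these specific numbers are invented
# for illustration only.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # mM
peak_current = np.array([2.1, 4.0, 8.2, 16.1, 32.3])  # µA

# Least-squares line: i_peak = sensitivity * c + intercept
sensitivity, intercept = np.polyfit(conc, peak_current, 1)

def concentration_from_current(i_peak):
    """Invert the calibration line to quantify an unknown sample."""
    return (i_peak - intercept) / sensitivity

r = np.corrcoef(conc, peak_current)[0, 1]
print(f"sensitivity = {sensitivity:.2f} µA/mM, R^2 = {r**2:.4f}")
```

The slope of the fit is the electrode's sensitivity (current per unit concentration), which is the figure of merit the abstract uses to compare the rGO/CeO2 ratios.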
23 pages, 12771 KiB  
Article
Harmonized Landsat and Sentinel-2 Data with Google Earth Engine
by Elias Fernando Berra, Denise Cybis Fontana, Feng Yin and Fabio Marcelo Breunig
Remote Sens. 2024, 16(15), 2695; https://doi.org/10.3390/rs16152695 - 23 Jul 2024
Viewed by 644
Abstract
Continuous and dense time series of satellite remote sensing data are needed for several land monitoring applications, including vegetation phenology, in-season crop assessments, and improving land use and land cover classification. Supporting such applications at medium to high spatial resolution may be challenging with a single optical satellite sensor, as the frequency of good-quality observations can be low. To optimize good-quality data availability, some studies propose harmonized databases. This work aims to develop an 'all-in-one' Google Earth Engine (GEE) web-based workflow to produce harmonized surface reflectance data from Landsat-7 (L7) ETM+, Landsat-8 (L8) OLI, and Sentinel-2 (S2) MSI top of atmosphere (TOA) reflectance data. Six major processing steps to generate a new source of near-daily Harmonized Landsat and Sentinel (HLS) reflectance observations at 30 m spatial resolution are proposed and described: band adjustment, atmospheric correction, cloud and cloud shadow masking, view and illumination angle adjustment, co-registration, and reprojection and resampling. The HLS is applied to six equivalent spectral bands, resulting in a surface nadir BRDF-adjusted reflectance (NBAR) time series gridded to a common pixel resolution, map projection, and spatial extent. The spectrally corresponding bands and derived Normalized Difference Vegetation Index (NDVI) were compared, and their sensor differences were quantified by regression analyses. Examples of HLS time series are presented for two potential applications: agricultural and forest phenology. The HLS product is also validated against ground measurements of NDVI, achieving very similar temporal trajectories and magnitude of values (R2 = 0.98). The workflow and script presented in this work may be useful for the scientific community aiming to take advantage of multi-sensor harmonized time series of optical data. Full article
(This article belongs to the Section Forest Remote Sensing)
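The band-adjustment step named first among the six above is commonly implemented as a per-band linear transformation that maps one sensor's reflectance onto another's spectral response before indices such as NDVI are computed. A minimal NumPy sketch of that idea follows; the slope/intercept coefficients and reflectance values are hypothetical placeholders, not the regression coefficients fitted in the article:

```python
import numpy as np

# Hypothetical per-band (slope, intercept) coefficients mapping Sentinel-2 MSI
# reflectance toward a Landsat-8 OLI-like reference. Illustrative values only.
COEFFS = {
    "red": (0.982, 0.004),
    "nir": (1.001, -0.002),
}

def adjust_band(reflectance, band):
    """Per-band linear adjustment: rho_adj = slope * rho + intercept."""
    slope, intercept = COEFFS[band]
    return slope * np.asarray(reflectance, dtype=float) + intercept

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from harmonized bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Adjust S2 bands to the common reference, then compute a harmonized NDVI
red_adj = adjust_band([0.05, 0.08], "red")
nir_adj = adjust_band([0.40, 0.35], "nir")
print(ndvi(nir_adj, red_adj))
```

Applying the adjustment before computing NDVI is what lets observations from the three sensors be stacked into a single near-daily time series without sensor-specific offsets dominating the signal.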
19 pages, 7121 KiB  
Article
Sensor-Fused Nighttime System for Enhanced Pedestrian Detection in ADAS and Autonomous Vehicles
by Jungme Park, Bharath Kumar Thota and Karthik Somashekar
Sensors 2024, 24(14), 4755; https://doi.org/10.3390/s24144755 - 22 Jul 2024
Viewed by 633
Abstract
Ensuring a safe nighttime environmental perception system relies on the early detection of vulnerable road users with minimal delay and high precision. This paper presents a sensor-fused nighttime environmental perception system by integrating data from thermal and RGB cameras. A new alignment algorithm is proposed to fuse the data from the two camera sensors. The proposed alignment procedure is crucial for effective sensor fusion. To develop a robust Deep Neural Network (DNN) system, nighttime thermal and RGB images were collected under various scenarios, creating a labeled dataset of 32,000 image pairs. Three fusion techniques were explored using transfer learning, alongside two single-sensor models using only RGB or thermal data. Five DNN models were developed and evaluated, with experimental results showing superior performance of fused models over non-fusion counterparts. The late-fusion system was selected for its optimal balance of accuracy and response time. For real-time inferencing, the best model was further optimized, achieving 33 fps on the embedded edge computing device, an 83.33% improvement in inference speed over the system without optimization. These findings are valuable for advancing Advanced Driver Assistance Systems (ADASs) and autonomous vehicle technologies, enhancing pedestrian detection during nighttime to improve road safety and reduce accidents. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion in Autonomous Vehicles)
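Late fusion, the scheme selected in the system above, combines the outputs of independently run per-sensor detectors rather than merging raw pixels. A common realization is to match RGB and thermal detections by bounding-box overlap (IoU) and merge the matched pairs. The sketch below illustrates that pattern; the IoU threshold, box coordinates, and confidences are illustrative, and the merging rule (averaged boxes, max confidence) is one simple choice, not necessarily the article's:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def late_fuse(rgb_dets, thermal_dets, iou_thr=0.5):
    """Merge (box, confidence) detections from two sensors: overlapping pairs
    are fused (averaged box, max confidence); unmatched boxes are kept."""
    fused, used = [], set()
    for box_r, conf_r in rgb_dets:
        best, best_iou = None, iou_thr
        for j, (box_t, _) in enumerate(thermal_dets):
            if j not in used and iou(box_r, box_t) >= best_iou:
                best, best_iou = j, iou(box_r, box_t)
        if best is not None:
            used.add(best)
            box_t, conf_t = thermal_dets[best]
            fused.append((np.mean([box_r, box_t], axis=0).tolist(),
                          max(conf_r, conf_t)))
        else:
            fused.append((box_r, conf_r))
    fused += [(b, c) for j, (b, c) in enumerate(thermal_dets) if j not in used]
    return fused

# Example: one pedestrian seen by both sensors, one visible only in thermal
rgb = [([100, 50, 140, 150], 0.70)]
thermal = [([102, 52, 142, 152], 0.85), ([300, 60, 330, 140], 0.65)]
print(late_fuse(rgb, thermal))
```

Keeping the thermal-only detection is what makes the fused system stronger than either single-sensor model at night: pedestrians invisible to the RGB camera still survive the merge.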