Search Results (588)

Search Parameters:
Keywords = multisensor modeling

33 pages, 10515 KiB  
Article
Exploring the Processing Paradigm of Input Data for End-to-End Deep Learning in Tool Condition Monitoring
by Chengguan Wang, Guangping Wang, Tao Wang, Xiyao Xiong, Zhongchuan Ouyang and Tao Gong
Sensors 2024, 24(16), 5300; https://doi.org/10.3390/s24165300 - 15 Aug 2024
Abstract
Tool condition monitoring technology is an indispensable part of intelligent manufacturing. Most current research focuses on complex signal processing techniques or advanced deep learning algorithms to improve prediction performance without fully leveraging the end-to-end advantages of deep learning. The challenge lies in transforming multi-sensor raw data into input data suitable for direct model feeding, all while minimizing data scale and preserving sufficient temporal interpretation of tool wear. However, there is no clear reference standard for this so far. In light of this, this paper explores the processing methods that transform raw data into input data for deep learning models, a process referred to as an input paradigm. Three new input paradigms are introduced: the downsampling paradigm, the periodic paradigm, and the subsequence paradigm. An improved hybrid model combining a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) was then employed to validate prediction performance. The subsequence paradigm demonstrated considerable superiority on the PHM2010 dataset, as the newly generated time series maintained the integrity of the raw data. Further investigation revealed that, with 120 subsequences and the maximum value as the temporal indicator, the model's mean absolute error (MAE) and root mean square error (RMSE) were the lowest after threefold cross-validation, outperforming several classical and contemporary methods. The methods explored in this paper provide references for designing input data for deep learning models, helping to enhance the end-to-end potential of deep learning models and promoting the industrial deployment and practical application of tool condition monitoring systems.
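
As a rough illustration of the subsequence paradigm described above, the sketch below splits a raw signal into 120 equal subsequences and reduces each to a single temporal indicator, here the maximum value. The signal length and sampling rate are invented; the paper's actual preprocessing pipeline may differ.

```python
import numpy as np

def subsequence_paradigm(signal, n_subsequences=120, indicator=np.max):
    """Compress a raw 1D signal into a short sequence of temporal indicators.

    Each of the n_subsequences equal-length segments is reduced to a single
    value (here the maximum), preserving the temporal order of the raw data.
    """
    usable = len(signal) - len(signal) % n_subsequences  # drop the ragged tail
    segments = signal[:usable].reshape(n_subsequences, -1)
    return indicator(segments, axis=1)  # shape: (n_subsequences,)

# Example: a synthetic 0.5 s force signal sampled at 100 kHz (hypothetical)
raw = np.random.randn(50_000)
model_input = subsequence_paradigm(raw)   # 120 values, fed to the CNN-BiLSTM
print(model_input.shape)                  # (120,)
```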

17 pages, 16956 KiB  
Article
Motor Fault Diagnosis Using Attention-Based Multisensor Feature Fusion
by Zhuoyao Miao, Wenshan Feng, Zhuo Long, Gongping Wu, Le Deng, Xuan Zhou and Liwei Xie
Energies 2024, 17(16), 4053; https://doi.org/10.3390/en17164053 - 15 Aug 2024
Abstract
To reduce the influence of environmental noise and differing operating conditions on the accuracy of motor fault diagnosis, this paper proposes a capsule network method that combines multi-channel signals with the efficient channel attention (ECA) mechanism. Data from multiple sensors are sampled, and the one-dimensional time-frequency-domain signals are visualized as two-dimensional symmetric dot pattern (SDP) images. The multi-channel image data are then fused, and image features are extracted by a capsule network incorporating the ECA attention mechanism to classify eight different fault types. To guarantee the universality of the suggested model, data from Case Western Reserve University (CWRU) are used for validation. According to the experimental findings, the suggested multi-channel signal fusion ECA attention capsule network (MSF-ECA-CapsNet) model achieves a fault identification accuracy of 99.21%, higher than that of traditional methods. Meanwhile, the multi-sensor data fusion method and the use of the ECA attention mechanism make the diagnostic accuracy considerably higher.
(This article belongs to the Section F: Electrical Engineering)
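
A minimal sketch of the symmetric dot pattern (SDP) transform mentioned in the abstract above, assuming a conventional SDP parameterization; the lag, angular gain, and six symmetry arms are illustrative choices, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def sdp_points(x, lag=1, gain=30.0, arms=6):
    """Map a 1D signal to symmetric dot pattern (SDP) polar coordinates."""
    x = np.asarray(x, dtype=float)
    norm = (x - x.min()) / (x.max() - x.min())      # normalize to [0, 1]
    r = norm[:-lag]                                  # radius from sample i
    dev = np.deg2rad(gain) * norm[lag:]              # angular offset from sample i+lag
    thetas = []
    for k in range(arms):                            # mirror about each symmetry arm
        base = 2 * np.pi * k / arms
        thetas.append(base + dev)
        thetas.append(base - dev)
    return np.tile(r, 2 * arms), np.concatenate(thetas)

r, theta = sdp_points(np.sin(np.linspace(0, 20, 2000)))
plt.subplot(projection="polar").scatter(theta, r, s=1)
plt.savefig("sdp.png")  # one channel of the fused multi-sensor image
```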

19 pages, 8210 KiB  
Article
Wearable Multi-Sensor Positioning Prototype for Rowing Technique Evaluation
by Luis Rodriguez Mendoza and Kyle O’Keefe
Sensors 2024, 24(16), 5280; https://doi.org/10.3390/s24165280 - 15 Aug 2024
Abstract
The goal of this study is to determine the feasibility of a wearable multi-sensor positioning prototype as a training tool for evaluating rowing technique, and to assess its positioning accuracy using multiple mathematical models and estimation methods. The wearable device consists of an inertial measurement unit (IMU), an ultra-wideband (UWB) transceiver, and a global navigation satellite system (GNSS) receiver. An experiment on a rowing shell was conducted to evaluate the performance of the system on a rower's wrist against a centimeter-level GNSS reference trajectory. This experiment analyzed the rowing motion in multiple navigation frames and with various positioning methods. The results show that the wearable device prototype is a viable option for rowing technique analysis; the system was able to provide the position, velocity, and attitude of a rower's wrist, with a positioning accuracy ranging between ±0.185 m and ±1.656 m depending on the estimation method.
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
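
For intuition on how estimates from sensors of different quality can be combined, here is a minimal variance-weighted (scalar Kalman-style) fusion sketch; the positions and variances are invented, and the paper's estimation methods are considerably more sophisticated.

```python
def fuse(pos_a, var_a, pos_b, var_b):
    """Variance-weighted fusion of two position estimates (scalar Kalman update)."""
    k = var_a / (var_a + var_b)          # gain: trust the less noisy source more
    fused = pos_a + k * (pos_b - pos_a)
    fused_var = (1 - k) * var_a
    return fused, fused_var

# Example: a noisier UWB fix fused with a more precise GNSS fix (values invented)
pos, var = fuse(pos_a=12.40, var_a=0.50**2, pos_b=12.10, var_b=0.10**2)
print(f"fused position: {pos:.3f} m, sigma: {var**0.5:.3f} m")
```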

31 pages, 9525 KiB  
Article
Bump Feature Detection Based on Spectrum Modeling of Discrete-Sampled, Non-Homogeneous Multi-Sensor Stream Data
by Haiyang Lyu, Qiqi Zhong, Donglai Jiao and Jianchun Hua
Appl. Sci. 2024, 14(15), 6744; https://doi.org/10.3390/app14156744 - 2 Aug 2024
Abstract
Roads are the most heavily affected part of urban infrastructure, given the ever-increasing number of vehicles needed to provide mobility to residents, supply them with goods, and sustain urban growth. An important indicator of degrading road infrastructure is the presence of bump features of the road surface (BFRS), which affect transportation safety and driving experience. To detect BFRS, we propose a detection method based on spectrum modeling and multi-dimensional features applied to discrete-sampled, non-homogeneous multi-sensor stream data. With GPS sampled at 1 Hz and a gyroscope and accelerometer sampled at 100 Hz, multi-sensor stream data were recorded in three different urban areas of Nanjing, China, using a smartphone mounted on a vehicle. The recorded stream data capture the geometric features of vehicle movement and the corresponding driving conditions; derived features include acceleration, orientation, and speed information. To capture bump features, we developed a deep-learning-based approach built on spectrum features. BFRS detection experiments using multi-sensor stream data from smartphones were conducted, and 4, 14, and 17 BFRS were correctly detected in the three areas, with precisions of 100%, 70.00%, and 77.27%, respectively. Comparisons between the proposed method and three other methods give F-scores for the proposed method of 1.0000, 0.6363, and 0.7555 in the three areas, the highest values among all results. These results show that the proposed method performs well across different geographic areas.
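
A simplified sketch of how spectrum features might be derived from the accelerometer stream at the stated 100 Hz sampling rate; the window length and the FFT-based feature are assumptions, not the paper's exact spectrum modeling.

```python
import numpy as np

def spectrum_features(accel_z, fs=100, win_s=2.0):
    """Slice a vertical-acceleration stream into fixed windows and compute
    one-sided magnitude spectra, a per-window feature for bump detection."""
    win = int(fs * win_s)
    n_windows = len(accel_z) // win
    frames = accel_z[: n_windows * win].reshape(n_windows, win)
    frames = frames - frames.mean(axis=1, keepdims=True)   # remove gravity/DC offset
    spec = np.abs(np.fft.rfft(frames, axis=1))             # magnitude spectrum per window
    freqs = np.fft.rfftfreq(win, d=1 / fs)
    return freqs, spec                                     # (win//2+1,), (n_windows, win//2+1)

freqs, spec = spectrum_features(np.random.randn(6000))     # 60 s of synthetic data at 100 Hz
print(spec.shape)                                          # (30, 101)
```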

15 pages, 3022 KiB  
Article
A Rotating Machinery Fault Diagnosis Method Based on Dynamic Graph Convolution Network and Hard Threshold Denoising
by Qiting Zhou, Longxian Xue, Jie He, Sixiang Jia and Yongbo Li
Sensors 2024, 24(15), 4887; https://doi.org/10.3390/s24154887 - 27 Jul 2024
Abstract
With the development of precision sensing instruments and data storage devices, the fusion of multi-sensor data in gearbox fault diagnosis has attracted much attention. However, existing methods have difficulty in capturing the local temporal dependencies of multi-sensor monitoring information, and the inescapable noise severely decreases the accuracy of multi-sensor information fusion diagnosis. To address these issues, this paper proposes a fault diagnosis method based on dynamic graph convolutional neural networks and hard threshold denoising. Firstly, considering that the relationships between monitoring data from different sensors change over time, a dynamic graph structure is adopted to model the temporal dependencies of multi-sensor data, and, further, a graph convolutional neural network is constructed to achieve the interaction and feature extraction of temporal information from multi-sensor data. Secondly, to avoid the influence of noise in practical engineering, a hard threshold denoising strategy is designed, and a learnable hard threshold denoising layer is embedded into the graph neural network. Experimental fault datasets from two typical gearbox fault test benches under environmental noise are used to verify the effectiveness of the proposed method in gearbox fault diagnosis. The experimental results show that the proposed DDGCN method achieves an average diagnostic accuracy of up to 99.7% under different levels of environmental noise, demonstrating good noise resistance.
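
A learnable hard-threshold denoising layer might look roughly like the following PyTorch sketch. The steep sigmoid gate is an assumed differentiable surrogate for the non-differentiable hard mask |x| > tau, so the threshold remains trainable; it is not necessarily the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardThresholdDenoise(nn.Module):
    """Channel-wise hard-threshold denoising with a learnable threshold.

    A steep sigmoid gate approximates the hard mask |x| > tau so that tau
    can still be learned by backpropagation.
    """
    def __init__(self, n_channels, steepness=50.0):
        super().__init__()
        self.raw_tau = nn.Parameter(torch.zeros(n_channels))  # one threshold per channel
        self.steepness = steepness

    def forward(self, x):                                  # x: (batch, channels, time)
        tau = F.softplus(self.raw_tau).view(1, -1, 1)      # keep thresholds positive
        gate = torch.sigmoid(self.steepness * (x.abs() - tau))
        return x * gate                                    # ~x where |x| > tau, ~0 otherwise

layer = HardThresholdDenoise(n_channels=4)
print(layer(torch.randn(8, 4, 1024)).shape)                # torch.Size([8, 4, 1024])
```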

31 pages, 1582 KiB  
Article
Recent Advances in 3D Object Detection for Self-Driving Vehicles: A Survey
by Oluwajuwon A. Fawole and Danda B. Rawat
AI 2024, 5(3), 1255-1285; https://doi.org/10.3390/ai5030061 - 25 Jul 2024
Abstract
The development of self-driving or autonomous vehicles has led to significant advancements in 3D object detection technologies, which are critical for the safety and efficiency of autonomous driving. Despite recent advances, several challenges remain in sensor integration, handling sparse and noisy data, and ensuring reliable performance across diverse environmental conditions. This paper comprehensively surveys state-of-the-art 3D object detection techniques for autonomous vehicles, emphasizing the importance of multi-sensor fusion techniques and advanced deep learning models. Furthermore, we present key areas for future research, including enhancing sensor fusion algorithms, improving computational efficiency, and addressing ethical, security, and privacy concerns. The integration of these technologies into real-world applications for autonomous driving is presented by highlighting potential benefits and limitations. We also present a side-by-side comparison of different techniques in a tabular form. Through a comprehensive review, this paper aims to provide insights into the future directions of 3D object detection and its impact on the evolution of autonomous driving.
(This article belongs to the Section AI in Autonomous Systems)

21 pages, 8752 KiB  
Article
Data-Driven Rotary Machine Fault Diagnosis Using Multisensor Vibration Data with Bandpass Filtering and Convolutional Neural Network for Signal-to-Image Recognition
by Dominik Łuczak
Electronics 2024, 13(15), 2940; https://doi.org/10.3390/electronics13152940 - 25 Jul 2024
Abstract
This paper proposes a novel data-driven method for machine fault diagnosis, named multisensor-BPF-Signal2Image-CNN2D. The method uses multisensor data, bandpass filtering (BPF), and a 2D convolutional neural network (CNN2D) for signal-to-image recognition, and is particularly suitable for scenarios where traditional time-domain analysis might be insufficient due to the complexity or similarity of the data. The results demonstrate that the multisensor-BPF-Signal2Image-CNN2D method achieves high accuracy in fault classification across three datasets (constant-velocity fan imbalance, variable-velocity fan imbalance, and the Case Western Reserve University Bearing Data Center). In particular, the proposed multisensor method exhibits a significantly faster training speed than the reference IMU6DoF-Time2GrayscaleGrid-CNN, IMU6DoF-Time2RGBbyType-CNN, and IMU6DoF-Time2RGBbyAxis-CNN methods, which also use the signal-to-image approach, requiring fewer iterations to achieve the desired level of accuracy. The interpretability of the model is also explored. This research demonstrates that bandpass filters combined with the signal-to-image approach and a CNN2D can provide robust, interpretable machine fault diagnosis in selected frequency bandwidths using multi-sensor data.
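
A hedged sketch of the bandpass-filtering and signal-to-image step, assuming a Butterworth bandpass and a simple row-stacking image layout; the filter order, band edges, and image width are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bpf_signal2image(signals, fs, low_hz, high_hz, img_width=64):
    """Bandpass-filter each sensor channel, then tile the filtered samples
    into a 2D array that a CNN2D can treat as a grayscale image."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signals, axis=1)           # (channels, samples)
    rows = filtered[:, : filtered.shape[1] // img_width * img_width]
    image = rows.reshape(-1, img_width)                    # each channel becomes a tile of rows
    return (image - image.min()) / (image.max() - image.min() + 1e-12)

img = bpf_signal2image(np.random.randn(6, 4096), fs=1000, low_hz=20, high_hz=200)
print(img.shape)  # (384, 64)
```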

24 pages, 4243 KiB  
Article
Machine Learning Methods for Predicting Argania spinosa Crop Yield and Leaf Area Index: A Combined Drought Index Approach from Multisource Remote Sensing Data
by Mohamed Mouafik, Mounir Fouad and Ahmed El Aboudi
AgriEngineering 2024, 6(3), 2283-2305; https://doi.org/10.3390/agriengineering6030134 - 17 Jul 2024
Abstract
In this study, we explored the efficacy of random forest algorithms in downscaling CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) precipitation data to predict Argane stand traits. Nonparametric regression integrated the original CHIRPS data with environmental variables, demonstrating enhanced accuracy aligned with ground rain gauge observations after residual correction. Furthermore, we explored the performance of a range of machine learning algorithms, encompassing XGBoost, GBDT, RF, DT, SVR, LR, and ANN, in predicting the leaf area index (LAI) and crop yield of Argane trees using condition-index-based drought indices such as PCI, VCI, TCI, and ETCI derived from multi-sensor satellites. The results demonstrated the superiority of XGBoost in estimating these parameters with drought indices as input: the XGBoost-based crop yield model achieved the highest R2 (0.94) and the lowest RMSE (6.25 kg/ha), and the XGBoost-based LAI model likewise showed the highest accuracy, with an R2 of 0.62 and an RMSE of 0.67. XGBoost thus demonstrated superior performance in predicting the crop yield and LAI of Argania spinosa, followed by GBDT, RF, and ANN. Additionally, the study employed a Combined Drought Index (CDI) to monitor agricultural and meteorological drought over two decades by combining four key parameters, PCI, VCI, TCI, and ETCI, validating its accuracy through comparison with other drought indices. The CDI exhibited positive correlations with VHI, SPI, and crop yield, with a particularly strong and statistically significant correlation with VHI (r = 0.83). The CDI is therefore recommended as an effective index for assessing and monitoring drought across Argane forest stands. These findings demonstrate the potential of advanced machine learning models for improving precipitation data resolution and enhancing agricultural drought monitoring, contributing to better land and hydrological management.
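
A toy sketch of the XGBoost regression setup, with synthetic drought-index features standing in for the PCI/VCI/TCI/ETCI inputs; all data and hyperparameters here are invented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from xgboost import XGBRegressor

# Hypothetical design matrix: one row per plot, columns PCI, VCI, TCI, ETCI
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 4))
y = 40 * X @ np.array([0.3, 0.4, 0.2, 0.1]) + rng.normal(0, 2, 400)  # synthetic yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  RMSE={mean_squared_error(y_te, pred) ** 0.5:.2f}")
```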

14 pages, 2858 KiB  
Article
Adaptive Multi-Sensor Fusion Localization Method Based on Filtering
by Zhihong Wang, Yuntian Bai, Jie Hu, Yuxuan Tang and Fei Cheng
Mathematics 2024, 12(14), 2225; https://doi.org/10.3390/math12142225 - 17 Jul 2024
Abstract
High-precision positioning is a fundamental requirement for autonomous vehicles. However, the accuracy of single-sensor positioning technology can be compromised in complex scenarios due to inherent limitations. To address this issue, we propose an adaptive multi-sensor fusion localization method based on the error-state Kalman filter. By incorporating a tightly coupled laser inertial odometer that utilizes the Normal Distribution Transform (NDT), we constructed a multi-level fuzzy evaluation model for posture transformation states. This model assesses the reliability of Global Navigation Satellite System (GNSS) data and the laser inertial odometer when GNSS signals are disrupted, prioritizing data with higher reliability for posture updates. Real vehicle tests demonstrate that our proposed positioning method satisfactorily meets the positioning accuracy and robustness requirements for autonomous driving vehicles in complex environments.
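
The reliability-gated update could be sketched as follows; the scores and the 0.3 threshold are invented stand-ins for the paper's multi-level fuzzy evaluation model.

```python
import numpy as np

def select_update(gnss_pos, gnss_score, lio_pos, lio_score):
    """Pick (or blend) the position update by reliability score, mimicking
    a fuzzy evaluation that gates GNSS versus laser-inertial odometry."""
    if gnss_score < 0.3:                      # GNSS judged unreliable (e.g., urban canyon)
        return lio_pos
    if lio_score < 0.3:
        return gnss_pos
    w = gnss_score / (gnss_score + lio_score) # both usable: reliability-weighted blend
    return w * gnss_pos + (1 - w) * lio_pos

pos = select_update(np.array([10.2, 4.1]), 0.15, np.array([10.5, 4.0]), 0.9)
print(pos)  # falls back to the laser-inertial estimate
```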

25 pages, 3230 KiB  
Article
Augmented Millimeter Wave Radar and Vision Fusion Simulator for Roadside Perception
by Haodong Liu, Jian Wan, Peng Zhou, Shanshan Ding and Wei Huang
Electronics 2024, 13(14), 2729; https://doi.org/10.3390/electronics13142729 - 11 Jul 2024
Abstract
Millimeter-wave radar has the advantages of strong penetration, high-precision speed detection, and low power consumption, and can be used for robust object detection in abnormal lighting and severe weather conditions. The emerging 4D millimeter-wave radar has improved the quality and quantity of the generated point clouds, and adding radar–camera fusion enhances the tracking reliability of transportation system operation. Testing such fusion algorithms is challenging, however, due to the absence of standardized testing methods. Hence, this paper proposes a radar–camera fusion algorithm testing framework for a highway roadside scenario using the SUMO and CARLA simulators. First, we propose a 4D millimeter-wave radar simulation method, and a roadside multi-sensor perception dataset is generated in a 3D environment through co-simulation. Then, deep-learning object detection models are trained under different weather and lighting conditions. Finally, we propose a baseline fusion method for the algorithm testing framework. This framework provides a realistic virtual environment for device selection, algorithm testing, and parameter tuning of millimeter-wave radar–camera fusion algorithms. The results show that the proposed method provides a realistic virtual environment for testing radar–camera fusion algorithms for roadside traffic perception. Compared to camera-only tracking, the proposed radar–vision fusion method significantly improves tracking performance in rainy night scenarios: the trajectory RMSE is improved by 68.61% in expressway scenarios and 67.45% in urban scenarios. The method can also be applied to improve the detection of stop-and-go waves on congested expressways.

26 pages, 5154 KiB  
Article
A Robust Deep Feature Extraction Method for Human Activity Recognition Using a Wavelet Based Spectral Visualisation Technique
by Nadeem Ahmed, Md Obaydullah Al Numan, Raihan Kabir, Md Rashedul Islam and Yutaka Watanobe
Sensors 2024, 24(13), 4343; https://doi.org/10.3390/s24134343 - 4 Jul 2024
Abstract
Human Activity Recognition (HAR), alongside Ambient Assisted Living (AAL), is an integral component of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise elderly privacy, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, act as input signals for different daily activities and provide potential information through time-frequency analysis. This time-series information is mapped into spectral images known as 'scalograms', derived from the continuous wavelet transform. Deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogleNet and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms.
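
A minimal sketch of the scalogram step using PyWavelets' continuous wavelet transform with a Morlet mother wavelet; the scale range and signal are illustrative, not the paper's configuration.

```python
import numpy as np
import pywt

def scalogram(signal, fs, n_scales=64, wavelet="morl"):
    """Continuous wavelet transform of a 1D sensor channel with a Morlet
    mother wavelet; |coefficients| forms the 2D scalogram image."""
    scales = np.arange(1, n_scales + 1)
    coeffs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1 / fs)
    return np.abs(coeffs), freqs              # (n_scales, n_samples) image, bin frequencies

acc_z = np.random.randn(512)                  # one accelerometer axis, e.g., 100 Hz
img, freqs = scalogram(acc_z, fs=100)
print(img.shape)                              # (64, 512), resized and fed to a CNN backbone
```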

24 pages, 13355 KiB  
Article
Enhanced Object Detection in Autonomous Vehicles through LiDAR—Camera Sensor Fusion
by Zhongmou Dai, Zhiwei Guan, Qiang Chen, Yi Xu and Fengyi Sun
World Electr. Veh. J. 2024, 15(7), 297; https://doi.org/10.3390/wevj15070297 - 3 Jul 2024
Abstract
To realize accurate environment perception, the technological key to enabling autonomous vehicles to interact with their external environments, it is first necessary to solve the issues of object detection and tracking during vehicle movement. Multi-sensor fusion has become an essential process in efforts to overcome the shortcomings of individual sensor types and improve the efficiency and reliability of autonomous vehicles. This paper puts forward moving object detection and tracking methods based on LiDAR–camera fusion. Building on the calibration of the camera and LiDAR, it uses the YOLO and PointPillars network models to perform object detection on image and point cloud data. Then, a target box intersection-over-union (IoU) matching strategy, based on center-point distance probability and improved Dempster–Shafer (D–S) theory, is used to perform class confidence fusion and obtain the final fusion detection result. In the process of moving object tracking, the DeepSORT algorithm is improved to address identity switching caused by dynamic objects re-emerging after occlusion. An unscented Kalman filter is utilized to accurately predict the motion state of nonlinear objects, and object motion information is added to the IoU matching module to improve the matching accuracy during data association. Verification on self-collected data shows that fusion detection and tracking perform significantly better than a single sensor. The evaluation indexes of the improved DeepSORT algorithm are 66% for MOTA and 79% for MOTP, which are, respectively, 10% and 5% higher than those of the original DeepSORT algorithm. The improved DeepSORT algorithm effectively solves the problem of tracking instability caused by the occlusion of moving objects.
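
The IoU computation at the heart of the matching strategy can be sketched as follows for axis-aligned 2D boxes; the center-point distance probability and D–S confidence fusion steps are omitted, and the box values are invented.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Camera detection vs. projected LiDAR detection of the same object
print(f"{iou([100, 80, 220, 200], [110, 90, 230, 210]):.3f}")  # ~0.725
```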

19 pages, 3876 KiB  
Article
An Adaptive Fast Incremental Smoothing Approach to INS/GPS/VO Factor Graph Inference
by Zhaoxu Tian, Yongmei Cheng and Shun Yao
Appl. Sci. 2024, 14(13), 5691; https://doi.org/10.3390/app14135691 - 29 Jun 2024
Abstract
In multi-sensor integrated navigation systems, asynchronous and delayed sensor measurements cause the computational complexity of joint-optimization navigation solutions to rise persistently. This paper introduces an adaptive fast integrated navigation algorithm for INS/GPS/VO based on factor graphs. The factor graph model for INS/GPS/VO is developed after individually modeling the Inertial Navigation System (INS), Global Positioning System (GPS), and Visual Odometer (VO) with the factor graph modeling approach. Additionally, an Adaptive Fast Incremental Smoothing (AFIS) factor graph optimization algorithm is proposed. Simulation results demonstrate that the factor-graph-based integrated navigation algorithm consistently yields high-precision navigation outcomes even amid dynamic changes in sensor validity and the presence of asynchronous and delayed sensor measurements. Notably, the AFIS factor graph optimization algorithm significantly enhances real-time performance compared to traditional Incremental Smoothing (IS) algorithms while maintaining comparable accuracy.
(This article belongs to the Collection Advances in Automation and Robotics)
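
For intuition only, here is a toy information-form fusion of position "factors" from three sources for a single static 2D state; a real factor graph smoother such as the paper's AFIS optimizes a whole trajectory of states with proper process models, and all values below are invented.

```python
import numpy as np

# Each sensor contributes a position "factor" with its own covariance; the
# MAP estimate of a static 2D state is the information-weighted combination
# of all factors received so far.
factors = [
    (np.array([10.0, 5.0]), np.diag([4.0, 4.0])),   # GPS (noisy, absolute)
    (np.array([10.3, 4.8]), np.diag([0.5, 0.5])),   # INS-propagated prior
    (np.array([10.2, 4.9]), np.diag([0.8, 0.8])),   # VO-derived pseudo-measurement
]

info = np.zeros((2, 2))          # accumulated information matrix
info_vec = np.zeros(2)           # accumulated information vector
for z, cov in factors:           # "incremental" update: add one factor at a time
    w = np.linalg.inv(cov)
    info += w
    info_vec += w @ z
x_map = np.linalg.solve(info, info_vec)
print(x_map)                     # fused position estimate
```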

19 pages, 4678 KiB  
Article
Rotation Error Prediction of CNC Spindle Based on Short-Time Fourier Transform of Vibration Sensor Signals and Improved Weighted Residual Network
by Lin Song and Jianying Tan
Sensors 2024, 24(13), 4244; https://doi.org/10.3390/s24134244 - 29 Jun 2024
Abstract
The spindle rotation error of computer numerical control (CNC) equipment directly reflects the machining quality of the workpiece and is a key indicator of the performance and reliability of CNC equipment. Existing rotation error prediction methods do not consider the differing importance of data from different sensors. This study developed an adaptive weighted deep residual network (ResNet) for predicting spindle rotation errors, thereby establishing an accurate mapping between easily obtainable vibration information and difficult-to-obtain rotation errors. First, multi-sensor data are collected by vibration sensors, and the short-time Fourier transform (STFT) is adopted to extract the feature information in the original data. Then, an adaptive feature recalibration unit with a residual connection is constructed based on the attention weighting operation. By stacking multiple residual blocks and attention weighting units, the data of different channels are adaptively weighted to highlight important information and suppress redundant information. The weight visualization results indicate that the adaptive weighted ResNet (AWResNet) can learn a set of weights for channel recalibration. Comparison results indicate that AWResNet achieves higher prediction accuracy than other deep learning models and can be used for spindle rotation error prediction.
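
The attention-based channel recalibration could resemble this squeeze-and-excitation style PyTorch sketch applied to STFT maps; the layer sizes and reduction ratio are assumptions, not the paper's AWResNet design.

```python
import torch
import torch.nn as nn

class ChannelWeighting(nn.Module):
    """Squeeze-and-excitation style recalibration: learn one weight per
    sensor channel so informative channels dominate the fused features."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, freq, time)
        squeeze = x.mean(dim=(2, 3))            # global average pool per channel
        weights = self.fc(squeeze)[:, :, None, None]
        return x * weights                      # recalibrated STFT maps

stft_maps = torch.randn(16, 8, 129, 50)         # 8 vibration channels after STFT
print(ChannelWeighting(8)(stft_maps).shape)     # torch.Size([16, 8, 129, 50])
```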

18 pages, 3298 KiB  
Article
Wheat Yield Prediction Using Machine Learning Method Based on UAV Remote Sensing Data
by Shurong Yang, Lei Li, Shuaipeng Fei, Mengjiao Yang, Zhiqiang Tao, Yaxiong Meng and Yonggui Xiao
Drones 2024, 8(7), 284; https://doi.org/10.3390/drones8070284 - 24 Jun 2024
Abstract
Accurate forecasting of crop yields is of paramount importance in guiding decision-making related to breeding efforts. Despite significant advancements in crop yield forecasting, existing methods often struggle to integrate diverse sensor data and to achieve high prediction accuracy under varying environmental conditions. This study focused on the application of multi-sensor data fusion and machine learning algorithms based on unmanned aerial vehicles (UAVs) to wheat yield prediction. Five machine learning (ML) algorithms, namely random forest (RF), partial least squares (PLS), ridge regression (RR), k-nearest neighbor (KNN), and extreme gradient boosting decision tree (XGBoost), were utilized for multi-sensor data fusion, together with three ensemble methods, the second-level stacking and feature-weighted methods and the third-level simple average method, for wheat yield prediction. In total, 270 wheat hybrids were used as planting materials under full and limited irrigation treatments, and a cost-effective multi-sensor UAV platform equipped with red-green-blue (RGB), multispectral (MS), and thermal infrared (TIR) sensors was used to gather remote sensing data. The results revealed that the XGBoost algorithm exhibited outstanding performance in multi-sensor data fusion, with the RGB + MS + Texture + TIR combination demonstrating the highest fusion performance (R2 = 0.660, RMSE = 0.754 t ha−1). Compared with single ML models, the three ensemble methods significantly enhanced the accuracy of wheat yield prediction. Notably, the third-level simple average ensemble method demonstrated superior performance (R2 = 0.733, RMSE = 0.668 t ha−1), significantly outperforming both the second-level stacking (R2 = 0.668, RMSE = 0.673 t ha−1) and feature-weighted (R2 = 0.667, RMSE = 0.674 t ha−1) ensembles. This finding highlights the third-level ensemble method's ability to refine the accuracy of wheat yield prediction through simple average ensemble learning, offering a novel perspective for crop yield prediction and breeding selection.
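
A toy sketch of the third-level "simple average" ensemble over a few base regressors; the features, models, and data are invented stand-ins for the fused UAV features.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                       # stand-in for fused RGB+MS+Texture+TIR features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 200)   # synthetic yield on a t/ha scale
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

base_models = [RandomForestRegressor(random_state=0), Ridge(), KNeighborsRegressor()]
preds = [m.fit(X_tr, y_tr).predict(X_te) for m in base_models]
ensemble = np.mean(preds, axis=0)                    # simple average of base predictions
rmse = np.sqrt(np.mean((ensemble - y_te) ** 2))
print(f"simple-average RMSE: {rmse:.3f} t/ha")
```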
