Article

UBO-EREX: Uncertainty Bayesian-Optimized Extreme Recurrent EXpansion for Degradation Assessment of Wind Turbine Bearings

by Tarek Berghout 1 and Mohamed Benbouzid 2,3,*
1 Laboratory of Automation and Manufacturing Engineering, University of Batna 2, Batna 05000, Algeria
2 Institut de Recherche Dupuy de Lôme (UMR CNRS 6027), University of Brest, 29238 Brest, France
3 Logistics Engineering College, Shanghai Maritime University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(12), 2419; https://doi.org/10.3390/electronics13122419
Submission received: 8 May 2024 / Revised: 11 June 2024 / Accepted: 19 June 2024 / Published: 20 June 2024

Abstract

Maintenance planning is crucial for the efficient operation of wind turbines, particularly in harsh conditions where degradation of critical components, such as bearings, can lead to costly downtimes and safety threats. In this context, prognostics of degradation play a vital role, enabling timely interventions to prevent failures and optimize maintenance schedules. Vibration analysis of bearings based on learning systems stands out as one of the primary methods for assessing wind turbine health. However, data complexity and harsh operating conditions pose significant challenges to accurate degradation assessment. This paper proposes a novel approach, Uncertainty Bayesian-Optimized Extreme Recurrent EXpansion (UBO-EREX), which combines Extreme Learning Machines (ELM), a lightweight neural network, with Recurrent Expansion algorithms, a recently advanced representation learning technique. The UBO-EREX algorithm leverages Bayesian optimization to optimize its parameters, targeting uncertainty as an objective function to be minimized. We conducted a comprehensive study comparing UBO-EREX with basic ELM and a set of time-series adaptive deep learners, all optimized using Bayesian optimization with prediction errors as the main objective. Our results demonstrate the superior performance of UBO-EREX in terms of approximation and generalization. Specifically, UBO-EREX improves the coefficient of determination of generalization by approximately 5.1460 ± 2.1338% over the deep learners and by 5.7056% over ELM. Moreover, UBO-EREX reduces the objective search time by 99.7884 ± 0.2404% compared with the deep learners, highlighting its effectiveness in real-time degradation assessment of wind turbine bearings. Overall, our findings underscore the significance of incorporating uncertainty-aware UBO-EREX in predictive maintenance strategies for wind turbines, offering enhanced accuracy, efficiency, and robustness in degradation assessment.

1. Introduction

Wind turbines play a pivotal role in the renewable energy landscape, offering a sustainable solution to power generation [1,2]. However, ensuring the reliable operation of wind turbines is crucial for maximizing energy output and minimizing maintenance costs [3,4]. Among the various components of wind turbines, bearings are particularly susceptible to degradation, which can lead to costly downtimes and safety risks [5,6]. Therefore, effective prognostics of bearing health are essential for predictive maintenance and planning, enabling timely interventions to prevent failures and optimize operational efficiency [7,8,9].
While techniques such as deep learning have demonstrated promise in analyzing complex vibration data to detect early signs of deterioration in wind turbine bearings, the field still faces persistent challenges and research gaps. Despite advancements in prognostics and predictive maintenance techniques, recent state-of-the-art works highlight the existence of several unresolved challenges in assessing wind turbine bearing degradation. In this section, we provide an overview of notable works and their contributions, culminating in an exploration of overarching research gaps.
For instance, authors in [10] developed a sophisticated prognostic method for bearing health management by introducing a new health indicator that leverages multi-scale, distribution-similarity-based features, optimized using a multi-objective grasshopper optimization algorithm. This targets data quality in the life cycle record using metrics such as robustness, monotonicity, trendability, and prognosability. This health indicator is integrated with a Gated Recurrent Unit (GRU) network that adaptively determines hyperparameters to predict the remaining useful life (RUL) of bearings using the same optimization process for hyperparameters. Although the process of enhancing data quality and modeling effectively handles high-dimensional data and complex relationships within the data, it remains computationally complex due to its multi-layered optimization and feature fusion processes. Moreover, the study does not specifically target uncertainty quantification within the predictive model, focusing more on accuracy and robustness against noise than explicit uncertainty management. The authors of [11] developed a novel prognostic strategy for predicting the RUL of rolling element bearings, focusing on integrating robust anomaly detection and multi-step estimation techniques. They employed support vector data description for anomaly detection in noisy data and moving horizon estimation for multi-step estimation, allowing for consideration of multiple previous states rather than just the immediate past. This approach was enhanced by extracting advanced entropy and sparsity-based health indicators from signals filtered across different frequency bands, with the most predictive health indicator selected based on a specific criterion. The methodology aimed to improve the accuracy and reliability of RUL predictions, addressing complexities in both data handling and model formulation. By focusing on improving data quality and employing simpler learning methods like support vector data description instead of more complex deep learning architectures, the authors effectively reduced modeling complexity. However, explicit quantification of uncertainty was not specifically targeted in this study. The authors in [12] developed a graph domain adaptation method for predicting the RUL of rolling bearings. They constructed a dynamic model to simulate bearing degradation and generate extensive data, which was used to train a multi-layered cross-domain gated graph convolutional network. The designed network model enhances the ability to discern graph domain differences and adapt features from twin data to real data, optimizing prediction accuracy. However, the authors did not explicitly address uncertainty quantification within their predictive modeling framework. This omission could be a limitation, as uncertainty quantification is crucial in reliability engineering for understanding the variability in data and model outputs and for making informed decisions under uncertainty. Regarding model complexity, the use of advanced methods might lead to a model that is computationally expensive and requires substantial computational resources for training and inference. Additionally, the complexity might make the model more challenging to interpret and maintain, potentially limiting its applicability in scenarios where simpler models might suffice or where computational resources are constrained. The authors in [13] developed a parallel neural network architecture designed to estimate the RUL of rolling element bearings. 
This architecture integrates parallel processing pathways with advanced deep learning techniques such as time transformers and convolutional long short-term memory networks. To address data complexity, the researchers implemented a variational stride temporal window strategy that dynamically adjusts data extraction based on the degradation stage of the components. This strategy, along with the parallel network, ensures that large volumes of data can be processed simultaneously with less information loss. While several techniques, such as positional encoding and self-attention mechanisms, are detailed, the study does not explicitly discuss uncertainty quantification in the context of RUL predictions. A limitation or area for future work in similar studies could focus on the explicit quantification of uncertainty in predictions to enhance the reliability and robustness of the predictive models used in industrial applications. Another area could involve simplifying the architecture to reduce computational demands while maintaining high accuracy levels. In [14], the authors developed a method to predict the RUL of rolling bearings by integrating a framework that combines multi-domain mixed features with a temporal convolutional network. For effective handling of complex and noisy data, they utilized the dung beetle algorithm to optimize the variational mode decomposition method, enabling superior noise reduction. This optimization was crucial for enhancing the quality of input data through improved feature extraction, which included time-domain, frequency-domain, and entropy features. Additionally, the model complexity was addressed by incorporating a multi-head attention mechanism and a bidirectional gated recurrent unit into the temporal convolutional network, enhancing its ability to process and predict complex datasets. Although the study extensively tackled noise and feature extraction complexities, it did not explicitly address uncertainty quantification within the predictions. The optimized approach ensured effective handling of data and model complexities, facilitating accurate RUL predictions without reducing the deep learning architecture’s complexity. In [15], the authors present a framework that utilizes regression models to accurately forecast the RUL of bearings. The models are trained using operational data, which is collected via a supervisory control and data acquisition (SCADA) system. The system begins by carefully filtering the data and then constructs a deterioration profile by analyzing the behavior of temperature time series. Furthermore, it utilizes a cross-validation technique to tackle the issue of limited data, thereby improving the reliability of the model by using accessible subsets of data from other turbines. Multiple models were created, with an average RUL estimation of 20 days. The work in [16] presents a mechanism that can accurately detect faults in the inner race of bearings and predict their RUL under various conditions. The model combines time and frequency-domain vibration signal analysis to extract characteristics, leverages a stacked variational denoising autoencoder to create a health indicator, and employs a bidirectional long short-term memory neural network to forecast the RUL of the bearings.
Overall, these works share a common perspective: all of them address data complexity at an early stage with a specific set of data processing techniques. Once data complexity is reduced, complex deep learning architectures, with multiple layers, parallel structures, and numerous nonlinear abstractions, come into play. The problem of hyperparameter selection is generally addressed via optimization algorithms. On the other hand, these works leave behind important research gaps, providing an opportunity for new research contributions. The complexity of deep learning algorithms poses significant barriers, including the following:
  • The need for extensive computational resources and expertise for implementation and optimization;
  • The computational time associated with deep learning models can be prohibitive, particularly for real-time applications where timely decision-making is crucial;
  • The inherent complexity of vibration data collected from wind turbines, coupled with the uncertainties introduced by harsh environmental conditions, further exacerbates the challenge of accurate degradation assessment.
In this context, our contributions aim to address the aforementioned challenges by proposing a novel approach, Uncertainty Bayesian-Optimized Extreme Recurrent EXpansion (UBO-EREX), for wind turbine bearing degradation assessment. Similar to previous works, after exposing the data to a well-designed preprocessing pipeline, including denoising, feature extraction, and outlier removal, a new learning scheme comes into play. Our approach combines the strengths of Extreme Learning Machines (ELM), a lightweight neural network, with recurrent expansion algorithms, a recently advanced representation learning technique [17,18]. By leveraging Bayesian optimization, we optimize the parameters of the UBO-EREX algorithm, with a focus on minimizing uncertainty as the objective function [19]. Our solution offers several key advantages over existing approaches:
  • UBO-EREX provides a more computationally efficient alternative to traditional deep learning models, enabling faster model training and inference.
  • REX also integrates principal component analysis (PCA), controlled by a variance-retention ratio hyperparameter, allowing for a reduction of the REX mapping size and optimization of learning performance.
  • By targeting uncertainty in the optimization process, UBO-EREX enhances the robustness and reliability of degradation assessment, particularly in the face of data complexity and environmental uncertainties.
  • Additionally, our approach simplifies the model architecture while improving approximation and generalization performance, making it suitable for real-time applications in wind turbine maintenance and planning.
While it is true that many works combine ELM theories or machine learning methods in general with Bayesian optimization, such as those found in [20,21,22], this work, to the best of our knowledge, is the first to combine Bayesian optimization with ELM specifically for the purpose of uncertainty reduction, not just for enhancing generalization capability. This distinct focus on uncertainty reduction sets this current research apart from previous studies. Additionally, although some scientists familiar with the field might consider this combination a traditional contribution, our work introduces a significant novelty by integrating both ELM and Bayesian optimization with the innovative learning rules of Recurrent EXpansion (REX) [17]. REX is a cutting-edge technique currently in its early development stage, and its combination with the UBO-EREX model makes this approach particularly novel and unique. The integration of REX involves iterative learning, where the model not only learns from additional mappings of labels but also enhances its understanding of input-label interactions over multiple rounds. This iterative process significantly improves the model’s approximation and generalization capabilities.
Within the REX framework, the utilization of PCA for dimensionality reduction enhances the learning process by effectively managing the large number of hidden layers. This methodology stands out due to its comprehensive approach to uncertainty quantification. Through the incorporation of confidence intervals and the utilization of metrics like stability, coverage probability, and interval width, our model offers a robust evaluation of prediction uncertainty. This meticulous attention to uncertainty quantification represents a significant advancement over traditional ELM implementations, which typically prioritize generalization without adequately addressing prediction confidence. Bayesian optimization was employed, with careful consideration given to defining the hyperparameter space and ensuring convergence within computational constraints.
For a more in-depth view of our methodology, the flowchart in Figure 1 provides a concise summary of our contributions in order. Additionally, this flowchart offers a general overview of the dataset used and simplifies the understanding of our approach. Subsequent sections will provide detailed explanations of each step.
In summary, our contributions are expected to offer a promising solution to the challenges in wind turbine bearing degradation assessment, paving the way for more effective predictive maintenance strategies and enhanced operational efficiency in the renewable energy sector.
The remainder of this paper is organized as follows: Section 2 is dedicated to describing the data utilized in this study, exploring its complexity, and detailing the various data processing techniques applied, accompanied by illustrative examples. Section 3 focuses on the methods employed, emphasizing the overall architecture of UBO-EREX. Section 4 presents the results and discussions, where several approximation metrics and methods of uncertainty quantification are applied and thoroughly discussed. Finally, Section 5 concludes the paper with key insights and future perspectives.

2. Materials

In order to derive more accurate conclusions regarding bearing deterioration, this study integrates a realistic dataset obtained from real-world conditions [23]. In a previous study, a comprehensive run-to-failure experiment was conducted to monitor real-time health indicators of a high-speed shaft equipped with a 20-tooth pinion gear, driven by a 2 MW wind turbine. The data collection process involved meticulous measurements to account for environmental and operational fluctuations affecting turbine performance [23]. Accelerometers mounted on the turbine’s shaft captured vibrations at a high sampling rate of 97,656 Hz within 6-s time intervals per window. This setup effectively captured subtle variations in vibrations caused by changes in wind speed and mechanical loads.
To address the non-stationarity inherent in the data due to frequent fluctuations in shaft speed during operation, synchronous resampling techniques were employed. These variations stem from factors such as fluctuations in wind speed, torque ripple effects from the tower, and other operational loads influencing the turbine’s mechanical stability. Synchronous resampling aligned the vibration data to a consistent reference frame, improving the reliability of spectral analyses used for detecting potential bearing faults and ensuring the accuracy of the monitoring process.
This meticulous approach facilitated early detection of bearing failures, thereby enhancing maintenance and operation strategies for wind turbines. The data collection process specifically targeted failures associated with high-speed shaft bearings in wind turbines. The analysis revealed inner race failures as a prevalent fault type due to the significant stress and load endured by these components, as illustrated in Figure 2.
The application of synchronous resampling enhanced fault detection by improving the resolution of frequency analysis, allowing for clearer differentiation between fault-related frequencies and normal operational frequencies. This method effectively identified deviations in vibration patterns indicative of bearing degradation, such as increases in inner race energy, which directly correlate with the presence of faults. This approach not only aids in early fault detection but also contributes to a more targeted and efficient maintenance regime, reducing downtime and enhancing turbine efficiency.
Throughout the 50-day observation period under normal operating conditions, 50 profiles were stored separately, with each file containing approximately 585,936 samples treated as a single health indicator. It was observed that the collected data exhibited exponential variations over time due to changes in the physical health conditions of the bearings, as depicted in Figure 3a. Consequently, after 50 days of operation, the bearings ceased functioning due to the occurrence of an inner race fault.
To summarize, the detailed information about the dataset is presented in the following list.
  • Data source and experiment setup: A comprehensive run-to-failure experiment is conducted to monitor real-time health indicators for a high-speed shaft with a 20-tooth pinion gear driven by a 2 MW wind turbine.
  • Data collection process: Measurements are taken to account for environmental and operational fluctuations, and accelerometers on the turbine’s shaft capture vibrations at a high sampling rate of 97,656 Hz in 6-s intervals, capturing subtle variations due to wind speed and mechanical loads.
  • Data handling and processing: To address non-stationarity in the data, a synchronous resampling technique is employed, aligning vibration data to a consistent reference frame and improving spectral analysis reliability.
  • Detection and analysis: The experiment focuses on early detection of bearing failures, particularly targeting high-speed shaft bearings, revealing inner race failures as prevalent due to significant stress and load.
  • Synchronous resampling: Enhanced fault detection is expected through improved frequency analysis resolution, allowing clearer differentiation between fault-related and normal operational frequencies and identifying deviations indicative of bearing degradation.
  • Observation period and data profiles: The process is conducted over a 50-day period, storing 50 profiles, each with approximately 585,936 samples treated as single health indicators. Exponential variations over time are observed, with the bearings ceasing to function due to an inner race fault after 50 days.
  • Training and testing: Lastly, in this work, it is worth mentioning that the data splitting process follows the 80–20% division rule, ensuring 80% of the data is used for training and 20% for testing. This approach helps create a balanced dataset for robust model training while maintaining enough data for evaluating predictive accuracy.
As addressed in Figure 3b–g, specific steps are followed in this work in order to reveal the primary degradation patterns inherent in the signals before feeding the learning systems. These steps include denoising, feature extraction, outlier removal, and linear filtering. Before and after each step, the data are scaled to the interval [0, 1]. The data processing steps are explained as follows.

2.1. Denoising

Vibration signals collected from wind turbines often contain noise due to many factors, such as environmental conditions, mechanical vibrations, electrical interference, sensor imperfections, and artifacts introduced during transmission and signal processing. In this work, denoising is executed through several steps of wavelet denoising algorithms. These algorithms, including Beylkin, Best-localized Daubechies, Symlets, Coiflets, Daubechies, Fejer-Korovkin, Morris minimum-bandwidth orthogonal, and Vaidyanathan, are applied to each vibration signal independently [25,26]. During the denoising process, each algorithm operates by decomposing the vibration signal into its constituent wavelet components, effectively separating the signal from noise. By leveraging the unique properties of wavelets, such as localization in the time and frequency domains, these algorithms identify and suppress noise components while retaining the signal features of interest. It is worth mentioning that the default parameters for wavelet denoising as per the MATLAB 23.2.0.2515942 (R2023b) Update 7 documentation, including automatic determination of the decomposition level based on signal length, soft thresholding for noise suppression, universal thresholding for noise estimation, level-independent thresholding, and automatic rescaling of coefficients, are applied, ensuring efficient and reliable denoising of the signals. The denoising process, as depicted in Figure 3b, effectively mitigates the fluctuations in signal amplitudes experienced by the data. This reduction in fluctuations is particularly notable within the time range of (30, 40) days, where the denoising process distinguishes these fluctuations from actual failure patterns. In contrast, the time range of (40, 50) days exhibits significant growth in signal amplitude, along with larger envelopes and perturbations. Consequently, after denoising, degradation patterns become more discernible than in the raw data of Figure 3a, as the process clarifies the underlying trends and features within the data, making fault patterns more evident and facilitating accurate fault detection and analysis.
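As a rough illustration of this step, the following Python sketch applies soft-threshold wavelet denoising to a single vibration signal. It assumes the PyWavelets package and uses a Daubechies wavelet ('db4') with a universal threshold as stand-ins for the MATLAB defaults and the full set of wavelet families listed above.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4"):
    # Decompose the signal into approximation and detail coefficients.
    coeffs = pywt.wavedec(signal, wavelet)
    # Estimate the noise level from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold, applied to the detail coefficients with soft thresholding.
    threshold = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    # Reconstruct and trim to the original length.
    return pywt.waverec(coeffs, wavelet)[: len(signal)]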

2.2. Variance Extraction

After denoising, variance extraction follows to provide insights into the spread of signal values within each window and to capture the variability of the signals over time [27]. In this implementation, a window of size 300 samples is moved along the signal, and at each position, the variance of the signal within that window is computed. As depicted in Figure 3c, the degradation patterns now become clearer, describing the health of the turbine. However, a potential issue is the presence of anomalies in the data, particularly in the first recorded samples. These anomalies manifest as massive variability in the data, which may not accurately reflect the health of the turbine. Instead, they are more likely attributable to measurement errors caused by various factors. Notably, the amplitudes of these anomalies are observed to equal or exceed the variance amplitudes observed towards the end of life of the turbine, which is logically inconsistent. Therefore, to address this issue and mitigate the influence of these misrepresented samples, the subsequent outlier removal step becomes necessary. Outlier removal aims to identify and eliminate these anomalous data points, thus refining the dataset and improving the accuracy of subsequent analysis and interpretation.
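A minimal Python sketch of this sliding-window variance extraction, assuming NumPy and the 300-sample window stated above (the loop is kept simple for clarity rather than speed):

import numpy as np

def sliding_variance(signal, window=300):
    # Variance of the signal inside each window position (stride of one sample).
    out = np.empty(len(signal) - window + 1)
    for i in range(len(out)):
        out[i] = np.var(signal[i : i + window])
    return out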

2.3. Envelope Analysis

Before proceeding to outlier removal, another essential feature-extraction step, signal envelope extraction, is considered necessary [28]. In this implementation, a time window of a specified size (i.e., 200 samples) is slid along the signal, and at each position, the envelope values are computed. These envelopes provide valuable information about the signal’s behavior, facilitating further analysis and interpretation, such as identifying trends, periodicities, and anomalies. In Figure 3d, the obtained results of envelope extraction showcase a reduction in fluctuations in the measurements. However, the anomalies identified in the previous step of variance extraction persist, highlighting the need for further discussion and recommendations regarding the use of outlier removal.
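The sketch below illustrates one way to compute such a windowed envelope in Python; it assumes NumPy, the 200-sample window stated above, and uses the maximum absolute amplitude in each window as a simple stand-in for the envelope estimator actually used.

import numpy as np

def sliding_envelope(signal, window=200):
    # Upper envelope approximated by the peak absolute amplitude inside each window.
    rectified = np.abs(signal)
    out = np.empty(len(signal) - window + 1)
    for i in range(len(out)):
        out[i] = rectified[i : i + window].max()
    return out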

2.4. Outlier Detection and Removal

Outliers, defined as data points that significantly deviate from the majority of the dataset, have the potential to distort analysis results and lead to inaccurate conclusions [29,30]. In this work, robust outlier detection techniques are employed to identify and remove such outliers, thereby ensuring the integrity of the dataset. Utilizing various statistical models and algorithms, including median analysis, Grubbs’ test, mean analysis, and quartile analysis, the process systematically identifies outliers that deviate significantly from the underlying data distribution. Figure 3e illustrates the results obtained after outlier removal, particularly highlighting the elimination of the spurious pulses observed at the end of the turbine’s life span, within the range of 40 to 50 days. This indicates that the outlier removal process has effectively identified and removed anomalous data points or pulses that occurred towards the end of the turbine’s operation. By eliminating these outliers, the dataset is refined, and the integrity of the data is enhanced, allowing for more accurate and reliable analysis. However, there still remains a challenge posed by anomalies at the beginning of the turbine’s life. Consequently, we are compelled to explore trend analysis methods, such as linear filtering, to address this issue.
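As a hedged illustration of the kind of rules involved, the Python sketch below flags outliers with median- and quartile-based criteria, assuming NumPy; the Grubbs and mean-based tests used in the study would require their own implementations.

import numpy as np

def detect_outliers(x, k_mad=3.0, k_iqr=1.5):
    # Median rule: distance from the median in units of the scaled median absolute deviation.
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-12
    mad_flags = np.abs(x - med) / (1.4826 * mad) > k_mad
    # Quartile rule: points outside the Tukey fences built from the interquartile range.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    iqr_flags = (x < q1 - k_iqr * iqr) | (x > q3 + k_iqr * iqr)
    return mad_flags | iqr_flags

# Usage: keep only the inliers.
# clean = signal[~detect_outliers(signal)]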

2.5. Trend Analysis

Linear regression filtering is a crucial technique utilized to uncover underlying trends or patterns within datasets. By applying this method, we aim to identify long-term changes or abnormalities that may serve as indicators of impending bearing faults or degradation [31]. In this particular study, the linear regression filtering process is employed to smooth the signal and uncover significant trends or anomalies within the data. Specifically, a window size of 9800 data points is utilized for the filtering operation to encompass a substantial range of observations. Ensuring that the window size remains odd maintains the integrity of the filtering process, preserving symmetry and accuracy in trend analysis. Figure 3f illustrates the outcomes derived from the filtering process, demonstrating a reduction in anomalies present at the beginning of the life cycle. This reduction signifies an improved depiction of the degradation trend, as the filtering process effectively mitigates noise and fluctuations, thereby facilitating a clearer understanding of the data and enhancing the ability to discern meaningful patterns indicative of bearing health.
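One simple way to realize such a moving linear-regression filter in Python is a Savitzky-Golay filter of polynomial order one, which fits a local least-squares line inside each window; the sketch below assumes SciPy and an odd window length of 9801 samples, since the filter requires an odd window, whereas the text quotes 9800.

from scipy.signal import savgol_filter

def linear_trend_filter(signal, window=9801):
    # Local least-squares line fit in each window (Savitzky-Golay with polyorder = 1).
    return savgol_filter(signal, window_length=window, polyorder=1)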

2.6. RUL Label Generation

Subsequently, a linearly spaced array is created, spanning from 50 to 0 days. This array represents the RUL values, where 50 days corresponds to the initial state (full health) and 0 days represents the end of the bearing’s operational life (failure). The length of this array matches the length of the dataset, ensuring that each data point is associated with a corresponding RUL label, as addressed in Figure 3h.
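A minimal sketch of this label construction, together with the 80-20% split mentioned earlier, assuming NumPy and a chronological split (the split ordering is our assumption):

import numpy as np

def make_rul_labels(n_samples, horizon_days=50.0):
    # Linearly spaced RUL labels: full health (50 days) down to failure (0 days).
    return np.linspace(horizon_days, 0.0, n_samples)

def split_train_test(features, labels, train_ratio=0.8):
    # 80% of the record for training, the remaining 20% for testing.
    split = int(len(features) * train_ratio)
    return (features[:split], labels[:split]), (features[split:], labels[split:])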
It is worth highlighting that our approach to data quality analysis relies heavily on visual inspection at each step of the process. This means that human intervention is necessary to determine whether degradation signals are detectable in the data. While this visual inspection has proven effective in achieving our objectives thus far, it is essential to acknowledge that this approach has limitations. Visual inspection may not always capture subtle patterns or anomalies in the data, potentially leading to overlooked insights or inaccuracies in the analysis. Therefore, there is a need for future research to explore and develop more analytical and precise methods for data analysis. By incorporating advanced analytical techniques, such as statistical methods, we can enhance the robustness and accuracy of our data analysis processes, ultimately leading to more reliable insights and conclusions.

3. Methods

This study integrates the ELM [32] and REX [17] learning methodologies to formulate the proposed UBO-EREX model. This unified framework optimizes all hyperparameters via Bayesian optimization [33] while focusing on uncertainty quantification as the core objective function. The uncertainty quantification is conducted through the confidence interval philosophy, allowing for the estimation of the range within which predictions are expected to fall. Thus, this section is dedicated to elucidating this approach, with an additional emphasis on the philosophy of uncertainty quantification.

3.1. UBO-EREX

As depicted in the flow diagram of Figure 4, the proposed UBO-EREX architecture involves the utilization of both the ELM network architecture and REX. In this work, ELM is trained by generating random input weights and biases $(a, b)$ for the hidden layer $H$, which is then activated by an activation function $\sigma$ for specific inputs $x$, as in (1).
After that, the learning weights $\beta$, which are the output weights of $H$, are computed using the Moore-Penrose pseudo-inverse of the matrix involving a regularization parameter $C$, the transpose of the hidden layer $H$, the desired outputs $T$, and the identity matrix $I$, as in (2). Here, $\sigma$, the number of neurons $l$ in $H$, and $C$ are hyperparameters of the basic ELM architecture used here.
$H = \sigma(a x + b)$    (1)
$\beta = \mathrm{pinv}(H^{\top} H + C I)\, H^{\top} T$    (2)
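A minimal NumPy sketch of this ELM training step, assuming a sigmoid activation and the regularized pseudo-inverse solution of Equation (2); the helper names elm_fit and elm_predict are ours.

import numpy as np

def elm_fit(x, T, n_neurons=100, C=1e3, seed=0):
    rng = np.random.default_rng(seed)
    # Random input weights a and biases b for the hidden layer (Equation (1)).
    a = rng.standard_normal((x.shape[1], n_neurons))
    b = rng.standard_normal(n_neurons)
    H = 1.0 / (1.0 + np.exp(-(x @ a + b)))  # sigmoid activation
    # Output weights beta via the regularized pseudo-inverse (Equation (2)).
    beta = np.linalg.pinv(H.T @ H + C * np.eye(n_neurons)) @ H.T @ T
    return a, b, beta

def elm_predict(x, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(x @ a + b)))
    return H @ beta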
In the REX philosophy, the learning model is expected to learn both model representations and behavior by merging the entire ELM neural network outcome, including $x$, $H$, and the estimated outputs $\tilde{T}$, into another ELM network over multiple rounds $k$. By repeating the process over several rounds, as addressed in (3), the model’s approximation and generalization are expected to improve over time. This improvement occurs because the model first learns from additional mappings of the labels, serving as a source of transductive learning. In addition, through each input and response, the model in round $k+1$ gains a better sense of the interaction between inputs and labels observed in round $k$.
$x_{k+1} = [\, x_k,\ \rho(H_k, \tilde{T}_k) \,]$    (3)
In the REX formula (3), $\rho$ represents a data processing function used to process the feature maps and estimated targets, particularly because the expected large size of the hidden layer would otherwise complicate the REX of subsequent rounds. Accordingly, this work defines $\rho$ as a dimensionality reduction algorithm based on principal component analysis (PCA) [34]. PCA is controlled by the retained variance ratio $v_{ratio}$, which is given as a hyperparameter of REX in this case, along with the number of rounds $k$. The $\rho$ algorithm can then be defined as follows.
Let $x_r$ be the reduced feature matrix, where $x$ is the original feature matrix and $v_{ratio}$ is the desired explained variance ratio. The PCA reduction is expressed by first computing the covariance matrix $\delta$, as in (4), and then performing a singular value decomposition (SVD) of $\delta$, as in (5), with singular values $\sigma_i$.
$\delta = \mathrm{cov}(x)$    (4)
$\delta = U \Sigma V^{\top}$    (5)
The next step consists of computing the total variance $V$, as in (6), after which the target variance to retain, $V_t$, is computed, as in (7).
$V = \sum_{i=1}^{n} \sigma_i^2$    (6)
$V_t = v_{ratio}\, V$    (7)
The last step consists of determining the number of principal components to retain, $N_{PCA}$, as in (8).
$N_{PCA} = \arg\min_{m} \left\{ \sum_{i=1}^{m} \sigma_i^2 \geq V_t \right\}$    (8)
Matrix dimensions are reduced using the retained principal components, as in (9), where $U_r$ consists of the first $N_{PCA}$ columns of $U$.
$x_r = x \times U_r$    (9)
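Under these assumptions, one REX round can be sketched in Python as follows; it uses scikit-learn's PCA with a fractional n_components to realize the retained-variance rule of Equations (6)-(8), and reuses the illustrative elm_fit and elm_predict helpers defined above.

import numpy as np
from sklearn.decomposition import PCA

def rex_round(x_k, T, v_ratio=0.7, n_neurons=100, C=1e3):
    # Train an ELM on the current representation x_k.
    a, b, beta = elm_fit(x_k, T, n_neurons=n_neurons, C=C)
    H = 1.0 / (1.0 + np.exp(-(x_k @ a + b)))
    T_hat = elm_predict(x_k, a, b, beta)
    # rho: PCA keeping enough components to explain v_ratio of the variance (Equations (4)-(9)).
    reduced = PCA(n_components=v_ratio).fit_transform(np.column_stack([H, T_hat]))
    # Equation (3): concatenate the reduced maps and estimates to the current inputs.
    return np.column_stack([x_k, reduced]), (a, b, beta)

A full EREX model would then apply rex_round for k rounds and train a final ELM on the last representation.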

3.2. Uncertainty Quantification Objective Function

In this study, an objective function based on uncertainty quantification is proposed when searching for the optimal hyperparameters of the UBO-EREX algorithm using the Bayesian approach. Accordingly, a confidence interval (CI) is utilized, as in Equation (10) [35]. Here, $\bar{x}$ denotes the sample mean, and $z$ represents the z-score associated with a given confidence level $CI_l$. In this case, a confidence level of 99% is utilized, which results in $z$ approximately equaling 2.5758. Furthermore, $\omega$ signifies the standard deviation of the samples, and $n$ denotes the sample size.
$CI = \bar{x} \pm z \cdot \omega$    (10)
Formula (10) defines a range of values indicating the confidence level regarding the population mean. It is worth noting that CI analysis in this work focuses on residuals r i , as described in (11). Emphasizing residual analysis, this approach offers direct insights into the uncertainty in predictions, thus providing valuable evaluations of the model’s dependability and efficiency. Opting for a 99% confidence level not only ensures a high degree of certainty but also proves beneficial in scenarios necessitating crucial decision-making.
$r_i = (T_i - \tilde{T}_i)$    (11)
This investigation relies on extracting pivotal features from the confidence interval to improve the overall certainty of predictions. Consequently, metrics such as the CI stability $CI_s$, coverage probability $CI_p$, and interval width $CI_w$ are established. Initially, $CI_s$ is determined by evaluating the consistency of the confidence interval. This involves comparing the lower and upper bounds to identify at least two comparable subsets of similar length. Following this, the widths $CI_w$ and their corresponding medians, denoted $CI_{wm}$, are computed, along with their absolute deviations, referred to as $CI_{wmd}$. Subsequently, the Levene test is applied to $CI_{wmd}$, comparing the statistic to critical values derived from the Fisher-Snedecor F-distribution at a predetermined significance level, as outlined in Equation (12). If the test statistic exceeds the critical value, the null hypothesis $H_0$ (the CI is non-stable) is rejected ($H_0 = 0$ and $H_1 = 1$), confirming stability; conversely, a lower value indicates instability. $CI_w$ is determined using the margin of error $z \times \omega$, as described in Equation (13). Next, $CI_p$ is computed using the coverage parameter $CI_c$ and $n$, as depicted in Equation (14). Here, $CI_c$ represents the count of confidence intervals encompassing the true parameter, while $CI_p$ signifies the proportion of confidence intervals covering the true parameter (denoted $P_c$ when expressed as a percentage in Equation (15)). This metric offers valuable insights into the reliability of the estimation process. This study introduces an uncertainty quantification (UQ) formula, outlined in Equation (15), where elevated values indicate increased uncertainty in predictions. The inverse of $CI_s$, denoted $\mathrm{inv}(CI_s)$, equals 1 when the confidence interval is considered unstable (i.e., when the null Levene hypothesis is not rejected) and 0 otherwise. For the stability test, a 99% confidence level is assumed by default.
$CI_s = \dfrac{CI_{wm}}{CI_{wmd}} \times \dfrac{CI_w - 1}{4 \times (CI_w - 1)}$    (12)
$CI_w = 2 \times z \times \omega$    (13)
$CI_p = \dfrac{CI_c}{n}$    (14)
$UQ = \dfrac{CI_l - P_c}{100} + CI_w + \mathrm{inv}(CI_s)$    (15)
The uncertainty quantification (UQ) formula serves as the primary objective for hyperparameter tuning in this scenario. This implies that the aim is to decrease the interval width and its instability while enhancing its coverage probability.
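The Python sketch below, assuming NumPy and SciPy, illustrates one plausible realization of this objective on the prediction residuals; the segment-wise Levene check is our stand-in for the stability test of Equation (12), and the weighting follows Equation (15) as reconstructed above.

import numpy as np
from scipy import stats

def uq_objective(residuals, confidence=0.99, n_segments=10):
    # z-score for the requested confidence level (about 2.5758 at 99%).
    z = stats.norm.ppf(0.5 + confidence / 2)
    mu, omega = residuals.mean(), residuals.std(ddof=1)
    ci_w = 2 * z * omega  # interval width, Equation (13)
    covered = (residuals >= mu - z * omega) & (residuals <= mu + z * omega)
    p_c = 100 * covered.mean()  # coverage probability in percent, Equation (14)
    # Stability stand-in: Levene test on segment-wise interval widths (Equation (12)).
    seg_widths = [2 * z * s.std(ddof=1) for s in np.array_split(residuals, n_segments)]
    half = n_segments // 2
    _, p_value = stats.levene(seg_widths[:half], seg_widths[half:])
    inv_ci_s = 1.0 if p_value < 1 - confidence else 0.0  # 1 when the halves differ, i.e., unstable
    # Equation (15): penalize low coverage, wide intervals, and instability.
    return (confidence * 100 - p_c) / 100 + ci_w + inv_ci_s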
In summary, Algorithm 1 illustrates the UBO-EREX algorithm proposed in this work along with its primary learning rules.
Algorithm 1. UBO-EREX Framework
% Inputs:
- Training dataset
- Validation dataset
- Hyperparameters: regularization parameter C, variance ratio v_ratio
% Outputs:
- Trained UBO-EREX model
- Model performance metrics (e.g., accuracy, stability)
% 1. Define the ELM and REX architectures;
% 2. Train the ELM;
%    a. Initialize random input weights and biases (a, b) for the hidden layer H;
%    b. Activate H with an activation function σ for inputs x;
%    c. Compute the output weights β using the Moore-Penrose pseudo-inverse with regularization parameter C;
H = σ(a x + b)
β = pinv(H^T H + C I) H^T T
% 3. Implement REX;
%    a. Merge the ELM outcomes (x, H, estimated outputs T~) into another ELM network over multiple rounds k;
%    b. Repeat the process over the rounds for improved model approximation and generalization;
x_{k+1} = [x_k, ρ(H_k, T~_k)]
% 4. Define ρ as a dimensionality reduction algorithm based on PCA;
%    a. Compute the covariance matrix δ;
%    b. Perform a singular value decomposition on δ;
%    c. Determine the total variance V and the target variance to retain V_t;
%    d. Calculate the number of principal components to retain N_PCA;
V = sum_{i=1}^{n} σ_i^2
V_t = v_ratio * V
N_PCA = argmin_m { sum_{i=1}^{m} σ_i^2 >= V_t }
% 5. Reduce the matrix dimensions using the retained principal components;
x_r = x * U_r
% 6. Define the uncertainty quantification objective function;
%    a. Utilize the confidence interval to estimate the population mean;
%    b. Analyze the residuals r_i to evaluate the uncertainty in predictions;
%    c. Calculate the CI stability (CI_s), coverage probability (CI_p), and interval width (CI_w);
%    d. Use the uncertainty quantification (UQ) formula as the primary objective for hyperparameter tuning;
CI = x̄ ± z·ω
r_i = T_i - T~_i
CI_s = (CI_wm / CI_wmd) × (CI_w - 1) / (4 × (CI_w - 1))
CI_w = 2·z·ω
CI_p = CI_c / n
UQ = (CI_l - P_c)/100 + CI_w + inv(CI_s)
% 7. Optimize the hyperparameters using Bayesian optimization;
% 8. Evaluate the model performance using the validation dataset.
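A compact sketch of steps 7 and 8, assuming the scikit-optimize package (skopt) and the illustrative helpers defined earlier (elm_fit, elm_predict, uq_objective); for brevity it tunes only the number of neurons and the regularization parameter of a plain ELM on toy data, whereas the study also tunes the REX-specific hyperparameters.

import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real

# Toy arrays standing in for the processed training/validation features and RUL labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((800, 4)), rng.random(800)
X_val, y_val = rng.random((200, 4)), rng.random(200)

def objective(params):
    n_neurons, C = int(params[0]), float(params[1])
    a, b, beta = elm_fit(X_train, y_train, n_neurons=n_neurons, C=C)
    residuals = y_val - elm_predict(X_val, a, b, beta)
    return uq_objective(residuals)  # minimize the UQ score rather than the RMSE

space = [Integer(10, 300, name="n_neurons"),
         Real(1e-2, 1e4, prior="log-uniform", name="C")]
result = gp_minimize(objective, space, n_calls=50, random_state=0)
print("best hyperparameters:", result.x, "best UQ value:", result.fun)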

4. Results

In addition to the previously discussed uncertainty quantification metrics, the UBO-EREX method is evaluated across various other performance metrics, including the root mean squared error (RMSE), mean squared error (MSE), mean absolute error (MAE), and the coefficient of determination ($R^2$), for both training and evaluation datasets. Additionally, the percentage of $R^2$ improvement ($P_{R^2}$) and the percentage of objective search time improvement ($P_{obj}$) are used to estimate the amount of UBO-EREX improvement in both accuracy and computational cost for comparison purposes. These metrics are given in Formulas (16)-(21), respectively, where $\bar{T}$ is the mean of the observed values $T_i$ and $n$ is the number of samples.
$RMSE = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} (T_i - \tilde{T}_i)^2}$    (16)
$MSE = \dfrac{1}{n} \sum_{i=1}^{n} (T_i - \tilde{T}_i)^2$    (17)
$MAE = \dfrac{1}{n} \sum_{i=1}^{n} |T_i - \tilde{T}_i|$    (18)
$R^2 = 1 - \dfrac{\sum_{i=1}^{n} (T_i - \tilde{T}_i)^2}{\sum_{i=1}^{n} (T_i - \bar{T})^2}$    (19)
$P_{R^2} = \dfrac{R^2_{UBO\text{-}EREX} - R^2}{R^2} \times 100\%$    (20)
$P_{obj} = \dfrac{obj_{UBO\text{-}EREX} - obj}{obj} \times 100\%$    (21)
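A straightforward NumPy rendering of these metrics (the helper names are ours):

import numpy as np

def regression_metrics(T, T_hat):
    err = T - T_hat
    return {"RMSE": np.sqrt(np.mean(err ** 2)),  # Equation (16)
            "MSE": np.mean(err ** 2),            # Equation (17)
            "MAE": np.mean(np.abs(err)),         # Equation (18)
            "R2": 1 - np.sum(err ** 2) / np.sum((T - T.mean()) ** 2)}  # Equation (19)

def percentage_change(value_ubo_erex, value_baseline):
    # Relative change of UBO-EREX with respect to a baseline, in percent (Equations (20) and (21)).
    return (value_ubo_erex - value_baseline) / value_baseline * 100.0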
Furthermore, the results are strengthened and validated through visual illustrations. Moreover, to provide a comprehensive comparison, UBO-EREX is benchmarked against several other algorithms, including the original ELM and a selection of deep learning time series models, such as long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and gated recurrent unit (GRU) networks. It is worth noting that all compared algorithms undergo optimization via Bayesian optimization, with RMSE as the objective function, unlike UBO-EREX, which uses the UQ objective. As a result, this section first explores the UBO-EREX results independently, as it represents our primary algorithm, and then elaborates on the results obtained from the compared methods.
Firstly, by delving into the performance analysis behind UBO-EREX, we aim to demonstrate that incorporating learning from maps and labels as additional sources alongside the input data enhances the learning process over time. This leads to a deeper understanding of data representation and model behavior.
In this context, Figure 5 presents the most crucial metrics related to approximation errors and prediction variability, namely the RMSE and $R^2$, respectively. These metrics are gathered from each training round $k$ of the REX process. Notably, they begin to demonstrate improved learning performance after $k = 5$ and appear to stabilize at better values around $k = 10$. This indicates that, in the early rounds, the learning model struggles to grasp the maps and targets with respect to the inputs of each round, which is why the metrics remain flat initially. Then, as the model discovers improved representations and begins to understand the relationships between inputs, maps, and targets, it starts to tune the ELM weights correctly, resulting in a clear increase in performance. It is worth noting that $k$ is tuned via Bayesian objective optimization. This observed learning behavior validates the REX theory and underscores its applicability to the lightweight ELM network.
Now we move to the numerical evaluation and comparisons of these approximation metrics, as well as uncertainty quantification metrics, in a comparative analysis for a better understanding of UBO-EREX learning performances. Accordingly, Table 1 and Table 2 are dedicated to this matter, respectively.
Table 1 provides a comprehensive comparison of evaluation metrics across different methods, focusing on both the training and testing phases. Each method, including BiLSTM, ELM, GRU, LSTM, and UBO-EREX, is evaluated based on the aforementioned key metrics. In terms of training performance, UBO-EREX demonstrates promising results, with the lowest errors and highest accuracy in predicting target values compared with the other methods. Additionally, UBO-EREX achieves a high $R^2$ value of 0.8710, suggesting a strong correlation between predicted and actual values, which signifies the effectiveness of the learning process. Upon transitioning to the testing phase, UBO-EREX maintains its competitive edge, showcasing performance metrics comparable to those observed during training. The RMSE (0.1037), MAE (0.0760), MSE (0.0107), and $R^2$ (0.8730) values remain consistent, reinforcing the robustness of the UBO-EREX algorithm in generalizing to unseen data.
In terms of computational efficiency, UBO-EREX also stands out, with a remarkably low search time of 0.0499 s. This indicates that UBO-EREX efficiently explores the search space to optimize model parameters, resulting in expedited convergence and reduced computational overhead. Furthermore, the percentage improvements in $R^2$ over BiLSTM, ELM, GRU, and LSTM (7.6050%, 4.0509%, 3.7822%, and 5.7056%, respectively) clearly indicate its superior performance. Additionally, the percentage improvement in search time shows that UBO-EREX demonstrates the most efficient search process among the evaluated methods, except for ELM, whose lightweight architecture incurs less computational cost.
Overall, Table 1 highlights UBO-EREX as a robust and efficient method for training neural networks, offering superior predictive accuracy, strong generalization capability, and minimal computational overhead. These findings underscore the potential of UBO-EREX to advance machine learning applications in the field of predictive maintenance of rotating machinery, specifically wind turbine bearing degradation.
The curve fits shown in Figure 6 for both the training data (Figure 6a) and the testing data (Figure 6b) showcase the performance of UBO-EREX and confirm the results presented in Table 2. They also reveal further details about the smoothness and accuracy of the UBO-EREX fit. The reason UBO-EREX behaves differently and approaches the target function better than all the compared learners lies in its objective function minimization. By specifically minimizing the UQ objective, UBO-EREX effectively narrows the confidence interval width and reduces its variability. In this case, BiLSTM demonstrates the worst behavior, as it follows a divergent path for the RUL prediction. This divergence indicates that BiLSTM struggles to effectively model the RUL trajectory or fails to capture the underlying patterns in the data. As a result, its predictions deviate significantly from the actual RUL values, leading to poorer performance compared with the other methods.
Moving to uncertainty quantification, Table 2 provides a detailed breakdown of uncertainty quantification metrics for the various methods. These metrics offer insights into the reliability and stability of the uncertainty estimates produced by each method. The first metric, interval width, measures the range of the uncertainty interval generated by each method. A narrower interval signifies a more precise estimation of uncertainty. In the table, we observe that ELM, GRU, and UBO-EREX exhibit similar and comparatively narrower interval widths, indicating more accurate uncertainty estimates compared with BiLSTM and LSTM. Moving on to coverage probability, this metric evaluates the proportion of true values that fall within the uncertainty interval. A coverage probability close to 0.99 ± 0.1% implies that the method accurately captures the true uncertainty. Remarkably, all methods demonstrate high coverage probabilities, suggesting reliable uncertainty estimates across the board. Interval stability, the third metric, assesses how consistent the uncertainty intervals remain across different observations or instances. A stability value of 1 indicates perfect consistency, implying that the interval width remains constant. In this context, ELM, GRU, and UBO-EREX exhibit perfect stability, while BiLSTM and LSTM display varying degrees of instability. Lastly, the uncertainty metric provides an overall assessment of uncertainty estimation by combining interval width, coverage probability, and stability. Lower uncertainty values indicate more accurate and reliable uncertainty estimates. Notably, UBO-EREX achieves the lowest uncertainty value among all methods, indicating its superior performance in uncertainty quantification.
The confidence interval (CI) plots depicted in Figure 7 serve to reinforce the findings presented in Table 1, while also providing a visual representation of the uncertainty associated with the predictions. These plots demonstrate that UBO-EREX consistently exhibits less variability and tighter CIs, even at a 99% confidence level, compared with the other methods evaluated. Specifically, UBO-EREX’s CI plots indicate a higher level of confidence in its predictions, with narrower intervals around the predicted values. This suggests that UBO-EREX provides more precise and reliable estimates of uncertainty, offering greater confidence in its predictions compared with other methods. Conversely, BiLSTM consistently displays higher levels of variability in its CI plots, indicating less confidence in its predictions and a wider range of possible outcomes. This suggests that BiLSTM’s predictions may be less reliable and more uncertain compared with UBO-EREX. Additionally, the CI plots highlight the instability of the CI for LSTM. This instability is reflected in the fluctuation of the confidence intervals across different observations or instances, indicating inconsistencies in uncertainty estimation. This further underscores the superior performance of UBO-EREX in providing stable and reliable uncertainty estimates compared with LSTM. In summary, the CI plots in Figure 7 provide visual evidence supporting the findings of Table 1, demonstrating that UBO-EREX consistently outperforms other methods by exhibiting less variability and tighter confidence intervals, even at a higher confidence level, thereby enhancing confidence in its predictions.
Figure 8 summarizes information about Bayesian optimization in terms of objective function behavior and computational time, while further details can be revealed about the computational efficiency and convergence behavior of the learning models.
Firstly, in Figure 8a, it is evident that the time consumed during the search for the objective function increases significantly for the deep neural networks, particularly for architectures such as LSTM, GRU, and BiLSTM, each consuming progressively more time than the last. Conversely, UBO-EREX and ELM require remarkably less computational time, with ELM being the least time-consuming. This observation highlights the advantages of newer architectures over traditional deep learning methods, as they maintain superior accuracy while requiring significantly less computational time.
Secondly, in Figure 8b, the behavior of RMSE objective minimization is addressed. It is evident that GRU exhibits signs of overfitting, indicating that the model may be fitting too closely to the training data and may struggle to generalize to unseen data. On the other hand, LSTM and ELM show moderate and somewhat stable convergence behavior, suggesting that they are better able to adapt to the data without overfitting or underfitting. However, BiLSTM clearly underfits, indicating that it fails to capture the complexity of the data and may produce overly simplistic models.
Simultaneously, Figure 8c illustrates the uncertainty quantification objective of UBO-EREX in terms of interval width. Similar to the convergence patterns observed in Figure 5a,b, Figure 8c indicates a comparable convergence pattern for UBO-EREX. This suggests that UBO-EREX exhibits stable convergence behavior in uncertainty quantification, aligning with its ability to provide accurate and reliable uncertainty estimates.
In the final Table 3, the hyperparameters obtained via Bayesian optimization are showcased. It should be noted that the Bayesian optimization process in this work involved defining a hyperparameter space encompassing parameters such as learning rates, regularization parameters, activation functions, and network architecture (neurons). The optimization process was guided by a Gaussian process regression model, assuming a Gaussian prior distribution over the objective function, with hyperparameters like length scale and noise level determined iteratively. Regarding the optimization stopping criterion, termination was based on a predefined number of iterations (i.e., 50 iterations) or function evaluations, ensuring convergence to a satisfactory solution within computational constraints. While default settings were utilized for simplicity, it is recognized that further exploration into the impact of varying these parameters on optimization performance and model outcomes is important. This table provides valuable insights, particularly concerning the number of neurons utilized by each method. Notably, UBO-EREX requires fewer neurons than the other methods, even when employing a large number of hidden layers and maps (e.g., $v_{ratio} = 70\%$). This observation carries significant implications, indicating that UBO-EREX effectively captures complex patterns within the data while requiring fewer neurons. This suggests that UBO-EREX can achieve comparable or even superior performance with a more efficient and streamlined neural network architecture. By leveraging Bayesian optimization to fine-tune hyperparameters, UBO-EREX optimally balances model complexity and predictive accuracy, resulting in a more efficient and effective learning process.
In summary, UBO-EREX demonstrates a significant impact on predictive maintenance, particularly in addressing wind turbine bearing degradation. Through comprehensive evaluations and comparisons, UBO-EREX consistently outperforms alternative methods in predictive accuracy, adaptability, and computational efficiency. The technique exhibits robust performance indicators, including minimal errors, high precision, and a strong correlation between predicted and actual values. Moreover, UBO-EREX provides reliable estimates of uncertainty and consistent predictions, further underscoring its relevance in practical applications. More precisely, the incorporation of UBO significantly enhances the predictive performance of the EREX model. By integrating uncertainty quantification objectives into the optimization process, UBO-EREX achieves improved predictive accuracy and reliability. This approach not only fine-tunes model parameters to produce more precise forecasts but also provides valuable insights into the confidence level of predictions, enhancing decision-making processes. Moreover, UBO-EREX demonstrates robustness to data variability and outliers while optimizing computational efficiency. Overall, the integration of UBO methodology elevates the EREX model’s effectiveness in real-world applications, ensuring accurate and reliable predictions with enhanced confidence. These findings underscore the substantial impact of UBO-EREX in advancing machine learning applications, especially predictive maintenance for rotating machinery.

5. Conclusions

This work introduces a novel representation learning architecture named UBO-EREX, which combines ELM and REX methodologies to address challenges in wind turbine health degradation prognosis. The model is augmented by Bayesian optimization methods with an objective function targeting uncertainties in the data. Applied to a realistic dataset that has undergone thorough preprocessing stages, including denoising, outlier removal, filtering, scaling, and more, the algorithm demonstrates strong performance across a wide range of metrics. Through comprehensive evaluation utilizing error metrics, uncertainty quantification metrics, and various illustrative visualizations and curves, the algorithm exhibits remarkable performance. Particularly noteworthy is its superiority over existing streamlined time series deep learning models, positioning it as a preferred choice for degradation analysis throughout the turbine lifecycle. Future opportunities in this domain will focus on refining and expanding uncertainty quantification approaches, aiming to further enhance the robustness and reliability of prognostic models in wind turbine health monitoring and maintenance.

Author Contributions

Conceptualization, T.B. and M.B.; methodology, T.B. and M.B.; software, T.B.; validation, T.B. and M.B.; formal analysis, T.B. and M.B.; resources, T.B. and M.B.; data curation, T.B. and M.B.; writing—original draft preparation, T.B.; writing—review and editing, T.B. and M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All codes and data utilized in this paper can be downloaded from: https://zenodo.org/doi/10.5281/zenodo.12180212.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bin Abu Sofian, A.D.A.; Lim, H.R.; Siti Halimatul Munawaroh, H.; Ma, Z.; Chew, K.W.; Show, P.L. Machine Learning and the Renewable Energy Revolution: Exploring Solar and Wind Energy Solutions for a Sustainable Future Including Innovations in Energy Storage. Sustain. Dev. 2024, 1–26. [Google Scholar] [CrossRef]
  2. Ramakrishnan, S.; Delpisheh, M.; Convery, C.; Niblett, D.; Vinothkannan, M.; Mamlouk, M. Offshore Green Hydrogen Production from Wind Energy: Critical Review and Perspective. Renew. Sustain. Energy Rev. 2024, 195, 114320. [Google Scholar] [CrossRef]
  3. da Fonseca Santiago, R.A.; Barbosa, N.B.; Mergulhão, H.G.; de Carvalho, T.F.; Santos, A.A.B.; Medrado, R.C.; de Melo Filho, J.B.; Pinheiro, O.R.; Nascimento, E.G.S. Data-Driven Models Applied to Predictive and Prescriptive Maintenance of Wind Turbine: A Systematic Review of Approaches Based on Failure Detection, Diagnosis, and Prognosis. Energies 2024, 17, 1010. [Google Scholar] [CrossRef]
  4. Chen, B.-Q.; Liu, K.; Yu, T.; Li, R. Enhancing Reliability in Floating Offshore Wind Turbines through Digital Twin Technology: A Comprehensive Review. Energies 2024, 17, 1964. [Google Scholar] [CrossRef]
  5. Kenworthy, J.; Hart, E.; Stirling, J.; Stock, A.; Keller, J.; Guo, Y.; Brasseur, J.; Evans, R. Wind Turbine Main Bearing Rating Lives as Determined by IEC 61400-1 and ISO 281: A Critical Review and Exploratory Case Study. Wind Energy 2024, 27, 179–197. [Google Scholar] [CrossRef]
  6. Benabdesselam, A.; Dollon, Q.; Zemouri, R.; Pelletier, F.; Gagnon, M.; Tahan, A. On the Use of Indirect Measurements in Virtual Sensors for Renewable Energies: A Review. Electronics 2024, 13, 1545. [Google Scholar] [CrossRef]
  7. Farooq, U.; Ademola, M.; Shaalan, A. Comparative Analysis of Machine Learning Models for Predictive Maintenance of Ball Bearing Systems. Electronics 2024, 13, 438. [Google Scholar] [CrossRef]
  8. AlShorman, O.; Irfan, M.; Abdelrahman, R.B.; Masadeh, M.; Alshorman, A.; Sheikh, M.A.; Saad, N.; Rahman, S. Advancements in Condition Monitoring and Fault Diagnosis of Rotating Machinery: A Comprehensive Review of Image-Based Intelligent Techniques for Induction Motors. Eng. Appl. Artif. Intell. 2024, 130, 107724. [Google Scholar] [CrossRef]
  9. Warke, V.; Kumar, S.; Bongale, A.; Kamat, P.; Kotecha, K.; Selvachandran, G.; Abraham, A. Improving the Useful Life of Tools Using Active Vibration Control through Data-Driven Approaches: A Systematic Literature Review. Eng. Appl. Artif. Intell. 2024, 128, 107367. [Google Scholar] [CrossRef]
  10. Ni, Q.; Ji, J.C.; Feng, K.; Zhang, Y.; Lin, D.; Zheng, J. Data-Driven Bearing Health Management Using a Novel Multi-Scale Fused Feature and Gated Recurrent Unit. Reliab. Eng. Syst. Saf. 2024, 242, 109753. [Google Scholar] [CrossRef]
  11. Qi, J.; Zhu, R.; Liu, C.; Mauricio, A.; Gryllias, K. Anomaly Detection and Multi-Step Estimation Based Remaining Useful Life Prediction for Rolling Element Bearings. Mech. Syst. Signal Process. 2024, 206, 110910. [Google Scholar] [CrossRef]
  12. Cui, L.; Xiao, Y.; Liu, D.; Han, H. Digital Twin-Driven Graph Domain Adaptation Neural Network for Remaining Useful Life Prediction of Rolling Bearing. Reliab. Eng. Syst. Saf. 2024, 245, 109991. [Google Scholar] [CrossRef]
  13. Niazi, S.G.; Huang, T.; Zhou, H.; Bai, S.; Huang, H.Z. Multi-Scale Time Series Analysis Using TT-ConvLSTM Technique for Bearing Remaining Useful Life Prediction. Mech. Syst. Signal Process. 2024, 206, 110888. [Google Scholar] [CrossRef]
  14. Cao, X.; Zhang, F.; Zhao, J.; Duan, Y.; Guo, X. Remaining Useful Life Prediction of Rolling Bearing Based on Multi-Domain Mixed Features and Temporal Convolutional Networks. Appl. Sci. 2024, 14, 2354. [Google Scholar] [CrossRef]
  15. de Moraes Vieira, J.L.; Farias, F.C.; Ochoa, A.A.V.; de Menezes, F.D.; da Costa, A.C.A.; da Costa, J.Â.P.; de Novaes Pires Leite, G.; de Castro Vilela, O.; de Souza, M.G.G.; Michima, P.S.A. Remaining Useful Life Estimation Framework for the Main Bearing of Wind Turbines Operating in Real Time. Energies 2024, 17, 1430. [Google Scholar] [CrossRef]
  16. Magadán, L.; Granda, J.C.; Suárez, F.J. Robust Prediction of Remaining Useful Lifetime of Bearings Using Deep Learning. Eng. Appl. Artif. Intell. 2024, 130, 107690. [Google Scholar] [CrossRef]
  17. Berghout, T.; Benbouzid, M.; Ferrag, M.A. Multiverse Recurrent Expansion with Multiple Repeats: A Representation Learning Algorithm for Electricity Theft Detection in Smart Grids. IEEE Trans. Smart Grid 2023, 14, 4693–4703. [Google Scholar] [CrossRef]
  18. Berghout, T.; Benbouzid, M. EL-NAHL: Exploring Labels Autoencoding in Augmented Hidden Layers of Feedforward Neural Networks for Cybersecurity in Smart Grids. Reliab. Eng. Syst. Saf. 2022, 226, 108680. [Google Scholar] [CrossRef]
  19. Zhong, J.H.; Zhang, J.; Liang, J.; Wang, H. Multi-Fault Rapid Diagnosis for Wind Turbine Gearbox Using Sparse Bayesian Extreme Learning Machine. IEEE Access 2019, 7, 773–781. [Google Scholar] [CrossRef]
  20. Bouazzi, Y.; Yahyaoui, Z.; Hajji, M. Deep Recurrent Neural Networks Based Bayesian Optimization for Fault Diagnosis of Uncertain GCPV Systems Depending on Outdoor Condition Variation. Alex. Eng. J. 2024, 86, 335–345. [Google Scholar] [CrossRef]
  21. Zhang, C.; Zhang, L. Wind Turbine Pitch Bearing Fault Detection with Bayesian Augmented Temporal Convolutional Networks. Struct. Health Monit. 2024, 23, 1089–1106. [Google Scholar] [CrossRef]
  22. Xiang, Z.Q.; Wang, J.T.; Wang, W.; Pan, J.W.; Liu, J.F.; Le, Z.J.; Cai, X.Y. Vibration-Based Health Monitoring of the Offshore Wind Turbine Tower Using Machine Learning with Bayesian Optimisation. Ocean Eng. 2024, 292, 116513. [Google Scholar] [CrossRef]
  23. Bechhoefer, E.; Van Hecke, B.; He, D. Processing for Improved Spectral Analysis. Annu. Conf. PHM Soc. 2013, 5, 33–38. [Google Scholar] [CrossRef]
  24. Benbouzid, M.; Berghout, T.; Sarma, N.; Djurović, S.; Wu, Y.; Ma, X. Intelligent Condition Monitoring of Wind Power Systems: State of the Art Review. Energies 2021, 14, 5967. [Google Scholar] [CrossRef]
  25. Helm, D.; Timusk, M. Wavelet Denoising Applied to Hardware Redundant Systems for Rolling Element Bearing Fault Detection. J. Dyn. Monit. Diagn. 2023, 2, 133–143. [Google Scholar] [CrossRef]
  26. Fu, S.; Wu, Y.; Wang, R.; Mao, M. A Bearing Fault Diagnosis Method Based on Wavelet Denoising and Machine Learning. Appl. Sci. 2023, 13, 5936. [Google Scholar] [CrossRef]
  27. Bai, X.; Li, M.; Di, Z.; Dong, W.; Liang, J.; Zhang, J.; Sun, H. Open Circuit Fault Diagnosis of Wind Power Converter Based on VMD Energy Entropy and Time Domain Feature Analysis. Energy Sci. Eng. 2024, 12, 577–595. [Google Scholar] [CrossRef]
  28. Ruiz-Sarrio, J.E.; Antonino-Daviu, J.A.; Martis, C. Comprehensive Diagnosis of Localized Rolling Bearing Faults during Rotating Machine Start-Up via Vibration Envelope Analysis. Electronics 2024, 13, 375. [Google Scholar] [CrossRef]
  29. Smiti, A. A Critical Overview of Outlier Detection Methods. Comput. Sci. Rev. 2020, 38, 100306. [Google Scholar] [CrossRef]
  30. Blázquez-García, A.; Conde, A.; Mori, U.; Lozano, J.A. A Review on Outlier/Anomaly Detection in Time Series Data. arXiv 2020, arXiv:2002.04236v1. [Google Scholar] [CrossRef]
  31. Somers, P.A.A.M.; Bhattacharya, N. A New Method for Processing Time Averaged Vibration Patterns: Linear Regression. Strain 2016, 52, 264–275. [Google Scholar] [CrossRef]
  32. Huang, G.-B. What Are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt’s Dream and John von Neumann’s Puzzle. Cogn. Comput. 2015, 7, 263–278. [Google Scholar] [CrossRef]
  33. Wu, J.; Chen, X.Y.; Zhang, H.; Xiong, L.D.; Lei, H.; Deng, S.H. Hyperparameter Optimization for Machine Learning Models Based on Bayesian Optimization. J. Electron. Sci. Technol. 2019, 17, 26–40. [Google Scholar] [CrossRef]
  34. Maćkiewicz, A.; Ratajczak, W. Principal Components Analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
  35. Poole, C. Beyond the Confidence Interval. Am. J. Public Health 1987, 77, 195–199. [Google Scholar] [CrossRef]
Figure 1. Overview of methodology and contributions.
Figure 2. Real-world display of an inner race fault on the high-speed shaft: following data collection, a bearing inspection revealed a cracked inner race. Reproduced from [24]: MDPI 2021.
Figure 3. Vibration raw data and processing stages: (a) Raw data; (b) Denoising of raw signals; (c) Extraction of variance from denoised vibration signals; (d) Extraction of envelopes from variance signals; (e) Outlier removal from the envelopes; (f) Linear filtering; (g) RUL labels.
Figure 4. Architecture of the proposed approach: (a) ELM network(s); (b) recurrent expansion of the ELM network(s).
Figure 5. UBO-EREX convergence performance versus expansion rounds: (a) training RMSE and R²; (b) testing RMSE and R².
Figure 6. Curve fitting results: (a) training and (b) testing.
Figure 7. Confidence interval analysis for residuals.
Figure 8. Bayesian optimization results: (a) elapsed time versus number of evaluations; (b) RMSE versus number of evaluations; (c) UQ objective versus number of evaluations.
Table 1. Error evaluation metrics for training and testing and improvement ratio of UBO-EREX.

Method | RMSE (train) | MAE (train) | MSE (train) | R² (train) | RMSE (test) | MAE (test) | MSE (test) | R² (test) | obj (train) | P_R² (test, %) | P_obj (train, %)
BiLSTM | 0.1209 | 0.0992 | 0.0146 | 0.8250 | 0.1249 | 0.1018 | 0.0156 | 0.8113 | 152.9968 | 7.6050 | 99.9674
ELM | 0.1178 | 0.0882 | 0.0139 | 0.8327 | 0.1168 | 0.0869 | 0.0137 | 0.8390 | 0.0310 | 4.0509 | −60.6088
GRU | 0.1160 | 0.0828 | 0.0135 | 0.8381 | 0.1157 | 0.0899 | 0.0134 | 0.8412 | 42.4769 | 3.7822 | 99.8826
LSTM | 0.1167 | 0.0882 | 0.0136 | 0.8367 | 0.1205 | 0.0907 | 0.0145 | 0.8259 | 10.2821 | 5.7056 | 99.5151
UBO-EREX | 0.1035 | 0.0746 | 0.0107 | 0.8710 | 0.1037 | 0.0760 | 0.0107 | 0.8730 | 0.0499 | – | –
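For clarity, the improvement columns in Table 1 appear to be relative percentage gains of UBO-EREX over each baseline. The short sketch below reproduces them from the testing R² and the obj columns; the exact formulas are an assumption, and small last-digit differences stem from rounding of the tabulated values.

```python
# Assumed formulas for the improvement columns of Table 1 (in percent):
#   P_R2  = 100 * (R2_test[UBO-EREX] - R2_test[baseline]) / R2_test[baseline]
#   P_obj = 100 * (obj[baseline] - obj[UBO-EREX]) / obj[baseline]
r2_test = {"BiLSTM": 0.8113, "ELM": 0.8390, "GRU": 0.8412, "LSTM": 0.8259}
obj = {"BiLSTM": 152.9968, "ELM": 0.0310, "GRU": 42.4769, "LSTM": 10.2821}
r2_ubo, obj_ubo = 0.8730, 0.0499

for method in r2_test:
    p_r2 = 100 * (r2_ubo - r2_test[method]) / r2_test[method]
    p_obj = 100 * (obj[method] - obj_ubo) / obj[method]
    print(f"{method}: P_R2 ~ {p_r2:.4f} %, P_obj ~ {p_obj:.4f} %")
```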
Table 2. UQ metrics results.

Method | CI_w | CI_p | CI_s | UQ
BiLSTM | 0.6397 | 0.9966 | 1 | 0.6463
ELM | 0.6019 | 1 | 1 | 0.6119
GRU | 0.5943 | 1 | 1 | 0.6043
LSTM | 0.6201 | 0.9992 | 0 | 1.6293
UBO-EREX | 0.6018 | 1 | 1 | 0.6118
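The interval-based metrics of Table 2 are not re-derived here; as a purely illustrative sketch of residual confidence-interval analysis of the kind plotted in Figure 7, the snippet below computes an interval width and empirical coverage on prediction residuals. The function name, the Gaussian 95% level, and the placeholder data are assumptions, not the paper's exact metric definitions.

```python
import numpy as np

def ci_metrics(y_true, y_pred, z=1.96):
    """Width and empirical coverage of a Gaussian-style 95% interval on residuals."""
    residuals = np.asarray(y_true) - np.asarray(y_pred)
    mu, sigma = residuals.mean(), residuals.std()
    lower, upper = mu - z * sigma, mu + z * sigma
    width = upper - lower
    coverage = np.mean((residuals >= lower) & (residuals <= upper))
    return width, coverage

rng = np.random.default_rng(1)
y_true = np.linspace(1.0, 0.0, 200)                 # placeholder normalized RUL labels
y_pred = y_true + 0.05 * rng.standard_normal(200)   # placeholder model predictions
print(ci_metrics(y_true, y_pred))
```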
Table 3. List of tuned hyperparameters.

Deep learning:
Method | Neurons | Epochs | Batch | Learning Rate | Gradient Threshold
BiLSTM | 199 | 29 | 239 | 0.0016 | 0.1038
GRU | 10 | 104 | 144 | 0.0262 | 0.5937
LSTM | 20 | 110 | 79 | 0.0331 | 0.522

UBO-EREX variants:
Method | Neurons | Activation | Regularization | Rounds | V_ratio
ELM | 76 | tanh | 0.0358 | – | –
UBO-EREX | 101 | – | 0.0661 | 10 | 70
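To connect the Rounds entry of Table 3 to the architecture of Figure 4b, a recurrent-expansion-style loop can be sketched as below, under the simplifying assumption that each round re-trains a small ridge-regularized ELM on the original features augmented with the previous round's estimates; the V_ratio mechanism is not modeled, and the data, hyperparameter values, and helper names are illustrative rather than a reproduction of the released code.

```python
# A minimal recurrent-expansion-style loop (illustrative assumption: each round feeds
# the previous round's predictions back as extra input features to a fresh ELM).
import numpy as np

rng = np.random.default_rng(2)

def elm_train(X, y, neurons=10, reg=0.05):
    """Random hidden layer + tanh activation + ridge solution for the output weights."""
    W = rng.standard_normal((X.shape[1], neurons))
    b = rng.standard_normal(neurons)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(neurons), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

X = rng.standard_normal((300, 8))            # placeholder degradation features
y = np.tanh(X[:, :2].sum(axis=1))            # placeholder health indicator

Z, rounds = X.copy(), 10                     # 10 expansion rounds, as tuned in Table 3
for _ in range(rounds):
    model = elm_train(Z, y)
    y_hat = elm_predict(model, Z)
    Z = np.column_stack([X, y_hat])          # expansion: original features + feedback
print("final training RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```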
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
