Article

Advanced Data Augmentation Techniques for Enhanced Fault Diagnosis in Industrial Centrifugal Pumps

Department of Mechanical Engineering (Department of Aeronautics, Mechanical and Electronic Convergence Engineering), Kumoh National Institute of Technology, Gumi 39177, Republic of Korea
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2024, 13(5), 60; https://doi.org/10.3390/jsan13050060
Submission received: 11 August 2024 / Revised: 18 September 2024 / Accepted: 23 September 2024 / Published: 25 September 2024
(This article belongs to the Special Issue Fault Diagnosis in the Internet of Things Applications)

Abstract

This study presents an advanced data augmentation framework to enhance fault diagnostics in industrial centrifugal pumps using vibration data. The proposed framework addresses the challenge of insufficient defect data in industrial settings by integrating traditional augmentation techniques, such as Gaussian noise (GN) and signal stretching (SS), with advanced models, including Long Short-Term Memory (LSTM) networks, Autoencoders (AE), and Generative Adversarial Networks (GANs). Our approach significantly improves the robustness and accuracy of machine learning (ML) models for fault detection and classification. Key findings demonstrate a marked reduction in false positives and a substantial increase in fault detection rates, particularly in complex operational scenarios where traditional statistical methods may fall short. The experimental results underscore the effectiveness of combining these augmentation techniques, achieving up to a 30% improvement in fault detection accuracy and a 25% reduction in false positives compared to baseline models. These improvements highlight the practical value of the proposed framework in ensuring reliable operation and the predictive maintenance of centrifugal pumps in diverse industrial environments.

1. Introduction

The manufacturing and industrial sectors are experiencing significant transformation in the context of Industry 4.0 and the evolving Industry 5.0. These advancements are centered on integrating intelligent technologies, digital twins, and human–machine collaboration to enhance operational efficiency, predictive maintenance, and overall system intelligence. A crucial aspect of this transformation is establishing effective fault classification systems. These systems play a pivotal role in maintaining the health and performance of industrial systems, as they are responsible for accurately identifying and diagnosing faults in machinery and processes. More importantly, these systems are foundational to predictive maintenance strategies, which aim to anticipate failures and schedule maintenance to avoid unexpected disruptions [1,2,3,4].
Fault classification in industrial systems ensures operational reliability, safety, and efficiency [5,6]. However, several challenges impede effective fault classification, especially in the complex environments of Industry 4.0 and the emerging Industry 5.0. One major challenge is the scarcity of high-quality datasets with labeled fault conditions. Unlike domains with readily available data, industrial systems often lack comprehensive datasets encompassing various fault types and severities [7,8]. This scarcity arises from the infrequent occurrence of specific faults, the high cost and risk of inducing faults in operational systems, and the proprietary nature of many industrial processes. Consequently, existing datasets often lack the size and diversity needed to develop robust fault classification algorithms. A diverse dataset is crucial for training ML models to generalize effectively to new, unseen scenarios, enhancing their performance in real-world applications [9,10].
The complexity of industrial systems, with their myriad interacting components and processes, adds another layer of difficulty. Faults can manifest in multiple ways, with a single fault potentially presenting various symptoms and different faults causing similar symptoms [11]. For example, increased vibration in a motor might signal issues ranging from misalignment and imbalance to bearing faults or electrical problems. This complexity complicates the creation of generalized fault classification algorithms. Additionally, interactions between components can lead to secondary faults or obscure the primary fault, further complicating diagnosis. Effective fault classification requires a deep understanding of the system’s mechanics, dynamics, and operating conditions, which can vary widely across industries and individual machines. Real-time fault detection and diagnosis are crucial for maintaining high levels of efficiency and safety in Industry 4.0 and 5.0 environments. Real-time fault classification is a cornerstone of predictive maintenance strategies, aiming to address potential issues before they escalate. However, achieving real-time performance involves several challenges. Classification models must be fast and accurate, capable of processing large volumes of real-time sensor data. This often necessitates sophisticated, computationally intensive algorithms, which may be challenging to implement on standard industrial hardware. Furthermore, models must be robust against noise and variations in data to ensure reliable performance under all operating conditions [12,13,14,15].
Recent advancements in vibration analysis leverage emerging technologies and techniques to enhance fault diagnosis capabilities. ML algorithms and artificial intelligence techniques are increasingly used to automate vibration data analysis, identify complex patterns, and improve fault diagnosis accuracy [16]. IoT technologies enable real-time vibration monitoring and data analysis through connected sensors and cloud-based platforms, providing continuous insight into machinery health. Developing more sensitive and accurate sensors, including MEMS-based accelerometers and wireless vibration sensors, has improved the quality and ease of vibration data collection [17,18]. However, despite these technological advancements, the scarcity and imbalance of fault data remain significant challenges. In response, data augmentation has emerged as a critical technique for enhancing the generalization capabilities of ML models by artificially increasing the diversity and quantity of training data. This approach is particularly beneficial in fault classification, where data are often scarce and imbalanced. By enriching the dataset with diverse and varied examples, data augmentation enables more robust and accurate training of ML models. This is especially important in predictive maintenance and fault detection, where generating faulty data through augmentation techniques can significantly enhance model performance. In the era of Industry 4.0 and Industry 5.0, where equipment downtime can result in substantial financial losses, having a comprehensive dataset that includes various fault scenarios is imperative for building reliable and practical models [19,20,21,22].
Data augmentation helps simulate different fault conditions, such as cavitation, misalignment, imbalance, and bearing wear, thereby creating a more extensive dataset than what might be captured during normal operations. This approach not only enhances the diversity of the training data but also addresses the common challenge of data scarcity in industrial settings [23,24,25]. Data augmentation techniques can be classified into three categories: (1) Data-driven methods, (2) Model-level methods, and (3) Digital twin-based methods [26,27]. Model-level methods do not directly alter the input data but instead modify the model architecture or the learning process to achieve the effects of data augmentation. In contrast, data-driven methods directly modify the input data to create new, augmented samples, thereby diversifying the dataset [28,29].
The increasing complexity of industrial systems, such as centrifugal pumps and motor drive systems, necessitates advanced diagnostic methods to detect and classify faults, even in the presence of imbalanced data. Traditional approaches often struggle with insufficient or imbalanced fault data, making high diagnostic accuracy difficult to achieve. After extensive study and analysis of the water distribution system, we identified the need for a dedicated fault detection system to detect early signs of mechanical failures. Despite the programmable logic controller’s (PLC’s) effectiveness in switching between pumps randomly, it cannot monitor vibration patterns that could indicate potential faults. To address this gap, we developed a robust system designed to detect early failures, enhancing the reliability and longevity of the pumps.
Recent studies have explored integrating advanced machine learning techniques with data augmentation strategies to address challenges in fault diagnosis. Generative Adversarial Networks (GANs), Autoencoders (AEs), and Long Short-Term Memory (LSTM) networks have been widely applied to enhance fault detection, particularly by generating synthetic data to mitigate data scarcity and handling temporal dependencies in sequential data. These approaches have significantly improved fault classification under diverse operational conditions. Furthermore, advanced early fault detection (EFD) and health monitoring methods have been developed to address imbalanced and noisy data challenges. For example, an innovative fault detection and diagnosis (FDD) method using LSTM-GANs was proposed for a large LOX/kerosene rocket engine, demonstrating its effectiveness in detecting faults across various operational stages and outperforming traditional diagnostic methods [30]. Additionally, a hybrid framework combining GANs, Convolutional LSTMs, and Weighted Extreme Learning Machines (WELMs) was developed to tackle imbalanced and noisy datasets, significantly improving fault detection performance across multiple industrial scenarios [31]. A Compressed Sensing approach for data augmentation was employed in bearing fault diagnosis to generate high-diversity, low-complexity data from limited fault samples. The augmented data improved feature extraction and fault identification, proving highly effective in resource-constrained environments where data availability is limited by factors such as cost, time, or physical constraints. This method offers a low-complexity yet high-fidelity alternative to traditional GANs, particularly beneficial for handling limited datasets [32].
In railway infrastructure, an improved Autoencoder combined with SMOTE-based data augmentation was utilized to address unbalanced datasets in the fault diagnosis of railway turnout systems (RTSs). This method achieved high diagnostic accuracy by extracting deep features from noisy data, demonstrating the power of data augmentation in improving fault detection for critical transportation systems [33]. A novel Adversarial Variational Autoencoder (AVAE) with Sequential Attention (SQA) was also proposed to address imbalanced fault diagnosis in rolling bearings. The model synthesized data to supplement imbalanced datasets, providing interpretable results and improving generalization in fault detection tasks. This approach outperformed existing methods, particularly in handling imbalanced data common in industrial applications [34]. Moreover, GANs have been employed in mechanical sensor signal processing. A GAN-based framework (ACGAN) was developed to generate realistic sensor data for fault diagnosis in machinery. The generated synthetic data improved classification accuracy, showcasing the value of GANs in producing high-quality augmented data for industrial fault detection [35]. In compound fault diagnosis for bearings, a hybrid MCNN-LSTM model was proposed, leveraging data augmentation to enhance generalization. By superimposing signals in the frequency domain, the model significantly improved the diagnosis accuracy for single and compound faults, even in noisy environments [36]. Using small-sample datasets, a DRN-LSTM model was introduced for smart grid applications to diagnose electromagnetic transients (EMTs). Data augmentation techniques, such as random cropping, enhanced the model’s generalization, demonstrating the potential of LSTM in fault detection tasks with limited datasets and opening up new possibilities for fault diagnosis in grid applications [37].
Finally, a deep learning method was developed for rotating machinery fault diagnosis, incorporating multiple data augmentation techniques such as Gaussian noise and time stretching. By artificially creating additional valid samples, the model achieved high diagnostic accuracy even with limited original data, further validating the effectiveness of these augmentation strategies in real-world industrial applications [38].
These advancements demonstrate the growing importance of integrating advanced ML models into fault diagnosis frameworks, including GANs, AEs, LSTM networks, and Bayesian approaches. Leveraging these techniques makes it possible to achieve higher accuracy, robustness, and generalization in diagnosing faults across various industrial settings. Building on these advancements, our study introduces an advanced data augmentation framework specifically engineered to enhance diagnostics in centrifugal pumps using vibration data. By integrating traditional techniques, such as Gaussian noise and signal stretching, with advanced models like LSTM, AE, and GANs, our approach is designed to substantially improve the robustness and accuracy of fault diagnostic systems in industrial applications. This practical solution has the potential to significantly improve the reliability and safety of industrial systems, offering a promising outlook for the future of the field. This study’s contributions are as follows:
  • This study introduces a novel data augmentation method that utilizes Gaussian noise addition and signal stretching to generate synthetic data, effectively addressing the challenge of insufficient defect data in industrial environments. These traditional techniques simulate varied operating conditions and rotational speeds, contributing to more robust fault diagnostics.
  • Our research further enhances data augmentation by integrating advanced techniques, including LSTM, AE, and GANs. This tailored approach significantly improves the performance of diagnostic algorithms by capturing temporal dependencies, reducing noise, and generating synthetic data. As a result, the model substantially boosts accuracy and reliability for fault detection and classification, particularly in detecting rare and subtle anomalies such as cracks and wear.
  • This study highlights the critical role of data augmentation in fault diagnostics, demonstrating how a well-augmented dataset can enhance predictive maintenance protocols. By ensuring the availability of diverse and representative data, this research contributes to more effective and reliable fault detection, ultimately supporting the efficient operation of industrial systems.
The remainder of this study is as follows: Section 2 covers the associated background study and related works. Section 3 covers the proposed methodology framework to achieve the fault diagnostics of the centrifugal pump. Section 4 covers the experimental procedure, and Section 5 covers the results, discussion, and limitations of this study. Section 6 shows the concluding remarks of this study.

2. Background and Related Works

The modern industrial landscape increasingly relies on advanced diagnostic techniques to ensure the efficiency and reliability of machinery. A critical example is the motor–centrifugal pump system, which is widely used across various applications for its precision and reliability. Understanding and diagnosing faults in such systems are crucial for maintaining operational continuity and preventing costly downtimes. This is where fault diagnosis techniques come into play, offering methods to identify and address issues before they lead to significant failures. One of the most effective techniques is vibration analysis, which uses the vibrational patterns of machinery to detect anomalies. Predictive maintenance for centrifugal water pumps involves proactively identifying potential failures before they occur by analyzing the wear and cracks in critical components such as impellers, seals, and bearings. Centrifugal pumps are essential in various industrial and municipal applications, making their reliable operation crucial. Predictive maintenance techniques, such as vibration analysis, acoustic monitoring, and signal processing methods like short-time Fourier transform (STFT) and wavelet transforms, help detect early signs of wear and cracks that can lead to pump failure. Impeller wear and cracks can significantly impact pump performance by reducing efficiency and increasing energy consumption. Monitoring impeller conditions helps identify erosion or fatigue affecting flow dynamics. Sealing failures often result in leakage and loss of pressure, potentially causing operational disruptions and safety hazards. Regular inspection and analysis of seal integrity can prevent such issues. Bearing wear is another critical aspect; deteriorated bearings can increase vibrations and noise, indicating potential mechanical failures.

2.1. Fault Diagnostics under Vibration-Based AC Motor-Driven Centrifugal Pumps

AC motor-driven centrifugal pump systems are critical in various industrial applications, including water supply, wastewater treatment, and chemical processing. These systems combine AC motors’ electrical-to-mechanical energy conversion capabilities with the fluid dynamics of centrifugal pumps, resulting in highly efficient and controllable fluid handling solutions. AC motors are essential in converting electrical energy into mechanical energy through the interaction of magnetic fields. Key components include the stator, rotor, and various winding configurations. When an AC supply is provided, the stator generates a rotating magnetic field, causing the rotor to turn due to electromagnetic induction. The rotor is typically a squirrel cage or wound type, interacting with the magnetic field to produce torque. The most common AC motor type is the induction motor. It has a simple design, is robust, and requires little maintenance. It is widely used for its efficiency and reliability. Synchronous Motors run at synchronous speed, meaning their rotor speed matches the frequency of the supply current. They are used in applications where constant speed is essential. The paper [39] presents a novel automated gear fault detection method, combining Fourier–Bessel series expansion (FBSE) with empirical wavelet transform (EWT), which is termed FBSE-EWT. This approach enhances frequency resolution by decomposing gear vibration signals into narrow-band components and selecting significant features using the Kruskal–Wallis test. Compared to traditional EWT, FBSE-EWT with a random forest classifier has demonstrated superior gear fault detection performance, offering improved reliability and, most importantly, enhanced effectiveness in monitoring rotary systems. The paper [40] introduces a novel network architecture, signal bootstrap your own latent (SBYOL), designed to enhance fault diagnosis in rotating machinery with minimal labeled data. 
Unlike traditional methods relying on semi-supervised and transfer learning, SBYOL leverages unlabeled vibration signals to tackle challenges like variable working conditions and noise. The architecture incorporates a self-supervised pre-training network using ResNet-18 and a time–frequency signal transformation (TFST) technique for robust fault feature recognition and diagnosis, showing superior performance in scenarios with limited samples and intense noise. In the paper [41], a fault prognostic system using LSTM was developed to enhance the reliability of rolling element bearings in industrial systems. This model leverages raw time series sensor data, minimizing feature engineering compared to conventional methods that use time, frequency, or time–frequency domain features. The LSTM model achieved the lowest root mean square error and demonstrated superior generalization across various vibration data sources, including hydro and wind power turbines, showcasing its effectiveness in proactive fault diagnostics. In their study [42], the authors present a comprehensive method for monitoring and diagnosing water hammer faults in centrifugal pumps, a critical aspect of industrial safety and water supply systems. They developed a novel approach to capture and analyze vibration signals, implementing a monitoring model that integrates edge and server-side diagnostics. Experimental results validated their method’s effectiveness, showing that high-pass filtering and subsequent analysis using kurtosis, pulse, and margin indices reliably detect water hammer events. This model significantly enhances the safety and reliability of centrifugal pump operations by providing timely fault detection and accurate diagnostics. In their paper [43], the authors address the challenge of unbalanced mechanical condition monitoring data affecting diagnosis accuracy. 
They propose an advanced fault diagnosis method combining SMOTE + Tomek Link for sample balancing and a dual-channel feature fusion approach. By integrating a global–local feature complementary module (GLFC) with BiGRU and an attention layer, their method enhances diagnostic performance even with limited fault samples. Experimental results validated the model’s improved accuracy and robustness. The study by [44] presents an innovative approach for fault detection in monoblock centrifugal pumps (MCPs), utilizing deep transfer learning techniques. The study employed a sophisticated deep-learning classification system to diagnose faults by converting accelerometer-captured vibration signals into spectrogram images. Evaluating 15 pre-trained networks, including ResNet-50 and AlexNet, the research found that AlexNet achieved 100% accuracy with a training time of 17 s. This method promises enhanced reliability and maintenance practices for MCPs in industrial applications.

2.2. Gaussian Noise and Signal Stretching

Gaussian noise (GN) is a common type of statistical noise that follows a Gaussian distribution. It is characterized by its bell-shaped probability density function and is prevalent in various data types, including images, audio, and sensor readings. Many factors, such as electronic interference, thermal fluctuations, and quantization errors, introduce this noise. In data augmentation, GN is often used to simulate real-world conditions, enhancing the robustness of models by preventing overfitting and improving generalization. GN can be represented mathematically as X ∼ N(μ, σ²), where N denotes a normal distribution, μ is the mean of the noise, and σ² is the variance. For a data point x, the noisy observation x′ is given by the following:
x' = x + \epsilon
where ε is a random variable drawn from the Gaussian distribution N(μ, σ²). The probability density function of a normal distribution is given by the following:
f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)
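As a minimal sketch of this augmentation step, the following NumPy snippet adds zero-mean Gaussian noise to a vibration-like signal; the 50 Hz test signal and σ = 0.05 are illustrative choices, not parameters from this study:

```python
import numpy as np

def add_gaussian_noise(signal, mu=0.0, sigma=0.05, seed=None):
    """Return x' = x + eps, with eps drawn i.i.d. from N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    return signal + rng.normal(mu, sigma, size=signal.shape)

# Example: five noisy copies of a synthetic 50 Hz vibration-like signal
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2.0 * np.pi * 50.0 * t)
augmented = [add_gaussian_noise(clean, sigma=0.05, seed=i) for i in range(5)]
```

Each noisy copy preserves the underlying fault signature while perturbing the waveform, which is what makes the technique useful for enlarging small vibration datasets.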
Signal stretching (SS) is a data augmentation technique that simulates variations in a signal’s temporal or spatial domain. It involves altering the duration or length of the signal, which can help generalize the model to handle variations in input data. This technique is beneficial in time series and audio data, where stretching can mimic changes in speed or sampling rates. It improves the model’s robustness to such variations and enhances its generalization ability to unseen data. SS can be mathematically expressed using time-scaling transformations. For a signal s(t), the stretched signal s′(t) is defined as follows:
s'(t) = s(t/\alpha)
where α is the stretching factor. If α > 1, the signal is stretched (i.e., its duration increases), while α < 1 compresses the signal (i.e., its duration decreases).
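For discrete signals, this time-scaling can be sketched by resampling onto a stretched index grid; the linear interpolation below is one simple, illustrative choice (dedicated resampling routines could equally be used):

```python
import numpy as np

def stretch_signal(signal, alpha):
    """Time-stretch a 1-D signal: s'(t) = s(t / alpha).
    alpha > 1 lengthens the signal, alpha < 1 shortens it;
    linear interpolation fills in the resampled values."""
    n = len(signal)
    new_n = max(2, int(round(n * alpha)))
    new_idx = np.linspace(0.0, n - 1, new_n)   # resampled time grid
    return np.interp(new_idx, np.arange(n), signal)
```

For example, stretching a 1000-sample segment with α = 1.2 yields a 1200-sample segment, mimicking the same fault signature recorded at a slower rotational speed.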

2.3. ML Classifier

The Support Vector Machine (SVM) classifier is a supervised learning method for classification tasks. It aims to find the optimal hyperplane that separates data points of different classes with the maximum margin. Unlike the One-Class SVM, designed for anomaly detection, the traditional SVM focuses on distinguishing between two or more classes by finding the hyperplane that maximizes the distance (margin) between each class’s closest points (support vectors). Using a kernel function, the SVM classifier maps the input data into a high-dimensional feature space. The goal is to find a hyperplane that separates the classes in this feature space as distinctly as possible. The data points closest to the hyperplane are called support vectors, and they play a crucial role in defining the decision boundary. The SVM classifier can handle linear and non-linear classification tasks depending on the kernel. Mathematically, the SVM classifier aims to solve the following optimization problem:
\min_{W, b, \xi} \; \frac{1}{2} \|W\|^2 + C \sum_{i=1}^{n} \xi_i
subject to y_i(W · φ(X_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, …, n, where W is the weight vector, b is the bias term, ξ_i are slack variables that allow some data points to be misclassified, C is the regularization parameter that controls the trade-off between maximizing the margin and minimizing the classification error, φ(X_i) represents the mapping of the input data into the high-dimensional feature space, and y_i denotes the class labels. This optimization problem ensures that the SVM classifier finds a hyperplane that maximizes the margin while allowing for some misclassifications to achieve better generalization on unseen data [45,46]. Random forest (RF) is an ensemble learning method that builds multiple decision trees during training and outputs the mode of the classes or the mean prediction of the individual trees. The key idea is to improve the model’s accuracy and robustness by reducing the variance associated with individual decision trees. Each tree in the RF is trained on a different bootstrap sample of the original data, and at each node, the best split is chosen from a random subset of features, which introduces diversity among the trees [47]. The prediction of the RF for a given input x can be expressed mathematically as follows:
\hat{y} = \frac{1}{N} \sum_{i=1}^{N} h_i(x)
where N is the number of trees in the model, and h_i(x) is the prediction from the i-th tree. Gradient Boosting (GB) is an ensemble learning technique that builds models sequentially, with each new model aiming to correct the errors made by the previous ones. Unlike the RF model, where trees are built independently, GB constructs trees one at a time, and each new tree fits the negative gradient of the loss function with respect to the current prediction. GB is highly effective for classification and regression tasks, offering high accuracy by focusing on difficult-to-predict instances. It is advantageous when overfitting can be controlled through regularization and early stopping [48]. The prediction of a GB model is given by the following:
\hat{y} = \sum_{m=1}^{M} \alpha_m h_m(x)
where M is the number of trees, h_m(x) is the m-th tree’s prediction, and α_m is the learning rate that scales the contribution of each tree.
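The additive form ŷ = Σ_m α_m h_m(x) can be made concrete with a minimal NumPy sketch that fits depth-1 regression stumps to the residuals of a squared loss; this is an illustration of the boosting mechanics only, and real applications would use a library implementation such as scikit-learn:

```python
import numpy as np

def fit_stump(x, residual):
    """Find the 1-D threshold split minimizing squared error on the residual.
    Returns (threshold, left_value, right_value)."""
    best = None
    for thr in np.unique(x)[:-1]:          # last value would leave an empty right side
        left_mean = residual[x <= thr].mean()
        right_mean = residual[x > thr].mean()
        pred = np.where(x <= thr, left_mean, right_mean)
        err = float(((residual - pred) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, thr, left_mean, right_mean)
    return best[1], best[2], best[3]

def gradient_boost(x, y, n_trees=50, lr=0.1):
    """Additive model y_hat = sum_m lr * h_m(x): each stump h_m is fit to the
    current residual, the negative gradient of the squared loss."""
    stumps = []
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_trees):
        thr, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= thr, lv, rv)
        stumps.append((thr, lv, rv))
    return stumps, pred
```

Each round shrinks the remaining residual by the learning rate, so the ensemble converges to the target while no single stump ever needs to be accurate on its own.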

2.4. Long Short-Term Memory (LSTM) Networks

Long Short-Term Memory (LSTM) networks are a type of recurrent neural network (RNN) designed to model temporal sequences and long-range dependencies more effectively than traditional RNNs. Proposed by Hochreiter and Schmidhuber in 1997, LSTM networks address the vanishing gradient problem, enabling the learning of long-term dependencies. The core of an LSTM is its memory cell, which maintains information over time. Each cell has three gates: the input gate (i_t), the forget gate (f_t), and the output gate (o_t). These gates regulate the flow of information into, within, and out of the cell [49,50,51]. Mathematically, the forget gate f_t, input gate i_t, candidate values c̃_t, cell state c_t, output gate o_t, and hidden state h_t are updated as follows, respectively:
f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)
i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)
\tilde{c}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)
c_t = f_t \cdot c_{t-1} + i_t \cdot \tilde{c}_t
o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)
h_t = o_t \cdot \tanh(c_t)
These mechanisms allow LSTM networks to retain and update information over long periods, making them effective for tasks like time series prediction [52], natural language processing [53], and anomaly detection [54] in sequential data.
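The gate equations above can be traced step by step with a minimal NumPy implementation of a single LSTM cell; the hidden size, input size, and random weights below are purely illustrative (untrained), not values from this study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step; each weight matrix acts on the
    concatenation [h_prev, x_t], matching the gate equations."""
    z = np.concatenate([h_prev, x_t])
    Wf, Wi, Wc, Wo = W
    bf, bi, bc, bo = b
    f_t = sigmoid(Wf @ z + bf)            # forget gate
    i_t = sigmoid(Wi @ z + bi)            # input gate
    c_tilde = np.tanh(Wc @ z + bc)        # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde    # updated cell state
    o_t = sigmoid(Wo @ z + bo)            # output gate
    h_t = o_t * np.tanh(c_t)              # updated hidden state
    return h_t, c_t

# Illustrative run: hidden size 3, input size 2, random (untrained) weights
rng = np.random.default_rng(0)
H, X = 3, 2
W = [rng.normal(scale=0.5, size=(H, H + X)) for _ in range(4)]
b = [np.zeros(H) for _ in range(4)]
h, c = np.zeros(H), np.zeros(H)
for x_t in rng.normal(size=(5, X)):       # process a 5-step sequence
    h, c = lstm_step(x_t, h, c, W, b)
```

Because h_t is the product of a sigmoid output gate and tanh(c_t), the hidden state stays bounded in (−1, 1) while the cell state c_t carries information across time steps.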

2.5. Autoencoder Network

The Autoencoder (AE) is a generative model that learns to encode input data into a lower-dimensional latent space and then reconstruct the data from this latent space. It consists of an encoder network, which maps input data x to a latent distribution q ( z | x ) , and a decoder network, which reconstructs the data from this latent distribution p ( x | z ) . The AE is particularly useful for anomaly detection by learning normal data distribution and identifying deviations from this learned distribution as anomalies. The AE optimizes the evidence lower bound (ELBO) on the log-likelihood of the data, with the equation expressed as follows:
\log p(x) \geq \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - \mathrm{KL}\left(q(z|x) \,\|\, p(z)\right)
where p ( x | z ) is the reconstruction probability of the data given the latent variables, q ( z | x ) is the approximation of the posterior distribution of the latent variables, and p ( z ) is the prior distribution of the latent variables. The Kullback–Leibler (KL) divergence term penalizes the divergence between the learned latent and prior distributions. For anomaly detection, the reconstruction error and the latent space distribution help identify outliers. Anomalies typically exhibit high reconstruction errors because they deviate significantly from the normal data distribution learned by the AE. Autoencoders (AEs) have been employed for anomaly detection by learning normal data distribution and identifying deviations that signify anomalies [55,56,57,58,59].
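The reconstruction-error scoring described above can be illustrated with the simplest possible autoencoder, a linear one (equivalent to PCA); a practical AE is a nonlinear neural network, but the anomaly logic, with high reconstruction error flagging deviations from the learned normal distribution, is the same. All data below are synthetic and illustrative:

```python
import numpy as np

def fit_linear_ae(X, k):
    """Fit a linear autoencoder (equivalent to PCA): the encoder projects
    centered data onto the top-k principal directions; the decoder maps back."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, V):
    """Per-sample squared reconstruction error ||x - x_hat||^2."""
    Z = (X - mu) @ V.T      # encode to the k-dimensional latent space
    X_hat = mu + Z @ V      # decode back to the input space
    return ((X - X_hat) ** 2).sum(axis=1)

# Normal samples lie near a line in 2-D; a point far off the line is anomalous
rng = np.random.default_rng(1)
t = rng.normal(size=200)
normal = np.c_[t, 2.0 * t] + 0.01 * rng.normal(size=(200, 2))
mu, V = fit_linear_ae(normal, k=1)
errors = reconstruction_error(np.vstack([normal, [[0.0, 5.0]]]), mu, V)
# errors[-1] (the off-line point) is orders of magnitude larger than the rest
```

Thresholding these per-sample errors is the usual way such a model is turned into an anomaly detector.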

2.6. Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) are a class of ML models introduced by Ian Goodfellow and colleagues in 2014. GANs consist of two neural networks—the generator and the discriminator—that are trained simultaneously through a process of adversarial learning. The generator aims to produce synthetic data that are indistinguishable from real data, while the discriminator’s goal is to correctly classify data as either real or generated. The interplay between these two networks allows GANs to learn and generate high-quality data samples [60,61,62]. The core concept of GANs can be understood through the following components:
  • Generator (G): The generator network G ( z ; θ G ) takes as input a random noise vector z (often sampled from a uniform or normal distribution) and transforms it into a synthetic data sample G ( z ) . The generator is parameterized by θ G , which are the neural network weights.
  • Discriminator (D): The discriminator network D ( x ; θ_D ) takes as input a data sample x (which can be real or generated) and outputs a probability D ( x ) indicating whether the sample is real (close to 1) or generated (close to 0). The discriminator is parameterized by θ_D .
  • Adversarial Loss: The training process of a GAN involves optimizing the following minimax objective: \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] .
  • Training Process: Step 1: Update the discriminator by maximizing V ( D , G ) while keeping the generator fixed. This step improves the discriminator’s ability to distinguish between real and fake data. Step 2: Update the generator by minimizing V ( D , G ) while keeping the discriminator fixed. This step improves the generator’s ability to produce data that fool the discriminator.
  • Convergence: Theoretically, a GAN reaches a Nash equilibrium when the discriminator cannot distinguish between real and generated data, meaning D ( x ) = 0.5 for all x. At this point, the generator has learned the underlying data distribution.
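The minimax objective above splits into the two losses used in the alternating updates. As a minimal sketch (function names hypothetical, and using the non-saturating generator loss that is standard in practice rather than the original minimax form):

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # Step 1: the discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e., it minimizes the negative of V(D, G)
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-8):
    # Step 2: the non-saturating generator maximizes log D(G(z)) instead of
    # minimizing log(1 - D(G(z))), which gives stronger gradients early on
    return -np.mean(np.log(d_fake + eps))
```

At the theoretical equilibrium, where D ( x ) = 0.5 for every sample, the discriminator loss settles at 2 log 2 ≈ 1.386, which is a useful sanity check during training.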

2.7. Time–Frequency Signal Processing Techniques

The STFT is a technique used in signal processing to analyze the frequency content of a signal over time. It is essentially a Fourier transform applied to localized sections of the signal, which allows for the examination of non-stationary signals. The signal is divided into overlapping segments, each windowed to minimize edge effects, and the Fourier transform is computed for each segment [63]. Mathematically, the STFT of a signal x(t) is defined as follows:
\mathrm{STFT}_x(t, f) = \int_{-\infty}^{\infty} x(\tau)\, w(\tau - t)\, e^{-j 2 \pi f \tau}\, d\tau
where w(t) is a window function centered around zero. The wavelet transform is another time–frequency analysis tool that decomposes a signal into shifted and scaled versions of a wavelet function. Unlike the STFT, wavelets can provide multi-resolution analysis, offering good time resolution at high frequencies and good frequency resolution at low frequencies [64]. The continuous wavelet transform (CWT) of a signal x(t) is defined as follows:
\mathrm{CWT}(a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt
where ψ(t) is the mother wavelet, a is the scaling factor, and b is the translation factor. The Hilbert transform (HT) is used to derive the analytic representation of a real-valued signal, which helps in extracting the instantaneous amplitude and phase. It is used extensively in modulation and demodulation schemes in communications and in the analysis of non-stationary signals [65]. The Hilbert transform \hat{x}(t) of a signal x(t) is given by the following:
\hat{x}(t) = \frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{x(\tau)}{t - \tau}\, d\tau
where P . V . denotes the Cauchy principal value. The analytic signal z ( t ) is then defined as follows:
z ( t ) = x ( t ) + j x ^ ( t )
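All three transforms are available in SciPy. The brief sketch below applies the STFT and the Hilbert transform to a synthetic 50 Hz tone; the signal and window parameters are illustrative, not the paper's data:

```python
import numpy as np
from scipy.signal import stft, hilbert

fs = 1000.0                         # Hz, matching the DAQ sampling rate used later
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50.0 * t)    # synthetic 50 Hz vibration-like tone

# STFT: windowed Fourier transforms give the frequency content over time
f, seg_t, Z = stft(x, fs=fs, nperseg=256)

# Hilbert transform: analytic signal z(t) = x(t) + j*x_hat(t)
z = hilbert(x)
envelope = np.abs(z)                                  # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)         # instantaneous frequency, Hz
```

For this tone, the STFT spectrogram peaks near 50 Hz in every segment, the envelope is approximately constant at 1, and the instantaneous frequency stays near 50 Hz, which is exactly the behavior inspected in Figures 4 and 5 for the real vibration signals.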

3. Methodology

Data augmentation techniques were applied to expand the size of the training dataset and introduce variability. This approach is crucial for improving the model’s ability to generalize to unseen data, mitigating the risk of overfitting and ensuring that the model would be exposed to a broader range of possible input scenarios. By enriching the dataset with diverse examples, data augmentation enhanced the robustness and accuracy of the ML models used in this study, particularly in predictive maintenance and fault detection. In addition to the existing PLC that manages pump operations by switching between them randomly, we implemented an advanced system for early fault detection. This system continuously monitors vibration patterns to identify early signs of failure, particularly those the PLC cannot detect. The integration of this system is crucial for maintaining uninterrupted water distribution and preventing costly downtime.

3.1. Gaussian Noise and Signal Stretching

The data collected from the centrifugal pump over one day consisted of 1,048,576 normal data points, 97,912 wear data points, and 109,328 crack data points. The overall framework is illustrated in Figure 1. Before feeding the data into the model, several pre-processing techniques were applied: (i) Data cleaning to remove rows with missing values, (ii) Normalization to scale the values uniformly without altering their relative differences, (iii) Feature extraction to identify relevant features for model training, and (iv) Feature selection to choose the most significant features using a threshold of 0.9. For the augmented data, we performed data augmentation on the faulty data using a combination of Gaussian noise and signal stretching. Gaussian noise was added to the raw data with a standard deviation of 0.15, and signal stretching was applied with a stretch factor of 0.2. The augmented data were then combined with the original faulty data, increasing the data size to three times the original size before normalization. After feature selection, the normal data were reduced to 5825 features, while the augmented faulty data, comprising wear and crack data, were reduced to 1619 and 1822 features, respectively. Without data augmentation, the selected features for the faulty data were reduced to 539 and 607, respectively. For training the classifier models—SVM, RF, and GB—we used 70% of the data for training and 30% for testing. The parameters of the classifier models are shown in Table 1. The performance of each model, both before and after augmentation, was evaluated in terms of accuracy, precision, recall, F1-score, confusion matrix, and 5-fold cross-validation.
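The two augmentations can be sketched as follows, using the stated parameters (σ = 0.15 for the noise, stretch factor 0.2). The paper does not detail its stretching implementation; linear interpolation with a crop back to the original length is assumed here:

```python
import numpy as np

def add_gaussian_noise(signal, std=0.15, rng=None):
    # additive Gaussian noise with the standard deviation used in the paper
    rng = np.random.default_rng(rng)
    return signal + rng.normal(0.0, std, size=signal.shape)

def stretch_signal(signal, factor=0.2):
    # time-stretch by resampling: the signal is slowed down by (1 + factor)
    # and cropped back to the original number of points
    n = len(signal)
    stretched_len = int(n * (1.0 + factor))
    stretched = np.interp(
        np.linspace(0, n - 1, stretched_len),  # denser time base
        np.arange(n), signal)
    return stretched[:n]
```

Both functions preserve the sample count, so augmented sequences can be concatenated directly with the original faulty data before normalization and feature extraction.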

3.2. LSTMAEGAN for Anomaly Detection

The LSTM-Autoencoder-GAN (LSTMAEGAN) model, as shown in Figure 2, is integrated to handle and generate sequential data effectively. This hybrid model leverages the strengths of LSTM networks for sequence reconstruction and GANs for generating realistic synthetic sequences. The LSTM-AE serves as the foundation of the model. It consists of an encoder that processes input sequences through an LSTM layer with 128 units, compressing them into a latent representation. This latent space representation is then expanded to reconstruct the original sequence length using a decoder LSTM layer with 128 units. The final output is produced through a Time-Distributed Dense layer that matches the original sequence dimensions. This architecture is detailed in Table 2.
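The encoder–decoder just described can be sketched in Keras. Layer sizes follow the text (128-unit encoder and decoder LSTMs, Time-Distributed Dense output); the sequence length and feature count below are placeholders, not the paper's exact configuration from Table 2:

```python
import numpy as np
from tensorflow.keras import layers, models

timesteps, n_features = 30, 1  # placeholder sequence shape

lstm_ae = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(128),                         # encoder: compress to a latent vector
    layers.RepeatVector(timesteps),           # expand latent back to sequence length
    layers.LSTM(128, return_sequences=True),  # decoder
    layers.TimeDistributed(layers.Dense(n_features)),  # match input dimensions
])
lstm_ae.compile(optimizer="adam", loss="mse")
```

Training such a model on normal sequences only, then thresholding the mean-squared reconstruction error, is the standard way an LSTM-AE is used for anomaly detection.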
Following the Autoencoder, the GAN framework introduces two additional components: the generator and the discriminator. The generator network creates synthetic sequences from random noise. It begins with a Dense layer with 100 units, followed by a LeakyReLU activation and BatchNormalization to stabilize training. The generator then uses another Dense layer with a tanh activation function to output sequences reshaped to the desired dimensions, as shown in Table 3. The discriminator is tasked with distinguishing between real and fake sequences. It comprises two LSTM layers—the first with 128 units and the second with 64 units—which are designed to process the sequences and extract features. The final layer is a Dense layer with a sigmoid activation function that outputs the probability of the sequence being real or fake. This setup is detailed in Table 4. Training the model involves a two-phase process. First, the discriminator is trained to differentiate actual sequences from those generated by the generator. Then, the generator is trained to improve its ability to produce sequences that can effectively fool the discriminator. This iterative training process enhances the model’s capability to generate high-quality synthetic sequences and refine sequence reconstruction.
Overall, the LSTMAEGAN model leverages advanced sequence processing and generative techniques to handle time series data, making it a robust tool for anomaly detection and synthetic data generation tasks. Combining the Autoencoder’s reconstruction capabilities with the GAN’s generative power, the model offers a comprehensive approach to managing and analyzing sequential data.

3.3. Practical Considerations in Deploying LSTMAEGAN for Anomaly Detection

In deploying advanced deep learning models like LSTM-AE and GANs, several key considerations must be addressed to ensure optimal performance and applicability in real-world scenarios. While these models reduce the need for extensive manual feature engineering by automatically learning representations from raw data, the initial data pre-processing steps, such as normalization and sequence segmentation, remain critical. Furthermore, although highly scalable and capable of handling large and complex datasets, deep learning models require significant computational resources, particularly GPUs, for efficient training and inference; this scalability comes with the challenge of managing numerous hyperparameters, which need careful tuning to avoid issues like overfitting. Finally, the interpretability of these models poses another challenge. Despite their ability to model complex patterns and generate high-quality synthetic data, LSTM-AE models and GANs are often regarded as black box models. Techniques such as visualizing latent spaces and analyzing reconstruction errors provide some insight into their decision-making processes, which is crucial in industrial applications where understanding the basis for anomaly detection is essential.

3.4. Performance Metrics

The performance of ML models is commonly assessed using key metrics: accuracy (A), precision (P), recall (R), and F1-score (F1). These metrics are critical when evaluating models on imbalanced datasets, as they provide a more comprehensive understanding of a model's ability to classify instances correctly. In this study, the SVM, RF, and GB models were evaluated using 5-fold cross-validation, a robust technique that ensures the reliability and consistency of the results. The LSTMAEGAN model, being generative, was additionally assessed using a reconstruction error threshold (E) alongside precision, recall, and F1-score. While accuracy provides an overall measure of how well a model classifies data correctly, precision and recall become crucial on imbalanced datasets. Precision indicates the proportion of true positive predictions among all positive predictions, while recall measures the proportion of actual positives correctly identified by the model. The F1-score, the harmonic mean of precision and recall, offers a balanced metric when there is a trade-off between precision and recall. For the LSTMAEGAN model, the reconstruction error threshold assesses effectiveness in anomaly detection, where the model aims to reconstruct normal sequences accurately and to flag large deviations as anomalies. The mathematical expressions for these metrics are as follows:
A = \frac{TP + TN}{TP + TN + FP + FN}
P = \frac{TP}{TP + FP}
R = \frac{TP}{TP + FN}
F1 = \frac{2 \times P \times R}{P + R}
E = \left\| x - \hat{x} \right\|
where T P is the number of true positives, T N is the number of true negatives, F P is the number of false positives, F N is the number of false negatives, x is the original input sequence, x ^ is the reconstructed sequence by the model, and · represents a norm, typically the Euclidean norm.
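The four classification metrics reduce to a few lines of code; a quick sketch from confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```

In a multi-class setting such as normal/wear/crack, these counts are taken per class (one-vs-rest) and then averaged, which is what the per-class discussion in Section 5.3 reflects.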

4. Experimental Study

Figure 3 shows the vibration sensor placement to validate this study’s proposed method. Our experimental study of a centrifugal water pump focused on analyzing vibrations to detect potential anomalies and improve maintenance strategies. Two vibration sensors were strategically attached to the centrifugal pump to capture data from different points, ensuring a comprehensive understanding of the pump’s operational state. Table 5 provides the specifications of the motors and pumps used in the water supply system. The table compares the characteristics of the main and backup systems. The main motor operates at 15 kW, with a frequency of 60 Hz, and it delivers 20 HP through a 4-pole configuration. It operates at a current of 52/30 A, with a 220/380 V voltage and a speed of 1760 revolutions per minute (rpm). The motor weighs 140 kg and has an efficiency of 91.0%. The corresponding pump in the main system is rated at 15 kW and operates at 1750 rpm. On the other hand, the backup motor is a 7.5 kW unit with a similar operational frequency (60 Hz) and pole configuration (four poles). Still, it delivers 10 HP and operates at a lower current of 26/15.1 A. It runs at a slightly reduced speed of 1730 rpm, weighs 70 kg, and has a somewhat lower efficiency of 87.5%. The backup pump has a rating of 7.5 kW and operates at 1750 rpm.

4.1. Data Collection and Pre-Processing

The sensors were placed permanently in the water distribution system of TSR company, Gumi-si, South Korea, which experienced crack and wear symptoms during the data collection period, as explained by the personnel on site. These sensors continuously recorded vibration signals, generating raw data that underwent several meticulous pre-processing steps. The primary data acquisition was performed using a quartz accelerometer and the NI 9234 data acquisition module. The accelerometer, with a sensitivity of 100 mV/g (±5%) and a frequency range of 1 to 4000 Hz (±5%), provided exceptionally high-resolution measurements crucial for capturing the subtle variations in vibration signals, including those associated with crack and wear conditions.
The accelerometer was connected to the NI 9234 module from National Instruments Korea Co., Ltd. (Seoul, Republic of Korea), a critical component that features four analog input channels, a 24-bit Delta-Sigma ADC, and a sampling frequency set at 1000 Hz. This setup was instrumental in ensuring accurate and high-fidelity data collection, which was essential for subsequent analysis, particularly in identifying and characterizing the vibration patterns indicative of crack and wear faults. The raw vibration data were meticulously cleaned to remove any noise or irrelevant information that could skew the analysis, ensuring that the critical signals associated with crack and wear were preserved.
After cleaning, Min-Max scaling was applied to normalize the signals across a consistent range. Min-max scaling is a normalization technique that transforms data to a standard scale without distorting differences in the ranges of values. This facilitates comparison across different datasets and enhances the model’s ability to learn from the data. Sequences were then created from the time series data, carefully considering sequence length and overlap to refine the data further and making them more structured and suitable for input into ML models. These pre-processing steps, along with the detailed specifications of the equipment used, are summarized in Table 6.
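The scaling and windowing steps can be sketched as follows. The sequence length and overlap below are illustrative (the paper's exact values are given in Table 6), and the helper names are hypothetical:

```python
import numpy as np

def min_max_scale(x):
    # map values onto [0, 1] without distorting relative differences
    return (x - x.min()) / (x.max() - x.min())

def make_sequences(x, seq_len, overlap):
    # slide a window of seq_len points over the signal, advancing by
    # seq_len - overlap points, so consecutive windows share `overlap` samples
    step = seq_len - overlap
    starts = range(0, len(x) - seq_len + 1, step)
    return np.stack([x[i:i + seq_len] for i in starts])
```

The resulting (num_sequences, seq_len) array is the structured input expected by sequence models such as the LSTM-AE.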

4.2. Data Augmentation and Implementation

Traditional data augmentation techniques, such as adding GN and SS, are straightforward yet effective in increasing dataset diversity. GN enhances model robustness by introducing random variations, while SS adjusts signal timing without altering content, allowing the model to learn from different signal durations. In contrast, advanced data augmentation using the LSTMAEGAN leverages deep learning to generate synthetic data that closely mimic the original dataset’s complex patterns. This method preserves temporal dependencies and produces more diverse and high-quality augmentations, significantly improving model generalization and performance, especially in detecting anomalies or rare events. In our implementation, these augmented datasets were used to train anomaly detection algorithms, enhancing the models’ accuracy and robustness. By combining traditional and advanced augmentation techniques, we significantly improved the system’s ability to detect and classify anomalies in vibration data, leading to more effective maintenance strategies for centrifugal pumps. This study underscores the crucial role of data augmentation in predictive maintenance and highlights the potential of advanced ML techniques in improving the reliability and efficiency of industrial equipment.

5. Results and Discussion

5.1. Time–Frequency Signal Processing

Figure 4 illustrates the application of two time–frequency analysis techniques, the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT), to vibration signals from normal, crack, and wear conditions. The STFT (top row) captures how the frequency content changes over time, while the CWT provides a time–frequency representation that highlights varying scales of the signals. The STFT is advantageous for its simplicity and efficiency in identifying consistent frequency components, making it useful for detecting periodic signals. However, its fixed window size limits its ability to simultaneously resolve time and frequency details, potentially missing transient anomalies. The CWT (bottom row) offers a more flexible approach by analyzing the signal at multiple scales, thus capturing varying frequency components with better temporal resolution. This adaptability is beneficial for detecting transient or non-stationary features, though it comes at a higher computational cost and complexity. When interpreting these plots, it is crucial to look for distinct changes in frequency patterns and scales, which can indicate the presence of anomalies or faults in the signal. Figure 5 showcases the HT's ability to extract the amplitude envelope and instantaneous frequency from vibration signals under normal, crack, and wear conditions. The HT is valuable for analyzing non-stationary signals, providing insights into the amplitude modulation and instantaneous frequency that are crucial for fault detection and condition monitoring. Its primary advantage lies in its capability to reveal phase information and the time-varying frequency of signals. However, the HT may be sensitive to noise and less effective for signals with overlapping frequency components.
In these plots, observe the amplitude envelope for changes in signal strength and the instantaneous frequency plot for variations in frequency, which can highlight anomalies or deviations from normal operating conditions. This analysis helps understand the signal’s dynamic behavior and identify potential faults.

5.2. Statistical Feature Engineering

Statistical feature engineering is a critical step in the pre-processing phase of ML, especially in tasks involving time series or sequential data, such as vibration analysis in predictive maintenance. By transforming raw data into meaningful statistical features, we can capture essential characteristics that help distinguish between normal and abnormal behavior in systems like centrifugal pumps. After extracting these features, selecting those most relevant to the target variable is crucial, as it directly impacts the model's effectiveness, efficiency, and overall outcome. One standard feature selection method leverages the Pearson correlation coefficient (PCC), which measures the strength and direction of the linear relationship between each feature and the target variable. The PCC r_XY is calculated in Equation (23). Features with a high absolute correlation with the target variable are considered more informative and are selected for further analysis. A threshold t can be applied to retain only features that meet the desired correlation criterion, as depicted in Equation (24). Table 7 shows the time domain statistical features extracted from the vibration dataset.
r_{XY} = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n} (X_i - \bar{X})^2}\, \sqrt{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}}
The condition for selecting a feature X j based on a threshold t is as follows:
\text{Select } X_j \text{ if } |r_{X_j Y}| \geq t
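Equations (23) and (24) amount to a short selection loop. A sketch using NumPy's `corrcoef` (function name hypothetical; note the paper also applies a PCC threshold among features to reduce multi-collinearity, whereas this sketch implements only the feature-vs-target criterion of Equation (24)):

```python
import numpy as np

def select_features(X, y, t=0.9):
    """Keep column j of X when |r_{X_j Y}| >= t (Equation (24))."""
    selected = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]  # Pearson r of feature j vs. target
        if abs(r) >= t:
            selected.append(j)
    return selected
```

A feature perfectly aligned with the target is retained, while a weakly correlated one is dropped, mirroring the threshold-of-0.9 selection described in Section 3.1.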
The correlation plot of the extracted statistical features before data augmentation (Figure 6) provides crucial insights into the relationships between the twelve features: maximum value, mean value, minimum value, standard deviation, peak-to-peak, mean amplitude, RMS, waveform factor, pulse indicator, peak index, square root amplitude, and margin indicator. This analysis is pivotal in identifying potential multi-collinearity issues, where features are highly correlated:
  • Multi-collinearity Concerns: High correlations, particularly those above 0.9, are evident between features such as the RMS, peak-to-peak, standard deviation, and maximum value. These high correlations suggest that these features convey similar information, which could lead to redundancy. For instance, the RMS is highly correlated with the mean amplitude, peak-to-peak, and maximum value, indicating that these features might not contribute additional unique information to the model. Such redundancy can decrease model efficiency by complicating the feature space unnecessarily.
  • Feature Selection: A more streamlined feature set was selected by applying a PCC threshold of 0.9. The chosen features—maximum value, minimum value, waveform indicator, and peak index—exhibited lower inter-correlations. This selection ensured that each feature provided distinct and valuable information, thus reducing the complexity of the model. A more interpretable model often leads to better generalization and accuracy, mainly when the selected features are less redundant and more independent.
After data augmentation, as seen in the correlation plot in Figure 7, the relationships among the features underwent noticeable changes:
  • Expanded Feature Set: The feature set after augmentation now includes additional features such as standard deviation and peak-to-peak, which were previously excluded. Including these features suggests that the data augmentation process has revealed additional meaningful relationships within the data. Techniques like GN and SS introduce variability, which helps the model to recognize and learn from patterns that might have been missed in the original dataset.
  • Impact on Multi-collinearity: While augmentation increases data diversity, it also necessitates a re-evaluation of multi-collinearity. Although features like the RMS and peak-to-peak still show a high correlation, the overall distribution of correlation values has shifted, indicating that the augmentation has effectively diversified the dataset. However, the augmentation process has also introduced new correlations that need to be carefully managed to avoid introducing unnecessary complexity.
  • Refined Feature Selection: After applying the same PCC threshold of 0.9 post-augmentation, the retained features are the maximum value, mean value, minimum value, standard deviation, peak-to-peak, waveform indicator, and peak factor. These features demonstrate lower inter-correlation than the pre-augmentation set, ensuring that they contribute uniquely to the model. Including standard deviation and peak-to-peak post-augmentation indicates that these features now provide additional discriminatory power, which is likely due to the increased diversity in the dataset. This refined selection process helps simplify the model further, making it more efficient while improving its interpretability and predictive reliability.
The data augmentation process plays a significant role in enhancing the robustness and generalizability of the model. By expanding the feature set and introducing new relationships within the data, augmentation mitigates the risk of overfitting and enables the model to handle a wider variety of input scenarios. The post-augmentation analysis, a crucial step in our process, confirms that a broader, more informative set of features can be selected, leading to improved model performance. However, the augmentation also requires careful management of new correlations to avoid introducing unnecessary complexity that could impact the model’s effectiveness. Our rigorous approach to managing these new correlations ensured that the model remained robust and reliable.

5.3. Gaussian Noise and Signal Stretching

GN and SS are augmentation techniques aimed at improving the robustness and generalization of ML models by increasing the diversity of the training dataset. The process involved applying these augmentations to the raw data, calculating the weighted average, and extracting statistical features. Figure 8 shows the normalized augmented dataset for the three class labels. These features were then selected based on a set threshold before being fed into three ML classifiers: SVM, RF, and GB. These classifiers were chosen for their different approaches to handling data and their potential to demonstrate the impact of data augmentation on model performance. Below, we discuss the results before and after augmentation, focusing on the impact on the actual label predictions across the three models. Figure 9 and Figure 10 show the confusion matrix plots for the classifier models before and after augmentation, respectively.
Before data augmentation, the classifiers demonstrated a commendable performance, as reflected in their high accuracy, precision, recall, and F1-scores. For instance, the SVM model showed strong performance, particularly in precision and recall for the majority class (wear), although it faced challenges with minority classes such as regular and crack. The SVM results before augmentation were as follows:
  • Normal: Out of 196 samples, 121 were correctly predicted as normal, 66 were misclassified as cracks, and only one was misclassified as wear.
  • Wear: Among 1737 samples, the model performed exceptionally well, correctly predicting 1736 as wear, with just one misclassified as a crack.
  • Crack: Out of 159 crack samples, 64 were correctly identified as cracks, but a significant number (91) were misclassified as normal, and four were misclassified as wear.
These results underscore the critical issue of class imbalance, where the model is biased towards the majority class (wear), resulting in a higher number of misclassifications for the minority classes (normal and crack). Addressing this imbalance is crucial for a more balanced and accurate model performance.
After applying GN and SS, the number of data samples increased significantly, introducing more variability into the training set and helping mitigate some of the class imbalance. This increase in data variability, while leading to a decrease in performance metrics such as accuracy, precision, recall, and F1-score, also led to a rise in true label predictions for the augmented data, particularly for the normal and crack classes. This indicates a promising improvement in the model’s ability to recognize these previously underrepresented classes. The SVM results after augmentation were as follows:
  • Normal: The normal class saw a substantial increase in sample size to 549, with 277 correctly predicted as normal, though 251 were misclassified as wear and 21 as cracks.
  • Wear: Out of 1762 wear samples, 1728 were correctly identified, with a slight increase in misclassifications into the normal (23) and crack (11) classes.
  • Crack: The crack class also benefited from augmentation, increasing to 469 samples. Here, 124 were misclassified as normal, 287 as wear, and 58 were correctly identified as cracks.
These results indicate that while augmentation led to a slight decrease in overall performance metrics, the increase in true label predictions for normal and crack classes is significant. The improvement in the model’s ability to detect these classes suggests that data augmentation helped address the data imbalance issue, providing a more diverse training set that allowed the classifiers to generalize better to previously under-represented classes. The augmentation process demonstrates that while traditional metrics like accuracy, precision, and recall might decrease, the true positive rate for minority classes can improve, leading to a more balanced model performance across different classes. This is particularly evident in the confusion matrix results, where the post-augmentation predictions for normal and crack samples increased significantly across all three models. This improvement can be attributed to the augmentation techniques creating more diverse and representative samples, which reduce the model’s bias towards the majority class.
The random forest (RF) results before augmentation were as follows:
  • Normal: Out of 196 normal samples, 137 were correctly classified, but 59 were misclassified as cracks.
  • Wear: The RF model performed excellently in the wear class, correctly classifying 1736 out of 1737 samples and misclassifying only one as a crack.
  • Crack: A total of 120 out of 159 crack samples were correctly identified, but 39 were incorrectly labeled as normal.
After augmentation, the results were as follows:
  • Normal: The number of normal samples increased to 549, with 454 correctly predicted. This represents a significant improvement in the true positive rate for normal samples, which is a key benefit of data augmentation. However, the model now misclassified 61 samples as wear and 34 as cracks, introducing more variability in misclassification.
  • Wear: Among the 1762 wear samples, 1655 were correctly identified, showing a slight decline from the pre-augmentation performance. A total of 58 were misclassified as normal, and 49 were classified as cracks.
  • Crack: For the crack samples, the model correctly classified 294 out of 469 samples. However, the increase in misclassifications, particularly into the wear category (130 samples), indicates that while the model’s ability to detect cracks improved, it also became more prone to confusion between similar classes.
The RF model’s performance metrics slightly declined after augmentation, with a noticeable increase in misclassification across all classes. However, the model showed a marked improvement in identifying normal samples, which were previously under-represented. The increase in true positives for the normal class suggests that the augmented data provided more diverse examples for the model to learn from, reducing bias towards the majority class (wear). The trade-off is an increase in misclassified samples, particularly for the crack class, which may indicate that the augmented data introduced new complexities that the RF model struggled to generalize from.
The Gradient Boosting (GB) results before augmentation were as follows:
  • Normal: Out of 196 normal samples, 139 were correctly classified, with 57 misclassified as cracks, which was similar to the RF.
  • Wear: The GB model performed almost flawlessly for the wear class, correctly classifying 1736 out of 1737 samples, with only one misclassification as a crack.
  • Crack: Among the crack samples, 122 out of 159 were correctly classified, with 37 misclassified as normal.
After augmentation, the results were as follows:
  • Normal: The sample size for normal increased significantly, with 448 out of 549 samples correctly identified. The misclassification rates were 58 as wear and 43 as cracks, showing an improvement in identifying normal samples but with similar misclassification patterns as the RF.
  • Wear: The GB model correctly identified 1663 out of 1762 wear samples, showing a slight decline in accuracy compared to the pre-augmentation results. This decline underscores the trade-offs involved in improving class representation through data augmentation.
  • Crack: The model correctly classified 282 out of 469 crack samples. However, misclassifications increased, with 56 labeled as normal and 131 as wear, indicating a similar challenge in distinguishing cracks from other classes.
The GB model, like the RF, decreased its overall performance metrics after augmentation but with an improved true positive rate for the normal class. The augmentation led to a better balance in class representation, particularly for normal and crack samples, which were previously underrepresented. However, the model’s ability to accurately distinguish between similar fault types, especially wear and crack, was somewhat compromised. This suggests that while the augmented data helped address the class imbalance, they also introduced additional complexity that the model had difficulty managing, leading to increased misclassifications.
Table 8 presents the performance metrics, including accuracy, precision, recall, and F1-score, before and after data augmentation through GN addition and SS. The data augmentation techniques applied in this study played a significant role in enhancing the performance of the ML models by effectively increasing the diversity of the dataset. This introduction of variability in the input features improved the models’ ability to generalize and recognize previously unseen patterns in the data.
After data augmentation, some models’ accuracy improved, particularly in detecting under-represented classes like “normal” and “crack.” For example, while showing a slight decrease in overall accuracy, the SVM model exhibited a significant improvement in its true positive rates for these minority classes, indicating a better balance in model performance across different conditions. The precision and recall metrics further highlight this improvement. Post-augmentation, the models demonstrated improved recall, especially for minority classes that were previously misclassified. This suggests that the augmented data helped the models become more sensitive to detecting anomalies and normal conditions, which were under-represented in the original dataset.
The F1-score, which harmonizes precision and recall, generally improved or remained stable after augmentation. This suggests that the models maintained a good equilibrium between correctly identifying faults and minimizing false alarms, which is a critical aspect in an industrial setting. In particular, the RF and GB models displayed a more consistent performance improvement after augmentation. Despite a slight trade-off in terms of increased misclassification in some cases, the models’ overall ability to correctly identify faults, particularly in complex and noisy environments, was enhanced.
The augmentation process effectively addressed the class imbalance, providing the models with more diverse training examples. However, introducing more complex variations in the data, such as highly distorted signals or extreme noise levels, likely increased the misclassification rates, particularly in more complex fault types (wear and crack). This highlights the importance of carefully balancing augmentation techniques to ensure that, while class representation is improved, the data remain distinguishable by the models. Applying data augmentation techniques led to a more robust and generalizable set of ML models. Although the traditional performance metrics like accuracy, precision, and recall may have shown minor declines, the true positive rate for the minority classes and overall model robustness were significantly enhanced. This resulted in more reliable fault detection and diagnosis in practical applications, making the models better suited for industrial use.
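The two traditional augmentation operations discussed in this section, GN addition and SS, can be sketched in a few lines of NumPy. This is a minimal illustration; the noise scale and stretch factor below are assumed demonstration values, not the settings used in the study:

```python
import numpy as np

def add_gaussian_noise(signal, noise_factor=0.05, rng=None):
    """Add zero-mean Gaussian noise scaled to a fraction of the signal's std."""
    rng = rng or np.random.default_rng(0)
    sigma = noise_factor * signal.std()
    return signal + rng.normal(0.0, sigma, size=signal.shape)

def stretch_signal(signal, factor=1.2):
    """Stretch (or compress) a 1-D signal in time by linear interpolation."""
    n_out = int(round(len(signal) * factor))
    old_idx = np.linspace(0.0, 1.0, num=len(signal))
    new_idx = np.linspace(0.0, 1.0, num=n_out)
    return np.interp(new_idx, old_idx, signal)

# Example: augment a synthetic vibration-like segment
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 50 * t)             # stand-in for one vibration segment
noisy = add_gaussian_noise(x)              # same length, perturbed amplitudes
stretched = stretch_signal(x, factor=1.2)  # 1000 -> 1200 samples
```

Keeping the noise scale proportional to the signal's own standard deviation, rather than fixed, is one way to avoid the extreme distortions noted above that can make augmented samples indistinguishable between fault classes.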

5.4. LSTMAEGAN Modeling Results

In this study, we employed a comprehensive data pre-processing and augmentation pipeline to enhance the robustness of our diagnostic models for centrifugal pumps. The collected pump vibration data were first normalized using MinMaxScaler and then segmented into sequences of 100 time steps each to preserve temporal patterns. These sequential data were divided into training and testing sets, with 20% reserved for testing. An LSTM-based Autoencoder was implemented to capture the expected behavior of the pump, with its performance evaluated based on the reconstruction error measured as the Mean Squared Error (MSE). A threshold, set at the 95th percentile of the reconstruction error distribution, was used to identify anomalies: sequences whose errors exceeded this threshold. A GAN was then developed to further improve anomaly detection; it consisted of a generator and a discriminator trained to produce synthetic sequences resembling the vibration data. This approach introduced additional variability and enhanced the model’s generalization capabilities.
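The pre-processing and thresholding steps described above can be sketched as follows. This is a minimal NumPy illustration: the scaling, 100-step windowing, 80/20 split, and 95th-percentile rule follow the text, while the reconstruction here is a simple stand-in (the actual LSTM Autoencoder of Table 2 would supply learned reconstructions of the same shape):

```python
import numpy as np

SEQ_LEN = 100

def minmax_scale(x):
    """Scale a 1-D signal to [0, 1], equivalent to a fitted MinMaxScaler."""
    return (x - x.min()) / (x.max() - x.min())

def make_sequences(x, seq_len=SEQ_LEN):
    """Segment a signal into non-overlapping sequences of seq_len steps."""
    n_seq = len(x) // seq_len
    return x[: n_seq * seq_len].reshape(n_seq, seq_len)

rng = np.random.default_rng(42)
signal = minmax_scale(rng.normal(size=20_000))  # stand-in vibration signal
seqs = make_sequences(signal)                   # shape (200, 100)

split = int(0.8 * len(seqs))                    # 20% reserved for testing
train, test = seqs[:split], seqs[split:]

# Stand-in "reconstruction": each sequence's own mean, broadcast to its shape.
recon = train.mean(axis=1, keepdims=True) * np.ones_like(train)
mse = ((train - recon) ** 2).mean(axis=1)       # per-sequence reconstruction MSE

threshold = np.percentile(mse, 95)              # 95th-percentile anomaly rule
anomalies = mse > threshold
print(f"{anomalies.sum()} of {len(train)} training sequences flagged")
```

By construction, the 95th-percentile rule flags roughly 5% of sequences as anomalous; in deployment the threshold would be fitted on normal-condition data only, so that exceedances on new data indicate genuine deviations.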
The performance of the anomaly detection model was evaluated using key metrics, yielding high scores: an accuracy of 1.0, a precision of 1.0, a recall of 0.98, and an F1-score of 0.99. The confusion matrix confirmed these results, showing that the model correctly identified most sequences while misclassifying only a small number. Figure 11 illustrates the further analysis of the model’s performance. The time series plot with highlighted anomalies provides a visual overview, where the anomalies, marked by red dots, cluster at specific intervals. This clustering suggests that the system experienced recurrent deviations from its normal operation, potentially indicative of underlying faults or operational irregularities. These anomalies may correspond to particular pump cycles or external factors affecting the system.
The reconstruction error distribution, presented in a histogram, resembles a Gaussian curve typical of well-functioning systems. The red dashed line represents the anomaly detection threshold, which has been strategically placed to flag only the most significant deviations. This placement is critical for balancing sensitivity and specificity, ensuring that the model effectively distinguished between normal and abnormal data. A closer examination of the reconstruction errors for the anomaly data reveals spikes above the threshold, indicating instances where the model struggled to reconstruct the original data, leading to significant deviations. The varying magnitude of these spikes implies differences in the nature or severity of the anomalies, which could be crucial for maintenance strategies. This variation allows for prioritizing responses to different types of faults based on their impact or frequency.
Together, these analyses demonstrate the model’s proficiency in detecting subtle and severe anomalies, underscoring its robustness in handling time series data. Integrating data augmentation and advanced modeling techniques, such as the LSTMAEGAN, ensures that the model generalizes well across different fault types, making it a reliable tool for real-world predictive maintenance applications. This capability enhances the reliability and efficiency of industrial systems and provides a promising outlook for the future application of this model in various operational contexts.

6. Conclusions

In conclusion, our study demonstrates the necessity of an early fault detection system in water distribution networks. While the programmable logic controller (PLC) handles operational tasks efficiently, our supplementary system offers critical early detection capabilities, significantly enhancing pump reliability and preventing unexpected failures. In this study, we conducted an extensive experiment on a centrifugal water pump using vibration sensors to monitor and analyze the system’s operational state. Two strategically positioned sensors captured detailed vibration data, forming the foundation of our analysis. Initially, we employed traditional data augmentation techniques, incorporating GN and SS methods to enhance the dataset. Statistical time domain features were extracted, and a weighted average of the augmented data was calculated to ensure a robust representation of the vibration patterns. These data were input into three classifier models: SVM, RF, and GB. We used 5-fold cross-validation to ensure the reliability of our results, averaging the key performance metrics of accuracy, precision, recall, and F1-score.
Recognizing the limitations of traditional augmentation methods, we advanced our approach by integrating the LSTM, AE, and GAN models for more sophisticated data augmentation. The application of data augmentation was instrumental in improving the robustness and generalization of our model, enabling it to perform effectively even with the limited data available. The LSTMAEGAN model was also used for anomaly detection, distinguishing between the pump’s normal and abnormal operation states. Our comparative analysis revealed that the LSTMAEGAN approach significantly improved the model’s ability to detect anomalies, underscoring the potential of deep learning techniques in enhancing the accuracy and reliability of predictive maintenance systems.
However, we acknowledge certain limitations in our approach. The LSTM, AE, and GAN models involve many hyperparameters, making it challenging to adjust and optimize them. The current implementation required extensive experimentation to identify a suitable set of hyperparameters. We plan to explore hyperparameter optimization techniques, such as grid search, random search, or Bayesian optimization, to streamline this process and enhance model performance.
Another area for improvement is the need for testing on other types of centrifugal pumps. While our model performed well on the data used in this study, its generalization to different pumps or similar machinery remains untested. The type and structure of the data will be crucial in determining the model’s performance in such scenarios. Future work will extend the model’s application to other pump types to assess its robustness and generalization capabilities across different datasets. These identified limitations present opportunities for further research and development, which could increase the model’s efficiency and applicability in a broader range of industrial contexts. This study showcases the efficacy of modern techniques and sets the stage for future research in optimizing predictive maintenance strategies. Future work could explore additional augmentation techniques to enhance model performance and robustness.

Author Contributions

Conceptualization D.-Y.K., A.B.K. and D.D.; methodology D.-Y.K., A.B.K. and D.D.; software D.-Y.K., A.B.K., B.-C.S., D.D. and J.-W.H.; validation D.-Y.K., A.B.K. and D.D.; formal analysis D.-Y.K., A.B.K. and D.D.; investigation D.-Y.K., A.B.K. and D.D.; data curation D.-Y.K., A.B.K., B.-C.S. and D.D.; writing—original draft preparation A.B.K. and D.D.; writing—review and editing A.B.K. and D.D.; visualization D.-Y.K., A.B.K. and D.D.; resources and supervision B.-C.S. and J.-W.H.; project administration B.-C.S. and J.-W.H.; and funding acquisition B.-C.S. and J.-W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (IITP-2024-2020-0-01612).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to laboratory regulations.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Framework for data augmentation using Gaussian noise and signal stretching.
Figure 2. Framework for the integration of LSTMAEGAN architecture.
Figure 3. Vibration data collection setup for the water pump system.
Figure 4. Time–frequency analysis of vibration signals from different conditions. The top row shows the STFT of vibration data: (a) Normal, (b) Crack, and (c) Wear. The bottom row displays the CWT of the same signals: (d) Normal, (e) Crack, and (f) Wear.
Figure 5. Analysis of vibration signals using the HT. For each dataset, (a) Normal, (b) Crack, and (c) Wear, the plot shows the following: (1) The original signal, (2) The amplitude envelope computed from the Hilbert Transform, and (3) The instantaneous frequency derived from the phase of the analytic signal.
Figure 6. Before augmentation: (a) Correlation plot of all statistical features, showing the degree of the linear relationship between each pair of features. This plot helps identify multi-collinearity among the features. (b) Selected features after applying a PCC threshold of <0.9, highlighting the features with lower inter-correlation, thus reducing redundancy and improving the robustness of the model.
Figure 7. After augmentation: (a) Correlation plot of all statistical features, showing the degree of linear relationship between each pair of features. This plot helps identify multi-collinearity among the features. (b) Selected features after applying a PCC threshold of <0.9, highlighting the features with lower inter-correlation, thus reducing redundancy and improving the robustness of the model.
Figure 8. Plot of the normalized weighted average of Gaussian noise and signal stretching under different fault conditions: normal, wear, and crack.
Figure 9. Confusion matrices for the classification performance of three models before augmentation: (a) SVM, (b) RF, and (c) GB. Each matrix shows the classification of vibration signal labels: “Normal”, “Wear”, and “Crack Fault”.
Figure 10. Confusion matrices for the classification performance of three models after augmentation: (a) SVM, (b) RF, and (c) GB. Each matrix shows the classification of vibration signal labels: “Normal”, “Wear”, and “Crack Fault”.
Figure 11. (a) Distribution of reconstruction errors for the dataset, with the red dashed line indicating the chosen threshold for anomaly detection. The histogram shows the frequency of Mean Squared Errors (MSEs), helping to visualize the separation between normal and abnormal data points. (b) Time series plot of the original data, with anomalies highlighted in red. The anomalies, identified based on the reconstruction error threshold, are marked against the backdrop of the normal data, illustrating the model’s ability to detect deviations over time. (c) Reconstruction error for anomaly data over different sample indices. The blue line represents the reconstruction error for each sample, and the red dashed line represents the threshold for detecting anomalies, showing which data points exceed the anomaly detection threshold.
Table 1. ML models and parameter values.
ML Model | Parameter | Value
SVM | gamma, C | scale, 90
RF and GB | n_estimators | 70
Table 2. Architecture of the LSTM Autoencoder.
Layer Type | Units | Activation | Output Shape
Input | - | - | (seq_length, n_features)
LSTM (Encoder) | 128 | ReLU | (seq_length, 128)
RepeatVector | - | - | (seq_length, 128)
LSTM (Decoder) | 128 | ReLU | (seq_length, 128)
Dense | n_features | - | (seq_length, n_features)
Table 3. Architecture of the generator network.

| Layer Type         | Units                   | Activation | Output Shape                    |
|--------------------|-------------------------|------------|---------------------------------|
| Dense              | 100                     | LeakyReLU  | (None, 100)                     |
| BatchNormalization | -                       | -          | (None, 100)                     |
| Dense              | seq_length × n_features | Tanh       | (None, seq_length × n_features) |
| Reshape            | -                       | -          | (None, seq_length, n_features)  |
Table 4. Architecture of the discriminator network.

| Layer Type | Units | Activation | Output Shape      |
|------------|-------|------------|-------------------|
| LSTM       | 128   | -          | (seq_length, 128) |
| LSTM       | 64    | -          | (64)              |
| Dense      | 1     | Sigmoid    | (1)               |
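Tables 3 and 4 map onto the following Keras sketch. The latent (noise) dimension `latent_dim` is an assumption, as the tables do not specify the generator's input size:

```python
from tensorflow.keras import layers, models

def build_generator(latent_dim, seq_length, n_features):
    """Table 3: Dense(100) + LeakyReLU, BatchNorm, tanh Dense, Reshape."""
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(100),
        layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.Dense(seq_length * n_features, activation="tanh"),
        layers.Reshape((seq_length, n_features)),
    ])

def build_discriminator(seq_length, n_features):
    """Table 4: stacked LSTMs followed by a sigmoid real/fake score."""
    return models.Sequential([
        layers.Input(shape=(seq_length, n_features)),
        layers.LSTM(128, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])
```

The tanh output activation implies the training windows are scaled to [−1, 1] before being fed to the discriminator, which is the usual GAN convention.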
Table 5. Specifications of motors and pumps for the water supply system.

| Water System | Main                                                                        | Backup                                                                      |
|--------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| Motor        | 15 kW, 60 Hz, 20 HP, 4 poles, 52/30 A, 220/380 V, 1760 rpm, 140 kg, 91.0%   | 7.5 kW, 60 Hz, 10 HP, 4 poles, 26/15.1 A, 220/380 V, 1730 rpm, 70 kg, 87.5% |
| Pump         | 15 kW, 1750 rpm                                                             | 7.5 kW, 1750 rpm                                                            |
Table 6. Experimental parameters of the sensor setting.

Accelerometer

| Item                 | Description                       |
|----------------------|-----------------------------------|
| Sensor Type          | Quartz Accelerometer              |
| Sensitivity          | (±5%) 100 mV/g (10.19 mV/(m/s²))  |
| Dynamic Range        | ±50 g pk (±491 m/s² pk)           |
| Broadband Resolution | 0.0005 g rms (0.005 m/s² rms)     |
| Frequency Range      | (±5%) 1 to 4000 Hz                |
| Weight               | 0.95 oz (27 g)                    |

NI 9234 Module

| Item                    | Description                                  |
|-------------------------|----------------------------------------------|
| Number of Channels      | 4 analog input channels                      |
| ADC Resolution          | 24 bits                                      |
| Type of ADC             | Delta-Sigma (with analog prefiltering)       |
| Sampling Frequency      | 1000 Hz                                      |
| Sampling Time           | 10–60 s                                      |
| Input Range             | ±5 V                                         |
| IEPE Excitation Current | Typical: 2.1 mA (software-selectable on/off) |
| Power Consumption       | Active mode: 900 mW maximum                  |
| Operating Temperature   | −40 °C to 70 °C                              |
| Weight                  | 6.1 oz (173 g)                               |
Table 7. Description of statistical features.

| Statistical Feature   | Description (Mathematical Expression)                                  |
|-----------------------|------------------------------------------------------------------------|
| Maximum Value         | $\max(x)$                                                              |
| Mean Value            | $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$                                  |
| Minimum Value         | $\min(x)$                                                              |
| Standard Deviation    | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}$               |
| Peak-to-Peak          | $P2P = \max(x) - \min(x)$                                              |
| Mean Amplitude        | $\bar{x}_{\mathrm{abs}} = \frac{1}{N}\sum_{i=1}^{N}\lvert x_i\rvert$   |
| RMS                   | $\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$                |
| Waveform Indicator    | $\mathrm{RMS}\,/\,\bar{x}_{\mathrm{abs}}$                              |
| Pulse Indicator       | $\max(x)\,/\,\bar{x}_{\mathrm{abs}}$                                   |
| Peak Index            | $\max(x)\,/\,\mathrm{RMS}$                                             |
| Square Root Amplitude | $X_{\mathrm{SRA}} = \left(\frac{1}{N}\sum_{i=1}^{N}\sqrt{\lvert x_i\rvert}\right)^{2}$ |
| Margin Indicator      | $\max(x)\,/\,X_{\mathrm{SRA}}$                                         |
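The features in Table 7 can be computed from a raw vibration window as follows. This is a sketch: `statistical_features` is an illustrative helper name, and the square root amplitude follows its standard definition (mean of square roots of absolute values, squared).

```python
import numpy as np

def statistical_features(x):
    """Compute the 12 statistical features of Table 7 for one vibration window."""
    x = np.asarray(x, dtype=float)
    mean_amp = np.mean(np.abs(x))                  # Mean Amplitude
    rms = np.sqrt(np.mean(x ** 2))                 # RMS
    sra = np.mean(np.sqrt(np.abs(x))) ** 2         # Square Root Amplitude
    return {
        "max": np.max(x),
        "mean": np.mean(x),
        "min": np.min(x),
        "std": np.std(x),                          # population std (1/N)
        "p2p": np.max(x) - np.min(x),
        "mean_amplitude": mean_amp,
        "rms": rms,
        "waveform": rms / mean_amp,
        "pulse": np.max(x) / mean_amp,
        "peak_index": np.max(x) / rms,
        "sra": sra,
        "margin": np.max(x) / sra,
    }

# Demo on a short synthetic window.
feats = statistical_features([1.0, -1.0, 2.0, -2.0])
```

Stacking these dictionaries row-wise over all windows yields the feature matrix consumed by the classifiers in Table 1.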
Table 8. Comparison of performance metrics across different ML models (SVM, Random Forest, Gradient Boost) before and after data augmentation, evaluated using 5-fold cross-validation.

| Model | Fold     | Before: Accuracy | Precision | Recall | F1-Score | After: Accuracy | Precision | Recall | F1-Score |
|-------|----------|------------------|-----------|--------|----------|-----------------|-----------|--------|----------|
| SVM   | 1        | 0.9211 | 0.9158 | 0.9211 | 0.9175 | 0.7242 | 0.7115 | 0.7242 | 0.6606 |
|       | 2        | 0.9273 | 0.9246 | 0.9273 | 0.9231 | 0.7255 | 0.7104 | 0.7255 | 0.6621 |
|       | 3        | 0.9293 | 0.9282 | 0.9293 | 0.9239 | 0.7140 | 0.7520 | 0.7140 | 0.6362 |
|       | 4        | 0.9201 | 0.9144 | 0.9201 | 0.9169 | 0.7194 | 0.6890 | 0.7194 | 0.6662 |
|       | 5        | 0.9159 | 0.9095 | 0.9159 | 0.9109 | 0.7247 | 0.6984 | 0.7247 | 0.6682 |
|       | Averaged | 0.9227 | 0.9185 | 0.9227 | 0.9185 | 0.7216 | 0.7122 | 0.7216 | 0.6587 |
| RF    | 1        | 0.9590 | 0.9602 | 0.9590 | 0.9588 | 0.8613 | 0.8583 | 0.8613 | 0.8586 |
|       | 2        | 0.9590 | 0.9593 | 0.9590 | 0.9592 | 0.8628 | 0.8618 | 0.8628 | 0.8580 |
|       | 3        | 0.9631 | 0.9631 | 0.9631 | 0.9631 | 0.8535 | 0.8496 | 0.8535 | 0.8486 |
|       | 4        | 0.9579 | 0.9550 | 0.9549 | 0.9548 | 0.8512 | 0.8461 | 0.8512 | 0.8455 |
|       | 5        | 0.9662 | 0.9661 | 0.9662 | 0.9661 | 0.8427 | 0.8382 | 0.8427 | 0.8380 |
|       | Averaged | 0.9604 | 0.9607 | 0.9604 | 0.9604 | 0.8543 | 0.8529 | 0.8543 | 0.8498 |
| GB    | 1        | 0.9600 | 0.9618 | 0.9600 | 0.9598 | 0.8644 | 0.8612 | 0.8644 | 0.8614 |
|       | 2        | 0.9570 | 0.9576 | 0.9569 | 0.9571 | 0.8558 | 0.8526 | 0.8558 | 0.8491 |
|       | 3        | 0.9549 | 0.9549 | 0.9549 | 0.9549 | 0.8450 | 0.8398 | 0.8450 | 0.8403 |
|       | 4        | 0.9631 | 0.9631 | 0.9631 | 0.9631 | 0.8612 | 0.8581 | 0.8612 | 0.8543 |
|       | 5        | 0.9579 | 0.9579 | 0.9579 | 0.9579 | 0.8450 | 0.8399 | 0.8450 | 0.8394 |
|       | Averaged | 0.9586 | 0.9591 | 0.9586 | 0.9586 | 0.8543 | 0.8504 | 0.8543 | 0.8489 |
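The 5-fold protocol behind Table 8 can be reproduced with scikit-learn's `cross_val_score`. The synthetic dataset below is a stand-in assumption, since the pump vibration features are not bundled with the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data with 12 columns, mirroring the 12 statistical features of Table 7.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)

rf = RandomForestClassifier(n_estimators=70, random_state=0)  # Table 1 setting
scores = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print(scores.mean())  # average accuracy across the 5 folds
```

Replacing `scoring` with `"precision_weighted"`, `"recall_weighted"`, or `"f1_weighted"` yields the remaining columns of Table 8.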

Share and Cite

MDPI and ACS Style

Kim, D.-Y.; Kareem, A.B.; Domingo, D.; Shin, B.-C.; Hur, J.-W. Advanced Data Augmentation Techniques for Enhanced Fault Diagnosis in Industrial Centrifugal Pumps. J. Sens. Actuator Netw. 2024, 13, 60. https://doi.org/10.3390/jsan13050060
