Review

Trends and Challenges in Intelligent Condition Monitoring of Electrical Machines Using Machine Learning

Department of Electrical Power Engineering and Mechatronics, Tallinn University of Technology, 19086 Tallinn, Estonia
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(6), 2761; https://doi.org/10.3390/app11062761
Submission received: 22 February 2021 / Revised: 16 March 2021 / Accepted: 17 March 2021 / Published: 19 March 2021
(This article belongs to the Special Issue Advances in Machine Fault Diagnosis)

Abstract

A review of fault diagnostic techniques based on machine learning is presented in this paper. As the world is moving towards Industry 4.0 standards, the problems of limited computational power and available memory are decreasing day by day. A significant amount of data covering a variety of faulty conditions of electrical machines working under different environments can be handled remotely using cloud computation. Moreover, the mathematical models of electrical machines can be utilized for the training of AI algorithms, which is valuable because the collection of big data is a challenging task for industry and laboratories owing to their limited resources. In this paper, some promising machine learning-based diagnostic techniques are presented from the perspective of their attributes.

1. Introduction

Nowadays, electrical machines and drive systems are used in many applications and play a significant role in industry. Since electrical machines operate in such a wide range of applications, their maintenance is of great importance. Today, there are plenty of condition monitoring methods to detect failures in electrical equipment. In general, diagnostic techniques can be divided into the following groups [1,2,3,4,5]:
  • Noise and vibration monitoring,
  • Motor-current signature analysis (MCSA),
  • Temperature measurement,
  • Electromagnetic field monitoring,
  • Chemical analysis,
  • Radio-frequency emissions monitoring,
  • Acoustic noise measurement,
  • Model and artificial intelligence-based techniques.
Generally, stresses that impact electrical machines’ operation can be classified into four main categories, also known as TEAM (Thermal, Electric, Ambient, and Mechanical) stresses. Because of these stresses, faults tend to appear in the machine.
Statistically, 36% of all motor failures are related to stator winding faults [6]. Usually, winding failures develop from a turn-to-turn short circuit [7]. Without timely maintenance, this fault can grow into phase-to-phase or phase-to-ground short circuits [8]. Because this inter-turn fault is hardly detectable in the early stages of its development, the topic is particularly challenging for the electrical machine industry [9]. From the point of view of reliability, one of the most critical points in this case is the insulation of electrical machines [10]. Insulation plays a significant role during the design process [11]. The insulation condition can be assessed by chemical, mechanical, or electrical analysis of the insulating materials [12].
Mechanical faults make up a significant proportion of overall faults in the form of eccentricity, broken rotor bars, cracked end rings, damaged bearings, etc. [13]. A broken rotor bar is a widespread and frequently occurring fault. In the machine, this fault can be caused by high operating temperature, cracks in the bar, or natural degradation [14]. Several effects can indicate a broken rotor bar: torque oscillations, high radial speed, sparking, and rotor asymmetry [15]. This fault is difficult to detect at its early stages, yet early detection is essential for avoiding negative and catastrophic consequences in production.
Another mechanical fault that occurs in electrical machines is eccentricity. Eccentricity faults refer to an inconsistent air gap between the rotor and the stator. An air gap eccentricity exceeding 10% is considered a fault [16]. There is a variety of eccentricity types: static eccentricity (SE), dynamic eccentricity (DE), and elliptic eccentricity [17]. Additionally, there are cases when mixed eccentricity occurs in electrical machines. Eccentricity is mainly caused by improper installation, loose or missing bolts, shaft misalignment, or rotor imbalance [18]. Eccentricity faults can cause additional noise and vibration [19]. When the eccentricity fault becomes severe, it causes friction between the stator and the rotor and, as a result, affects the regular operation of the motor.
At the same time, another widespread mechanical failure is the bearing fault. Bearings are produced under stringent quality requirements. However, a bearing's real lifespan can be significantly decreased by different ambient and manufacturing factors, such as material fatigue, improper placement, contamination, improper lubrication, and bearing currents [20]. Constant monitoring of the bearing parameters, such as temperature measurement, timely lubricant analysis, and noise and vibration measurement, could significantly decrease the risk of bearing damage [21].
The distribution of all the faults mentioned above depends mainly on the motor’s parameters, such as machine type, size, rated voltage, etc. To increase the reliability of the machine, many parameters must be monitored [22]. The main faults and their signatures are shown in Table 1.
As shown in Figure 1, three main types of machine maintenance are applied in practice: corrective, preventive, and predictive maintenance [29].
In the case of corrective maintenance, also known as reactive maintenance, all needed repairs are done after the failure has already occurred. However, this solution is appropriate only for small and insignificant workstations, where an unexpected failure does not lead to economic or catastrophic consequences. Alternatively, many manufacturers apply preventive maintenance to the machine to avoid fatal outcomes. In this case, the electrical equipment is regularly checked through scheduled and specified inspections.
Although this solution can prolong machine lifespan, this schedule-based condition monitoring approach provides very little information on the remaining useful lifetime (RUL) of the devices and does not allow for their prognostics and full exploitation [30]. Moreover, scheduled inspections in production usually mean a partial or total shutdown of the manufacturing process, leading to inefficient resource usage and extra operating costs.
To decrease shutdown costs and minimize downtime, manufacturers switch their production over to predictive maintenance [31,32]. Condition monitoring is an essential component of predictive maintenance that allows a future failure to be forecast based on the electrical equipment's working conditions. A schematic illustration of condition monitoring is shown in Figure 2. As can be seen, condition monitoring consists of several stages. The accuracy of measuring systems largely depends on the sensors used for data acquisition. Signal processing is one of the essential stages in condition monitoring.
For feature extraction, and to teach the system to predict and detect faults in the future, a more powerful tool is needed. Moreover, as the amount of data is increasing worldwide and computer science is rapidly developing, it is reasonable to upgrade production with advanced approaches using artificial intelligence (AI). Thermal imaging, for instance, is widely used in industry to monitor faults at the early stages of their development [33]. In this context, different variants of machine learning (ML) algorithms can be used for fault detection. These algorithms, as well as their comparison, are described in the following chapters.

2. Diagnostic Possibilities with Machine Learning

Much of the research on intelligent health monitoring relies on machine learning (ML) [34,35,36]. ML is a branch of computer science and artificial intelligence that is not oriented towards solving a problem directly but rather towards learning in the process of applying solutions to many similar problems [37]. Typical ML tasks are classification and regression, learning associations, clustering, and other tasks such as reinforcement learning, learning to rank, and structure prediction [38]. ML is closely related to data mining, which can discover new data patterns in large datasets. The main difference is that ML concentrates on adaptive behavior and operative usage, while data mining focuses on processing extensive amounts of data and discovering previously unknown patterns. Based on a dataset, the so-called training data, ML algorithms can build a model that makes predictions and decisions. There are many types of ML algorithms, which can be supervised, unsupervised, semi-supervised, or reinforcement-based [39]. Figure 3 shows the most common methods used in machine learning.
The basic paradigms of ML are supervised and unsupervised algorithms. Supervised ML, also known as "learning with a teacher," is a type of learning from examples, where both the training set (situations) and the test set (required solutions) are given [40,41]. Such training sets are challenging to obtain from industry and laboratories: preventive maintenance schedules limit the number of faulty machines operating in industry, while only a limited number of destructive tests can be performed in laboratories for training purposes. Moreover, collecting data with more than one fault (composite faults) in the same machine is not straightforward in either scenario. Thanks to the increasing computational power of computers and cloud computation, the mathematical models of electrical machines can be used to train AI algorithms. A comparison of different types of mathematical models of induction motors and their attributes can be found in [42,43].
At the same time, unsupervised ML, also known as "learning without a teacher," is a type of learning where patterns are to be discovered from unknown data [44,45]. In this case, there is only training data, and the aim is to group objects into clusters and/or reduce the large amount of given data. Sometimes, industrial systems use semi-supervised algorithms in order to get a more precise outcome; in this case, part of the data is labeled, while the rest remains unlabeled.
Differently from basic approaches, reinforcement ML focuses on understanding patterns in repetitive situations and their generalization [46]. The purpose is to minimize errors and increase accuracy; the machine learns to analyze the information before each step. Moreover, the machine aims to get the maximum reward (benefit) from the learning, which is set in advance, such as minimum resource spending, reaching the desired value, minimum analyzing time, etc.
One group of widely used intelligent condition monitoring methods, which can be successfully applied to the monitoring of many machine parameters, is artificial neural networks (ANNs). ANNs can be supervised, unsupervised, or reinforcement-based. Many studies mistakenly consider NNs a field separate from machine learning. However, NNs and deep learning belong to computer science, artificial intelligence, and machine learning. A diagram of NN-related fields is shown in Figure 4.
Machine learning is a powerful tool with a broad set of different algorithms that can be applied for solving many problems. These algorithms, as well as other applications, are described in more detail in the following chapters.

3. Supervised Machine Learning

Supervised ML includes a variety of function algorithms that can map inputs to desired outputs. Usually, supervised learning is used in the classification and regression problems: classifiers map inputs into pre-defined classes, while regression algorithms map inputs into a real-value domain. In other words, classification allows predicting the input category, while regression allows predicting a numerical value based on collected data. The general algorithm of supervised learning is shown in Figure 5.
Supervised learning aims to learn features from labeled examples so that unlabeled examples can later be analyzed with possibly high accuracy. Basically, the program creates a rule according to which the data are to be processed and classified.
Among supervised algorithms, the most widely used are the following algorithms: linear and logistic regression [47,48], Naive Bayes [49,50], nearest neighbor [51,52], and random forest [53,54,55,56]. In condition monitoring and diagnostics of electrical machines, the most suitable supervised algorithms are decision trees [57,58,59] and support vector machines [60,61,62].

3.1. Decision Trees

A decision tree (DT) is a decision support tool extensively used in data analysis and statistics. Special attention has been paid to DTs in data mining. The goal of DTs is to create a model that predicts the target's value based on multiple inputs. The structure of a DT can be represented by branches and leaves. The branches contain the attributes on which the function depends, while the leaves contain the values of the function. The other nodes contain the attributes by which the decision cases differ. An example of the DT algorithm is shown in Figure 6.
Among other decision models, DTs are the simplest and need only a small amount of data to succeed. Moreover, this algorithm can be hybridized with another decision model to achieve a more accurate outcome. However, these models are unstable: a small change in the input data can lead to a significant change in the decision tree structure, leading to inaccurate results. Additionally, decision trees can perform poorly in regression tasks.
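As a minimal illustration, and assuming scikit-learn is available, a decision tree classifier could be trained on extracted fault features as sketched below. The two features and the synthetic labels are hypothetical placeholders for real measured signatures (e.g., a vibration amplitude and a current-harmonic index); this is a sketch, not the procedure of any of the reviewed works.

```python
# Minimal sketch: decision-tree fault classifier on synthetic data.
# Feature meanings and labels are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
# Two illustrative features, e.g., vibration amplitude and a current-harmonic index.
X = rng.normal(size=(n, 2))
# Hypothetical labels: 0 = healthy, 1 = fault type A, 2 = fault type B.
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int) + (X[:, 1] > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Limiting tree depth mitigates the instability mentioned above.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Constraining the depth is one simple way to trade a small loss in training accuracy for a more stable, more generalizable tree.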

3.2. Support Vector Machines

Another widely used set of ML algorithms for condition monitoring is the support vector machine (SVM). This is a family of supervised models used for regression, novelty detection, and feature reduction, although SVM is preferable for classification objectives [63]. In linear classification, each datapoint is represented as a vector in n-dimensional space (n is the number of features). Each of these points belongs to only one of two classes. Figure 7 shows an example of data classification.
In the figure, two data classes are represented: Class 1 (triangles) and Class 2 (squares). The aim is to separate these points by a hyperplane of dimension (n − 1) while ensuring a maximum gap between the classes. There are many possible hyperplanes. Maximizing the gap between the classes contributes to a more confident classification and helps to find the optimal hyperplane. As shown in Figure 8, to detect the optimal hyperplane, it is essential to find the support vectors, which are the datapoints located closest to the hyperplane.
In addition to linear classification, SVMs can deal with non-linear classification using the kernel trick, also known as the kernel machine. As shown in Figure 9, the processing algorithm is similar to the linear one, but a kernel function maps the datapoints into a space where they can be separated linearly.
SVM is a good solution when there is no initial information about the data. The method is highly preferred because of the little computational power needed to produce results with significant accuracy. Although the kernel machine is a great advantage of SVM, managing it is a complicated task. Moreover, processing large amounts of data can take a long time, so SVM is not preferable for large datasets.
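The kernel trick described above can be illustrated with the following minimal sketch, assuming scikit-learn is available. The synthetic, non-linearly separable dataset and all parameter values are illustrative only and not taken from the reviewed studies.

```python
# Minimal sketch: linear vs. RBF-kernel SVM on a non-linearly separable dataset.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric classes: not separable by a straight line in the original space.
X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)   # C controls the margin/violation trade-off
    clf.fit(X_train, y_train)
    print(kernel, "test accuracy:", round(clf.score(X_test, y_test), 3))
```

On such data the linear SVM performs close to chance, while the RBF kernel separates the classes almost perfectly, which is exactly the benefit of mapping the points into a space where a linear hyperplane suffices.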
Supervised ML approaches are widely applied to condition monitoring of electrical machines, and many relevant studies can be found in the literature. The authors in [64] proposed a new signal processing method for fault diagnosis of low-speed machinery based on DT approaches. In [65], the authors applied statistical process control and supervised ML techniques to diagnose wind turbine faults and predict maintenance needs. The researchers in [66] presented a semi-supervised ML method that uses co-training of the DT algorithm to handle unlabeled data and applied it to fault classification in electric power systems. In [67], the authors proposed a RUL prediction method for lithium-ion batteries using a particle filter and support vector regression.

4. Unsupervised Machine Learning

Unsupervised ML includes algorithms that can learn spontaneously to perform a proposed task without intervention from a teacher. Unsupervised learning is often contrasted with supervised learning, where the outcome is known and the task is to find the relationship between the system's inputs and responses. In unsupervised learning, as shown in Figure 10, the program tries to find similarities between objects and divides them into groups if there are similar patterns. These groups are called clusters. Among unsupervised algorithms, the most widely used are cluster analysis, fuzzy c-means [68,69], and k-means [70]. In the diagnosis of electrical machines, principal component analysis is the most frequently used algorithm [71,72,73].
Frequently, the dataset is so large that it is difficult to interpret and distinguish the necessary information. Principal component analysis (PCA) is one of the most widespread algorithms for reducing the data's dimensions while losing the least amount of information. PCA can be interpreted geometrically, as shown in Figure 11.
The algorithm of PCA is as follows:
(a) Points with specific coordinates are designated on the plane.
(b) The direction of the maximum data variation is selected, and a new axis (the first principal component) is drawn through the experimental points.
(c) The experimental points are projected onto this axis.
(d) It is assumed that all the points were initially located on this axis, and all deviations from it can be considered as noise.
If the noise is considerable, another axis can be added perpendicular to the first one to describe the data's remaining variation. As a result, a new representation is obtained with a smaller number of variables, in which all original variables are taken into account and none of them is deleted. An insignificant part of the data is separated and turns into noise. The principal components reveal the initially hidden variables that govern the structure of the data.
PCA is the most common approach to dimensionality reduction. It is a useful tool for the visualization of large datasets. One of PCA's main advantages is that the components are independent of each other, with no correlation between them, which can significantly reduce the training time. At the same time, these derived variables can become less interpretable. Moreover, applying PCA still involves some information loss, so the analysis is somewhat less precise than with the original values.
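A minimal sketch of this dimensionality reduction, assuming scikit-learn is available, is given below. The synthetic 20-dimensional feature matrix is a hypothetical stand-in for features extracted from current or vibration signals; the variance threshold is an illustrative choice.

```python
# Minimal sketch: reducing a high-dimensional feature set with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))                        # 3 hidden "drivers" of the data
X = latent @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(500, 20))

X_std = StandardScaler().fit_transform(X)                 # PCA is sensitive to feature scale
pca = PCA(n_components=0.95)                              # keep 95% of the variance
X_reduced = pca.fit_transform(X_std)

print("Original dimensions:", X.shape[1])
print("Retained components:", pca.n_components_)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```

Here the 20 observed features collapse to a handful of components, mirroring the geometric picture above: the retained axes carry the dominant variation, and the discarded directions are treated as noise.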
Many studies are available in the literature where unsupervised algorithms are used for the analysis of high-dimensional datasets. In [74], the authors applied a new method to the fault diagnosis of rolling bearings in the field of high-dimensional unbalanced fault diagnosis data based on PCA for better classification performance. In [75], researchers used a PCA-based method to monitor non-linear processes. The researchers in [76] proposed a PCA-based hybrid method for monitoring linear and non-linear industrial processes.

5. Reinforcement Learning

Reinforcement learning (RL) is one of the ML methods, where the system (agent) learns by interacting with some environment. Different from supervised algorithms, there is no need for labeled data pairs. RL is mainly focused on finding a balance between an unknown environment and existing knowledge. The general algorithm of reinforcement learning is shown in Figure 12.
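To make the agent-environment loop concrete, the sketch below implements tabular Q-learning, one standard RL algorithm chosen here purely for illustration (it is not tied to any specific method in the reviewed literature), on a hypothetical five-state chain in which the agent is rewarded only for reaching the final state.

```python
# Minimal sketch: the agent-environment loop of reinforcement learning,
# illustrated with tabular Q-learning on a tiny hypothetical 5-state chain.
import numpy as np

n_states, n_actions = 5, 2              # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))     # action-value table
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection: explore occasionally, otherwise exploit
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0      # reward only at the goal state
        # Q-learning update rule
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("Greedy policy per state:", Q.argmax(axis=1))      # learns to keep moving right
```

The reward signal plays the role of the "maximum benefit" described above: the agent is never told the correct action explicitly, yet the learned policy converges to the behavior that maximizes the accumulated reward.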
One of the algorithms that can be used in data mining and cluster analysis is swarm intelligence [77,78,79]. Swarm intelligence (SI) describes the collective behavior of a decentralized and self-organized system, and it is considered an optimization method. An SI system consists of agents (boids) that interact with each other and with the environment. SI should be a multi-agent system with self-organized behavior that, as a whole, exhibits reasonable behavior. This algorithm can adapt to changes and converge quickly to some optima. At the same time, the solutions are dependent sequences of random decisions and can become trapped in local minima in complex tasks.
At the same time, the reinforcement-type algorithm more frequently used in condition monitoring is the genetic algorithm [80,81,82]. A genetic algorithm (GA) is a tool for solving optimization and modeling problems by randomized search that mimics natural selection mechanisms. A distinctive feature of the GA is the emphasis on the "crossover" operator, which plays a role analogous to crossing in wildlife.
In the case of GA, the problem is formalized so that its solution can be encoded in the form of a vector of genes (genotype), where each gene has some value. In classical implementations of GA, it is assumed that the genotype has a fixed length. However, there are GA variations that are free from this limitation. The general diagram of GA is shown in Figure 13.
Basically, optimization using a GA proceeds as follows (a minimal code sketch is given after the list):
(a) There is a task, and many genotypes of the initial population are created.
(b) This initial set is assessed using the "fitness function," which determines how well each genotype of the initial population solves the task.
(c) The best candidates in the population are then selected for the next generations.
(d) The best candidates produce new solutions. This process repeats until the task is fulfilled and a resultant population is created.
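As a concrete illustration of steps (a)-(d), the following minimal sketch implements a plain genetic algorithm from scratch for a toy task (maximizing the number of ones in a bit string). The population size, mutation rate, and selection scheme are illustrative choices and are not taken from the reviewed works.

```python
# Minimal sketch of the GA steps (a)-(d) above on a toy "maximize ones" problem.
import numpy as np

rng = np.random.default_rng(0)
pop_size, genome_len, generations = 30, 20, 60

def fitness(genome):
    # (b) fitness function: how well a genotype solves the task
    return genome.sum()

# (a) create the initial population of genotypes
population = rng.integers(0, 2, size=(pop_size, genome_len))

for _ in range(generations):
    scores = np.array([fitness(g) for g in population])
    # (c) selection: keep the better half of the population as parents
    parents = population[np.argsort(scores)][pop_size // 2:]
    children = []
    while len(children) < pop_size:
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, genome_len)             # single-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        mutate = rng.random(genome_len) < 0.02        # small mutation probability
        child = np.where(mutate, 1 - child, child)
        children.append(child)
    # (d) the offspring form the next (resultant) generation
    population = np.array(children)

best = max(population, key=fitness)
print("Best fitness:", fitness(best), "of", genome_len)
```

The mutation step is what keeps the population diverse; without it, the degeneracy discussed below (many identical chromosomes) appears much sooner.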
The main benefit of the GA is that specific knowledge about the domain is not needed; the GA generates solutions through its genetic operators. Moreover, the result can contain more than one appropriate solution. However, a GA sometimes suffers from degeneracy, which occurs when multiple chromosomes represent the same solution and the same chromosome shapes appear repeatedly. In this case, the optimal solution is not guaranteed.
Nonetheless, the GA is an efficient tool for industrial process optimization. In [83], researchers proposed a new GA-based method that can be used for both fault-type classification and RUL prediction. The authors in [84] proposed a method based on genetic mutation particle swarm optimization for gear fault diagnosis. In [85], researchers proposed a GA-based method to optimize and improve the accuracy of photovoltaic array fault diagnosis.

6. Neural Networks

ANNs have proved to be promising tools for condition monitoring and prediction of RUL due to their adaptability, nonlinearity, and arbitrary function approximation ability [86,87]. The main advantage of NNs is that they can outperform nearly every other ML algorithm. This method is intended to analyze and model damage propagation processes and predict further failures based on collected data. The main tasks that neural networks deal with are [88,89]:
  • Classification,
  • Prediction,
  • Recognition.
Artificial neural networks originate from attempts to reproduce the ability of biological nervous systems to learn and correct errors by modeling the low-level structure of the brain. To create such artificial intelligence, a system with a similar architecture needs to be built. The architecture of an ANN is shown in Figure 14.
ANNs are machine learning algorithms that imitate the human brain as a set of interconnected units called neurons. Neurons, both biological and artificial, consist of a cell body, dendrites (inputs), synapses (connections), and an axon (output). As seen from the figure, the simplest artificial neural network model has three layers of neurons. The first (input) layer is connected to a middle (hidden) layer, and the hidden layer is connected to the final (output) layer. To solve a given problem with a neural network, it is necessary to collect training data. A training dataset is a collection of observations for which the values of the input and output variables are defined and specified. The neurons transfer a signal from the input layer to the output. The input layer neurons receive data from the outside environment (measuring system, sensors) and, after processing them, transmit signals through the synapses to the neurons of the hidden layer. The neurons of the hidden layer process the received signals and transmit them to the neurons of the output layer. Basically, a neuron is a computing unit that receives information, performs simple calculations on it, and transfers it further.
Neural networks are not programmed; they learn. Learning is one of the main advantages of neural networks over traditional algorithms. Technically, training consists of finding the coefficients of the connections between neurons. In the process of training, the neural network can identify complex dependencies between input and output data and perform generalizations. This means that, in the case of successful training, the network will be able to return correct results based on data absent from the training sample, as well as on incomplete or partially distorted data.
If a neural network consists of more than three layers, which is an increasing tendency nowadays, the algorithm can be considered deep learning or a deep neural network (DNN). Generally, deep learning is one of the ML techniques in ANNs that can analyze big machinery data with more precise results.
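A minimal sketch of such a feed-forward network, assuming scikit-learn is available, is given below. The synthetic features stand in for extracted fault signatures, and the two hidden layers and their sizes are arbitrary illustrative choices rather than a recommended architecture.

```python
# Minimal sketch: a small multi-layer perceptron trained on synthetic fault features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for extracted features (e.g., spectral amplitudes) and 3 classes.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                                  # NNs train better on scaled inputs
    MLPClassifier(hidden_layer_sizes=(32, 16),         # two hidden layers
                  activation="relu", max_iter=500, random_state=0),
)
model.fit(X_train, y_train)                            # "finding the connection coefficients"
print("Test accuracy:", round(model.score(X_test, y_test), 3))
```

Training here is exactly the process described above: the optimizer adjusts the connection weights until the mapping from input features to fault classes generalizes to the held-out test data.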
NNs have been considered a universal tool for solving many problems. However, each method has its own limitations, and NNs are no exception. Usually, NNs are used as a hybrid with some other condition monitoring technique. All the limitations of ANNs and the other mentioned ML techniques are given in the following section.
Different types of NNs are used for monitoring different parameters, and a variety of applications can be found in the literature. The authors in [90] proposed a novel intelligent fault diagnosis method based on a multiscale convolutional NN to identify different failures of a wind turbine gearbox. In [91], the authors proposed an intelligent bearing fault diagnosis method combining compressed data acquisition and deep learning, which provides a new strategy to handle massive data more effectively. The authors in [92] proposed a deep transfer learning (DTL)-based method to predict the remaining useful life in manufacturing. In [93], the authors suggested a novel deep convolutional NN cascading architecture for localization and defect detection in power line insulators. Many algorithms have been developed over the years for the automated identification of partial discharges. In [94], an application of a neural network to partial discharge images is presented, based on the convolutional neural network architecture, to recognize the aging of high-voltage electrical insulation.

7. Trends in Condition Monitoring and Discussion

The maintenance of electrical equipment is a very challenging topic at present. Proper, reliable, and efficient fault diagnostic techniques are becoming more and more essential as the world moves towards Industry 4.0 standards [9]. A major issue related to prediction and condition monitoring is the reliability of the methods used [95,96]. ML algorithms have provided a potent tool for classification. However, ML methods are not a novelty, and researchers still face various limitations. Nowadays, the intelligent condition monitoring methods mentioned in the previous chapters are mainly used together as hybrids to get more precise and robust fault diagnostic results in industrial systems [97].
The main problem of machine learning and neural networks is the training datasets required for system training. To obtain precise results and make accurate predictions, both the amount and the quality of the data play a significant role. Often, the dataset contains irrelevant features, and a function must be fitted to build a model; how this function is chosen determines how flexible the model is. The main problems with the data are overfitting and underfitting.
Big data is a trending challenge nowadays. At the same time, high dimensionality and a limited number of training samples lead to overfitting [98]. Frequently, this problem occurs with neural networks [99]. Overfitting means that the model performs very well on the training dataset but poorly on the test dataset. Simultaneously, the system cannot perform well if the training set is too small or if the data are too noisy and corrupted with irrelevant features. There can also be an underfitting phenomenon, where the model is too simple to capture the structure of the data and performs poorly even on the training set. All these cases are shown in Figure 15.
As shown in Figure 15, both the underfitted and the overfitted models describe the same dataset. The overly generalized (underfitted) model does not give the most precise results, while the overfitted model follows the training data too closely and is not flexible enough for upcoming new datasets. The challenge is to find a balance between underfitting and overfitting through the use of different models.
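The following minimal sketch illustrates how this balance can be checked in practice by comparing training and test accuracy for models of different complexity; the dataset and the choice of decision trees are purely illustrative and assume scikit-learn is available.

```python
# Minimal sketch: diagnosing underfitting vs. overfitting by comparing
# training and test accuracy of decision trees of increasing depth.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=600, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 4, None):   # None lets the tree grow until the leaves are pure
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"depth={depth}: train={clf.score(X_train, y_train):.2f}, "
          f"test={clf.score(X_test, y_test):.2f}")

# depth=1 underfits (both scores are low); depth=None overfits
# (train accuracy near 1.0, clearly higher than the test accuracy).
```

A large gap between training and test accuracy signals overfitting, while low accuracy on both sets signals underfitting; the balanced model lies between the two extremes.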
ML is a widespread trend in load forecasting. Many operating decisions, such as reliability analysis or maintenance planning, are based on load forecasts [100]. In this context, artificial neural networks have received significant attention for their performance. The main problem is the overfitting and sub-optimization of the ANN, which can lead to uncertain forecast results [101]. Working in dynamically changing environments can be a complicated task for NNs. Even if the network has been successfully trained, there is no guarantee that it will work in the future. The market is continually transforming, so today's model can be obsolete tomorrow. In this case, various network architectures must be tested to choose the best one that can follow changes in the environment. Moreover, in the case of NNs, a phenomenon known as catastrophic forgetting can occur. This means that NNs cannot be sequentially trained on several tasks: each new training set causes rewriting of all neuron weights and, as a result, the previously trained data will be forgotten.
Another widespread limitation of NNs is the so-called "black box" phenomenon. As already mentioned, deep learning successfully learns hidden layers of the NN architecture mapping inputs to outputs. Because the network only approximates this function, it is difficult to gain insight into its internal structure and, as a result, to trace the cause of a mistake. For this reason in particular, it can be reasonable to choose some other technique or to use NNs in combination with another algorithm.

8. Conclusions

A review of state-of-the-art machine learning-based fault diagnostic techniques in the field of electrical machines is presented in this paper. Artificial intelligence-based condition monitoring techniques are becoming more popular as computing power increases day by day. Unlike conventional on-board processors responsible for data collection and analysis, the utilization of powerful remote resources through cloud computation gives the freedom of practically unlimited memory and processing power to handle the big data vital for intelligent techniques. Moreover, by effective training of AI algorithms using mathematical models with various faulty conditions, the diagnostic algorithms can be made more reliable.
The collection of such big data is difficult both in industry and in the laboratory environment: in industry because of the limited number of faulty machines in service, and in the laboratory because only a limited number of machines can be broken due to economic constraints. Thanks to the trend of mounting sensors on remotely located machines and collecting their data over the cloud, the processing power-related constraints are resolved. Machine learning makes up a considerably significant portion of AI techniques. For future work, the studied techniques will be implemented in practice on real industrial objects. Those techniques can use statistical or conventional signal processing techniques to detect fault-related patterns and estimate the remaining life of electrical machines. Moreover, they give the flexibility to train algorithms under a variety of working conditions. Those conditions may include grid-fed operation, scalar control, low load, and changing load, for induction machines in particular and for other machines in general.

Author Contributions

Conceptualization, K.K. and T.V.; methodology, K.K. and B.A.; validation, K.K., B.A., and A.R.; methodology, A.K.; writing—original draft preparation, K.K.; writing—review and editing, T.V. and G.D.; visualization, A.R.; supervision, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the Baltic Research Program under Grant “Industrial Internet methods for electrical energy conversion systems monitoring and diagnostics”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vaimann, T.; Sobra, J.; Belahcen, A.; Rassõlkin, A.; Rolak, M.; Kallaste, A. Induction machine fault detection using smartphone recorded audible noise. IET Sci. Meas. Technol. 2018, 12, 554–560. [Google Scholar] [CrossRef] [Green Version]
  2. Vaimann, T.; Belahcen, A.; Kallaste, A. Necessity for implementation of inverse problem theory in electric machine fault diagnosis. In Proceedings of the 2015 IEEE 10th International Symposium on Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), Guarda, Portugal, 1–4 September 2015. [Google Scholar]
  3. Nandi, S.; Toliyat, H.A.; Li, X. Condition monitoring and fault diagnosis of electrical motors—A review. IEEE Trans. Energy Convers. 2005, 20, 719–729. [Google Scholar] [CrossRef]
  4. Wrobel, R.; Mecrow, B.C. A Comprehensive review of additive manufacturing in construction of electrical machines. IEEE Trans. Energy Convers. 2020, 35, 1054–1064. [Google Scholar] [CrossRef] [Green Version]
  5. Iakovleva, M.E.; Belova, M.; Soares, A. Specific features of mapping large discontinuous faults by the method of electro-magnetic emission. Resources 2020, 9, 135. [Google Scholar] [CrossRef]
  6. Sarkhanloo, M.S.; Ghalledar, D.; Azizian, M.R. Diagnosis of stator winding turn to turn fault of induction motor using space vector pattern based on neural network. In Proceedings of the 3rd Conference Thermal Power Plants, Tehran, Iran, 18–19 October 2011; pp. 1–6. [Google Scholar]
  7. Muljadi, E.; Samaan, N.; Gevorgian, V.; Li, J.; Pasupulati, S. Circuit current contribution for different wind turbine generator types. In Proceedings of the IEEE PES General Meeting PES 2010, Detroit, MI, USA, 25–29 July 2010. [Google Scholar]
  8. Kudelina, K.; Asad, B.; Vaimann, T.; Rassõlkin, A.; Kallaste, A. Production quality related propagating faults of induction machines. In Proceedings of the 2020 XI International Conference on Electrical Power Drive Systems (ICEPDS), Saint-Petersburg, Russia, 5–6 October 2020. [Google Scholar]
  9. Asad, B.; Vaimann, T.; Rassõlkin, A.; Kallaste, A.; Belahcen, A. Review of Electrical Machine Diagnostic Methods Applicability in the Perspective of Industry 4.0. Electr. Control Commun. 2018, 14, 108–116. [Google Scholar] [CrossRef] [Green Version]
  10. Stone, G.C.; Boulter, E.A.; Culbert, I.; Dhirani, H. Electrical Insulation for Rotating Machines: Design, Evaluation, Aging, Testing, and Repair; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  11. Orosz, T. Evolution and modern approaches of the power transformer cost optimization methods. Period. Polytech. Electr. Eng. Comput. Sci. 2019, 63, 37–50. [Google Scholar] [CrossRef] [Green Version]
  12. Tamus, Z.Á. Complex diagnostics of insulating materials in industrial electrostatics. J. Electrost. 2009, 67, 154–157. [Google Scholar] [CrossRef]
  13. Asad, B.; Vaimann, T.; Belahcen, A.; Kallaste, A.; Rassõlkin, A.; Iqbal, M.N. Broken rotor bar fault detection of the grid and inverter-fed induction motor by effective attenuation of the fundamental component. IET Electr. Power Appl. 2019, 13, 2005–2014. [Google Scholar] [CrossRef]
  14. Asad, B.; Vaimann, T.; Rassõlkin, A.; Kallaste, A.; Belahcen, A. A survey of broken rotor bar fault diagnostic methods of induction motor. Electr. Control Commun. Eng. 2019, 14, 117–124. [Google Scholar] [CrossRef] [Green Version]
  15. Asad, B.; Vaimann, T.; Belahcen, A.; Kallaste, A.; Rassolkin, A. Rotor fault diagnostic of inverter fed induction motor using frequency Analysis. In Proceedings of the 2019 IEEE 12th International Symposium on Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), Toulouse, France, 27–30 August 2019; pp. 127–133. [Google Scholar]
  16. Rosero, J.A.; Cusido, J.; Garcia, A.; Ortega, J.; Romeral, L. Broken bearings and eccentricity fault detection for a permanent magnet synchronous motor. In Proceedings of the IECON 2006—32nd Annual Conference on IEEE Industrial Electronics, Paris, France, 7–10 November 2006; pp. 964–969. [Google Scholar]
  17. Kallaste, A.; Belahcen, A.; Kilk, A.; Vaimann, T. Analysis of the eccentricity in a low-speed slotless permanent-magnet wind generator. In Proceedings of the 2012 Electric Power Quality and Supply Reliability, Tartu, Estonia, 11–13 June 2012; pp. 1–6. [Google Scholar]
  18. Chen, Y.; Liang, S.; Li, W.; Liang, H.; Wang, C. Faults and diagnosis methods of permanent magnet synchronous motors: A review. Appl. Sci. 2019, 9, 2116. [Google Scholar] [CrossRef] [Green Version]
  19. Kallaste, A. Low Speed Permanent Magnet Slotless Generator Development and Implementation for Windmills. Ph.D. Thesis, Tallinn University of Technology, Tallinn, Estonia, 2013. [Google Scholar]
  20. Kudelina, K.; Asad, B.; Vaimann, T.; Rassõlkin, A.; Kallaste, A.; Lukichev, D.V. Main faults and diagnostic possibilities of BLDC Motors. In Proceedings of the 2020 27th International Workshop on Electric Drives: MPEI Department of Electric Drives 90th Anniversary (IWED), Moscow, Russia, 27–30 January 2020. [Google Scholar]
  21. Kudelina, K.; Asad, B.; Vaimann, T.; Rassolkin, A.; Kallaste, A. Effect of Bearing Faults on Vibration Spectrum of BLDC Motor. In Proceedings of the 2020 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), Vilnius, Lithuania, 30 April 2020. [Google Scholar]
  22. Bin Lee, S.; Stone, G.C.; Antonino-Daviu, J.; Gyftakis, K.N.; Strangas, E.G.; Maussion, P.; Platero, C.A. Condition monitoring of industrial electric machines: State of the art and future challenges. IEEE Ind. Electron. Mag. 2020, 14, 158–167. [Google Scholar] [CrossRef]
  23. Dos Santos, T.; Ferreira, F.J.; Pires, J.M.; Damásio, C. Stator Winding Short-Circuit Fault Diagnosis in Induction Motors using Random Forest. In Proceedings of the 2017 IEEE International Electric Machines and Drives Conference (IEMDC), Miami, FL, USA, 21–24 May 2017. [Google Scholar]
  24. Ghosh, R.; Seri, P.; Hebner, R.E.; Montanari, G.C. Noise rejection and detection of partial discharges under repetitive impulse supply voltage. IEEE Trans. Ind. Electron. 2020, 67, 4144–4151. [Google Scholar] [CrossRef]
  25. Wang, Z.; Yang, J.; Li, H.; Zhen, D.; Xu, Y.; Gu, F. Fault identification of broken rotor bars in induction motors using an improved cyclic modulation spectral analysis. Energies 2019, 12, 3279. [Google Scholar] [CrossRef] [Green Version]
  26. Xu, X.; Han, Q.; Chu, F. Review of electromagnetic vibration in electrical machines. Energies 2018, 11, 1779. [Google Scholar] [CrossRef] [Green Version]
  27. Sathyan, S.; Aydin, U.; Lehikoinen, A.; Belahcen, A.; Vaimann, T.; Kataja, J. Influence of magnetic forces and magneto-striction on the vibration behavior of an induction motor. Int. J. Appl. Electromagn. Mech. 2019, 59, 825–834. [Google Scholar] [CrossRef] [Green Version]
  28. Kudelina, K.; Asad, B.; Vaimann, T.; Belahcen, A.; Rassõlkin, A.; Kallaste, A.; Lukichev, D.V. Bearing Fault Analysis of BLDC Motor for Electric Scooter Application. Designs 2020, 4, 42. [Google Scholar] [CrossRef]
  29. Susto, G.A.; Schirru, A.; Pampuri, S.; McLoone, S.; Beghi, A. Machine Learning for Predictive Maintenance: A Multiple Classifier Approach. IEEE Trans. Ind. Inform. 2015, 11, 812–820. [Google Scholar] [CrossRef] [Green Version]
  30. Vaimann, T.; Rassõlkin, A.; Kallaste, A.; Pomarnacki, R.; Belahcen, A. Artificial intelligence in monitoring and diagnostics of electrical energy conversion systems. In Proceedings of the 2020 27th International Workshop on Electric Drives: MPEI Department of Electric Drives 90th Anniversary (IWED), Moscow, Russia, 27–30 January 2020. [Google Scholar]
  31. Lei, Y.; Li, N.; Gontarz, S.; Lin, J.; Radkowski, S.; Dybala, J. A Model-Based Method for Remaining Useful Life Prediction of Machinery. IEEE Trans. Reliab. 2016, 65, 1314–1326. [Google Scholar] [CrossRef]
  32. Bangalore, P.; Tjernberg, L.B. An Artificial Neural Network Approach for Early Fault Detection of Gearbox Bearings. IEEE Trans. Smart Grid 2015, 6, 980–987. [Google Scholar] [CrossRef]
  33. Glowacz, A. Fault diagnosis of electric impact drills using thermal imaging. Measurement 2021, 171, 108815. [Google Scholar] [CrossRef]
  34. Leahy, K.; Hu, R.L.; Konstantakopoulos, I.C.; Spanos, C.J.; Agogino, A.M. Diagnosing wind turbine faults using machine learning techniques applied to operational data. In Proceedings of the 2016 IEEE International Conference on Prognostics and Health Management (ICPHM), Ottawa, ON, Canada, 20–22 June 2016. [Google Scholar]
  35. Wu, L.; Kaiser, G.; Solomon, D.; Winter, R.; Boulanger, A.; Anderson, R. Improving efficiency and reliability of building systems using machine learning and automated online evaluation. In Proceedings of the 2012 IEEE Long Island Systems, Applications and Technology Conference (LISAT), Farmingdale, NY, USA, 4 May 2012. [Google Scholar]
  36. Liu, H.; Liu, S.; Liu, Z.; Mrad, N.; Dong, H. Prognostics of damage growth in composite materials using machine learning techniques. In Proceedings of the 2017 IEEE International Conference on Industrial Technology (ICIT), Toronto, ON, Canada, 22–25 May 2017. [Google Scholar]
  37. Helm, J.M.; Swiergosz, A.M.; Haeberle, H.S.; Karnuta, J.M.; Schaffer, J.L.; Krebs, V.E.; Spitzer, A.I.; Ramkumar, P.N. Machine learning and artificial intelligence: Definitions, applications, and future directions. Curr. Rev. Musculoskelet. Med. 2020, 13, 69–76. [Google Scholar] [CrossRef] [PubMed]
  38. Ławrynowicz, A.; Tresp, V. Introducing machine learning. In Perspectives on Ontology Learning; Lehmann, J., Voelker, J., Eds.; IOS Press: Heidelberg, Germany, 2014. [Google Scholar]
  39. Ayodele, T.O. Types of Machine Learning Algorithms. In New Advances in Machine Learning; Zhang, Y., Ed.; IntechOpen: London, UK, 2010. [Google Scholar]
  40. Nasteski, V. An overview of the supervised machine learning methods. Horiz. B 2017, 4, 51–62. [Google Scholar] [CrossRef]
  41. Elforjani, M.; Shanbr, S. Prognosis of bearing acoustic emission signals using supervised machine learning. IEEE Trans. Ind. Electron. 2018, 65, 5864–5871. [Google Scholar] [CrossRef] [Green Version]
  42. Asad, B.; Vaimann, T.; Belahcen, A.; Kallaste, A.; Rassõlkin, A.; Iqbal, M.N. Cluster computation-based hybrid fem—Analytical model of induction motor for fault diagnostics. Appl. Sci. 2020, 10, 7572. [Google Scholar] [CrossRef]
  43. Asad, B.; Vaimann, T.; Belahcen, A.; Kallaste, A.; Rassõlkin, A.; Iqbal, M.N. Modified winding function-based model of squirrel cage induction motor for fault diagnostics. IET Electr. Power Appl. 2020, 14, 1722–1734. [Google Scholar] [CrossRef]
  44. Greene, D.; Cunningham, P.; Mayer, R. Unsupervised Learning and Clustering. In Machine Learning Techniques for Multimedia: Case Studies on Organization and Retrieval; Cunningham, P., Cord, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 51–90. [Google Scholar]
  45. Michau, G.; Fink, O. Unsupervised Fault Detection in Varying Operating Conditions. In Proceedings of the 2019 IEEE International Conference on Prognostics and Health Management (ICPHM), San Francisco, CA, USA, 17–20 June 2019; pp. 1–10. [Google Scholar]
  46. Mousavi, S.S.; Schukat, M.; Howley, E. Distributed deep reinforcement learning: An overview. In Proceedings of the SAI Intelligent Systems Conference, London, UK, 21–22 September 2016. [Google Scholar]
  47. Qian, Y.; Ye, M.; Zhou, J. Hyperspectral Image Classification Based on Structured Sparse Logistic Regression and Three-Dimensional Wavelet Texture Features. IEEE Trans. Geosci. Remote. Sens. 2013, 51, 2276–2291. [Google Scholar] [CrossRef] [Green Version]
  48. Ohsaki, M.; Wang, P.; Matsuda, K.; Katagiri, S.; Watanabe, H.; Ralescu, A. Confusion-matrix-based kernel logistic regression for imbalanced data classification. IEEE Trans. Knowl. Data Eng. 2017, 29, 1806–1819. [Google Scholar] [CrossRef]
  49. Liu, B.; Blasch, E.; Chen, Y.; Shen, D.; Chen, G. Scalable sentiment classification for Big Data analysis using Naïve Bayes Classifier. In Proceedings of the 2013 IEEE International Conference on Big Data, Santa Clara, CA, USA, 6–9 October 2013; pp. 99–104. [Google Scholar]
  50. Sun, S.; Przystupa, K.; Wei, M.; Yu, H.; Ye, Z.; Kochan, O. Fast bearing fault diagnosis of rolling element using Lévy Moth-Flame optimization algorithm and Naive Bayes. Ekspolatacja Niezawodn. Maint. Reliab. 2020, 22, 730–740. [Google Scholar] [CrossRef]
  51. Muja, M.; Lowe, D.G. Scalable nearest neighbor algorithms for high dimensional Data. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2227–2240. [Google Scholar] [CrossRef]
  52. Tian, J.; Morillo, C.; Azarian, M.H.; Pecht, M. Kurtosis-based feature extraction coupled with k-nearest neighbor distance analysis. IEEE Trans. Ind. Electron. 2016, 63, 1793–1803. [Google Scholar] [CrossRef]
  53. Kusiak, A.; Verma, A. A Data-Mining Approach to Monitoring Wind Turbines. IEEE Trans. Sustain. Energy 2012, 3, 150–157. [Google Scholar] [CrossRef]
  54. Ristin, M.; Guillaumin, M.; Gall, J.; Van Gool, L. Learning of random forests for large-scale image classification. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 490–503. [Google Scholar] [CrossRef] [PubMed]
  55. Waske, B.; Van Der Linden, S.; Benediktsson, J.A.; Rabe, A.; Hostert, P. Sensitivity of support vector machines to random feature selection in classification of hyperspectral Data. IEEE Trans. Geosci. Remote. Sens. 2010, 48, 2880–2889. [Google Scholar] [CrossRef] [Green Version]
  56. Saberi, A.N.; Sandirasegaram, S.; Belahcen, A.; Vaimann, T.; Sobra, J. Multi-Sensor fault diagnosis of induction motors using random forests and support vector machine. In Proceedings of the 2020 International Conference on Electrical Machines (ICEM), Gothenburg, Sweden, 23–26 August 2020. [Google Scholar]
  57. Zhao, Y.; Yang, L.; Lehman, B.; De Palma, J.-F.; Mosesian, J.; Lyons, R. Decision tree-based fault detection and classification in solar photovoltaic arrays. In Proceedings of the 2012 Twenty-Seventh Annual IEEE Applied Power Electronics Conference and Exposition (APEC), Orlando, FL, USA, 5–9 February 2012; pp. 93–99. [Google Scholar]
  58. Kamiński, B.; Jakubczyk, M.; Szufel, P. A framework for sensitivity analysis of decision trees. Central Eur. J. Oper. Res. 2018, 26, 135–159. [Google Scholar] [CrossRef] [PubMed]
  59. Chen, M.; Zheng, A.; Lloyd, J.; Jordan, M.; Brewer, E. Failure diagnosis using decision trees. In Proceedings of the International Conference on Autonomic Computing, New York, NY, USA, 17–18 May 2004; pp. 36–43. [Google Scholar]
  60. Aydin, I.; Karakose, M.; Akin, E. Artificial immune based support vector machine algorithm for fault diagnosis of induction motors. In Proceedings of the 2007 International Aegean Conference on Electrical Machines and Power Electronics, Bodrum, Turkey, 10–12 September 2007; pp. 217–221. [Google Scholar]
  61. Yi, Z.; Etemadi, A.H. A novel detection algorithm for line-to-line faults in Photovoltaic (PV) arrays based on support vector machine (SVM). In Proceedings of the 2016 IEEE power and energy society general meeting (PESGM), Boston, MA, USA, 17–21 July 2016; pp. 9–12. [Google Scholar]
  62. Soualhi, A.; Medjaher, K.; Zerhouni, N. Bearing health monitoring based on HILBERT–HUANG transform, support vector machine, and regression. IEEE Trans. Instrum. Meas. 2015, 64, 52–62. [Google Scholar] [CrossRef] [Green Version]
  63. Awad, M.; Khanna, R. Support vector machines for classification. In Efficient Learning Machines; Mariette, A., Rahul, K., Eds.; Apress: Berkeley, CA, USA, 2015; pp. 39–66. [Google Scholar]
  64. Song, L.; Wang, H.; Chen, P. Vibration-Based Intelligent Fault Diagnosis for Roller Bearings in Low-Speed Rotating Machinery. IEEE Trans. Instrum. Meas. 2018, 67, 1887–1899. [Google Scholar] [CrossRef]
  65. Hsu, J.-Y.; Wang, Y.-F.; Lin, K.-C.; Chen, M.-Y.; Hsu, J.H.-Y. Wind turbine fault diagnosis and predictive maintenance through statistical process control and machine learning. IEEE Access 2020, 8, 23427–23439. [Google Scholar] [CrossRef]
  66. AbdelGayed, T.S.; Morsi, W.G.; Sidhu, T.S. Fault Detection and Classification Based on Co-training of Semisupervised Machine Learning. IEEE Trans. Ind. Electron. 2018, 65, 1595–1605. [Google Scholar] [CrossRef]
  67. Wei, J.; Dong, G.; Chen, Z. Remaining Useful Life Prediction and State of Health Diagnosis for Lithium-Ion Batteries Using Particle Filter and Support Vector Regression. IEEE Trans. Ind. Electron. 2018, 65, 5634–5643. [Google Scholar] [CrossRef]
  68. Huang, H.-C.; Chuang, Y.-Y.; Chen, C.-S. Multiple Kernel Fuzzy Clustering. IEEE Trans. Fuzzy Syst. 2012, 20, 120–134. [Google Scholar] [CrossRef] [Green Version]
  69. Krinidis, S.; Chatzis, V. A robust fuzzy local information c-means clustering algorithm. IEEE Trans. Image Process. 2010, 19, 1328–1337. [Google Scholar] [CrossRef] [PubMed]
  70. Yu, S.; Tranchevent, L.-C.; Liu, X.; Glanzel, W.; Suykens, J.A.; De Moor, B.; Moreau, Y. Optimized data fusion for kernel k-means clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1031–1039. [Google Scholar] [CrossRef]
  71. Hanley, C.; Kelliher, D.; Pakrashi, V. Principal component analysis for condition monitoring of a network of bridge structures. J. Phys. Conf. Ser. 2015, 628, 012060. [Google Scholar] [CrossRef] [Green Version]
  72. Mazur, K.; Borowa, A.; Brdys, M. Condition monitoring using PCA based method and application to wastewater treatment plant operation. IFAC Proc. Vol. 2006, 39, 208–213. [Google Scholar] [CrossRef]
  73. He, Q.; Yan, R.; Kong, F.; Du, R. Machine condition monitoring using principal component representations. Mech. Syst. Signal Process. 2009, 23, 446–466. [Google Scholar] [CrossRef]
  74. Hang, Q.; Yang, J.; Xing, L. Diagnosis of rolling bearing based on classification for high dimensional unbalanced data. IEEE Access 2019, 7, 79159–79172. [Google Scholar] [CrossRef]
  75. Deng, X.; Tian, X.; Chen, S.; Harris, C.J. Deep principal component analysis based on layer wise feature extraction and its application to nonlinear process monitoring. IEEE Trans. Control. Syst. Technol. 2019, 27, 2526–2540. [Google Scholar] [CrossRef]
  76. Deng, X.; Tian, X.; Chen, S.; Harris, C.J. Nonlinear process fault diagnosis based on serial principal component analysis. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 560–572. [Google Scholar] [CrossRef] [Green Version]
  77. Abdmouleh, Z.; Gastli, A.; Ben-Brahim, L.; Haouari, M.; Al-Emadi, N.A. Review of optimization techniques applied for the integration of distributed generation from renewable energy sources. Renew. Energy 2017, 113, 266–280. [Google Scholar] [CrossRef]
  78. Abraham, A.; Guo, H.; Liu, H. Swarm intelligence: Foundations, perspectives and applications. In Recent Advances in Computational Optimization; Fidanova, S., Ed.; Springer: Geneva, Switzerland, 2006; pp. 3–25. [Google Scholar]
  79. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimization for feature selection in classification: A multi-objective approach. IEEE Trans. Cybern. 2012, 43, 1656–1671. [Google Scholar] [CrossRef]
  80. Beg, A.H.; Islam, M.Z. Advantages and limitations of genetic algorithms for clustering records. In Proceedings of the 2016 IEEE 11th Conference on Industrial Electronics and Applications, ICIEA 2016, Hefei, China, 5–7 June 2016. [Google Scholar]
  81. Compare, M.; Martini, F.; Zio, E. Genetic algorithms for condition-based maintenance optimization under uncertainty. Eur. J. Oper. Res. 2015, 244, 611–623. [Google Scholar] [CrossRef]
  82. Baraldi, P.; Canesi, R.; Zio, E.; Seraoui, R.; Chevalier, R. Genetic algorithm-based wrapper approach for grouping condition monitoring signals of nuclear power plant components. Integr. Comput. Eng. 2011, 18, 221–234. [Google Scholar] [CrossRef] [Green Version]
  83. Trinh, H.C.; Kwon, Y.K. A data-independent genetic algorithm framework for fault-type classification and remaining useful life prediction. Appl. Sci. 2020, 10, 368. [Google Scholar] [CrossRef] [Green Version]
  84. Ding, J.; Xiao, D.; Li, X. Gear fault diagnosis based on genetic mutation particle swarm optimization VMD and probabilistic neural network algorithm. IEEE Access 2020, 8, 18456–18474. [Google Scholar] [CrossRef]
  85. Tao, C.; Wang, X.; Gao, F.; Wang, M. Fault Diagnosis of photovoltaic array based on deep belief network optimized by genetic algorithm. Chin. J. Electr. Eng. 2020, 6, 106–114. [Google Scholar] [CrossRef]
  86. Tian, Z. An artificial neural network method for remaining useful life prediction of equipment subject to condition monitoring. J. Intell. Manuf. 2012, 23, 227–237. [Google Scholar] [CrossRef]
  87. Saxena, A.; Saad, A. Evolving an artificial neural network classifier for condition monitoring of rotating mechanical systems. Appl. Soft Comput. 2007, 7, 441–454. [Google Scholar] [CrossRef]
  88. Oong, T.H.; Ashidi, N.; Isa, M. Networks for pattern classification. Adapt. Evol. Artif. Neural Netw. Pattern Classif. 2011, 22, 1823–1836. [Google Scholar]
  89. Deng, Y.; Ren, Z.; Kong, Y.; Bao, F.; Dai, Q. A Hierarchical fused fuzzy deep neural network for data classification. IEEE Trans. Fuzzy Syst. 2014, 25, 51–56. [Google Scholar] [CrossRef]
  90. Jiang, G.; He, H.; Yan, J.; Xie, P. Multiscale convolutional neural networks for fault diagnosis of wind turbine gearbox. IEEE Trans. Ind. Electron. 2019, 66, 3196–3207. [Google Scholar] [CrossRef]
  91. Sun, J.; Yan, C.; Wen, J. Intelligent bearing fault diagnosis method combining compressed data acquisition and deep learning. IEEE Trans. Instrum. Meas. 2018, 67, 185–195. [Google Scholar] [CrossRef]
  92. Sun, C.; Ma, M.; Zhao, Z.; Tian, S.; Yan, R.; Chen, X. Deep transfer learning based on sparse autoencoder for remaining useful life prediction of tool in manufacturing. IEEE Trans. Ind. Inform. 2019, 15, 2416–2425. [Google Scholar] [CrossRef]
  93. Tao, X.; Zhang, D.; Wang, Z.; Liu, X.; Zhang, H.; Xu, D. Detection of power line insulator defects using aerial images analyzed with convolutional neural networks. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1486–1498. [Google Scholar] [CrossRef]
  94. Florkowski, M. Classification of partial discharge images using deep convolutional neural networks. Energies 2020, 13, 5496. [Google Scholar] [CrossRef]
  95. Belahcen, A.; Gyftakis, K.N.; Martinez, J.; Climente-Alarcon, V.; Vaimann, T. Condition monitoring of electrical machines and its relation to industrial internet. In Proceedings of the 2015 IEEE Workshop on Electrical Machines Design, Control and Diagnosis (WEMDCD), Torino, Italy, 26–27 March 2015. [Google Scholar]
  96. Savard, C.; Iakovleva, E.V. A Suggested improvement for small autonomous energy system reliability by reducing heat and excess charges. Batteries 2019, 5, 29. [Google Scholar] [CrossRef] [Green Version]
  97. Bicen, Y.; Aras, F. Intelligent condition monitoring platform combined with multi-agent approach for complex systems. In Proceedings of the 2014 IEEE Workshop on Environmental, Energy, and Structural Monitoring Systems Proceedings, Naples, Italy, 17–18 September 2014. [Google Scholar]
  98. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  99. Bilbao, I.; Bilbao, J. Overfitting problem and the over-training in the era of data: Particularly for Artificial Neural Networks. In Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 5–7 December 2017. [Google Scholar]
  100. Fan, S.; Hyndman, R.J. Short-term load forecasting based on a semi-parametric additive Model. IEEE Trans. Power Syst. 2012, 27, 134–141. [Google Scholar] [CrossRef] [Green Version]
  101. Hinojosa, V.H.; Hoese, A. Short-term load forecasting using fuzzy inductive reasoning and evolutionary algorithms. IEEE Trans. Power Syst. 2010, 25, 565–574. [Google Scholar] [CrossRef]
Figure 1. Maintenance types: (a) corrective maintenance, (b) preventive maintenance, (c) predictive maintenance.
Figure 2. General diagram of decision models.
Figure 3. Algorithms of machine learning.
Figure 4. Neural network-related fields.
Figure 5. Supervised learning algorithm.
Figure 6. Decision tree diagram.
Figure 7. Possibilities in the finding of the optimal hyperplane.
Figure 8. Support vectors and optimal hyperplane in linear classification.
Figure 9. Support vectors and optimal hyperplane in non-linear classification.
Figure 10. Unsupervised learning algorithm.
Figure 11. Geometrical interpretation of principal component analysis: (a) initial dataset, (b) optimal vector determination, (c) projection of initial dataset on the vector, (d) new data parameters definition.
Figure 12. Reinforcement learning algorithm.
Figure 13. Genetic algorithm diagram: (a) creation of initial population, (b) application of fitness function, (c) selection of the best candidates, (d) creation of resultant population.
Figure 14. Neural network architecture.
Figure 15. Data generalization: (a) test data is underfitted, (b) test data is overfitted, (c) test data is balanced.
Table 1. Signatures of the main faults in electrical machines.
Fault types considered: winding short circuit [23,24], rotor broken bar(s) [25], eccentricity [26,27], and bearing faults [28].
Fault signatures considered: vibration, current, temperature, magnetic flux changes, chemical analysis, and torque changes.
For each combination, the table indicates whether the parameter is the most preferable for condition monitoring (★), can be used for condition monitoring (✔), or cannot be used for condition monitoring (✖).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
