«Системні технології» 1 (126) 2020 «System technologies»
DOI 10.34185/1562-9945-1-126-2020-10
UDC 004.42:543.572.3
N.A. Matveeva, A.A. Gurtovoy
SURFACE DEFECT DETECTION WITH NEURAL NETWORKS
Abstract. The results of research on signal recognition using neural networks are presented. A multilayer perceptron with backpropagation training is implemented in Java. The optimal number of neurons in the hidden layer is selected to build an effective neural network architecture. Training the network on different sets of noisy signals taught it to work with distorted information, which is typical of non-destructive testing in real conditions. Experiments were performed to analyze MSE values and accuracy.
Keywords: composite materials, neural networks, multilayer perceptron with backpropagation training, defect, activation function.
Problem statement and purpose of research. Global economic pressures have gradually forced businesses to become more competitive. To sustain or increase their current level of performance in the highly competitive global market, industry must improve the quality of the production process. Early and accurate detection of defects is an important aspect of quality improvement. The accuracy of manual inspection is not good enough because of fatigue and tedium. The solution to the problem of manual control is an automated machine-vision system for inspecting parts. Automated part inspection systems mainly involve two challenging problems, namely defect detection and defect classification.
One of the methods of non-destructive testing of composite materials is the eddy current method. It can be carried out without contact between the transducer and the object and obtains acceptable control results even at high transducer displacement speeds. The eddy current method is based on registering changes in eddy current density, so the received signal can be influenced by external eddy currents. The surface of composite materials is characterized by roughness, which creates additional noise.
Matveeva N.A., Gurtovoy A.A., 2020
ISSN 1562-9945 (Print)
ISSN 2707-7977 (Online)
Modern technologies make it possible to create computer systems involving neural networks [1] for which the characteristics of electromagnetic signals can be used as input parameters.
The purpose of this work is to create a neural network for the classification of electromagnetic signals obtained by scanning composite material, as well as for solving defect detection problems.
Main part. Each artificial neural network is a set of simple elements, neurons, that are connected in some way. The particular form of data conversion performed by the network is determined not only by the characteristics of the neurons that make up its structure but also by its architectural features, such as the topology of interneuron links, the directions and methods of information transfer between neurons, and the learning tools [2, 3].
Multilayer feedforward neural networks are nonlinear systems that enable better classification than conventional statistical methods. A multilayer perceptron (MLP) has a set of input nodes that form the input layer, one or more hidden layers of neurons, and an output layer. Each neuron of an MLP trained with the backpropagation algorithm has a smooth nonlinear activation function, most often a logistic sigmoid or a hyperbolic tangent [3, 4].
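The two activation functions named here can be sketched in Java as follows; the class and method names are illustrative, not taken from the article's implementation:

```java
// Illustrative sketch of the two activation functions used in the
// experiments (the names follow the article's Hypertan/Siglog labels).
public class Activation {
    // Logistic sigmoid: maps any real input into (0, 1).
    public static double siglog(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Hyperbolic tangent: maps any real input into (-1, 1).
    public static double hypertan(double x) {
        return Math.tanh(x);
    }

    public static void main(String[] args) {
        System.out.println(siglog(0.0));   // 0.5
        System.out.println(hypertan(0.0)); // 0.0
    }
}
```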
It is important to highlight that a neural network may have many hidden layers or none, and the number of neurons in each layer may vary. However, the input and output layers have the same number of neurons as the number of network inputs/outputs, respectively.
Neural network training consists of several steps [4]:
- selecting the initial network configuration using, for example, the following heuristic rule: the number of neurons in the hidden layer is set to half the total number of inputs and outputs;
- conducting a number of experiments with different network configurations and choosing the one that gives the minimum value of the error functional;
- if the quality of training is insufficient, increasing the number of neurons in a layer or the number of layers;
- if overfitting (retraining) is observed, reducing the number of neurons in a layer or removing one or more layers.
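The sizing heuristic from the first step can be expressed as a one-line Java helper; the method name is hypothetical:

```java
// Minimal sketch of the sizing heuristic described above: the initial
// number of hidden neurons is half the total of inputs and outputs.
public class Heuristic {
    public static int initialHiddenNeurons(int inputs, int outputs) {
        return (inputs + outputs) / 2;
    }

    public static void main(String[] args) {
        // For the 21-input, 2-output network considered later in the article:
        System.out.println(initialHiddenNeurons(21, 2)); // 11
    }
}
```

This starting value of 11 is then refined experimentally, which matches the range of hidden-layer sizes (7 to 12) tried in tables 1 and 2.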
The network learning process consists of setting the weights and biases of the network to optimize its performance. For feedforward networks, performance is measured by the mean squared error (MSE) between the network outputs (a) and the target outputs (t), defined by the formula [2, 3]:

$$mse = \frac{1}{N}\sum_{i=1}^{N}\left(t_i - a_i\right)^2 \qquad (1)$$
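Formula (1) translates directly into a short Java method; this is a standalone sketch, not the article's actual code:

```java
// Sketch of formula (1): mean squared error between target outputs t
// and network outputs a, averaged over the N output values.
public class Mse {
    public static double mse(double[] t, double[] a) {
        double sum = 0.0;
        for (int i = 0; i < t.length; i++) {
            double d = t[i] - a[i];
            sum += d * d; // squared error for one output
        }
        return sum / t.length;
    }

    public static void main(String[] args) {
        // Two-output example: errors of 0.1 and 0.2 give (0.01 + 0.04) / 2.
        System.out.println(mse(new double[]{1.0, 0.0}, new double[]{0.9, 0.2}));
    }
}
```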
After training and testing the neural network, the network object can be used to calculate the answer for any input value.
When scanning composite materials with an eddy current transducer, the waveform changes smoothly from unimodal with a maximum amplitude (the defect exceeds the control zone) to bimodal with pronounced dips. Such changes are modeled using expression (2) from [5].
This article uses a multilayer perceptron (MLP). The MLP is trained with the backpropagation algorithm [2, 3].
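For an output neuron with a tanh activation, one backpropagation weight update can be sketched as below; this is a textbook-style illustration under assumed notation (learning rate eta, target t, output a), not the article's code:

```java
// Sketch of one backpropagation update for a single weight of an output
// neuron with tanh activation: delta = (t - a) * f'(net), w += eta * delta * x.
public class Backprop {
    // Derivative of tanh expressed through its output value a = tanh(net).
    static double tanhDeriv(double a) {
        return 1.0 - a * a;
    }

    // Returns the updated weight after one gradient step.
    public static double updatedWeight(double w, double eta,
                                       double target, double output,
                                       double input) {
        double delta = (target - output) * tanhDeriv(output);
        return w + eta * delta * input;
    }

    public static void main(String[] args) {
        // Target 1, output 0, input 1, learning rate 0.1 -> weight rises by 0.1.
        System.out.println(updatedWeight(0.0, 0.1, 1.0, 0.0, 1.0));
    }
}
```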
The neural network is implemented in the Java programming language.
So, let's start implementing. Initially, we define six classes: Neuron; Layer, an abstract class (it cannot be instantiated) that describes the layers; InputLayer, which describes the input layer and inherits attributes and methods from the Layer class; HiddenLayer, which defines the middle layers (and also inherits from Layer); OutputLayer, which describes the output layer (and also inherits from Layer); and NeuralNet, in which the values of the neural network topology are fixed [1].
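An illustrative skeleton of this class hierarchy is shown below; fields and method bodies are simplified placeholders, and the real design follows [1]:

```java
import java.util.ArrayList;
import java.util.List;

// Skeleton of the six classes described above (simplified for illustration).
class Neuron { /* weights, bias, activation function ... */ }

abstract class Layer {                 // abstract: cannot be instantiated
    protected List<Neuron> neurons = new ArrayList<>();
    public int size() { return neurons.size(); }
}

class InputLayer extends Layer { }     // inherits from Layer
class HiddenLayer extends Layer { }    // inherits from Layer
class OutputLayer extends Layer { }    // inherits from Layer

class NeuralNet {                      // fixes the network topology
    InputLayer input = new InputLayer();
    List<HiddenLayer> hidden = new ArrayList<>();
    OutputLayer output = new OutputLayer();
}
```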
Each output neuron denotes a class. To simulate noise, a normally distributed random number generator is used. An example of a unimodal signal with a noise level of 0.05 is shown in figure 1.
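The exact signal expression (2) from [5] is not reproduced here; as an illustrative stand-in, the sketch below samples a unimodal (Gaussian-shaped) pulse at 21 points, one per input neuron, and corrupts it with normally distributed noise, as described above:

```java
import java.util.Random;

// Illustrative noisy-signal generator: a unimodal pulse plus
// normally distributed noise (stand-in for expression (2) from [5]).
public class NoisySignal {
    public static double[] signal(int n, double noiseLevel, long seed) {
        Random rnd = new Random(seed);
        double[] s = new double[n];
        for (int i = 0; i < n; i++) {
            double x = (i - (n - 1) / 2.0) / (n / 4.0); // centered coordinate
            s[i] = Math.exp(-x * x)                     // ideal unimodal pulse
                 + noiseLevel * rnd.nextGaussian();     // additive Gaussian noise
        }
        return s;
    }

    public static void main(String[] args) {
        double[] s = signal(21, 0.05, 42L);
        System.out.println(s.length); // 21 samples, one per input neuron
    }
}
```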
Figure 1 – Ideal signal (yellow) and signal with noise (black)
Many algorithms for the operation and training of neural networks have been developed, and each task requires configuring the network in its own way. Therefore, research was conducted to determine the best network parameters: the activation functions of the input and hidden neurons and the number of neurons in the hidden layer. The hyperbolic tangent activation function (Hypertan) and the logistic sigmoid activation function (Siglog) were used in the experiments.
The MSE (1) and accuracy parameters were used to evaluate the neural network. Accuracy is computed from the expected data and the real data produced by the neural network.
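A plausible reading of this accuracy measure, comparing the winning output neuron against the expected class, can be sketched as follows (the method names are hypothetical):

```java
// Sketch of the accuracy measure: the percentage of test signals whose
// predicted class (index of the largest output) matches the expected class.
public class Accuracy {
    static int argMax(double[] v) {
        int best = 0;
        for (int i = 1; i < v.length; i++) {
            if (v[i] > v[best]) best = i;
        }
        return best;
    }

    public static double accuracy(double[][] expected, double[][] actual) {
        int hits = 0;
        for (int i = 0; i < expected.length; i++) {
            if (argMax(expected[i]) == argMax(actual[i])) hits++;
        }
        return 100.0 * hits / expected.length;
    }
}
```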
We performed many experiments to find the best neural network for classification. Training was conducted on signals with noise (0.05). Linear and hyperbolic tangent functions were used in the output layer. The test results are shown in tables 1 and 2:
Table 1
Linear output function

Experiment | Neurons in hidden layer | Activation function | MSE | Total accuracy
#1  |  9 | HYPERTAN | 0.00995573339897946 | 75%
#2  | 12 | SIGLOG   | 0.00996491835677809 | 83.333%
#3  | 10 | HYPERTAN | 0.00998401197657948 | 50%
#4  | 12 | HYPERTAN | 0.00999360929493481 | 66.666%
#5  | 11 | HYPERTAN | 0.00999502512706538 | 75%
#6  |  8 | SIGLOG   | 0.00999869481576378 | 83.333%
#7  |  8 | HYPERTAN | 0.02022418703632457 | 75%
#8  | 10 | SIGLOG   | 0.02092368752492168 | 75%
#9  | 11 | SIGLOG   | 0.02203575970296707 | 75%
#10 |  7 | HYPERTAN | 0.03168826834581766 | 58.333%
Table 2
Hypertan output function

Experiment | Neurons in hidden layer | Activation function | MSE | Total accuracy
#11 |  9 | HYPERTAN | 0.00994917320039824 | 83.333%
#12 | 12 | HYPERTAN | 0.00997003565737264 | 50%
#13 | 11 | HYPERTAN | 0.00999157530952272 | 75%
#14 | 10 | HYPERTAN | 0.00999878881273251 | 100%
#15 |  9 | SIGLOG   | 0.02377328083679948 | 41.667%
#16 |  7 | HYPERTAN | 0.02908783694768615 | 83.334%
#17 | 10 | SIGLOG   | 0.04146528401729575 | 50%
#18 | 12 | SIGLOG   | 0.05339244636898565 | 100%
#19 |  8 | HYPERTAN | 0.05342303501442618 | 75%
#20 |  8 | SIGLOG   | 0.05983305704878905 | 41.667%
Experiments #14 and #18 have the same total accuracy (100%). Therefore, we selected experiment #14, because it has the lower MSE of the two.
As a result of testing, we chose the following neural network architecture: 21 neurons in the input layer, 10 neurons in the hidden layer, and 2 neurons in the output layer. The hypertan activation function is used in the hidden and output layers.
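A minimal forward-pass sketch of this selected 21-10-2 architecture with hypertan activation in both the hidden and output layers is given below; the weights are placeholders (a trained network would load its learned values), and the class is illustrative rather than the article's NeuralNet:

```java
// Forward-pass sketch of the selected 21-10-2 MLP with tanh (hypertan)
// activation in the hidden and output layers. Weights default to zero;
// a trained network would load its learned values instead.
public class Mlp {
    final double[][] w1 = new double[10][21]; // input -> hidden weights
    final double[]   b1 = new double[10];     // hidden biases
    final double[][] w2 = new double[2][10];  // hidden -> output weights
    final double[]   b2 = new double[2];      // output biases

    public double[] forward(double[] x) {
        double[] h = new double[10];
        for (int j = 0; j < 10; j++) {
            double s = b1[j];
            for (int i = 0; i < 21; i++) s += w1[j][i] * x[i];
            h[j] = Math.tanh(s);              // hypertan in hidden layer
        }
        double[] y = new double[2];           // one output neuron per class
        for (int k = 0; k < 2; k++) {
            double s = b2[k];
            for (int j = 0; j < 10; j++) s += w2[k][j] * h[j];
            y[k] = Math.tanh(s);              // hypertan in output layer
        }
        return y;
    }
}
```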
The graph shows a comparison between the real (yellow line) and estimated (black line) signal values (Figure 2). The neural network works perfectly.
Figure 2 – The expected class along with the classification estimated by the neural network
The neural network is trained on signals with noise and then tested on signals with different noise levels. The results are shown in table 3.
Table 3

Noise level (training) | Noise level (testing) | MSE
0.05 | 0.05 | 0.009998219707589661
0.05 | 0.1  | 0.009974832345368063
0.1  | 0.1  | 0.015766026730132145
0.1  | 0.15 | 0.01686189145885091
0.15 | 0.15 | 0.03283785458113339
0.15 | 0.2  | 0.014996758480474942
0.2  | 0.2  | 0.07208712408865446
Now let's do the reverse test: the neural network learns from noisier data than it is tested on. The test results are shown in table 4:
Table 4

Noise level (training) | Noise level (testing) | MSE
0.2  | 0.05 | 0.010012569938241766
0.15 | 0.1  | 0.009969095902656683
0.1  | 0.1  | 0.009998217002776047
0.05 | 0.15 | 0.009983868328343774
Conclusions. Defect detection has been an attractive area of research for pattern recognition scientists. The research has shown, in principle, the possibility of using neural networks for signal recognition. The neural network was implemented in the Java language, and the number of neurons in the hidden layer was selected. Training the network on different sets of noisy signals taught it to work with distorted information, which is typical of non-destructive testing in real conditions. Training is best done on signals with a low noise level; the neural network then shows the best signal recognition results.
REFERENCES
1. Fábio M. Soares, Alan M.F. Souza. Neural Network Programming with Java. Birmingham, 2016. 244 p.
2. Haykin S. Neural Networks. A Comprehensive Foundation. Second edition. New Jersey: Prentice Hall, 2008. 1103 p.
3. Herbert Schildt. Java. The Complete Reference. Ninth edition, 2014. 1372 p.
4. Bishop C.M. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
5. Matveeva N.A. Using Neural Networks programming on Java for solving the problem of signal recognition. Dnipro: «System technologies», 2019. Issue 1(110). P. 124-131.
Received 23.01.2020.
Accepted 27.01.2020.
Matveeva Nataliya – candidate of technical sciences, associate professor of the Department of Electronic Computers of the Faculty of Physics, Electronics and Computer Systems of the Oles Honchar Dnipro National University.
Gurtovoy Alexander – master's student of the Department of Electronic Computers of the Oles Honchar Dnipro National University.