1 Department of Electric, Electronics and System Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia; ramizi@ukm.edu.my
2 Fukushima Renewable Energy Institute, AIST (FREA), National Institute of Advanced Industrial Science and Technology
mahidursarker@ukm.edu.my
4 Department of Electrical Power Engineering, Universiti Tenaga Nasional, Kajang 43000, Selangor, Malaysia; hannan@uniten.edu.my
5 General Company of Electricity Production Middle Region, Ministry of Electricity, Baghdad 10001, Iraq; eng_jhy@yahoo.com
6 School of Science, Computing and Engineering Technologies, Swinburne University of Technology,
Keywords: artificial neural networks; optimization algorithms; machine learning; ANN enhancement; PSO; BSA; ABC; GA
1. Introduction
Artificial intelligence (AI) enables computers and computer-based machines to think and act as humans do. AI research focuses on how the human brain thinks, learns, decides, and works to solve problems. AI is a vast field that aims to create intelligent machines [1]. Machine learning (ML) is a branch of AI that recognizes and learns patterns in data sets [2]. By definition, ML is an AI application that allows systems to learn automatically and improve with experience without being explicitly programmed [3,4]. Common ML algorithms include neural networks, support vector machines, decision trees, random forests, and logistic regression, among many others; some, such as the generative adversarial network (GAN) introduced by Goodfellow [5,6], are specialized forms of neural network. The deep learning (DL) approach uses a hierarchy of concepts that helps a computer build knowledge from experience [7]. This approach is used especially in visual object and speech recognition, as well as in genomics and medicine [8,9]. Neural networks are a family of DL and ML methods based on artificial neural networks (ANNs) with multiple hidden layers [10–14]. Neural networks are applied in many different implementations with slight variations in their structures, such as recurrent neural networks (RNNs), artificial neural networks (ANNs), and convolutional neural networks (CNNs) [15,16]. Owing to their feature engineering and decision boundaries, novel neural network approaches are preferred over classical machine learning in fields such as self-driving vehicles, unmanned drones, and complex deep learning problems [17]. The decision boundary classifies any data point into one of two classes, positive or negative; consequently, if the data are not separable, deep neural networks will not be a good choice [18,19].
Artificial neural networks are computational algorithms used to model data. Their design is based on the biological nervous system, hence the name [20,21]. An ANN contains a set of interconnected processing elements called neurons. These neurons act in harmony to solve complex problems, and ANNs can be used in scenarios where it is difficult to extract trends or detect patterns. ANNs have recently gained popularity after almost 50 years of existence: although their underlying logic has long existed, the pervasive and ubiquitous adoption of powerful computational tools in contemporary society has given ANNs a renaissance, much to the benefit of experts, engineers, and consumers.
The current cutting edge in deep learning and ANNs focuses on their ability to model and interpret complex data and to scale through optimization and parallelization [22]. Frameworks for designing ANNs are widely available, with a myriad of tools facilitating their development: Python, C++, Google's TensorFlow, Theano, Matlab, and Spark all provide the robust set of mathematical operations that ANNs require. By design, ANN models are well suited to extracting meaning from imprecise or intricate problems; put simply, ANNs are data modeling tools that are trained on a given dataset.
Optimization problems require good optimization methods to minimize or maximize objective functions. These functions often cannot be solved exactly, for example when they are not linear or polynomial, and must be approximated. Some algorithms use full or partial derivatives to linearize these functions at specific points [23], whereas evolutionary algorithms (EAs) may be employed for approximation. Approximating the objective function makes it possible to apply other artificial intelligence techniques, through non-linear regression, to resolve an optimization problem; the objective function's derivative should be polynomial to calculate the solution. Algorithms are normally used to optimize, for example, weights, network architecture, learning rules, neurons, activation functions, and bias. Another way to enhance an ANN is to replace the neural network's original training algorithm with an optimization algorithm, substituting backpropagation with an optimization technique to solve certain associated issues, or combining the Levenberg-Marquardt neural network with optimization techniques for fast and accurate training. This review highlights improving neural networks with optimization algorithms that handle neural network parameters or training parameters to find the network structure that solves problems with the highest accuracy and speed. The review includes test results for improving ANN performance using four optimization algorithms to search for the ANN's optimal parameters, such as the number of neurons in the hidden layers and the learning rate. The resulting neural net is used to solve energy management problems in a virtual power plant system.
Supporting AI, ML, and DL with optimization techniques has gained importance in the last few years. Much ongoing research uses optimization to enhance or boost performance by finding the optimal parameter values that aid architecture design. In [24], a fuzzy logic controller design for PV inverters is improved with differential search optimization, which finds the optimal membership function patterns and raises the fuzzy controller to a higher level of accuracy. In ML, the approach in [25] optimizes the support vector machine model parameters and simultaneously locates the best feature subset. Letting an optimization technique do this job is the smartest way to improve the performance of almost any AI or ML method [26], and such pre-setting is essential to guarantee optimal results in almost any application. In DL, and particularly in deep neural networks (DNNs) and ANNs, more hidden layers, more neurons, and more complex activation functions produce better outcomes, but at the cost of more time and greater network complexity [27,28]. Finding the optimum parameter values by trial and error is therefore time-consuming and impractical. From another point of view, an ANN with human-estimated parameters can produce outcomes, but how can one confirm that these are the best outcomes the ANN can achieve? For these reasons, optimization algorithms can solve these issues, and this review delivers a detailed analysis of various examples of ANN-based optimization techniques. For instance, in [29–31], optimization techniques tune ANN parameters to solve problems in different electricity and communications fields by finding the optimal parameters for the optimum ANN structure.
The rest of this paper is organized as follows: Section 2 presents the materials and methods used. Section 3 addresses the challenges and motivations for ANN-based optimization, while Section 4 presents a review of optimization algorithms and Section 5 addresses neural network structure types. Section 6 is a complete overview of neural networks enhanced by optimization algorithms, Section 7 covers applications of artificial neural network-based optimization algorithms, Section 8 covers artificial neural network training based on optimized parameters, and finally Section 9 presents the conclusions and future work.
were counted as eligible for review of references. In this review, only meaningful and suitable literature has been considered, evaluated on each article's relevant content and on the critical topic of attention of the review. Accordingly, the related papers were selected based on the number of citations and research interest. The review methodology comprises several stages, and the PRISMA guidelines [36,37] were followed. Figure 1 shows the methodology for utilizing optimization to find the optimal parameters of neural networks. A schematic diagram of the literature selection, evaluation, and quality control of the database using the PRISMA guidelines is shown in Figure 2.
Figure 1. Methodology for utilizing optimization to find the optimal parameters (optimal objective function) of neural networks.
(Figure 2 flowchart: initial screening and evaluation of 433 articles; argumentation, screening, and assessment of all articles; 127 articles excluded after reading the title and abstract.)
Figure 2. Schematic diagram of the literature selection, evaluation and quality control process of the
database using the Prisma guidelines.
• Depending on the hardware, ANNs require powerful parallel processors suited to their structure. This drawback makes the whole approach equipment-dependent.
• Gradual corruption: a network slows down over time and suffers relative degradation, but network problems do not cause it to degrade immediately.
• Difficulty in diagnosing network problems when they exist, since ANNs work on numerical data: problems must be translated into numerical values before being introduced to the ANN. This depends on the researcher's ability and influences the network's performance.
Artificial neural network applications, which have increased dramatically since the middle of the last century, are developing very fast. At present, alongside growing computer capabilities, both the advantages of ANNs and the problems users encounter have been examined. As a developing branch of science, however, the disadvantages of ANNs must not be neglected; these are being eliminated one after another while the advantages grow progressively, so ANNs are set to become an increasingly important and indispensable part of our lives. Enhancing ANNs with optimization methods can eliminate some of these disadvantages by picking the best network structure with the proper optimization technique. The challenge is finding a system coding that enables appropriate tuning of neural structures in professional networks, including the best number of neurons, hidden layers, weights, and biases, as well as self-shaping architectures and multi-stage objective functions.
It is very important to select and adjust the most suitable neural network parameters for any given application, as there are many possibilities. However, no single neural network can act perfectly in all applications. Some types are more practical in particular applications; for example, CNNs are good for images and videos, while RNNs are good for text and classification problems, so the networks need to be studied and adjusted and the problems compared and contrasted. To enhance neural networks with optimization, it is therefore important to select the right optimizer for the neural network parameters to obtain the best outputs.
Like other AI algorithms, neural networks can deal with non-linear and complicated problems involving high volumes of data. Their superiority over other AI algorithms lies in their effectiveness with many inputs and outputs. While fuzzy and adaptive neuro-fuzzy inference system (ANFIS) techniques can accept many inputs, they are limited in the number of outputs they can support. Neural networks do not have this limitation, which makes them work better for classification and regression studies.
tion problem, because some techniques use derivatives while others do not. Conventional methods normally use first-order derivatives of the objective function, while others use second derivatives. The search type is either a direct or a stochastic search for the targeted objective function, yielding the function's maximum or minimum output.
Normally, the most popular kind of optimization problem facing neural networks involves continuous function optimization, in which the function estimates numeric values from numeric inputs. The more information that is available about the target function, the more accurate the achieved optimization will be; for a differentiable function, any sample in the input search space can be evaluated. Optimization algorithms are, in general, categorized into two groups: deterministic and heuristic algorithms. Deterministic techniques exploit their analytical capabilities, while heuristic techniques are more flexible and efficient through fast-to-obtain solutions, at the cost of reducing the number of global solutions found. Global optimization algorithms are used to find the global minimum or maximum of complex problems; this is harder than local optimization with bound constraints and does not require derivatives. Local and global optimization form a matching set for solving linear, non-linear, quadratic, and least-squares problems, constrained or unconstrained, dense or sparse, forward or reverse communication, continuous, mixed-integer, or integer [42]. Optimization techniques are also classified according to their underlying principle into biology-based and physics-based algorithms. The first category, biology-based algorithms, includes the genetic algorithm (GA), harmony search algorithm (HSA), particle swarm optimization (PSO), bacteria foraging optimization (BFO), cuckoo search algorithm (CSA), bee colony algorithm (BCA), ant colony optimization (ACO), firefly algorithm (FA) [43], backtracking search algorithm (BSA), lightning search algorithm (LSA), and others. The second category, physics-based algorithms, includes simulated annealing (SA), the gravitational search algorithm (GSA), and the chaotic optimization algorithm (COA), among others [44,45]. In this review, some of the most popular optimization algorithms are explained.
The particle swarm optimization algorithm is one of the most popular evolutionary optimization algorithms [46]. The PSO principle depends on the velocity and position of particles [47]. Authors have described using the PSO algorithm to automatically design an ANN, improving the synaptic weights, architecture, and transfer function of each neuron [48–52]. Nevertheless, PSO has some drawbacks: it is vulnerable to becoming stuck in local minima, and incorrectly selected control parameters result in a bad solution. In [48], an ANN-based PSO method was used to predict the thermal properties of molecular structures.
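To make the velocity-and-position principle concrete, the following minimal Python sketch implements a standard global-best PSO; the inertia and acceleration coefficients (w, c1, c2) and the sphere test function are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=20, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: each particle keeps a velocity and
    a personal best, and is attracted toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)    # personal-best values
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # position update
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(objective(g))

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=5, bounds=(-5, 5))
```

An ANN-oriented use would simply replace the sphere function with "train the network on a candidate configuration and return its error".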
Another popular algorithm is the gravitational search algorithm, a physics-based optimization algorithm inspired by Newton's laws of motion and gravity [49]. GSA has been used in several applications to find the best solution for short-term training of feedforward ANNs and to improve their performance [53–57]. In [50], the authors used an ANN-based GSA optimization approach to enhance kidney image quality classification in a biomedical application. The study in [51] presented a GSA-optimized ANN to solve geotechnical engineering issues in improving geogrid-reinforced soil structures.
Another optimization algorithm is the neural network algorithm (NNA), inspired by the functioning of biological nervous systems and artificial neural networks [52]. NNA has recently been used in machine learning, as an intelligent controller, in biodiversity assessment, in intelligent feature recognition, and for uncertain data streams, because it requires no mathematical model, provides a way of learning features, predicts highly non-linear functions, and discovers useful hidden representations of the input, achieving good predictions for ANNs [58–61]. However, NNA controllers require massive data and long training and learning times. In [53], an artificial bee colony (ABC) and an NNA provide intelligent feature recognition for STEP-NC-compliant manufacturing, adjusting geometric and topological information. The study in [54] addressed biodiversity assessment based on AI and NNA.
Another powerful optimizer is the BSA, which generates a trial population and takes partial advantage of its experiences from previous generations. The initial trial population is produced by mutation, and crossover is then applied to the trial population. The described benefits of BSA lie in its search exploration process, which profits from the combined mutation and crossover strategies. It does have limitations, however: computation is time-consuming because of the dual-population algorithm, only one parameter controls the amplitude of the search-direction matrix in the mutation phase, and the crossover is complex [55–57]. In [29,30], BSA is applied to optimize a fuzzy logic speed controller for induction motor drives. Deterministic global optimization in numerical optimization helps to search for global solutions to optimization problems [42].
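The dual-population mechanics just described can be sketched compactly in Python. This follows the standard BSA outline (historical population, a single amplitude parameter F scaling the search-direction matrix, and a binary crossover map), with the map construction simplified relative to the original algorithm; all parameter values are illustrative assumptions.

```python
import numpy as np

def bsa(objective, dim, bounds, pop_size=20, n_iter=100, mixrate=1.0, seed=0):
    """Minimal backtracking search algorithm: a dual-population method with
    one control parameter (F) for the search-direction amplitude."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    old = rng.uniform(lo, hi, (pop_size, dim))        # historical population
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(n_iter):
        # Selection-I: occasionally redirect the historical population, then shuffle.
        if rng.random() < rng.random():
            old = pop.copy()
        old = old[rng.permutation(pop_size)]
        # Mutation: F controls the amplitude of the search-direction matrix (old - pop).
        F = 3.0 * rng.standard_normal()
        mutant = pop + F * (old - pop)
        # Crossover: a binary map decides which dimensions come from the mutant.
        mapm = np.zeros((pop_size, dim), dtype=bool)
        for i in range(pop_size):
            if rng.random() < rng.random():
                k = rng.random(dim) < mixrate * rng.random()
                mapm[i] = k if k.any() else (np.arange(dim) == rng.integers(dim))
            else:
                mapm[i, rng.integers(dim)] = True
        trial = np.clip(np.where(mapm, mutant, pop), lo, hi)  # boundary control
        # Selection-II: greedy replacement of worse individuals.
        tfit = np.apply_along_axis(objective, 1, trial)
        better = tfit < fit
        pop[better], fit[better] = trial[better], tfit[better]
    b = int(np.argmin(fit))
    return pop[b], float(fit[b])
```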
The lightning search algorithm was first proposed by Shareef and his colleagues [58]; afterwards, Ali upgraded it with quantum mechanics theories to generate a quantum-inspired LSA (QLSA) [59]. The LSA optimization approach has been utilized in numerous applications [30,60,61]. The study in [30] described an LSA-based ANN home energy management scheduling controller for residential demand response strategies. The study in [62] proposed a neural network-based LSA to optimize the feedforward learning process on datasets. In [63], the author addressed finding the optimal Kp and Ki values of an LSA-based PI voltage controller and implementing it on a dSPACE controller. Table 1 lists the advantages and disadvantages of the most popular nature-inspired optimization techniques. However, not all optimization algorithms and their variants provide superior solutions to specific problems, and even the efficient optimization techniques still need further improvement to enhance their performance. Moreover, how to speed up the convergence of an algorithm remains a very challenging question, so new nature-inspired optimization techniques must be continuously developed to advance the field of computational intelligence and heuristic optimization [60,61,64–73].
Table 1. Advantages and disadvantages of the most popular nature-inspired optimization techniques.

Technique | Advantages | Disadvantages
PSO [62] | Fast convergence; capable of solving complex problems in different application domains. | Easily gets trapped in local minima; improper selection of control parameters leads to a poor solution.
GA [63] | Does not require derivative information; suitable for a large number of variables. | No guarantee of finding the global minimum; long convergence time; hard to fine-tune all the parameters (mutation rate, crossover parameters, etc.), often done by trial and error.
NNA [52] | Easy to learn and implement; obtains good results on lower-dimensional optimization problems. | Abrupt switching to the exploitation stage by quickly varying wavelength and pulse emission rate; difficult to solve high-dimensional optimization problems.
ABC [64] | Strong robustness; fast convergence and flexibility. | Premature convergence in the later search period; accuracy that in some cases cannot meet the optimal solution.
LSA [58] | Suitable for the search exploration process; benefits from the mutation and crossover strategies. | Time-consuming computation because of the dual-population algorithm; only one parameter controls the amplitude of the search direction matrix in the mutation phase; complex crossover.
EA [65] | Robust with respect to noisy evaluation functions; easy to adjust to the problem. | Usually provides only reasonably good performance; premature convergence to a local minimum.
BSA [73] | Suitable for the search exploration process; good mutation and crossover strategies. | Time-consuming computation because of the dual-population algorithm.
GSA [67] | Faster solution convergence. | Easily gets trapped in local minima; weak strategy for diversifying the algorithm's population.
FA [68] | Easy to implement; performs local searches; capable of automatic subdivision and of dealing with multimodality. | Gets trapped in several local minima; does not memorize the history of better situations and may end up missing them.
complex relations between input data and output data, known as universal approximation. Many researchers adopt ANNs to solve complex relations, for example the coexistence of cellular and WiFi networks in an unlicensed spectrum [78].
Other examples are the feed-forward probabilistic neural network (PNN) in [79] and the knowledge-based neural networks described in [80,81]. In [82] this approach was used for modeling a solar field in direct steam generation parabolic troughs. ANN is used as an optimizer in many research projects to solve bundling problems; for example, in [83] it was used to optimize a flight trajectory for rockets, and an ANN optimized the design of microwave circuits in [84]. Model-aided wireless AI embeds expert knowledge in a DNN to solve wireless system optimization and find the best ANN architecture [22]. ANNs are also used to optimize and control thin-film growth processes [85], in a sampling method for the optimal design of ANN models [86], and in a feedforward neural network optimization applied to synthesize fault tolerance [87]. ANNs together with the Xinanjiang model were employed to explore nonlinear transformations [88]. Optimized artificial neural network models for predicting chlorophyll dynamics were developed to decrease the cost of aquatic environmental in-situ monitoring and increase bloom forecasting accuracy [89]. A problem of crude oil distillation systems was solved using an ANN that optimizes heat integration [90]. ANNs solved the optimization and extraction of anthocyanins in black rice using orthogonal arrays [91] and the optimization problems in traffic light timing controllers [92]. ANN is also used as an optimizer applied to wave energy converters (WECs) to predict overtopping rates as part of a sustainable optimization of coastal or harbor defense structures and their conversion, constructing a predictive model [1]. The architecture of the artificial neural network is shown in Figure 3. Each neuron's output is an activation function applied to the weighted sum of all its inputs plus a bias, as shown in Figure 4. The bias is a constant used to adjust the weighted sum of the inputs to the neuron, while the activation functions are the powerhouse of neural networks [93–99]. During back-propagation, the network weights are updated using gradients; in a network with many hidden layers, the gradient may vanish or explode during backward propagation [100,101].
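In symbols (notation ours, matching the description of Figures 3 and 4), a neuron with inputs $x_i$, weights $w_i$, and bias $b$ computes

$$y = \varphi\Big(\sum_{i=1}^{n} w_i x_i + b\Big),$$

where $\varphi$ is the activation function. Back-propagation differentiates this composition layer by layer, which is why deep stacks of such neurons can make the gradient vanish or explode.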
Figure 3. Artificial neural network architectures with feed-forward and backpropagation algorithms.
RNNs are normally used to solve problems associated with text data, time-series data, and audio data. Because the same parameters are used across different time steps (so-called parameter sharing), fewer parameters need to be trained [102]. This can save computational time, since the gradient is computed at the last step and the error is back-propagated from the last time step to the first; the error at each time step is calculated, allowing the weights to be updated. The Elman neural network (ENN) has similar conceptual properties to RNNs,
Figure 6. Details of the convolutional neural network architecture with convolution layers.
rithm, LSA, backtracking search algorithm, etc. [46]. In this review, some common optimization algorithms that enhance the performance of neural networks are discussed in detail in the following subsections.
Table 2. Studies involving PSO for neural network design based on weights and neuron number optimization.

Neural network | Optimizer | Optimizer problem | Application improved
Dynamic MNN [111] | Adaptive PSO | To calculate the weights | Design of a dynamic modular neural network
DNN [115] | PSO | To optimize the number of hidden-layer nodes | Digital modulation recognition
ANN [117] | Simulated-annealing PSO | The initial weights and biases of the neural network are optimized | Endpoint sulfur content in Kambara reactor desulfurization
BiLSTM NN [118] | ADPSO | To optimize the hyperparameters of the BiLSTM neural network | Ship motion attitude prediction
IT2FNNs [119] | PSO & BBBC | Parameter optimization for Takagi-Sugeno-Kang (TSK) type IT2FNNs | Design of interval type-2 fuzzy neural networks (IT2FNNs)
FNN [72] | SPS-PSO | Weight optimization problem | Self-adaptive parameter and mechanism strategies
ANN [49] | PSO | To train a set of synaptic weights | To evaluate the fitness of each solution and find the best ANN design
ANN [113] | PSO | To find the optimal weights of the network | Non-linear channel equalization
ANN [116] | PSO | To optimize the number of hidden layers and neurons used and the learning rate | Global solar irradiance prediction at extremely short time intervals
CNN [114] | PSO | For hyperparameter optimization with linearly decreasing weights | CNN architecture design
ANN [120] | PSO | For an optimal number of hidden layers and learning rate | Microgrid scheduling and management
The combination of PSO and neural networks is the most common pairing of optimization algorithms with AI and is used in many applications and controllers. There is much ongoing research on this combination; for example, a PSO-based ANN was used to enhance software reliability forecasting [121], while in [122] one was used for data-based fault-tolerant control. PSO assists different types of neural networks in different ways. For example, a PSO-based BP neural network constructing a nonlinear parallel optimization model was used to solve big-data mining problems associated with financial risk management in the Internet of Things (IoT) [3]. Some applications are on a giant scale: Kambara reactor desulfurization combined ANN-based optimization with a simulated annealing algorithm plus PSO (SAPSO) to determine optimal structure parameters, such as the number of hidden layers, neurons, and activation functions, and to solve desulfurization model performance problems [117]. In [118], the problem of ship motion attitude prediction was solved using the adaptive dynamic PSO (ADPSO) algorithm and bidirectional long short-term memory (LSTM), searching for the hyperparameters of the bidirectional (BiLSTM) neural network. In [119], interval type-2 fuzzy neural networks (IT2FNNs) based on PSO and a big bang big crunch (BBBC) functional were used for parameter optimization of the Takagi-Sugeno-Kang type problem. Sadik and his co-workers successfully used a hybrid PSO-ANN algorithm for indoor and outdoor track cycling wireless sensor localization, improving the distance estimation accuracy of mobile nodes [29].
PSO optimization with AI saves lives in many biomedical applications and supports smart applications in hospitals, clinics, and therapy by assisting smart diagnoses or smart robots. Some applications in related areas can be highlighted: in [123], a hybrid ANN-PSO predicts airblast overpressure by estimating quarry blasting and its influential parameters at four granite quarry sites in Malaysia; in [124], ANN-PSO manages groundwater resources to solve groundwater management problems in France's Dore river basin; and in Western Australia, Intelligent Swarm PSO-based ANNs served as short-term traffic flow predictors for forecasting traffic flow conditions on a section of freeway [125]. In [126], a functional-link-based neural fuzzy network (FLNFN) based on hybrid cooperative PSO and a cultural algorithm was proposed for solving problems related to orthogonal polynomials and linearly independent functions in the functional expansion of functional link neural networks. In [127], PSO was enhanced with a periodic mutation strategy (PMS) and neural networks with a mutation application strategy and diversity variety for solving problems of an airfoil in transonic flow. A photovoltaic thermal nanofluid-based collector system used ANN and PSO to solve a complex non-linear relationship between input and output parameters [128]. Conversely, some researchers have used a neural network to improve PSO search performance [129–131]. Improved PSOs revolve around feed-forward ANNs, as in [31], which presents a unique evolutionary ANN algorithm called IPSONet. In [132], a neural network with a fuzzy algorithm and PSO is used as a brain-computer interface classifier for wheelchair commands, where PSO optimizes a fuzzy cross-mutated ANN (FPSOCM-ANN). A PSO combined with an ANN for data classification, the opposition-based PSO neural network (OPSONN) algorithm, was used for NN training to solve data classification problems [133]. Taguchi PSO solves high-dimensional global numerical optimization problems for ANN design concerning the tensile strength of steel bars [131]. A nonlinear neural network predictive control strategy based on tent-map chaotic PSO (TCPSO) achieved nonlinear optimization with advanced convergence and high accuracy [129]. ANN is the most common neural network and PSO the most common optimization method; for that reason, they have been used and compared in some cases with other AI or optimization techniques, for example training ANNs with hybrid PSO and cuckoo search (PSO-CS) algorithms, adopting feedforward neural networks (FNNs) to solve algorithm performance problems [130]. Table 3 presents studies involving PSO for neural network design and application enhancement.
Table 3. Studies involving PSO for neural network design and application enhancement.
Table 4. Studies involving GA for neural network design and application enhancement.

Neural network | Optimizer | Optimizer problem | Application improved
ANN [152] | PSO & GA | To overcome the training issue of local minima traps | Short-term load forecasting
ANN [137] | GA | To overcome high computational cost by using a multilayer perceptron NN | Design of anisotropic laminated composite structures
ANN [143] | GA | To determine suitable parameters for maximum weight reduction | Heat transfer analysis in perforated plate fins
ANN [138] | GA | To solve the data imbalance problem caused by simultaneous ANN optimization | Corporate bankruptcy prediction
DNN [144] | GA | To select optimal network parameters of the deep NN | Binary classification for university student admissions
ANN [139] | Crashworthiness optimization and GA | To design parameter alternatives and determine optimal combinations | Circular tubes having a functionally graded thickness
ANN [140] | GA | To find the number of hidden neurons, the bias values of hidden neurons, and the connection weights between nodes | Time-series forecasting for real-life data
ANN [145] | GA | To optimize lipase production through the ANN model | Lipase production from Penicillium roqueforti ATCC 10110 in solid-state fermentation
ANN [146] | GA | Optimum extraction parameters | Low-temperature extraction of cashew apple juice
ANN [147] | GA | To optimize the thermal efficiency, exergy efficiency, and specific net work | Transcritical power cycle with regenerator
ANN [148] | Non-dominated sorting GA | To numerically solve problems in various flat tubes for nanofluid flow analysis and regime | Nanofluid flow in flat tubes
ANN [17] | GA | To minimize the number of decoupling capacitors by reducing the differences in the input impedance | PCB decoupling
ANN [149] | GA | For automated sizing of integrated circuits in an analog circuit optimization system | Analog design space exploration
ADNN [150] | GA | To prevent prediction models from falling into local optima, with a comprehensive catenary model | Pantograph and catenary
ANN [151] | GA | To optimize the weights and thresholds of a BP neural network | Power grid investment risk problems
ANN [141] | GA | For weight optimization in a pre-specified neural network | Applied on a mobile ad-hoc network
ANN [142] | GA | To design the network architecture and select the hyperparameters for ANNs | Plasmonic waveguide systems
MLP, RBFNN & GRNN [153] | GA | Search for optimal weights | Predicting groundwater salinity
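The pattern recurring in Table 4 — a GA proposing network parameters and the trained ANN scoring them — can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (genes of two node counts in 6–30 and a learning rate in [0, 1], tournament selection, one-point crossover), not the implementation of any cited study; the dummy fitness stands in for "train the ANN and return its error".

```python
import random

def ga_search(fitness, n_gen=30, pop_size=20, p_mut=0.2, seed=0):
    """Minimal genetic algorithm over ANN hyperparameters (n1, n2, lr)."""
    rng = random.Random(seed)
    def random_ind():
        return [rng.randint(6, 30), rng.randint(6, 30), rng.uniform(0.0, 1.0)]
    def mutate(ind):
        i = rng.randrange(3)
        ind[i] = random_ind()[i]              # resample one gene
        return ind
    pop = [random_ind() for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for _ in range(n_gen):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection of two parents (lower score = better).
            p1, p2 = (min(rng.sample(range(pop_size), 3), key=lambda i: scores[i])
                      for _ in range(2))
            cut = rng.randrange(1, 3)         # one-point crossover
            child = pop[p1][:cut] + pop[p2][cut:]
            if rng.random() < p_mut:
                child = mutate(child)
            new_pop.append(child)
        pop, scores = new_pop, [fitness(ind) for ind in new_pop]
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Dummy fitness standing in for "train the ANN, return its MAE".
best, err = ga_search(lambda ind: abs(ind[0] - 22) + abs(ind[1] - 27) + ind[2])
```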
weights [155]. In [156], intrusion detection for cloud computing uses ANN neural networks with an ABC and fuzzy logic to identify normal and abnormal network traffic packets by optimizing the values of the linkage weights and biases. Deep neural networks are good for classification problems, and some studies use the ABC algorithm with DNNs; for example, in [157] the ABC algorithm searches for the hybridization parameters of a DNN structure consisting of autoencoder layers cascaded to a softmax classification layer.

Also, a modular neural network model based on the ABC algorithm has been presented for electric load forecasting with synaptic weight optimization [158]. On the other hand, some research merges neural networks with ABC to solve specific problems; for example, one study uses a swarm-inspired algorithm with an ANN to protect against dual attacks, combining the concept of the ANN as a deep learning algorithm with the swarm-based ABC optimization technique [8]. Table 5 lists studies involving ABC for neural network design and application enhancement.
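For readers unfamiliar with the ABC mechanics referenced above, here is a minimal Python sketch of the employed/onlooker/scout cycle; the colony size, the abandonment limit, and the generic objective interface are illustrative assumptions, not details of the cited studies.

```python
import numpy as np

def abc(objective, dim, bounds, n_food=10, n_iter=100, limit=20, seed=0):
    """Minimal artificial bee colony: employed bees refine food sources,
    onlookers favor good sources, scouts replace exhausted ones."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))
    fits = np.apply_along_axis(objective, 1, foods)
    trials = np.zeros(n_food, dtype=int)          # stagnation counters

    def try_neighbor(i):
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(dim)                     # perturb one dimension
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        f = objective(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                   # employed bee phase
            try_neighbor(i)
        probs = 1.0 / (1.0 + fits - fits.min())   # better fitness -> higher prob
        probs = probs / probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):  # onlooker phase
            try_neighbor(i)
        for i in range(n_food):                   # scout phase
            if trials[i] > limit:
                foods[i] = rng.uniform(lo, hi, dim)
                fits[i], trials[i] = objective(foods[i]), 0
    b = int(np.argmin(fits))
    return foods[b], float(fits[b])
```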
Table 5. Studies involving ABC for neural network design and application enhancement.
Table 6. Studies involving EA for neural network design and application enhancement.
Neural network | Optimizer | Optimizer problem | Application improved
ANN [160] | EA | To co-optimize the ANN properties | Global radiation forecasting
ANN [161] | Multiverse optimizer (MVO)/EA | To allow the ENN to solve problems encountered by ANNs | Intrusion detection systems using multiverse optimization via a benchmark dataset
ANN [162] | Self-organized genetic EA | To improve the performance efficiency and structural efficiency of the built ANN | Structure of a neural network and its implementation
ANN [163] | EA | EA for optimization and ANN for modeling | High-voltage AC systems
ANN [164] | EAs | For self-adaptive control parameters and dynamically adjusting the population size for ANN weight optimization | Unmanned aerial vehicle measurements for mobile communications
ANN [166] | EA | To adjust the weights to satisfy the differential equations | Differential equations of fractional order
ANN [165] | EA/CRO | To replace backpropagation in training neural networks | ANN architecture design
Table 7. Studies involving BSA for neural network design and application enhancement.
Table 8. Overview of a variety of optimization techniques based on neural network design and application enhancement.
Neural network | Optimizer | Optimizer problem | Application improved
FNNs [84] | SOS | For training of FNNs | UCI machine learning repository
ANN [174] | TLBO | To replace the BP with TLBO | Estimates of energy consumption in Turkey
ANN [176] | Social-spider optimization | To improve the training phase of an ANN with multilayer perceptrons | Parkinson's disease identification
DMLP, LSTM, CNN [182] | DNNs | DNNs to predict each stock's future return; DNNs are also applied to measure the risk of each stock | Portfolio optimization models utilizing the Chinese stock market
ANN [177] | NNIT & EA | To solve dynamic optimization problems | Moving peaks benchmark
DNN [178] | TSEMO & DNN | To get the number of passive components in the input and output matching networks | Designing high-power amplifier circuit topologies
RNN [40] | Metaheuristic algorithms | For the objective analytic function of a continuous optimization problem | Estimating tree structures
ANN [179] | CCG-BP | Optimizing common correntropy-based BP algorithms based on MSE | Improving training in NNs to enhance signal-to-noise ratios
DNN [180] | Deep AN | For an optimal precoding scheme | Artificial noise scheme wiretap channels
ANN [181] | DCNN | For reconstruction enhancement and reducing online prediction time | Anthropomorphic manipulators
ANN [183] | ACS | To select the input variable subsets for forecasting of electricity prices | Forecasts of short-term electricity prices in a deregulated market
The following examples enhance neural networks by optimizing their weight connections. For prediction of time series, parameter-free simplified swarm optimization (SSO) adjusts the weights in the ANN model [184]. ANN-based biogeography-based optimization (BBO) solved electrical energy forecasting problems for long-term forecasting of India's sector-wise electrical energy demand [185]. An ANN was likewise enhanced with a shuffled complex evolutionary global optimization algorithm with principal component analysis—University of California Irvine (SP-UCI) for the weight training of a feedforward ANN [186]. Another example of weight linkage optimization used a metaheuristic, the bird mating optimizer (BMO), to train feedforward ANNs [21]. A quantum-based algorithm was used to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights [187]. Neural network training with a weighting mechanism-based optimization algorithm was used to resolve some algorithms' undesirable convergence behavior and to improve Adam and AMSGrad [188]. A unified automated model generation algorithm uses optimization to automatically determine the type and topology of the mapping structure in a knowledge-based neural network model, forcing some weights of the mapping neural networks to zero while optimizing the remaining non-zero weights [88]. An Elman neural network had the connection weights between its layers trained with a whale optimization algorithm (WOA) to solve the problem of falling into local best solutions [189]. Another optimization of connection weights in neural networks using the WOA for training an ANN, verified by comparisons with the BP algorithm and other evolutionary techniques, was described in [190]. An evolutionary nonlinear adaptive filter approach via a cat swarm functional link ANN (CS-FLANN) was employed to solve unwanted noise problems by picking the optimum weights of the NN filter [191]. Cat swarm optimization (CSO) was also used to train an ANN for structure design by simultaneously optimizing the connection weights [192]. A calibration method improved the robot positional accuracy of industrial manipulators using teaching-learning-based optimization (TLBO) to optimize the weights and biases in the ANN [193]. ANN-based sparse optimization simultaneously estimates the weights and model structure of an ANN [194]. Table 9 lists optimization-based neural network weight optimization enhancements.
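The common thread in these studies is that a derivative-free optimizer adjusts a flattened weight vector instead of backpropagation. The sketch below illustrates the idea with a deliberately simple (1+1) random search standing in for WOA, CSO, BMO, and the other metaheuristics surveyed; the tiny network, the XOR task, and the step size are illustrative assumptions.

```python
import numpy as np

def mlp_forward(w, x, sizes=(2, 8, 1)):
    """Forward pass of a tiny tanh MLP whose weights live in one flat vector w."""
    i, h = 0, x
    for a, b in zip(sizes[:-1], sizes[1:]):
        W = w[i:i + a * b].reshape(a, b); i += a * b
        bias = w[i:i + b]; i += b
        h = np.tanh(h @ W + bias)
    return h

def train_by_search(X, y, sizes=(2, 8, 1), n_iter=5000, sigma=0.1, seed=0):
    """Derivative-free weight training: greedy random search, no backpropagation."""
    rng = np.random.default_rng(seed)
    n_w = sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))
    w = rng.normal(0, 0.5, n_w)
    best = np.mean((mlp_forward(w, X, sizes) - y) ** 2)
    for _ in range(n_iter):
        cand = w + rng.normal(0, sigma, n_w)   # mutate the whole weight vector
        err = np.mean((mlp_forward(cand, X, sizes) - y) ** 2)
        if err < best:                         # keep only improvements
            w, best = cand, err
    return w, best

# Example: learn XOR, a classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
w, mse = train_by_search(X, y)
```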
ANN-based path loss prediction for wireless communication networks uses a multilayer perceptron (MLP) neural network to generate low-dimensional environmental features and eliminate redundant information among similar environmental types [202]. Table 10 gives an overview of various optimization-based enhancements of neural network parameters (hidden layers, learning rate, and neurons).

Table 10. Overview of various optimization-based neural network parameter (hidden layers, learning rate, neurons, and weights) optimization enhancements.
Table 11. Studies involving neural networks for improving optimization techniques design and application enhancement.
Neural network | Optimization algorithm enhanced | Optimizer problem | Application improved
ANN [203] | GWO | ANN is applied to estimate the fitness function value of GWO | Pressurized water reactor
ANN [204] | RSW | ANN as a tool for finding the parameter optimization of RSW | Aluminum alloy welding that is sensitive to exact measurement
CNN [11] | TO | The trained CNN approximately evaluates individuals | Cross-sectional image of an interior permanent magnet motor
$$X = \begin{bmatrix} X(1,1) & \cdots & X(1,D) \\ \vdots & \ddots & \vdots \\ X(N,1) & \cdots & X(N,D) \end{bmatrix} \qquad (2)$$
ANN deep neural networks using feed-forward structures have been adopted in this study, with trainlm as the network training function. trainlm updates weight and bias values according to Levenberg-Marquardt optimization and is considered the fastest backpropagation algorithm in the Matlab toolbox, although it requires more memory than other algorithms. Two hidden layers with sigmoid activation functions are used, and the optimization is adapted to search for the number of nodes in both hidden layers as well as the optimal value of the learning rate. This optimization process uses random trial values in ANN training based on the aforementioned input and output data; the optimal trial is the one with the minimum mean absolute error (MAE). The trials for each optimization algorithm were run separately, and each optimization takes days to come up with the best parameters. All the algorithms search within the limits presented in Table 12, use random ANN trial parameters as the initial step, and in each iteration train the ANN for 10,000 epochs to evaluate the objective function. Figure 8 shows that the numbers of input and output layers are known from the data [209]. The duration of each ANN training run is unpredictable and can be long or short depending on the trial training points; training can show good or bad performance from the very early stages, but this is not certain, because it sometimes behaves differently and improves, or stalls, in the middle or at the end of the training.
net = newff(minmax(p), [N1, N2, 25], {'tansig', 'tansig', 'purelin'}, 'trainlm')
net1 = train(net, p, t)
MAE = Σ_{i=1}^{25} sum(error_i) / (4800 × 25)
if MAE < Evaluation(k), then Evaluation(k) = MAE
Figure 7. General flow chart of optimization algorithms for ANN-PSO, ANN-GA, ANN-ABC, and ANN-BSA for 100 iterations.
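To make the Figure 7 loop concrete, the following Python sketch evaluates one optimizer trial. scikit-learn's MLPRegressor (with its Adam solver) stands in for Matlab's newff/trainlm, since scikit-learn has no Levenberg-Marquardt trainer; the function name, data split, and reduced epoch count are our illustrative assumptions.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

def trial_objective(n1, n2, lr, X_train, y_train, X_val, y_val):
    """Objective evaluated for one optimizer trial: train an ANN with the
    candidate (N1, N2, LR) and return the validation MAE, as in Figure 7."""
    net = MLPRegressor(hidden_layer_sizes=(n1, n2),
                       activation='tanh',      # analogous to Matlab's tansig
                       learning_rate_init=lr,
                       max_iter=2000,          # the paper trains 10,000 epochs
                       random_state=0)
    net.fit(X_train, y_train)
    return mean_absolute_error(y_val, net.predict(X_val))
```

Each metaheuristic above needs only this function as a black box: it proposes a candidate (N1, N2, LR), receives the MAE, and keeps the best candidate found.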
Table 12. Search limits and parameters of the optimization algorithms.

Symbol | Description
p | Controller input data
t | Controller output data
iterationANN = 100 | Maximum iterations for the ANN search
Population size = 20 | Size of the population
LowerLR = 0 | Minimum value of LR
UpperLR = 1 | Maximum value of LR
LowerN1 = 6 | Minimum number of nodes in hidden layer 1
UpperN1 = 30 | Maximum number of nodes in hidden layer 1
LowerN2 = 6 | Minimum number of nodes in hidden layer 2
UpperN2 = 30 | Maximum number of nodes in hidden layer 2
In this study, the BSA objective of enhancing the ANN structure toward optimal parameters was the best among the tested techniques, minimizing the MAE to a value of 0.0062 [210]. The GA objective was 0.0080, not far from the BSA objective, while the MAE of the PSO and the ABC was greater, at 0.0144 and 0.0172, respectively, as shown in Figure 9. BSA works mainly through its crossover, which consists of two parts: the first part generates the binary matrix map(i,j), and the second part compares the population X(i,j) with the trial population; crossover is used to obtain an updated map(i,j), and this part also implements the boundary control mechanism for the trial population. As presented, enhancing the neural network can help the system enormously, and the enhanced ANN proves overwhelmingly impressive, or at least competitive; training and testing are as important as the optimal design of the ANN structure. This study also introduces a novel way of solving optimization tasks with the neural network.
Figure 9. Objectives of optimization algorithms for ANN-PSO, ANN-GA, ANN-ABC, and ANN-BSA for 100 iterations.
Pseudocode of ANN training based on optimized parameters obtained from optimization algorithms.
1: Input: (solar irradiance, wind speed, energy price, battery status, grid status, and diesel fuel status)
2: Output: ANN-Net of the binary matrix of (24 × 25)
3: N1 = optimal value obtained
4: N2 = optimal value obtained
5: LR = optimal value obtained
6: // ANN
7: Apply a feed-forward neural network (newff) with Levenberg-Marquardt training (trainlm)
8: net = newff(minmax(p), [N1, N2, 25], {'tansig', 'tansig', 'purelin'}, 'trainlm')
9: net.trainParam.epochs = 10,000
10: net.trainParam.lr = LR
11: net.trainParam.goal = 0
12: net1 = train(net, p, t)
13: gensim(net1, -1)
14: Output is an ANN-Net with input data and 25 outputs
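For readers working outside Matlab, a rough Python/Keras equivalent of this pseudocode is sketched below. Keras has no Levenberg-Marquardt trainer, so Adam is substituted for trainlm; the N1, N2, and LR values shown are the ANN-BSA results from Table 13, used purely for illustration.

```python
import tensorflow as tf

# N1, N2, LR come from the chosen optimizer (here: the ANN-BSA row of Table 13).
N1, N2, LR = 22, 27, 0.6

model = tf.keras.Sequential([
    tf.keras.layers.Dense(N1, activation='tanh', input_shape=(6,)),  # ~tansig
    tf.keras.layers.Dense(N2, activation='tanh'),                    # ~tansig
    tf.keras.layers.Dense(25, activation='linear'),                  # ~purelin
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LR), loss='mse')
# model.fit(p, t, epochs=10000)  # p: 6-column inputs, t: 25-column binary targets
```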
The optimal enhanced net of ANN-BSA in a Matlab Simulink block is shown in Figure 10, involving six inputs and twenty-five binary outputs on an hourly basis to manage the distributed generators throughout the virtual power plant system. The net block is generated after training is complete by using Equation (3). Table 13 presents the ANN training based on PSO, GA, ABC, and BSA using the optimized parameters. The generated ANN Net module is an AI controller and a masterpiece of a smart controller: it could be implemented on cheap microchips and used as a smart device to control huge systems effectively and cheaply.
Table 13. Artificial neural network training-based PSO, GA, ABC, and BSA using the optimized parameters obtained.
Optimization algorithm | No. of nodes in hidden layer 1 (N1) | No. of nodes in hidden layer 2 (N2) | Learning rate (LR) | Training time | Training performance (MSE)
PSO | 18 | 30 | 0.7 | 20:00:48 | 3.99e-06
GA | 23 | 28 | 0.6 | 20:31:36 | 5.46e-06
ABC | 26 | 29 | 0.45 | 30:32:29 | 2.52e-05
BSA | 22 | 27 | 0.6 | 4:30:29 | 6.37e-07
The following figures represent the training performance and regressions of the ANN deep neural network using the optimization algorithms' optimal parameters. This study shows a fair comparison of how each optimization technique finds the best parameters to serve the system. These hybrid techniques can save huge trial-and-error time during training by finding the required best parameters and using smaller nets, saving valuable time during training and testing. Any of the optimization algorithms used can give better results than manual parameter tuning, yet some techniques find the best fitness faster and more efficiently than others. As shown in Figure 11, ANN-BSA achieves the best training performance of 6.3695 × 10−7 at 2317 epochs and a regression (R) reaching the best value of 1, the best result training can obtain.
Figure 11. (a) Performance and (b) regression of ANN training after applying optimal parameters of ANN-BSA.
The other optimization techniques, trained for 10,000 epochs on their optimal parameters, achieved good results of a similar order. In Figure 12, ANN-GA shows a best training performance of 5.4579 × 10−6 and a regression (R) reaching 0.99999, very close to unity. Figures 13 and 14 show best training performances of 3.9938 × 10−6 and 2.5178 × 10−5 and regressions of 0.99999 and 0.99995 for ANN-PSO and ANN-ABC, respectively [210].

For a fair comparison, results at Bus 1 of the 14-bus IEEE test system for virtual power plants utilize the optimized ANN nets based on half-hour binary patterns to manage each distributed generation (DG) unit in the system. Binary ANN-BPSO, ANN-BABC, ANN-BGA, and ANN-BBSA are controllers with binary outputs of 0 or 1 that switch each DG on or off based on the inputs. Figure 15 shows that all the algorithms saved a huge amount of power; all the saved power was considered as sharing new distributed resources that inject power to the loads instead of supplying power from the utility grid [212]. Most of the optimized nets did an excellent job, but some nets are better than others based on their objectives: the total power over 24 h of the ANN-BBSA net was 1182.5 MW, compared with 1211.3 MW, 1184.3 MW, and 1252.9 MW for ANN-BGA, ANN-BPSO, and ANN-BABC, respectively.
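At inference time, these binary controllers reduce to thresholding the net's 25 outputs into on/off commands, roughly as in the sketch below; the 0.5 threshold is our assumption, since the cited works emit binary patterns directly.

```python
import numpy as np

def dispatch(net_outputs, threshold=0.5):
    """Turn the 25 continuous net outputs into ON/OFF commands for the DG
    units: 1 switches a unit on, 0 switches it off."""
    return (np.asarray(net_outputs) > threshold).astype(int)

# Example: three of the hourly outputs -> binary switching pattern.
commands = dispatch([0.93, 0.08, 0.61])   # -> array([1, 0, 1])
```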
Figure 12. (a) Performance and (b) regression of ANN training after applying optimal parameters of ANN-GA.
Figure 13. (a) Performance and (b) regression of ANN training after applying optimal parameters of ANN-PSO.
Figure 14. (a) Performance and (b) regression of ANN training after applying optimal parameters of ANN-ABC.
Figure 15. Original Bus1 of 14-bus IEEE test system compared to ANN-based binary optimization algorithms ANN-BPSO,
ANN-BABC, ANN-BGA, and ANN-BBSA.
Table 14. Comparison of the proposed technique with other techniques for enhancing neural networks by finding the optimal number of nodes in the hidden layers and the learning rate.
Table 15. Overview of significant studies on NN-based optimization using node numbers in hidden layers and learning rate.
method. For quick tracking, smaller steady-state errors, and high performance, ANN techniques can be used to monitor robotic sensing and control and to achieve bidirectional power management. However, real-time data integrity, reduced operation time, expensive processing equipment, and the need for good parameter selection and manual tuning are all disadvantages. As a result, more research into selecting proper optimization methods for enhancing neural network structure design is required.
DL methods are evolving fast toward higher performance, and there are adequate review articles about the progressing algorithms in particular application domains. Future work could consider other DL methods such as denoising autoencoders, deep belief networks, and long short-term memory. Further study and review can enhance or hybridize ML with optimization techniques, random forests, Markov chain Monte Carlo, or support vector machines. Future work can also consider many optimizations to improve AI and ML and boost their performance [215–217]. Future studies can consider DL from another perspective, for example continuous or online optimization.
Abbreviations
ABC Artificial bee colony
ACO Ant colony optimization
ACS Artificial cooperative search algorithm
ADNN Adadelta deep neural networks
ADPSO Adaptive dynamic particle swarm optimization
AFSA Artificial fish swarm optimization
AI Artificial intelligence
AMARM Adaptive memetic algorithm with a rank-based mutation
AMSG Adam optimization stochastic gradient descent
ANFIS Adaptive neuro-fuzzy inference systems
ANN Artificial neural networks
ANN-ABC Artificial neural networks-based artificial bee colony
ANN-BSA Artificial neural networks-based backtracking search algorithm
ANN-BABC Artificial neural networks-based binary artificial bee colony
ANN-BBSA Artificial neural networks-based binary backtracking search algorithm
ANN-BGA Artificial neural networks-based binary genetic algorithm
ANN-BPSO Artificial neural networks-based binary particle swarm optimization
ANN-GA Artificial neural networks-based genetic algorithm
ANN-PSO Artificial neural networks-based particle swarm optimization
Aop Airblast-overpressure
AP Affinity propagation
BABC Binary artificial bee colony
BBBC Big bang big crunch
BBO Biogeography-based optimization
BBSA Binary backtracking search algorithm
BCA Bee colony algorithm
BFGS Limited-memory Broyden–Fletcher–Goldfarb–Shanno
BFO Bacteria foraging optimization
BGA Binary genetic algorithm
BMO Bird mating optimizer
BP Backpropagation
BPNN Backpropagation neural network
BPNN-PSO Backpropagation neural network-based particle swarm optimization
BPSO Binary particle swarm optimization
BSA Backtracking search algorithm
CCG-BP Correntropy-based conjugate gradient- backpropagation
CCPSO Cultural cooperative particle swarm optimization
CNN Convolutional neural networks
COA Chaotic optimization algorithm
CRO Chemical reaction optimization
CS Cuckoo search
CSA Cuckoo search algorithm
CSO Cat swarm optimization
DBBO Differential biogeography-based optimization
DCNN Deep convolutional neural networks
DG Distributed generation
DL Deep learning
DMLP Deep multilayer perceptron
DNN Deep neural networks
DOP Dynamic optimization problem
DSA Dolphin swarm algorithm
EA Evolutionary algorithms
EBP Elman backpropagation algorithm
EFA Electromagnetism-based firefly algorithm
ENN Elman neural network
FA Firefly algorithm
FLANN Functional link artificial neural networks
FLNFN Functional-link-based neural fuzzy network
FNN Feedforward neural network
GA Genetic algorithm
GAN Generative adversarial network
GNN Graph Neural Networks
GRNN Generalized regression neural network
GSA Gravitational search algorithm
GWO Grey wolf optimizer
HSA Harmony search algorithm
IT2FNN Interval type-2 fuzzy neural networks
LR Learning Rate
LSA Lightning search algorithm
LSA-ANN Lightning search algorithm-based artificial neural network
LSTM Long short-term memory
MAE Mean absolute error
MBSA Modified backtracking search algorithm
MISO Multiple-input single-output
ML Machine learning
MLP Multilayer perceptron
MNN Modular neural network
MOA Microcanonical optimization algorithm
MSE Mean square error
MVO Multiverse optimizer
NN Neural networks
NNA Neural network algorithm
NNIT Neural network-based information transfer
NNRW Neural network with random weights
NSGA Non-dominated sorting GA
OBD Optimal brain damage
OPSONN Opposition-based PSO neural network
PI Proportional integral
PID Proportional integral derivative
PL Path Loss
PMS Periodic mutation strategy
PNN Probabilistic neural network
PSO Particle swarm optimization
PSO-DNN Particle swarm optimization-based deep neural network
PV Photovoltaic
QLSA Quantum-inspired lightning search algorithm
RBF Radial basis functions
RBFNN Radial basis functions neural network
RNN Recurrent neural networks
RSW Resistance spot welding
SAPSO Simulated annealing algorithm with particle swarm optimization
SGD Stochastic gradient descent
SLFN Single-layer feed-forward network
SOS Symbiotic organisms search
SPS-PSO Self-adaptive parameters and strategy-based PSO
SSO Simplified swarm optimization
TCPSO Tent-map chaotic particle swarm optimization
TLBO Teaching–learning-based optimization algorithm
TO Topology optimization
TPSO Taguchi particle swarm optimization
TSEMO Thompson sampling efficient multi-objective optimization
UCI University of California Irvine
UCS Unconfined compressive strength
WEC Wave energy converters
WOA Whale optimization algorithm
References
1. Oliver, J.M.; Esteban, M.D.; López-Gutiérrez, J.-S.; Negro, V.; Neves, M.G. Optimizing Wave Overtopping Energy Converters
by ANN Modelling: Evaluating the Overtopping Rate Forecasting as the First Step. Sustainability 2021, 13, 1483.
https://doi.org/10.3390/su13031483.
2. Mosavi, A.; Salimi, M.; Ardabili, S.F.; Rabczuk, T.; Shamshirband, S.; Varkonyi-Koczy, A.R. State of the Art of Machine Learning
Models in Energy Systems, a Systematic Review. Energies 2019, 12, 1301. https://doi.org/10.3390/EN12071301.
3. Zhou, H.; Sun, G.; Fu, S.; Liu, J.; Zhou, X.; Zhou, J. A big data mining approach of PSO-Based BP neural network for financial
risk management with IoT. IEEE Access 2019, 7, 154035–154043. https://doi.org/10.1109/ACCESS.2019.2948949.
4. Schweidtmann, A.M.; Mitsos, A. Deterministic Global Optimization with Artificial Neural Networks Embedded. J. Optim.
Theory Appl. 2019, 180, 925–948. https://doi.org/10.1007/s10957-018-1396-0.
5. Li, T.; Chan, Y.H.; Lun, D.P.K. Improved Multiple-Image-Based Reflection Removal Algorithm Using Deep Neural Networks.
IEEE Trans. Image Process. 2021, 30, 68–79. https://doi.org/10.1109/TIP.2020.3031184.
6. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial
networks. Commun. ACM 2020, 63, 139–144. https://doi.org/10.1145/3422622.
7. Ardabili, S.; Mosavi, A.; Dehghani, M.; Várkonyi-Kóczy, A.R. Deep Learning and Machine Learning in Hydrological Processes
Climate Change and Earth Systems a Systematic Review. Lect. Notes Networks Syst. 2019, 101, 52–62. https://doi.org/10.1007/978-
3-030-36841-8_5.
8. Rani, P.; Kavita; Verma, S.; Nguyen, G.N. Mitigation of Black Hole and Gray Hole Attack Using Swarm Inspired Algorithm
with Artificial Neural Network. IEEE Access 2020, 8, 121755–121764. https://doi.org/10.1109/ACCESS.2020.3004692.
9. Milad, A.; Adwan, I.; Majeed, S.A.; Yusoff, N.I.M.; Al-Ansari, N.; Yaseen, Z.M. Emerging Technologies of Deep Learning Models
Development for Pavement Temperature Prediction. IEEE Access 2021, 9, 23840–23849.
https://doi.org/10.1109/ACCESS.2021.3056568.
10. Moayedi, H.; Bui, D.T.; Gör, M.; Pradhan, B.; Jaafari, A. The feasibility of three prediction techniques of the artificial neural
network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of
cohesive slopes. ISPRS Int. J. Geo-Inform. 2019, 8, 391. https://doi.org/10.3390/ijgi8090391.
11. Sasaki, H.; Igarashi, H. Topology optimization accelerated by deep learning. IEEE Trans. Magn. 2019, 55, 1–5.
https://doi.org/10.1109/TMAG.2019.2901906.
12. Shamshirband, S.; Mosavi, A.; Rabczuk, T.; Nabipour, N.; Chau, K. Prediction of significant wave height; comparison between
nested grid numerical model, and machine learning models of artificial neural networks, extreme learning and support vector
machines. Eng. Appl. Comput. Fluid Mech. 2020, 14, 805–817. https://doi.org/10.1080/19942060.2020.1773932.
13. Gonçalves, R.; Ribeiro, V.M.; Pereira, F.L.; Rocha, A.P. Deep learning in exchange markets. Inf. Econ. Policy 2019, 47, 38–51.
https://doi.org/10.1016/J.INFOECOPOL.2019.05.002.
14. Mosavi, A.; Ardabili, S.; Várkonyi-Kóczy, A.R. List of Deep Learning Models. Lect. Notes Netw. Syst. 2019, 101, 202–214.
https://doi.org/10.1007/978-3-030-36841-8_20.
15. Kim, K.G. Deep learning book review. Nature 2019, 29, 1–73.
16. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
17. Cecchetti, R.; de Paulis, F.; Olivieri, C.; Orlandi, A.; Buecker, M. Effective PCB Decoupling Optimization by Combining an
Iterative Genetic Algorithm and Machine Learning. Electronics 2020, 9, 1243. https://doi.org/10.3390/electronics9081243.
18. Mijwil, M.M. Artificial Neural Networks Advantages and Disadvantages. Linkedin 2018, 1–2.
https://www.linkedin.com/pulse/artificial-neural-networks-advantages-disadvantages-maad-m-mijwel/ (accessed on 2 April
2021).
19. Nabipour, N.; Dehghani, M.; Mosavi, A.; Shamshirband, S. Short-Term Hydrological Drought Forecasting Based on Different
Nature-Inspired Optimization Algorithms Hybridized with Artificial Neural Networks. IEEE Access 2020, 8, 15210–15222.
https://doi.org/10.1109/ACCESS.2020.2964584.
20. Jafarian, F.; Taghipour, M.; Amirabadi, H. Application of artificial neural network and optimization algorithms for optimizing
surface roughness, tool life and cutting forces in turning operation. J. Mech. Sci. Technol. 2013, 27, 1469–1477.
https://doi.org/10.1007/s12206-013-0327-0.
21. Askarzadeh, A.; Rezazadeh, A. Artificial neural network training using a new efficient optimization algorithm. Appl. Soft
Comput. J. 2013, 13, 1206–1213. https://doi.org/10.1016/j.asoc.2012.10.023.
22. Zappone, A.; Di Renzo, M.; Debbah, M.; Lam, T.T.; Qian, X. Model-Aided Wireless Artificial Intelligence: Embedding Expert
Knowledge in Deep Neural Networks for Wireless System Optimization. IEEE Veh. Technol. Mag. 2019, 14, 60–69.
https://doi.org/10.1109/MVT.2019.2921627.
23. Jiang, J.; Fan, J.A. Simulator-based training of generative neural networks for the inverse design of metasurfaces. Nanophotonics
2019. https://doi.org/10.1515/nanoph-2019-0330.
24. Mutlag, A.H.; Shareef, H.; Mohamed, A.; Hannan, M.A.; Abd Ali, J. An improved fuzzy logic controller design for PV inverters
utilizing differential search optimization. Int. J. Photoenergy 2014, 2014. https://doi.org/10.1155/2014/469313.
25. Aljarah, I.; Al-Zoubi, A.M.; Faris, H.; Hassonah, M.A.; Mirjalili, S.; Saadeh, H. Simultaneous Feature Selection and Support
Vector Machine Optimization Using the Grasshopper Optimization Algorithm. Cogn. Comput. 2018, 10, 478–495.
https://doi.org/10.1007/S12559-017-9542-9.
26. Ghazvinei, P.T.; Darvishi, H.H.; Mosavi, A.; bin W. Yusof, K.; Alizamir, M.; Shamshirband, S.; Chau, K. Sugarcane growth
prediction based on meteorological parameters using extreme learning machine and artificial neural network. Eng. Appl.
Comput. Fluid Mech. 2018, 12, 738–749. https://doi.org/10.1080/19942060.2018.1526119.
27. Ardabili, S.; Mosavi, A.; Várkonyi-Kóczy, A.R. Systematic Review of Deep Learning and Machine Learning Models in Biofuels
Research. Lect. Notes Netw. Syst. 2019, 101, 19–32. https://doi.org/10.1007/978-3-030-36841-8_2.
28. Taghizadeh-Mehrjardi, R.; Emadi, M.; Cherati, A.; Heung, B.; Mosavi, A.; Scholten, T. Bio-Inspired Hybridization of Artificial
Neural Networks: An Application for Mapping the Spatial Distribution of Soil Texture Fractions. Remote Sens. 2021, 13, 1025.
https://doi.org/10.3390/RS13051025.
29. Gharghan, S.K.; Nordin, R.; Ismail, M.; Ali, J.A. Accurate Wireless Sensor Localization Technique Based on Hybrid PSO-ANN
Algorithm for Indoor and Outdoor Track Cycling. IEEE Sens. J. 2016, 16, 529–541. https://doi.org/10.1109/JSEN.2015.2483745.
30. Ahmed, M.; Mohamed, A.; Homod, R.; Shareef, H. Hybrid LSA-ANN Based Home Energy Management Scheduling Controller
for Residential Demand Response Strategy. Energies 2016, 9, 716. https://doi.org/10.3390/en9090716.
31. Yu, J.; Xi, L.; Wang, S. An improved particle swarm optimization for evolving feedforward artificial neural networks. Neural
Process. Lett. 2007, 26, 217–231. https://doi.org/10.1007/s11063-007-9053-x.
32. Dineva, A.; Mosavi, A.; Ardabili, S.F.; Vajda, I.; Shamshirband, S.; Rabczuk, T.; Chau, K.-W. Review of Soft Computing Models
in Design and Control of Rotating Electrical Machines. Energies 2019, 12, 1049. https://doi.org/10.3390/EN12061049.
33. Ayub, S.; Guan, B.H.; Ahmad, F.; Oluwatobi, Y.A.; Nisa, Z.U.; Javed, M.F.; Mosavi, A. Graphene and Iron Reinforced Polymer
Composite Electromagnetic Shielding Applications: A Review. Polymers 2021, 13, 2580. https://doi.org/10.3390/POLYM13152580.
34. Ayub, S.; Guan, B.H.; Ahmad, F.; Javed, M.F.; Mosavi, A.; Felde, I. Preparation Methods for Graphene Metal and Polymer Based
Composites for EMI Shielding Materials: State of the Art Review of the Conventional and Machine Learning Methods. Metals
2021, 11, 1164. https://doi.org/10.3390/MET11081164.
35. Moayedi, H.; Mosavi, A. An Innovative Metaheuristic Strategy for Solar Energy Management through a Neural Networks
Framework. Energies 2021, 14, 1196. https://doi.org/10.3390/EN14041196.
36. Nosratabadi, S.; Mosavi, A.; Duan, P.; Ghamisi, P.; Filip, F.; Band, S.S.; Reuter, U.; Gama, J.; Gandomi, A.H. Data Science in
Economics: Comprehensive Review of Advanced Machine Learning and Deep Learning Methods. Mathematics 2020, 8, 1799.
https://doi.org/10.3390/MATH8101799.
37. Mosavi, A.; Faghan, Y.; Ghamisi, P.; Duan, P.; Ardabili, S.F.; Salwana, E.; Band, S.S. Comprehensive Review of Deep Reinforcement
Learning Methods and Applications in Economics. Mathematics 2020, 8, 1640. https://doi.org/10.3390/MATH8101640.
38. Chen, H.; Heidari, A.A.; Chen, H.; Wang, M.; Pan, Z.; Gandomi, A.H. Multi-population differential evolution-assisted Harris hawks
optimization: Framework and case studies. Futur. Gener. Comput. Syst. 2020, 111, 175–198.
https://doi.org/10.1016/J.FUTURE.2020.04.008.
39. Wang, Z.; Chen, B.; Wang, J.; Chen, C. Networked microgrids for self-healing power systems. IEEE Trans. Smart Grid 2016, 7,
310–319. https://doi.org/10.1109/TSG.2015.2427513.
40. Tian, Y.; Peng, S.; Zhang, X.; Rodemann, T.; Tan, K.C.; Jin, Y. A Recommender System for Metaheuristic Algorithms for Continuous
Optimization Based on Deep Recurrent Neural Networks. 2020. Available online:
https://ieeexplore.ieee.org/document/9187549 (accessed on 23 February 2021).
41. Balachennaiah, P.; Suryakalavathi, M.; Nagendra, P. Optimizing real power loss and voltage stability limit of a large transmission
network using firefly algorithm. Eng. Sci. Technol. Int. J. 2016, 19, 800–810. https://doi.org/10.1016/j.jestch.2015.10.008.
42. Wong, L.A.; Shareef, H.; Mohamed, A.; Ibrahim, A.A. Novel quantum-inspired firefly algorithm for optimal power quality
monitor placement. Front. Energy 2014, 8, 254–260. https://doi.org/10.1007/s11708-014-0302-1.
43. Ramli, L.; Sam, Y.M.; Mohamed, Z.; Khairi Aripin, M.; Fahezal Ismail, M.; Ramli, L. Composite nonlinear feedback control with
multi-objective particle swarm optimization for active front steering system. J. Teknol. 2015, 72, 13–20.
https://doi.org/10.11113/jt.v72.3877.
44. Lin, M.H.; Tsai, J.F.; Yu, C.S. A review of deterministic optimization methods in engineering and management. Math. Probl. Eng.
2012, 2012, 756023.
45. Bui, D.K.; Nguyen, T.N.; Ngo, T.D.; Nguyen-Xuan, H. An artificial neural network (ANN) expert system enhanced with the
electromagnetism-based firefly algorithm (EFA) for predicting the energy consumption in buildings. Energy 2020, 190, 116370.
https://doi.org/10.1016/J.ENERGY.2019.116370.
46. Hannan, M.A.; Ali, J.A.; Hossain Lipu, M.S.; Mohamed, A.; Ker, P.J.; Indra Mahlia, T.M.; Mansor, M.; Hussain, A.; Muttaqi,
K.M.; Dong, Z.Y. Role of optimization algorithms based fuzzy controller in achieving induction motor performance
enhancement. Nat. Commun. 2020, 11, 1–11. https://doi.org/10.1038/s41467-020-17623-5.
47. Hannan, M.A.; Ali, J.A.; Mohamed, A.; Hussain, A. Optimization techniques to enhance the performance of induction motor
drives: A review. Renew. Sustain. Energy Rev. 2018, 81, 1611–1626.
48. Miao, K.; Feng, Q.; Kuang, W. Particle Swarm Optimization Combined with Inertia-Free Velocity and Direction Search.
Electronics 2021, 10, 597. https://doi.org/10.3390/electronics10050597.
49. Garro, B.A.; Vázquez, R.A. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms. Comput.
Intell. Neurosci. 2015, 2015. https://doi.org/10.1155/2015/369298.
50. Conforth, M.; Meng, Y. Toward evolving Neural networks using Bio-inspired algorithms. In Proceedings of the Artificial
Intelligence and Soft Computing—ICAISC 2008, Zakopane, Poland, 22–26 June 2008; pp. 413–419.
51. Garro, B.A.; Sossa, H.; Vázquez, R.A. Back-Propagation vs Particle Swarm Optimization Algorithm: Which Algorithm is better
to adjust the Synaptic Weights of a Feed-Forward ANN? Int. J. Artif. Intell. 2011, 7, 208–218.
52. Rosli, A.D.; Adenan, N.S.; Hashim, H.; Abdullah, N.E.; Sulaiman, S.; Baharudin, R. Application of Particle Swarm Optimization
Algorithm for Optimizing ANN Model in Recognizing Ripeness of Citrus. IOP Conf. Ser. Mater. Sci. Eng. 2018, 340.
https://doi.org/10.1088/1757-899X/340/1/012015.
53. Lazzús, J.A. Neural network-particle swarm modeling to predict thermal properties. Math. Comput. Model. 2013, 57, 2408–2418.
https://doi.org/10.1016/j.mcm.2012.01.003.
54. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
https://doi.org/10.1016/j.ins.2009.03.004.
55. Do, Q.H. A hybrid Gravitational Search Algorithm and back-propagation for training feedforward neural networks. In Proceedings
of the Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2015; Volume 326, pp. 381–392.
56. Chaitanya, S.M.K.; Rajesh Kumar, P. Oppositional Gravitational Search Algorithm and Artificial Neural Network-based
Classification of Kidney Images. J. Intell. Syst. 2020, 29, 485–496. https://doi.org/10.1515/jisys-2017-0458.
57. Momeni, E.; Yarivand, A.; Dowlatshahi, M.B.; Armaghani, D.J. An efficient optimal neural network based on gravitational
search algorithm in predicting the deformation of geogrid-reinforced soil structures. Transp. Geotech. 2021, 26, 100446.
https://doi.org/10.1016/j.trgeo.2020.100446.
58. Zhang, Y.; Jin, Z.; Chen, Y. Hybrid teaching–learning-based optimization and neural network algorithm for engineering design
optimization problems. Knowl.-Based Syst. 2020, 187. https://doi.org/10.1016/j.knosys.2019.07.007.
59. Sun, Y.; Cao, M.; Sun, Y.; Gao, H.; Lou, F.; Liu, S.; Xia, Q. Uncertain data stream algorithm based on clustering RBF neural
network. Microprocess. Microsyst. 2021, 81, 103731. https://doi.org/10.1016/j.micpro.2020.103731.
60. Faris, H.; Aljarah, I.; Al-Madi, N.; Mirjalili, S. Optimizing the Learning Process of Feedforward Neural Networks Using
Lightning Search Algorithm. Int. J. Artif. Intell. Tools 2016, 25. https://doi.org/10.1142/S0218213016500330.
61. Sarker, M.R.; Mohamed, R.; Saad, M.H.M.; Mohamed, A. DSPACE Controller-based enhanced piezoelectric energy harvesting
system using PI-lightning search algorithm. IEEE Access 2019, 7. https://doi.org/10.1109/ACCESS.2018.2888912.
62. Zhang, Y.; Zhang, Y.; He, K.; Li, D.; Xu, X.; Gong, Y. Intelligent feature recognition for STEP-NC-compliant manufacturing based on
artificial bee colony algorithm and back propagation neural network. J. Manuf. Syst. 2021. https://doi.org/10.1016/j.jmsy.2021.01.018.
63. Li, C. Biodiversity assessment based on artificial intelligence and neural network algorithms. Microprocess. Microsyst. 2020, 79,
103321. https://doi.org/10.1016/j.micpro.2020.103321.
64. Civicioglu, P. Backtracking Search Optimization Algorithm for numerical optimization problems. Appl. Math. Comput. 2013,
219, 8121–8144. https://doi.org/10.1016/j.amc.2013.02.017.
65. Guha, D.; Roy, P.K.; Banerjee, S. Application of backtracking search algorithm in load frequency control of multi-area
interconnected power system. Ain Shams Eng. J. 2018, 9, 257–276. https://doi.org/10.1016/j.asej.2016.01.004.
66. Abdolrasol, M.G.M.; Mohamed, A.; Hannan, M.A. Virtual power plant and microgrids controller for energy management based
on optimization techniques. J. Electr. Syst. 2017, 13, 285–294.
67. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. J. 2015, 36, 315–333.
https://doi.org/10.1016/j.asoc.2015.07.028.
68. Abd Ali, J.; Hannan, M.; Mohamed, A. A Novel Quantum-Behaved Lightning Search Algorithm Approach to Improve the
Fuzzy Logic Speed Controller for an Induction Motor Drive. Energies 2015, 8, 13112–13136. https://doi.org/10.3390/en81112358.
69. Liu, L.; Liu, W.; Cartes, D.A. Particle swarm optimization-based parameter identification applied to permanent magnet
synchronous motors. Eng. Appl. Artif. Intell. 2008, 21, 1092–1100. https://doi.org/10.1016/j.engappai.2007.10.002.
70. Tabassum, M.; Mathew, K. A Genetic Algorithm Analysis towards Optimization solutions. Int. J. Digit. Inf. Wirel. Commun. 2014,
4, 124–142. https://doi.org/10.17781/P001091.
71. Chao, K.-H.; Hsieh, C.-C. Photovoltaic Module Array Global Maximum Power Tracking Combined with Artificial Bee Colony
and Particle Swarm Optimization Algorithm. Electronics 2019, 8, 603. https://doi.org/10.3390/ELECTRONICS8060603.
72. Xue, Y.; Tang, T.; Liu, A.X. Large-scale feedforward neural network optimization by a self-adaptive strategy and parameter
based particle swarm optimization. IEEE Access 2019, 7, 52473–52483. https://doi.org/10.1109/ACCESS.2019.2911530.
73. Chen, D.; Zou, F.; Lu, R.; Li, S. Backtracking search optimization algorithm based on knowledge learning. Inf. Sci. (Ny). 2019,
473, 202–226. https://doi.org/10.1016/J.INS.2018.09.039.
74. Sahu, R.K.; Panda, S.; Padhan, S. A novel hybrid gravitational search and pattern search algorithm for load frequency control
of nonlinear power system. Appl. Soft Comput. J. 2015, 29, 310–327. https://doi.org/10.1016/j.asoc.2015.01.020.
75. Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 2013, 1, 36.
https://doi.org/10.1504/IJSI.2013.055801.
76. Hassan, L.; Abdel-Nasser, M.; Saleh, A.; Omer, O.A.; Puig, D. Efficient Stain-Aware Nuclei Segmentation Deep Learning
Framework for Multi-Center Histopathological Images. Electronics 2021, 10, 954. https://doi.org/10.3390/electronics10080954.
77. Arora, V.; Mahla, S.K.; Leekha, R.S.; Dhir, A.; Lee, K.; Ko, H. Intervention of Artificial Neural Network with an Improved
Activation Function to Predict the Performance and Emission Characteristics of a Biogas Powered Dual Fuel Engine. Electronics
2021, 10, 584. https://doi.org/10.3390/electronics10050584.
78. Ketkar, N. Convolutional Neural Networks. In Deep Learning with Python; Apress: Berkeley, CA, USA, 2017; pp. 63–78.
79. Medsker, L.R.; Jain, L.C. Recurrent Neural Networks Design and Applications. J. Chem. Inf. Model. 2013, 53, 1689–1699.
80. Liu, J.; Gong, M.; Miao, Q.; Wang, X.; Li, H. Structure Learning for Deep Neural Networks Based on Multiobjective Optimization.
IEEE Trans. Neural Networks Learn. Syst. 2018, 29, 2450–2463. https://doi.org/10.1109/TNNLS.2017.2695223.
81. Rusek, K.; Suarez-Varela, J.; Almasan, P.; Barlet-Ros, P.; Cabellos-Aparicio, A. RouteNet: Leveraging Graph Neural Networks for
Network Modeling and Optimization in SDN. IEEE J. Sel. Areas Commun. 2020, 38, 2260–2270.
https://doi.org/10.1109/JSAC.2020.3000405.
82. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw.
2009, 20, 61–80. https://doi.org/10.1109/TNN.2008.2005605.
83. Takayama, K.; Morva, A.; Fujikawa, M.; Hattori, Y.; Obata, Y.; Nagai, T. Formula optimization of theophylline controlled-release
tablet based on artificial neural networks. J. Control. Release 2000, 68, 175–186. https://doi.org/10.1016/S0168-3659(00)00248-0.
84. Wu, H.; Zhou, Y.; Luo, Q.; Basset, M.A. Training feedforward neural networks using symbiotic organisms search algorithm.
Comput. Intell. Neurosci. 2016, 2016. https://doi.org/10.1155/2016/9063065.
85. Alsenwi, M.; Yaqoob, I.; Pandey, S.R.; Tun, Y.K.; Bairagi, A.K.; Kim, L.W.; Hong, C.S. Towards coexistence of cellular and WiFi
networks in unlicensed spectrum: A neural networks based approach. IEEE Access 2019, 7, 110023–110034.
https://doi.org/10.1109/ACCESS.2019.2933323.
86. Kusy, M.; Zajdel, R. Application of Reinforcement Learning Algorithms for the Adaptive Computation of the Smoothing
Parameter for Probabilistic Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2163–2175.
https://doi.org/10.1109/TNNLS.2014.2376703.
87. Suganthi, L.; Iniyan, S.; Samuel, A.A. Applications of fuzzy logic in renewable energy systems—A review. Renew. Sustain. Energy
Rev. 2015, 48, 585–607. https://doi.org/10.1016/j.rser.2015.04.037.
88. Na, W.; Feng, F.; Zhang, C.; Zhang, Q.J. A Unified Automated Parametric Modeling Algorithm Using Knowledge-Based Neural
Network and l1 Optimization. IEEE Trans. Microw. Theory Tech. 2017, 65, 729–745. https://doi.org/10.1109/TMTT.2016.2630059.
89. Guo, S.; Pei, H.; Wu, F.; He, Y.; Liu, D. Modeling of solar field in direct steam generation parabolic trough based on heat transfer
mechanism and artificial neural network. IEEE Access 2020, 8, 78565–78575. https://doi.org/10.1109/ACCESS.2020.2988670.
90. Do Nascimento, E.O.; De Oliveira, L.N. Numerical Optimization of Flight Trajectory for Rockets via Artificial Neural Networks.
IEEE Lat. Am. Trans. 2017, 15, 1556–1565. https://doi.org/10.1109/TLA.2017.7994806.
91. Rayas-Sánchez, J.E. EM-based optimization of microwave circuits using artificial neural networks: The state-of-the-art. IEEE
Trans. Microw. Theory Tech. 2004, 52, 420–435. https://doi.org/10.1109/TMTT.2003.820897.
92. Chaffart, D.; Ricardez-Sandoval, L.A. Optimization and control of a thin film growth process: A hybrid first principles/artificial neural
network based multiscale modelling approach. Comput. Chem. Eng. 2018, 119, 465–479.
https://doi.org/10.1016/j.compchemeng.2018.08.029.
93. Zhang, Z.; Cheng, Q.S.; Chen, H.; Jiang, F. An Efficient Hybrid Sampling Method for Neural Network-Based Microwave Component
Modeling and Optimization. IEEE Microw. Wirel. Components Lett. 2020, 30, 625–628. https://doi.org/10.1109/LMWC.2020.2995858.
94. Deodhare, D.; Vidyasagar, M.; Sathiya Keerthi, S. Synthesis of fault-tolerant feedforward neural networks using minimax
optimization. IEEE Trans. Neural Netw. 1998, 9, 891–900. https://doi.org/10.1109/72.712162.
95. Song, X.; Kong, F.; Zhan, C.; Han, J. Hybrid Optimization Rainfall-Runoff Simulation Based on Xinanjiang Model and Artificial
Neural Network. J. Hydrol. Eng. 2012, 17, 1033–1041. https://doi.org/10.1061/(asce)he.1943-5584.0000548.
96. Tian, W.; Liao, Z.; Zhang, J. An optimization of artificial neural network model for predicting chlorophyll dynamics. Ecol. Modell.
2017, 364, 42–52. https://doi.org/10.1016/j.ecolmodel.2017.09.013.
97. Ochoa-Estopier, L.M.; Jobson, M.; Smith, R. Operational optimization of crude oil distillation systems using artificial neural
networks. Comput. Chem. Eng. 2013, 59, 178–185. https://doi.org/10.1016/j.compchemeng.2013.05.030.
98. Cui, S. Artificial neural network-based optimization of extraction of anthocyanins in black rice. Food Sci. Technol. 2012, 1.
Available online: https://en.cnki.com.cn/Article_en/CJFDTotal-SSPJ201201057.htm (accessed on 18 March 2021).
99. De Oliveira, M.B.W.; De Almeida Neto, A. Optimization of traffic lights timing based on Artificial Neural Networks. In
Proceedings of the 2014 17th IEEE International Conference on Intelligent Transportation Systems, ITSC 2014, Qingdao, China, 8–11
October 2014; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2014; pp. 1921–1922.
100. Mukherjee, A.; Jain, D.K.; Goswami, P.; Xin, Q.; Yang, L.; Rodrigues, J.J.P.C. Back Propagation Neural Network Based Cluster
Head Identification in MIMO Sensor Networks for Intelligent Transportation Systems. IEEE Access 2020, 8, 28524–28532.
https://doi.org/10.1109/ACCESS.2020.2971969.
101. Rusydi, M.I.; Anandika, A.; Rahmadya, B.; Fahmy, K.; Rusydi, A. Implementation of Grading Method for Gambier Leaves Based
on Combination of Area, Perimeter, and Image Intensity Using Backpropagation Artificial Neural Network. Electronics 2019, 8,
1308. https://doi.org/10.3390/electronics8111308.
102. Yang, F.; Moayedi, H.; Mosavi, A. Predicting the Degree of Dissolved Oxygen Using Three Types of Multi-Layer Perceptron-
Based Artificial Neural Networks. Sustainability 2021, 13, 9898. https://doi.org/10.3390/SU13179898.
103. Dubey, S.R.; Chakraborty, S.; Roy, S.K.; Mukherjee, S.; Singh, S.K.; Chaudhuri, B.B. DiffGrad: An Optimization Method for
Convolutional Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 2020, 31, 4500–4511.
https://doi.org/10.1109/TNNLS.2019.2955777.
104. Zhao, L.; Hu, Z. Detection of Wildfire Smoke Images Based on a Densely Dilated Convolutional Network. Electronics 2019, 8, 1131.
https://doi.org/10.3390/electronics8101131.
105. Zhao, E.; Liu, Y.; Zhang, J.; Tian, Y. Forest Fire Smoke Recognition Based on Anchor Box Adaptive Generation Method. Electronics
2021, 10, 566. https://doi.org/10.3390/electronics10050566.
106. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165.
https://doi.org/10.1016/j.chemolab.2015.08.020.
107. Kerdphol, T.; Fuji, K.; Mitani, Y.; Watanabe, M.; Qudaih, Y. Optimization of a battery energy storage system using particle swarm
optimization for stand-alone microgrids. Int. J. Electr. Power Energy Syst. 2016, 81, 32–39. https://doi.org/10.1016/j.ijepes.2016.02.006.
108. Momeni, E.; Jahed Armaghani, D.; Hajihassani, M.; Mohd Amin, M.F. Prediction of uniaxial compressive strength of rock
samples using hybrid particle swarm optimization-based artificial neural networks. Meas. J. Int. Meas. Confed. 2015, 60, 50–63.
https://doi.org/10.1016/j.measurement.2014.09.075.
109. Xiao, G.; Juan, Z.; Zhang, C. Detecting trip purposes from smartphone-based travel surveys with artificial neural networks and
particle swarm optimization. Transp. Res. Part C Emerg. Technol. 2016, 71, 447–463. https://doi.org/10.1016/j.trc.2016.08.008.
110. Li, N.; Chen, J.; Yuan, Y.; Tian, X.; Han, Y.; Xia, M. A Wi-Fi Indoor Localization Strategy Using Particle Swarm Optimization
Based Artificial Neural Networks. Int. J. Distrib. Sens. Netw. 2016, 12, 4583147. https://doi.org/10.1155/2016/4583147.
111. Qiao, J.F.; Lu, C.; Li, W.J. Design of dynamic modular neural network based on adaptive particle swarm optimization algorithm.
IEEE Access 2018, 6, 10850–10857. https://doi.org/10.1109/ACCESS.2018.2803084.
112. Yadav, N.; Yadav, A.; Kumar, M.; Kim, J.H. An efficient algorithm based on artificial neural networks and particle swarm optimization
for solution of nonlinear Troesch’s problem. Neural Comput. Appl. 2017, 28, 171–178. https://doi.org/10.1007/s00521-015-2046-1.
113. Das, G.; Pattnaik, P.K.; Padhy, S.K. Artificial Neural Network trained by Particle Swarm Optimization for non-linear channel
equalization. Expert Syst. Appl. 2014, 41, 3491–3496. https://doi.org/10.1016/j.eswa.2013.10.053.
114. Serizawa, T.; Fujita, H. Optimization of convolutional neural network using the linearly decreasing weight particle swarm
optimization. arXiv 2020, arXiv:2001.05670.
115. Shi, W.; Liu, D.; Cheng, X.; Li, Y.; Zhao, Y. Particle Swarm Optimization-Based Deep Neural Network for Digital Modulation
Recognition. IEEE Access 2019, 7, 104591–104600. https://doi.org/10.1109/ACCESS.2019.2932266.
116. Aljanad, A.; Tan, N.M.L.; Agelidis, V.G.; Shareef, H. Neural Network Approach for Global Solar Irradiance Prediction at Extremely
Short-Time-Intervals Using Particle Swarm Optimization Algorithm. Energies 2021, 14, 1213. https://doi.org/10.3390/en14041213.
117. Wu, S.; Yang, J.; Zhang, R.; Ono, H. Prediction of Endpoint Sulfur Content in KR Desulfurization Based on the Hybrid Algorithm
Combining Artificial Neural Network with SAPSO. IEEE Access 2020, 8, 33778–33791. https://doi.org/10.1109/ACCESS.2020.2971517.
118. Zhang, G.; Tan, F.; Wu, Y. Ship Motion Attitude Prediction Based on an Adaptive Dynamic Particle Swarm Optimization Algorithm
and Bidirectional LSTM Neural Network. IEEE Access 2020, 8, 90087–90098. https://doi.org/10.1109/ACCESS.2020.2993909.
119. Wang, J.; Kumbasar, T. Parameter optimization of interval Type-2 fuzzy neural networks based on PSO and BBBC methods.
IEEE/CAA J. Autom. Sin. 2019, 6, 247–257. https://doi.org/10.1109/JAS.2019.1911348.
120. Abdolrasol, M.G.M.; Mohamed, R.; Hannan, M.A.; Al-Shetwi, A.Q.; Mansor, M.; Blaabjerg, F.G. Artificial Neural Network
Based Particle Swarm Optimization for Microgrid Optimal Energy Scheduling. IEEE Trans. Power Electron. 2021.
https://doi.org/10.1109/TPEL.2021.3074964.
121. Roy, P.; Mahapatra, G.S.; Dey, K.N. Forecasting of software reliability using neighborhood fuzzy particle swarm optimization
based novel neural network. IEEE/CAA J. Autom. Sin. 2019, 6, 1365–1383. https://doi.org/10.1109/JAS.2019.1911753.
122. Lin, H.; Zhao, B.; Liu, D.; Alippi, C. Data-based fault tolerant control for affine nonlinear systems through particle swarm optimized
neural networks. IEEE/CAA J. Autom. Sin. 2020, 7, 954–964. https://doi.org/10.1109/JAS.2020.1003225.
123. Hajihassani, M.; Jahed Armaghani, D.; Sohaei, H.; Tonnizam Mohamad, E.; Marto, A. Prediction of airblast-overpressure induced by
blasting using a hybrid artificial neural network and particle swarm optimization. Appl. Acoust. 2014, 80, 57–67.
https://doi.org/10.1016/j.apacoust.2014.01.005.
124. Gaur, S.; Ch, S.; Graillot, D.; Chahar, B.R.; Kumar, D.N. Application of Artificial Neural Networks and Particle Swarm Optimization
for the Management of Groundwater Resources. Water Resour. Manag. 2013, 27, 927–941. https://doi.org/10.1007/s11269-012-0226-7.
125. Chan, K.Y.; Dillon, T.; Chang, E.; Singh, J. Prediction of short-term traffic variables using intelligent swarm-based neural networks.
IEEE Trans. Control Syst. Technol. 2013, 21, 263–274. https://doi.org/10.1109/TCST.2011.2180386.
126. Lin, C.J.; Chen, C.H.; Lin, C.T. A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks
and its prediction applications. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2009, 39, 55–68.
https://doi.org/10.1109/TSMCC.2008.2002333.
127. Pehlivanoglu, Y.V. A new particle swarm optimization method enhanced with a periodic mutation strategy and neural
networks. IEEE Trans. Evol. Comput. 2013, 17, 436–452. https://doi.org/10.1109/TEVC.2012.2196047.
128. Kalani, H.; Sardarabadi, M.; Passandideh-Fard, M. Using artificial neural network models and particle swarm optimization for
manner prediction of a photovoltaic thermal nanofluid based collector. Appl. Therm. Eng. 2017, 113, 1170–1177.
https://doi.org/10.1016/j.applthermaleng.2016.11.105.
129. Song, Y.; Chen, Z.; Yuan, Z. New chaotic PSO-based neural network predictive control for nonlinear process. IEEE Trans. Neural
Netw. 2007, 18, 595–600. https://doi.org/10.1109/TNN.2006.890809.
130. Chen, J.F.; Do, Q.H.; Hsieh, H.N. Training artificial neural networks by a hybrid PSO-CS Algorithm. Algorithms 2015, 8, 292–
308. https://doi.org/10.3390/a8020292.
131. Chou, P.Y.; Tsai, J.T.; Chou, J.H. Modeling and Optimizing Tensile Strength and Yield Point on a Steel Bar Using an Artificial
Neural Network with Taguchi Particle Swarm Optimizer. IEEE Access 2016, 4, 585–593.
https://doi.org/10.1109/ACCESS.2016.2521162.
132. Chai, R.; Ling, S.H.; Hunter, G.P.; Tran, Y.; Nguyen, H.T. Brain-Computer Interface Classifier for Wheelchair Commands Using
Neural Network with Fuzzy Particle Swarm Optimization. IEEE J. Biomed. Health Inform. 2014, 18, 1614–1624.
https://doi.org/10.1109/JBHI.2013.2295006.
133. Bangyal, W.H.; Ahmad, J.; Rauf, H.T.; Shakir, R. Evolving artificial neural networks using opposition based particle swarm
optimization neural network for data classification. In Proceedings of the 2018 International Conference on Innovation and Intelligence
for Informatics, Computing, and Technologies, 3ICT 2018, Zallaq, Bahrain, 18–19 November 2018; Institute of Electrical and Electronics
Engineers Inc.: Piscataway, NJ, USA, 2018.
134. Whitley, D. A Genetic Algorithm Tutorial. Stat. Comput. 1994, 4, 65–85. https://doi.org/10.1007/BF00175354.
135. Daraban, S.; Petreus, D.; Morel, C. A novel MPPT (maximum power point tracking) algorithm based on a modified genetic
algorithm specialized on tracking the global maximum power point in photovoltaic systems affected by partial shading. Energy
2014, 74, 374–388. https://doi.org/10.1016/j.energy.2014.07.001.
136. Irshad, M.; Khalid, S.; Hussain, M.Z.; Sarfraz, M. Outline capturing using rational functions with the help of genetic algorithm.
Appl. Math. Comput. 2016, 274, 661–678. https://doi.org/10.1016/j.amc.2015.10.014.
137. Gomes, H.M.; Awruch, A.M.; Lopes, P.A.M. Reliability based optimization of laminated composite structures using genetic
algorithms and Artificial Neural Networks. Struct. Saf. 2011, 33, 186–195. https://doi.org/10.1016/j.strusafe.2011.03.001.
138. Kim, H.J.; Jo, N.O.; Shin, K.S. Optimization of cluster-based evolutionary undersampling for the artificial neural networks in
corporate bankruptcy prediction. Expert Syst. Appl. 2016, 59, 226–234. https://doi.org/10.1016/j.eswa.2016.04.027.
139. Baykasoǧlu, A.; Baykasoǧlu, C. Multiple objective crashworthiness optimization of circular tubes with functionally graded
thickness via artificial neural networks and genetic algorithms. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2017, 231, 2005–
2016. https://doi.org/10.1177/0954406215627181.
140. Erzurum Cicek, Z.I.; Kamisli Ozturk, Z. Optimizing the artificial neural network parameters using a biased random key genetic
algorithm for time series forecasting. Appl. Soft Comput. 2021, 102, 107091. https://doi.org/10.1016/j.asoc.2021.107091.
141. Dharmistha, M.; Vishwakarma, D. Genetic Algorithm based Weights Optimization of Artificial Neural Network. Int. J. Adv. Res.
Electr. Electron. Instrum. Energy 2013, 1, 206–211.
142. Zhang, T.; Wang, J.; Liu, Q.; Zhou, J.; Dai, J.; Han, X.; Zhou, Y.; Xu, K. Efficient spectrum prediction and inverse design for
plasmonic waveguide systems based on artificial neural networks. Photonics Res. 2019, 7, 368–380.
143. Chidambaram, B.; Ravichandran, M.; Seshadri, A.; Muniyandi, V. Computational Heat Transfer Analysis and Genetic
Algorithm-Artificial Neural Network-Genetic Algorithm-Based Multiobjective Optimization of Rectangular Perforated Plate
Fins. IEEE Trans. Components, Packag. Manuf. Technol. 2017, 7, 208–216. https://doi.org/10.1109/TCPMT.2016.2646718.
144. Efosa, C.I.; Kingsley, C.U. Architecture Optimization Model for the Deep Neural Network for Binary Classification Problems.
i-Manager's J. Softw. Eng. 2019, 14, 18. https://doi.org/10.26634/jse.14.2.17162.
145. Sales de Menezes, L.H.; Carneiro, L.L.; Maria de Carvalho Tavares, I.; Santos, P.H.; Pereira das Chagas, T.; Mendes, A.A.;
Paranhos da Silva, E.G.; Franco, M.; Rangel de Oliveira, J. Artificial neural network hybridized with a genetic algorithm for
optimization of lipase production from Penicillium roqueforti ATCC 10110 in solid-state fermentation. Biocatal. Agric. Biotechnol.
2021, 31, 101885. https://doi.org/10.1016/j.bcab.2020.101885.
146. Abdullah, S.; Pradhan, R.C.; Pradhan, D.; Mishra, S. Modeling and optimization of pectinase-assisted low-temperature
extraction of cashew apple juice using artificial neural network coupled with genetic algorithm. Food Chem. 2021, 339, 127862.
https://doi.org/10.1016/j.foodchem.2020.127862.
147. Rashidi, M.M.; Bég, O.A.; Parsa, A.B.; Nazari, F. Analysis and optimization of a transcritical power cycle with regenerator using
artificial neural networks and genetic algorithms. Proc. Inst. Mech. Eng. Part A J. Power Energy 2011, 225, 701–717.
https://doi.org/10.1177/0957650911407700.
148. Safikhani, H.; Abbassi, A.; Khalkhali, A.; Kalteh, M. Multi-objective optimization of nanofluid flow in flat tubes using CFD, Artificial
Neural Networks and genetic algorithms. Adv. Powder Technol. 2014, 25, 1608–1617. https://doi.org/10.1016/j.apt.2014.05.014.
149. Li, Y.; Wang, Y.; Li, Y.; Zhou, R.; Lin, Z. An Artificial Neural Network Assisted Optimization System for Analog Design Space
Exploration. IEEE Trans. Comput. Des. Integr. Circuits Syst. 2020, 39, 2640–2653. https://doi.org/10.1109/TCAD.2019.2961322.
150. Qu, Z.; Yuan, S.; Chi, R.; Chang, L.; Zhao, L. Genetic optimization method of pantograph and catenary comprehensive monitor status
prediction model based on adadelta deep neural network. IEEE Access 2019, 7, 23210–23221.
https://doi.org/10.1109/ACCESS.2019.2899074.
151. Jiang, Q.; Huang, R.; Huang, Y.; Chen, S.; He, Y.; Lan, L.; Liu, C. Application of BP neural network based on genetic algorithm
optimization in evaluation of power grid investment risk. IEEE Access 2019, 7, 154827–154835.
https://doi.org/10.1109/ACCESS.2019.2944609.
152. Abeyrathna, K.D.; Jeenanunta, C. Hybrid particle swarm optimization with genetic algorithm to train artificial neural networks
for short-term load forecasting. Int. J. Swarm Intell. Res. 2019, 10, 1–14. https://doi.org/10.4018/IJSIR.2019010101.
153. Barzegar, R.; Asghari Moghaddam, A. Combining the advantages of neural networks using the concept of committee machine
in the groundwater salinity prediction. Model. Earth Syst. Environ. 2016, 2, 1–13. https://doi.org/10.1007/s40808-015-0072-8.
154. Kayabasi, A. An Application of ANN Trained by ABC Algorithm for Classification of Wheat Grains. Int. J. Intell. Syst. Appl.
Eng. 2018, 6, 85–91. https://doi.org/10.18201/IJISAE.2018637936.
155. Awan, S.M.; Aslam, M.; Khan, Z.A.; Saeed, H. An efficient model based on artificial bee colony optimization algorithm with
Neural Networks for electric load forecasting. Neural Comput. Appl. 2014, 25, 1967–1978. https://doi.org/10.1007/s00521-014-1685-y.
156. Hajimirzaei, B.; Navimipour, N.J. Intrusion detection for cloud computing using neural networks and artificial bee colony
optimization algorithm. ICT Express 2019, 5, 56–59. https://doi.org/10.1016/j.icte.2018.01.014.
157. Badem, H.; Basturk, A.; Caliskan, A.; Yuksel, M.E. A new efficient training strategy for deep neural networks by hybridization
of artificial bee colony and limited-memory BFGS optimization algorithms. Neurocomputing 2017, 266, 506–526.
https://doi.org/10.1016/j.neucom.2017.05.061.
158. Zhuo-Ming, C.; Yun-Xia, W.; Wei-Xin, L.; Zhen, X.; Han-Lin-Wei, X. Artificial Bee Colony Algorithm for Modular Neural Network;
Springer: Berlin/Heidelberg, Germany, 2013; pp. 350–356.
159. Ding, S.; Li, H.; Su, C.; Yu, J.; Jin, F. Evolutionary artificial neural networks: A review. Artif. Intell. Rev. 2013, 39, 251–260.
160. Kılıç, F.; Yılmaz, İ.H.; Kaya, Ö. Adaptive Co-Optimization of Artificial Neural Networks using Evolutionary Algorithm for
Global Radiation Forecasting. Renew. Energy 2021. https://doi.org/10.1016/j.renene.2021.02.074.
161. Benmessahel, I.; Xie, K.; Chellal, M. A new evolutionary neural networks based on intrusion detection systems using multiverse
optimization. Appl. Intell. 2018, 48, 2315–2327. https://doi.org/10.1007/s10489-017-1085-y.
162. Chai, Z.; Yang, X.; Liu, Z.; Lei, Y.; Zheng, W.; Ji, M.; Zhao, J. Correlation Analysis-Based Neural Network Self-Organizing
Genetic Evolutionary Algorithm. IEEE Access 2019, 7, 135099–135117. https://doi.org/10.1109/ACCESS.2019.2942035.
163. Nassif, N. Modeling and optimization of HVAC systems using artificial neural network and genetic algorithm. Build. Simul.
2014, 7, 237–245. https://doi.org/10.1007/s12273-013-0138-3.
164. Goudos, S.K.; Tsoulos, G.V.; Athanasiadou, G.; Batistatos, M.C.; Zarbouti, D.; Psannis, K.E. Artificial Neural Network Optimal
Modeling and Optimization of UAV Measurements for Mobile Communications Using the L-SHADE Algorithm. IEEE Trans.
Antennas Propag. 2019, 67, 4022–4031. https://doi.org/10.1109/TAP.2019.2905665.
165. Yu, J.J.Q.; Lam, A.Y.S.; Li, V.O.K. Evolutionary artificial neural network based on Chemical Reaction Optimization. In Proceedings
of the 2011 IEEE Congress of Evolutionary Computation, CEC 2011, New Orleans, LA, USA, 5–8 June 2011; pp. 2083–2090.
166. Pakdaman, M.; Ahmadian, A.; Effati, S.; Salahshour, S.; Baleanu, D. Solving differential equations of fractional order using an
optimization technique based on training artificial neural network. Appl. Math. Comput. 2017, 293, 81–95.
https://doi.org/10.1016/j.amc.2016.07.021.
167. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Saad, M.H.; Ayob, A. Neural network approach for estimating state of charge of lithium-
ion battery using backtracking search algorithm. IEEE Access 2018, 6, 10069–10079. https://doi.org/10.1109/ACCESS.2018.2797976.
168. Wang, B.; Wang, L.; Yin, Y.; Xu, Y.; Zhao, W.; Tang, Y. An Improved Neural Network with Random Weights Using Backtracking
Search Algorithm. Neural Process. Lett. 2016, 44, 37–52. https://doi.org/10.1007/s11063-015-9480-z.
169. Chen, D.; Lu, R.; Zou, F.; Li, S.; Wang, P. A learning and niching based backtracking search optimisation algorithm and its applications
in global optimisation and ANN training. Neurocomputing 2017, 266, 579–594. https://doi.org/10.1016/j.neucom.2017.05.076.
170. Wu, S.; Wang, Z.; Ling, D. Echo state network prediction based on backtracking search optimization algorithm. In Proceedings
of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference, ITNEC 2019, Chengdu, China,
15–17 March 2019; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2019; pp. 661–664.
171. Hannan, M.A.; Mohamed, R.; Abdolrasol, M.G.M.; Al-Shetwi, A.Q.; Ker, P.J.; Begum, R.A.; Muttaqi, K.M. ANN based binary
backtracking search algorithm for virtual power plant scheduling and cost-effective evaluation. In Proceedings of the 2021
IEEE Texas Power and Energy Conference, College Station, TX, USA, 2–5 February 2021.
172. Meng, A.; Ge, J.; Yin, H.; Chen, S. Wind speed forecasting based on wavelet packet decomposition and artificial neural networks
trained by crisscross optimization algorithm. Energy Convers. Manag. 2016, 114, 75–88. https://doi.org/10.1016/j.enconman.2016.02.013.
173. Lehký, D.; Slowik, O.; Novák, D. Reliability-based design: Artificial neural networks and double-loop reliability-based
optimization approaches. Adv. Eng. Softw. 2018, 117, 123–135. https://doi.org/10.1016/j.advengsoft.2017.06.013.
174. Uzlu, E.; Kankal, M.; Akpinar, A.; Dede, T. Estimates of energy consumption in Turkey using neural networks with the teaching-
learning-based optimization algorithm. Energy 2014, 75, 295–303. https://doi.org/10.1016/j.energy.2014.07.078.
175. Shi, H.; Li, W. Artificial neural networks with ant colony optimization for assessing performance of residential buildings. In
Proceedings of the FBIE 2009—2009 International Conference on Future BioMedical Information Engineering, Sanya, China,
13–14 December 2009; pp. 379–382.
176. Pereira, L.A.M.; Rodrigues, D.; Ribeiro, P.B.; Papa, J.P.; Weber, S.A.T. Social-spider optimization-based artificial neural
networks training and its applications for Parkinson’s Disease identification. In Proceedings of the IEEE Symposium on Computer-
Based Medical Systems, New York, NY, USA, 27–29 May 2014; Institute of Electrical and Electronics Engineers Inc.: Piscataway,
NJ, USA, 2014; pp. 14–17.
177. Liu, X.F.; Zhan, Z.H.; Gu, T.L.; Kwong, S.; Lu, Z.; Duh, H.B.L.; Zhang, J. Neural Network-Based Information Transfer for Dynamic
Optimization. IEEE Trans. Neural Networks Learn. Syst. 2020, 31, 1557–1570. https://doi.org/10.1109/TNNLS.2019.2920887.
178. Kouhalvandi, L.; Ceylan, O.; Ozoguz, S. Automated Deep Neural Learning-Based Optimization for High Performance High
Power Amplifier Designs. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 67, 4420–4433. https://doi.org/10.1109/TCSI.2020.3008947.
179. Heravi, A.R.; Abed Hodtani, G. A new correntropy-based conjugate gradient backpropagation algorithm for improving training
in neural networks. IEEE Trans. Neural Networks Learn. Syst. 2018, 29, 6252–6263. https://doi.org/10.1109/TNNLS.2018.2827778.
180. Yun, S.; Kang, J.M.; Kim, I.M.; Ha, J. Deep Artificial Noise: Deep Learning-Based Precoding Optimization for Artificial Noise
Scheme. IEEE Trans. Veh. Technol. 2020, 69, 3465–3469. https://doi.org/10.1109/TVT.2020.2965959.
181. Su, H.; Qi, W.; Yang, C.; Aliverti, A.; Ferrigno, G.; De Momi, E. Deep neural network approach in human-like redundancy optimization
for anthropomorphic manipulators. IEEE Access 2019, 7, 124207–124216. https://doi.org/10.1109/ACCESS.2019.2937380.
182. Ma, Y.; Han, R.; Wang, W. Prediction-Based Portfolio Optimization Models Using Deep Neural Networks. IEEE Access 2020, 8,
115393–115405. https://doi.org/10.1109/ACCESS.2020.3003819.
183. Pourdaryaei, A.; Mokhlis, H.; Illias, H.A.; Kaboli, S.H.A.; Ahmad, S.; Ang, S.P. Hybrid ANN and artificial cooperative search
algorithm to forecast short-term electricity price in de-regulated electricity market. IEEE Access 2019, 7, 125369–125386.
https://doi.org/10.1109/ACCESS.2019.2938842.
184. Yeh, W.C. New parameter-free simplified swarm optimization for artificial neural network training and its application in the
prediction of time series. IEEE Trans. Neural Networks Learn. Syst. 2013, 24, 661–665. https://doi.org/10.1109/TNNLS.2012.2232678.
185. Kumaran, J.; Ravi, G. Long-term sector-wise electrical energy forecasting using artificial neural network and biogeography-
based optimization. Electr. Power Components Syst. 2015, 43, 1225–1235. https://doi.org/10.1080/15325008.2015.1028115.
186. Yang, T.; Asanjan, A.A.; Faridzad, M.; Hayatbini, N.; Gao, X.; Sorooshian, S. An enhanced artificial neural network with a
shuffled complex evolutionary global optimization with principal component analysis. Inf. Sci. 2017, 418–419, 302–316.
https://doi.org/10.1016/j.ins.2017.08.003.
187. Lu, T.C.; Yu, G.R.; Juang, J.C. Quantum-based algorithm for optimizing artificial neural networks. IEEE Trans. Neural Networks
Learn. Syst. 2013, 24, 1266–1278. https://doi.org/10.1109/TNNLS.2013.2249089.
188. Yu, Y.; Liu, F. Effective Neural Network Training with a New Weighting Mechanism-Based Optimization Algorithm. IEEE
Access 2019, 7, 72403–72410. https://doi.org/10.1109/ACCESS.2019.2919987.
189. Sun, W.Z.; Wang, J.S. Elman Neural Network Soft-Sensor Model of Conversion Velocity in Polymerization Process Optimized
by Chaos Whale Optimization Algorithm. IEEE Access 2017, 5, 13062–13076. https://doi.org/10.1109/ACCESS.2017.2723610.
190. Aljarah, I.; Faris, H.; Mirjalili, S. Optimizing connection weights in neural networks using the whale optimization algorithm.
Soft Comput. 2018, 22. https://doi.org/10.1007/s00500-016-2442-1.
191. Kumar, M.; Mishra, S.K.; Sahu, S.S. Cat Swarm Optimization Based Functional Link Artificial Neural Network Filter for
Gaussian Noise Removal from Computed Tomography Images. Appl. Comput. Intell. Soft Comput. 2016, 2016, 1–6.
https://doi.org/10.1155/2016/6304915.
192. Yusiong, J.P.T. Optimizing Artificial Neural Networks using Cat Swarm Optimization Algorithm. Int. J. Intell. Syst. Appl. 2012,
5, 69–80. https://doi.org/10.5815/ijisa.2013.01.07.
193. Le, P.N.; Kang, H.J. Robot Manipulator Calibration Using a Model Based Identification Technique and a Neural Network with
the Teaching Learning-Based Optimization. IEEE Access 2020, 8, 105447–105454. https://doi.org/10.1109/ACCESS.2020.2999927.
194. Manngård, M.; Kronqvist, J.; Böling, J.M. Structural learning in artificial neural networks using sparse optimization.
Neurocomputing 2018, 272, 660–667. https://doi.org/10.1016/j.neucom.2017.07.028.
195. Jian-wei, M. Optimization of Feed-Forward Neural Networks based on Artificial Fish-Swarm Algorithm. Comput. Eng. 2004.
Available online: https://www.semanticscholar.org/paper/Optimization-of-feed-forward-neural-networks-based-Jian-wei/663ea00fe44a17a89da2838c774ebe665dedcabf (accessed on 15 March 2021).
196. Kim, T.; Lee, J.; Choe, Y. Bayesian optimization-based global optimal rank selection for compression of convolutional neural
networks. IEEE Access 2020, 8, 17605–17618. https://doi.org/10.1109/ACCESS.2020.2968357.
197. Petro, B.; Kasabov, N.; Kiss, R.M. Selection and Optimization of Temporal Spike Encoding Methods for Spiking Neural
Networks. IEEE Trans. Neural Networks Learn. Syst. 2020, 31, 358–370. https://doi.org/10.1109/TNNLS.2019.2906158.
198. Gulcu, A.; Kus, Z. Hyper-Parameter Selection in Convolutional Neural Networks Using Microcanonical Optimization
Algorithm. IEEE Access 2020, 8, 52528–52540. https://doi.org/10.1109/ACCESS.2020.2981141.
199. Wang, H.; Luo, Y.; An, W.; Sun, Q.; Xu, J.; Zhang, L. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural
Networks. IEEE Trans. Neural Networks Learn. Syst. 2020, 31, 5079–5091. https://doi.org/10.1109/TNNLS.2019.2963066.
200. Zheng, Y.J.; Ling, H.F.; Chen, S.Y.; Xue, J.Y. A Hybrid Neuro-Fuzzy Network Based on Differential Biogeography-Based Optimization
for Online Population Classification in Earthquakes. IEEE Trans. Fuzzy Syst. 2015, 23, 1070–1083.
https://doi.org/10.1109/TFUZZ.2014.2337938.
201. Sheng, W.; Shan, P.; Mao, J.; Zheng, Y.; Chen, S.; Wang, Z. An Adaptive Memetic Algorithm with Rank-Based Mutation for Artificial
Neural Network Architecture Optimization. IEEE Access 2017, 5, 18895–18908. https://doi.org/10.1109/ACCESS.2017.2752901.
202. Wu, L.; He, D.; Ai, B.; Wang, J.; Qi, H.; Guan, K.; Zhong, Z. Artificial Neural Network Based Path Loss Prediction for Wireless
Communication Network. IEEE Access 2020, 8, 199523–199538. https://doi.org/10.1109/access.2020.3035209.
203. Naserbegi, A.; Aghaie, M.; Mahmoudi, S.M. PWR core pattern optimization using grey wolf algorithm based on artificial neural
network. Prog. Nucl. Energy 2020, 129, 103505. https://doi.org/10.1016/j.pnucene.2020.103505.
204. Arunchai, T.; Sonthipermpoon, K.; Apichayakul, P.; Tamee, K. Resistance Spot Welding Optimization Based on Artificial Neural
Network. Int. J. Manuf. Eng. 2014, 2014. https://doi.org/10.1155/2014/154784.
205. Abdolrasol, M.G.M.; Hannan, M.A.; Hussain, S.M.S.; Ustun, T.S.; Sarker, M.R.; Ker, P.J. Energy Management Scheduling for
Microgrids in the Virtual Power Plant System Using Artificial Neural Networks. Energies 2021, 14, 6507.
https://doi.org/10.3390/EN14206507.
206. Hannan, M.A.; Abdolrasol, M.G.M.; Faisal, M.; Ker, P.J.; Begum, R.A.; Hussain, A. Binary Particle Swarm Optimization for Scheduling
MG Integrated Virtual Power Plant Toward Energy Saving. IEEE Access 2019, 7. https://doi.org/10.1109/ACCESS.2019.2933010.
207. Ahmed, M.S.; Mohamed, A.; Khatib, T.; Shareef, H.; Homod, R.Z.; Ali, J.A. Real time optimal schedule controller for home
energy management system using new binary backtracking search algorithm. Energy Build. 2017, 138, 215–227.
https://doi.org/10.1016/j.enbuild.2016.12.052.
208. Abdolrasol, M.G.M.; Hannan, M.A.; Mohamed, A.; Amiruldin, U.A.U.; Abidin, I.B.Z.; Uddin, M.N. An Optimal Scheduling
Controller for Virtual Power Plant and Microgrid Integration Using the Binary Backtracking Search Algorithm. IEEE Trans. Ind.
Appl. 2018, 54, 2834–2844.
209. Roslan, M.F.; Hannan, M.A.; Jern Ker, P.; Begum, R.A.; Indra Mahlia, T.M.; Dong, Z.Y. Scheduling controller for microgrids
energy management system using optimization algorithm in achieving cost saving and emission reduction. Appl. Energy 2021,
292, 116883. https://doi.org/10.1016/J.APENERGY.2021.116883.
210. Hannan, M.A.; Abdolrasol, M.G.M.; Mohamed, R.; Al-Shetwi, A.Q.; Ker, P.J.; Begum, R.A.; Muttaqi, K.M. ANN based Binary
Backtracking Search Algorithm for VPP Optimal Scheduling and Cost-Effective Evaluation. IEEE Trans. Ind. Appl. 2021.
https://doi.org/10.1109/TIA.2021.3100321.
211. ANN Based Binary Backtracking Search Algorithm for Virtual Power Plant Scheduling and Cost-Effective Evaluation. IEEE
Conference Publication, IEEE Xplore. Available online: https://ieeexplore.ieee.org/abstract/document/9384923 (accessed on 17 April 2021).
212. Hannan, M.A.; Begum, R.A.; Abdolrasol, M.G.; Hossain Lipu, M.S.; Mohamed, A.; Rashid, M.M. Review of baseline studies on
energy policies and indicators in Malaysia for future sustainable energy development. Renew. Sustain. Energy Rev. 2018, 94, 551–
564. https://doi.org/10.1016/j.rser.2018.06.041.
213. Safari, A.; Babaei, F.; Farrokhifar, M. A load frequency control using a PSO-based ANN for micro-grids in the presence of electric
vehicles. Int. J. Ambient. Energy 2021, 42, 688–700. https://doi.org/10.1080/01430750.2018.1563811.
214. Shabbir, J.; Anwer, T. Artificial Intelligence and its Role in Near Future. arXiv 2018, arXiv:1804.01396.
215. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft
Comput. 2020, 88, 105946. https://doi.org/10.1016/J.ASOC.2019.105946.
216. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame
optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728.
https://doi.org/10.1016/J.KNOSYS.2020.106728.
217. Tu, J.; Chen, H.; Liu, J.; Heidari, A.A.; Zhang, X.; Wang, M.; Ruby, R.; Pham, Q.V. Evolutionary biogeography-based whale
optimization methods with communication structure: Towards measuring the balance. Knowl.-Based Syst. 2021, 212, 106642.
https://doi.org/10.1016/J.KNOSYS.2020.106642.