A Comprehensive Study on Learning Strategies of Optimization Algorithms and Its Applications

Syed Muzamil Basha, School of Computer Science & Engineering, REVA University, Bangalore, India. muzamilbasha.s@reva.edu.in (ORCID 0000-0002-1169-3151)
Ravi Kumar Poluru, Department of Information Technology, Institute of Aeronautical Engineering, Hyderabad, India. p.ravikumar@iare.ac.in
Syed Thouheed Ahmed, School of Computing and Information Technology, REVA University, Bangalore, India. syedthouheed.ahmed@reva.edu.in
Anwar Basha H, School of Computer Science & Engineering, REVA University, Bangalore, India. anwar.mtech@gmail.com

2022 8th International Conference on Smart Structures and Systems (ICSSS) | 978-1-6654-9761-9/22/$31.00 ©2022 IEEE | DOI: 10.1109/ICSSS54381.2022.9782200
Authorized licensed use limited to: REVA UNIVERSITY. Downloaded on June 24, 2022 at 05:00:22 UTC from IEEE Xplore. Restrictions apply.
On the other hand, a heuristic search technique can be implemented easily, with much lower space and time complexity. The performance of heuristic algorithms such as A*, hill climbing, and random search depends entirely on the heuristic function, as discussed in Eq. 2:

f(x) = g(x) + h(x)    (2)

where g(x) is the expected computation cost function and h(x) is the heuristic function that gives the heuristic value of each operation mapping an input to the action to be performed. The advantage of heuristic algorithms lies in finding a solution close to the optimal one. The technique behind implementing such algorithms is estimating the local and global optimal values. It is a challenging task to define a heuristic function with both properties, admissibility and consistency. The heuristic function h(n) is admissible if, for all nodes n, Eq. 3 holds:

h(n) ≤ h*(n)    (3)

where h*(n) is the true cost to reach the goal state from node n. The consistency property of h(n) is defined as stated in Eq. 4:

h(n) ≤ c(n, a, n′) + h(n′)    (4)

where n′ is a successor of node n and c(n, a, n′) is the cost of reaching n′ from n through action a.

The contributions made in the present research work are:
1. The research carried out on optimization algorithms from 2017 to 2020 is presented, along with the applications and limitations.
2. The problem of the space and time complexity of the searching techniques adopted in problem solving is addressed.
3. A study is made of the different learning strategies of a learning agent.

The organization of this paper is as follows. The Introduction section presents the terminology required to understand the problem addressed. The Literature Review section covers the past and current trends in understanding the role of learning strategies in improving the performance of optimization algorithms. The Methodology section describes the details of the experiment carried out to estimate the performance of the learning strategies. The Results and Discussion section presents the findings of the research carried out in this paper. The Conclusion section summarizes the challenges addressed in the study along with the future scope.

II. LITERATURE REVIEW

In many applications, effective decision making depends on useful insights derived from the dataset; customer behavior, for instance, can be predicted from the patterns derived from it. In the Naive Bayes (NB) classifier, prior knowledge helps to improve the likelihood of the classifier; the assumption followed in the NB classifier is that unrelated features are treated as independent features. The studies published during the period 2017 to 2020 are presented in Table II, which describes the contributions made in the field of heuristic algorithms along with the application area and future scope identified by each researcher.

TABLE II. Trends in optimization algorithms

Author | Algorithms | Applications | Future scope
Mavrovouniotis et al., 2017 [1] | Ant Colony Optimization (ACO), Artificial Bee Colony (ABC) | Dynamic Vehicle Routing Problem, Dynamic Knapsack Problem | Need to list out different experimentation protocols for different problem classes.
Ertenlice et al., 2018 [2] | ACO, PSO, FA, ABC | Portfolio (PO) management | Need to identify the right benchmark-generator program for different categories of SI algorithms.
Ramírez-Llanos et al., 2018 [3] | Distributed discrete-time nonlinear algorithm (robust gradient) | Virus mitigation in both human and computer networks | Modeling real-world scenarios.
Zheng et al., 2019 [4] | Biogeography-Based Optimization (BBO) algorithms | Biogeography | Need to list out different experimentation protocols for different problem classes.
Dall'Anese et al., 2020 [5] | First-order methods; approximate first-order information; distributed computation | Machine learning | Modeling real-world scenarios.
Cheng et al., 2017 [6] | Brain Storm Optimization (BSO), BSO in Objective Space (BSO-OS) | The work carried out by the authors can be used to improve the performance of data mining techniques. | Necessary to develop benchmark generators.
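As an illustration (not part of the original study), the interplay of g(x), h(x), and the admissibility condition of Eq. 3 can be sketched as a minimal A* search in Python. The graph and heuristic values here are hypothetical, chosen so that h is admissible:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expands nodes in order of f(n) = g(n) + h(n) (Eq. 2).

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h:     dict mapping node -> estimated cost to goal; when h is
           admissible (h(n) <= true remaining cost, Eq. 3), the
           returned path is optimal.
    """
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}                         # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical toy graph; h never overestimates the true cost to G.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 3, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")  # path S-A-B-G, cost 4
```

Because h is admissible here, the search returns the optimal path S–A–B–G of cost 4 rather than the direct but costlier A–G edge.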
III. METHODOLOGY

Supervised Learning: A dataset with predictor and target attributes acts as the supervisor in this form of learning. The dataset is partitioned in the ratio 70:30 (training and testing/validation). The training set is used to define the rules of the model, and the validation set is used to estimate the performance of the constructed model. The hyperparameters are tuned based on Information Gain and Entropy values [7]. The parameters used to evaluate the performance of the supervised models (Decision Tree, Support Vector Machine) are accuracy, precision, recall, and F-score [8], as shown in Table III.

TABLE III. Evaluation metrics for the supervised models

Precision (P):   P = TP / Predicted True
Prevalence (PV): PV = Actual True / Total
F-Score (FS):    FS = 2 × (R × P) / (R + P)

Unsupervised Learning: The dataset used by an unsupervised algorithm does not have a target attribute. The model should use a clustering technique to categorize the instances of the dataset. The parameter used to evaluate the performance of the unsupervised model (K-means) is the Sum of Squares (SoS) [9], as shown in Eq. 5.

The neural network model computes:

y_out = F(y_in),  output = 0 if wx + b ≤ 0, and 1 if wx + b > 0    (6)

where y_in is computed at the input layer as y_in = Σ_i x_i · w_i, with x_i and w_i the inputs and their corresponding weights, F is the activation function (sigmoid), and b is the bias. The value of y_out will be either 0 or 1.

IV. RESULTS AND DISCUSSION

The dataset considered in the experiment is Restaurant Tips. It consists of 6 variables and 30 observations. The complete description of the dataset is plotted in Fig. 2.

Fig. 3: Plot of hyperparameters and model parameters of the dataset.
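The threshold unit of Eq. 6 in the Methodology section can be sketched in a few lines of Python. This is an illustrative sketch only: it uses the hard step of Eq. 6 directly rather than the smooth sigmoid, and the weights and bias below are hypothetical values chosen to realize a logical AND:

```python
def perceptron_output(x, w, b):
    """Single threshold unit per Eq. 6:
    y_in = sum_i x_i * w_i + b; y_out = 0 if y_in <= 0 else 1."""
    y_in = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 0 if y_in <= 0 else 1

# Hypothetical weights/bias that make the unit compute AND of two
# binary inputs: only (1, 1) pushes y_in above the threshold.
w, b = [1.0, 1.0], -1.5
outputs = [perceptron_output([a, c], w, b) for a in (0, 1) for c in (0, 1)]
# outputs -> [0, 0, 0, 1]
```

Replacing the hard step with a sigmoid, as the text describes, would give a differentiable output in (0, 1) that can be trained by gradient descent; the thresholded form shown here matches the piecewise definition in Eq. 6.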
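The evaluation metrics listed in Table III can be computed directly from confusion-matrix counts. The sketch below is illustrative and the counts are hypothetical, not taken from the Restaurant Tips experiment:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, prevalence, and F-score from
    confusion-matrix counts, following the formulas of Table III."""
    predicted_true = tp + fp
    actual_true = tp + fn
    total = tp + fp + fn + tn
    precision = tp / predicted_true        # P = TP / Predicted True
    recall = tp / actual_true              # R = TP / Actual True
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "prevalence": actual_true / total,             # PV = Actual True / Total
        "f_score": 2 * recall * precision / (recall + precision),  # FS
    }

# Hypothetical counts for illustration: 8 true positives, 2 false
# positives, 2 false negatives, 18 true negatives (30 instances).
m = classification_metrics(tp=8, fp=2, fn=2, tn=18)
# precision = recall = f_score = 0.8, accuracy = 26/30
```

Note that with precision equal to recall, the F-score collapses to the same value, since FS = 2 × (R × P) / (R + P) is their harmonic mean.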
Fig. 5: Performance of the learning algorithms (accuracy and kappa of LDA, CART, KNN, SVM, DT, and RF).

The performance of the different algorithms is presented in Fig. 5. The LDA algorithm provides an accuracy of 96%, while Random Forest (RF) provides 85%.

V. CONCLUSION

The research carried out on optimization algorithms from 2017 to 2020 is presented along with its applications and limitations. The observation made from the study is that there is a need to develop benchmark generators; in addition, there is a need to list out different experimentation protocols for different problem classes. The finding of the present research is that, in supervised learning, the Decision Tree (DT) yields 83% accuracy, whereas, in unsupervised learning, the Neural Network yields an accuracy of 87% with 13% bias (B2). Among the challenges addressed in the present research: in training the supervised model, validation is made using a 10-fold approach, due to which the accuracy is comparatively lower than in recent studies; the other is that fixing the weights while training the NN model consumes a lot of time. In future, our aim is to adopt a Recurrent Neural Network in training the model.

ACKNOWLEDGMENT

The research carried out in the present work is supported by REVA University.

REFERENCES

[6] Cheng S, et al., "A comprehensive survey of brain storm optimization algorithms," in 2017 IEEE Congress on Evolutionary Computation (CEC), pp. 1637-1644, IEEE, 2017.
[7] Basha SM, Rajput DS, Vandhan V, "Impact of gradient ascent and boosting algorithm in classification," International Journal of Intelligent Engineering and Systems, vol. 11, no. 1, pp. 41-49, 2018.
[8] Basha SM, Rajput DS, Poluru RK, Bhushan SB, Basha SA, "Evaluating the performance of supervised classification models: decision tree and Naïve Bayes using KNIME," International Journal of Engineering & Technology, vol. 7, no. 4.5, pp. 248-253, 2018.
[9] Barak B, Kelner JA, Steurer D, "Dictionary learning and tensor decomposition via the sum-of-squares method," in Proc. of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp. 143-151, 2015.
[10] Basha SM, Zhenning Y, Rajput DS, SN IN, Caytiles RD, "Domain specific predictive analytics: a case study with R," International Journal of Multimedia and Ubiquitous Engineering, vol. 12, no. 6, pp. 13-22, 2017.
[11] Dutta S, Chen X, Sankaranarayanan S, "Reachability analysis for neural feedback systems using regressive polynomial rule inference," in Proc. of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, pp. 157-168, 2019.