
Scientia Iranica D (2013) 20 (3), 710-720

Sharif University of Technology


Scientia Iranica
Transactions D: Computer Science & Engineering and Electrical Engineering
www.sciencedirect.com
An improved teaching-learning-based optimization algorithm for
solving unconstrained optimization problems
R. Venkata Rao*, Vivek Patel
Department of Mechanical Engineering, S.V. National Institute of Technology, Ichchanath, Surat, Gujarat 395 007, India
Received 11 June 2012; revised 16 August 2012; accepted 9 October 2012
KEYWORDS
Evolutionary algorithms;
Swarm intelligence based algorithms;
Improved teaching-learning-based optimization;
Unconstrained benchmark functions.
Abstract. Teaching-Learning-Based Optimization (TLBO) algorithms simulate the teaching-learning phenomenon of a classroom to solve multi-dimensional, linear and nonlinear problems with appreciable efficiency. In this paper, the basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concept of number of teachers, an adaptive teaching factor, tutorial training and self-motivated learning. The performance of the improved TLBO algorithm is assessed by implementing it on a range of standard unconstrained benchmark functions having different characteristics. The results of optimization obtained using the improved TLBO algorithm are validated by comparing them with those obtained using the basic TLBO and other optimization algorithms available in the literature.
© 2013 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.
1. Introduction
The problem of finding the global optimum of a function
with large numbers of local minima arises in many scientific ap-
plications. In typical applications, the search space is large and
multi-dimensional. Many of these problems cannot be solved
analytically, and consequently, they have to be addressed by
numerical algorithms. Moreover, in many cases, global opti-
mization problems are non-differentiable. Hence, the gradient-
based methods cannot be used for finding the global optimum
of such problems. To overcome these problems, several modern
heuristic algorithms have been developed for searching near-
optimum solutions to the problems. These algorithms can be
classified into different groups, depending on the criteria being
considered, such as population-based, iterative based, stochas-
tic, deterministic, etc. Depending on the nature of the phe-
nomenon simulated by the algorithms, the population-based
heuristic algorithms have two important groups: Evolutionary
Algorithms (EA) and swarm intelligence based algorithms.

* Corresponding author. Tel.: +91 261 2201661; fax: +91 261 2201571.
E-mail address: ravipudirao@gmail.com (R.V. Rao).
Some of the recognized evolutionary algorithms are: Genetic Algorithms (GA) [1], Differential Evolution (DE) [2,3], Evolution Strategy (ES) [4], Evolution Programming (EP) [5], Artificial Immune Algorithm (AIA) [6], and Bacteria Foraging Optimization (BFO) [7], etc. Some of the well known swarm intelligence based algorithms are: Particle Swarm Optimization (PSO) [8], Ant Colony Optimization (ACO) [9], Shuffled Frog Leaping (SFL) [10], and Artificial Bee Colony (ABC) algorithms [11-14], etc. Besides the evolutionary and swarm intelligence based algorithms, there are some other algorithms which work on the principles of different natural phenomena. Some of them are: the Harmony Search (HS) algorithm [15], the Gravitational Search Algorithm (GSA) [16], Biogeography-Based Optimization (BBO) [17], the Grenade Explosion Method (GEM) [18], the league championship algorithm [19] and the charged system search [20,21].
In order to improve the performance of the above-
mentioned algorithms, the exploration and exploitation ca-
pacities of different algorithms are combined with each other
and hybrid algorithms are produced. Several authors have hy-
bridized different algorithms to improve the performance of
individual algorithms [2231]. Similarly, the performance of
existing algorithms is enhanced by modifying their exploration
and exploitation capacities [3134].
All evolutionary and swarm intelligence based algorithms
are probabilistic algorithms and require common controlling
parameters, like population size and number of generations.
Peer review under responsibility of Sharif University of Technology.
1026-3098 © 2013 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.
doi:10.1016/j.scient.2012.12.005
Besides common control parameters, different algorithms re-
quire their own algorithm-specific control parameters. For ex-
ample, GA uses mutation rate and crossover rate. Similarly, PSO
uses inertia weight, and social and cognitive parameters. The
proper tuning of the algorithm-specific parameters is a very
crucial factor affecting the performance of optimization algo-
rithms. The improper tuning of algorithm-specific parameters
either increases computational effort or yields the local opti-
mal solution. Considering this fact, recently, Rao et al. [35,36]
and Rao and Patel [37] introduced the Teaching-Learning-Based
Optimization (TLBO) algorithm, which does not require any
algorithm-specific parameters. The TLBO requires only com-
mon controlling parameters like population size and number
of generations for its working. Common control parameters are common to the running of any population based optimization algorithm; algorithm-specific parameters are specific to a given algorithm, and different algorithms have different specific parameters to control. The TLBO algorithm, however, does not have any algorithm-specific parameters to control; it requires only the control of the common control parameters. Contrary to the opinion expressed by Črepinšek et al. [38] that TLBO is not a parameter-less algorithm, Rao and Patel [37] clearly explained that TLBO is an algorithm-specific parameter-less algorithm. In fact, all comments made by Črepinšek et al. [38] about the TLBO algorithm were already addressed by Rao and Patel [37].
In the present work, some improvements in the basic
TLBO algorithm are introduced to enhance its exploration and
exploitation capacities, and the performance of the Improved
Teaching-Learning-Based Optimization (I-TLBO) algorithm is
investigated for parameter optimization of unconstrained
benchmark functions available in the literature.
The next section describes the basic TLBO algorithm.
2. Teaching-Learning-Based Optimization (TLBO) algorithm
Teaching-learning is an important process where every
individual tries to learn something from other individuals to
improve themselves. Rao et al. [35,36] and Rao and Patel [37]
proposed an algorithm, known as Teaching-Learning-Based
Optimization (TLBO), which simulates the traditional teaching-
learning phenomenon of a classroom. The algorithm simulates
two fundamental modes of learning: (i) through the teacher
(known as the teacher phase) and (ii) interacting with other
learners (known as the learner phase). TLBO is a population-
based algorithm, where a group of students (i.e. learners) is considered the population, and the different subjects offered to the learners are analogous to the different design variables of the optimization problem. The results of the learners are analogous to the fitness value of the optimization problem. The best solution in the entire population is considered as the teacher. The operation of the TLBO algorithm is explained below
with the teacher phase and learner phase [37].
2.1. Teacher phase
This phase of the algorithm simulates the learning of the
students (i.e. learners) through the teacher. During this phase, a
teacher conveys knowledge among the learners and makes an
effort to increase the mean result of the class. Suppose there
are m number of subjects (i.e. design variables) offered to n
number of learners (i.e. population size, k = 1, 2, . . . , n). At
any sequential teaching-learning cycle, $i$, $M_{j,i}$ is the mean result of the learners in a particular subject, $j$ ($j = 1, 2, \ldots, m$). Since a teacher is the most experienced and knowledgeable person on a subject, the best learner in the entire population is considered a teacher in the algorithm. Let $X_{total-kbest,i}$ be the result of the best learner, considering all the subjects, who is identified as the teacher for that cycle. The teacher will put maximum effort into increasing the knowledge level of the whole class, but learners will gain knowledge according to the quality of teaching delivered by the teacher and the quality of learners present in the class. Considering this fact, the difference between the result of the teacher and the mean result of the learners in each subject is expressed as:

$$\mathrm{Difference\_Mean}_{j,i} = r_i \,(X_{j,kbest,i} - T_F\, M_{j,i}), \qquad (1)$$
where $X_{j,kbest,i}$ is the result of the teacher (i.e. the best learner) in subject $j$; $T_F$ is the teaching factor, which decides the value of mean to be changed; and $r_i$ is a random number in the range [0, 1]. The value of $T_F$ can be either 1 or 2, and is decided randomly with equal probability as:

$$T_F = \mathrm{round}[1 + \mathrm{rand}(0, 1)\{2 - 1\}], \qquad (2)$$

where rand is a random number in the range [0, 1]. $T_F$ is not a parameter of the TLBO algorithm. The value of $T_F$ is not given as an input to the algorithm; its value is decided randomly by the algorithm using Eq. (2).
Based on $\mathrm{Difference\_Mean}_{j,i}$, the existing solution is updated in the teacher phase according to the following expression:

$$X'_{j,k,i} = X_{j,k,i} + \mathrm{Difference\_Mean}_{j,i}, \qquad (3)$$

where $X'_{j,k,i}$ is the updated value of $X_{j,k,i}$. Accept $X'_{j,k,i}$ if it gives a better function value. All the accepted function values at the end of the teacher phase are maintained, and these values become the input to the learner phase.
It may be noted that the values of $r_i$ and $T_F$ affect the performance of the TLBO algorithm: $r_i$ is a random number in the range [0, 1] and $T_F$ is the teaching factor. However, the values of $r_i$ and $T_F$ are generated randomly within the algorithm, and these parameters are not supplied as input to the algorithm (unlike the crossover and mutation probabilities in GA, the inertia weight and cognitive and social parameters in PSO, and the colony size and limit in ABC, etc.). Thus, tuning of $r_i$ and $T_F$ is not required in the TLBO algorithm. TLBO requires tuning of only the common control parameters, like population size and number of generations, for its working, and these common control parameters are required for the working of all population based optimization algorithms. Thus, TLBO can be called an algorithm-specific parameter-less algorithm.
2.2. Learner phase
This phase of the algorithm simulates the learning of the
students (i.e. learners) through interaction among themselves.
The students can also gain knowledge by discussing and
interacting with other students. A learner will learn new
information if the other learners have more knowledge than
him or her. The learning phenomenon of this phase is expressed below.
Randomly select two learners, $P$ and $Q$, such that $X'_{total-P,i} \neq X'_{total-Q,i}$, where $X'_{total-P,i}$ and $X'_{total-Q,i}$ are the updated values of $X_{total-P,i}$ and $X_{total-Q,i}$, respectively, at the end of the teacher phase:

$$X''_{j,P,i} = X'_{j,P,i} + r_i\,(X'_{j,P,i} - X'_{j,Q,i}), \quad \text{if } X'_{total-P,i} > X'_{total-Q,i}, \qquad (4a)$$
$$X''_{j,P,i} = X'_{j,P,i} + r_i\,(X'_{j,Q,i} - X'_{j,P,i}), \quad \text{if } X'_{total-Q,i} > X'_{total-P,i}. \qquad (4b)$$

(The above equations are for maximization problems; the reverse applies for minimization problems.)

Accept $X''_{j,P,i}$ if it gives a better function value.
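A matching sketch of the learner phase of Eqs. (4a) and (4b), again for a minimization problem and under the same assumed `X` and `f` as in the earlier sketch:

```python
import numpy as np

def learner_phase(X, f):
    """One TLBO learner phase (minimization sketch): each learner P
    interacts with a random partner Q != P, per Eqs. (4a)-(4b)."""
    n, m = X.shape
    for P in range(n):
        Q = np.random.choice([q for q in range(n) if q != P])
        r = np.random.rand(m)
        if f(X[P]) < f(X[Q]):                  # P is better: move away from Q
            candidate = X[P] + r * (X[P] - X[Q])
        else:                                  # Q is better: move toward Q
            candidate = X[P] + r * (X[Q] - X[P])
        if f(candidate) < f(X[P]):             # greedy acceptance
            X[P] = candidate
    return X
```

Running the two phases alternately until the evaluation budget is exhausted gives the complete basic TLBO loop.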
3. Improved TLBO (I-TLBO) algorithm
In the basic TLBO algorithm, the result of the learners is im-
proved either by a single teacher (through classroom teaching)
or by interacting with other learners. However, in the traditional teaching-learning environment, the students also learn during tutorial hours by discussing with their fellow classmates, or even by discussion with the teacher himself/herself. Moreover, sometimes students are self-motivated and try to learn by themselves. Furthermore, the teaching factor in the basic TLBO algorithm is either 2 or 1, which reflects two extreme circumstances where a learner learns either everything or nothing from the teacher. In this system, a teacher has to expend more effort to improve the results of learners. During the course of optimization, this situation results in a slower convergence rate of the optimization problem. Considering this fact, to enhance the exploration and exploitation capacities, some improvements have been introduced to the basic TLBO algorithm.
Rao and Patel [39,40] made some modifications to the basic
TLBO algorithm and applied the same to the optimization of
a two stage thermoelectric cooler and heat exchangers. In the
present work, the previous modifications are further enhanced
and a new modification is introduced to improve the perfor-
mance of the algorithm.
3.1. Number of teachers
In the basic TLBO algorithm, there is only one teacher who
teaches the learners and tries to improve the mean result of
the class. In this system of teaching-learning, it might be possible that the efforts of the teacher are distributed and students also pay less attention, which will reduce the intensity of learning. Moreover, if the class contains a higher number of below-average students, then the teacher has to put more effort into improving their results; even with this effort, there may not be any apparent improvement in the results. In the optimization algorithm, this fact results in a higher number of function evaluations to reach the optimum solution and yields a poor convergence rate. In order to overcome this issue, the basic TLBO algorithm is improved by introducing more than one teacher for the learners. By means of this modification, the entire class is split into different groups of learners as per their level (i.e. results), and an individual teacher is assigned to an individual group of learners. Now, each teacher tries to improve the results of his or her assigned group, and if the level (i.e. results) of the group reaches up to the level of the assigned teacher, then this group is assigned to a better teacher. This modification is explained in the implementation steps of the algorithm; a brief sketch of the grouping is also given below.

The purpose of having a number of teachers is to sort the population during the course of optimization and, thereby, avoid premature convergence of the algorithm.
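As an illustration of the grouping idea, the sketch below ranks the population and splits it evenly among `num_teachers` groups, with the best member of each group acting as its teacher. The even, rank-based split is a simplifying assumption; the paper's Step 5 assigns learners by fitness thresholds instead.

```python
import numpy as np

def split_into_groups(X, f, num_teachers):
    """Rank learners (minimization) and split them among the teachers.
    Returns the groups and the teacher (best member) of each group."""
    order = np.argsort([f(x) for x in X])        # best learner first
    groups = np.array_split(X[order], num_teachers)
    teachers = [group[0] for group in groups]    # group's best is its teacher
    return groups, teachers
```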
3.2. Adaptive teaching factor
Another modification is related to the teaching factor, $T_F$, of the basic TLBO algorithm. The teaching factor decides the value of mean to be changed. In the basic TLBO, the decision on the teaching factor is a heuristic step, and it can be either 1 or 2. This corresponds to situations where learners learn nothing from the teacher or learn all the things from the teacher, respectively. But, in an actual teaching-learning phenomenon, this fraction is not always at its end states for learners; it varies in between as well. The learners may learn in any proportion from the teacher. In the optimization algorithm, a lower value of $T_F$ allows a fine search in small steps, but causes slow convergence. A larger value of $T_F$ speeds up the search, but reduces the exploration capability. Considering this fact, the teaching factor is modified as:
$$(T_F)_i = \left(\frac{X_{total-k}}{X_{total-kbest}}\right)_i, \quad k = 1, 2, \ldots, n, \quad \text{if } X_{total-kbest,i} \neq 0, \qquad (5a)$$
$$(T_F)_i = 1, \quad \text{if } X_{total-kbest,i} = 0, \qquad (5b)$$
where $X_{total-k}$ is the result of any learner, $k$, considering all the subjects at iteration $i$, and $X_{total-kbest}$ is the result of the teacher at the same iteration. Thus, in the I-TLBO algorithm, the teaching factor varies automatically during the search. Automatic tuning of $T_F$ improves the performance of the algorithm.
It may be noted that the adaptive teaching factor in I-TLBO is generated within the algorithm, based on the results of the learner and the teacher. Thus, the adaptive teaching factor is not supplied as an input parameter to the algorithm; a one-function sketch is given below.
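The following is a direct reading of Eqs. (5a) and (5b); the two arguments are assumed to be the learner's and the teacher's objective values at the current iteration.

```python
def adaptive_teaching_factor(result_learner, result_teacher):
    """Adaptive teaching factor of Eqs. (5a)-(5b): the ratio of the
    learner's result to the teacher's result, with a fallback of 1
    when the teacher's result is zero."""
    if result_teacher == 0:
        return 1.0                          # Eq. (5b)
    return result_learner / result_teacher  # Eq. (5a)
```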
3.3. Learning through tutorial
This modification is based on the fact that the students can
also learn by discussing with their fellow classmates or even
with the teacher during the tutorial hours while solving the
problems and assignments. Since the students can increase their knowledge by discussion with other students or the teacher, we incorporate this search mechanism into the teacher phase. The mathematical expression of this modification is given in the implementation steps of the algorithm.
3.4. Self-motivated learning
In the basic TLBO algorithm, the results of the students are improved either by learning from the teacher or by interacting with the other students. However, it is also possible that students are self-motivated and improve their knowledge by self-learning. Thus, the self-learning aspect of improving knowledge is considered in the I-TLBO algorithm. The implementation steps of the I-TLBO algorithm are as follows.
Step 1: Define the optimization problem as: Minimize or Maximize $f(X)$, where $f(X)$ is the objective function value and $X$ is the vector of design variables.
Step 2: Initialize the population (i.e. learners, $k = 1, 2, \ldots, n$) and the design variables of the optimization problem (i.e. the number of subjects offered to the learners, $j = 1, 2, \ldots, m$), and evaluate them.
Step 3: Select the best solution (i.e. $f(X)_{best}$), which acts as the chief teacher for that cycle, and assign it the first rank: $(X_{teacher})_1 = f(X)_1$, where $f(X)_1 = f(X)_{best}$.
Step 4: Select the other teachers ($T$) based on the chief teacher, and rank them: $f(X)_s = f(X)_1 - \mathrm{rand} \times f(X)_1$, $s = 2, 3, \ldots, T$. (If the equality is not met, select the $f(X)_s$ closest to the value calculated above.) Then $(X_{teacher})_s = f(X)_s$, where $s = 2, 3, \ldots, T$.
Step 5: Assign the learners to the teachers according to their fitness values as:
For $k = 1 : (n - s)$:
If $f(X)_1 \geq f(X)_k > f(X)_2$, assign learner $f(X)_k$ to teacher 1 (i.e. $f(X)_1$);
Else, if $f(X)_2 \geq f(X)_k > f(X)_3$, assign learner $f(X)_k$ to teacher 2 (i.e. $f(X)_2$);
...
Else, if $f(X)_{T-1} \geq f(X)_k > f(X)_T$, assign learner $f(X)_k$ to teacher $T - 1$ (i.e. $f(X)_{T-1}$);
Else, assign learner $f(X)_k$ to teacher $T$.
End
(The above procedure is for a maximization problem; the procedure is reversed for a minimization problem.)
Step 6: Keep the elite solutions of each group.
Step 7: Calculate the mean result of each group of learners in each subject, i.e. $(M_j)_s$.
Step 8: For each group, evaluate the difference between the current mean and the corresponding result of the teacher of that group for each subject, utilizing the adaptive teaching factor (given by Eqs. (5a) and (5b)):
$$(\mathrm{Difference\_Mean}_j)_s = \mathrm{rand}\,(X_{j,teacher} - T_F\, M_j)_s, \quad s = 1, 2, \ldots, T, \; j = 1, 2, \ldots, m.$$
Step 9: For each group, update the learners' knowledge with the help of the teacher's knowledge, along with the knowledge acquired by the learners during the tutorial hours, according to:
$$(X'_{j,k})_s = (X_{j,k} + \mathrm{Difference\_Mean}_j)_s + \mathrm{rand}\,(X_{hh} - X_k)_s, \quad \text{if } f(X)_{hh} > f(X)_k,$$
$$(X'_{j,k})_s = (X_{j,k} + \mathrm{Difference\_Mean}_j)_s + \mathrm{rand}\,(X_k - X_{hh})_s, \quad \text{if } f(X)_k > f(X)_{hh},$$
where $hh \neq k$.
Step 10: For each group, update the learners' knowledge by utilizing the knowledge of some other learners, as well as by self-learning, according to:
$$(X''_{j,k})_s = (X'_{j,k})_s + \mathrm{rand}\,(X'_{j,k} - X'_{j,p})_s + \mathrm{rand}\,(X_{teacher} - E_F\, X'_{j,k})_s, \quad \text{if } f(X'_k) > f(X'_p),$$
$$(X''_{j,k})_s = (X'_{j,k})_s + \mathrm{rand}\,(X'_{j,p} - X'_{j,k})_s + \mathrm{rand}\,(X_{teacher} - E_F\, X'_{j,k})_s, \quad \text{if } f(X'_p) > f(X'_k),$$
where $E_F$ is the exploration factor, $E_F = \mathrm{round}(1 + \mathrm{rand})$.
(The above equations are for a maximization problem; the reverse is true for a minimization problem. A compact sketch of this per-group update is given after the list of steps.)
Step 11: Replace the worst solution of each group with an elite
solution.
Step 12: Eliminate the duplicate solutions randomly.
Step 13: Combine all the groups.
Step 14: Repeat the procedure from Steps 3 to 13 until the termination criterion is met.
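The per-group update of Steps 8-10 can be sketched as follows, for a minimization problem. This is an illustrative reading of the steps above, not the authors' code: it compresses Steps 9 and 10 into a single pass, draws the tutorial partner `hh` and the exploration factor `EF` randomly, and inlines the adaptive teaching factor of Eqs. (5a)-(5b).

```python
import numpy as np

def group_update(G, f, teacher):
    """One pass of Steps 8-10 over one group G (minimization sketch)."""
    n, m = G.shape
    mean = G.mean(axis=0)                    # Step 7: group mean per subject
    f_teacher = f(teacher)
    for k in range(n):
        # Step 8: difference mean with the adaptive teaching factor.
        TF = 1.0 if f_teacher == 0 else f(G[k]) / f_teacher
        cand = G[k] + np.random.rand(m) * (teacher - TF * mean)
        # Step 9: tutorial learning with a random classmate hh != k.
        hh = np.random.choice([i for i in range(n) if i != k])
        if f(G[hh]) < f(G[k]):
            cand += np.random.rand(m) * (G[hh] - G[k])
        else:
            cand += np.random.rand(m) * (G[k] - G[hh])
        # Step 10: self-motivated move toward the teacher, scaled by E_F.
        EF = np.random.randint(1, 3)         # exploration factor round(1 + rand)
        cand += np.random.rand(m) * (teacher - EF * G[k])
        if f(cand) < f(G[k]):                # greedy acceptance
            G[k] = cand
    return G
```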
At this point, it is important to clarify that in the TLBO and I-TLBO algorithms, the solution is updated in the teacher phase as well as in the learner phase. Also, in the duplicate elimination step, if duplicate solutions are present, then they are randomly modified. So, the total number of function evaluations in the TLBO algorithm is {(2 × population size × number of generations) + (function evaluations required for duplicate elimination)}. In the entire experimental work of this paper, the above formula is used to count the number of function evaluations while conducting experiments with the TLBO and I-TLBO algorithms. A small helper illustrating this count is given below.
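As a sanity check on the evaluation budgets used later, the count described above can be computed directly; the helper name and the example numbers are illustrative, not taken from the paper.

```python
def tlbo_total_evaluations(pop_size, generations, duplicate_evals=0):
    """Function-evaluation count used in this paper: two evaluations per
    learner per generation (teacher phase + learner phase), plus the
    evaluations spent on duplicate elimination."""
    return 2 * pop_size * generations + duplicate_evals

# For example, a population of 20 run for 1000 generations consumes
# 40000 evaluations before duplicate elimination:
assert tlbo_total_evaluations(20, 1000) == 40000
```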
4. Experiments on unconstrained benchmark functions
In this section, the ability of the I-TLBO algorithm is assessed by implementing it for the parameter optimization of several unconstrained benchmark functions with different dimensions and search spaces. Results obtained using the I-TLBO algorithm are compared with the results of the basic TLBO algorithm, as well as with other optimization algorithms available in the literature. The considered benchmark functions have different characteristics, like unimodality/multimodality, separability/non-separability and regularity/non-regularity.
4.1. Experiment-1
This experiment is conducted to identify the ability of the
I-TLBO algorithm to achieve the global optimum value. In this
experiment, eight different benchmark functions are tested
using the TLBO and I-TLBO algorithms, which were earlier
solved using ABC and modified ABC by Akay and Karaboga [33].
The details of the benchmark functions are given in Table 1.
Previously, Akay and Karaboga [33] tested all functions with 30000 maximum function evaluations. To maintain consistency in the comparison, the TLBO and I-TLBO algorithms are also run with the same maximum number of function evaluations.
Each benchmark function is run 30 times with the TLBO and I-TLBO algorithms, and comparative results, in the form of the mean value and standard deviation of the objective function obtained after 30 independent runs, are shown in Table 2. Except for the TLBO and I-TLBO algorithms, the results are taken from the previous work of Akay and Karaboga [33]. Moreover, the I-TLBO algorithm is tested with different numbers of teachers, and the effect on the obtained objective function value is reported in Table 2.
It is observed from the results that the I-TLBO algorithm has achieved the global optimum value for the Sphere, Griewank, Weierstrass, Rastrigin and NCRastrigin functions within the specified number of function evaluations. For the Rosenbrock function, I-TLBO performs better than the rest of the algorithms. The performance of the TLBO and I-TLBO algorithms is better than that of the rest of the considered algorithms for the Sphere and Griewank functions. For the Weierstrass, Rastrigin and NCRastrigin functions, the performances of I-TLBO and CLPSO are identical and better than those of the rest of the considered algorithms. For the Ackley function, the ABC and I-TLBO algorithms perform equally well. For the Schwefel function, the modified ABC algorithm performs better than the rest of the considered algorithms.
It is observed from the results that the fitness value of the objective function improves as the number of teachers is increased from 1 to 4 for the I-TLBO algorithm. During the experimentation, it is observed that with a further increase in the number of teachers beyond 4, the improvement in the fitness value of the objective function is insignificant, while it involves a significant increment in computational effort.
4.2. Experiment-2
To identify the computational effort and consistency of the I-TLBO algorithm, eight different benchmark functions considered by Ahrari and Atai [18] are tested in this experiment. The results obtained using the I-TLBO algorithm are compared with those of the basic TLBO algorithm, along with other well known optimization algorithms. The details of the benchmark functions are given in Table 3.
Table 1: Benchmark functions considered in experiment 1 (D: dimension; D = 10 for all functions).

1. Sphere: $F_{min} = \sum_{i=1}^{D} x_i^2$; search range [-100, 100]; initialization range [-100, 50].
2. Rosenbrock: $F_{min} = \sum_{i=1}^{D-1} [100(x_i^2 - x_{i+1})^2 + (1 - x_i)^2]$; search range [-2.048, 2.048]; initialization range [-2.048, 2.048].
3. Ackley: $F_{min} = -20 \exp(-0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2}) - \exp(\tfrac{1}{D} \sum_{i=1}^{D} \cos 2\pi x_i) + 20 + e$; search range [-32.768, 32.768]; initialization range [-32.768, 16].
4. Griewank: $F_{min} = \tfrac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos(x_i / \sqrt{i}) + 1$; search range [-600, 600]; initialization range [-600, 200].
5. Weierstrass: $F_{min} = \sum_{i=1}^{D} \sum_{k=0}^{k_{max}} a^k \cos(2\pi b^k (x_i + 0.5)) - D \sum_{k=0}^{k_{max}} a^k \cos(2\pi b^k \cdot 0.5)$, with $a = 0.5$, $b = 3$, $k_{max} = 20$; search range [-0.5, 0.5]; initialization range [-0.5, 0.2].
6. Rastrigin: $F_{min} = \sum_{i=1}^{D} [x_i^2 - 10 \cos(2\pi x_i) + 10]$; search range [-5.12, 5.12]; initialization range [-5.12, 2].
7. NCRastrigin: $F_{min} = \sum_{i=1}^{D} [y_i^2 - 10 \cos(2\pi y_i) + 10]$, where $y_i = x_i$ if $|x_i| < 0.5$ and $y_i = \mathrm{round}(2x_i)/2$ if $|x_i| \geq 0.5$; search range [-5.12, 5.12]; initialization range [-5.12, 2].
8. Schwefel: $F_{min} = -\sum_{i=1}^{D} x_i \sin(\sqrt{|x_i|})$; search range [-500, 500]; initialization range [-500, 500].
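For reference, several of the functions in Table 1 are straightforward to code. The sketch below implements the Sphere, Rastrigin and Ackley functions in NumPy (vector input `x`, scalar output), following the formulations in the table; it is provided here for illustration.

```python
import numpy as np

def sphere(x):
    """Function 1 of Table 1; global minimum 0 at x = 0."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Function 6 of Table 1; highly multimodal, minimum 0 at x = 0."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):
    """Function 3 of Table 1; minimum 0 at x = 0."""
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)
```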
Table 2: Comparative results of TLBO and I-TLBO with other evolutionary algorithms over 30 independent runs.
Source: Results of algorithms except TLBO and I-TLBO are taken from Ref. [33].

Algorithm | Sphere (Mean, SD) | Rosenbrock (Mean, SD) | Ackley (Mean, SD) | Griewank (Mean, SD)
PSOw | 7.96E-051, 3.56E-050 | 3.08E+000, 7.69E-001 | 1.58E-014, 1.60E-014 | 9.69E-002, 5.01E-002
PSOcf | 9.84E-105, 4.21E-104 | 6.98E-001, 1.46E+000 | 9.18E-001, 1.01E+000 | 1.19E-001, 7.11E-002
PSOw-local | 2.13E-035, 6.17E-035 | 3.92E+000, 1.19E+000 | 6.04E-015, 1.67E-015 | 7.80E-002, 3.79E-002
PSOcf-local | 1.37E-079, 5.60E-079 | 8.60E-001, 1.56E+000 | 5.78E-002, 2.58E-001 | 2.80E-002, 6.34E-002
UPSO | 9.84E-118, 3.56E-117 | 1.40E+000, 1.88E+000 | 1.33E+000, 1.48E+000 | 1.04E-001, 7.10E-002
FDR | 2.21E-090, 9.88E-090 | 8.67E-001, 1.63E+000 | 3.18E-014, 6.40E-014 | 9.24E-002, 5.61E-002
FIPS | 3.15E-030, 4.56E-030 | 2.78E+000, 2.26E-001 | 3.75E-015, 2.13E-014 | 1.31E-001, 9.32E-002
CPSO-H | 4.98E-045, 1.00E-044 | 1.53E+000, 1.70E+000 | 1.49E-014, 6.97E-015 | 4.07E-002, 2.80E-002
CLPSO | 5.15E-029, 2.16E-028 | 2.46E+000, 1.70E+000 | 4.32E-010, 2.55E-014 | 4.56E-003, 4.81E-003
ABC | 7.09E-017, 4.11E-017 | 2.08E+000, 2.44E+000 | 4.58E-016, 1.76E-016 | 1.57E-002, 9.06E-003
Modified ABC | 7.04E-017, 4.55E-017 | 4.42E-001, 8.67E-001 | 3.32E-016, 1.84E-016 | 1.52E-002, 1.28E-002
TLBO | 0.00, 0.00 | 1.72E+00, 6.62E-01 | 3.55E-15, 8.32E-31 | 0.00, 0.00
I-TLBO (NT = 1) | 0.00, 0.00 | 1.29E+00, 3.97E-01 | 3.11E-15, 4.52E-15 | 0.00, 0.00
I-TLBO (NT = 2) | 0.00, 0.00 | 1.13E+00, 4.29E-01 | 2.93E-15, 1.74E-15 | 0.00, 0.00
I-TLBO (NT = 3) | 0.00, 0.00 | 6.34E-01, 2.53E-01 | 2.02E-15, 1.51E-15 | 0.00, 0.00
I-TLBO (NT = 4) | 0.00, 0.00 | 2.00E-01, 1.42E-01 | 1.42E-15, 1.83E-15 | 0.00, 0.00

Algorithm | Weierstrass (Mean, SD) | Rastrigin (Mean, SD) | NCRastrigin (Mean, SD) | Schwefel (Mean, SD)
PSOw | 2.28E-003, 7.04E-003 | 5.82E+000, 2.96E+000 | 4.05E+000, 2.58E+000 | 3.20E+002, 1.85E+002
PSOcf | 6.69E-001, 7.17E-001 | 1.25E+001, 5.17E+000 | 1.20E+001, 4.99E+000 | 9.87E+002, 2.76E+002
PSOw-local | 1.41E-006, 6.31E-006 | 3.88E+000, 2.30E+000 | 4.77E+000, 2.84E+000 | 3.26E+002, 1.32E+002
PSOcf-local | 7.85E-002, 5.16E-002 | 9.05E+000, 3.48E+000 | 5.95E+000, 2.60E+000 | 8.78E+002, 2.93E+002
UPSO | 1.14E+000, 1.17E+00 | 1.17E+001, 6.11E+000 | 5.85E+000, 3.15E+000 | 1.08E+003, 2.68E+002
FDR | 3.01E-003, 7.20E-003 | 7.51E+000, 3.05E+000 | 3.35E+000, 2.01E+000 | 8.51E+002, 2.76E+002
FIPS | 2.02E-003, 6.40E-003 | 2.12E+000, 1.33E+000 | 4.35E+000, 2.80E+000 | 7.10E+001, 1.50E+002
CPSO-H | 1.07E-015, 1.67E-015 | 0, 0 | 2.00E-001, 4.10E-001 | 2.13E+002, 1.41E+002
CLPSO | 0, 0 | 0, 0 | 0, 0 | 0, 0
ABC | 9.01E-006, 4.61E-005 | 1.61E-016, 5.20E-016 | 6.64E-017, 3.96E-017 | 7.91E+000, 2.95E+001
Modified ABC | 0.00E+000, 0.00E+000 | 1.14E-007, 6.16E-007 | 1.58E-011, 7.62E-011 | 3.96E+000, 2.13E+001
TLBO | 2.42E-05, 1.38E-20 | 6.77E-08, 3.68E-07 | 2.65E-08, 1.23E-07 | 2.94E+02, 2.68E+02
I-TLBO (NT = 1) | 9.51E-06, 1.74E-05 | 3.62E-12, 7.82E-11 | 1.07E-08, 6.19E-08 | 2.73E+02, 2.04E+02
I-TLBO (NT = 2) | 3.17E-06, 2.66E-06 | 2.16E-15, 9.13E-16 | 5.16E-09, 4.43E-09 | 2.62E+02, 2.13E+02
I-TLBO (NT = 3) | 0.00, 0.00 | 0.00, 0.00 | 7.78E-016, 4.19E-015 | 1.49E+02, 1.21E+02
I-TLBO (NT = 4) | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00 | 1.10E+02, 1.06E+02
To maintain consistency in the comparison between all the algorithms, the execution of the TLBO and I-TLBO algorithms is stopped when the difference between the fitness obtained by the algorithm and the global optimum value is less than 0.1% (in cases where the optimum value is 0, the solution is accepted if it differs from the optimum value by less than 0.001); a sketch of this rule is given after this paragraph. In this study, the I-TLBO algorithm is examined for different numbers of teachers, and the effect on the performance of the algorithm is included in the results. Each benchmark function is run 100 times with the TLBO and I-TLBO algorithms, and the comparative results, in the form of mean function evaluations and success percentage, are shown in Table 4. The results of the other algorithms are taken from Ahrari and Atai [18].
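The stopping rule of this experiment can be expressed as a small predicate; this is a direct reading of the criterion quoted above, with names chosen here for illustration.

```python
def reached_optimum(best_f, optimum_f):
    """True when best_f is within 0.1% of the known optimum, or within
    0.001 absolute when the optimum is zero."""
    if optimum_f == 0:
        return abs(best_f) < 0.001
    return abs(best_f - optimum_f) < 0.001 * abs(optimum_f)
```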
It is observed from Table 4 that, except for function 7 (i.e. Rosenbrock (D = 4)), the I-TLBO algorithm requires fewer function evaluations than the other algorithms to reach the global optimum value, with a very high success rate of 100%. For the 4-dimensional Rosenbrock function, the ant colony system (ANTS) performs better than I-TLBO, with a 100% success rate.
Table 3: Benchmark functions considered in experiment 2 (D: dimension).

1. De Jong: $F_{min} = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$; D = 2; search range [-2.048, 2.048].
2. Goldstein-Price: $F_{min} = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]$; D = 2; search range [-2, 2].
3. Branin: $F_{min} = (x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6)^2 + 10(1 - \tfrac{1}{8\pi}) \cos x_1 + 10$; D = 2; search range [-5, 10].
4. Martin and Gaddy: $F_{min} = (x_1 - x_2)^2 + [(x_1 + x_2 - 10)/3]^2$; D = 2; search range [0, 10].
5. Rosenbrock: $F_{min} = 100(x_1^2 - x_2)^2 + (1 - x_1)^2$; D = 2; search range [-1.2, 1.2].
6. Rosenbrock: $F_{min} = 100(x_1^2 - x_2)^2 + (1 - x_1)^2$; D = 2; search range [-10, 10].
7. Rosenbrock: $F_{min} = \sum_{i=1}^{D-1} [100(x_i^2 - x_{i+1})^2 + (1 - x_i)^2]$; D = 4; search range [-10, 10].
8. Hyper Sphere: $F_{min} = \sum_{i=1}^{D} x_i^2$; D = 6; search range [-5.12, 5.12].
Table 4: Comparison of results of evolutionary algorithms considered in experiment 2 (MNFE: mean number of function evaluations; Succ: success percentage; a dash indicates no result reported).
Source: Results of algorithms except TLBO and I-TLBO are taken from Ref. [18].

Algorithm | De Jong (MNFE, Succ %) | Goldstein and Price (MNFE, Succ %) | Branin (MNFE, Succ %) | Martin and Gaddy (MNFE, Succ %)
SIMPSA | -, - | -, - | -, - | -, -
NE-SIMPSA | -, - | -, - | -, - | -, -
GA | 10160, 100 | 5662, 100 | 7325, 100 | 2488, 100
ANTS | 6000, 100 | 5330, 100 | 1936, 100 | 1688, 100
Bee Colony | 868, 100 | 999, 100 | 1657, 100 | 526, 100
GEM | 746, 100 | 701, 100 | 689, 100 | 258, 100
TLBO | 1070, 100 | 452, 100 | 443, 100 | 422, 100
I-TLBO (NT = 1) | 836, 100 | 412, 100 | 438, 100 | 350, 100
I-TLBO (NT = 2) | 784, 100 | 386, 100 | 421, 100 | 312, 100
I-TLBO (NT = 3) | 738, 100 | 302, 100 | 390, 100 | 246, 100
I-TLBO (NT = 4) | 722, 100 | 288, 100 | 367, 100 | 233, 100

Algorithm | Rosenbrock (D = 2, [-1.2, 1.2]) | Rosenbrock (D = 2, [-10, 10]) | Rosenbrock (D = 4) | Hyper sphere (D = 6)
SIMPSA | 10780, 100 | 12500, 100 | 21177, 99 | -, -
NE-SIMPSA | 4508, 100 | 5007, 100 | 3053, 94 | -, -
GA | 10212, 100 | 15468, 100 | -, - | -, -
ANTS | 6842, 100 | 7505, 100 | 8471, 100 | 22050, 100
Bee Colony | 631, 100 | 2306, 100 | 28529, 100 | 7113, 100
GEM | 572, 100 | 2289, 100 | 82188, 100 | 423, 100
TLBO | 669, 100 | 1986, 100 | 21426, 100 | 417, 100
I-TLBO (NT = 1) | 646, 100 | 1356, 100 | 20462, 100 | 410, 100
I-TLBO (NT = 2) | 602, 100 | 1268, 100 | 20208, 100 | 396, 100
I-TLBO (NT = 3) | 554, 100 | 1024, 100 | 18490, 100 | 382, 100
I-TLBO (NT = 4) | 522, 100 | 964, 100 | 17696, 100 | 376, 100
Similar to the previous experiment, here also, the results improve further as the number of teachers is increased from 1 to 4, at the cost of more computational time.
4.3. Experiment-3
In this experiment, the performance of the I-TLBO algorithm is compared with that of the recently developed ABC algorithm, along with its improved versions (I-ABC and GABC) and hybrid version (PS-ABC). In this part of the work, TLBO and I-TLBO are tested on 23 unconstrained benchmark functions (shown in Table 5), which were earlier attempted by Li et al. [31]. This experiment is conducted from small scale to large scale by considering dimensions 20, 30 and 50 for all the benchmark functions.
Li et al. [31] attempted all these functions using ABC, I-ABC, GABC and PS-ABC with a colony size of 40 and 400 cycles (i.e. 40000 maximum function evaluations). However, it is observed that in the PS-ABC algorithm, three different food positions are generated for each employed bee, and the corresponding nectar amount is calculated for each position; out of these three food positions, the employed bee selects the best one. Similarly, for each onlooker bee, three more food positions are generated, and the onlooker bee selects the best of them. Thus, the total number of function evaluations in the PS-ABC algorithm is not equal to the colony size multiplied by the number of cycles: three fitness evaluations are required for each employed bee, and three for each onlooker bee, to select the best food position. So, the total number of function evaluations for the PS-ABC algorithm is equal to 3 × colony size × number of cycles. Considering this fact, in the present work, TLBO and I-TLBO are implemented with 40000 function evaluations to compare their performance with the ABC, I-ABC and GABC algorithms, and with 120000 function evaluations to compare their performance with the PS-ABC algorithm.
In this experiment, each benchmark function is run 30 times with the TLBO and I-TLBO algorithms, and the results are obtained in the form of the mean solution and standard deviation of the objective function after 30 independent runs of the algorithms.
Table 5: Benchmark functions considered in experiment 3 (C: characteristic; U: unimodal; M: multimodal; S: separable; N: non-separable).

1. Sphere: $F_{min} = \sum_{i=1}^{D} x_i^2$; search range [-100, 100]; C: US.
2. Schwefel 2.22: $F_{min} = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$; search range [-10, 10]; C: UN.
3. Schwefel 1.2: $F_{min} = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2$; search range [-100, 100]; C: UN.
4. Schwefel 2.21: $F_{min} = \max_i(|x_i|)$; search range [-100, 100]; C: UN.
5. Rosenbrock: $F_{min} = \sum_{i=1}^{D-1} [100(x_i^2 - x_{i+1})^2 + (1 - x_i)^2]$; search range [-30, 30]; C: UN.
6. Step: $F_{min} = \sum_{i=1}^{D} (\lfloor x_i + 0.5 \rfloor)^2$; search range [-100, 100]; C: US.
7. Quartic: $F_{min} = \sum_{i=1}^{D} i\, x_i^4 + \mathrm{rand}(0, 1)$; search range [-1.28, 1.28]; C: US.
8. Schwefel: $F_{min} = \sum_{i=1}^{D} (-x_i \sin(\sqrt{|x_i|}))$; search range [-500, 500]; C: MS.
9. Rastrigin: $F_{min} = \sum_{i=1}^{D} [x_i^2 - 10 \cos(2\pi x_i) + 10]$; search range [-5.12, 5.12]; C: MS.
10. Ackley: $F_{min} = -20 \exp(-0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2}) - \exp(\tfrac{1}{D} \sum_{i=1}^{D} \cos 2\pi x_i) + 20 + e$; search range [-32, 32]; C: MN.
11. Griewank: $F_{min} = \tfrac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos(x_i / \sqrt{i}) + 1$; search range [-600, 600]; C: MN.
12. Penalized: $F_{min} = \tfrac{\pi}{D} \{10 \sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_D - 1)^2\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, with $y_i = 1 + (x_i + 1)/4$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a \leq x_i \leq a$; $k(-x_i - a)^m$ if $x_i < -a$; search range [-50, 50]; C: MN.
13. Penalized 2: $F_{min} = 0.1 \{\sin^2(3\pi x_1) + \sum_{i=1}^{D-1} (x_i - 1)^2 [1 + \sin^2(3\pi x_{i+1})] + (x_D - 1)^2 [1 + \sin^2(2\pi x_D)]\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$, with $u$ as defined above; search range [-50, 50]; C: MN.
14. Foxholes: $F_{min} = [\tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6}]^{-1}$; search range [-65.536, 65.536]; C: MS.
15. Kowalik: $F_{min} = \sum_{i=1}^{11} [a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}]^2$; search range [-5, 5]; C: MN.
16. 6-Hump camel back: $F_{min} = 4x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$; search range [-5, 5]; C: MN.
17. Branin: $F_{min} = (x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6)^2 + 10(1 - \tfrac{1}{8\pi}) \cos x_1 + 10$; search range [-5, 10] × [0, 15]; C: MS.
18. Goldstein-Price: $F_{min} = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]$; search range [-5, 5]; C: MN.
19. Hartman 3: $F_{min} = -\sum_{i=1}^{4} c_i \exp[-\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2]$; search range [0, 1]; C: MN.
20. Hartman 6: $F_{min} = -\sum_{i=1}^{4} c_i \exp[-\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2]$; search range [0, 1]; C: MN.
21. Shekel 5: $F_{min} = -\sum_{i=1}^{5} [(x - a_i)(x - a_i)^T + c_i]^{-1}$; search range [0, 10]; C: MN.
22. Shekel 7: $F_{min} = -\sum_{i=1}^{7} [(x - a_i)(x - a_i)^T + c_i]^{-1}$; search range [0, 10]; C: MN.
23. Shekel 10: $F_{min} = -\sum_{i=1}^{10} [(x - a_i)(x - a_i)^T + c_i]^{-1}$; search range [0, 10]; C: MN.
Table 6 shows the comparative results of the ABC, I-ABC, GABC, TLBO and I-TLBO algorithms for the first 13 functions with 40000 maximum function evaluations. Except for the TLBO and I-TLBO algorithms, the results are taken from the previous work of Li et al. [31].
It is observed from the results that the I-TLBO algorithm outperforms the rest of the considered algorithms on the Schwefel 1.2, Step and Quartic functions for all the dimensions. For the Schwefel 2.21 function, I-TLBO outperforms the other algorithms for dimensions 30 and 50, while the performances of I-TLBO and I-ABC are identical for dimension 20. For the Schwefel function, the performance of I-TLBO is better than that of the rest of the algorithms for dimension 50, while GABC is better than I-TLBO for dimension 30, and ABC, I-ABC and GABC are better than I-TLBO for dimension 20. GABC outperforms the other algorithms for the penalized function. For the penalized 2 function, I-ABC, I-TLBO and GABC perform best for dimensions 20, 30 and 50, respectively. For the Rosenbrock function, the performances of the basic ABC and GABC are better than those of the other algorithms for dimensions 20 and 30, while GABC is better than the other algorithms for dimension 50. For the Sphere, Schwefel 2.22 and Griewank functions, TLBO, I-TLBO and I-ABC perform equally well for all the dimensions. Similarly, for the Rastrigin function, the performances of I-ABC and I-TLBO are identical and better than those of the other considered algorithms. For the Ackley function, the performances of the I-ABC, GABC, TLBO and I-TLBO algorithms are more or less identical.
Table 7 shows the comparative results of the PS-ABC, TLBO and I-TLBO algorithms for the first 13 functions with 120000 maximum function evaluations. It is observed from the results that I-TLBO outperforms the basic TLBO and PS-ABC algorithms for the Step and Quartic functions (for all the dimensions) and the Schwefel 2.21 function (for dimensions 30 and 50). PS-ABC outperforms TLBO and I-TLBO for the Rosenbrock and Schwefel functions. For the Schwefel 1.2 function, the performance of TLBO and I-TLBO is identical and better than that of the PS-ABC algorithm. The performance of PS-ABC and I-TLBO is identical for the Rastrigin function, while the performance of all three algorithms is identical for the Sphere, Schwefel 2.22 and Griewank functions. For the Ackley, penalized and penalized 2 functions, the performances of PS-ABC and I-TLBO are more or less similar.
Table 8 shows the comparative results of the considered algorithms for functions 14 to 23. Here, the results of the ABC, I-ABC, GABC, TLBO and I-TLBO algorithms are obtained with 40000 maximum function evaluations, while the results of the PS-ABC algorithm are obtained with 120000 maximum function evaluations.
Table 6: Comparative results of TLBO and I-TLBO algorithms with different variants of the ABC algorithm over 30 independent runs (for functions 1-13 of Table 5 with 40000 maximum function evaluations).

Function | D | ABC [31] (Mean, SD) | I-ABC [31] (Mean, SD) | GABC [31] (Mean, SD) | TLBO (Mean, SD) | I-TLBO (Mean, SD)
Sphere | 20 | 6.18E-16, 2.11E-16 | 0.00, 0.00 | 3.19E-16, 7.39E-17 | 0.00, 0.00 | 0.00, 0.00
Sphere | 30 | 3.62E-09, 5.85E-09 | 0.00, 0.00 | 6.26E-16, 1.08E-16 | 0.00, 0.00 | 0.00, 0.00
Sphere | 50 | 1.11E-05, 1.25E-05 | 0.00, 0.00 | 1.25E-05, 6.05E-09 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.22 | 20 | 1.35E-10, 7.15E-11 | 0.00, 0.00 | 9.36E-16, 1.33E-16 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.22 | 30 | 5.11E-06, 2.23E-06 | 0.00, 0.00 | 1.31E-10, 4.69E-11 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.22 | 50 | 2.92E-03, 9.05E-04 | 0.00, 0.00 | 2.37E-05, 6.19E-06 | 0.00, 0.00 | 0.00, 0.00
Schwefel 1.2 | 20 | 3.13E+03, 1.19E+03 | 4.54E+03, 2.69E+03 | 2.69E+03, 1.46E+03 | 3.29E-38, 1.20E-37 | 0.00, 0.00
Schwefel 1.2 | 30 | 1.24E+04, 3.01E+03 | 1.43E+04, 2.73E+03 | 1.09E+04, 2.57E+03 | 3.25E-27, 8.21E-27 | 0.00, 0.00
Schwefel 1.2 | 50 | 4.57E+04, 6.46E+03 | 4.69E+04, 7.36E+03 | 4.12E+04, 5.83E+03 | 1.38E-21, 4.00E-21 | 0.00, 0.00
Schwefel 2.21 | 20 | 3.9602, 1.37E+00 | 0.00, 0.00 | 0.3325, 1.08E+00 | 7.19E-278, 6.90E-278 | 0.00, 0.00
Schwefel 2.21 | 30 | 24.5694, 5.66E+00 | 1.21E-197, 0.00 | 12.6211, 2.66E+00 | 3.96E-253, 4.24E-253 | 4.7E-324, 0.00
Schwefel 2.21 | 50 | 56.3380, 4.84E+00 | 25.5055, 5.67E+00 | 45.3075, 4.32E+00 | 4.77E-234, 5.11E-234 | 4.9E-324, 0.00
Rosenbrock | 20 | 1.1114, 1.80E+00 | 15.7165, 1.40E+00 | 1.6769, 2.90E+00 | 16.0706, 3.68E-01 | 11.0955, 8.71E-01
Rosenbrock | 30 | 4.5509, 4.88E+00 | 26.4282, 1.40E+00 | 7.4796, 1.91E+01 | 26.6567, 2.94E-01 | 22.7934, 5.82E-01
Rosenbrock | 50 | 48.03, 4.67E+01 | 47.0280, 8.60E-01 | 25.7164, 3.18E+01 | 47.0162, 3.56E-01 | 43.9786, 4.55E-01
Step | 20 | 5.55E-16, 1.69E-16 | 6.31E-16, 2.13E-16 | 3.34E-16, 1.02E-16 | 1.99E-20, 5.03E-20 | 6.16E-33, 4.11E-33
Step | 30 | 2.49E-09, 3.68E-09 | 3.84E-10, 2.32E-10 | 6.45E-16, 1.11E-16 | 2.74E-09, 5.36E-09 | 1.17E-26, 3.55E-26
Step | 50 | 1.36E-05, 1.75E-05 | 1.84E-05, 1.74E-05 | 5.65E-09, 3.69E-09 | 6.26E-04, 6.33E-04 | 1.39E-11, 1.61E-11
Quartic | 20 | 6.51E-02, 2.03E-02 | 8.71E-03, 3.24E-03 | 3.31E-02, 7.93E-03 | 1.71E-02, 1.01E-02 | 6.31E-03, 6.45E-03
Quartic | 30 | 1.56E-01, 4.65E-02 | 1.96E-02, 9.34E-03 | 8.48E-02, 2.79E-02 | 1.71E-02, 8.95E-03 | 8.29E-03, 4.30E-03
Quartic | 50 | 4.88E-01, 1.07E-01 | 8.83E-02, 2.55E-02 | 2.46E-01, 4.72E-02 | 1.59E-02, 8.11E-03 | 9.68E-03, 3.88E-03
Schwefel | 20 | -8327.49, 6.63E+01 | -8323.77, 7.40E+01 | -8355.92, 7.23E+01 | -8105.47, 1.74E+02 | -8202.98, 1.27E+02
Schwefel | 30 | -12130.31, 1.59E+02 | -12251.03, 1.67E+02 | -12407.29, 1.06E+02 | -12311.72, 2.21E+02 | -12351.4, 1.35E+02
Schwefel | 50 | -19326.50, 2.66E+02 | -19313.49, 2.77E+02 | -19975.29, 2.31E+02 | -20437.84, 1.48E+02 | -20533.71, 2.46E+02
Rastrigin | 20 | 1.41E-11, 4.05E-11 | 0.00, 0.00 | 0.00, 0.00 | 1.95E-13, 2.32E-13 | 0.00, 0.00
Rastrigin | 30 | 0.4531, 5.15E-01 | 0.00, 0.00 | 0.0331, 1.81E-01 | 1.87E-12, 6.66E-12 | 0.00, 0.00
Rastrigin | 50 | 8.4433, 2.70E+00 | 0.00, 0.00 | 2.1733, 1.07E+00 | 2.03E-12, 5.46E-12 | 0.00, 0.00
Ackley | 20 | 2.83E-09, 2.58E-09 | 8.88E-16, 0.00 | 2.75E-14, 3.58E-15 | 3.55E-15, 8.32E-31 | 7.11E-16, 1.50E-15
Ackley | 30 | 2.75E-05, 2.13E-05 | 8.88E-16, 0.00 | 7.78E-10, 2.98E-10 | 3.55E-15, 8.32E-31 | 1.42E-15, 1.83E-15
Ackley | 50 | 4.71E-02, 3.40E-02 | 8.88E-16, 0.00 | 1.11E-04, 3.88E-05 | 3.55E-15, 8.32E-31 | 1.42E-15, 1.83E-15
Griewank | 20 | 3.71E-03, 6.61E-03 | 0.00, 0.00 | 6.02E-04, 2.23E-03 | 0.00, 0.00 | 0.00, 0.00
Griewank | 30 | 3.81E-03, 8.45E-03 | 0.00, 0.00 | 6.96E-04, 2.26E-03 | 0.00, 0.00 | 0.00, 0.00
Griewank | 50 | 1.19E-02, 1.97E-02 | 0.00, 0.00 | 1.04E-03, 2.74E-03 | 0.00, 0.00 | 0.00, 0.00
Penalized | 20 | 4.06E-16, 9.42E-17 | 4.17E-16, 1.09E-16 | 3.26E-16, 6.67E-17 | 1.13E-06, 1.15E-06 | 4.00E-08, 9.72E-15
Penalized | 30 | 1.18E-10, 2.56E-10 | 7.10E-12, 5.25E-12 | 5.86E-16, 1.13E-16 | 6.16E-03, 2.34E-02 | 2.67E-08, 1.15E-13
Penalized | 50 | 8.95E-06, 3.21E-05 | 5.42E-07, 2.98E-07 | 9.30E-11, 7.96E-11 | 6.01E-02, 6.71E-02 | 5.72E-08, 2.81E-08
Penalized 2 | 20 | 6.93E-08, 2.92E-07 | 1.75E-16, 4.54E-16 | 6.55E-08, 2.44E-07 | 1.13E-06, 1.15E-06 | 2.54E-08, 3.77E-11
Penalized 2 | 30 | 2.27E-07, 4.12E-07 | 4.78E-08, 2.04E-07 | 2.17E-07, 5.66E-07 | 6.16E-03, 2.34E-02 | 2.55E-08, 4.89E-11
Penalized 2 | 50 | 1.35E-05, 2.78E-05 | 2.41E-05, 4.35E-05 | 8.87E-07, 1.53E-06 | 6.01E-02, 6.71E-02 | 1.82E-06, 1.08E-06
It is observed from the results that all the algorithms perform identically for functions 14, 16, 18, 19 and 21-23. The performance of I-TLBO is better than that of the rest of the algorithms for the Kowalik function, while the performances of the different variants of ABC are better than that of TLBO for the Hartman 6 function.
In order to examine the convergence of TLBO and I-TLBO, a unimodal (Step) and a multimodal (Rastrigin) function are considered for the experiment with dimensions 20, 30 and 50. The maximum number of function evaluations is set as 40000, and a graph is plotted between the function value (on a logarithmic scale) and the number of function evaluations. The function value is taken as the average of the function values over 10 different independent runs. Figures 1 and 2 show the convergence graphs of the unimodal and multimodal functions, respectively. It is observed from the graphs that the convergence rate of I-TLBO is faster than that of the basic TLBO algorithm for both unimodal and multimodal functions for all the dimensions. Similarly, Table 9 shows the computational effort of the TLBO and I-TLBO algorithms for the lower dimension problems (functions 14-23) in the form of the mean number of function evaluations required to achieve the global optimum value within a gap of $10^{-3}$. Here, the mean number of function evaluations is obtained through 30 independent runs on each function. Here, also, the I-TLBO algorithm requires fewer function evaluations than the basic TLBO algorithm to achieve the global optimum value. Moreover, as the number of teachers is increased from 1 to 4, the convergence rate of the I-TLBO algorithm improves.
5. Conclusion
An improved TLBO algorithm has been proposed for uncon-
strained optimization problems.
Table 7: Comparative results of TLBO and I-TLBO algorithms with the PS-ABC algorithm over 30 independent runs (for functions 1-13 of Table 5 with 120000 maximum function evaluations).

Function | D | PS-ABC [31] (Mean, SD) | TLBO (Mean, SD) | I-TLBO (Mean, SD)
Sphere | 20 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Sphere | 30 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Sphere | 50 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.22 | 20 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.22 | 30 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.22 | 50 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Schwefel 1.2 | 20 | 1.04E+03, 6.11E+02 | 0.00, 0.00 | 0.00, 0.00
Schwefel 1.2 | 30 | 6.11E+03, 1.69E+03 | 0.00, 0.00 | 0.00, 0.00
Schwefel 1.2 | 50 | 3.01E+04, 4.11E+03 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.21 | 20 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Schwefel 2.21 | 30 | 8.59E-115, 4.71E-114 | 4.9E-324, 0.00 | 0.00, 0.00
Schwefel 2.21 | 50 | 19.6683, 6.31E+00 | 9.9E-324, 0.00 | 0.00, 0.00
Rosenbrock | 20 | 0.5190, 1.08E+00 | 15.0536, 2.28E-01 | 1.3785, 8.49E-01
Rosenbrock | 30 | 1.5922, 4.41E+00 | 25.4036, 3.50E-01 | 15.032, 1.2E+00
Rosenbrock | 50 | 34.4913, 3.03E+01 | 45.8955, 2.89E-01 | 38.7294, 7.57E-01
Step | 20 | 2.61E-16, 3.86E-17 | 9.24E-33, 4.36E-33 | 0.00, 0.00
Step | 30 | 5.71E-16, 8.25E-17 | 1.94E-29, 1.88E-29 | 0.00, 0.00
Step | 50 | 1.16E-15, 1.41E-16 | 3.26E-13, 5.11E-13 | 1.51E-32, 8.89E-33
Quartic | 20 | 6.52E-03, 2.25E-03 | 1.07E-02, 5.16E-03 | 5.16E-03, 4.64E-03
Quartic | 30 | 2.15E-02, 6.88E-03 | 1.15E-02, 3.71E-03 | 5.36E-03, 3.72E-03
Quartic | 50 | 6.53E-02, 1.77E-02 | 1.17E-02, 5.00E-03 | 5.60E-03, 3.40E-03
Schwefel | 20 | -8379.66, 4.72E-12 | -8210.23, 1.66E+02 | -8263.84, 1.16E+02
Schwefel | 30 | -12564.23, 2.55E+01 | -12428.60, 1.53E+02 | -12519.92, 1.16E+02
Schwefel | 50 | -20887.98, 8.04E+01 | -20620.72, 1.89E+02 | -20700.70, 1.64E+02
Rastrigin | 20 | 0.00, 0.00 | 6.41E-14, 6.16E-14 | 0.00, 0.00
Rastrigin | 30 | 0.00, 0.00 | 6.95E-13, 1.64E-12 | 0.00, 0.00
Rastrigin | 50 | 0.00, 0.00 | 7.90E-13, 1.89E-12 | 0.00, 0.00
Ackley | 20 | 8.88E-16, 0.00 | 3.55E-15, 8.32E-31 | 7.11E-16, 0.00
Ackley | 30 | 8.88E-16, 0.00 | 3.55E-15, 8.32E-31 | 7.11E-16, 0.00
Ackley | 50 | 8.88E-16, 0.00 | 3.55E-15, 8.32E-31 | 7.11E-16, 0.00
Griewank | 20 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Griewank | 30 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Griewank | 50 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00
Penalized | 20 | 2.55E-16, 4.97E-17 | 4.00E-08, 6.85E-24 | 2.42E-16, 1.09E-16
Penalized | 30 | 5.53E-16, 8.68E-17 | 2.67E-08, 6.79E-12 | 4.98E-16, 2.14E-16
Penalized | 50 | 1.02E-15, 1.58E-16 | 5.18E-05, 1.92E-04 | 9.19E-16, 5.38E-16
Penalized 2 | 20 | 2.34E-18, 2.20E-18 | 2.34E-08, 6.85E-24 | 1.93E-18, 1.12E-18
Penalized 2 | 30 | 6.06E-18, 5.60E-18 | 2.37E-08, 4.91E-10 | 5.92E-18, 4.74E-18
Penalized 2 | 50 | 5.05E-17, 1.53E-16 | 1.52E-03, 5.29E-03 | 4.87E-17, 4.26E-17
Table 8: Comparative results of TLBO and I-TLBO algorithms with different variants of the ABC algorithm over 30 independent runs (for functions 14-23 of Table 5).
Source: Results of algorithms except TLBO and I-TLBO are taken from Ref. [31].

Function | ABC | I-ABC | GABC | PS-ABC | TLBO | I-TLBO
Foxholes | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9980
Kowalik | 6.74E-04 | 3.76E-04 | 5.54E-04 | 4.14E-04 | 3.08E-04 | 3.08E-04
6-Hump camel back | -1.0316 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | -1.0316
Branin | 0.7012 | 0.3978 | 0.6212 | 0.6300 | 0.3978 | 0.3978
Goldstein-Price | 3.0010 | 3.0000 | 3.0000 | 3.0000 | 3.0000 | 3.0000
Hartman 3 | -3.8628 | -3.8628 | -3.8628 | -3.8628 | -3.8628 | -3.8628
Hartman 6 | -3.3220 | -3.3220 | -3.3220 | -3.3220 | -3.2866 | -3.2948
Shekel 5 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532 | -10.1532
Shekel 7 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4029 | -10.4029
Shekel 10 | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5364 | -10.5364
Two new search mechanisms are introduced in the proposed approach in the form of tutorial training and self-motivated learning. Moreover, the teaching factor of the basic TLBO algorithm is modified, and an adaptive teaching factor is introduced. Furthermore, more than one teacher is introduced for the learners. The presented modifications enhance the exploration and exploitation capacities of the basic TLBO algorithm. The performance of the I-TLBO algorithm is evaluated by conducting small scale to large scale experiments on various unconstrained benchmark functions, and the performance is compared with that of the other state-of-the-art algorithms available in the literature. Furthermore, the comparison between the basic TLBO and I-TLBO is also reported. The experimental results have shown the satisfactory performance of the I-TLBO algorithm for unconstrained optimization problems. The proposed algorithm can be easily customized to suit the optimization of any system involving large numbers of variables and objectives.
A possible direction for future research work is extending the I-TLBO algorithm to handle single objective and multi-objective constrained optimization problems and exploring its effectiveness.
Figure 1: Convergence of TLBO and I-TLBO algorithms for a unimodal function (Step).
Figure 2: Convergence of TLBO and I-TLBO algorithms for a multimodal function (Rastrigin).
Table 9: Mean number of function evaluations required by the TLBO and I-TLBO algorithms for functions 14-23 of Table 5.

Function | TLBO | I-TLBO (NT = 1) | I-TLBO (NT = 2) | I-TLBO (NT = 3) | I-TLBO (NT = 4)
Foxholes | 524 | 472 | 431 | 344 | 278
Kowalik | 2488 | 2464 | 2412 | 2344 | 2252
6-Hump camel back | 447 | 426 | 408 | 339 | 276
Branin | 443 | 438 | 421 | 390 | 367
Goldstein-Price | 582 | 570 | 553 | 511 | 473
Hartman 3 | 547 | 524 | 492 | 378 | 310
Hartman 6 | 24847 | 18998 | 18542 | 17326 | 16696
Shekel 5 | 1245 | 1218 | 1212 | 1124 | 1046
Shekel 7 | 1272 | 1246 | 1228 | 1136 | 1053
Shekel 10 | 1270 | 1251 | 1233 | 1150 | 1062
Analyzing the effect of the number of teachers on the fitness value of the objective function, and experimentation on very large dimension problems (i.e. 100 and 500), are also possible future research directions.
References
[1] Holland, J.H., Adaptation in Natural and Artificial Systems, University of
Michigan Press, Ann Arbor, USA (1975).
[2] Storn, R. and Price, K. "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces", J. Global Optim., 11, pp. 341-359 (1997).
[3] Price, K., Storn, R. and Lampinen, A., Differential Evolution - A Practical Approach to Global Optimization, Springer-Verlag, Berlin, Germany (2005).
[4] Runarsson, T.P. and Yao, X. "Stochastic ranking for constrained evolutionary optimization", IEEE Trans. Evol. Comput., 4(3), pp. 284-294 (2000).
[5] Fogel, L.J., Owens, A.J. and Walsh, M.J., Artificial Intelligence Through Simulated Evolution, John Wiley, New York, USA (1966).
[6] Farmer, J.D., Packard, N. and Perelson, A. "The immune system, adaptation and machine learning", Physica D, 22, pp. 187-204 (1986).
[7] Passino, K.M. "Biomimicry of bacterial foraging for distributed optimization and control", IEEE Control Syst. Mag., 22, pp. 52-67 (2002).
[8] Kennedy, J. and Eberhart, R.C. "Particle swarm optimization", IEEE Int. Conf. on Neural Networks, 4, Washington DC, USA, pp. 1942-1948 (1995).
[9] Dorigo, M., Maniezzo, V. and Colorni, A. "Positive feedback as a search strategy", Tech. Report 91-016, Politecnico di Milano, Italy (1991).
[10] Eusuff, M. and Lansey, E. "Optimization of water distribution network design using the shuffled frog leaping algorithm", J. Water Res. Pl.-ASCE, 129, pp. 210-225 (2003).
[11] Karaboga, D. "An idea based on honey bee swarm for numerical optimization", Tech. Report-TR06, Computer Engineering Department, Erciyes University, Turkey (2005).
[12] Karaboga, D. and Basturk, B. "A powerful and efficient algorithm for numerical function optimization: Artificial Bee Colony (ABC) algorithm", J. Global Optim., 39(3), pp. 459-471 (2007).
[13] Karaboga, D. and Basturk, B. "On the performance of Artificial Bee Colony (ABC) algorithm", Appl. Soft Comput., 8(1), pp. 687-697 (2008).
[14] Karaboga, D. and Akay, B. "A comparative study of Artificial Bee Colony algorithm", Appl. Math. Comput., 214, pp. 108-132 (2009).
[15] Geem, Z.W., Kim, J.H. and Loganathan, G.V. "A new heuristic optimization algorithm: harmony search", Simulation, 76, pp. 60-70 (2001).
[16] Rashedi, E., Nezamabadi-pour, H. and Saryazdi, S. "GSA: a gravitational search algorithm", Inform. Sci., 179, pp. 2232-2248 (2009).
[17] Simon, D. "Biogeography-based optimization", IEEE Trans. Evol. Comput., 12, pp. 702-713 (2008).
[18] Ahrari, A. and Atai, A.A. "Grenade explosion method - a novel tool for optimization of multimodal functions", Appl. Soft Comput., 10, pp. 1132-1140 (2010).
[19] Kashan, A.H. "An efficient algorithm for constrained global optimization and application to mechanical engineering design: league championship algorithm (LCA)", Comput. Aided Des., 43, pp. 1769-1792 (2011).
[20] Kaveh, A. and Talatahari, S. "A novel heuristic optimization method: charged system search", Acta Mech., 213, pp. 267-286 (2010).
[21] Kaveh, A. and Talatahari, S. "An enhanced charged system search for configuration optimization using the concept of fields of forces", Struct. Multidiscip. Optim., 43, pp. 339-351 (2011).
[22] Gao, J. and Wang, J. "A hybrid quantum-inspired immune algorithm for multi-objective optimization", Appl. Math. Comput., 217, pp. 4754-4770 (2011).
[23] Moumen, S.E., Ellaia, R. and Aboulaich, R. "A new hybrid method for solving global optimization problem", Appl. Math. Comput., 218, pp. 3265-3276 (2011).
[24] Fan, S.S. and Zahara, E. "A hybrid simplex search and particle swarm optimization for unconstrained optimization", European J. Oper. Res., 181, pp. 527-548 (2007).
[25] Olenšek, J., Tuma, T., Puhan, J. and Bürmen, Á. "A new asynchronous parallel global optimization method based on simulated annealing and differential evolution", Appl. Soft Comput., 11, pp. 1481-1489 (2011).
[26] Liu, G., Li, Y., Nie, X. and Zheng, H. "A novel clustering-based differential evolution with 2 multi-parent crossovers for global optimization", Appl. Soft Comput., 12, pp. 663-681 (2012).
[27] Cai, Z., Gong, W., Ling, C.X. and Zhang, H. "A clustering-based differential evolution for global optimization", Appl. Soft Comput., 11, pp. 1363-1379 (2011).
[28] Mashinchi, M.H., Orgun, M.A. and Pedrycz, W. "Hybrid optimization with improved tabu search", Appl. Soft Comput., 11, pp. 1993-2006 (2011).
[29] Noel, M.M. "A new gradient based particle swarm optimization algorithm for accurate computation of global minimum", Appl. Soft Comput., 12, pp. 353-359 (2012).
[30] Sadjadi, S.J. and Soltani, R. "An efficient heuristic versus a robust hybrid meta-heuristic for general framework of serial-parallel redundancy problem", Reliab. Eng. Syst. Saf., 94(11), pp. 1703-1710 (2009).
[31] Li, G., Niu, P. and Xiao, X. "Development and investigation of efficient Artificial Bee Colony algorithm for numerical function optimization", Appl. Soft Comput., 12, pp. 320-332 (2012).
[32] Zhu, G. and Kwong, S. "Gbest-guided Artificial Bee Colony algorithm for numerical function optimization", Appl. Math. Comput., 217, pp. 3166-3173 (2010).
[33] Akay, B. and Karaboga, D. "A modified Artificial Bee Colony algorithm for real-parameter optimization", Inform. Sci., 192(1), pp. 120-142 (2012).
[34] Mahdavi, M., Fesanghary, M. and Damangir, E. "An improved harmony search algorithm for solving optimization problems", Appl. Math. Comput., 188, pp. 1767-1779 (2007).
[35] Rao, R.V., Savsani, V.J. and Vakharia, D.P. "Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems", Comput. Aided Des., 43(3), pp. 303-315 (2011).
[36] Rao, R.V., Savsani, V.J. and Vakharia, D.P. "Teaching-learning-based optimization: a novel optimization method for continuous non-linear large scale problems", Inform. Sci., 183(1), pp. 1-15 (2011).
[37] Rao, R.V. and Patel, V. "An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems", Int. J. Ind. Eng. Comput., 3(4), pp. 535-560 (2012).
[38] Črepinšek, M., Liu, S.H. and Mernik, L. "A note on teaching-learning-based optimization algorithm", Inform. Sci., 212, pp. 79-93 (2012).
[39] Rao, R.V. and Patel, V. "Multi-objective optimization of two stage thermoelectric cooler using a modified teaching-learning-based optimization algorithm", Eng. Appl. Artif. Intell., 26(1), pp. 430-445 (2013).
[40] Rao, R.V. and Patel, V. "Multi-objective optimization of heat exchangers using a modified teaching-learning-based optimization algorithm", Appl. Math. Model., 37(3), pp. 1147-1162 (2013).
Dr. Ravipudi Venkata Rao is Professor in the Department of Mechanical
Engineering of S.V. National Institute of Technology, Surat, Gujarat (India).
He received his B.Tech degree from Nagarjuna University, his M.Tech degree
from BHU, Varanasi, and his Ph.D. degree from BITS, Pilani, India. He has
about 22 years of teaching and research experience. He has authored about
260 research papers published in various reputed international journals
and conference proceedings. He is also on the Editorial Boards of various
international journals, and an Associate Editor of some of them. His research interests
include: advanced engineering optimization techniques and the applications,
fuzzy multiple attribute decision making, advanced manufacturing technology,
automation and robotics.
Vivek Patel is Assistant Professor at Lukhdhirji Government Engineering
College, Morbi, Gujarat (India), and is currently pursuing his Ph.D. degree at S.V.
National Institute of Technology, Surat, India. His research interests include:
design optimization of thermal systems and multi-objective optimization. He
has about 8 years of teaching and research experience and has authored 25
research papers published in various international journals and conference
proceedings.
