IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B: CYBERNETICS, VOL. 42, NO. 1, FEBRUARY 2012
Hybrid Ant Colony-Genetic Algorithm (GAAPI)
for Global Continuous Optimization
Irina Ciornei, Student Member, IEEE, and Elias Kyriakides, Senior Member, IEEE
Abstract: Many real-life optimization problems face a high degree of nonsmoothness (many local minima), which can prevent a search algorithm from moving toward the global solution. Evolution-based algorithms try to deal with this issue. The algorithm proposed in this paper is called GAAPI and is a hybridization of two optimization techniques: a special class of ant colony optimization for continuous domains entitled API, and a genetic algorithm (GA). The algorithm adopts the downhill behavior of API (a key characteristic of optimization algorithms) and the good spreading in the solution space of the GA. A probabilistic approach and an empirical comparison study are presented to prove the convergence of the proposed method in solving different classes of complex global continuous optimization problems. Numerical results are reported and compared to existing results in the literature to validate the feasibility and effectiveness of the proposed method. The proposed algorithm is shown to be effective and efficient for most of the test functions.
Index Terms: Ant colony optimization (ACO), genetic algorithm (GA), global continuous optimization.
I. INTRODUCTION
GLOBAL optimization in operations research and
computer science refers to the procedure of finding
approximate solutions, which are considered the best possible
solutions, to objective functions [1]. Ideally, the approximation
is optimal up to a small constant error, for which the solution
is considered to be satisfactory. In general, there can be
solutions that are locally optimal, but not globally optimal;
this situation appears more frequently when the dimension
of the problem is high and when the function has many local
optima [2]. Consequently, global optimization problems are
typically quite difficult to be solved exactly, particularly in
the context of nonlinear problems or combinatorial problems.
Global optimization problems fall within the broader class of
nonlinear programming. It should be noted that approximation
algorithms are increasingly being used for problems where
exact polynomial algorithms are known but are computationally
expensive due to the dimensionality of these problems.
This paper focuses on general global optimization problems in the continuous domain, having a nonlinear objective function that is either unconstrained or has simple bound constraints. A variety of strategies, such as deterministic methods and probabilistic/heuristic methods, have been proposed to solve these problems. In deterministic methods, a clear relation exists between the characteristics of the possible solutions for a given problem. Probabilistic/heuristic methods solve optimization problems approximately when the aforementioned relationship is not obvious or is complicated, or when the dimensionality of the search space is very high [3].

Manuscript received August 9, 2010; revised March 14, 2011 and July 5, 2011; accepted July 24, 2011. Date of current version December 7, 2011. This paper was recommended by Associate Editor S. Hu.
The authors are with the KIOS Research Center for Intelligent Systems and Networks and the Department of Electrical and Computer Engineering, University of Cyprus, 1678 Nicosia, Cyprus (e-mail: ciornei.irina@ucy.ac.cy; elias@ucy.ac.cy).
Digital Object Identifier 10.1109/TSMCB.2011.2164245
In the last three decades, a significant research effort was focused on the development of effective and efficient stochastic methods that could reach the nearest global optimal solution. In this class of methods, evolutionary computation (EC) is one of the favorite methodologies, using techniques that exploit a set of potential solutions (called a population) to detect the optimal solution through cooperation and competition among the individuals of the population [4]. These techniques often find the optima of difficult optimization problems faster than traditional adaptive stochastic search algorithms. The most frequently used population-based EC methods include evolutionary strategies [4]-[6], genetic algorithms (GAs) [7]-[9], evolutionary programming (EP) [10], clustering methods [11], ant colony optimization (ACO/API) [12]-[14], and particle swarm optimization (PSO) [15], [16].
One of the issues that probabilistic optimization algorithms
might face in solving global, highly nonconvex optimization
problems is premature convergence. One of the causes of
premature convergence of evolutionary-based algorithms is the
lack of diversity. In nature, the diversity is ensured by the vari-
ety and abundance of organisms at a given place and time. The
same principle (different type of solutions at one moment in the
iterative search process) is used in computational intelligence
for optimization algorithms [17].
Another issue of probabilistic approaches in optimization is
related to their lack of advanced search capability around the
global solution. Several studies have shown that incorporating
some knowledge about the search space can improve the search
capability of evolutionary algorithms (EAs) significantly [18].
In particular, the hybridization of EAs with local searches has
proven to be very promising [19], [20].
A. ACO for Continuous Search Domains
This paper adopts the hybridization of a special class of ACO called API [14] and a real-coded genetic algorithm (RCGA) similar to the approach of the evolutionary strategy with float representation. In contrast to its ACO counterparts, which were mainly applied to discrete optimization problems, API was particularly designed for continuous optimization problems. The proposed ant-based algorithm differs in terms of search
strategy from the basic ACO in that the ants navigate using their memory of the visual landmarks encountered along familiar routes, instead of using pheromones. Further, API aims at maximizing the prey instead of minimizing the path. API also has a good downhill (gradient-descending) search behavior. One of its drawbacks, though, is that it may quickly end up in a local minimum due to a constant movement of the nest only to the best position found by the ants (the search agents).
ACO started to be analyzed from the continuous optimization perspective in an analytical way [21]-[23] only recently, in spite of the fact that the first proposal to adapt ACO for continuous optimization dates back to 1995 [24]. Bilchev and Parmee's proposal, entitled continuous ant colony optimization, initializes a nest at a given point of the search space. Then, it generates random vectors corresponding to the directions that will be followed by each ant in its attempt to improve the solution. If an ant is successful in such a pursuit, the chosen direction vector is updated. The continuous pheromone model proposed later on [23] is a more complex approach which uses a Gaussian probabilistic approach for the pheromone update.
This model consists of a Gaussian pdf centered on the best
solution found so far in the search process (up to the current
iteration). The variance vector of this pdf starts with a value
three times greater than the range of each variable of the prob-
lem (e.g., each ant takes a proportion from the solution space
to explore). Then, this variance value is modied according
to a weighted average of the distance between each individual
(ant) in the population and the best solution found so far. The
variance vector depends only on the number of ants. The main
drawback of this model is the fact that it only investigates one
promising region of the problem at a time [21], and it may
therefore suffer from premature convergence. In [22], an ACO algorithm for the continuous domain is proposed; this algorithm can avoid premature convergence, and therefore trapping in local optima. The proposed algorithm uses an archive that holds a predefined number of the best solutions found so far. Each solution corresponds to the center of a different Gaussian pdf.
B. Hybrid ACO-GA
In recent years, there have been proposals to hybridize ACO and GA in a number of optimization applications [25]-[28]. The approaches adopted in [25] and [27] are similar and consist of using both the ACO and GA algorithms to search in parallel for a better solution. Both of them refer to the combinatorial optimization model of ACO. Migration of solutions from one algorithm to the other occurs whenever either of them finds an improved potential solution after an iteration. Thus, a percentage of the best solutions of ACO (say K%) is added to the GA population pool and follows the breeding process proportionally to fitness. Then, another percentage of the best individuals in the GA (L% of the GA population) is used to add fitness-proportional pheromone in the ACO search process, and a percentage of the worst-fitted individuals of the GA (M%) is used to evaporate a constant amount of pheromone in the ACO search. When both algorithms find an improvement, or neither improves after an iteration, no migration takes place. The application of the hybrid ACO-GA to continuous optimization problems was adopted in [26] and [28].
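The migration rule described above can be sketched as follows. This is a minimal illustration, not the implementation of [25] or [27]: the default percentages, the list-based population layout, and the function names are assumptions made here for clarity.

```python
def migrate(aco_best, ga_pop, fitness, K=0.2, L=0.2, M=0.1):
    """One migration step between a combinatorial ACO and a GA.

    aco_best : list of ACO solutions, best first
    ga_pop   : list of GA individuals
    fitness  : callable scoring a solution (higher is better)
    Returns the enlarged GA pool, the GA individuals that will deposit
    fitness-proportional pheromone, and those that will evaporate a
    constant amount of pheromone on the ACO side.
    """
    # K% of the best ACO solutions join the GA breeding pool
    k = max(1, int(K * len(aco_best)))
    ga_pool = ga_pop + aco_best[:k]

    # L% best GA individuals reinforce pheromone in the ACO search ...
    ranked = sorted(ga_pool, key=fitness, reverse=True)
    l = max(1, int(L * len(ga_pop)))
    reinforce = ranked[:l]

    # ... and M% worst GA individuals trigger pheromone evaporation
    m = max(1, int(M * len(ga_pop)))
    evaporate = ranked[-m:]
    return ga_pool, reinforce, evaporate
```

In this sketch the exchange is unconditional; the no-migration rule (both algorithms improve, or neither does) would be checked by the caller before invoking `migrate`.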
This work proposes a novel hybrid stochastic algorithm
which intends to solve global unconstrained continuous op-
timization problems of medium and large dimensions. The
proposed algorithm is a hybridization of two classes of evo-
lutionary optimization algorithms: a special class of ACO for
continuous domain and a RCGA. The proposed algorithm,
entitled GAAPI, keeps the downhill search ability of API,
while avoiding local minimum trapping by using the diversity
in the solution given by the RCGA. The main advantages of the
optimization tool proposed are its reduced computational time,
and the robustness and consistency of high quality approximate
solutions.
The paper is organized as follows: Section II provides
background information and necessary concepts for the type
of optimization problems tackled in this paper. Section III
provides a description of the proposed algorithm and its basic
steps. Sections IV and V provide an analysis of the algorithm's performance. More specifically, Section IV focuses on parameter settings and benchmark functions used for comparison analysis with other heuristic methodologies for continuous global optimization. Section V provides an empirical proof of the convergence behavior of the proposed algorithm using 20 test functions typically used in the literature for the analysis of heuristic optimization techniques. Section VI is allocated to discussion and conclusions.
II. BACKGROUND INFORMATION AND CONCEPTS
The algorithm proposed in this paper addresses optimization
problems in the continuous domain of the form
    min_{x ∈ H} f(x)                                              (1)

where x ∈ R^n and H = {x ∈ R^n | l_i ≤ x_i ≤ u_i, i = 1, ..., n}, with l_i and u_i being the lower and upper bounds of x_i, respectively. We are particularly interested in unconstrained optimization problems, and thus it is assumed that the set H is wide enough such that

    H ⊇ H_0 = {x ∈ R^n | f(x) ≤ c}                                (2)

for a sufficiently large real number c.
Further, it is assumed that the function f(x) is continuous on H, and that H ∩ H_0 is a nonempty and compact set for a real number c. An interpretation of H_0 and c for the algorithm proposed in this paper is given in the following section.
Suppose that x* = (x*_1, x*_2, ..., x*_n) is a globally optimal solution and ε > 0 is a sufficiently small number. If a candidate solution x̃ = (x̃_1, x̃_2, ..., x̃_n) satisfies

    |x̃_i - x*_i| ≤ ε,   i = 1, ..., n                             (3)

or

    |f(x̃) - f(x*)| ≤ ε                                            (4)

then x̃ is called an ε-optimal solution of the problem defined in (1).
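Conditions (3) and (4) translate directly into a small check. The function name and the tuple representation of solutions are illustrative assumptions:

```python
def is_eps_optimal(x_cand, x_star, f, eps):
    """True if candidate x_cand satisfies condition (3) or (4)
    with respect to a known global optimum x_star of objective f."""
    # (3): every coordinate is within eps of the optimum
    close_in_x = all(abs(a - b) <= eps for a, b in zip(x_cand, x_star))
    # (4): the objective value is within eps of the optimal value
    close_in_f = abs(f(x_cand) - f(x_star)) <= eps
    return close_in_x or close_in_f
```

Such a check is only usable in benchmarking, where x* is known in advance, which is exactly how the test functions of Section IV are evaluated.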
Fig. 1. Search space division according to the API strategy. S = R^2 denotes a bi-dimensional solution space; s_1, s_2, s_3 are sites randomly generated around the nest, their maximum distance from the nest being given by A_ant. The small squares denote local exploration of the site (points situated at a maximum distance of A_site from the site center s) [14].
In the case of the algorithm proposed in this paper, to find ε-optimal solutions, the feasible solution space [l, u] is divided into smaller solution spaces with different amplitudes (defined as a percentage of the search space) from the initial domain, where overlapping is allowed. Fig. 1 shows how the initial solution space is divided into smaller search spaces. The example in Fig. 1 is given for a 2-D search space. This approach is the one adopted by Monmarché in his thesis when proposing the API algorithm [29]. The approach is quite similar to the adaptation of ACO for continuous domains proposed in [24].
The nest N initially takes a random position in the feasible search space [l, u], where l = (l_1, l_2, ..., l_n) and u = (u_1, u_2, ..., u_n) are the lower and upper bound vectors for each dimension, respectively, delimiting the feasible solution space in R^n (n is the dimension of the problem). Therefore, N = (N_1, N_2, ..., N_n) is the initial position of the nest in the feasible solution space.
The amplitudes for search space division change dynamically. The formula used to determine the search amplitude of each agent (ant) is given by

    A_ant = (1 / (k · N_ants)) · G_ant_i                          (5)

where A_ant is the radius from the nest, delimiting the solution space ant i can cover; k is the current index (iteration of the search loop) of ant i; N_ants is the total number of search agents; and G_ant_i is the age of the ant, a parameter that increases as ant i performs its tasks with time, computed by

    G_ant_i = T_i / T_ant_i.                                      (6)

This parameter was inspired by the real behavior of Pachycondyla apicalis ants described in [30]. T_i is the current number of iterations after the movement of ant i, and T_ant_i is the maximum number of iterations between two movements of ant i.
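Equations (5) and (6) can be sketched numerically as below. Note that the layout of the source is ambiguous, so the product reading of (5) used here is an assumption; the function and argument names are illustrative.

```python
def ant_age(T_i, T_max_i):
    """G_ant_i in (6): iterations since the ant's last movement,
    normalized by the maximum allowed between two movements."""
    return T_i / T_max_i

def ant_amplitude(k, n_ants, T_i, T_max_i):
    """A_ant in (5): search radius around the nest for ant i at
    search-loop iteration k, shrinking as the search progresses."""
    return (1.0 / (k * n_ants)) * ant_age(T_i, T_max_i)
```

Under this reading, the amplitude contracts with the iteration count k, so each ant covers a progressively smaller proportion of the solution space around the nest.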
III. GAAPI ALGORITHM
This section of the paper describes the proposed GAAPI
algorithm. First, an introduction to the two main components of the GAAPI algorithm is provided. The first component, the API algorithm, is the core of the proposed method; the second component is the RCGA, with emphasis on GA operators modified so as to maintain an improved balance between exploration and exploitation in the search procedure. In essence, the proposed GAAPI algorithm is mainly the API algorithm, but the ants selected for recruitment, instead of exchanging information from their memorized sites, obtain new hunting sites through an RCGA that uses a population that includes as individuals values from old forgotten sites.
A. API Algorithm
The API algorithm was inspired by the behavior of a type of ants (Pachycondyla apicalis) which live in the Mexican tropical forest near the Guatemalan border. Colonies of these ants comprise around 20 to 100 ants. The foraging strategy of the Pachycondyla apicalis ants can be characterized by the following description. First, these ants create their hunting sites, which are distributed relatively uniformly within a radius of approximately 10 m around their nest. In this way, using a small mosaic of areas, the ants cover a rather large region around the nest. Second, the ants intensify their search around some selected sites for prey. Pachycondyla apicalis ants use a recruitment mechanism called tandem running, in which pairs of ants, one leading and one following, move toward a resource. In this foraging process, these ants use their memory of the visual landmarks (a map of the terrain encountered in their previous search) rather than the pheromone trails (chemical signals) encountered in other ant species. After capturing their prey, the ants will move to a new nest via the tandem-running recruitment mechanism to begin a new cycle of foraging. Based on the natural behavior of Pachycondyla apicalis ants described in [30], Monmarché et al. proposed the API algorithm (short for apicalis) for the solution of optimization problems [29]. Despite the good performance of the algorithm, further research shows that API makes poor use of the memory that generally characterizes ant colony systems [31]. Monmarché suggested that a hybridization of API with simulated annealing could improve its performance [29].
A short step-by-step description of API, the main core of the GAAPI algorithm proposed in this paper, is given below:
1. Initialization: set the algorithm parameters
2. Generation of a new nest (exploration)
3. Exploitation
   3.1. Intensification search:
        For each ant
            if the number of hunting sites in its memory is less than a predefined number
                then create a new site in its neighborhood and exploit the new site
            elseif the previous site exploitation was successful
                then exploit the same site again
            else exploit a probabilistically selected site (among the sites in its memory)
            end
        end
   3.2. Erase sites: from the memory of ants, erase all sites that have been explored unsuccessfully more than a predefined consecutive number of times
   3.3. Information sharing: choose two ants randomly and exchange information between them. The information exchanged is the best site in their memory at the current iteration
   3.4. Nest movement:
        if the condition for nest movement is satisfied, go to step (4)
        else, go to step (3.1)
        end
4. Termination test:
   if the test is successful, STOP
   else, empty the memory of all ants and go to step (2)
   end.
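The intensification search of step 3.1 can be sketched for a single ant as follows. The dictionary layout of an ant and the `new_site`/`explore` callables are assumptions made here for illustration, not the paper's data structures:

```python
import random

def exploit(ant, objective, max_sites, new_site, explore):
    """One pass of step 3.1 for a single ant (minimization).

    ant      : dict with 'sites' (memorized sites) and 'last_success'
    new_site : callable returning a fresh site near the ant's position
    explore  : callable(site) -> candidate solution near that site
    """
    if len(ant["sites"]) < max_sites:
        site = new_site()                    # create and memorize a new site
        ant["sites"].append(site)
    elif ant["last_success"]:
        site = ant["current"]                # re-exploit the same site
    else:
        site = random.choice(ant["sites"])   # probabilistic fallback
    candidate = explore(site)                # local search around the site
    ant["last_success"] = objective(candidate) < objective(site)
    ant["current"] = site
    return candidate
```

Steps 3.2 to 3.4 (site erasure, information sharing, and the nest movement test) would then run over the whole colony after each such pass.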
B. RCGA Algorithm
The RCGA is inspired by the float representation of the evolutionary strategy approach. RCGAs work with real-number representation; the chromosomes have no other structure than floating-point vectors whose size is the number of variables of the optimization problem to be solved. This form of GA has the advantage of eliminating the coding and decoding procedures needed in the binary representation of GA, using real-valued object representation instead. For most applications of GAs in constrained optimization problems, real coding is used to represent a solution to a given problem. This is one of the reasons it has been adopted for hybridization with API in this work.
GAs start searching the solution space by initializing a population of random candidate solutions. Every individual in the population undergoes genetic evolution through crossover and mutation. The selection procedure is conducted based on the fitness of each individual. By using an elitist strategy, the best individual in each generation is guaranteed to be passed to the next generation. The elitist selection operator creates a new population by selecting individuals from the old population, biased toward the best individuals. The chromosomes which produce the best fitness are selected for the next generations. Crossover is the main genetic operator; it swaps chromosome parts between individuals. Crossover is not performed on every pair of individuals, its frequency being controlled by a crossover probability (P_c). This probability should have a large value for a higher chance of creating offspring with a genome similar to that of the parents. The blend crossover (denoted BLX-alpha) is the operator adopted in this work, due to its interesting property: the location of the child solution depends on the difference between the parent solutions. In other words, if the difference between the parent solutions is small, the difference between the child and parent solutions is also small. This property is essential for a search algorithm
to exhibit self-adaptation. Thus, BLX-alpha proceeds by blending two float vectors (x^t, y^t) using a parameter α ∈ [0, 1], where t denotes the index of the generation. The resulting children (x^{t+1}, y^{t+1}) are given by

    x_i^{t+1} = (1 - γ_i) x_i^t + γ_i y_i^t
    y_i^{t+1} = γ_i x_i^t + (1 - γ_i) y_i^t

respectively.
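A sketch of the blend crossover follows. The per-gene coefficient (written `g` for γ_i below) is drawn using the standard BLX-α construction γ_i = (1 + 2α)u - α with u uniform on [0, 1]; this construction is an assumption here, since the text does not define γ_i explicitly:

```python
import random

def blx_crossover(x, y, alpha=0.366, rng=random.random):
    """BLX-alpha: blend two parent float vectors gene by gene.
    Each child gene lies in the interval spanned by the parents,
    extended on both sides by a fraction alpha of its width."""
    child_a, child_b = [], []
    for xi, yi in zip(x, y):
        g = (1 + 2 * alpha) * rng() - alpha   # gamma_i in [-alpha, 1+alpha]
        child_a.append((1 - g) * xi + g * yi)
        child_b.append(g * xi + (1 - g) * yi)
    return child_a, child_b
```

Note that for every gene the two children sum to the two parents, so the operator preserves the population mean while its spread adapts to the parents' distance, which is the self-adaptation property cited above.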
The last operator is mutation, whose action is to change a random part of the string representing the individual. The mutation probability should be quite low relative to the crossover probability, so that only a few elements in the solution vector undergo the mutation process. If the probability of mutation is high, the offspring may lose too many of the characteristics of the parents, which may lead to divergence in the solution. Uniform mutation was adopted in this work. The algorithm is repeated for several generations until one of the individuals of the population converges to an optimal value or the required number of generations is reached.
A short step-by-step mathematical description of the RCGA is given below:
1. Initialize the population:
   s_i = U(l_k, u_k), ∀k, where s_i is individual i of the population, with i = 1, ..., m; U is a uniform distribution in the range [l_k, u_k], bounded in each dimension k; and m is the number of potential parents (or population size).
2. Determine the fitness score of the population and select parents according to their fitness score (the individuals with the highest fitness are selected as parents):
   Φ(s_i) = G(f(s_i)), where Φ(s_i) is the fitness score of individual s_i, G denotes the fitness score function, and f is the real fitness or optimization function. The fitness score function adopted in this paper is the inverse of the fitness function to be optimized. In the case of minimization problems, the individual is considered the most fitted if it has the smallest value of the optimization function. In the case of maximization problems, the fitness score is given by the fitness function.
3. Variance assignment:
   3.1. Apply blend crossover, with probability P_c:
        s_i = s_{i+m}
   3.2. Apply the mutation operator, with probability P_m:
        s_{i+m,j} = s_{i,j} + N(0, β_j Φ(s_i) + z_j),  j = 1, ..., k
        where s_{i,j} is element j of individual i; N(μ, σ^2) is the Gaussian random variable with mean μ and variance σ^2; β_j is a constant of proportionality to scale Φ(s_i); and z_j is the offset that guarantees the minimum amount of variance.
4. Determine the fitness score of each variance:
   Each variance s_{i+m} is assigned a fitness score Φ(s_{i+m}) = G(f(s_{i+m})).
5. Rank the solutions in descending order of Φ(s_i).
6. Repeat: go to step 3 until an acceptable solution has been found or the available execution time is exhausted.
The equation from item 3.1 above (s_i = s_{i+m}) shall be read as follows: after crossover, a new individual s_{i+m} is formed, which is added at the end of the current population (whose dimension is m). If a randomly generated number is higher than the probability of crossover (P_c) of the ith individual in the current population, then the newly formed individual s_{i+m} replaces the ith individual in the next generation. The same applies to item 3.2, in which the mutation operator is applied with a probability of mutation P_m to each gene j of each individual i.
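The Gaussian mutation of item 3.2 can be sketched as below; the symbols follow the step-by-step description (Φ the fitness score, β_j a scale constant, z_j a minimum-variance offset), but the list-based representation and argument names are illustrative assumptions:

```python
import random

def mutate(s_i, phi, beta, z, rng=random.gauss):
    """Gaussian mutation of item 3.2: each gene j of individual s_i
    receives noise drawn from N(0, beta[j]*phi + z[j]).  The offset
    z[j] keeps the variance positive even when the score phi is 0."""
    return [s_ij + rng(0.0, (beta[j] * phi + z[j]) ** 0.5)
            for j, s_ij in enumerate(s_i)]
```

Since `random.gauss` takes a standard deviation, the variance β_j Φ(s_i) + z_j from the text is passed through a square root before sampling.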
C. GAAPI Algorithm
To eliminate the shortcomings and the insufficient robustness of the global search ability of the API algorithm, a GAAPI algorithm that incorporates some favorable features of the GA and API algorithms is proposed in this paper. The idea in GAAPI is to keep the algorithm focused on continuously improving the solution, while avoiding trapping in local optima. Therefore, the API algorithm was intended to be the core of GAAPI, keeping the search tracked toward improvement in the solution, while the RCGA was intended to provide the escape mechanism from local optima when API is trapped. Thus, when API is at the search level of sites (the lowest search level) and continuously improves the solution, the RCGA is in a passive mode. In this passive mode, the population of the RCGA is formed by all the best solutions generated by API at the ant level only (there are no sites to be forgotten). When API is slow in improving the solution (there are sites to be forgotten due to failure in improving the solution), the RCGA switches to an active role. This time, its population uses the information of forgotten sites as well (the population is more heterogeneous than in the former case), and thus the solution generated by the RCGA has more chances to be far from the local optimum in which API was trapped.
The key modications in API to form the new GAAPI
algorithm are summarized below.
1) Generation of New Nest: After initialization, only the
best solution found since the last nest move has the
opportunity to be selected as a new nest to start the next
iteration. The hill climb property is not very strong in
this case, so the trapping in local minima is avoided.
2) Exploitation with API: Initially, each ant checks its memory. If the number of hunting sites in its memory is less than a predefined number, it will generate a new site in the small neighborhood of the center of its current hunting site, save it to its memory, and use it as the next hunting site. Otherwise, one of the sites in its memory is selected as the hunting site. The ant then performs a local search around the neighborhood of this hunting site. If this local exploitation is successful, the ant will repeat its exploration around the site until an unsuccessful search occurs; otherwise (if the previous exploration was unsuccessful), the ant will select an alternative site among its memorized sites. This process is repeated until a termination criterion is reached. The termination criterion used in this phase is that the procedure stops automatically once the number of successive unsuccessful explorations reaches a predefined value, or there is no improvement after a number of iterations. A schematic representation of the search mechanism of API is given in Fig. 2.
Fig. 2. API search mechanism as used in the GAAPI algorithm. ns represents the counter for the number of sites memorized by each ant; e(ns) is the counter for consecutive failures in site search; Ns is the total number of sites one ant can memorize; popRCGA is the counter for the number of individuals added into the population of the RCGA algorithm; P is the predefined number of allowed consecutive search failures at one site before it is deleted from the memory of the ant.
2.1. Information sharing with RCGA: To keep diversity
in the solution space, information sharing is per-
formed using a simple RCGA method. A random
site is chosen in the memory of a randomly chosen
ant, and it is replaced by the new RCGA solution.
This can be seen as a form of communication. The
RCGA procedure involves a population formed by
the currently best hunting sites in the memory of all
ants as well as the forgotten (erased) sites. The best
solution obtained after one set of GA operations (se-
lection, crossover, mutation) replaces the chosen site
in the memory of the selected ants. This technique
is applied before moving the nest to the best position
so far. The RCGA contains the forgotten sites to keep
diversity in the population.
The RCGA operators are set as follows: 1) blend crossover operator (BLX-α) [7] with a probability of 0.3 and a value of α set to 0.366 [30]; 2) uniform mutation with a mutation probability set to 0.35; 3) elitism: the two best individuals are retained with no modifications in the population of the next generation, such that the strongest genes up to this point are retained. Fig. 3 shows the GAAPI algorithm in the form of a flowchart, demonstrating the key steps of the process.
owchart, demonstrating the key steps of the process.
For the algorithm proposed here, H_0 and c defined in (2) have the following interpretation: H_0 is the solution space of all solutions better than the current nest position (nest) found by the ant colony during the current search, and c = f(nest) is the evaluation of the objective function at the current position of the nest.
Fig. 3. GAAPI flowchart.
Under the assumptions made in the second section of the paper and according to the lemmas defined in [32], the probability of ending up in an ε-optimal solution tends to 1 as the number of iterations (nest movements) tends to infinity.
GAAPI has a well-established balance between exploration (with API and RCGA) and exploitation (API). API keeps the algorithm focused toward the global optimum, moving the nest position (the point where exploitation starts) only to the best solution found so far, while the RCGA helps the ants to use useful information from less explored regions (forgotten sites). The strong influence of API, with its downhill (gradient-descending) behavior, may increase the speed of convergence toward the global optimum when compared to other powerful global search techniques such as PSO, EAs, or GAs, where the exploration behavior may play a stronger role.
IV. PARAMETER SETTINGS AND TEST FUNCTIONS
In this section of the paper, the performance of the proposed algorithm is investigated, considering a set of 20 benchmark test functions. These test functions are widely used in the scientific literature to test optimization algorithms. Note that most of the test functions have many local minima, so that they are challenging enough for performance evaluation.
A. Test Functions
Twenty widely used functions have been chosen from [20],
[29], [32] as test functions, and the proposed algorithm in this
paper was tested for all of them. These functions are shown in
TABLE I
CHARACTERISTICS OF SIX BENCHMARK FUNCTIONS
TABLE II
CHARACTERISTICS OF THE TEST FUNCTIONS
the Appendix of the paper. A few descriptive characteristics of a
class of six very popular test functions (out of the 20 functions)
are provided in Table I in the Appendix. The basic parameters
of all 20 test functions are listed in Table II, including search
space limits, their dimension, and their global minimum.
TABLE III
PERFORMANCE OF GAAPI OVER THE 20 TEST FUNCTIONS
For all 20 test functions, the results obtained by GAAPI are compared to those of other well-known evolutionary-based optimization methods. The results are presented in Tables III-VIII. These tables are based on the comparison tables given in [32], which the authors found to be very well documented.
B. Parameter Values for GAAPI
The values of the parameters of GAAPI that have been used
for the global optimization of the 20 test functions are given
below.
TABLE IV
COMPARISON TO OTHER HEURISTIC METHODS
WITH RESPECT TO CPU TIME
The population size of the RCGA is variable and depends on the current iteration and the number of unsuccessful sites memorized until the recruitment process. In the initial iteration, the population has five individuals: the first and second best up to the first call of the RCGA, and three other individuals chosen randomly from all the sites of all ants. In subsequent iterations, the population composition is like that of the first iteration only if no forgotten sites have appeared up to that point.
TABLE V
COMPARISON TO OTHER HEURISTIC METHODS FOR F1 TO F5
Blend crossover operator (BLX-α) with a probability P_c = 0.3; the value of α was set to 0.366.
Uniform mutation with a mutation probability P_m = 0.35.
The number of ants in the API colony was set to 100.
The number of sites each ant can search and memorize was
set to 3.
The maximum number of explorations of the same site was
set to 30. For a number of functions with many local minima
very near to each other (F5, F7, F16, F17, and F20), the max-
imum number of explorations was set to 500. The number of
consecutive unsuccessful visits at one site before being deleted
from the memory of the ant was set to 5 (or 40 for the functions
cited above).
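For convenience, the settings above can be gathered in one place. The following is an illustrative sketch only; the key names are hypothetical and not taken from the authors' implementation:

```python
# Illustrative collection of the GAAPI settings reported above.
# Key names are hypothetical, not from the authors' code.
gaapi_params = {
    "rcga_initial_population": 5,        # 2 best + 3 randomly chosen individuals
    "crossover": "BLX-alpha",
    "crossover_probability": 0.3,        # P_c
    "blx_alpha": 0.366,
    "mutation": "uniform",
    "mutation_probability": 0.35,        # P_m
    "num_ants": 100,
    "sites_per_ant": 3,
    "max_site_explorations": 30,         # 500 for F5, F7, F16, F17, F20
    "unsuccessful_visits_before_forget": 5,  # 40 for the same functions
}

print(gaapi_params["num_ants"])
```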
V. EMPIRICAL PROOF OF CONVERGENCE:
RESULTS AND ANALYSIS
The proposed algorithm was executed in 50 independent
runs for each test function, to keep the same base of comparison.

TABLE VI
COMPARISON TO OTHER HEURISTIC METHODS FOR F6 TO F10

The algorithm was implemented in MATLAB 7.a on a Pentium IV personal computer with a 3.6 GHz processor. The
following data were recorded: the minimum function value de-
noted by MIN, the maximum function value denoted by MAX,
the average function value denoted by MEAN, the standard
deviation denoted by STD, the average CPU time of 50 inde-
pendent runs denoted by CPU, and the mean number of func-
tion evaluations denoted by M-num-fun. The last two analysis
components give a fair indication about the effectiveness of the
algorithm in real problems.

TABLE VII
COMPARISON TO OTHER HEURISTIC METHODS FOR F11 TO F15

TABLE VIII
COMPARISON TO OTHER HEURISTIC METHODS FOR F16 TO F20

The aforementioned parameters are
generally accepted indicators of performance when referring to
heuristic global optimization algorithms. Note that CPU time,
together with the PC platform on which the algorithm was
executed, is only provided for comparison reasons to other
works which used this indicator. However, this parameter is
subject to hardware platform where the algorithm is run and
may not be the best choice of comparison of computational per-
formance. The use of M-num-fun is emphasized in this paper
instead.
Table III gives the quantitative performance results for the
20 test functions. In most of the benchmark functions, GAAPI
proved its consistency, having the lowest standard deviation
among the other methods and the lowest mean number of
function evaluations and CPU time. The mean number of
function evaluations (M-num-fun) is the average of the total
number of function evaluations during a predefined number
of independent runs of the algorithm. In other words, if we
denote with nF_i the number of function evaluations in the ith independent run of the proposed algorithm and we have a total of M runs which we take into account in our evaluation process, then the mean number of function evaluations is

\[ \text{M-num-fun} = \frac{1}{M}\sum_{i=1}^{M} nF_i. \tag{7} \]
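As a quick illustration, (7) is simply the arithmetic mean of the per-run evaluation counts; a minimal sketch (the counts shown are made-up examples):

```python
def mean_num_fun(nF):
    """Mean number of function evaluations, as in (7):
    the average of nF_i over the M independent runs."""
    return sum(nF) / len(nF)

# e.g., evaluation counts from M = 3 hypothetical runs
print(mean_num_fun([1200, 900, 1500]))  # 1200.0
```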
In most of the cases, the number of function evaluations to
reach a solution very near to the global solution is 10 to 50 times
less than the other methods used for comparison. For seven of
the most popular and difficult functions, GAAPI obtained the best global solution fast and accurately (F1, F5, F8–F12).
GAAPI responds very well, particularly for complex functions with higher dimensionality (N = 100 or 30, such as in F1–F7, F9–F15, and F18). However, the algorithm did not perform satisfactorily for test functions F16 (Fig. 4), F17 (Fig. 5), F19, and F20, which have flat minima (many local minima at the same level). This may be due to the termination criterion to stop when no improvement occurs after a number of consecutive nest movements. For the analysis of this paper, the GAAPI algorithm was executed for all 20 functions using the same termination criterion: the algorithm stops if no improvement occurs after 20 consecutive nest movements.
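The termination rule just described amounts to a simple patience counter over the best-so-far objective value. The sketch below is an illustrative reconstruction, not the authors' code; the improvement tolerance `tol` is an assumption:

```python
def should_stop(best_history, patience=20, tol=1e-12):
    """Stop when the best objective value has not improved over the
    last `patience` consecutive nest movements (minimization)."""
    if len(best_history) <= patience:
        return False
    best_before = min(best_history[:-patience])
    recent_best = min(best_history[-patience:])
    # No improvement: the recent window never beat the earlier best
    return recent_best >= best_before - tol

print(should_stop([5.0, 4.0, 3.0] + [3.0] * 21))  # True: 20+ stagnant moves
```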
Table IV provides a comparison of the computational time
required for GAAPI and other heuristic methods for determin-
ing the global optimal solution. Results on other methods are
obtained from [32]. It is shown that GAAPI is faster compared
to the other methods; in some cases, it is considerably faster.
As the computational effort is very important, particularly for actual problems that need to be solved in real time, GAAPI may be considered a useful optimization tool based on the computational time required to determine the global optimum.
The initials of the algorithms referenced in this paper are
presented in Table IX of the Appendix. A brief description of
some of these algorithms is presented in [32].
Tables V–VIII show a comparison between the performance
of GAAPI and the performance of other heuristic algorithms,
for all 20 test functions used for the present analysis. Each table
compares the performance of each algorithm (if the results are
available, and only for the functions tested by the authors) and
provides the mean number of function evaluations (M-num-
fun), the best value determined by each algorithm (M-best),
and the standard deviation for 50 independent runs of each
algorithm. Further, the optimal value of each test function is
provided. It should be noted that in the literature selected for
comparison for the purposes of this work, the same number
of function evaluations for each algorithm was not available.
Thus, the comparison below gives this measure only to sustain
a quasi-comparison on the speed of convergence of different
heuristic algorithms toward a near global solution denoted by the authors as the best-mean solution over a number of independent runs.

TABLE IX
NOTATIONS OF THE ALGORITHMS USED FOR COMPARISON
For the first five functions, GAAPI found the near global solution in much less computational time and/or mean number of function evaluations (up to 20 times less) for all of the functions under analysis in this table. Also, GAAPI found the best solution among all algorithms for two of the functions (F1 and F5), while for the other three functions, GAAPI practically found the global optimum (the error was less than \(10^{-4}\)). It should be noted that the values for the API and ACAGA algorithms given in Tables V and VII were obtained from [29] and [28], respectively.
For test functions F6 to F10, in three of the cases, GAAPI had
the best performance in terms of the global optimum solution,
standard deviation and computational effort (F8, F9, and F10);
for F6, the best solution found by GAAPI is the second best
among all algorithms and very near to the global optimum
solution. However, for F7, GAAPI did not succeed in finding the global optimum solution.
For the next group of functions (F11 to F15), for the first two functions, GAAPI obtained the best solution reported so far; for the next three functions (F13 to F15), GAAPI obtained a near global optimum solution. For F13, GAAPI obtained better solutions than the FEP, CEP, and M-L algorithms, and better CPU time/mean number of function evaluations than all the other algorithms. However, OGA/G, HTGA, ACAGA, and LEA had a better minimum solution. For F14, GAAPI outperformed ALEP, FEP, CEP, and M-L, but OGA/G, HTGA, CPSO-H6, ACAGA, and LEA performed better than GAAPI. For F15, OGA/Q, HTGA, and LEA outperformed GAAPI in terms of the best solution found so far; however, the GAAPI solution was near the global optimum in faster computational time.
For the last group of five test functions, GAAPI obtained a
good solution for F18. However, its performance for the other
four test functions was not satisfactory.
VI. CONCLUSION
In this paper, a new algorithm, called GAAPI, was pro-
posed to solve global unconstrained continuous optimization
problems. This algorithm is appropriate for optimization prob-
lems whose decision variables take values from the real-
number domain. The hybrid meta-heuristic algorithm proposed in this paper was created by combining some unique
characteristics of two other robust meta-heuristic algorithms:
RCGA and API.
It was proven that in most of the cases presented in this paper
(15 out of 20 benchmark functions), GAAPI provided satisfac-
tory or optimum solutions, with very little computational effort.
The algorithm is recommended for large, complex problems
with a dimensionality greater than 30. For seven benchmark
functions, GAAPI gave the best solution reported so far in the literature, with fewer function evaluations (10 to 50 times fewer than other powerful methods). The best solution was found for complex functions with high dimensionality (n = 30 or n = 100) (seven test functions). For eight other test functions with high dimensionality (n = 30), GAAPI gave near global solutions with much less computational effort. However, for a small class of functions (five benchmark functions), having mainly small dimensionality (n = 2, n = 4, or n = 6), GAAPI failed to find the global optimum solution. The main reason for this failure is the flatness of the objective function around the global minimum.
There are at least two main reasons why GAAPI performs
better than other powerful heuristic techniques. First, the bal-
ance in exploration and exploitation given by the two chosen
algorithms API and GA is one of the reasons. API has a strong
influence targeting the search toward a continuously improved
solution (the nest is moved only in the best solution found
at each iteration by its ants), while GA has an active role
in the solution search, only when API reduces its speed of
convergence (the solution does not improve much from one
iteration to another, or there are many failures in exploiting
different sites). This balance in exploration and exploitation
increases the chances of a faster convergence toward the global
optimum, while other methods such as PSO, EAs, or GAs
have a strong exploration component. The second reason is the
choice of crossover and mutation functions in RCGA. These influence the activeness or passiveness of GA in the GAAPI search. A different crossover (for example, an arithmetic real-coded crossover) would maintain GA active at each nest movement, which may lead to solution divergence. The same may happen if the mutation probability is higher than the crossover probability.
Other hybridization techniques of API with variances of EAs
may further improve the quality of the solution in difficult global optimization problems, but a difficulty in implementation could appear due to the complicated forms of the operators
to be used. There may be value in comparing analytically
the search behavior of GAAPI and other search models for
ACO-GA hybridization techniques or in the association of API
with other EAs used for some applications in continuous global
optimization. There may also be value in concentrating on
comparisons of GAAPI to other hybridization schemes which
relate to GA and local search mechanisms. This study focused
mainly on continuous domain optimization problems, so further
work can be addressed to see the applicability of the pro-
posed algorithm to discrete as well as constrained optimization
problems.
APPENDIX
The functions used for testing the proposed algorithm are
provided below. These are taken from [28], [29], and [32].
\[ F_1 = -\sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right) \]

\[ F_2 = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right] \]

\[ F_3 = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + \exp(1) \]

\[ F_4 = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 \]

\[ F_5 = \frac{\pi}{n}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i-1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n-1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4) \]

where \( y_i = 1 + \frac{1}{4}(x_i + 1) \) and

\[ u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases} \]

\[ F_6 = \frac{1}{10}\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i-1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n-1)^2\left[1 + \sin^2(2\pi x_n)\right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) \]

\[ F_7 = -\sum_{i=1}^{n} \sin(x_i)\sin^{20}\left(\frac{i x_i^2}{\pi}\right) \]

\[ F_8 = \sum_{i=1}^{n}\left[ \sum_{j=1}^{n}\left(\alpha_{ij}\sin\alpha_j + \beta_{ij}\cos\alpha_j\right) - \sum_{j=1}^{n}\left(\alpha_{ij}\sin x_j + \beta_{ij}\cos x_j\right) \right]^2 \]

where \( \alpha_{ij} \) and \( \beta_{ij} \) are random numbers in \([-100, 100]\), and \( \alpha_j \) is a random number in \([-\pi, \pi]\).

\[ F_9 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right) \]

\[ F_{10} = \sum_{i=1}^{n-1}\left[100\left(x_i^2 - x_{i+1}\right)^2 + (x_i - 1)^2\right] \]

\[ F_{11} = \sum_{i=1}^{n} x_i^2 \]

\[ F_{12} = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}[0, 1) \]

\[ F_{13} = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| \]

\[ F_{14} = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2 \]

\[ F_{15} = \max\left\{|x_i|,\ i = 1, 2, \ldots, n\right\} \]

\[ F_{16} = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4 \]

\[ F_{17} = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10 \]

\[ F_{18} = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right] \]

\[ F_{19} = \sum_{i=1}^{11}\left[a_i - \frac{x_1\left(b_i^2 + b_i x_2\right)}{b_i^2 + b_i x_3 + x_4}\right]^2 \]

where
\([a_1, \ldots, a_{11}] = [0.1957\ 0.1947\ 0.1735\ 0.16\ 0.0844\ 0.0627\ 0.0456\ 0.0342\ 0.0323\ 0.0235\ 0.0246]\);
\([b_1, \ldots, b_{11}] = [4\ 2\ 1\ 0.5\ 1/4\ 1/6\ 1/8\ 1/10\ 1/12\ 1/14\ 1/16]\)

\[ F_{20} = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right) \]

where \([c_1, \ldots, c_4] = [1\ 1.2\ 3\ 3.2]\);

\[ [a_{ij}]_{4\times 6} = \begin{bmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{bmatrix} \]

\[ [p_{ij}]_{4\times 6} = \begin{bmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{bmatrix} \]
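For reference, three of the simpler benchmarks above (F2, F3, and F4) translate directly into code. The following is an independent sketch of the standard Rastrigin, Ackley, and Griewank forms, not the authors' implementation:

```python
import math

def rastrigin(x):
    """F2: global minimum 0 at x = (0, ..., 0)."""
    return sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def ackley(x):
    """F3: global minimum 0 at x = (0, ..., 0)."""
    n = len(x)
    sq = sum(xi**2 for xi in x) / n
    co = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(sq)) - math.exp(co) + 20 + math.e

def griewank(x):
    """F4: global minimum 0 at x = (0, ..., 0)."""
    s = sum(xi**2 for xi in x) / 4000
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, 1))
    return s - p + 1

print(rastrigin([0.0] * 30), griewank([0.0] * 30))  # 0.0 0.0
```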
REFERENCES
[1] Y.-W. Leung and Y. Wang, "An orthogonal genetic algorithm with quantization for global numerical optimization," IEEE Trans. Evol. Comput., vol. 5, no. 1, pp. 41–53, Feb. 2001.
[2] T. Weise, Global Optimization Algorithms: Theory and Applications, 2009.
[3] Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics. New York: Springer-Verlag, 2004.
[4] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies: A comprehensive introduction," Natural Comput.: Int. J., vol. 1, no. 1, pp. 3–52, May 2002.
[5] P. Guturu and R. Dantu, "An impatient evolutionary algorithm with probabilistic Tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 3, pp. 645–666, Jun. 2008.
[6] G. Y. Yang, Z. Y. Dong, and K. P. Wong, "A modified differential evolution algorithm with fitness sharing for power system planning," IEEE Trans. Power Syst., vol. 23, no. 2, pp. 514–522, May 2008.
[7] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed. New York: Springer-Verlag, 1994.
[8] Z. Tu and Y. Lu, "A robust stochastic genetic algorithm (StGA) for global numerical optimization," IEEE Trans. Evol. Comput., vol. 8, no. 5, pp. 456–470, Oct. 2004.
[9] C. Perales-Gravan and R. Lahoz-Beltra, "An AM radio receiver designed with a genetic algorithm based on a bacterial conjugation genetic operator," IEEE Trans. Evol. Comput., vol. 12, no. 2, pp. 129–142, Apr. 2008.
[10] C. C. A. Rajan and M. R. Mohan, "An evolutionary programming-based tabu search method for solving the unit commitment problem," IEEE Trans. Power Syst., vol. 19, no. 1, pp. 577–585, Feb. 2004.
[11] P. C. H. Ma, K. C. C. Chan, Y. Xin, and D. K. Y. Chiu, "An evolutionary clustering algorithm for gene expression microarray data analysis," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 296–314, Jun. 2006.
[12] M. Dorigo and T. Stützle, Ant Colony Optimization. Scituate, MA: Bradford Company, 2004.
[13] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 26, no. 1, pp. 29–41, Feb. 1996.
[14] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Gener. Comput. Syst., vol. 16, no. 8, pp. 937–946, Jun. 2000.
[15] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 211–224, Jun. 2004.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281–295, Jun. 2006.
[17] I. Paenke, J. Branke, and Y. Jin, "On the influence of phenotype plasticity on genotype diversity," in Proc. IEEE Symp. FOCI, Honolulu, HI, Apr. 2007, pp. 33–40.
[18] Y. Jin, "Guest editorial: Special issue on knowledge extraction and incorporation in evolutionary computation," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 35, no. 2, pp. 129–130, May 2005.
[19] M. Lozano, F. Herrera, N. Krasnogor, and D. Molina, "Real-coded memetic algorithms with crossover hill-climbing," Evol. Comput., vol. 12, no. 3, pp. 273–302, Sep. 2004.
[20] N. Noman and H. Iba, "Differential evolution for economic load dispatch problems," Elect. Power Syst. Res., vol. 78, no. 8, pp. 1322–1331, Aug. 2008.
[21] F. Olivetti de Franca, G. P. Coelho, F. J. Von Zuben, and R. R. de Faissol Attux, "Multivariate ant colony optimization in continuous search spaces," in Proc. 10th Annu. Conf. Genetic Evol. Comput., Atlanta, GA, 2008, pp. 9–16.
[22] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," Eur. J. Oper. Res., vol. 185, no. 3, pp. 1155–1173, Mar. 2008.
[23] S. H. Pourtakdoust and H. Nobahari, "An extension of ant colony system to continuous optimization problems," in Proc. ANTS Workshop, Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Brussels, Belgium, Sep. 2004, pp. 294–301.
[24] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," in Proc. Sel. Papers From AISB Workshop Evol. Comput., 1995, pp. 25–39.
[25] A. Acan, "GAACO: A GA + ACO hybrid for faster and better search capability," in Proc. 3rd Int. Workshop Ant Algorithms, Brussels, Belgium, Sep. 2002, pp. 15–26.
[26] G. Li, Z. Lv, and H. Sun, "Study of available transfer capability based on hybrid continuous ant colony optimization," in Proc. 3rd Int. Conf. Elect. Utility Deregulation Restruct. Power Technol. (DRPT), Nanjing, China, Apr. 2008, pp. 984–989.
[27] S. Nemati, M. E. Basiri, N. Ghasem-Aghaee, and M. H. Aghdam, "A novel ACO-GA hybrid algorithm for feature selection in protein function prediction," Expert Syst. Appl., vol. 36, no. 10, pp. 12 086–12 094, Dec. 2009.
[28] B. Liu and P. Meng, "Hybrid algorithm combining ant colony algorithm with genetic algorithm for continuous domain," in Proc. 9th ICYCS, Hunan, China, Nov. 2008, pp. 1819–1824.
[29] N. Monmarché, "Algorithmes de fourmis artificielles: Applications à la classification et à l'optimisation" (Artificial ant algorithms: Applications to classification and optimization), Ph.D. dissertation, Laboratoire d'Informatique, Université de Tours, Tours, France, 2000.
[30] D. Fresneau, "Biologie et comportement social d'une fourmi ponérine néotropicale" (Biology and social behavior of a neotropical ponerine ant), Ph.D. dissertation, Laboratoire d'Ethologie Expérimentale et Comparée, Université de Paris XIII, Paris, France, 1994.
[31] Q. Lv and X. Xia, "Towards termination criteria of ant colony optimization," in Proc. 3rd ICNC, Haikou, China, Aug. 2007, pp. 276–282.
[32] Y. Wang and C. Dang, "An evolutionary algorithm for global optimization based on level-set evolution and Latin squares," IEEE Trans. Evol. Comput., vol. 11, no. 5, pp. 579–595, Oct. 2007.
Irina Ciornei (S'07) received the Bachelor's degree
in power and electrical engineering from the Techni-
cal University of Iasi, Iasi, Romania, in 2002 and the
M.Sc. degree in energy and environment engineering
from the same university and from the Ecole Su-
perieur Politechnique, Poitiers, France, in 2003. She
is currently working toward the Ph.D. degree in the
Department of Electrical and Computer Engineering,
University of Cyprus, Nicosia, Cyprus.
She is a Researcher with the KIOS Research Cen-
ter for Intelligent Systems and Networks, Depart-
ment of Electrical and Computer Engineering, University of Cyprus. Between
2003 and 2006, she worked as a Research Assistant at the Technical University of
Iasi, where she worked on a number of feasibility studies for energy producers.
Her research interests include heuristic methods for optimization (GA, API,
SA, and PSO), economic dispatch in electrical power systems, integration of
wind energy into the main power grid, and power system analysis.
Elias Kyriakides (S'00–M'04–SM'09) received the
B.Sc. degree from the Illinois Institute of Technol-
ogy, Chicago, in 2000, and the M.Sc. and Ph.D. de-
grees from Arizona State University, Tempe, in 2001
and 2003, respectively, all in electrical engineering.
He is currently an Assistant Professor with the
Department of Electrical and Computer Engineer-
ing, University of Cyprus, Nicosia, Cyprus, and a
founding member of the KIOS Research Center for
Intelligent Systems and Networks. He is the Action
Chair of the ESF-COST Action IC0806 "Intelligent Monitoring, Control, and Security of Critical Infrastructure Systems" (IntelliCIS) (2009–2013). His research interests include synchronized measurements
in power systems, security and reliability of the power system network, opti-
mization of power system operation techniques, modeling of electric machines,
and renewable energy sources.