Self-adaptive Differential Evolution Algorithm for Numerical Optimization

A. K. Qin and P. N. Suganthan
School of Electrical and Electronic Engineering, Nanyang Technological University
50 Nanyang Ave., Singapore 639798
qinkai@pmail.ntu.edu.sg, epnsugan@ntu.edu.sg

Abstract- In this paper, we propose a novel Self-adaptive Differential Evolution algorithm (SaDE), where the choice of learning strategy and the two control parameters F and CR are not required to be pre-specified. During evolution, the suitable learning strategy and parameter settings are gradually self-adapted according to the learning experience. The performance of the SaDE is reported on the set of 25 benchmark functions provided by the CEC2005 special session on real-parameter optimization.

1 Introduction

The differential evolution (DE) algorithm, proposed by Storn and Price [1], is a simple but powerful population-based stochastic search technique for solving global optimization problems. Its effectiveness and efficiency have been successfully demonstrated in many application fields such as pattern recognition [1], communication [2] and mechanical engineering [3]. However, the control parameters and learning strategies involved in DE are highly dependent on the problem under consideration. For a specific task, we may have to spend a huge amount of time trying various strategies and fine-tuning the corresponding parameters. This dilemma motivates us to develop a Self-adaptive DE algorithm (SaDE) to solve general problems more efficiently.

In the proposed SaDE algorithm, two of DE's learning strategies are selected as candidates due to their good performance on problems with different characteristics. These two learning strategies are applied to individuals in the current population with probability proportional to their previous success rates, to generate potentially good new solutions. Two of the three critical parameters associated with the original DE algorithm, namely CR and F, are adaptively changed instead of taking fixed values, to deal with different classes of problems. The remaining critical parameter of DE, the population size NP, stays a user-specified variable so as to tackle problems with different complexity.

2 Differential Evolution Algorithm

The original DE algorithm is described in detail as follows. Let S ⊂ R^n be the n-dimensional search space of the problem under consideration. DE evolves a population of NP n-dimensional individual vectors, i.e. solution candidates, X_i = (x_{i,1}, ..., x_{i,n}) ∈ S, i = 1, ..., NP, from one generation to the next. The initial population should ideally cover the entire parameter space, which is achieved by randomly distributing each parameter of each individual vector with uniform distribution between the prescribed lower and upper parameter bounds.

At each generation G, DE employs the mutation and crossover operations to produce a trial vector U_{i,G} for each individual vector X_{i,G}, also called the target vector, in the current population.

a) Mutation operation
For each target vector X_{i,G} at generation G, an associated mutant vector V_{i,G} = (v_{1i,G}, v_{2i,G}, ..., v_{ni,G}) can usually be generated by using one of the following 5 strategies, as implemented in the online available codes [ ]:

"DE/rand/1":            V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G})
"DE/best/1":            V_{i,G} = X_{best,G} + F * (X_{r1,G} - X_{r2,G})
"DE/current to best/1": V_{i,G} = X_{i,G} + F * (X_{best,G} - X_{i,G}) + F * (X_{r1,G} - X_{r2,G})
"DE/best/2":            V_{i,G} = X_{best,G} + F * (X_{r1,G} - X_{r2,G}) + F * (X_{r3,G} - X_{r4,G})
"DE/rand/2":            V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G}) + F * (X_{r4,G} - X_{r5,G})

where the indices r1, r2, r3, r4 and r5 are random, mutually different integers generated in the range [1, NP], which should also be different from the current target vector's index i. F is a factor in [0, 2] for scaling differential vectors, and X_{best,G} is the individual vector with the best fitness value in the population at generation G.
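To make the five strategies concrete, the sketch below expresses them in Python with NumPy. The paper itself gives no code (its experiments were run in Matlab), so this is an illustration only: `pop` is an NP-by-n array holding the current population, `best` is the index of the fittest individual, indices are 0-based rather than the 1-based notation above, and the helper name `distinct_indices` is a hypothetical choice.

```python
import numpy as np

def distinct_indices(count, i, NP, rng):
    """Draw `count` mutually different indices from [0, NP), all different from i."""
    candidates = [r for r in range(NP) if r != i]
    return rng.choice(candidates, size=count, replace=False)

def mutate(pop, i, best, F, strategy, rng):
    """Return the mutant vector V_i,G for target index i under one of the 5 strategies."""
    r1, r2, r3, r4, r5 = distinct_indices(5, i, len(pop), rng)
    if strategy == "DE/rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "DE/best/1":
        return pop[best] + F * (pop[r1] - pop[r2])
    if strategy == "DE/current to best/1":
        return pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
    if strategy == "DE/best/2":
        return pop[best] + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if strategy == "DE/rand/2":
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    raise ValueError("unknown strategy: " + strategy)
```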

b) Crossover operation
After the mutation phase, the "binomial" crossover operation is applied to each pair of the generated mutant vector V_{i,G} and its corresponding target vector X_{i,G} to generate a trial vector U_{i,G} = (u_{1i,G}, u_{2i,G}, ..., u_{ni,G}):

    u_{ji,G} = v_{ji,G}  if (rand_j[0, 1] < CR) or (j = j_rand)
    u_{ji,G} = x_{ji,G}  otherwise,    j = 1, 2, ..., n

where CR is a user-specified crossover constant in the range [0, 1) and j_rand is a randomly chosen index in the range [1, n], which ensures that the trial vector U_{i,G} differs from its corresponding target vector X_{i,G} by at least one parameter.

c) Selection operation
If the values of some parameters of a newly generated trial vector exceed the corresponding upper and lower bounds, we randomly and uniformly reinitialize them within the search range. Then the fitness values of all trial vectors are evaluated and a selection operation is performed: the fitness value of each trial vector, f(U_{i,G}), is compared to that of its corresponding target vector, f(X_{i,G}), in the current population. If the trial vector has a smaller or equal fitness value (for a minimization problem) than the corresponding target vector, the trial vector replaces the target vector and enters the population of the next generation; otherwise, the target vector remains in the population for the next generation. The operation is expressed as follows:

    X_{i,G+1} = U_{i,G}  if f(U_{i,G}) <= f(X_{i,G})
    X_{i,G+1} = X_{i,G}  otherwise

The above three steps are repeated generation after generation until some specific stopping criteria are satisfied.
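A minimal sketch of one DE generation follows, combining the mutation above with the binomial crossover and greedy selection just described. It assumes the hypothetical `mutate` helper from the previous sketch and a user-supplied objective function `f`; trial parameters that leave the bounds are reinitialized uniformly, as in step c).

```python
import numpy as np

def de_generation(pop, fit, f, F, CR, lower, upper, strategy, rng):
    """One generation of the original DE: mutation, binomial crossover, selection."""
    NP, n = pop.shape
    best = int(np.argmin(fit))
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(NP):
        v = mutate(pop, i, best, F, strategy, rng)   # mutant vector V_i,G
        j_rand = rng.integers(n)                     # guarantees at least one parameter from V_i,G
        mask = rng.random(n) < CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])                # trial vector U_i,G
        out = (u < lower) | (u > upper)              # reinitialize out-of-bound parameters
        u[out] = rng.uniform(lower[out], upper[out])
        fu = f(u)
        if fu <= fit[i]:                             # greedy selection (minimization)
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

In the SaDE scheme described next, the strategy label, F and CR would be supplied per individual rather than as single constants.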
3 SaDE: Strategy and Parameter Adaptation

To achieve good performance on a specific problem with the original DE algorithm, we need to try all available (usually 5) learning strategies in the mutation phase and fine-tune the corresponding critical control parameters CR, F and NP. Many works in the literature [4], [6] have pointed out that the performance of the original DE algorithm is highly dependent on the strategies and parameter settings. Although we may find the most suitable strategy and the corresponding control parameters for a specific problem, this may require a huge amount of computation time. Also, during different evolution stages, different strategies and corresponding parameter settings with different global and local search capabilities might be preferred. Therefore, we attempt to develop a new DE algorithm that can automatically adapt the learning strategies and the parameter settings during evolution. Some related work on parameter or strategy adaptation in evolutionary algorithms can be found in [7], [8].

The idea behind our proposed learning strategy adaptation is to probabilistically select one out of several available learning strategies and apply it to the current population. Hence, we should have several candidate learning strategies available to be chosen, and we also need a procedure to determine the probability of applying each learning strategy. In our current implementation, we select two learning strategies as candidates, "rand/1/bin" and "current to best/2/bin", which are respectively expressed as:

    V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G})
    V_{i,G} = X_{i,G} + F * (X_{best,G} - X_{i,G}) + F * (X_{r1,G} - X_{r2,G})

The reason for this choice is that these two strategies have been commonly used in the DE literature [ ] and reported to perform well on problems with distinct characteristics. Among them, the "rand/1/bin" strategy usually demonstrates good diversity while the "current to best/2/bin" strategy shows a good convergence property, which we also observe in our trial experiments.

Since we have two candidate strategies, assuming that the probability of applying the strategy "rand/1/bin" to each individual in the current population is p1, the probability of applying the other strategy is p2 = 1 - p1. The initial probabilities are set equal, i.e., p1 = p2 = 0.5, so both strategies have the same probability of being applied to each individual in the initial population. For a population of size NP, we randomly generate a vector of NP elements, each uniformly distributed in the range [0, 1]. If the j-th element of this vector is smaller than or equal to p1, the strategy "rand/1/bin" is applied to the j-th individual in the current population; otherwise the strategy "current to best/2/bin" is applied. After evaluation of all newly generated trial vectors, the numbers of trial vectors successfully entering the next generation that were generated by the strategy "rand/1/bin" and by the strategy "current to best/2/bin" are recorded as ns1 and ns2, respectively, and the numbers of discarded trial vectors generated by the strategy "rand/1/bin" and by the strategy "current to best/2/bin" are recorded as nf1 and nf2. These counters are accumulated over a specified number of generations (50 in our experiments), called the "learning period". Then, the probability p1 is updated as:

    p1 = ns1 * (ns2 + nf2) / (ns2 * (ns1 + nf1) + ns1 * (ns2 + nf2)),    p2 = 1 - p1

This expression represents the proportion of the success rate of trial vectors generated by the strategy "rand/1/bin" in the sum of that success rate and the success rate of trial vectors generated by the strategy "current to best/2/bin" during the learning period. Therefore, the probability of applying the two strategies is updated after each learning period. We also reset all the counters ns1, ns2, nf1 and nf2 after each update, to avoid side-effects accumulated in the previous learning stage. This adaptation procedure can gradually evolve the most suitable learning strategy at different learning stages for the problem under consideration.
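A sketch of this strategy adaptation is given below, using the counter names ns1, ns2, nf1 and nf2 from the text. The class wrapper and its method names are illustrative choices made for this sketch, not part of the paper.

```python
import numpy as np

class StrategyAdapter:
    """Adapts the probability p1 of applying "rand/1/bin" (vs. "current to best/2/bin")
    from the success/failure counts accumulated over a learning period."""
    def __init__(self, learning_period=50, rng=None):
        self.p1 = 0.5                              # initially p1 = p2 = 0.5
        self.learning_period = learning_period
        self.rng = rng or np.random.default_rng()
        self.ns1 = self.ns2 = self.nf1 = self.nf2 = 0

    def assign(self, NP):
        """Boolean label per individual: True -> "rand/1/bin", False -> "current to best/2/bin"."""
        return self.rng.random(NP) <= self.p1

    def record(self, used_rand1, entered_next_generation):
        """Update the counters for one trial vector."""
        if used_rand1:
            if entered_next_generation:
                self.ns1 += 1
            else:
                self.nf1 += 1
        else:
            if entered_next_generation:
                self.ns2 += 1
            else:
                self.nf2 += 1

    def end_of_generation(self, generation):
        """After each learning period, update p1 and reset the counters."""
        if generation % self.learning_period != 0:
            return
        denom = self.ns2 * (self.ns1 + self.nf1) + self.ns1 * (self.ns2 + self.nf2)
        if denom > 0:
            self.p1 = self.ns1 * (self.ns2 + self.nf2) / denom   # p2 = 1 - p1
        self.ns1 = self.ns2 = self.nf1 = self.nf2 = 0
```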

In the original DE, the three critical control parameters CR, F and NP are closely related to the problem under consideration. Here, we keep NP as a user-specified value as in the original DE, so as to deal with problems of different dimensionalities. Between the two parameters CR and F, CR is much more sensitive to the problem's properties and complexity, such as multi-modality, while F is more related to the convergence speed. According to our initial experiments, the choice of F has larger flexibility, although values in (0, 1] are preferred most of the time. Here, we allow F to take different random values in the range (0, 2] with a normal distribution of mean 0.5 and standard deviation 0.3 for different individuals in the current population. This scheme keeps both local (small F values) and global (large F values) search ability to generate potentially good mutant vectors throughout the evolution process.

The control parameter CR plays an essential role in the original DE algorithm. A proper choice of CR may lead to good performance under several learning strategies, while a wrong choice may result in performance deterioration under any learning strategy. Also, a good CR value usually falls within a small range, within which the algorithm can perform consistently well on a complex problem. Therefore, we consider accumulating the previous learning experience within a certain generation interval so as to dynamically adapt the value of CR to a suitable range. We assume CR to be normally distributed with mean CRm and standard deviation 0.1. Initially, CRm is set to 0.5 and different CR values conforming to this normal distribution are generated for each individual in the current population. These CR values for all individuals remain unchanged for several generations (5 in our experiments), after which a new set of CR values is generated under the same normal distribution. During every generation, the CR values associated with trial vectors that successfully enter the next generation are recorded. After a specified number of generations (25 in our experiments), during which CR has been changed several times (25/5 = 5 times in our experiments) under the same normal distribution with center CRm and standard deviation 0.1, we recalculate the mean of the normal distribution of CR according to all the recorded CR values corresponding to successful trial vectors during this period. With this new mean and the standard deviation 0.1, we repeat the above procedure. As a result, a proper CR value range can be learned to suit the particular problem. Note that we empty the record of successful CR values whenever we recalculate the normal distribution mean, to avoid possible inappropriate long-term accumulation effects.
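The F and CR schemes above can be sketched as follows. The class is again an illustrative wrapper written for this text, and the clipping of F into (0, 2] and of CR into [0, 1] is our assumption, since the paper does not state how out-of-range samples are handled.

```python
import numpy as np

class ParameterAdapter:
    """F is resampled every generation from N(0.5, 0.3); CR values are drawn from
    N(CRm, 0.1), held for 5 generations, and CRm is recomputed every 25 generations
    from the CR values of trial vectors that entered the next generation."""
    def __init__(self, NP, rng=None):
        self.NP = NP
        self.CRm = 0.5
        self.rng = rng or np.random.default_rng()
        self.cr = np.clip(self.rng.normal(self.CRm, 0.1, NP), 0.0, 1.0)
        self.successful_cr = []

    def sample_F(self):
        # small F favours local search, large F favours global search (clipping assumed)
        return np.clip(self.rng.normal(0.5, 0.3, self.NP), 1e-6, 2.0)

    def current_CR(self, generation):
        if generation % 5 == 0:                  # regenerate CR values every 5 generations
            self.cr = np.clip(self.rng.normal(self.CRm, 0.1, self.NP), 0.0, 1.0)
        return self.cr

    def record_success(self, cr_value):
        self.successful_cr.append(cr_value)      # CR of a trial vector that survived selection

    def end_of_generation(self, generation):
        if generation % 25 == 0 and self.successful_cr:   # recompute CRm every 25 generations
            self.CRm = float(np.mean(self.successful_cr))
            self.successful_cr = []              # empty the record to avoid long-term accumulation
```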
We introduce the above learning strategy and parameter adaptation schemes into the original DE algorithm and thereby develop a new Self-adaptive Differential Evolution algorithm (SaDE). The SaDE does not require the choice of a certain learning strategy or the setting of specific values for the critical control parameters CR and F. The learning strategy and the control parameter CR, which are highly dependent on the problem's characteristics and complexity, are self-adapted by using the previous learning experience. Therefore, the SaDE algorithm can demonstrate consistently good performance on problems with different properties, such as unimodal and multimodal problems. The influence of the number of generations over which previous learning information is collected on the performance of SaDE is not significant; we will investigate this further.

To speed up the convergence of the SaDE algorithm, we apply a local search procedure after a specified number of generations (200 generations in our experiments) to 5% of the individuals, including the best individual found so far and individuals randomly selected from the best 50% of the current population. Here, we employ the Quasi-Newton method as the local search method. A local search operator is required because the prespecified MAX_FES is too small to reach the required accuracy level.
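A sketch of this periodic local search is shown below. The paper uses a Quasi-Newton routine in Matlab, so the SciPy BFGS call here is a stand-in chosen for this illustration; the sketch also does not charge the extra function evaluations against MAX_FES or re-enforce the search bounds.

```python
import numpy as np
from scipy.optimize import minimize

def local_search(pop, fit, f, rng, fraction=0.05):
    """Refine 5% of the population with a quasi-Newton (BFGS) step: the best
    individual found so far plus random picks from the best 50% of the population."""
    NP = len(pop)
    k = max(1, int(round(fraction * NP)))
    order = np.argsort(fit)                 # ascending, i.e. best first (minimization)
    chosen = {int(order[0])}                # always include the best individual
    best_half = order[: max(2, NP // 2)]
    while len(chosen) < k:
        chosen.add(int(rng.choice(best_half)))
    for i in chosen:
        result = minimize(f, pop[i], method="BFGS")
        if result.fun <= fit[i]:            # keep the refined point only if it improves
            pop[i], fit[i] = result.x, result.fun
    return pop, fit
```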

4 Experimental Results

We evaluate the performance of the proposed SaDE algorithm on a new set of test problems that includes 25 functions of different complexity, 5 of which are unimodal and the other 20 multimodal. Experiments are conducted on all 25 10D functions and on the first 15 30D problems. We choose the population size to be 50 for the 10D problems and 100 for the 30D problems.

For each function, the SaDE is run 25 times. The best function error values achieved when FES = 1e+2, FES = 1e+3 and FES = 1e+4 for the 25 test functions are listed in Tables 1-5 for 10D and in Tables 6-8 for 30D, respectively. Successful FES and Success Performance are listed in Tables 9 and 10 for 10D and 30D, respectively.

Table 1. Error Values Achieved for Functions 1-5 (10D)
Table 2. Error Values Achieved for Functions 6-10 (10D)
Table 3. Error Values Achieved for Functions 11-15 (10D)
Table 4. Error Values Achieved for Functions 16-20 (10D)
Table 5. Error Values Achieved for Functions 21-25 (10D)
Table 6. Error Values Achieved for Functions 1-5 (30D)
Table 7. Error Values Achieved for Functions 6-10 (30D)
Table 8. Error Values Achieved for Functions 11-15 (30D)
Table 9. Best Function Error Values Achieved in the MAX_FES & Success Performance (10D)
Table 10. Best Function Error Values Achieved in the MAX_FES & Success Performance (30D)

Figure 1. Convergence Graph for Functions 1-5
Figure 2. Convergence Graph for Functions 6-10
Figure 3. Convergence Graph for Functions 11-15

The 10D convergence maps of the SaDE algorithm on functions 1-5, functions 6-10, functions 11-15, functions 16-20 and functions 21-25 are plotted in Figures 1-5, respectively. The 30D convergence maps of the SaDE algorithm on functions 1-5, functions 6-10 and functions 11-15 are illustrated in Figures 6-8, respectively.

Figure 4. Convergence Graph for Functions 16-20
Figure 5. Convergence Graph for Functions 21-25
Figure 6. Convergence Graph for Functions 1-5
Figure 7. Convergence Graph for Functions 6-10
Figure 8. Convergence Graph for Functions 11-15


From the results, we observe that, for the 10D problems, the SaDE algorithm can find the global optimal solution for functions 1, 2, 3, 4, 6, 7, 9, 12 and 15 with success rates 1, 1, 0.64, 0.96, 1, 0.24, 1, 1 and 0.92, respectively. For some functions, e.g. function 3, although the success rate is not 1, the final best solutions obtained are very close to the success level. For the 30D problems, the SaDE algorithm can find the global optimal solutions for functions 1, 2, 4, 7 and 9 with success rates 1, 0.96, 0.52, 0.8 and 1, respectively. However, for functions 16 through 25, the SaDE algorithm cannot find any global optimal solution for either 10D or 30D over the 25 runs, owing to the high multi-modality of those composite functions; in addition, the local search process associated with the SaDE makes the algorithm converge prematurely to a local optimal solution. Therefore, in this paper, we do not list the 30D results for functions 16-25.
The algorithm complexity, as defined on http://www.ntu.edu.sg/home/EPNSugan/, is calculated on function 3 for 10, 30 and 50 dimensions, to show how the algorithm complexity grows with increasing dimensionality, as reported in Table 11. We use Matlab 6.1 to implement the algorithm, and the system configuration is listed as follows:

System Configurations
Intel Pentium 4 CPU, 3.00 GHz
1 GB of memory
Windows XP Professional Version 2002
Language: Matlab
Table 11. Algorithm Complexity

        T0        T1        T2        (T2-T1)/T0
D=10    40.0710   31.6860   68.8004   0.8264
D=30    40.0710   38.9190   74.2050   0.8806
D=50    40.0710   47.1940   85.4300   0.9542

5 Conclusions

In this paper, we proposed a Self-adaptive Differential Evolution algorithm (SaDE), which can automatically adapt its learning strategies and the associated parameters during the evolution procedure. The performance of the proposed SaDE algorithm is evaluated on the newly proposed testbed for the CEC2005 special session on real-parameter optimization.

Bibliography
[1] R. Storn and K. V. Price, "Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, 11:341-359, 1997.
[2] J. Ilonen, J.-K. Kamarainen and J. Lampinen, "Differential Evolution Training Algorithm for Feed-Forward Neural Networks," Neural Processing Letters, Vol. 17, No. 1, pp. 93-105, 2003.
[3] R. Storn, "Differential Evolution Design of an IIR-Filter," Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC'96), IEEE Press, New York, pp. 268-273, 1996.
[4] T. Rogalsky, R. W. Derksen and S. Kocabiyik, "Differential Evolution in Aerodynamic Optimization," Proc. of the 46th Annual Conference of the Canadian Aeronautics and Space Institute, pp. 29-36, 1999.
[5] K. V. Price, "Differential Evolution vs. the Functions of the 2nd ICEO," Proc. of the 1997 IEEE International Conference on Evolutionary Computation (ICEC'97), pp. 153-157, Indianapolis, IN, USA, April 1997.
[6] R. Gaemperle, S. D. Mueller and P. Koumoutsakos, "A Parameter Study for Differential Evolution," in A. Grmela and N. E. Mastorakis, editors, Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, WSEAS Press, pp. 293-298, 2002.
[7] J. Gomez, D. Dasgupta and F. Gonzalez, "Using Adaptive Operators in Genetic Search," Proc. of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1580-1581, 2003.
[8] B. A. Julstrom, "What Have You Done for Me Lately? Adapting Operator Probabilities in a Steady-State Genetic Algorithm," Proc. of the 6th International Conference on Genetic Algorithms, pp. 81-87, 1995.
