Particle Swarm Optimiser With Neighbourhood Operator
P.N.Suganthan
Department of Computer Science and Electrical Engineering
University of Queensland, St Lucia QLD 4072, Australia
Email: sugan@csee.uq.edu.au
Abstract: In recent years, population based methods such as genetic algorithms, evolutionary programming, evolution strategies and genetic programming have been increasingly employed to solve a variety of optimisation problems. Recently, another novel population based optimisation algorithm, namely the particle swarm optimisation (PSO) algorithm, was introduced by Eberhart and Kennedy [3,7]. Although the PSO algorithm possesses some attractive properties, its solution quality has been somewhat inferior to that of other evolutionary optimisation algorithms [2]. In this paper, we propose a number of techniques to improve the standard PSO algorithm. Similar techniques have been employed in the context of self-organising maps and neural-gas networks [9,11].

1 Introduction

In recent years, population based methods such as genetic algorithms [5], evolutionary programming [4], evolution strategies [12] and genetic programming [10] have been increasingly employed to solve a variety of optimisation problems. These multiple-solution based stochastic algorithms are less likely to get trapped in local minima than single-solution based methods such as the steepest descent algorithm. Due to this advantage, numerous problems have been formulated so that they can be solved by evolutionary algorithms.

Recently, another novel population based optimisation algorithm, namely the particle swarm optimisation (PSO) algorithm, was introduced by Eberhart and Kennedy [3,7]. The underlying motivation for the development of the PSO algorithm was the social behaviour of animals such as bird flocking, fish schooling and animal herding, and swarming theory. The PSO algorithm possesses some attractive properties such as memory (i.e. every particle remembers its best solution), the fact that the initial population is maintained throughout (i.e. it is not necessary to apply operators such as recombination and selection to the population) and constructive cooperation between particles.

Recent studies by Angeline [2] showed that although PSO discovered reasonable quality solutions much faster than other evolutionary algorithms, it did not possess the ability to perform a fine grain search to improve upon the quality of the solutions as the number of generations was increased, whereas other evolutionary algorithms almost always continued to improve the quality of their solutions as the number of generations was increased and in the end generated much better solutions than those generated by the PSO algorithm.

In this paper, we propose a number of improvements to the standard PSO algorithm. Similar techniques have been used widely in the context of self-organising maps [9] and neural-gas networks [11]. We introduce a variable neighbourhood operator. During the initial stages of the optimisation, the PSO algorithm's neighbourhood will be an individual particle itself. As the number of generations increases, the neighbourhood will be gradually extended to include all particles. In other words, the variable GBEST in the PSO algorithm is replaced by LBEST (i.e. the local best solution), where the local neighbourhood size is gradually increased. In addition, the magnitudes of the random walk and the inertia weight in the PSO are also gradually adjusted in order to perform a fine grain search during the final stages of the optimisation. We study function optimisation to illustrate the performance of the modified PSO algorithm.

In the next section, we present our modified PSO algorithm. In Section 3, we explain the test problem, namely function optimisation. Experimental procedures are described in Section 4. The paper is concluded in Section 5.
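To make the scheme concrete, the following minimal Python sketch combines the growing LBEST neighbourhood with a linearly decaying inertia weight. The ring-indexed neighbourhood, the 0.9 to 0.4 inertia schedule and all other constants here are illustrative assumptions rather than the exact choices of this paper.

import numpy as np

def modified_pso(f, dim, n_particles=20, max_iters=1000,
                 x_min=-10.0, x_max=10.0, seed=0):
    # Sketch: PSO whose neighbourhood grows from the particle itself
    # (pure LBEST) to the whole swarm (GBEST), with a linearly
    # decaying inertia weight. Constants are illustrative only.
    rng = np.random.default_rng(seed)
    x = rng.uniform(x_min, x_max, (n_particles, dim))  # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # personal bests
    pbest_val = np.array([f(p) for p in x])
    c1 = c2 = 2.0                                      # acceleration constants
    for t in range(max_iters):
        w = 0.9 - 0.5 * t / max_iters                  # inertia: 0.9 -> 0.4
        # neighbourhood radius grows with the generation count,
        # reaching the whole ring at the final generation
        radius = int(np.ceil(0.5 * n_particles * t / max_iters))
        for i in range(n_particles):
            nbrs = [(i + k) % n_particles for k in range(-radius, radius + 1)]
            lbest = pbest[min(nbrs, key=lambda j: pbest_val[j])]
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = (w * v[i]
                    + c1 * r1 * (pbest[i] - x[i])      # cognitive term
                    + c2 * r2 * (lbest - x[i]))        # social (LBEST) term
            x[i] = x[i] + v[i]
            val = f(x[i])
            if val < pbest_val[i]:                     # update personal best
                pbest_val[i] = val
                pbest[i] = x[i].copy()
    return pbest[np.argmin(pbest_val)], float(pbest_val.min())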
where K and t are the total number of iterations/generations and the current iteration/generation number, respectively. The superscripts ∞ and 0 denote the parameter values at the start and at the end of the search process.

PSO8: Repeat steps PSO2-PSO7 until a stop criterion is satisfied or a prespecified number of iterations is completed.

The fourth function is the generalised Griewank function, expressed as follows:

f_4(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1

These functions have been used by several evolutionary computation researchers [1].
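A direct NumPy implementation of the generalised Griewank function in its standard form (an assumption, since the printed equation is only partially legible) is sketched below; the global minimum is f(0) = 0.

import numpy as np

def griewank(x):
    # Generalised Griewank function; global minimum f(0) = 0.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)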
My experiments also showed that C1 and C2 had to be constants to generate good quality solutions, but a value of 2.0 for both parameters did not necessarily give the best result. For instance, C1 = 2.5, C2 = 1.5 gave an aggregate error of 0.391 for function 4. This result is better than those reported above. On the other hand, C1 = 0.0, C2 = 2.5 gave an aggregate error of near zero for function 2. Hence, it would be interesting to develop an algorithm (genetic based?) to search for the best combination of parameters for the PSO for every problem separately. Further, the computation of frac can also be adjusted in an efficient manner by replacing the 3.0 and 0.6 factors by problem specific variables obtained by an efficient search algorithm.
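As a starting point, such a parameter search could be realised as a coarse grid search (a genetic search would use the same interface); run_pso below is a hypothetical routine that executes one PSO run with the given acceleration constants and returns its aggregate error.

import itertools
import numpy as np

def tune_c1_c2(run_pso, grid=(0.0, 0.5, 1.0, 1.5, 2.0, 2.5), runs=10):
    # Exhaustive search over (C1, C2) pairs; run_pso is a hypothetical
    # routine returning the aggregate error of one PSO run with the
    # given acceleration constants.
    best = None
    for c1, c2 in itertools.product(grid, repeat=2):
        err = float(np.mean([run_pso(c1, c2) for _ in range(runs)]))
        if best is None or err < best[0]:
            best = (err, c1, c2)
    return best  # (aggregate error, C1, C2)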
5 Conclusions

In this paper, we investigated the PSO algorithm. In particular, we proposed a number of improvements such as a gradually increasing local neighbourhood, time varying random walk and inertia weight values, and two alternative schemes for determining the LBEST solution for every particle. Our experimental results showed some advantages of using a neighbourhood operator. We found that the time varying values for C1 and C2 did not in fact perform worse than the system with fixed values. However, we found that there existed different combinations of values for C1 and C2 (not just 2.0) that yield good solutions for different problems. We are currently investigating means to determine these values efficiently for different problems.

References

4. L.J.Fogel, "Evolutionary programming in perspective: The top-down view", in Computational Intelligence: Imitating Life, J.M.Zurada, R.J.Marks II and C.Goldberg, Eds, IEEE Press, Piscataway, NJ, 1994.
5. D.E.Goldberg, Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley, MA, 1989.
6. J.Kennedy, "The particle swarm: Social adaptation of knowledge", in Proc. of IEEE Int. Conf. on Evolutionary Computation, Indianapolis, USA, 1997.
7. J.Kennedy and R.Eberhart, "Particle swarm optimization", in Proc. of IEEE Int. Conf. on Neural Networks, Perth, Australia, December 1995.
8. J.Kennedy and R.Eberhart, "A discrete binary version of the particle swarm algorithm", in Proc. of IEEE Int. Conf. on Systems, Man and Cybernetics, Orlando, FL, USA, 1997.
9. T.Kohonen, Self-Organising Maps, Springer Verlag, 1995; "The self-organising map", Proceedings of the IEEE, Vol. 78, No. 9, 1464-1480, 1990.
10. J.R.Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA, 1992.
11. T.M.Martinetz, S.G.Berkovich and K.J.Schulten, "Neural-gas network for vector quantisation and its application to time-series prediction", IEEE Transactions on Neural Networks, Vol. 4, No. 4, 558-569, 1994.
Table 3: Function optimisation results