
Soft Comput

DOI 10.1007/s00500-016-2474-6

FOUNDATIONS

Particle swarm optimization algorithm: an overview


Dongshu Wang^1 · Dapei Tan^1 · Lei Liu^2

© Springer-Verlag Berlin Heidelberg 2017

Abstract  Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm motivated by the intelligent collective behavior of some animals, such as flocks of birds or schools of fish. Since its introduction in 1995, it has seen a multitude of enhancements. As researchers have learned about the technique, they have derived new versions aimed at different demands, developed new applications in a host of areas, published theoretical studies of the effects of the various parameters, and proposed many variants of the algorithm. This paper introduces the origin and background of PSO and carries out a theoretical analysis of the algorithm. We then analyze the present state of its research and application with respect to algorithm structure, parameter selection, topology structure, the discrete PSO algorithm and the parallel PSO algorithm, multi-objective optimization PSO, and its engineering applications. Finally, the existing problems are analyzed and future research directions are presented.

Keywords  Particle swarm optimization · Topology structure · Discrete PSO · Parallel PSO · Multi-objective optimization PSO

Communicated by A. Di Nola.

Electronic supplementary material  The online version of this article (doi:10.1007/s00500-016-2474-6) contains supplementary material, which is available to authorized users.

Dongshu Wang (corresponding author): wangdongshu@zzu.edu.cn
Dapei Tan: 1581901458@qq.com
Lei Liu: luckyliulei@126.com

^1 School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, Henan, China
^2 Department of Research, The People's Bank of China, Zhengzhou Central Sub-Branch, Zhengzhou, China

1 Introduction

Particle swarm optimization (PSO) is a swarm-based stochastic optimization technique proposed by Eberhart and Kennedy (1995) and Kennedy and Eberhart (1995). The PSO algorithm simulates the social behavior of animals such as insects, herds, birds and fish. These swarms cooperate in finding food, and each member keeps changing its search pattern according to the learning experiences of itself and other members.

The main design idea of the PSO algorithm is closely related to two lines of research. One is evolutionary algorithms: like an evolutionary algorithm, PSO uses a swarm mode, which enables it to search a large region of the solution space of the optimized objective function simultaneously. The other is artificial life, namely the study of artificial systems with the characteristics of life.

In studying the behavior of social animals with artificial life theory, Millonas proposed five basic principles for constructing swarm artificial life systems with cooperative behavior by computer (van den Bergh 2001):

(1) Proximity: the swarm should be able to carry out simple space and time computations.
(2) Quality: the swarm should be able to sense quality changes in the environment and respond to them.
(3) Diverse response: the swarm should not limit the ways it obtains resources to a narrow scope.
(4) Stability: the swarm should not change its behavior mode with every environmental change.
(5) Adaptability: the swarm should change its behavior mode when such a change is worthwhile.

Note that the fourth and fifth principles are opposite sides of the same coin. These five principles capture the main characteristics of artificial life systems, and they have become guiding principles for establishing swarm artificial life systems. In PSO, particles can update their positions and velocities according to changes in the environment, so the algorithm meets the requirements of proximity and quality. In addition, the swarm in PSO does not restrict its movement but continuously searches for the optimal solution in the feasible solution space. Particles in PSO can keep a stable movement in the search space while changing their movement mode to adapt to changes in the environment. So particle swarm systems meet the above five principles.

2 Origin and background

To illustrate the background and development of the PSO algorithm, we first introduce an early simple model, the Boid (Bird-oid) model (Reynolds 1987). This model was designed to simulate the behavior of birds, and it is a direct source of the PSO algorithm.

The simplest model can be depicted as follows. Each individual bird is represented by a point in the Cartesian coordinate system and is randomly assigned an initial velocity and position. The program is then run in accordance with "the nearest proximity velocity match rule," so that one individual has the same speed as its nearest neighbor. With the iteration going on in this way, all the points quickly end up with the same velocity. As this model is too simple and far from the real case, a random variable is added to the speed item: at each iteration, aside from meeting "the nearest proximity velocity match," each speed is perturbed by a random variable, which brings the simulation closer to the real case.

Heppner designed a "cornfield model" to simulate the foraging behavior of a flock of birds (Clerc and Kennedy 2002). Assume that there is a "cornfield", i.e., the food's location, on the plane, and that the birds are randomly dispersed on the plane at the beginning. To find the food, they move according to the following rules.

First, assume that the position coordinate of the cornfield is (x0, y0), and that the position and velocity coordinates of an individual bird are (x, y) and (vx, vy), respectively. The distance between the current position and the cornfield is used to measure the performance of the current position and speed: the closer to the "cornfield", the better the performance; the farther away, the worse. Assume that each bird has memory and can remember the best position it has ever reached, denoted pbest. With a being a velocity adjusting constant and rand denoting a random number in [0, 1], the velocity item can be adjusted according to the following rules:

if x > pbest_x, vx = vx − rand × a; otherwise, vx = vx + rand × a.
if y > pbest_y, vy = vy − rand × a; otherwise, vy = vy + rand × a.

Then assume that the swarm can communicate in some way, and each individual is able to know and memorize the best location (marked gbest) of the whole swarm so far. With b being another velocity adjusting constant, after the velocity item is adjusted according to the above rules, it must also be updated according to the following rules:

if x > gbest_x, vx = vx − rand × b; otherwise, vx = vx + rand × b.
if y > gbest_y, vy = vy − rand × b; otherwise, vy = vy + rand × b.

Computer simulation results show that when a/b is relatively large, all individuals gather at the "cornfield" quickly; on the contrary, if a/b is small, the particles gather around the "cornfield" unsteadily and slowly. Through this simple simulation, it can be seen that the swarm finds the optimal point quickly. Inspired by this model, Kennedy and Eberhart devised an evolutionary optimization algorithm and, after many trials and errors, finally fixed the basic algorithm as follows:

vx = vx + 2 * rand * (pbest_x − x) + 2 * rand * (gbest_x − x)
x = x + vx    (1)

They abstracted each individual as a particle without mass and volume, with only a velocity and a position, so they called this algorithm the "particle swarm optimization algorithm."
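To make the cornfield rules and the basic update (1) concrete, the following minimal Python sketch traces one bird on the 2-D plane. The adjusting constants, the starting state and the pbest/gbest values are illustrative assumptions of ours, not values from the original simulations.

    import random

    # One bird on the 2-D plane; a, b and the start state are made-up values.
    a, b = 0.5, 0.3                      # pbest/gbest velocity adjusting constants
    x, y, vx, vy = 4.0, -2.0, 0.0, 0.0   # current position and velocity
    pbest = (1.0, 1.0)                   # best position this bird has reached
    gbest = (0.0, 0.0)                   # best position of the whole swarm

    # pbest-based rules (the "a" rules), one per coordinate
    vx += -random.random() * a if x > pbest[0] else random.random() * a
    vy += -random.random() * a if y > pbest[1] else random.random() * a
    # gbest-based rules (the "b" rules)
    vx += -random.random() * b if x > gbest[0] else random.random() * b
    vy += -random.random() * b if y > gbest[1] else random.random() * b

    # The basic algorithm (1) replaces the four sign rules by differences
    # scaled with random numbers, here shown for the x coordinate:
    vx += 2 * random.random() * (pbest[0] - x) + 2 * random.random() * (gbest[0] - x)
    x += vx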

On this basis, the PSO algorithm can be summarized as follows: PSO is a swarm-based search process in which each individual, called a particle, is defined as a potential solution of the optimized problem in a D-dimensional search space; each particle memorizes the optimal position found by the swarm and by itself, as well as its velocity. In each generation, the particles' information is combined to adjust the velocity in each dimension, which is then used to compute the new position of the particle. Particles change their states constantly in the multi-dimensional search space until they reach a balanced or optimal state or exceed the computing limits. The only coupling among different dimensions of the problem space is introduced via the objective function. Much empirical evidence has shown that the algorithm is an effective optimization tool. The flowchart of the PSO algorithm is shown in Fig. 1.

[Fig. 1 Flowchart of the particle swarm optimization algorithm: start → swarm initialization → particle fitness evaluation → calculate the individual historical optimal positions → calculate the swarm historical optimal position → update particle velocity and position according to the updating equations → if the ending condition is not satisfied, return to the fitness evaluation; otherwise, end]

The following gives a relatively complete presentation of the PSO algorithm. In the continuous space coordinate system, the PSO can be described mathematically as follows. Assume that the swarm size is N, each particle's position vector in the D-dimensional space is X_i = (x_{i1}, x_{i2}, ..., x_{id}, ..., x_{iD}), its velocity vector is V_i = (v_{i1}, v_{i2}, ..., v_{id}, ..., v_{iD}), the individual's optimal position (i.e., the best position the particle has experienced) is P_i = (p_{i1}, p_{i2}, ..., p_{id}, ..., p_{iD}), and the swarm's optimal position (i.e., the best position any individual in the swarm has experienced) is P_g = (p_{g1}, p_{g2}, ..., p_{gd}, ..., p_{gD}). Without loss of generality, taking a minimization problem as the example, in the initial version of the PSO algorithm the update formula of the individual's optimal position is

p_{i,t+1}^d = x_{i,t+1}^d if f(X_{i,t+1}) < f(P_{i,t}); otherwise p_{i,t}^d    (2)

The swarm's optimal position is the best of all the individuals' optimal positions. The update formulas of velocity and position are, respectively,

v_{i,t+1}^d = v_{i,t}^d + c_1 * rand * (p_{i,t}^d − x_{i,t}^d) + c_2 * rand * (p_{g,t}^d − x_{i,t}^d)    (3)

x_{i,t+1}^d = x_{i,t}^d + v_{i,t+1}^d    (4)

Since the initial version of PSO was not very effective in optimization, a modified PSO algorithm (Shi and Eberhart 1998) appeared soon after the initial algorithm was proposed. An inertia weight was introduced into the velocity update formula, and the new velocity update formula became

v_{i,t+1}^d = ω * v_{i,t}^d + c_1 * rand * (p_{i,t}^d − x_{i,t}^d) + c_2 * rand * (p_{g,t}^d − x_{i,t}^d)    (5)

Although this modified algorithm has almost the same complexity as the initial version, it greatly improves the algorithm's performance; therefore, it has achieved extensive application. Generally, the modified algorithm is called the canonical PSO algorithm, and the initial version is called the original PSO algorithm.

By analyzing the convergence behavior of the PSO algorithm, Clerc and Kennedy (2002) introduced a variant of the PSO algorithm with a constriction factor χ, which ensured convergence and improved the convergence rate. The velocity update formula then became

v_{i,t+1}^d = χ (v_{i,t}^d + φ_1 * rand * (p_{i,t}^d − x_{i,t}^d) + φ_2 * rand * (p_{g,t}^d − x_{i,t}^d))    (6)

Obviously, there is no essential difference between the iteration formulas (5) and (6): if appropriate parameters are selected, the two formulas are identical.

The PSO algorithm has two versions, called the global version and the local version. In the global version, the two extremes that a particle tracks are its own optimal position pbest and the optimal position gbest of the swarm. Correspondingly, in the local version, aside from tracking its own optimal position pbest, the particle does not track the swarm optimal position gbest; instead, it tracks the optimal position nbest of the particles in its topological neighborhood. For the local version, the velocity update equation (5) becomes

v_{i,t+1}^d = ω * v_{i,t}^d + c_1 * rand * (p_{i,t}^d − x_{i,t}^d) + c_2 * rand * (p_{l,t}^d − x_{i,t}^d)    (7)

where p_l is the optimal position in the local neighborhood.
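For reference, the following self-contained Python sketch implements the canonical global-version PSO of Eqs. (2), (4) and (5) for a minimization problem. The sphere objective, the parameter values and the simple velocity clamp are illustrative choices on our part (cf. Sects. 5.1-5.3), not settings prescribed by this paper.

    import random

    def canonical_pso(f, dim=10, n=30, iters=500, lo=-5.0, hi=5.0,
                      w=0.729, c1=1.49445, c2=1.49445):
        """Global-version canonical PSO, Eqs. (2), (4) and (5)."""
        vmax = hi - lo                                  # simple speed limit (Sect. 5.3)
        x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        v = [[0.0] * dim for _ in range(n)]
        p = [xi[:] for xi in x]                         # individual optimal positions P_i
        fp = [f(xi) for xi in x]                        # their fitness values
        g = min(range(n), key=lambda i: fp[i])          # index of the swarm optimum P_g
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    # Eq. (5): inertia + cognitive + social terms
                    v[i][d] = (w * v[i][d]
                               + c1 * random.random() * (p[i][d] - x[i][d])
                               + c2 * random.random() * (p[g][d] - x[i][d]))
                    v[i][d] = max(-vmax, min(vmax, v[i][d]))  # clamp to [-Vmax, Vmax]
                    x[i][d] += v[i][d]                  # Eq. (4)
                fx = f(x[i])
                if fx < fp[i]:                          # Eq. (2), minimization
                    fp[i], p[i] = fx, x[i][:]
                    if fx < fp[g]:
                        g = i                           # new swarm optimum
        return p[g], fp[g]

    best_pos, best_val = canonical_pso(lambda s: sum(t * t for t in s))  # sphere test
    print(best_val)

For the local version of Eq. (7), p[g] would simply be replaced by the best pbest within particle i's topological neighborhood (see Sect. 6).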

In each generation, the iteration procedure of any particle is illustrated in Fig. 2.

[Fig. 2 Iteration scheme of the particles]

Analyzing the velocity update formula from a sociological perspective, we can see that the first part is the influence of the particle's previous velocity: the particle has confidence in its current moving state and moves inertially according to its own velocity, so the parameter ω is called the inertia weight. The second part depends on the distance between the particle's current position and its own optimal position and is called the "cognitive" item. It represents the particle's own thinking, i.e., motion resulting from its own experience; therefore, the parameter c1 is called the cognitive learning factor (also the cognitive acceleration factor). The third part relies on the distance between the particle's current position and the global (or local) optimal position in the swarm and is called the "social" factor. It represents information sharing and cooperation among the particles, namely motion driven by the experience of other particles in the swarm; it simulates the movement of good particles through cognition, so the parameter c2 is called the social learning factor (also the social acceleration factor).

Due to its intuitive background, its simplicity and ease of implementation, as well as its wide adaptability to different kinds of functions, the PSO algorithm has received great attention since it was proposed. In the past twenty years, both the theory and the application of the PSO algorithm have achieved great progress. Researchers have gained a preliminary understanding of its principle, and its application has been realized in different domains.

PSO is a stochastic and parallel optimization algorithm. Its advantages can be summarized as follows: it does not require the optimized functions to be differentiable or continuous; its convergence rate is fast; and the algorithm is simple and easy to implement through programming. Unfortunately, it also has some disadvantages (Wang 2012): (1) For functions with multiple local extremes, it can fall into a local extreme and fail to reach the correct result. Two reasons lead to this phenomenon: one is the characteristics of the optimized functions, and the other is that the particles' diversity disappears quickly, causing premature convergence. These two factors are usually inextricably intertwined. (2) Due to the lack of cooperation with good search methods, the PSO algorithm may not get satisfactory results. The reason is that the PSO algorithm does not sufficiently use the information obtained in the calculation procedure: during each iteration, it only uses the information of the swarm optima and the individual optima. (3) Though the PSO algorithm provides the possibility of global search, it cannot guarantee convergence to the global optima. (4) The PSO algorithm is a meta-heuristic bionic optimization algorithm, and there is no rigorous theoretical foundation for it so far. It was designed only by simplifying and simulating the search phenomena of some swarms, but it neither explains from first principles why the algorithm is effective nor specifies its applicable range. Therefore, the PSO algorithm is generally suitable for a class of optimization problems that are high dimensional and do not require very accurate solutions.

There are now many different kinds of research on the PSO algorithm, and they can be divided into the following eight categories: (1) theoretically analyze the PSO algorithm and try to understand its working mechanism; (2) change its structure and try to get better performance; (3) study the influence of various parameter configurations on the PSO algorithm; (4) study the influence of various topology structures on the PSO algorithm; (5) study the parallel PSO algorithm; (6) study the discrete PSO algorithm; (7) study multi-objective optimization with the PSO algorithm; (8) apply the PSO algorithm to various engineering fields. The remainder of this paper summarizes the current research on the PSO algorithm along these eight categories. Because the related studies are too numerous to cover in full, we pick some representative ones to review.

3 Theoretical analysis

Nowadays, theoretical study of the PSO algorithm mainly focuses on its principle, i.e., how the particles interact with each other, and why it is effective for many optimization problems but not obviously so for others. Specifically, research on this problem can be divided into three aspects: one is the moving trajectory of a single particle; another is the convergence problem; and the third is the evolution and distribution of the whole particle system over time.

The first analysis of simplified particle behavior was carried out by Kennedy (1998), who gave different particle trajectories under a series of design choices through simulation. The first theoretical analysis of the simplified PSO algorithm was proposed by Ozcan and Mohan (1998), who showed that in a simplified one-dimensional PSO system, a particle moves along a path defined by a sinusoidal wave, with randomly determined amplitude and frequency. However, their analysis was limited to the simple PSO model without the inertia weight and assumed that Pid and Pgd remain unchanged. In fact, Pid and Pgd change frequently, so the particle's trajectory is a superposition of sine waves of many different amplitudes and frequencies.
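To see where the sinusoidal trajectory comes from, consider the usual simplified setting (our notation): no inertia weight, a fixed attractor p and a constant coefficient φ in place of the random one. The update then reduces to a deterministic linear recurrence:

    v_{t+1} = v_t + \varphi\,(p - x_t), \qquad x_{t+1} = x_t + v_{t+1}
    \quad\Longrightarrow\quad
    x_{t+1} = (2 - \varphi)\,x_t - x_{t-1} + \varphi\,p

The characteristic equation λ² − (2 − φ)λ + 1 = 0 has complex roots of modulus 1 whenever 0 < φ < 4, so x_t oscillates around p like a sampled sinusoid; when φ is redrawn randomly at every step, sine waves of many amplitudes and frequencies superpose, which is exactly the disordered-looking total trajectory described next.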

Therefore, the total trajectory still looks disordered. This reduced the impact of their conclusions significantly.

The first formal analysis of the PSO algorithm's stability was carried out by Clerc and Kennedy (2002), but this analysis treated the random coefficients as constants, thereby simplifying the standard stochastic PSO to a deterministic dynamic system. The resulting system was a second-order linear dynamic system whose stability depends on the system poles, i.e., the eigenvalues of the state matrix. van den Bergh (2001) did a similar analysis of the deterministic version of the PSO algorithm and determined the regions of parameter space where stability can be guaranteed. Convergence and parameter selection were also addressed in the literature (Trelea 2003; Yasuda et al. 2003), but the authors admitted that they did not take the stochastic nature of the PSO algorithm into account, so their results have certain limitations. A similar analysis of the continuous version of the PSO algorithm was done by Emara and Fattah (2004).

As originally proposed, the PSO algorithm adopts a constant ω and uniformly distributed random numbers c1 and c2. How will the first- and second-order stability regions of the particle trajectories change if ω is also a random variable, and/or c1 and c2 follow statistical distributions other than the uniform one? First-order stability analysis (Clerc and Kennedy 2002; Trelea 2003; Bergh and Engelbrecht 2006) aimed to show that the stability of the mean trajectories relies on the parameters (ω, φ), where φ = (ag + al)/2 and c1 and c2 are uniformly distributed in the intervals [0, ag] and [0, al], respectively. Stochastic stability analysis involves higher-order moments and has proved to be very useful for understanding the particle swarm dynamics and clarifying the PSO convergence properties (Fernandez-Martinez and Garcia-Gonzalo 2011; Poli 2009).

Bare Bones PSO (BBPSO) was proposed by Kennedy (2003) as a model of PSO dynamics; its particle update conforms to a Gaussian distribution. Although Kennedy's original formulation is not competitive with standard PSO, adding a component-wise jumping mechanism and tuning the standard deviation can produce a comparable optimization algorithm. Hence, al Rifaie and Blackwell (2012) proposed a Bare Bones with jumps algorithm (BBJ), with an altered search spread component and a smaller jump probability. It used the difference between the neighborhood best and the current position rather than the difference between either the left and right neighbors' bests (in the local neighborhood) or the particle's personal best and the neighborhood best (in the global neighborhood). Three performance measures (accuracy, efficiency and reliability) were used to compare BBJ with the standard Clerc-Kennedy PSO and other variations of BBJ. Using these measures, it was shown that, when benchmarks with successful convergence were considered, the accuracy of BBJ compared to the other algorithms was significantly better. Additionally, BBJ was empirically shown to be both the most efficient and the most reliable algorithm in both local and global neighborhoods.

Meanwhile, the social variant of PSO (al = 0) and the fully informed particle swarm (Mendes et al. 2004) were also studied by Poli (2008). Garcia-Gonzalo and Fernandez-Martinez (2014) presented the convergence and stochastic stability analysis of a series of PSO variants that differ from the classical PSO in the statistical distribution of the three PSO parameters: the inertia weight and the local and global acceleration factors. They gave an analytical expression for the upper limit of the second-order stability regions (the so-called USL curves) of the particle trajectories, which is available for most PSO algorithms. Numerical experiments showed that the best algorithm performance can be obtained by tuning the PSO parameters close to the USL curve.

Kadirkamanathan et al. (2006) analyzed the stability of particle dynamics using Lyapunov stability analysis and the concept of passive systems. This analysis did not assume that all parameters are non-random, and it obtained sufficient conditions for stability. It was based on random particle dynamics that represent the particle dynamics as a nonlinear feedback control system with a deterministic linear part and a nonlinear part and/or a time-varying gain in the feedback loop. Though it considered the influence of random components, its stability analysis was carried out with respect to the optimal position; therefore, the conclusions cannot be applied directly to non-optimal particles. Even when the original PSO algorithm converges, it only converges to the best position the swarm has searched, with no guarantee that the achieved solution is the best, or even a local optimum. van den Bergh and Engelbrecht (2002) proposed a PSO algorithm that ensures convergence. It applies a new update equation to the globally optimal particle, making it perform a random search near the global optimal position, while the other particles are updated by their original equations. This algorithm ensures convergence of the PSO algorithm to a local optimal solution at the cost of convergence rate, and its performance is inferior to the canonical PSO algorithm on multi-modal problems.

Lack of population diversity was recognized early (Kennedy and Eberhart 1995) as an important factor in premature convergence of the swarm toward a local optimum; hence, enhancing diversity was considered a useful approach to escape from local optima (Kennedy and Eberhart 1995; Zhan et al. 2009). Enhancing the swarm diversity, however, is harmful to fast convergence toward the optimal solution. This trade-off is well understood, because Wolpert and Macready (1997) proved that no algorithm can surpass all others on every kind of problem.

Hence, research efforts to promote the performance of an optimization algorithm should not aim at a general function optimizer (Mendes et al. 2004; Wolpert and Macready 1997), but rather at a general problem-solver that performs well on many well-balanced practical benchmark problems (Garcia-Martinez and Rodriguez 2012).

Avoiding premature convergence on a local optimum while keeping the fast convergence feature of the original PSO formulation is an important reason why many PSO variants have been put forward (Valle et al. 2008). These methods include fine-tuning the PSO parameters to manipulate the particle velocity update (Nickabadi et al. 2011), various local PSO formulations that consider the best solution within a local topological particle neighborhood instead of the entire swarm (Kennedy and Mendes 2002, 2003; Mendes et al. 2004), and integrating PSO with other heuristic algorithms (Chen et al. 2013). For example, comprehensive learning PSO (Liang et al. 2006) applied a new learning scheme to increase the swarm diversity in order to avoid premature convergence in solving multi-modal problems. ALC-PSO (Chen et al. 2013) endowed the swarm leader with an increasing age and a lifespan to escape from local optima and thus avoid premature convergence. Self-regulating PSO (Tanweer et al. 2016) adopted a self-regulating inertia weight and self-perception of the global search direction to get faster convergence and better results.

For spherically symmetric local neighborhood functions, Blackwell (2005) theoretically analyzed and experimentally verified the speed features associated with diversity loss in the PSO algorithm. Kennedy (2005) systematically studied how the speed influences the PSO algorithm, which is helpful for understanding the contribution of the speed to PSO performance. Clerc (2006) studied the iteration process of the PSO at the stagnant stage, as well as the role of each random coefficient in detail; finally, he gave the probability density functions of each random coefficient.

4 Algorithm structure

There are a great many enhancement approaches for the PSO algorithm structure, which can be classified into the eight main subsections that follow.

4.1 Adopting multi-sub-populations

Suganthan (1999) introduced the concept of sub-populations from the genetic algorithm and brought a reproduction operator into the PSO algorithm. Dynamic multi-swarm PSO was proposed by Liang and Suganthan (2005), where the swarm was divided into several sub-swarms, and these sub-swarms were regrouped frequently to share information among them. Peng and Chen (2015) presented a symbiotic particle swarm optimization (SPSO) algorithm to optimize neural fuzzy networks. The presented SPSO algorithm used a multi-swarm strategy in which each particle represents a single fuzzy rule, and each particle in each swarm evolves separately to avoid falling into a local optimum. Chang (2015) proposed a modified PSO algorithm to solve multi-modal function optimization problems. It divided the original single swarm into several sub-swarms based on the order of the particles. The best particle in each sub-swarm was recorded and then applied in the velocity update formula to replace the original global best particle of the whole population. To update all particles in each sub-swarm, the enhanced velocity formula was adopted.

In addition, Tanweer et al. (2016) presented a new dynamic mentoring and self-regulation-based particle swarm optimization (DMeSR-PSO) algorithm, which divided the particles into mentor, mentee and independent learner groups according to their fitness differences and Euclidean distances with respect to the best particle. The performance of DMeSR-PSO was extensively evaluated on 12 benchmark functions (unimodal and multi-modal) from CEC2005, on the more complex shifted and rotated CEC2013 benchmark functions, and on 8 real-world optimization problems from CEC2011. The performance of DMeSR-PSO on the CEC2005 benchmark functions was compared with six PSO variants and five meta-heuristic algorithms. The results clearly highlighted that DMeSR-PSO provided the most consistent performance on the selected benchmark problems. The nonparametric Friedman test followed by the pair-wise post hoc Bonferroni-Dunn test provided evidence that DMeSR-PSO is statistically better than the selected algorithms at the 95% confidence level. Further, the performance was also statistically compared with seven PSO variants on the CEC2013 benchmark functions, where DMeSR-PSO performed significantly better than five of the algorithms at a confidence level of 95%. The performance of DMeSR-PSO on the CEC2011 real-world optimization problems was better than that of the winner and runner-up of the competition, indicating that DMeSR-PSO is an effective optimization algorithm for real-world applications. Based on these results, DMeSR-PSO can be recommended for the CEC test sets.

For high-dimensional optimization problems, the PSO algorithm requires many particles, which results in high computational complexity, and a satisfactory solution is difficult to achieve. Hence the cooperative particle swarm algorithm (CPSO-H) (Bergh and Engelbrecht 2004) was proposed, which splits the input vector into multiple sub-vectors and uses a particle swarm to optimize each sub-vector. Although the CPSO-H algorithm uses one-dimensional swarms to search each dimension separately, after the search results are integrated by a global swarm, its performance on multi-modal problems is greatly improved.
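The decomposition idea behind CPSO-H can be sketched compactly. In the snippet below (our illustrative stand-in, not the published algorithm), each slice of the input vector is improved by its own searcher while the remaining dimensions are taken from a shared context vector; a real CPSO-H would run a particle swarm per component instead of the Gaussian perturbation used here.

    import random

    def cooperative_optimize(f, dim, groups, iters=200):
        """Sketch of cooperative decomposition with a shared context vector."""
        context = [random.uniform(-5.0, 5.0) for _ in range(dim)]  # best full vector
        best = f(context)
        for _ in range(iters):
            for dims in groups:                       # one sub-searcher per group
                trial = context[:]
                for d in dims:                        # perturb only this sub-vector
                    trial[d] += random.gauss(0.0, 0.5)
                ft = f(trial)
                if ft < best:                         # keep improvements in the context
                    best, context = ft, trial
        return context, best

    sphere = lambda s: sum(t * t for t in s)
    groups = [[0, 1], [2, 3], [4, 5]]                 # a 6-D vector split into 3 slices
    print(cooperative_optimize(sphere, 6, groups)[1])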

Further, Niu et al. (2005) introduced a master-slave sub-population mode into the PSO algorithm and put forward a multi-population cooperative PSO algorithm. Similarly, Seo et al. (2006) proposed a multi-grouped PSO, which uses N groups of particles to search the N peaks of a multi-modal problem simultaneously. Selleri et al. (2006) used multiple independent sub-populations and added some new terms to the particle velocity update formula, moving the particles toward the historical optimal position of their sub-population or away from the gravity center of the other sub-populations.

4.2 Improving the selection strategy for the particle learning object

Al-kazemi and Mohan (2002) proposed a multi-phase PSO algorithm in which particles are grouped according to temporary search targets in different phases; these temporary targets allow the particles to move toward or away from their own or the global best position. Ting et al. (2003) modified every particle's pBest so that every dimension learns from randomly determined other dimensions; after that, if the new pBest is better, it replaces the original pBest. Yang and Wang (2006) introduced the roulette selection technique into the PSO algorithm to determine gBest, so that in the early stage of evolution all individuals have a chance to lead the search direction, avoiding prematurity. Zhan et al. (2011) introduced an orthogonal learning PSO, in which an orthogonal learning scheme is used to obtain efficient exemplars. Abdelbar et al. (2005) proposed a fuzzy measure whereby several particles with the best fitness values in each neighborhood can affect other particles.

Contrary to the original PSO algorithm, there is a class of methods which make the particles move away from the worst position instead of toward the best position. Yang and Simon (2005) proposed recording the worst position rather than the best position in the algorithm, with all particles moving away from these worst positions. Similarly, Leontitsis et al. (2006) introduced a new concept, the repel operator, which uses the information of the individual's optimal position and the swarm's optimal position. Meanwhile, it also records the current individual's worst positions and the swarm's worst positions and uses them to repel the particles toward the best position, so that the swarm reaches the best position quickly.

4.3 Modifying the particle's update formula

Many of these methods use chaotic sequences to modify the particle positions, whereby the particles search for solutions extensively due to the chaoticity. It is known that these PSO variants have a more diverse search than standard PSO (Tatsumi et al. 2013). Coelho and Lee (2008) randomized the cognitive and social behaviors of the swarm with chaotic sequences and a Gaussian distribution, respectively. Tatsumi et al. (2015) emphasized a chaotic PSO exploiting a virtual quartic objective function based on the personal and global optima. This model adopts a perturbation-based chaotic system derived from a quartic tentative objective function by applying the steepest descent method with a perturbation. The function is determined for each particle, and it has two global minima, at the pbest of the particle and at the gbest.

In addition to these methods, in the Bare Bones PSO algorithm (Kennedy 2003), particle positions are updated using a Gaussian distribution. Since many foragers and wandering animals follow a Levy distribution of steps, this distribution is also useful for optimization algorithms. Richer and Blackwell (2006) therefore replaced the particle dynamics within PSO with random sampling from a Levy distribution. A range of benchmark problems was used to test its performance, and the resulting Levy PSO performed as well as, or better than, a standard PSO or equivalent Gaussian models. Moreover, in the speed update equation, Hendtlass (2003) added memory ability to each particle, and He et al. (2004) introduced a passive congregation mechanism. Zeng et al. (2005) introduced an acceleration term into the PSO algorithm, changing it from a second-order stochastic system into a third-order one. To improve the global search ability of the PSO algorithm, Ho et al. (2006) proposed new speed and position update formulas and introduced the variable "age." Moreover, Ngoa et al. (2016) proposed an improved PSO to enhance the performance of standard PSO through a new movement strategy for each particle: particles fly to their own predefined targets instead of to the best particles (i.e., the personal and global bests). The performance of the proposed improved PSO was illustrated by applying it to 15 unconstrained (unimodal and multi-modal) benchmarks and 15 computationally expensive unconstrained benchmarks.

4.4 Modifying the velocity update strategy

Although PSO performance has improved over the past decades, how to select a suitable velocity update strategy and parameters remains an important research topic. Ardizzon et al. (2015) proposed a novel variant of the original particle swarm concept with two types of agents in the swarm, the "explorers" and the "settlers", which can dynamically exchange their roles during the search procedure. This approach can dynamically update the particle velocities at each time step according to the current distance of each particle from the best position found so far by the swarm.

With good exploration capabilities, the uniformly distributed random numbers in the velocity update strategy may also affect the particle's movement. Thus, Fan and Yan (2014) put forward a self-adaptive PSO with multiple velocity strategies (SAPSO-MVS) to enhance PSO performance. SAPSO-MVS generates self-adaptive control parameters over the whole evolution procedure and adopts a novel velocity update scheme to improve the balance between the exploration and exploitation capabilities of the PSO algorithm, avoiding manual tuning of the PSO parameters. Roy and Ghoshal (2008) proposed Crazy PSO, in which the particle velocity is randomized within predefined limits. Its aim was to randomize the velocity of some particles, named "crazy particles", using a predefined probability of craziness to maintain diversity for global search and better convergence. Unfortunately, the value of the predefined probability of craziness can only be obtained after several experiments. Peram et al. (2003) presented a fitness-distance-ratio-based PSO (FDR-PSO), in which a new velocity update equation is used to regenerate the velocity of each particle. Li et al. (2012) presented a self-learning PSO in which the velocity update scheme can be automatically modified during the evolution procedure. Lu et al. (2015b) proposed a mode-dependent velocity update equation with Markovian switching parameters in a switching PSO, to overcome the contradiction between local search and global search, which makes it easier to jump out of local minima.

Liu et al. (2004) argued that an overly frequent velocity update weakens the particles' local exploitation ability and slows convergence, so they proposed a relaxation velocity update strategy, which updates the speed only when the original speed cannot further improve the particle's fitness value. Experimental results proved that this strategy can greatly reduce the computational load and accelerate convergence. Diosan and Oltean (2006) used a genetic algorithm to evolve the PSO algorithm structure, i.e., the particles' update order and frequency.

4.5 Modifying the speed or position constraint method and dynamically determining the search space

Chaturvedi et al. (2008) dynamically controlled the acceleration coefficients within maximum and minimum limits. Determining the bound values of the acceleration coefficients, however, is a very difficult issue, as it requires several simulations. Stacey et al. (2003) offered a new speed constraint method that re-randomizes the particle speed, and a novel position constraint method that re-randomizes the particle position. Clerc (2004) brought a contraction-expansion coefficient into evolutionary algorithms to guarantee the algorithm's convergence while relaxing the speed bound. Other approaches, such as squeezing the search space (Barisal 2013), have also been proposed to dynamically determine the search space.

4.6 Combining PSO with other search techniques

This has two main purposes: one is to increase the diversity and avoid premature convergence; the other is to improve the local search ability of the PSO algorithm. To promote search diversity in PSO, a great many models have been studied (Poli et al. 2007). These hybrid algorithms introduce various genetic operators into the PSO algorithm, such as selection (Angeline 1998a, b; Lovbjerg et al. 2001), crossover (Angeline 1998b; Chen et al. 2014), mutation (Tsafarakis et al. 2013) or Cauchy mutation (Wang et al. 2011), to increase the diversity and improve the ability to escape from local minima. Meng et al. (2015) proposed a new hybrid optimization algorithm called crisscross search particle swarm optimization (CSPSO), which differs from PSO and its variants in that every particle is directly expressed by its pbest. Its population is updated by modified PSO and by crisscross search optimization in sequence during each iteration. Seventeen benchmark functions (including four unimodal functions, five multi-modal functions and several complicated shifted and rotated functions) were used to test the feasibility and efficiency of the CSPSO algorithm, but it did not cope with premature convergence in the later period of the optimization. Vlachogiannis and Lee (2009) presented a novel control equation in an enhanced coordinated aggregation PSO for better communication among particles to improve the local search. It permits the particles to interact with their own best experience as well as with all other particles with better experience on an aggregate basis, instead of with the global optimal experience. Selvakumar and Thanushkodi (2009) presented civilized swarm optimization (CSO), combining a society-civilization algorithm with PSO to enhance its communication. This new algorithm provides a clustered search, which produces better exploration and exploitation of the search space. Unfortunately, several experiments are needed to decide the optimal control parameters of the CSO.

Lim and Isa (2015) put forward a hybrid PSO algorithm which introduces fuzzy reasoning and a weighted particle to construct a new search behavior model that increases the search ability of the conventional PSO algorithm. Besides the information of the global best and individual best particles, Shin and Kita (2014) took advantage of the information of the second global best and second individual best particles to promote the search performance of the original PSO.

Tanweer et al. (2016) presented a novel particle swarm optimization algorithm named self-regulating particle swarm optimization (SRPSO), which incorporates the best human learning strategies for finding the optimum results. The SRPSO uses two learning schemes: the first adopts a self-regulating inertia weight, and the second adopts self-perception of the global search direction.
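As a minimal illustration of the genetic-operator hybrids named at the start of Sect. 4.6, the following sketch applies a mutation operator to a particle position to counter premature convergence. The mutation rate and Gaussian scale are arbitrary illustrative values, not taken from any cited variant.

    import random

    def mutate(position, lo, hi, rate=0.05, scale=0.1):
        """With probability `rate`, nudge each coordinate; keep it in bounds."""
        span = hi - lo
        out = []
        for xd in position:
            if random.random() < rate:
                xd += random.gauss(0.0, scale * span)   # Gaussian mutation step
            out.append(min(hi, max(lo, xd)))            # clip to the search box
        return out

    # In a PSO loop, mutation would typically follow the velocity/position
    # update and precede the fitness evaluation, e.g.: x[i] = mutate(x[i], lo, hi)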

Other methods or models to improve the diversity include: the attracting-exclusion model (Riget and Vesterstrom 2002), the predator-prey model (Gosciniak 2015), the uncorrelative component analysis model (Fan et al. 2009), the dissipative model (Xie et al. 2002), the self-organizing model (Xie et al. 2004), the life cycle model (Krink and Lovbjerg 2002), the Bayesian optimization model (Monson and Seppi 2005), chemical reaction optimization (Li et al. 2015b), the neighborhood search mechanism (Wang et al. 2013), the collision-avoiding mechanism (Blackwell and Bentley 2002), the information sharing mechanism (Li et al. 2015a), the local search technique (Sharifi et al. 2015), cooperative behavior (Bergh and Engelbrecht 2004), hierarchical fair competition (Chen et al. 2006b), external memory (Acan and Gunay 2005), the gradient descent technique (Noel and Jannett 2004), the simplex method operator (Qian et al. 2012; El-Wakeel 2014), the hill climbing method (Lin et al. 2006b), division of labor (Lim and Isa 2015), principal component analysis (Mu et al. 2015), Kalman filtering (Monson and Seppi 2004), the genetic algorithm (Soleimani and Kannan 2015), the shuffled frog leaping algorithm (Samuel and Rajan 2015), the random search algorithm (Ciuprina et al. 2007), Gaussian local search (Jia et al. 2011), simulated annealing (Liu et al. 2014; Geng et al. 2014), taboo search (Wen and Liu 2005), the Levenberg-Marquardt algorithm (Shirkhani et al. 2014), the ant colony algorithm (Shelokar et al. 2007), the artificial bee colony (Vitorino et al. 2015; Li et al. 2011), the chaos algorithm (Yuan et al. 2015), differential evolution (Zhai and Jiang 2015), evolutionary programming (Jamian et al. 2015) and the multi-objective cultural algorithm (Zhang et al. 2013). The PSO algorithm was also extended to quantum space by Sun et al. (2004). This novel PSO model is based on the delta potential well and models the particles as having quantum behavior. Furthermore, Medasani and Owechko (2005) expanded the PSO algorithm by introducing possibilistic c-means and probability theory, and put forward a probabilistic PSO algorithm.

4.7 Improvements for multi-modal problems

The seventh class of methods is specifically for multi-modal problems and aims to find several good solutions. In order to obtain several good solutions of the optimized problem, Parsopoulos and Vrahatis (2004) used deflection, stretching, repulsion and other techniques to find as many minimum points as possible by preventing the particles from moving to minimum areas found before. However, this method generates new local optima at both ends of the detected local optima, which may lead the optimization algorithm to fall into these local optima. Therefore, Jin et al. (2005) proposed a new form of function transformation which avoids this disadvantage.

Another variant is the niche PSO algorithm proposed by Brits et al. (2003), which locates and tracks multiple optima by using multiple sub-populations simultaneously. Brits et al. (2002) also studied a method to find multiple optimal solutions simultaneously by adjusting the way the fitness value is calculated. On the basis of the niche PSO algorithm, Schoeman and Engelbrecht (2005) adopted vector operations to determine the candidate solution and its border in each niche, using vector dot products, and parallelized this process to obtain better results. However, there is a common problem in niche PSO algorithms: they need a predetermined niche radius, and the algorithm performance is very sensitive to it. To solve this problem, Benameur et al. (2006) presented an adaptive method to determine the niching parameters.

4.8 Keeping the diversity of the population

Population diversity is especially important for enhancing the global convergence of the PSO algorithm. The easiest way to keep population diversity is to reset some particles, or the whole particle swarm, when the diversity becomes very small. Lovbjerg and Krink (2002) adopted self-organized criticality in the PSO algorithm to describe the proximity degree among the particles in the swarm and, further, to decide whether to re-initialize the particle positions. Clerc (1999) presented a deterministic algorithm named Re-Hope: when the search space is quite small but a solution has not yet been found (No-Hope), it resets the swarm. To keep the population diversity and balance the global and local search, Fang et al. (2016) proposed a decentralized form of quantum-inspired particle swarm optimization (QPSO) with a cellular structured population (called cQPSO). The performance of cQPSO-lbest was investigated on 42 benchmark functions with different properties (including unimodal, multi-modal, separated, shifted, rotated, noisy and mis-scaled) and compared with a set of PSO variants with different topologies and with swarm-based evolutionary algorithms (EAs).

The modified PSO of Park et al. (2010) introduced a chaotic inertia weight which decreases and oscillates simultaneously under a decreasing line in a chaotic manner. In this way, additional diversity is brought into the PSO, but the chaotic control parameters need tuning. Recently, Netjinda et al. (2015) presented a novel mechanism for increasing the swarm diversity in PSO, inspired by the collective response behavior of starlings. This mechanism is composed of three major steps: initialization, which prepares alternative populations for the next steps; identification of the seven nearest neighbors; and orientation change, which updates the particle velocity and position according to those neighbors and chooses the best alternative. Due to this collective response mechanism, the Starling PSO explores a wider scope of the search space and hence avoids suboptimal solutions.

The trade-off for this improved performance is that the algorithm adds more processes to the original algorithm. As a result, more parameters are needed, and the additional process, the collective response process, also makes this algorithm consume more execution time. However, the algorithmic complexity of the Starling PSO is still the same as that of the original PSO.

5 Parameters selection

There are several important parameters in the PSO algorithm, i.e., the inertia weight ω (or constriction factor χ), the learning factors c1 and c2, the speed limit Vmax, the position limit Xmax, the swarm size and the initial swarm. Some researchers fix the other parameters and study only the influence of a single parameter on the algorithm, while others study the effect of multiple parameters on the algorithm.

5.1 Inertia weight

Current studies suggest that the inertia weight has the greatest influence on the performance of the PSO algorithm, so it has attracted the most research. Shi and Eberhart (1998) were the first to discuss parameter selection in PSO. They brought an inertia coefficient ω into the PSO and improved its convergence. An extension of this work adopted fuzzy systems to change the inertia weight nonlinearly during optimization (Shi and Eberhart 2001).

Generally, it is believed that in PSO the inertia weight is used to balance the global and the local search: a bigger inertia weight leans toward global search, while a smaller inertia weight leans toward local search, so the value of the inertia weight should gradually decrease over time. Shi and Eberhart (1998) suggested that the inertia weight be set in [0.9, 1.2] and that a linearly time-decreasing inertia weight could significantly enhance the PSO performance.

As a fixed inertia weight usually cannot deliver satisfactory results, PSO variants appeared whose inertia weight declines linearly with the iterations (Shi and Eberhart 1998), changes adaptively (Nickabadi et al. 2011), is adjusted by a quadratic function (Tang et al. 2011) or by population information (Zhan et al. 2009), is adjusted based on Bayesian techniques (Zhang et al. 2015), follows an exponentially decreasing strategy (Lu et al. 2015a), or declines according to a nonlinear function (Chatterjee and Siarry 2006) or a Sugeno function (Lei et al. 2005) during the search. At the same time, there are many methods in which the inertia weight changes adaptively with some evaluation index, such as the success history of the search (Fourie and Groenwold 2002), the evolution state (Yang et al. 2007), the particles' average velocity (Yasuda and Iwasaki 2004), the population diversity (Jie et al. 2006), the smoothness change of the objective function (Wang et al. 2005), and the evolutionary speed and aggregation degree of the particle swarm and the individual search ability (Qin et al. 2006). Liu et al. (2005) even determined whether to accept an inertia weight change according to the Metropolis criterion.

Some researchers also adopted a random inertia weight, such as setting it to [0.5 + (rnd/2.0)] (Eberhart and Shi 2001) or to uniformly distributed random numbers in [0, 1] (Zhang et al. 2003). Jiang and Bompard (2005) introduced a chaos mechanism for selecting the inertia weight, so that the inertia weight can traverse [0, 1]. The modified PSO of Park et al. (2010) introduced a chaotic inertia weight which oscillates and decreases simultaneously under a decreasing line in a chaotic way, but its chaotic control parameters need tuning.

5.2 Learning factors c1 and c2

The learning factors c1 and c2 represent the weights of the stochastic acceleration terms that pull each particle toward pBest and gBest (or nBest). In many cases, c1 and c2 are set to 2.0, which makes the search cover the region centered at pBest and gBest. Another common value is 1.49445, which can ensure the convergence of the PSO algorithm (Clerc and Kennedy 2002). After many experiments, Carlisle and Dozier (2001) put forward a better parameter set, with c1 and c2 set to 2.8 and 1.3, respectively; the performance of this setting was further confirmed by Schutte and Groenwold (2005). Inspired by the idea of the time-varying inertia weight, many PSO variants appeared whose learning factors change with time (Ivatloo 2013), such as learning factors that decrease linearly with time (Ratnaweera et al. 2004), or that are dynamically adjusted based on the particles' evolutionary states (Ide and Yasuda 2005) or in accordance with the number of persistently deteriorating fitness values and the swarm's dispersion degree (Chen et al. 2006a).

In most cases the two learning factors c1 and c2 have the same value, so that the social and cognitive search carry the same weight. Kennedy (1997) studied two extremes: a model with only the social term and one with only the cognitive term. The results showed that both parts are crucial to the success of the swarm search, while there are no definitive conclusions about asymmetric learning factors.

There is also research that determines the inertia weight and the learning factors simultaneously. Many researchers adopted various optimization techniques to dynamically determine the inertia weight and learning factors, such as the genetic algorithm (Yu et al. 2005), an adaptive fuzzy algorithm (Juang et al. 2011), a differential evolutionary algorithm (Parsopoulos and Vrahatis 2002b) and adaptive critic design technology (Doctor and Venayagamoorthy 2005).
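Two of the time-varying schedules discussed in Sects. 5.1 and 5.2 are simple functions of the iteration counter. The endpoint values below (0.9 → 0.4 for ω; 2.5 → 0.5 for c1, with c2 mirrored) are commonly used illustrative settings, not the only recommended ones.

    def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
        """Linearly time-decreasing inertia weight (after Shi and Eberhart 1998)."""
        return w_start - (w_start - w_end) * t / t_max

    def time_varying_factors(t, t_max, c_max=2.5, c_min=0.5):
        """Time-varying learning factors in the spirit of Ratnaweera et al. (2004):
        c1 decreases (less cognitive pull) while c2 increases (more social pull)."""
        c1 = c_max - (c_max - c_min) * t / t_max
        c2 = c_min + (c_max - c_min) * t / t_max
        return c1, c2

    # Usage inside the main loop:
    #   w = linear_inertia(t, iters); c1, c2 = time_varying_factors(t, iters)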

search ability of the particle swarm. In original PSO algo- 5.6 Initialization of the population
rithm, ω = 1, c1 = c2 = 2, particles’ speed often quickly
increases to a very high value which will affect the perfor- Initialization of the population is also a very important prob-
mance of the PSO algorithm, so it is necessary to restrict lem. Generally, the initial population is randomly generated,
particle velocity. Later, Clerc and Kennedy (2002) pointed but there are also many intelligent population initializa-
out that it was not necessary to restrict the particle velocity, tion methods, such as using the nonlinear simplex method
introducing constriction factor to the speed update formula (Parsopoulos and Vrahatis 2002a), centroidal Voronoi tes-
could also realize the purpose of limiting particle velocity. sellations (Richards and Ventura 2004), orthogonal design
However, even the constriction factor was used, experiments (Zhan et al. 2011), to determine the initial population of
showed that better result would be obtained if the particle PSO algorithm, making the distribution of the initial pop-
velocity was limited simultaneously (Eberhart and Shi 2000), ulation as evenly as possible, and help the algorithm to
so the idea of speed limitation was still retained in PSO algo- explore the search space more effectively and find a bet-
rithm. Generally speaking, Vmax was set to the dynamic range ter solution. Robinson et al. (2002) pointed out that the
of each variable, and usually a fixed value, but it could also PSO algorithm and GA algorithm could be used in turn, i.e.,
linearly decrease with time (Fan 2002) or dynamically reduce taking the population optimized by the PSO algorithm as
according to the success of search history (Fourie and Groen- the initial population of the GA algorithm, or conversely,
wold 2002). taking the population optimized by GA algorithm as the ini-
tial population of the PSO algorithm, both methods could
get better results. Yang et al. (2015) presented a new PSO
approach called LHNPSO, with low-discrepancy sequence
5.4 Position limits X max
initialized particles and high-order (1/π 2 ) nonlinear time-
varying inertia weight and constant acceleration coefficients.
5.4 Position limits Xmax

Positions of the particles can be constrained by a maximum position Xmax, which prevents particles from flying out of the physical solution space. Robinson and Rahmat-Samii (2004) put forward three different control techniques, namely the absorbing wall, the reflecting wall and the invisible wall. Once one of the dimensions of a particle hits the boundary of the solution space, the absorbing wall sets the velocity in that corresponding dimension to zero, while the reflecting wall reverses the direction of the particle velocity; in both cases the particle is eventually pulled back into the allowable solution space. In order to reduce the calculation time and avoid disturbing the motion of other particles, the invisible wall does not calculate the fitness values of particles that fly out of the boundary. However, the performance of the PSO algorithm is greatly influenced by the dimension of the problem and by the relative position between the global optimum and the search space boundary. To address this problem, Huang and Mohan (2005) integrated the characteristics of the absorbing wall and the reflecting wall and proposed a hybrid damping boundary to obtain robust and consistent performance, and Mikki and Kishk (2005) combined hard position limits with the absorbing and reflecting walls, with results showing that better solutions could be obtained.
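As a rough illustration of these boundary-handling schemes, the sketch below applies the absorbing, reflecting or invisible wall to one particle in a box-constrained space; the function name and the boolean evaluate flag are our own conventions, not those of the works cited.

```python
import numpy as np

def apply_wall(x, v, lo, hi, wall="absorbing"):
    """Handle one particle hitting the box boundary [lo, hi].

    Returns (position, velocity, evaluate); evaluate=False mimics the
    invisible wall, which skips fitness evaluation outside the box.
    """
    evaluate = True
    if wall == "invisible":
        # Leave the particle untouched; just skip its fitness evaluation.
        evaluate = not (np.any(x < lo) or np.any(x > hi))
        return x, v, evaluate
    below, above = x < lo, x > hi
    if wall == "absorbing":
        v[below | above] = 0.0     # zero velocity in the offending dimensions
    elif wall == "reflecting":
        v[below | above] *= -1.0   # reverse velocity so the particle turns back
    x = np.clip(x, lo, hi)
    return x, v, evaluate
```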
5.5 Population size

Selection of the population size is related to the problem to be solved, although it is not very sensitive to it. A common choice is 20-50 particles. In some cases, a larger population is used to meet special needs.

5.6 Population initialization

The initial population also influences the algorithm's performance. Methods such as the nonlinear simplex method (Parsopoulos and Vrahatis 2002a), centroidal Voronoi tessellations (Richards and Ventura 2004) and orthogonal design (Zhan et al. 2011) have been used to determine the initial population of the PSO algorithm, making the distribution of the initial population as even as possible and helping the algorithm explore the search space more effectively and find better solutions. Robinson et al. (2002) pointed out that the PSO algorithm and the GA could be used in turn, i.e., taking the population optimized by the PSO algorithm as the initial population of the GA, or conversely taking the population optimized by the GA as the initial population of the PSO algorithm; both methods gave better results. Yang et al. (2015) presented a new PSO approach called LHNPSO, with low-discrepancy-sequence-initialized particles, a high-order (1/π²) nonlinear time-varying inertia weight and constant acceleration coefficients; its initial population was produced by applying the Halton sequence to fill the search space adequately.

Furthermore, the parameters of the PSO algorithm can be tuned by methods such as sensitivity analysis (Bartz-Beielstein et al. 2002), regression trees (Bartz-Beielstein et al. 2004a) and computational statistics (Bartz-Beielstein et al. 2004b) to promote the performance of the PSO algorithm on practical problems.

Besides these, Beheshti and Shamsuddin (2015) presented a nonparametric particle swarm optimization (NP-PSO) to improve global exploration and local exploitation in PSO without tuning algorithm parameters. This method integrates local and global topologies with two quadratic interpolation operations to enhance the algorithm's search capacity.
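To make the low-discrepancy idea concrete, here is a small sketch that seeds a two-dimensional swarm with Halton points; the two-prime base choice and helper names are assumptions made for illustration, not details taken from Yang et al. (2015).

```python
def halton(index, base):
    """Element `index` (1-based) of the van der Corput sequence in `base`."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def init_swarm_halton(n_particles, lo, hi, bases=(2, 3)):
    """Spread a 2-D swarm evenly over the box [lo, hi]^2 using Halton points."""
    swarm = []
    for i in range(1, n_particles + 1):
        point = [lo + (hi - lo) * halton(i, b) for b in bases]
        swarm.append(point)
    return swarm

print(init_swarm_halton(5, -10.0, 10.0))
```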
6 Topological structure

Many researchers have proposed different population topology structures for the PSO algorithm, because the performance of PSO is directly influenced by the population diversity. Designing different topologies to improve the performance of the PSO algorithm is therefore an active research direction.

Since topology is studied, it must be related to the concept of a neighborhood. A neighborhood can be static, or it may be determined dynamically. There are two ways to determine the neighborhood: one is according to the flags (or indices) of the particles, which has nothing to do with distance; the other is according to the topological distance between the particles.
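A minimal index-based (distance-free) neighborhood sketch follows: the ring topology connects each particle to its nearest indices, while the star topology connects every particle to every other. The function names are our own illustration.

```python
def ring_neighbors(i, n, k=1):
    """Indices of the 2k ring neighbors of particle i in a swarm of size n."""
    return [(i + d) % n for d in range(-k, k + 1) if d != 0]

def star_neighbors(i, n):
    """Star (gBest) topology: every other particle is a neighbor."""
    return [j for j in range(n) if j != i]

# Particle 0 in a swarm of 5: ring -> [4, 1], star -> [1, 2, 3, 4]
print(ring_neighbors(0, 5), star_neighbors(0, 5))
```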


Most studies (Bratton and Kennedy 2007; Kennedy and Mendes 2002; Li 2010) used static topology structures. Kennedy (1999) and Kennedy and Mendes (2002, 2003) analyzed different kinds of static neighborhood structures and their influence on the performance of the PSO algorithm, and concluded that the adaptability of the star, ring and von Neumann topologies was best. According to Kennedy's work, PSO with a small neighborhood tends to perform better on complicated problems, while PSO with a large neighborhood performs better on simple problems. Based on the K-means clustering algorithm, Kennedy (2000) also proposed another version of local PSO, called the social convergence method, which mixed a spatial neighborhood with the ring topology: each particle updated itself using the common experience of the spatial cluster it belonged to, rather than its own experience. Kennedy (2004) demonstrated enhanced performance of the PSO algorithm by applying the ring topology and making the particles move according to normally distributed random perturbations. Engelbrecht et al. (2005) studied the ability of the basic PSO algorithm to locate and maintain several solutions in multi-modal optimization problems and found that the global neighborhood PSO (gBest PSO) was incapable of this task, while the efficiency of the local neighborhood PSO (nBest PSO) was very low.
Mendes et al. (2004) presented the fully informed particle swarm algorithm, which uses the information of the entire neighborhood to guide the particles toward the best solution; the influence of each particle on its neighbors is weighted by its fitness value and the neighborhood size. Peram et al. (2003) developed a new PSO algorithm based on the fitness-distance ratio (FDR-PSO), which exploits the interactions of the neighbors. In updating each dimension component of the velocity, the FDR-PSO algorithm selects an nBest among the other particles, one with higher fitness that lies much closer to the particle being updated. Because the algorithm selects a different neighborhood particle when updating each dimension of the velocity, it is more effective than selecting only one neighbor particle for all velocity dimensions. Peer et al. (2003) used different neighborhood topologies to study the performance of the guaranteed convergence PSO (GCPSO).
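The following sketch shows one way the fitness-distance-ratio idea can pick a per-dimension guide, assuming a minimization problem; the ratio used here is a simplified form of the one in Peram et al. (2003), and all names are our own.

```python
import numpy as np

def fdr_guide(j, d, positions, fitness):
    """Pick the particle maximizing (fitness gain) / (distance in dimension d)
    with respect to particle j: a simplified fitness-distance-ratio rule."""
    best_idx, best_ratio = None, -np.inf
    for i in range(len(positions)):
        if i == j:
            continue
        gain = fitness[j] - fitness[i]            # positive if i is better (minimization)
        dist = abs(positions[i][d] - positions[j][d]) + 1e-12
        ratio = gain / dist
        if ratio > best_ratio:
            best_ratio, best_idx = ratio, i
    return best_idx

positions = np.random.uniform(-5, 5, size=(6, 2))
fitness = np.sum(positions**2, axis=1)            # sphere function
print([fdr_guide(0, d, positions, fitness) for d in range(2)])
```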
There is also a smaller body of research on dynamic topologies. Lim and Isa (2014) presented a novel PSO variant named PSO with adaptive time-varying topology connectivity (PSO-ATVTC), which uses an ATVTC module and a new learning scheme. The ATVTC module aims to balance the algorithm's exploration/exploitation search by changing each particle's topology connectivity over time according to its search performance. Suganthan (1999) used a dynamically adjusted neighborhood that grows gradually until it includes all the particles. Hu and Eberhart (2002) studied a dynamic neighborhood in which, in each generation, the nearest m particles are selected as the new neighbors of a given particle. Lin et al. (2006a) studied two kinds of dynamically randomized neighborhood topologies. Mohais et al. (2005) presented a PSO algorithm with an area of influence (AOI); in AOI, the influence of the optimal particle on the other particles depends on the distances between them. Hierarchical PSO (Hanafi et al. 2016) used a dynamic tree hierarchy, based on the performance of each particle in the population, to define the neighborhood structure.

All the above neighborhood topologies are used to determine the group experience gBest, while Hendtlass (2003) used the neighborhood topology to determine the individual experience pBest.

7 Discrete PSO

A great number of optimization problems involve discrete or binary variables; typical examples include scheduling problems and routing problems. The update formulas and procedure of the PSO algorithm, however, were originally designed for continuous spaces, which limits its application in discrete optimization domains, so some changes are needed to adapt it to discrete spaces.

In continuous PSO, trajectories are defined as changes in position on a number of dimensions. By contrast, binary particle swarm optimization (BPSO) trajectories are changes in the probability that a coordinate will take on a value of zero or one.

Jian et al. (2004) defined the first discrete binary version of PSO to optimize the structure of neural networks. The particles are encoded as binary strings. By using the sigmoid function, the velocity is mapped into [0,1] and interpreted as "the change in probability." By re-defining the particle position and velocity in this way, continuous PSO can be turned into a discrete PSO that solves discrete optimization problems. Tang et al. (2011) extended this method into quantum space, and Ratnaweera et al. (2004) and Afshinmanesh et al. (2005) developed the discrete PSO further.
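A minimal sketch of that sigmoid-based binary update is shown below: velocities are squashed into probabilities and bits are sampled from them. The constants and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_position_update(v):
    """Binary PSO: interpret sigmoid(v) as the probability that a bit is 1."""
    prob_one = 1.0 / (1.0 + np.exp(-v))    # sigmoid squashes velocity to (0, 1)
    return (rng.random(v.shape) < prob_one).astype(int)

v = np.array([-2.0, 0.0, 2.0, 6.0])        # one particle, four binary dimensions
print(bpso_position_update(v))              # e.g., [0 1 1 1]
```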


In addition, a few modified binary PSO algorithms have been proposed. An angle modulation PSO (AMPSO) was proposed by Pampara et al. (2005) to produce a bit string that solves the original high-dimensional problem. In this approach, the high-dimensional binary problem is reduced to a four-dimensional problem defined in continuous space, with a direct mapping back to the binary space by angle modulation. al Rifaie and Blackwell (2012) presented a new discrete particle swarm optimization method for the discrete time-cost trade-off problem; two large-scale benchmark problems were used to evaluate the performance of the DPSO, and the results indicated that DPSO provided an effective and robust alternative for solving real-world time-cost optimization problems. However, the large-scale benchmark problems used in that study included up to 630 activities and up to five time-cost alternatives, and may have certain limitations in representing the complexity of some large-scale construction projects.
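The angle-modulation trick can be sketched as follows. The trigonometric generating function shown is the commonly used four-parameter form; we assume it here for illustration, so the exact coefficients may differ from those in the works cited.

```python
import math

def angle_modulation_bits(a, b, c, d, n_bits):
    """Map a 4-D continuous point (a, b, c, d) to an n-bit string via the
    commonly used generating function
        g(x) = sin(2*pi*(x - a) * b * cos(2*pi*(x - a) * c)) + d,
    sampled at x = 0, 1, ..., n_bits - 1; bit = 1 where g(x) > 0."""
    bits = []
    for x in range(n_bits):
        g = math.sin(2 * math.pi * (x - a) * b
                     * math.cos(2 * math.pi * (x - a) * c)) + d
        bits.append(1 if g > 0 else 0)
    return bits

# A standard PSO would optimize (a, b, c, d); the bit string is what gets evaluated.
print(angle_modulation_bits(0.0, 0.5, 0.25, 0.1, 8))
```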
Peer et al. (2003) developed a genetic BPSO model that does not fix the size of the swarm. In this algorithm, two operations, birth and death, are introduced to dynamically modulate the swarm. Because the birth and death rates change naturally with time, the model permits oscillations in the size of the swarm, making it a more natural simulation of the social behaviors of intelligent animals. An enhanced BPSO was presented by Kadirkamanathan et al. (2006), which introduced the concept of genotype-phenotype representation and the mutation operator of the GA into BPSO. A novel BPSO was proposed by Lu et al. (2015b) to overcome the problems of the basic BPSO. Although its performance was better than that of the basic BPSO, the new BPSO could still be trapped in local optima and suffer premature convergence, so two different methods were designed to prevent its stagnation: one improved the performance of the new BPSO by introducing the concept of guaranteed convergence BPSO, and the other, a modified new BPSO, adopted the mutation operator to avoid the stagnation issue. Chatterjee and Siarry (2006) developed an essential binary PSO to optimize problems in binary search spaces; in this algorithm, PSO was divided into its essential elements, alternative explanations of these elements were proposed, and the previous direction and state of each particle were also considered when searching for good solutions to the optimized problems. Fourie and Groenwold (2002) presented a fuzzy discrete particle swarm optimization to cope with real-time charging coordination of plug-in electric vehicles. Wang et al. (2005) presented a binary bare bones PSO to search for optimal feature selections. In this algorithm, a reinforced memory scheme was developed to update the local leaders of the particles, preventing the degradation of distinguished genes in the particles, and a uniform combination was presented to balance the local exploitation and the global exploration of the algorithm.
Traditional PSO suffers from the dimensionality problem, i.e., its performance deteriorates quickly as the dimensionality of the search space grows, which greatly limits its application to large-scale global optimization problems. Therefore, for large-scale social network clustering, Brits et al. (2002) presented a discrete PSO algorithm to optimize community structures in social networks: particle position and velocity were redefined in a discrete form, and the particle update strategies were then redesigned in accordance with the network topology.

Discrete PSO (DPSO) has been successfully applied to many discrete optimization tasks, such as the Sudoku puzzle (Liu et al. 2004), multi-dimensional knapsack problems (Banka and Dara 2015), jobshop scheduling problems (Vitorino et al. 2015), complex network clustering problems (Brits et al. 2002), optimizing the echo state network (Shin and Kita 2014), image matching problems (Qian et al. 2012), instance selection for time series classification (van den Bergh and Engelbrecht 2002), ear detection (Emara and Fattah 2004), feature selection (Chen et al. 2006b), the capacitated location routing problem (Liang et al. 2006), the generation maintenance scheduling problem (Schaffer 1985), elderly day care center timetabling (Lee et al. 2008), high-dimensional feature selection, classification and validation (Ardizzon et al. 2015), and high-order graph matching (Fang et al. 2016). All these problems have their respective challenges and are difficult to optimize, but they could be effectively solved by DPSO.

Most of the above discrete PSO algorithms are indirect optimization strategies that determine the binary variables based on probability rather than on the algorithm itself, and therefore they cannot make full use of the performance of the PSO algorithm; when dealing with integer variables, the PSO algorithm also falls into local minima very easily. The original PSO algorithm learns from the experience of the individual and its companions, and a discrete PSO algorithm should follow this idea as well. Based on the traditional velocity-displacement updating operation, Engelbrecht et al. (2005) analyzed the optimization mechanism of the PSO algorithm and proposed the general particle swarm optimization (GPSO) model, which is suitable for solving discrete and combinatorial optimization problems. The nature of the GPSO model still conforms to the PSO mechanism, but its particle updating strategy can be designed either according to the features of the optimized problem or by integration with other methods. Based on similar ideas, Fan et al. (2009) defined local search and path-relinking procedures as the velocity operator to solve the traveling salesman problem. Beheshti and Shamsuddin (2015) presented a memetic binary particle swarm optimization strategy based on hybrid local and global searches in BPSO; this binary hybrid topology particle swarm optimization algorithm has been used to solve optimization problems in binary search spaces.

8 Multi-objective optimization PSO

In recent years, multi-objective (MO) optimization has become an active research area. In a multi-objective optimization problem, each objective function can be optimized independently to find its own optimal value. Unfortunately, due to the conflicts among the objectives, it is almost impossible to find a perfect solution for all the objectives; only Pareto optimal solutions can be found.


The information sharing mechanism in the PSO algorithm is very different from that of other swarm-based optimization tools. In the genetic algorithm (GA), chromosomes exchange information with each other through the crossover operation, which is a bidirectional information sharing mechanism, whereas in most PSO algorithms only gBest (or nBest) provides information for the other particles. Due to this point-attracting feature, the traditional PSO algorithm cannot simultaneously locate the multiple optimal points that constitute the Pareto frontier. Although multiple optimal solutions can be obtained by assigning different weights to the objective functions, combining them and running the algorithm many times, a method that obtains a group of Pareto optimal solutions simultaneously is still desirable.

In the PSO algorithm, a particle is an independent agent that searches the problem space according to the experience of its own and of its companions. As mentioned above, the former is the cognitive part of the particle update formula and the latter is the social part; both parts play a key role in guiding the particle's search. Therefore, choosing appropriate social and cognitive guides (gBest and pBest) is the key problem of the MOPSO algorithm. Selection of the cognitive guide follows the same rule as in the traditional PSO algorithm, with the only difference that the guide should be determined in accordance with Pareto dominance. Selection of the social guide involves two steps. The first step is creating a candidate pool from which the guide is selected; in the traditional PSO algorithm the guide is selected from the pBest of the neighbors, while in MOPSO algorithms the usual method is to use an external pool to store more of the Pareto optimal solutions. The second step is choosing the guide. The selection of gBest should satisfy two standards: first, it should provide effective guidance to the particles so as to obtain a better convergence speed; second, it should provide balanced search along the Pareto frontier so as to maintain the diversity of the population. Two methods are usually used to determine the social guide: (1) roulette selection, which selects randomly according to some criterion in order to maintain the diversity of the population; and (2) quantitative standards, which determine the social guide by some procedure without involving random selection.
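To ground these notions, here is a small sketch of Pareto dominance and a naive external archive of non-dominated solutions, with a random pick of the social guide; it is a generic illustration (minimization assumed), not the specific procedure of any MOPSO cited here.

```python
import random

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated objective vectors."""
    if any(dominates(member, candidate) for member in archive):
        return archive                      # candidate is dominated: reject it
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive

archive = []
for f in [(3, 5), (4, 4), (2, 6), (3, 3), (5, 1)]:
    archive = update_archive(archive, f)

g_best = random.choice(archive)             # roulette-style pick of a social guide
print(archive, g_best)                      # archive: [(2, 6), (3, 3), (5, 1)]
```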
After Schaffer (1985) presented the vector-evaluated genetic algorithm, a wealth of multi-objective optimization algorithms were proposed one after another, such as NSGA-II (Deb and Pratap 2002). Liu et al. (2016) were the first to study the application of the PSO algorithm in multi-objective optimization, emphasizing the importance of both individual search and swarm search, but they did not adopt any method to maintain diversity. On the basis of the concept of non-dominated optimality, Clerc (2004) used an external archive to store and determine which particles would be the non-dominated members, and these members were used to guide the other particles' flight. Kennedy (2003) adopted the main mechanism of the NSGA-II algorithm to determine the local optimal particle among the local optimal particles and their offspring particles, and proposed a non-dominated sorting PSO, which used the max-min strategy in the fitness function to determine Pareto dominance. Moreover, Goldbarg et al. (2006) also used the non-dominated sorting PSO to optimize a U-tube steam generator mathematical model in a nuclear power plant.

Ghodratnama et al. (2015) applied the comprehensive learning PSO algorithm combined with Pareto dominance to solve multi-objective optimization problems. Ozcan and Mohan (1998) developed an elitist multi-objective PSO that incorporated an elitist mutation coefficient to improve the particles' exploitation and exploration capacities. Wang et al. (2011) proposed an iterative multi-objective particle swarm optimization-based control vector parameterization to cope with the dynamic optimization of state-constrained chemical and biochemical engineering problems. In recent research, Clerc and Kennedy (2002), Fan and Yan (2014), Chen et al. (2014) and Lei et al. (2005), among others, also proposed corresponding multi-objective PSO algorithms.

Among them, Li (2004) proposed a novel Cultural MOQPSO algorithm, in which a cultural evolution mechanism was introduced into quantum-behaved particle swarm optimization to deal with multi-objective problems. In Cultural MOQPSO, the exemplar positions of each particle are obtained according to a "belief space," which contains different types of knowledge. Moreover, to increase population diversity and obtain continuous and evenly distributed Pareto fronts, a combination-based update operator was proposed to update the external population. Two quantitative measures, the inverted generational distance and the binary quality metric, were adopted to evaluate its performance. A comprehensive comparison of Cultural MOQPSO with some state-of-the-art evolutionary algorithms on several benchmark test functions, including the ZDT, DTLZ and CEC2009 test instances, demonstrated that Cultural MOQPSO performed better than the other MOQPSOs and MOPSOs. Besides, Cultural MOQPSO was also compared with 11 state-of-the-art evolutionary algorithms on the first 10 functions defined in CEC2009; the comparative results demonstrated that, for half of the test functions, Cultural MOQPSO performed better than most of the 11 algorithms. According to these quantitative comparisons, Cultural MOQPSO can be recommended for multi-objective optimization problems.

Because the fitness calculation consumes considerable computational resources, the number of fitness function evaluations needs to be reduced in order to lower the calculation cost. Pampara et al. (2005) adopted fitness inheritance and estimation techniques to achieve this goal and compared the effects of fifteen kinds of inheritance techniques and four estimation techniques applied to multi-objective PSO algorithms.


There are two main methods of maintaining the diversity of a MOPSO: the Sigma method (Lovbjerg and Krink 2002) and the ε-dominance method (Juang et al. 2011; Robinson and Rahmat-Samii 2004). Robinson and Rahmat-Samii (2004) put forward a multi-swarm PSO algorithm that breaks the whole swarm down into three equally sized sub-swarms; each sub-swarm applies a different mutation coefficient, and this scheme enhances the search capacity of the particles.

Due to the page limit, engineering applications of the PSO are provided in the supplementary file; interested readers are encouraged to refer to it.
9 Noise and dynamic environments

The state of a dynamic system changes frequently, even continuously. Many practical systems involve dynamic environments; for example, due to changes caused by customer priorities or unexpected equipment maintenance, most of the computing time in scheduling systems is spent rescheduling. In real applications, such changes in the system state often require re-optimization.

Using the PSO algorithm to track a dynamic system was initially proposed by Brits et al. (2003), who followed the dynamic system by periodically resetting the memories of all particles. Deb and Pratap (2002) also adopted a similar idea. After that, Geng et al. (2014) introduced an adaptive PSO algorithm that automatically tracks changes in the dynamic system, testing different environment detection and response techniques on the parabolic benchmark function; it effectively improved the ability to track environmental changes by testing the best particle in the swarm and reinitializing the particles. Later, Carlisle and Dozier (2000) used a random point in the search space to determine whether the environment had changed or not, but this required centralized control, which is inconsistent with the distributed processing model of the PSO algorithm. Clerc (2006) therefore proposed a Tracking Dynamical PSO (TDPSO) that makes the fitness value of the best history position decrease with time, so that no centralized control is needed. In order to respond to rapidly changing dynamic environments, Binkley and Hagiwara (2005) added a penalty term to the particles' update formula to keep the particles lying in an expanding swarm; this method does not need to examine whether the best point has changed or not.
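A common detection-and-response pattern behind several of these methods can be sketched as follows: re-evaluate a stored "sentry" solution, and if its fitness has drifted, re-randomize part of the swarm. The reset fraction and all names here are illustrative assumptions, not the exact procedures of the works cited.

```python
import numpy as np

def detect_and_respond(f, swarm, best_x, best_f, frac=0.5, tol=1e-9, rng=None):
    """Re-evaluate the stored best ('sentry'); if its fitness changed, the
    environment moved, so re-randomize a fraction of the swarm and reset memory."""
    rng = rng or np.random.default_rng()
    if abs(f(best_x) - best_f) > tol:                 # change detected
        n_reset = int(frac * len(swarm))
        idx = rng.choice(len(swarm), size=n_reset, replace=False)
        swarm[idx] = rng.uniform(-10, 10, size=swarm[idx].shape)
        best_f = f(best_x)                            # refresh the outdated memory
    return swarm, best_f
```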
Experiments by Monson and Seppi (2004) showed that the basic PSO algorithm can work efficiently and stably in noisy environments; in many cases the noise can even help the PSO algorithm avoid falling into local optima. Moreover, Mostaghim and Teich (2003) also experimentally studied the performance of the unified particle swarm algorithm in dynamic environments. Nickabadi et al. (2011) proposed an anti-noise PSO algorithm, and Pan proposed an effective hybrid PSO algorithm named PSOOHT, introducing the optimal computing budget allocation (OCBA) technique and hypothesis testing to solve function optimization in noisy environments.

The research objects of the above works are simple dynamic systems, the experimental functions used are simple single-mode functions, and the changes are uniform ones in simple environments (that is, with a fixed step). In fact, real dynamic systems are often nonlinear, and they change non-uniformly in complex multi-modal search spaces. Kennedy (2005) used four PSO models (a standard PSO, two randomized PSO algorithms and a fine-grained PSO) to comparatively study a series of different dynamic environments.

10 Numerical experiments

PSO has also been examined in a variety of numerical experiments. To handle imprecise operating costs, demands and capacity data in a hierarchical production planning system, Carlisle and Dozier (2001) used a modified variant of a possibilistic environment-based particle swarm optimization approach to solve an aggregate production plan model that simultaneously minimizes the most possible value of the imprecise total costs, maximizes the possibility of obtaining lower total costs and minimizes the risk of obtaining higher total costs. This method provides a novel way to account for the natural uncertainty of the parameters in an aggregate production planning problem, and it can be applied in ambiguous and indeterminate circumstances of real-world production planning and scheduling problems with ill-defined data. To analyze the effects of process parameters (cutting speed, depth of cut and feed rate) on machining criteria, Ganesh et al. (2014) applied the PSO to optimize the cutting conditions for the developed response surface models; the PSO program gave the minimum values of the considered criteria and the corresponding optimal cutting conditions. Lu used an improved PSO algorithm that adopted a combined fitness function to minimize the squared error between the measured and modeled values in system identification problems. Numerical simulations with five benchmark functions were used to validate the feasibility of PSO, and further numerical experiments were carried out to evaluate the performance of the improved PSO; consistent results demonstrated that the combined fitness function-based PSO algorithm was feasible and efficient for system identification and could achieve better performance than the conventional PSO algorithm.

To test the Starling PSO, Lu et al. (2015a) used eight numerical benchmarking functions representing various characteristics of typical problems, as well as a real-world application involving data clustering. Experimental results showed that Starling PSO improved the performance of the original PSO and yielded the optimal solution in many of the numerical benchmarking functions and in most of the real-world problems in the clustering experiments.


Selvakumar and Thanushkodi (2009) put forward an improved CPSO-VQO with a modified chaotic system whose bifurcation structure is independent of the difference vector; they also proposed a new stochastic method that selects the updating system according to the ratio between the components of the difference vector for each particle, together with restarting and acceleration techniques to develop the standard updating system used in the proposed PSO model. Through numerical experiments, they verified that the proposed PSOs, namely PSO-TPC, PSO-SPC and PSO-SDPC, were superior to the relatively simple existing PSOs and to CPSO-VQO in finding high-quality solutions for various benchmark problems. Since the chaotic system used in these PSOs is based on gBest and pBest, the search is mainly restricted around these two points in spite of its chaoticity.

In addition, Sierra and Coello (2005) conducted numerical experiments on high-dimensional benchmark objective functions to verify the convergence and effectiveness of the proposed initialization of PSO. Salehian and Subraminiam (2015) adopted an improved PSO to optimize the performance, in terms of the number of alive nodes, of wireless sensor networks; the performance of the improved PSO was validated by numerical experiments in a conventional setting.
11 Conclusions and discussion

As a relatively young technique, the PSO algorithm has received wide attention in recent years. The advantages of the PSO algorithm can be summarized as follows: (1) it has excellent robustness and can be used in different application environments with little modification; (2) it has a strong distributed ability, because the algorithm is essentially a swarm evolutionary algorithm, so parallel computation is easy to realize; (3) it converges to the optimal value quickly; and (4) it is easy to hybridize with other algorithms to improve its performance.

After many years of development, the optimization speed, quality and robustness of the PSO algorithm have been greatly improved. However, current studies mostly focus on the algorithm's implementation, enhancement and applications, while the relevant fundamental research lags far behind the algorithm's development. The lack of a mathematical theoretical basis greatly limits further generalization, improvement and application of the PSO algorithm.

PSO research still faces a number of unsolved problems, including but not limited to:

(1) Stochastic convergence analysis. Although the PSO algorithm has been proved effective in real applications and some preliminary theoretical results have been achieved, mathematical proofs of the algorithm's convergence and estimates of its convergence rate are still missing.
(2) How to determine the algorithm parameters. Parameters in PSO are usually determined depending on the specific problem, application experience and numerous experimental tests, so no universal rule exists. How to determine the algorithm parameters conveniently and effectively is another urgent problem to be studied.
(3) Discrete/binary PSO algorithms. Most of the research literature reported in this paper deals with continuous variables, and the limited existing research shows that the PSO algorithm has some difficulties in dealing with discrete variables.
(4) Designing effective algorithms tailored to the characteristics of different problems is very meaningful work. For specific application problems, the PSO algorithm should be studied deeply and its applications extended in both breadth and depth. At the same time, attention should be paid to highly efficient PSO designs, combining the PSO with the optimized problem or rules, and with neural networks, fuzzy logic, evolutionary algorithms, simulated annealing, tabu search, biological intelligence, chaos, etc., to cope with the problem that PSO is easily trapped in local optima.
(5) PSO algorithm design research. More attention should be paid to highly efficient PSO algorithms, putting forward suitable core update formulas and effective strategies to balance global exploration and local exploitation.
(6) PSO application research. Nowadays, most PSO applications are limited to continuous, single-objective, unconstrained, deterministic optimization problems, so applications to discrete, multi-objective, constrained, non-deterministic and dynamic optimization problems should be emphasized. At the same time, the application areas of PSO should be expanded further.

Acknowledgements The authors thank the reviewers for their valuable comments/suggestions which helped to improve the quality of this paper significantly.

Compliance with ethical standards

Funding This study was funded by the National Natural Science Foundation of China (Grant Number 61174085).

Conflict of interest All the authors declare that they have no conflict of interest.

Ethical approval This article does not contain any studies with human participants performed by any of the authors.


References

Abdelbar AM, Abdelshahid S, Wunsch DCI (2005) Fuzzy PSO: a generalization of particle swarm optimization. In: Proceedings of 2005 IEEE international joint conference on neural networks (IJCNN '05), Montreal, Canada, July 31–August 4, pp 1086–1091
Acan A, Gunay A (2005) Enhanced particle swarm optimization through external memory support. In: Proceedings of 2005 IEEE congress on evolutionary computation, Edinburgh, UK, Sept 2–4, pp 1875–1882
Afshinmanesh F, Marandi A, Rahimi-Kian A (2005) A novel binary particle swarm optimization method using artificial immune system. In: Proceedings of the international conference on computer as a tool (EUROCON 2005), Belgrade, Serbia, Nov 21–24, pp 217–220
Al-kazemi B, Mohan CK (2002) Multi-phase generalization of the particle swarm optimization algorithm. In: Proceedings of 2002 IEEE congress on evolutionary computation, Honolulu, Hawaii, August 7–9, pp 489–494
al Rifaie MM, Blackwell T (2012) Bare bones particle swarms with jumps. ANTS 2012. Lect Notes Comput Sci Ser 7461(1):49–60
Angeline PJ (1998a) Evolutionary optimization versus particle swarm optimization: philosophy and performance difference. In: Evolutionary programming, Lecture notes in computer science, vol VII. Springer, Berlin
Angeline PJ (1998b) Using selection to improve particle swarm optimization. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, Anchorage, Alaska, USA, May 4–9, pp 84–89
Ardizzon G, Cavazzini G, Pavesi G (2015) Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms. Inf Sci 299:337–378
Banka H, Dara S (2015) A Hamming distance based binary particle swarm optimization (HDBPSO) algorithm for high dimensional feature selection, classification and validation. Pattern Recognit Lett 52:94–100
Barisal AK (2013) Dynamic search space squeezing strategy based intelligent algorithm solutions to economic dispatch with multiple fuels. Electr Power Energy Syst 45:50–59
Bartz-Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through sensitivity analysis. Technical Report CI 124/02, SFB 531. University of Dortmund, Dortmund, Germany, Department of Computer Science
Bartz-Beielstein T, Parsopoulos KE, Vegt MD, Vrahatis MN (2004a) Designing particle swarm optimization with regression trees. Technical Report CI 173/04, SFB 531. University of Dortmund, Dortmund, Germany, Department of Computer Science
Bartz-Beielstein T, Parsopoulos KE, Vrahatis MN (2004b) Analysis of particle swarm optimization using computational statistics. In: Proceedings of the international conference of numerical analysis and applied mathematics (ICNAAM 2004), Chalkis, Greece, pp 34–37
Beheshti Z, Shamsuddin SM (2015) Non-parametric particle swarm optimization for global optimization. Appl Soft Comput 28:345–359
Benameur L, Alami J, Imrani A (2006) Adaptively choosing niching parameters in a PSO. In: Proceedings of genetic and evolutionary computation conference (GECCO 2006), Seattle, Washington, USA, July 8–12, pp 3–9
Binkley KJ, Hagiwara M (2005) Particle swarm optimization with area of influence: increasing the effectiveness of the swarm. In: Proceedings of 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, California, USA, June 8–10, pp 45–52
Blackwell TM (2005) Particle swarms and population diversity. Soft Comput 9(11):793–802
Blackwell TM, Bentley PJ (2002) Don't push me! Collision-avoiding swarms. In: Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA, August 7–9, pp 1691–1697
Bratton D, Kennedy J (2007) Defining a standard for particle swarm optimization. In: Proceedings of the 2007 IEEE swarm intelligence symposium (SIS 2007), Honolulu, HI, USA, April 19–23, pp 120–127
Brits R, Engelbrecht AP, van den Bergh F (2002) Solving systems of unconstrained equations using particle swarm optimization. In: Proceedings of IEEE international conference on systems, man, and cybernetics, Hammamet, Tunisia, October 6–9, pp 1–9
Brits R, Engelbrecht AP, van den Bergh F (2003) Scalability of niche PSO. In: Proceedings of the IEEE swarm intelligence symposium, Indianapolis, Indiana, USA, April 24–26, pp 228–234
Carlisle A, Dozier G (2000) Adapting particle swarm optimization to dynamic environments. In: Proceedings of the international conference on artificial intelligence, Athens, GA, USA, July 31–August 5, pp 429–434
Carlisle A, Dozier G (2001) An off-the-shelf PSO. In: Proceedings of the workshop on particle swarm optimization, Indianapolis, Indiana, USA
Chang WD (2015) A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems. Appl Soft Comput 33:170–182
Chatterjee A, Siarry P (2006) Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput Oper Res 33:859–871
Chaturvedi KT, Pandit M, Shrivastava L (2008) Self-organizing hierarchical particle swarm optimization for non-convex economic dispatch. IEEE Trans Power Syst 23(3):1079–1087
Chen J, Pan F, Cai T (2006a) Acceleration factor harmonious particle swarm optimizer. Int J Autom Comput 3(1):41–46
Chen K, Li T, Cao T (2006b) Tribe-PSO: a novel global optimization algorithm and its application in molecular docking. Chemom Intell Lab Syst 82:248–259
Chen W, Zhang J, Lin Y, Chen N, Zhan Z, Chung H, Li Y, Shi Y (2013) Particle swarm optimization with an aging leader and challenger. IEEE Trans Evolut Comput 17(2):241–258
Chen Y, Feng Y, Li X (2014) A parallel system for adaptive optics based on parallel mutation PSO algorithm. Optik 125:329–332
Ciuprina G, Ioan D, Munteanu I (2007) Use of intelligent-particle swarm optimization in electromagnetics. IEEE Trans Magn 38(2):1037–1040
Clerc M (1999) The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation (CEC 1999), Washington, DC, USA, July 6–9, pp 1951–1957
Clerc M (2004) Discrete particle swarm optimization. In: Onwubolu GC (ed) New optimization techniques in engineering. Springer, Berlin
Clerc M (2006) Stagnation analysis in particle swarm optimisation or what happens when nothing happens. Technical Report CSM-460, Department of Computer Science, University of Essex, Essex, UK, August 5–8, 2006
Clerc M, Kennedy J (2002) The particle swarm—explosion, stability and convergence in a multidimensional complex space. IEEE Trans Evolut Comput 6(2):58–73
Coelho LDS, Lee CS (2008) Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches. Electr Power Energy Syst 30:297–307
Coello CAC, Pulido G, Lechuga M (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evolut Comput 8(3):256–279
Deb K, Pratap A (2002) A fast and elitist multi objective genetic algorithm: NSGA-II. IEEE Trans Evolut Comput 6(2):182–197


del Valle Y, Venayagamoorthy GK, Mohagheghi S, Hernandez JC, Harley RG (2008) Particle swarm optimization: basic concepts, variants and applications in power systems. IEEE Trans Evolut Comput 12:171–195
Diosan L, Oltean M (2006) Evolving the structure of the particle swarm optimization algorithms. In: Proceedings of European conference on evolutionary computation in combinatorial optimization (EvoCOP 2006), Budapest, Hungary, April 10–12, pp 25–36
Doctor S, Venayagamoorthy GK (2005) Improving the performance of particle swarm optimization using adaptive critics designs. In: Proceedings of 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, California, USA, June 8–10, pp 393–396
Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the 6th international symposium on micro machine and human science, Nagoya, Japan, Mar 13–16, pp 39–43
Eberhart RC, Shi Y (2000) Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2000), San Diego, CA, USA, July 16–19, pp 84–88
Eberhart RC, Shi Y (2001) Particle swarm optimization: developments, applications and resources. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2001), Seoul, Korea, May 27–30, pp 81–86
El-Wakeel AS (2014) Design optimization of PM couplings using hybrid particle swarm optimization-simplex method (PSO-SM) algorithm. Electr Power Syst Res 116:29–35
Emara HM, Fattah HAA (2004) Continuous swarm optimization technique with stability analysis. In: Proceedings of American control conference, Boston, MA, USA, June 30–July 2, pp 2811–2817
Engelbrecht AP, Masiye BS, Pampard G (2005) Niching ability of basic particle swarm optimization algorithms. In: Proceedings of 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, CA, USA, June 8–10, pp 397–400
Fan H (2002) A modification to particle swarm optimization algorithm. Eng Comput 19(8):970–989
Fan Q, Yan X (2014) Self-adaptive particle swarm optimization with multiple velocity strategies and its application for p-xylene oxidation reaction process optimization. Chemom Intell Lab Syst 139:15–25
Fan SKS, Lin Y, Fan C, Wang Y (2009) Process identification using a new component analysis model and particle swarm optimization. Chemom Intell Lab Syst 99:19–29
Fang W, Sun J, Chen H, Wu X (2016) A decentralized quantum-inspired particle swarm optimization algorithm with cellular structured population. Inf Sci 330:19–48
Fernandez-Martinez JL, Garcia-Gonzalo E (2011) Stochastic stability analysis of the linear continuous and discrete PSO models. IEEE Trans Evolut Comput 15(3):405–423
Fourie PC, Groenwold AA (2002) The particle swarm optimization algorithm in size and shape optimization. Struct Multidiscip Optim 23(4):259–267
Ganesh MR, Krishna R, Manikantan K, Ramachandran S (2014) Entropy based binary particle swarm optimization and classification for ear detection. Eng Appl Artif Intell 27:115–128
Garcia-Gonzalo E, Fernandez-Martinez JL (2014) Convergence and stochastic stability analysis of particle swarm optimization variants with generic parameter distributions. Appl Math Comput 249:286–302
Garcia-Martinez C, Rodriguez FJ (2012) Arbitrary function optimisation with metaheuristics: no free lunch and real-world problems. Soft Comput 16:2115–2133
Geng J, Li M, Dong Z, Liao Y (2014) Port throughput forecasting by MARS-RSVR with chaotic simulated annealing particle swarm optimization algorithm. Neurocomputing 147:239–250
Ghodratnama A, Jolai F, Tavakkoli-Moghaddam R (2015) Solving a new multi-objective multi-route flexible flow line problem by multi-objective particle swarm optimization and NSGA-II. J Manuf Syst 36:189–202
Goldbarg EFG, de Souza GR, Goldbarg MC (2006) Particle swarm for the traveling salesman problem. In: Proceedings of European conference on evolutionary computation in combinatorial optimization (EvoCOP 2006), Budapest, Hungary, April 10–12, pp 99–110
Gosciniak I (2015) A new approach to particle swarm optimization algorithm. Expert Syst Appl 42:844–854
Hanafi I, Cabrera FM, Dimane F, Manzanares JT (2016) Application of particle swarm optimization for optimizing the process parameters in turning of PEEK CF30 composites. Procedia Technol 22:195–202
He S, Wu Q, Wen J (2004) A particle swarm optimizer with passive congregation. BioSystems 78:135–147
Hendtlass T (2003) Preserving diversity in particle swarm optimisation. In: Proceedings of the 16th international conference on industrial engineering applications of artificial intelligence and expert systems, Loughborough, UK, June 23–26, pp 31–40
Ho S, Yang S, Ni G (2006) A particle swarm optimization method with enhanced global search ability for design optimizations of electromagnetic devices. IEEE Trans Magn 42(4):1107–1110
Hu X, Eberhart RC (2002) Adaptive particle swarm optimization: detection and response to dynamic systems. In: Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA, May 10–14, pp 1666–1670
Huang T, Mohan AS (2005) A hybrid boundary condition for robust particle swarm optimization. Antennas Wirel Propag Lett 4:112–117
Ide A, Yasuda K (2005) A basic study of adaptive particle swarm optimization. Electr Eng Jpn 151(3):41–49
Ivatloo BM (2013) Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electr Power Syst Res 95(1):9–18
Jamian JJ, Mustafa MW, Mokhlis H (2015) Optimal multiple distributed generation output through rank evolutionary particle swarm optimization. Neurocomputing 152:190–198
Jia D, Zheng G, Qu B, Khan MK (2011) A hybrid particle swarm optimization algorithm for high-dimensional problems. Comput Ind Eng 61:1117–1122
Jian W, Xue Y, Qian J (2004) An improved particle swarm optimization algorithm with neighborhoods topologies. In: Proceedings of 2004 international conference on machine learning and cybernetics, Shanghai, China, August 26–29, pp 2332–2337
Jiang CW, Bompard E (2005) A hybrid method of chaotic particle swarm optimization and linear interior for reactive power optimization. Math Comput Simul 68:57–65
Jie J, Zeng J, Han C (2006) Adaptive particle swarm optimization with feedback control of diversity. In: Proceedings of 2006 international conference on intelligent computing (ICIC 2006), Kunming, China, August 16–19, pp 81–92
Jin Y, Cheng H, Yan J (2005) Local optimum embranchment based convergence guarantee particle swarm optimization and its application in transmission network planning. In: Proceedings of 2005 IEEE/PES transmission and distribution conference and exhibition: Asia and Pacific, Dalian, China, Aug 15–18, pp 1–6
Juang YT, Tung SL, Chiu HC (2011) Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions. Inf Sci 181:4539–4549
Kadirkamanathan V, Selvarajah K, Fleming PJ (2006) Stability analysis of the particle dynamics in particle swarm optimizer. IEEE Trans Evolut Comput 10(3):245–255
Kennedy J (1997) Minds and cultures: particle swarm implications. In: Proceedings of the AAAI fall 1997 symposium on communicative action in humans and machines, Cambridge, MA, USA, Nov 8–10, pp 67–72


Kennedy J (1998) The behavior of particles. In: Proceedings of the 7th annual conference on evolutionary programming, San Diego, CA, Mar 10–13, pp 581–589
Kennedy J (1999) Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. In: Proceedings of the IEEE international conference on evolutionary computation, San Diego, CA, Mar 10–13, pp 1931–1938
Kennedy J (2000) Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of the IEEE international conference on evolutionary computation, pp 303–308
Kennedy J (2003) Bare bones particle swarms. In: Proceedings of the 2003 IEEE swarm intelligence symposium (SIS'03), Indianapolis, IN, USA, April 24–26, pp 80–87
Kennedy J (2004) Probability and dynamics in the particle swarm. In: Proceedings of the IEEE international conference on evolutionary computation, Washington, DC, USA, July 6–9, pp 340–347
Kennedy J (2005) Why does it need velocity? In: Proceedings of the IEEE swarm intelligence symposium (SIS'05), Pasadena, CA, USA, June 8–10, pp 38–44
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Perth, Australia, pp 1942–1948
Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the IEEE international conference on evolutionary computation, Honolulu, HI, USA, Sept 22–25, pp 1671–1676
Kennedy J, Mendes R (2003) Neighborhood topologies in fully-informed and best-of-neighborhood particle swarms. In: Proceedings of the 2003 IEEE international workshop on soft computing in industrial applications (SMCia/03), Binghamton, New York, USA, Oct 12–14, pp 45–50
Krink T, Lovbjerg M (2002) The life cycle model: combining particle swarm optimisation, genetic algorithms and hillclimbers. In: Lecture notes in computer science (LNCS) No. 2439: proceedings of parallel problem solving from nature VII (PPSN 2002), Granada, Spain, 7–11 Dec, pp 621–630
Lee S, Soak S, Oh S, Pedrycz W, Jeon M (2008) Modified binary particle swarm optimization. Prog Nat Sci 18:1161–1166
Lei K, Wang F, Qiu Y (2005) An adaptive inertia weight strategy for particle swarm optimizer. In: Proceedings of the third international conference on mechatronics and information technology, Chongqing, China, Sept 21–24, pp 51–55
Leontitsis A, Kontogiorgos D, Pagge J (2006) Repel the swarm to the optimum. Appl Math Comput 173(1):265–272
Li X (2004) Better spread and convergence: particle swarm multi-objective optimization using the maximin fitness function. In: Proceedings of genetic and evolutionary computation conference (GECCO 2004), Seattle, WA, USA, June 26–30, pp 117–128
Li X (2010) Niching without niching parameters: particle swarm optimization using a ring topology. IEEE Trans Evolut Comput 14(1):150–169
Li X, Dam KH (2003) Comparing particle swarms for tracking extrema in dynamic environments. In: Proceedings of the 2003 congress on evolutionary computation (CEC'03), Canberra, Australia, Dec 8–12, pp 1772–1779
Li Z, Wang W, Yan Y, Li Z (2011) PS-ABC: a hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. Expert Syst Appl 42:8881–8895
Li C, Yang S, Nguyen TT (2012) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybernet Part B Cybernet 42(3):627–646
Li Y, Zhan Z, Lin S, Zhang J, Luo X (2015a) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382
Li Z, Nguyen TT, Chen S, Khac Truong T (2015b) A hybrid algorithm based on particle swarm and chemical reaction optimization for multi-object problems. Appl Soft Comput 35:525–540
Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of IEEE swarm intelligence symposium, Pasadena, CA, USA, June 8–10, pp 124–129
Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evolut Comput 10(3):281–295
Lim W, Isa NAM (2014) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642
Lim W, Isa NAM (2015) Adaptive division of labor particle swarm optimization. Expert Syst Appl 42:5887–5903
Lin Q, Li J, Du Z, Chen J, Ming Z (2006a) A novel multi-objective particle swarm optimization with multiple search strategies. Eur J Oper Res 247:732–744
Lin X, Li A, Chen B (2006b) Scheduling optimization of mixed model assembly lines with hybrid particle swarm optimization algorithm. Ind Eng Manag 11(1):53–57
Liu Y, Qin Z, Xu Z (2004) Using relaxation velocity update strategy to improve particle swarm optimization. In: Proceedings of the third international conference on machine learning and cybernetics, Shanghai, China, August 26–29, pp 2469–2472
Liu F, Zhou J, Fang R (2005) An improved particle swarm optimization and its application in long-term streamflow forecast. In: Proceedings of 2005 international conference on machine learning and cybernetics, Guangzhou, China, August 18–21, pp 2913–2918
Liu H, Yang G, Song G (2014) MIMO radar array synthesis using QPSO with normal distributed contraction-expansion factor. Procedia Eng 15:2449–2453
Liu T, Jiao L, Ma W, Ma J, Shang R (2016) A new quantum-behaved particle swarm optimization based on cultural evolution mechanism for multiobjective problems. Knowl Based Syst 101:90–99
Lovbjerg M, Krink T (2002) Extending particle swarm optimizers with self-organized criticality. In: Proceedings of IEEE congress on evolutionary computation (CEC 2002), Honolulu, HI, USA, May 7–11, pp 1588–1593
Lovbjerg M, Rasmussen TK, Krink T (2001) Hybrid particle swarm optimizer with breeding and subpopulations. In: Proceedings of third genetic and evolutionary computation conference (GECCO-2001), San Francisco-Silicon Valley, CA, USA, July 7–11, pp 469–476
Lu J, Hu H, Bai Y (2015a) Generalized radial basis function neural network based on an improved dynamic particle swarm optimization and AdaBoost algorithm. Neurocomputing 152:305–315
Lu Y, Zeng N, Liu Y, Zhang Z (2015b) A hybrid wavelet neural network and switching particle swarm optimization algorithm for face direction recognition. Neurocomputing 155:219–244
Medasani S, Owechko Y (2005) Possibilistic particle swarms for optimization. In: Applications of neural networks and machine learning in image processing IX, vol 5673, pp 82–89
Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evolut Comput 8(3):204–210
Meng A, Li Z, Yin H, Chen S, Guo Z (2015) Accelerating particle swarm optimization using crisscross search. Inf Sci 329:52–72
Mikki S, Kishk A (2005) Improved particle swarm optimization technique using hard boundary conditions. Microw Opt Technol Lett 46(5):422–426
Mohais AS, Mendes R, Ward C (2005) Neighborhood re-structuring in particle swarm optimization. In: Proceedings of Australian conference on artificial intelligence, Sydney, Australia, Dec 5–9, pp 776–785


Monson CK, Seppi KD (2004) The Kalman swarm: a new approach to particle motion in swarm optimization. In: Proceedings of genetic and evolutionary computation conference (GECCO 2004), Seattle, WA, USA, June 26–30, pp 140–150
Monson CK, Seppi KD (2005) Bayesian optimization models for particle swarms. In: Proceedings of genetic and evolutionary computation conference (GECCO 2005), Washington, DC, USA, June 25–29, pp 193–200
Mostaghim S, Teich J (2003) Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In: Proceedings of the 2003 IEEE swarm intelligence symposium (SIS'03), Indianapolis, Indiana, USA, April 24–26, pp 26–33
Mu B, Wen S, Yuan S, Li H (2015) PPSO: PCA based particle swarm optimization for solving conditional nonlinear optimal perturbation. Comput Geosci 83:65–71
Netjinda N, Achalakul T, Sirinaovakul B (2015) Particle swarm optimization inspired by starling flock behavior. Appl Soft Comput 35:411–422
Ngo TT, Sadollah A, Kim JH (2016) A cooperative particle swarm optimizer with stochastic movements for computationally expensive numerical optimization problems. J Comput Sci 13:68–82
Nickabadi AA, Ebadzadeh MM, Safabakhsh R (2011) A novel particle swarm optimization algorithm with adaptive inertia weight. Appl Soft Comput 11:3658–3670
Niu B, Zhu Y, He X (2005) Multi-population cooperative particle swarm optimization. In: Proceedings of advances in artificial life—the eighth European conference (ECAL 2005), Canterbury, UK, Sept 5–9, pp 874–883
Noel MM, Jannett TC (2004) Simulation of a new hybrid particle swarm optimization algorithm. In: Proceedings of the thirty-sixth IEEE southeastern symposium on system theory, Atlanta, Georgia, USA, March 14–16, pp 150–153
Ozcan E, Mohan CK (1998) Analysis of a simple particle swarm optimization system. In: Intelligent engineering systems through artificial neural networks, pp 253–258
Pampara G, Franken N, Engelbrecht AP (2005) Combining particle swarm optimization with angle modulation to solve binary problems. In: Proceedings of the 2005 IEEE congress on evolutionary computation, Edinburgh, UK, Sept 2–4, pp 89–96
Park JB, Jeong YW, Shin JR, Lee KY (2010) An improved particle swarm optimization for nonconvex economic dispatch problems. IEEE Trans Power Syst 25(1):156–166
Parsopoulos KE, Vrahatis MN (2002a) Initializing the particle swarm optimizer using the nonlinear simplex method. WSEAS Press, Rome
Parsopoulos KE, Vrahatis MN (2002b) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1:235–306
Parsopoulos KE, Vrahatis MN (2004) On the computation of all global minimizers through particle swarm optimization. IEEE Trans Evolut Comput 8(3):211–224
Peer E, van den Bergh F, Engelbrecht AP (2003) Using neighborhoods with the guaranteed convergence PSO. In: Proceedings of IEEE swarm intelligence symposium (SIS 2003), Indianapolis, IN, USA, April 24–26, pp 235–242
Peng CC, Chen CH (2015) Compensatory neural fuzzy network with symbiotic particle swarm optimization for temperature control. Appl Math Model 39:383–395
Peram T, Veeramachaneni K, Mohan CK (2003) Fitness-distance-ratio based particle swarm optimization. In: Proceedings of 2003 IEEE swarm intelligence symposium, Indianapolis, Indiana, USA, April 24–26, pp 174–181
Poli R (2008) Dynamics and stability of the sampling distribution of particle swarm optimisers via moment analysis. J Artif Evol Appl 2008:10–34
Poli R (2009) Mean and variance of the sampling distribution of particle swarm optimizers during stagnation. IEEE Trans Evolut Comput 13(4):712–721
Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization—an overview. Swarm Intell 1(1):33–57
Qian X, Cao M, Su Z, Chen J (2012) A hybrid particle swarm optimization (PSO)-simplex algorithm for damage identification of delaminated beams. Math Probl Eng 2012:1–11
Qin Z, Yu F, Shi Z (2006) Adaptive inertia weight particle swarm optimization. In: Proceedings of the genetic and evolutionary computation conference, Zakopane, Poland, June 25–29, pp 450–459
Ratnaweera A, Halgamuge S, Watson H (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evolut Comput 8(3):240–255
Reynolds CW (1987) Flocks, herds, and schools: a distributed behavioral model. Comput Graph 21(4):25–34
Richards M, Ventura D (2004) Choosing a starting configuration for particle swarm optimization. In: Proceedings of 2004 IEEE international joint conference on neural networks, Budapest, Hungary, July 25–29, pp 2309–2312
Richer TJ, Blackwell TM (2006) The Levy particle swarm. In: Proceedings of the IEEE congress on evolutionary computation, Vancouver, BC, Canada, July 16–21, pp 808–815
Riget J, Vesterstrom JS (2002) A diversity-guided particle swarm optimizer—the ARPSO. Technical Report 2002-02, Department of Computer Science, Aarhus University, Aarhus, Denmark
Robinson J, Rahmat-Samii Y (2004) Particle swarm optimization in electromagnetics. IEEE Trans Antennas Propag 52(2):397–407
Robinson J, Sinton S, Rahmat-Samii Y (2002) Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna. In: Proceedings of 2002 IEEE international symposium on antennas propagation, San Antonio, Texas, USA, June 16–21, pp 314–317
Roy R, Ghoshal SP (2008) A novel crazy swarm optimized economic load dispatch for various types of cost functions. Electr Power Energy Syst 30:242–253
Salehian S, Subraminiam SK (2015) Unequal clustering by improved particle swarm optimization in wireless sensor network. Procedia Comput Sci 62:403–409
Samuel GG, Rajan CCA (2015) Hybrid: particle swarm optimization-genetic algorithm and particle swarm optimization-shuffled frog leaping algorithm for long-term generator maintenance scheduling. Electr Power Energy Syst 65:432–442
Schaffer JD (1985) Multi objective optimization with vector evaluated genetic algorithms. In: Proceedings of the IEEE international conference on genetic algorithm, Pittsburgh, Pennsylvania, USA, pp 93–100
Schoeman IL, Engelbrecht AP (2005) A parallel vector-based particle swarm optimizer. In: Proceedings of the international conference on neural networks and genetic algorithms (ICANNGA 2005), Portugal, pp 268–271
Schutte JF, Groenwold AA (2005) A study of global optimization using particle swarms. J Glob Optim 31:93–108
Selleri S, Mussetta M, Pirinoli P (2006) Some insight over new variations of the particle swarm optimization method. IEEE Antennas Wirel Propag Lett 5(1):235–238
Selvakumar AI, Thanushkodi K (2009) Optimization using civilized swarm: solution to economic dispatch with multiple minima. Electr Power Syst Res 79:8–16
Seo JH, Im CH, Heo CG (2006) Multimodal function optimization based on particle swarm optimization. IEEE Trans Magn 42(4):1095–1098
Sharifi A, Kordestani JK, Mahdaviania M, Meybodi MR (2015) A novel hybrid adaptive collaborative approach based on particle swarm optimization and local search for dynamic optimization problems. Appl Soft Comput 32:432–448

Shelokar PS, Siarry P, Jayaraman VK, Kulkarni BD (2007) Particle swarm and ant colony algorithms hybridized for improved continuous optimization. Appl Math Comput 188:129–142
Shi Y, Eberhart RC (1998) A modified particle swarm optimizer. In: Proceedings of the IEEE international conference on evolutionary computation, pp 69–73, Anchorage, Alaska, USA, May 4–9, 1998
Shi Y, Eberhart RC (2001) Fuzzy adaptive particle swarm optimization. In: Proceedings of the congress on evolutionary computation, pp 101–106, IEEE Service Center, Seoul, Korea, May 27–30, 2001
Shin Y, Kita E (2014) Search performance improvement of particle swarm optimization by second best particle information. Appl Math Comput 246:346–354
Shirkhani R, Jazayeri-Rad H, Hashemi SJ (2014) Modeling of a solid oxide fuel cell power plant using an ensemble of neural networks based on a combination of the adaptive particle swarm optimization and Levenberg–Marquardt algorithms. J Nat Gas Sci Eng 21:1171–1183
Sierra MR, Coello CAC (2005) Improving PSO-based multi-objective optimization using crowding, mutation and epsilon-dominance. Lect Notes Comput Sci 3410:505–519
Soleimani H, Kannan G (2015) A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks. Appl Math Model 39:3990–4012
Stacey A, Jancic M, Grundy I (2003) Particle swarm optimization with mutation. In: Proceedings of IEEE congress on evolutionary computation 2003 (CEC 2003), pp 1425–1430, Canberra, Australia, December 8–12, 2003
Suganthan PN (1999) Particle swarm optimizer with neighborhood operator. In: Proceedings of the congress on evolutionary computation, pp 1958–1962, Washington, DC, USA, July 6–9, 1999
Sun J, Feng B, Xu W (2004) Particle swarm optimization with particles having quantum behavior. In: Proceedings of the congress on evolutionary computation, pp 325–331, Portland, OR, USA, June 19–23, 2004
Tang Y, Wang Z, Fang J (2011) Feedback learning particle swarm optimization. Appl Soft Comput 11:4713–4725
Tanweer MR, Suresh S, Sundararajan N (2016) Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems. Inf Sci 326:1–24
Tatsumi K, Ibuki T, Tanino T (2013) A chaotic particle swarm optimization exploiting a virtual quartic objective function based on the personal and global best solutions. Appl Math Comput 219(17):8991–9011
Tatsumi K, Ibuki T, Tanino T (2015) Particle swarm optimization with stochastic selection of perturbation-based chaotic updating system. Appl Math Comput 269:904–929
Ting T, Rao MVC, Loo CK (2003) A new class of operators to accelerate particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation 2003 (CEC 2003), pp 2406–2410, Canberra, Australia, Dec 8–12, 2003
Trelea IC (2003) The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf Process Lett 85(6):317–325
Tsafarakis S, Saridakis C, Baltas G, Matsatsinis N (2013) Hybrid particle swarm optimization with mutation for optimizing industrial product lines: an application to a mixed solution space considering both discrete and continuous design variables. Ind Market Manage 42(4):496–506
van den Bergh F (2001) An analysis of particle swarm optimizers. Ph.D. dissertation, University of Pretoria, Pretoria, South Africa
van den Bergh F, Engelbrecht AP (2002) A new locally convergent particle swarm optimizer. In: Proceedings of IEEE conference on systems, man and cybernetics, pp 96–101, Hammamet, Tunisia, October 2002
van den Bergh F, Engelbrecht AP (2004) A cooperative approach to particle swarm optimization. IEEE Trans Evolut Comput 8(3):225–239
van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories. Inf Sci 176:937–971
Vitorino LN, Ribeiro SF, Bastos-Filho CJA (2015) A mechanism based on artificial bee colony to generate diversity in particle swarm optimization. Neurocomputing 148:39–45
Vlachogiannis JG, Lee KY (2009) Economic load dispatch—a comparative study on heuristic optimization techniques with an improved coordinated aggregation-based PSO. IEEE Trans Power Syst 24(2):991–1001
Wang W (2012) Research on particle swarm optimization algorithm and its application. Ph.D. dissertation, Southwest Jiaotong University, pp 36–37
Wang Q, Wang Z, Wang S (2005) A modified particle swarm optimizer using dynamic inertia weight. China Mech Eng 16(11):945–948
Wang H, Wu Z, Rahnamayan S, Liu Y, Ventresca M (2011) Enhancing particle swarm optimization using generalized opposition-based learning. Inf Sci 181:4699–4714
Wang H, Sun H, Li C, Rahnamayan S, Pan J (2013) Diversity enhanced particle swarm optimization with neighborhood search. Inf Sci 223:119–135
Wen W, Liu G (2005) Swarm double-tabu search. In: First international conference on intelligent computing, pp 1231–1234, Changsha, China, August 23–26, 2005
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evolut Comput 1(1):67–82
Xie X, Zhang W, Yang Z (2002) A dissipative particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, pp 1456–1461, Honolulu, HI, USA, May 2002
Xie X, Zhang W, Bi D (2004) Optimizing semiconductor devices by self-organizing particle swarm. In: Proceedings of congress on evolutionary computation (CEC 2004), pp 2017–2022, Portland, Oregon, USA, June 19–23, 2004
Yang C, Simon D (2005) A new particle swarm optimization technique. In: Proceedings of 17th international conference on systems engineering (ICSEng 2005), pp 164–169, Las Vegas, Nevada, USA, Aug 16–18, 2005
Yang Z, Wang F (2006) An analysis of roulette selection in early particle swarm optimizing. In: Proceedings of the 1st international symposium on systems and control in aerospace and astronautics (ISSCAA 2006), pp 960–970, Harbin, China, Jan 19–21, 2006
Yang X, Yuan J, Yuan J, Mao H (2007) A modified particle swarm optimizer with dynamic adaptation. Appl Math Comput 189:1205–1213
Yang C, Gao W, Liu N, Song C (2015) Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight. Appl Soft Comput 29:386–394
Yasuda K, Ide A, Iwasaki N (2003) Adaptive particle swarm optimization. In: Proceedings of IEEE international conference on systems, man and cybernetics, pp 1554–1559, Washington, DC, USA, October 5–8, 2003
Yasuda K, Iwasaki N (2004) Adaptive particle swarm optimization using velocity information of swarm. In: Proceedings of IEEE international conference on systems, man and cybernetics, pp 3475–3481, The Hague, Netherlands, October 10–13, 2004
Yu H, Zhang L, Chen D, Song X, Hu S (2005) Estimation of model parameters using composite particle swarm optimization. J Chem Eng Chin Univ 19(5):675–680
Yuan Y, Ji B, Yuan X, Huang Y (2015) Lockage scheduling of Three Gorges–Gezhouba dams by hybrid of chaotic particle swarm optimization and heuristic-adjusted strategies. Appl Math Comput 270:74–89
Zeng J, Cui Z, Wang L (2005) A differential evolutionary particle swarm optimization with controller. In: Proceedings of the first international conference on intelligent computing (ICIC 2005), pp 467–476, Hefei, China, Aug 23–25, 2005
Zhai S, Jiang T (2015) A new sense-through-foliage target recognition method based on hybrid differential evolution and self-adaptive particle swarm optimization-based support vector machine. Neurocomputing 149:573–584
Zhan Z, Zhang J, Li Y, Chung HH (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybernet Part B Cybernet 39(6):1362–1381
Zhan Z, Zhang J, Li Y, Shi Y (2011) Orthogonal learning particle swarm optimization. IEEE Trans Evolut Comput 15(6):832–847
Zhang L, Yu H, Hu S (2003) A new approach to improve particle swarm optimization. In: Proceedings of the genetic and evolutionary computation conference 2003 (GECCO 2003), pp 134–139, Chicago, IL, USA, July 12–16, 2003
Zhang R, Zhou J, Mo L, Ouyang S, Liao X (2013) Economic environmental dispatch using an enhanced multi-objective cultural algorithm. Electr Power Syst Res 99:18–29
Zhang L, Tang Y, Hua C, Guan X (2015) A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques. Appl Soft Comput 28:138–149