Recent Advances in Particle Swarm Optimization
Xiaohui Hu
Department of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, xhu@purdue.edu
Yuhui Shi
EDS Embedded Systems Group, Kokomo, Indiana, USA, Yuhui.Shi@eds.com
Russ Eberhart
Purdue School of Engineering and Technology, Indianapolis, IN 46202, eberhart@iupui.edu
Abstract - This paper reviews the development of the particle swarm optimization method in recent years. Included are brief discussions of the various parameters used in the algorithm. Modifications that adapt PSO to different and complex environments are reviewed, and real-world applications are listed.
I. INTRODUCTION
Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Kennedy and Eberhart in 1995 [1, 2]. As a relatively new evolutionary paradigm, PSO has grown rapidly in the past several years, and over 300 papers related to PSO have been published. More and more researchers are interested in this new algorithm, and it has been investigated from various perspectives (Figure 1).
Figure 1: Number of papers published in each year (incomplete data for year 2003)
Following the introduction, major developments in PSO are reviewed in Section II: the original version is presented, followed by discussions of the various parameters used in PSO and of modifications that improve the algorithm. Sections III and IV discuss PSO in more complex scenarios, including discrete problems, multiobjective optimization, constrained optimization, and dynamic tracking. Finally, some typical real-world applications, such as neural network training, are presented.
II. BASIC ALGORITHM
PSO is inspired by the behavior of bird flocking. Assume the following scenario: a group of birds is randomly searching for food in an area. There is only one piece of food in the area being searched. The birds do not know where the food is, but they know how far away the food is and where their peers are. So what is the best strategy to find the food? An effective strategy is to follow the bird that is nearest to the food. PSO learns from this scenario and uses it to solve optimization problems. In PSO, each single solution is like a "bird" in the search space and is called a "particle". All particles have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct the flight of the particles. The particles fly through the problem space by following the particles with the best solutions found so far.

A. Basic algorithm

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating each generation. In every iteration, each particle is updated by following two "best" values. The first is the location of the best solution (fitness) the particle has achieved so far; this value is called pBest. The other "best" value is the location of the best solution that any neighbor of the particle has achieved so far; this is a neighborhood best, called nBest. When a particle takes the whole population as its neighbors, the best location is a global best, called gBest. The general process of PSO is as follows:

Do
  Calculate the fitness value of each particle
  Update pBest if the current fitness value is better than pBest
  Determine nBest for each particle: choose the particle with the best fitness value among all its neighbors as the nBest
  For each particle
    Calculate the particle velocity according to (a)
    Update the particle position according to (b)
While maximum iterations or minimum criteria are not attained

Figure 2: The process of the particle swarm
The core of PSO is the pair of updating formulas for each particle, which can be represented as follows. Equation (a) calculates a new velocity for each particle (potential solution) based on its previous velocity ($v_{id}$), the particle's location at which the best fitness so far has been achieved ($p_{id}$, or pBest), and the neighbor's best location ($p_{nd}$, or nBest). Equation (b) then updates the particle's position by adding the new velocity to its current position:

$v_{id} = w \times v_{id} + c_1 \times \mathrm{rand}() \times (p_{id} - x_{id}) + c_2 \times \mathrm{Rand}() \times (p_{nd} - x_{id})$  (a)

$x_{id} = x_{id} + v_{id}$  (b)

where $w$ is the inertia weight, $c_1$ and $c_2$ are the learning factors, and $\mathrm{rand}()$ and $\mathrm{Rand}()$ are independent random numbers uniformly distributed in $[0, 1]$.
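To make the update concrete, the following is a minimal sketch of equations (a) and (b) in Python (using NumPy) for the gBest model. The sphere fitness function, swarm size, parameter values (w = 0.7, c1 = c2 = 1.49445), and the choice of Vmax are illustrative assumptions, not settings prescribed by the paper.

```python
import numpy as np

def sphere(x):
    """Illustrative fitness function (minimization)."""
    return np.sum(x ** 2)

def pso(fitness, dim=10, n_particles=30, iters=200,
        w=0.7, c1=1.49445, c2=1.49445, x_range=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = x_range
    vmax = hi - lo                                    # Vmax set to the dynamic range (Section II.F)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = rng.uniform(-vmax, vmax, (n_particles, dim))  # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()        # gBest: whole population as the neighborhood

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))           # rand() in equation (a)
        r2 = rng.random((n_particles, dim))           # Rand() in equation (a)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # equation (a)
        v = np.clip(v, -vmax, vmax)                   # velocity clamping
        x = x + v                                     # equation (b)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val                   # update pBest where the new position is better
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()    # update gBest
    return gbest, pbest_val.min()

if __name__ == "__main__":
    best_x, best_f = pso(sphere)
    print(best_f)
```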
B. The selection of pBest

pBest is the best location the particle has achieved so far; it can be viewed as the particle's memory. Currently, only one memory slot is allocated to each particle.

The best location does not always depend on the value of the fitness function alone. Many constraints can be applied to the definition of the best location to adapt to different problems, and this does not lower the search ability or performance. For example, in nonlinear constrained optimization problems [4, 5], the particles only remember positions in the feasible space and disregard infeasible solutions; this simple modification successfully locates the optima of a series of benchmark problems. In a multiobjective optimization (MO) environment [6, 7], the best position is determined by Pareto dominance (if solution A is not worse than solution B in every objective dimension and is better than solution B in at least one objective dimension, then solution B is dominated by solution A). Another popular technique is memory reset: in dynamic environments [8, 9], a particle's pBest is reset to its current position if the environment changes.
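As one illustration, the feasibility-preserving memory update used in [4, 5] can be sketched as follows. Only the rule of ignoring infeasible candidates is taken from the text; the example constraint set and all names are assumptions.

```python
import numpy as np

def is_feasible(x):
    """Illustrative constraint set: g_i(x) <= 0 for both inequality constraints."""
    return (x[0] + x[1] - 1.0 <= 0.0) and (np.sum(x ** 2) - 4.0 <= 0.0)

def update_pbest(x, fx, pbest, pbest_val):
    """Keep only feasible positions in memory (pBest); infeasible candidates are ignored."""
    if is_feasible(x) and fx < pbest_val:
        return x.copy(), fx
    return pbest, pbest_val

# usage sketch (inside a PSO loop):
#   pbest_i, pbest_val_i = update_pbest(x_i, fitness(x_i), pbest_i, pbest_val_i)
```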
C. The selection of nBest

nBest is the best position that the neighbors of a particle have achieved so far. The neighborhood of a particle is the social environment the particle encounters. The selection of nBest consists of two steps: determining the neighborhood, and selecting the nBest among the neighbors.

Traditionally, PSO takes certain predetermined, adjacent particles as the neighbors. The number of neighbors, or the size of the neighborhood, affects the convergence speed of the algorithm: it is generally accepted that a larger neighborhood makes the particles converge faster, while a smaller neighborhood helps prevent premature convergence. gBest is an extreme case of the nBest version, in which the whole population is taken as the neighborhood of each particle [1]. Although Kennedy et al. investigated various neighborhood structures and their influence on performance [10, 11], no conclusive results have been reached so far. In multiobjective optimization problems, an external repository of Pareto optimal solutions is used, since it is doubtful that a topological neighborhood structure can escape from local optima easily.

The selection of the nBest is usually determined by comparing fitness values among neighbors. However, in a multiobjective optimization environment the situation is more complicated, because there are multiple fitness values for each particle; Section IV provides a more detailed discussion of this topic. Peram et al. [13] used the ratio of the fitness and the distance of other particles to determine the nBest and claimed it outperforms the original version of PSO.
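A common concrete instance of such a predetermined neighborhood is the ring (lBest) topology. The sketch below picks each particle's nBest from its k index-neighbors on each side; the ring structure and the value of k are illustrative choices, not details given in the text.

```python
import numpy as np

def ring_nbest(pbest, pbest_val, k=2):
    """For each particle, return the best pBest among its k index-neighbors on each
    side of a ring topology. Setting k to cover the whole swarm reduces to gBest."""
    n = len(pbest_val)
    nbest = np.empty_like(pbest)
    for i in range(n):
        idx = [(i + j) % n for j in range(-k, k + 1)]   # particle i and its ring neighbors
        best = min(idx, key=lambda j: pbest_val[j])     # neighbor with the best (lowest) fitness
        nbest[i] = pbest[best]
    return nbest

# usage sketch: nbest = ring_nbest(pbest, pbest_val, k=2), then used in equation (a)
```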
D. Learning factors

The learning factors $c_1$ and $c_2$ in equation (a) represent the weighting of the stochastic acceleration terms that pull each particle toward the pBest and nBest positions [14]. From a psychological standpoint, the cognitive term represents the tendency of individuals to duplicate past behaviors that have proven successful, whereas the social term represents the tendency to follow the successes of others. Both $c_1$ and $c_2$ are sometimes set to 2.0, the obvious reason being that the search will then cover all regions centered at pBest and nBest. The value 1.49445 is also used, following the work by Clerc [15], which indicates that a constriction factor may be necessary to ensure convergence of PSO [14].

In most cases the learning factors are identical, which puts the same weight on social searching and cognitive searching. Kennedy investigated two extreme cases, the social-only model and the cognitive-only model, and found that both parts are essential to the success of particle swarm searching [16]. No definitive conclusions about asymmetric learning factors have been reported.

E. Inertia weight

The inertia weight was first introduced by Shi and Eberhart [3]. Its function is to balance global exploration and local exploitation, and linearly decreasing inertia weights were recommended by the authors. Zheng et al. [17] claimed that PSO with an increasing inertia weight performs better; however, they used a different set of learning factors, and it is not clear from the paper how this affects the performance. A randomized inertia weight is also used in several reports: the inertia weight can be set to [0.5 + (Rnd/2.0)], which is selected in the spirit of Clerc's constriction factor [14].
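The two schedules mentioned above can be written compactly as follows. The start and end values 0.9 and 0.4 for the linearly decreasing weight are commonly used settings and are an assumption here, not values stated in the text.

```python
import random

def linear_inertia(iteration, max_iters, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight over the run, in the spirit of [3]."""
    return w_start - (w_start - w_end) * iteration / max_iters

def random_inertia():
    """Randomized inertia weight 0.5 + Rnd/2.0, chosen in the spirit of the
    constriction factor [14]."""
    return 0.5 + random.random() / 2.0
```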
F. Other parameters

Particles' velocities are clamped to a maximum velocity Vmax, which serves as a constraint to control the global exploration ability of the particle swarm. Generally, Vmax is set to the dynamic range of each variable.
The population size selected is problem-dependent. Sizes of 20-50 are most common. In some situations, large population sizes may be used to adapt to different requirements.
III. DISCRETE VERSION OF PSO
Many optimization problems involve discrete or binary variables; typical examples are scheduling or routing problems. The updating formulas and procedures of PSO originate from, and were designed for, continuous spaces, so some changes have to be made to adapt them to discrete spaces. The coding changes may be simple, but it is hard to define the meaning of velocities and to determine the changes of trajectories.

Kennedy et al. [18] defined the first discrete binary version of PSO. The particles are coded as binary strings, and the velocities are constrained to the interval [0, 1] by a sigmoid function and interpreted as changes of probabilities. Chang et al. [19] applied the method to feeder reconfiguration problems and showed that it is efficient in searching for optimal solutions. Mohan et al. [20] proposed several binary approaches (direct approach, quantum approach, regularization approach, bias vector approach, and mixed approach), but no conclusion was drawn from the limited experimentation. Hu et al. [21] introduced a modified PSO to deal with permutation problems: particles are defined as permutations of a group of unique values, velocities are redefined based on the similarity of two particles, and particles make swaps to reach a new permutation at a random rate defined by their velocities. A mutation factor is also introduced to prevent the current pBest from being stuck at local minima. A preliminary study on the n-queens problem showed that the modified PSO is promising for constraint satisfaction problems.

When dealing with integer variables, PSO is sometimes easily trapped in local minima; it seems that PSO can locate the optimal area but fails to exploit further detail. The philosophy behind the original PSO is to learn from an individual's own experience and its peers' experience. How to effectively apply these rules to discrete problems is still an open issue, and a direct translation of the original PSO might not be the only choice.
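A minimal sketch of the binary version along the lines of [18] is given below: velocities are squashed with a sigmoid and read as the probability that each bit is 1. The toy objective (maximizing the number of ones), the parameter values, and the clamping value are illustrative assumptions.

```python
import numpy as np

def binary_pso(n_bits=20, n_particles=20, iters=100, c1=2.0, c2=2.0, vmax=4.0, seed=1):
    """Discrete binary PSO sketch: sigmoid(velocity) gives the probability of a bit being 1."""
    rng = np.random.default_rng(seed)
    fitness = lambda b: b.sum()                       # toy objective: maximize number of ones
    x = rng.integers(0, 2, (n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_val = x.copy(), np.array([fitness(b) for b in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)                   # keep the sigmoid away from 0/1 saturation
        prob = 1.0 / (1.0 + np.exp(-v))               # sigmoid maps velocity to [0, 1]
        x = (rng.random(x.shape) < prob).astype(int)  # resample bits from the probabilities
        vals = np.array([fitness(b) for b in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

print(binary_pso())
```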
IV. PSO IN COMPLEX ENVIRONMENTS

A. Multiobjective optimization

In recent years, multiobjective optimization has been a very active research area. In multiobjective optimization (MO) problems, each objective function may be optimized separately and the best solution found for that objective. However, perfect solutions that are optimal in all objective dimensions can seldom be found, because the objective functions are often in conflict with one another. This results in a group of alternative solutions which must be considered equivalent in the absence of information concerning the relevance of each objective relative to the others; this group of alternative solutions is known as a Pareto optimal set or Pareto front.

The information sharing mechanism in PSO is significantly different from that of other population-based optimization tools. In genetic algorithms (GAs), chromosomes exchange information with each other through crossover, a two-way information sharing mechanism; in PSO, only gBest gives out information to others, a one-way information sharing mechanism. Due to this point-attraction characteristic, traditional PSO is not able to locate multiple optimal points simultaneously, which together represent the Pareto front. Although multiple optimal solutions could be obtained through multiple runs with different weighted combinations of all the objectives, a method that finds a group of Pareto optimal solutions simultaneously is preferred.
In PSO, a particle is an independent intelligent agent that searches the problem space based on its own experience and the experiences of its peer particles. The former is the cognitive part of the particle update formula, and the latter is the social part. Both play crucial roles in guiding the particles' search. Thus, the selection of the social and cognitive leaders (nBest and pBest) is the key point of MO-PSO algorithms [6, 7, 22-29].
The selection of the cognitive leader follows the same rule as in traditional PSO; the only difference is that the leader is determined by Pareto dominance. It is possible to let each particle have multiple memory slots to store more Pareto optimal solutions, but no report of this has been found in the literature.

The selection of the social leader includes two steps, which are similar to the selection of nBest. The first step is to form a candidate pool from which the leader is chosen. In traditional PSO, the leader is chosen from the pBest values of the neighbors; a more popular method in MO-PSO is to use an external pool to store more Pareto optimal solutions. The second step is the process of choosing the leader. The selection of nBest should satisfy two criteria: first, it should provide effective guidance to the particle to obtain a good convergence speed; second, it needs to provide a balanced search along the Pareto front to maintain population diversity. Two typical approaches have been employed in the literature:
1. Roulette wheel selection: all candidates are assigned weights based on some criterion, such as a crowding radius [29], a crowding factor [24], a niche count [24], or other measurements, and then random selection is used to choose the social leader [6, 24, 29, 30]. The aim of this process is to maintain population diversity.
2. Quantitative standards: the social leader is determined by some procedure without any random selection involved [7, 22, 23, 27].
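The following is a sketch of approach 1: Pareto dominance (as defined in Section II.B) maintains an external archive, and a roulette wheel over a crowding-style weight draws each particle's social leader. The specific weight used here (the inverse of the number of nearby archive members) is an illustrative stand-in for the crowding measures cited above, and all names are assumptions.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def update_archive(archive, candidate):
    """Keep only non-dominated objective vectors in the external pool."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

def roulette_leader(archive, radius=0.1, rng=np.random.default_rng()):
    """Pick a social leader: archive members in sparser regions get larger weights."""
    pts = np.array(archive)
    counts = np.array([np.sum(np.linalg.norm(pts - p, axis=1) < radius) for p in pts])
    weights = 1.0 / counts                      # fewer close neighbors -> higher probability
    probs = weights / weights.sum()
    return archive[rng.choice(len(archive), p=probs)]

# usage sketch
archive = []
for f in [np.array([1.0, 3.0]), np.array([2.0, 2.0]), np.array([3.0, 1.0]), np.array([2.5, 2.5])]:
    archive = update_archive(archive, f)
leader = roulette_leader(archive)
```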
Ray et al. [29] combined the Pareto ranking scheme and PSO to handle MO problems. A set of leaders (SOL), consisting of the better performing particles, is selected based on Pareto ranks during each generation. The remaining particles each select a leader from the SOL as nBest and move to a new location. The selection of a leader from the SOL is based on a roulette wheel scheme that ensures SOL members with a larger crowding radius have a higher probability of being selected as a leader.
Fieldsend et al. [22] proposed a dominated tree for storing the particles, shown in Figure 6.
Figure 3: Pareto ranking scheme in Ray et al.
Coello Coello et al. [6] used a two-step selection process to find the social leader. First, the fitness hyperspace is divided into small hypercubes, and each cube is assigned a weight that is inversely proportional to the number of non-dominated solutions inside the cube. Then roulette-wheel selection is used to select one of the hypercubes from which the nBest will be picked. In the second step, a social leader is randomly picked from the selected hypercube.
Figure 6: The dominated tree in Fieldsend et al.
Hu et al. [7] developed a dynamic neighborhood strategy to pick the social leader in MOPSO. The objectives are divided into two groups: one objective is defined as the optimization objective, while all the other objectives are defined as neighborhood objectives. First, the distances between the current particle and all the candidates are calculated, and a group of neighbors is picked based on these distances; then the candidate with the minimum fitness value on the optimization objective becomes the social leader. The method is simple and intuitive, but its drawback is that it is sensitive to the selection of objectives.
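A sketch of this dynamic-neighborhood selection follows, assuming (as one interpretation of [7]) that the distances are measured on the neighborhood objective's fitness values; the objective split, the neighborhood size m, and the array layout are illustrative assumptions.

```python
import numpy as np

def dynamic_neighborhood_leader(particle_fit, candidate_fits, opt_obj=0, nbr_obj=1, m=3):
    """Pick the social leader for one particle from a pool of non-dominated candidates.
    particle_fit: objective vector of the particle; candidate_fits: (n, n_obj) array.
    Step 1: find the m candidates closest to the particle on the neighborhood objective.
    Step 2: among them, return the one with the smallest optimization-objective value."""
    dist = np.abs(candidate_fits[:, nbr_obj] - particle_fit[nbr_obj])
    neighbors = np.argsort(dist)[:m]
    return neighbors[np.argmin(candidate_fits[neighbors, opt_obj])]  # index into the pool

# usage sketch
pool = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.0], [4.0, 1.0]])
print(dynamic_neighborhood_leader(np.array([2.5, 2.4]), pool))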
Mostaghim et al. [27] introduced the sigma method as a new way of finding the best social leader for each particle of the population. Sigma values are calculated for each individual in the candidate pool as well as for the particle, where the sigma value for a two-objective optimization problem is defined as in (c). The particle then selects as its social leader the candidate with the minimal sigma distance to the particle.
Figure 7: Dynamic neighborhood PSO in Hu et al.
B. Constraint Optimization
There are some studies reported in the literature that extend PSO to constrained optimization problems. The goal of constrained optimization is to find the solution that optimizes the fitness function while satisfying a group of linear and nonlinear constraints; for such problems, the original PSO method needs to be modified to deal with the constraints.

Hu and Eberhart [5] introduced a fairly simple but effective method to solve constrained optimization problems. A feasibility-preserving strategy is employed to deal with the constraints, through two modifications to the PSO algorithm: 1. when updating the memories, particles keep only feasible solutions in their memory; 2. during initialization, all particles are started from feasible solutions. Various test cases showed that PSO is much faster and better than other evolutionary constrained optimization techniques when dealing with optimization problems with linear or nonlinear inequality constraints. The disadvantage is that the initial feasible solution set is sometimes hard to find. El-Gallad et al. [4] introduced a similar method; the only difference is that when a particle leaves the feasible space, it is reset to the last best value found. The potential problem is that this may limit the particles to the region where they were initialized.

Parsopoulos et al. [31] converted the constrained optimization problem into a non-constrained optimization problem by adopting a non-stationary multi-stage assignment penalty function and then applied PSO to the converted problem. Several benchmark problems were tested, and the authors claimed it outperformed other evolutionary algorithms such as Evolution Strategies and GAs. Ray et al. [32] proposed a swarm metaphor with a multilevel information sharing strategy to deal with constrained optimization problems. In a swarm, there are some better performers (leaders) that set the direction of search for the rest of the individuals, and an individual that is not in the better performer list (BPL) improves its performance by deriving information from its closest neighbor in the BPL. The constraints are handled by a constraint matrix, and a multilevel Pareto ranking scheme is implemented to generate the BPL based on the constraint matrix. It should be noted that the update of each particle uses a simple generational operator instead of the regular PSO formula. Test cases showed much faster convergence and a much lower number of function evaluations.
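The penalty-based route of [31] can be illustrated with a much simpler static penalty than the non-stationary multi-stage function the authors use; the weight and the example constraints below are assumptions, and the resulting penalized fitness can be handed to any of the PSO loops sketched earlier.

```python
import numpy as np

def penalized_fitness(f, constraints, weight=1e6):
    """Wrap objective f and inequality constraints g_i(x) <= 0 into a single
    unconstrained fitness by adding a penalty for constraint violation.
    Note: a simplified static penalty, not the multi-stage scheme of [31]."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + weight * violation
    return wrapped

# usage sketch: minimize x0 + x1 subject to x0^2 + x1^2 - 1 <= 0
f = lambda x: x[0] + x[1]
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0]
fit = penalized_fitness(f, g)
print(fit(np.array([2.0, 2.0])), fit(np.array([-0.7, -0.7])))
```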
C. PSO in dynamic environments

A dynamic system changes state frequently, perhaps even almost continuously. Many real-world systems involve dynamic environments. For example, most of the computational time in scheduling systems is spent on rescheduling caused by changes in customer priorities, unexpected equipment maintenance, and so on. In real-world applications, these system state changes result in a requirement for frequent re-optimization.

Initial work on tracking dynamic systems with particle swarm optimization was reported by Eberhart and Shi [33]. The follow-up paper [9] introduced an adaptive PSO which automatically tracks various changes in a dynamic system. Different environment detection and response techniques were tested on the parabolic benchmark function, and re-randomization was introduced to respond to the dynamic changes; the detection method used is to monitor the behavior of the best particle in the population. Carlisle [8] used a random point in the search space to determine whether the environment has changed. Blackwell [34] added a charged term to the particle update formula, which keeps the particles in an extended swarm shape to deal with fast-changing dynamic environments; no detection is needed in that method.
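The detection-and-response idea of [9] can be sketched as follows: re-evaluate the best particle's remembered position each iteration, and if its fitness has changed, assume the environment has moved, reset the pBest memories, and re-randomize a fraction of the swarm. The re-randomized fraction and the array layout are assumptions.

```python
import numpy as np

def detect_change(fitness, gbest_pos, gbest_val, tol=1e-12):
    """Re-evaluate the stored best location; a differing value signals a changed environment."""
    return abs(fitness(gbest_pos) - gbest_val) > tol

def respond_to_change(x, pbest, pbest_val, fitness, bounds, frac=0.5,
                      rng=np.random.default_rng()):
    """Reset memories to current positions and re-randomize a fraction of the swarm."""
    n, dim = x.shape
    lo, hi = bounds
    idx = rng.choice(n, size=int(frac * n), replace=False)
    x[idx] = rng.uniform(lo, hi, (len(idx), dim))    # scatter part of the swarm
    pbest[:] = x                                     # memory reset (pBest := current position)
    pbest_val[:] = [fitness(p) for p in x]
    return x, pbest, pbest_val
```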
V. APPLICATIONS
A. Artificial neural network training

PSO has been applied to three main attributes of neural networks: network connection weights, network architecture (network topology, transfer functions), and network learning algorithms. Most of the work involving the evolution of ANNs has focused on the network weights and the topological structure. Usually the weights and/or topological structure are encoded as a chromosome in GAs. The selection of the fitness function depends on the research goals; for a classification problem, the rate of misclassified patterns can be used as the fitness value.
Compared with the back-propagation training method, the advantage of PSO is that it can be used in cases where the transfer functions of the processing elements are non-differentiable and no gradient information is available. Several papers in the past few years report using PSO to replace the back-propagation learning algorithm for ANNs [35-40]. They showed that PSO is a promising method to train ANNs: it is faster and gets better results in most cases, and it also avoids some of the problems GAs encounter.
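A sketch of weight training with PSO: the network weights are flattened into a particle's position vector, and the fitness is the error on the training data. The tiny one-hidden-layer network, the XOR data, and the reuse of the pso() sketch from Section II are illustrative assumptions, not the setup used in the cited papers.

```python
import numpy as np

# toy data: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1 (2x4), b1 (4), W2 (4), b2 (1)

def forward(weights, x):
    """One-hidden-layer network; the particle position is the flattened weight vector."""
    w1 = weights[:8].reshape(2, N_HIDDEN)
    b1 = weights[8:12]
    w2 = weights[12:16]
    b2 = weights[16]
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))      # sigmoid output

def fitness(weights):
    """Mean squared error on the training set (to be minimized by PSO)."""
    pred = forward(weights, X)
    return np.mean((pred - y) ** 2)

print(fitness(np.zeros(N_WEIGHTS)))
# fitness() can now be passed to a PSO loop such as the pso() sketch in Section II,
# e.g. best_w, best_err = pso(fitness, dim=N_WEIGHTS, x_range=(-5.0, 5.0))
```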
Besides neural network training, PSO has been combined with various techniques to solve different problems, such as fuzzy systems [41, 42], self-organizing maps [43], support vector machines [44], and hidden Markov model training [45].
C. Parameter Optimization

As an optimization method, PSO has been applied to various parameter estimation and optimization problems. For example, Abido et al. [46, 47] applied PSO to optimize the parameter settings of power system stabilizers as well as to the optimal power flow problem [48].
D. Feature Selection
Agrafiotis [49] adapted PSO to the problem of feature selection by viewing the location vectors of the particles as probabilities and employing roulette wheel selection to construct candidate subsets. Test results showed that the method compares favorably with simulated annealing. The authors also noted that PSO does not converge as reliably to the same minimum. One possible reason is the selection of learning factors: both factors were set to 1 in the experiments, which might be too small according to the previous discussion.
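A sketch of the idea in [49] as described above: each particle's position is read as a vector of per-feature selection probabilities, and roulette-wheel sampling builds a candidate feature subset of fixed size. The subset size, the clipping of positions into [0, 1], and the toy scoring function are assumptions; in [49] the subsets were scored with a structure-activity model instead.

```python
import numpy as np

def subset_from_position(position, k, rng=np.random.default_rng()):
    """Interpret a particle position as per-feature selection probabilities and
    draw k distinct features by roulette-wheel (probability-proportional) sampling."""
    probs = np.clip(position, 1e-9, 1.0)
    probs = probs / probs.sum()
    return rng.choice(len(position), size=k, replace=False, p=probs)

def subset_fitness(features, X, y):
    """Toy evaluation of a feature subset: correlation of the selected-feature mean with y."""
    score = np.corrcoef(X[:, features].mean(axis=1), y)[0, 1]
    return -abs(score)                                # lower is better (for a minimizing PSO)

# usage sketch
rng = np.random.default_rng(0)
X = rng.random((50, 10)); y = rng.random(50)
pos = rng.random(10)                                  # a particle's position vector
feats = subset_from_position(pos, k=3, rng=rng)
print(feats, subset_fitness(feats, X, y))
```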
VI. SUMMARY
As an emerging technology, particle swarm optimization has gained a lot of attention in recent years. The first IEEE Swarm Intelligence Symposium was held in Indianapolis, USA in April 2003, where authors from various countries presented 30 papers in related areas; the response was encouraging. Nevertheless, there are still many unsolved issues in particle swarm research, including but not limited to:
- Convergence. It is not yet clear why and how PSO converges. This question is also important to the theoretical study of swarm intelligence and chaotic systems.
- Discrete/binary PSO. Most of the research projects reported in the literature deal with continuous variables, and the limited research available shows that PSO has some difficulties dealing with discrete variables.
- Combination of various PSO techniques to deal with complex problems.
- Agent-based distributed computation. PSO can be viewed as a distributed agent model, and many agent computing characteristics are still uncovered.
- Interaction with biological intelligence. Rooted in artificial life, PSO is successful even though only simple ...
BIBLIOGRAPHY
[1] Eberhart, R. C. and Kennedy, J. A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, pp. 39-43, 1995.
[2] Kennedy, J. and Eberhart, R. C. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942-1948, 1995.
[3] Shi, Y. and Eberhart, R. C. A modified particle swarm optimizer. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1998), Piscataway, NJ, pp. 69-73, 1998.
[4] El-Gallad, A. I., El-Hawary, M. E., and Sallam, A. A. Swarming of intelligent particles for solving the nonlinear constrained optimization problem. Engineering Intelligent Systems for Electrical Engineering and Communications, vol. 9, no. 3, pp. 155-163, Sept. 2001.
[5] Hu, X. and Eberhart, R. C. Solving constrained nonlinear optimization problems with particle swarm optimization. Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, FL, USA, 2002.
[6] Coello Coello, C. A. and Lechuga, M. S. MOPSO: a proposal for multiple objective particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, USA, 2002.
[7] Hu, X. and Eberhart, R. C. Multiobjective optimization using dynamic neighborhood particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, USA, pp. 1677-1681, 2002.
[8] Carlisle, A. and Dozier, G. Tracking changing extrema with adaptive particle swarm optimizer. Proceedings of the 5th Biannual World Automation Congress, Orlando, FL, USA, pp. 265-270, 2002.
[9] Hu, X. and Eberhart, R. C. Adaptive particle swarm optimization: detection and response to dynamic systems. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, USA, pp. 1666-1670, 2002.
[10] Kennedy, J. and Mendes, R. Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. Proceedings of the 2003 IEEE International Workshop on Soft Computing in Industrial Applications (SMCia 2003), pp. 45-50, 2003.
[11] Kennedy, J. and Mendes, R. Population structure and particle swarm performance. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, USA, 2002.
[12] Brits, R., Engelbrecht, A. P., and van den Bergh, F. Solving systems of unconstrained equations using particle swarm optimization. Proceedings of the IEEE Conference on Systems, Man, and Cybernetics (SMC 2002), Hammamet, Tunisia, pp. 102-107, 2002.
[13] Peram, T., Veeramachaneni, K., and Mohan, C. K. Fitness-distance-ratio based particle swarm optimization. Proceedings of the IEEE Swarm Intelligence Symposium (SIS 2003), Indianapolis, IN, USA, pp. 174-181, 2003.
[14] Eberhart, R. C. and Shi, Y. Particle swarm optimization: developments, applications and resources. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea, 2001.
[16] Kennedy, J. Minds and cultures: particle swarm implications. Socially Intelligent Agents: Papers from the 1997 AAAI Fall Symposium, Menlo Park, CA, pp. 67-72, 1997.
[17] Zheng, Y., Ma, L., Zhang, L., and Qian, J. On the convergence analysis and parameter selection in particle swarm optimization. Proceedings of the International Conference on Machine Learning and Cybernetics 2003, pp. 1802-1807, 2003.
[18] Kennedy, J. and Eberhart, R. C. A discrete binary version of the particle swarm algorithm. Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics 1997, Piscataway, NJ, pp. 4104-4109, 1997.
[19] Chang, R. F. and Lu, C. N. Feeder reconfiguration for load factor improvement. Proceedings of the IEEE Power Engineering Society Transmission and Distribution Conference, pp. 980-984, 2002.
[20] Mohan, C. K. and Al-kazemi, B. Discrete particle swarm optimization. Proceedings of the Workshop on Particle Swarm Optimization 2001, Indianapolis, IN, USA, 2001.
[21] Hu, X., Eberhart, R. C., and Shi, Y. Swarm intelligence for permutation optimization: a case study on n-queens problem. Proceedings of the IEEE Swarm Intelligence Symposium (SIS 2003), Indianapolis, IN, USA, pp. 243-246, 2003.
[22] Fieldsend, J. E. and Singh, S. A multi-objective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence. Proceedings of the 2002 U.K. Workshop on Computational Intelligence, Birmingham, UK, pp. 37-44, 2002.
[23] Hu, X., Eberhart, R. C., and Shi, Y. Particle swarm with extended memory for multiobjective optimization. Proceedings of the IEEE Swarm Intelligence Symposium (SIS 2003), Indianapolis, IN, USA, pp. 193-197, 2003.
[24] Li, X. A non-dominated sorting particle swarm optimizer for multiobjective optimization. Lecture Notes in Computer Science No. 2723: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2003), Chicago, IL, USA, pp. 37-48, 2003.
[25] Lu, H. Dynamic population strategy assisted particle swarm optimization in multiobjective evolutionary algorithm design. IEEE Neural Networks Society, IEEE NNS Student Research Grants 2002 - Final Reports, 2003.
[26] Moore, J. and Chapman, R. Application of particle swarm to multiobjective optimization. Department of Computer Science and Software Engineering, Auburn University, 1999.
[28] Parsopoulos, K. E. and Vrahatis, M. N. Particle swarm optimization method in multiobjective problems. Proceedings of the ACM Symposium on Applied Computing (SAC 2002), pp. 603-607, 2002.
[29] Ray, T. and Liew, K. M. A swarm metaphor for multiobjective design optimization. Engineering Optimization, vol. 34, no. 2, pp. 141-153, 2002.
[30] Coello Coello, C. A., Toscano Pulido, G., and Salazar Lechuga, M. An extension of particle swarm optimization that can handle multiple objectives. Workshop on Multiple Objective Metaheuristics, Paris, France, 2002.
[31] Parsopoulos, K. E. and Vrahatis, M. N. Particle swarm optimization method for constrained optimization problems. Proceedings of the Euro-International Symposium on Computational Intelligence 2002, 2002.
[32] Ray, T. and Liew, K. M. A swarm with an effective information sharing mechanism for unconstrained and constrained single objective optimization problems. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea, pp. 75-80, 2001.
[33] Eberhart, R. C. and Shi, Y. Tracking and optimizing dynamic systems with particle swarms. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea, pp. 94-97, 2001.
[34] Blackwell, T. M. Swarms in dynamic environments. Lecture Notes in Computer Science No. 2723: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2003), Chicago, IL, USA, pp. 1-12, 2003.
[35] Eberhart, R. C. and Shi, Y. Evolving artificial neural networks. Proceedings of the International Conference on Neural Networks and Brain, Beijing, P. R. China, pp. PL5-PL13, 1998.
[36] Engelbrecht, A. P. and Ismail, A. Training product unit neural networks. Stability and Control: Theory and Applications, vol. 2, no. 1-2, pp. 59-74, 1999.
[37] van den Bergh, F. and Engelbrecht, A. P. Cooperative learning in neural networks using particle swarm optimizers. South African Computer Journal, vol. 26, pp. 84-90, 2000.
[38] Zhang, C., Shao, H., and Li, Y. Particle swarm optimisation for evolving artificial neural network. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics 2000, pp. 2487-2490, 2000.
[39] Lu, W. Z., Fan, H.-Y., and Lo, S. M. Application of evolutionary neural network method in predicting pollutant levels in downtown area of Hong Kong. Neurocomputing, vol. 51, pp. 387-400, 2003.
[40] Settles, M., Rodebaugh, B., and Soule, T. Comparison of genetic algorithm and particle swarm optimizer when evolving a recurrent neural network. Lecture Notes in Computer Science No. 2723: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2003), Chicago, IL, USA, pp. 151-152, 2003.
[41] He, Z., Wei, C., Yang, L., Gao, X., Yao, S., Eberhart, R. C., and Shi, Y. Extracting rules from fuzzy neural network by particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1998), Anchorage, AK, USA, 1998.
[42] Shi, Y. and Eberhart, R. C. Particle swarm optimization with ...
[43] Xiao, X., Dow, E. R., Eberhart, R. C., Ben Miled, Z., and Oppelt, R. J. Gene clustering using self-organizing maps and particle swarm optimization. Proceedings of the Second IEEE International Workshop on High Performance Computational Biology, Nice, France, 2003.
[44] Paquet, U. and Engelbrecht, A. P. Training support vector machines with particle swarms. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2003), 2003.
[45] Rasmussen, T. K. and Krink, T. Improved Hidden Markov Model training for multiple sequence alignment by a particle swarm optimization-evolutionary algorithm hybrid. Biosystems, vol. 72, no. 1-2, pp. 5-17, Nov. 2003.
[46] Abido, M. A. Particle swarm optimization for multimachine power system stabilizer design. Proceedings of the IEEE Power Engineering Society Summer Meeting, 2001.
[47] Abido, M. A. Optimal design of power system stabilizers using particle swarm optimization. IEEE Transactions on Energy Conversion, vol. 17, no. 3, pp. 406-413, Sept. 2002.
[48] Abido, M. A. Optimal power flow using particle swarm optimization. International Journal of Electrical Power and Energy Systems, vol. 24, no. 7, pp. 563-571, 2002.
[49] Agrafiotis, D. K. and Cedeno, W. Feature selection for structure-activity correlation using binary particle swarms. Journal of Medicinal Chemistry, vol. 45, no. 5, pp. 1098-1107, Feb. 2002.