Location via proxy:   [ UP ]  
[Report a bug]   [Manage cookies]                
This page intentionally left blank Swarm Intelligence and Bio-Inspired Computation Theory and Applications Edited by Xin-She Yang Department of Design Engineering and Mathematics, Middlesex University, UK Zhihua Cui Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and Technology, China Renbin Xiao Institute of Systems Engineering, Huazhong University of Science and Technology, China Amir Hossein Gandomi Department of Civil Engineering, University of Akron, OH, USA Mehmet Karamanoglu Department of Design Engineering and Mathematics, Middlesex University, UK AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO ● ● ● ● ● ● ● ● ● ● Elsevier 32 Jamestown Road, London NW1 7BY 225 Wyman Street, Waltham, MA 02451, USA First edition 2013 Copyright © 2013 Elsevier Inc. All rights reserved No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangement with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. 
In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors,or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress ISBN: 978-0-12-405163-8 For information on all Elsevier publications visit our website at store.elsevier.com This book has been manufactured using Print On Demand technology. Each copy is produced to order and is limited to black ink. The online version of this book will show color figures where appropriate. Contents List of Contributors Preface xv xix Part One Theoretical Aspects of Swarm Intelligence and Bio-Inspired Computing 1 1 3 Swarm Intelligence and Bio-Inspired Computation: An Overview Xin-She Yang and Mehmet Karamanoglu 1.1 Introduction 1.2 Current Issues in Bio-Inspired Computing 1.2.1 Gaps Between Theory and Practice 1.2.2 Classifications and Terminology 1.2.3 Tuning of Algorithm-Dependent Parameters 1.2.4 Necessity for Large-Scale and Real-World Applications 1.2.5 Choice of Algorithms 1.3 Search for the Magic Formulas for Optimization 1.3.1 Essence of an Algorithm 1.3.2 What Is an Ideal Algorithm? 
1.3.3 Algorithms and Self-Organization 1.3.4 Links Between Algorithms and Self-Organization 1.3.5 The Magic Formulas 1.4 Characteristics of Metaheuristics 1.4.1 Intensification and Diversification 1.4.2 Randomization Techniques 1.5 Swarm-Intelligence-Based Algorithms 1.5.1 Ant Algorithms 1.5.2 Bee Algorithms 1.5.3 Bat Algorithm 1.5.4 Particle Swarm Optimization 1.5.5 Firefly Algorithm 1.5.6 Cuckoo Search 1.5.7 Flower Pollination Algorithm 1.5.8 Other Algorithms 1.6 Open Problems and Further Research Topics References 3 5 5 6 7 7 8 8 8 8 9 10 11 12 12 12 13 13 14 14 15 16 17 19 20 20 21 vi 2 3 4 Contents Analysis of Swarm Intelligence Based Algorithms for Constrained Optimization M.P. Saka, E. Doğan and Ibrahim Aydogdu 2.1 Introduction 2.2 Optimization Problems 2.3 Swarm Intelligence Based Optimization Algorithms 2.3.1 Ant Colony Optimization 2.3.2 Particle Swarm Optimizer 2.3.3 ABC Algorithm 2.3.4 Glowworm Swarm Algorithm 2.3.5 Firefly Algorithm 2.3.6 Cuckoo Search Algorithm 2.3.7 Bat Algorithm 2.3.8 Hunting Search Algorithm 2.4 Numerical Examples 2.4.1 Example 1 2.4.2 Example 2 2.5 Summary and Conclusions References Lévy Flights and Global Optimization Momin Jamil and Hans-Jürgen Zepernick 3.1 Introduction 3.2 Metaheuristic Algorithms 3.3 Lévy Flights in Global Optimization 3.3.1 The Lévy Probability Distribution 3.3.2 Simulation of Lévy Random Numbers 3.3.3 Diversification and Intensification 3.4 Metaheuristic Algorithms Based on Lévy Probability Distribution: Is It a Good Idea? 3.4.1 Evolutionary Programming Using Mutations Based on the Lévy Probability Distribution 3.4.2 Lévy Particle Swarm 3.4.3 Cuckoo Search 3.4.4 Modified Cuckoo Search 3.4.5 Firefly Algorithm 3.4.6 Eagle Strategy 3.5 Discussion 3.6 Conclusions References Memetic Self-Adaptive Firefly Algorithm Iztok Fister, Xin-She Yang, Janez Brest and Iztok Jr. 
Fister 4.1 Introduction 4.2 Optimization Problems and Their Complexity 25 25 27 28 28 30 32 33 35 36 38 39 41 41 42 44 47 49 49 50 52 53 54 55 59 59 60 61 63 63 66 67 68 69 73 73 76 Contents 4.3 Memetic Self-Adaptive Firefly Algorithm 4.3.1 Self-Adaptation of Control Parameters 4.3.2 Population Model 4.3.3 Balancing Between Exploration and Exploitation 4.3.4 The Local Search 4.3.5 Scheme of the MSA-FFA 4.4 Case Study: Graph 3-Coloring 4.4.1 Graph 3-Coloring 4.4.2 MSA-FFA for Graph 3-Coloring 4.4.3 Experiments and Results 4.5 Conclusions References 5 Modeling and Simulation of Ant Colony’s Labor Division: A Problem-Oriented Approach Renbin Xiao 5.1 Introduction 5.2 Ant Colony’s Labor Division Behavior and its Modeling Description 5.2.1 Ant Colony’s Labor Division 5.2.2 Ant Colony’s Labor Division Model 5.2.3 Some Analysis 5.3 Modeling and Simulation of Ant Colony’s Labor Division with Multitask 5.3.1 Background Analysis 5.3.2 Design and Implementation of Ant Colony’s Labor Division Model with Multitask 5.3.3 Supply Chain Virtual Enterprise Simulation 5.3.4 Virtual Organization Enterprise Simulation 5.3.5 Discussion 5.4 Modeling and Simulation of Ant Colony’s Labor Division with Multistate 5.4.1 Background Analysis 5.4.2 Design and Implementation of Ant Colony’s Labor Division Model with Multistate 5.4.3 Simulation Example of Ant Colony’s Labor Division Model with Multistate 5.5 Modeling and Simulation of Ant Colony’s Labor Division with Multiconstraint 5.5.1 Background Analysis 5.5.2 Design and Implementation of Ant Colony’s Labor Division Model with Multiconstraint 5.5.3 Simulation Results and Analysis 5.6 Concluding Remarks Acknowledgment References vii 79 81 82 83 86 87 87 89 90 92 98 99 103 103 105 105 105 108 109 109 110 113 115 119 119 119 121 123 127 127 128 132 132 134 134 viii 6 7 Contents Particle Swarm Algorithm: Convergence and Applications Shichang Sun and Hongbo Liu 6.1 Introduction 6.2 Convergence Analysis 6.2.1 Individual Trajectory 6.2.2 
Probabilistic Analysis 6.3 Performance Illustration 6.3.1 Dataflow Application 6.4 Application in Hidden Markov Models 6.4.1 Parameters Weighted HMM 6.4.2 PSO Viterbi for Parameters Weighted HMMs 6.4.3 POS Tagging Problem and Solution 6.4.4 Experiment 6.5 Conclusions References A Survey of Swarm Algorithms Applied to Discrete Optimization Problems Jonas Krause, Jelson Cordeiro, Rafael Stubs Parpinelli and Heitor Silve´rio Lopes 7.1 Introduction 7.2 Swarm Algorithms 7.2.1 Particle Swarm Optimization 7.2.2 Roach Infestation Optimization 7.2.3 Cuckoo Search Algorithm 7.2.4 Firefly Algorithm 7.2.5 Gravitational Search Algorithm 7.2.6 Bat Algorithm 7.2.7 Glowworm Swarm Optimization Algorithm 7.2.8 Artificial Fish School Algorithm 7.2.9 Bacterial Evolutionary Algorithm 7.2.10 Bee Algorithm 7.2.11 Artificial Bee Colony Algorithm 7.2.12 Bee Colony Optimization 7.2.13 Marriage in Honey-Bees Optimization Algorithm 7.3 Main Concerns to Handle Discrete Problems 7.3.1 Discretization Methods 7.4 Applications to Discrete Problems 7.4.1 Particle Swarm Optimization 7.4.2 Roach Infestation Optimization 7.4.3 Cuckoo Search Algorithm 7.4.4 Firefly Algorithm 7.4.5 Bee Algorithm 7.4.6 Artificial Bee Colony 7.4.7 Bee Colony Optimization 137 137 139 139 142 146 146 157 159 160 160 162 163 165 169 169 170 170 171 171 171 171 172 172 172 173 173 173 174 174 174 175 177 177 178 178 178 179 179 180 Contents 8 7.4.8 Marriage in Honey-Bees Optimization Algorithm 7.4.9 Other Swarm Intelligence Algorithms 7.5 Discussion 7.6 Concluding Remarks and Future Research References 181 181 182 184 186 Test Functions for Global Optimization: A Comprehensive Survey Momin Jamil, Xin-She Yang and Hans-Jürgen Zepernick 8.1 Introduction 8.2 A Collection of Test Functions for GO 8.2.1 Unimodal Test Functions 8.2.2 Multimodal Function 8.3 Conclusions References 193 Part Two Applications and Case Studies 9 10 ix Binary Bat Algorithm for Feature Selection Rodrigo Yuji Mizobe Nakamura, Luı´s Augusto Martins Pereira, 
Douglas Rodrigues, Kelton Augusto Pontara Costa, João Paulo Papa and Xin-She Yang 9.1 Introduction 9.2 Bat Algorithm 9.3 Binary Bat Algorithm 9.4 Optimum-Path Forest Classifier 9.4.1 Background Theory 9.5 Binary Bat Algorithm 9.6 Experimental Results 9.7 Conclusions References Intelligent Music Composition Maximos A. Kaliakatsos-Papakostas, Andreas Floros and Michael N. Vrahatis 10.1 Introduction 10.2 Unsupervised Intelligent Composition 10.2.1 Unsupervised Composition with Cellular Automata 10.2.2 Unsupervised Composition with L-Systems 10.3 Supervised Intelligent Composition 10.3.1 Supervised Composition with Genetic Algorithms 10.3.2 Supervised Composition Genetic Programming 10.4 Interactive Intelligent Composition 10.4.1 Composing with Swarms 10.4.2 Interactive Composition with GA and GP 10.5 Conclusions References 193 194 196 199 221 221 223 225 225 226 228 228 229 231 233 236 237 239 239 241 241 243 245 246 247 248 250 251 253 254 x 11 12 13 Contents A Review of the Development and Applications of the Cuckoo Search Algorithm Sean Walton, Oubay Hassan, Kenneth Morgan and M. 
Rowan Brown 11.1 Introduction 11.2 Cuckoo Search Algorithm 11.2.1 The Analogy 11.2.2 Cuckoo Breeding Behavior 11.2.3 Lévy Flights 11.2.4 The Algorithm 11.2.5 Validation 11.3 Modifications and Developments 11.3.1 Algorithmic Modifications 11.3.2 Hybridization 11.4 Applications 11.4.1 Applications in Machine Learning 11.4.2 Applications in Design 11.5 Conclusion References Bio-Inspired Models for Semantic Web Priti Srinivas Sajja and Rajendra Akerkar 12.1 Introduction 12.2 Semantic Web 12.3 Constituent Models 12.3.1 Artificial Neural Network 12.3.2 Genetic Algorithms 12.3.3 Swarm Intelligence 12.3.4 Application in Different Aspects of Semantic Web 12.4 Neuro-Fuzzy System for the Web Content Filtering: Application 12.5 Conclusions References Discrete Firefly Algorithm for Traveling Salesman Problem: A New Movement Scheme Gilang Kusuma Jati, Ruli Manurung and Suyanto 13.1 Introduction 13.2 Evolutionary Discrete Firefly Algorithm 13.2.1 The Representation of the Firefly 13.2.2 Light Intensity 13.2.3 Distance 13.2.4 Attractiveness 13.2.5 Light Absorption 257 257 258 258 258 259 259 261 261 262 264 265 265 266 269 270 273 273 274 276 276 281 284 286 287 291 291 295 295 297 297 298 298 299 299 Contents 13.2.6 Movement 13.2.7 Inversion Mutation 13.2.8 EDFA Scheme 13.3 A New DFA for the TSP 13.3.1 Edge-Based Movement 13.3.2 New DFA Scheme 13.4 Result and Discussion 13.4.1 Firefly Population 13.4.2 Effect of Light Absorption 13.4.3 Number of Updating Index 13.4.4 Performance of New DFA 13.5 Conclusion Acknowledgment References 14 15 Modeling to Generate Alternatives Using Biologically Inspired Algorithms Raha Imanirad and Julian Scott Yeomans 14.1 Introduction 14.2 Modeling to Generate Alternatives 14.3 FA for Function Optimization 14.4 FA-Based Concurrent Coevolutionary Computational Algorithm for MGA 14.5 Computational Testing of the FA Used for MGA 14.6 An SO Approach for Stochastic MGA 14.7 Case Study of Stochastic MGA for the Expansion of Waste Management Facilities 14.8 
Conclusions References Structural Optimization Using Krill Herd Algorithm Amir Hossein Gandomi, Amir Hossein Alavi and Siamak Talatahari 15.1 Introduction 15.2 Krill Herd Algorithm 15.2.1 Lagrangian Model of Krill Herding 15.3 Implementation and Numerical Experiments 15.3.1 Case I: Structural Design of a Pin-Jointed Plane Frame 15.3.2 Case II: A Reinforced Concrete Beam Design 15.3.3 Case III: 25-Bar Space Truss Design 15.4 Conclusions and Future Research References xi 300 301 301 302 302 304 305 306 306 307 309 311 311 311 313 313 314 316 318 321 324 326 331 332 335 335 336 336 339 340 342 344 346 348 xii 16 17 18 Contents Artificial Plant Optimization Algorithm Zhihua Cui and Xingjuan Cai 16.1 Introduction 16.2 Primary APOA 16.2.1 Main Method 16.2.2 Photosynthesis Operator 16.2.3 Phototropism Operator 16.2.4 Applications to Artificial Neural Network Training 16.3 Standard APOA 16.3.1 Drawbacks of PAPOA 16.3.2 Phototropism Operator 16.3.3 Apical Dominance Operator 16.3.4 Application to Toy Model of Protein Folding 16.4 Conclusion Acknowledgment References Genetic Algorithm for the Dynamic Berth Allocation Problem in Real Time Carlos Arango, Pablo Corte´s, Alejandro Escudero and Luis Onieva 17.1 Introduction 17.2 Literature Review 17.3 Optimization Model 17.3.1 Sets 17.3.2 Parameters 17.3.3 Decision Variables 17.4 Solution Procedure by Genetic Algorithm 17.4.1 Representation 17.4.2 Fitness 17.4.3 Selection of Parents and Genetic Operators 17.4.4 Mutation 17.4.5 Crossover 17.5 Results and Analysis 17.6 Conclusion References Opportunities and Challenges of Integrating Bio-Inspired Optimization and Data Mining Algorithms Simon Fong 18.1 Introduction 18.2 Challenges in Data Mining 18.2.1 Curse of Dimensionality 18.2.2 Data Streaming 351 351 352 352 352 353 354 357 357 358 359 360 363 364 364 367 367 369 370 372 372 372 375 375 375 376 376 377 377 382 382 385 385 387 387 389 Contents 18.3 Bio-Inspired Optimization Metaheuristics 18.4 The Convergence 18.4.1 Integrating 
BiCam Algorithms into Clustering 18.4.2 Integrating BiCam Algorithms into Feature Selection 18.5 Conclusion References 19 Improvement of PSO Algorithm by Memory-Based Gradient Search—Application in Inventory Management Tamás Varga, András Király and János Abonyi 19.1 Introduction 19.2 The Improved PSO Algorithm 19.2.1 Classical PSO Algorithm 19.2.2 Improved PSO Algorithm 19.2.3 Results 19.3 Stochastic Optimization of Multiechelon Supply Chain Model 19.3.1 Inventory Model of a Single Warehouse 19.3.2 Inventory Model of a Supply Chain 19.3.3 Optimization Results 19.4 Conclusion Acknowledgment References xiii 390 391 392 394 400 401 403 403 405 405 407 410 414 415 417 419 419 420 420 Preface Swarm intelligence and bio-inspired computation have become increasingly popular in the last two decades. Bio-inspired algorithms such as ant colony algorithm, bat algorithm (BA), cuckoo search (CS), firefly algorithm (FA), and particle swarm optimization have been applied in almost every area of science and engineering with a dramatic increase in the number of relevant publications. Metaheuristic algorithms form an important part of contemporary global optimization algorithms, computational intelligence, and soft computing. New researchers often ask “why metaheuristics?”, and this indeed is a profound question, which can be linked to many aspects of algorithms and optimization, including what algorithms to choose and why certain algorithms perform better than others for a given problem. It was believed that the word “metaheuristic” was coined by Fred Glover in 1986. Generally, “heuristic” means “to find or to discover by trial and error.” Here, “meta-” means “beyond or higher level.” Therefore, metaheuristic can be considered as a higher-level strategy that guides and modifies other heuristic procedures to produce solutions or innovations beyond those that are normally achievable in a quest for local optimality. 
In reality, we are often puzzled and may be even surprised by the excellent efficiency of bio-inspired metehauristic algorithms because these seemingly simple algorithms can sometime work like a “magic,” even for highly nonlinear, challenging problems. For example, for multimodal optimization problems, many traditional algorithms usually do not work well, while new algorithms such as differential evolution (DE) and FA can work extremely well in practice, even though we may not fully understand the underlying mechanisms of these algorithms. The increasing popularity of bio-inspired metaheuristics and swarm intelligence (SI) has attracted a great deal of attention in engineering and industry. There are many reasons for such popularity, and here we discuss three factors: simplicity, flexibility, and ergodicity. Firstly, most bio-inspired algorithms are simple in the sense that they are easy to implement and their algorithm complexity is relatively low. In most programming languages, the core algorithm can be coded within a hundred lines. Second, these algorithms, though simple, are flexible enough to deal with a wide range of optimization problems, including those that are not solvable by conventional algorithms. Third, bio-inspired algorithms such as FA and CS can often have high degrees of ergodicity in the sense that they can search multimodal landscape with sufficient diversity and ability to escape any local optimum. The ergodicity is often due to some exotic randomization techniques, derived from natural systems in terms of crossover and mutation, or based on statistical models such as random walks and Lévy flights. xx Preface As most real-world problems are nonlinear and multimodal with uncertainty, such complexity and multimodality may imply that it may not be possible to find the true global optimality with a 100% certainty for a given problem. 
We often have to balance the solution accuracy and computational cost, leading to a (possibly aggressive) local search method. Consequently, we may have to sacrifice the possibility of finding the true global optimality in exchange of some suboptimal, robust solutions. However, in practice, for the vast majority of cases, many bioinspired algorithms can achieve the true global optimality in a practically acceptable fixed number of iterations, though there is no guarantee for this to be the case all the time. The history of bio-inspired computation and SI has spanned over half a century, though the developments have been sped up in the last 20 years. Since the emergence of evolutionary strategies in the 1960s and the development of genetic algorithms (GA) in the 1970s, a golden age with major progress in modern bio-inspired computing is the 1990s. First, in 1992, Marco Dorigo described his innovative work on ant colony optimization (ACO) in his PhD thesis, and in the same year, J.R. Koza published a treatise on genetic programming. Then, in 1995, J. Kennedy and R. Eberhart developed particle swarm optimization (PSO), which essentially opened up a new field, now loosely named as SI. Following this in 1996 and 1997, R. Storn and K. Price published their DE. At the turn of the twenty-first century, Zong Woo Geem et al. developed the harmony search in 2001. Around 2004 to 2005, bee algorithms emerged. S. Nakrani and C. Tovey proposed the honey bee algorithm in 2004, and Xin-She Yang proposed the virtual bee algorithm in 2005. D.T. Pham et al. developed their bee algorithms and D. Karaboga formulated the artificial bee colony all in 2005. In 2008, Xin-She Yang developed the FA for multimodal optimization, and in 2009, Xin-She Yang and Suash Deb developed CS. In 2010, Xin-She Yang first developed the BA, and then Xin-She Yang and S. Deb developed the eagle strategy. More bio-inspired algorithms started to appear in 2012, including krill herd algorithm (KHA) by A.H. 
Gandomi and A.H. Alavi, flower pollination algorithm by Xin-She Yang, and wolf search algorithm by Rui et al. As we can see, the literature has expanded dramatically in the last decade. Accompanying the rapid developments in bio-inspired computing, another important question comes naturally: Can an algorithm be intelligent? The answers may depend on the definition of “intelligence” itself, and this is also a debating issue. Unless a true Turing test can be passed without any doubt, truly intelligent algorithms may be still a long way to go. However, if we lower our expectation to define the intelligence as “the ability to mimic some aspects of human intelligence” such as memory, automation, and sharing information, then many algorithms can have low-level intelligence to a certain degree. First, many bio-inspired algorithms use elitism and memory to select the best solution or “survival of the fittest,” and then share this information with other agents in a multiple agent system. Algorithms such as artificial neural networks use connectionism, interactions, memory, and learning. Most SI-based algorithms use rule-based updates, and they can adjust their behavior according to the landscape (such as the best values, gradients) in the search space during iterations. To some extent, they can be called Preface xxi “smart” algorithms. Obviously, truly intelligent algorithms are yet to appear in the future. Whatever the forms such intelligent algorithms may take, it would be the holy grail of artificial intelligence and bio-inspired computation. Despite the above recent advances, there are many challenging issues that remain unresolved. First, there are some significant gaps between theory and practice, concerning bio-inspired computing and optimization. From numerical experiments and applications, we know bio-inspired algorithms often work surprisingly well; however, we do not quite understand why they are so efficient. 
In fact, it lacks solid theoretical proof of convergence for many bio-inspired algorithms, though the good news is that limited results do start to appear in the literature. In addition, for most algorithms, we do not know how parameters can exactly control or influence the performance of an algorithm. Consequently, a major challenge is the tuning of algorithm-dependent parameters so as to produce the optimal performance of an algorithm. In essence, parameter tuning itself is an optimization problem. At present, this is mainly carried out by trial and error, and thus very time consuming. In fact, parameter tuning is a very active research area which requires more research emphasis on both theory and extensive simulations. On the other hand, even though we have seen a vast range of successful applications, however, in most applications, these are still limited to small-scale problems with the number of design variables less than a few dozens or a few hundreds. It is very rare to see larger-scale applications. In reality, many optimization problems may be very large scale, but we are not sure how bio-inspired algorithms can deal with such large-scale problems. As most problems are often nonlinear, scalability may also be a problem, and computational time can be a huge barrier for large-scale problems. Obviously, there are other challenging issues such as performance measures, uncertainty, and comparison statistics. These challenges also provide golden opportunities for researchers to pursue further research in these exciting areas in the years to come. This book strives to provide a timely snapshot of the state-of-the-art developments in bio-inspired computation and SI, capturing the fundamentals and applications of algorithms based on SI and other biological systems. 
In addition to review and document the recent advances, this book analyze and discuss the latest and future trends in research directions so that it can help new researchers to carry out timely research and inspire readers to develop new algorithms. As the literature is vast and the research area is very broad, it is not possible to include even a good fraction of the current research. However, the contributions by leading experts still contain latest developments in many active areas and applications. Topics include overview and analysis of SI and bio-inspired algorithms, PSO, FA, memetic FA, discrete FA, BA, binary BA, GA, CS and modified CS, KHA, artificial plant optimization, review of commonly used test functions and labor division in ACO. Application topics include traveling salesman problems, feature selection, graph coloring, combinatorial optimization, music composition, mesh generation, semantic web services, optimization alternatives generation, protein folding, berth allocation, data mining, structural optimization, inventory management, and others. xxii Preface It can be expected that this edited book can serve as a source of inspiration for novel research and new applications. Maybe, in the not very far future, some truly, intelligent, self-evolving algorithm may appear to solve a wide range of tough optimization more efficiently and more accurately. Last but not the least, we would like to thank our Editors, Dr Erin Hill-Parks, Sarah E. Lay, and Tracey Miller, and the staff at Elsevier for their help and professionalism. 
Xin-She Yang, Zhihua Cui, Renbin Xiao, Amir Hossein Gandomi and Mehmet Karamanoglu February 2013 Part One Theoretical Aspects of Swarm Intelligence and Bio-Inspired Computing This page intentionally left blank 1 Swarm Intelligence and Bio-Inspired Computation: An Overview Xin-She Yang and Mehmet Karamanoglu Department of Design Engineering and Mathematics, School of Science and Technology, Middlesex University, The Burroughs, London, UK 1.1 Introduction Swarm intelligence (SI), bio-inspired computation in general, has attracted great interest in the last two decades, and many SI-based optimization algorithms have gained huge popularity. There are many reasons for such popularity and attention, and two main reasons are probably that these SI-based algorithms are flexible and versatile, and that they are very efficient in solving nonlinear design problems with real-world applications. Bio-inspired computation has permeated into almost all areas of sciences, engineering, and industries, from data mining to optimization, from computational intelligence to business planning, and from bioinformatics to industrial applications. In fact, it is perhaps one of the most active and popular research subjects with wide multidisciplinary connections. Even when considered from a narrow point of view of optimization, this is still a very broad area of research. Optimization is everywhere and is thus an important paradigm by itself with a wide range of applications. In almost all applications in engineering and industry, we are always trying to optimize something—whether to minimize cost and energy consumption, or to maximize profit, output, performance, and efficiency. In reality, resources, time, and money are always limited; consequently, optimization is far more important in practice (Yang, 2010b; Yang and Koziel, 2011). 
The optimal use of available resources of any sort requires a paradigm shift in scientific thinking; this is because most real-world applications have far more complicated factors and parameters that affect how the system behaves and thus how the optimal design needs are met. A proper formulation of optimization is an art, which requires detailed knowledge of the problem and extensive experience. Optimization problems can be formulated in many ways. For example, the commonly used method of least squares Swarm Intelligence and Bio-Inspired Computation. DOI: http://dx.doi.org/10.1016/B978-0-12-405163-8.00001-6 © 2013 Elsevier Inc. All rights reserved. 4 Swarm Intelligence and Bio-Inspired Computation is a special case of maximum-likelihood formulations. By far, the most widely used formulations is to write a nonlinear optimization problem as minimize fi ðxÞ; ði 5 1; 2; . . . ; MÞ ð1:1Þ subject to the following nonlinear constraints: hj ðxÞ 5 0; ð j 5 1; 2; . . . ; JÞ ð1:2Þ gk ðxÞ # 0; ðk 5 1; 2; . . . ; KÞ ð1:3Þ where fi ; hj ; and gk are in general nonlinear functions, or even integrals and/or differential equations. Here, the design vector x 5 ðx1 ; x2 ; . . .; xn Þ can be continuous, discrete, or mixed in d-dimensional space. The functions fi are called objectives, or cost functions, and when M . 1, the optimization is multiobjective or multicriteria (Yang, 2008, 2010b). It is worth pointing out that here we write the problem as a minimization problem; it can also be written as a maximization problem by simply replacing fi by 2fi . When all functions are nonlinear, we are dealing with nonlinear constrained problems. In some special cases when all functions are linear, the problem becomes linear, and we can use the widely used linear programming techniques such as the simplex method. 
When some design variables can only take discrete values (often integers), while other variables are real continuous, the problem is of mixed type, which is often difficult to solve, especially for large-scale optimization problems. A very special class of optimization is the convex optimization, which has guaranteed global optimality. Any optimal solution to a convex problem is also its global optimum, and most importantly, there are efficient algorithms of polynomial time to solve. These efficient algorithms such as the interiorpoint methods are widely used and have been implemented in many software packages. Despite the fact that the above optimization problem looks seemingly simple, it is usually very challenging to solve. There are many challenging issues; two major challenges are the nonlinearity and complex constraints. Nonlinearity in objective functions makes the cost landscape highly multimodal, and potentially nonsmooth, and nonlinearity in constraints complicates the search boundaries and search domains which may be disconnected. Therefore, the evaluations of objectives and the handling of constraints can be time consuming. In addition, not all problems can be written in terms of explicit objective functions. Sometimes, the objectives such as energy efficiency can have a complex, implicit dependence on the design variables. In this case, we may have to deal with optimization problems with black-box type objectives whose evaluations can be by some external, finite-element simulators. Simulations are often the most time-consuming part. In many applications, an optimization process often involves the evaluation of objective functions many times, often thousands and even millions of Swarm Intelligence and Bio-Inspired Computation: An Overview 5 configurations. Such evaluations often involve the use of extensive computational tools such as a computational fluid dynamics simulator or a finite element solver. 
Therefore, an efficient optimization algorithm in combination with an efficient solver is extremely important. Furthermore, even when variables take only integer values such as 0 and 1, such combinatorial optimization problem can still be nondeterministic polynomial-time (NP) hard. Therefore, no efficient algorithm exists for such problems. Specific knowledge about the problem of interest can gain good insights, but in many cases, heuristic approaches have to be used by trial and error. That is probably another reason for the popularity of heuristic and metaheuristic algorithms. 1.2 Current Issues in Bio-Inspired Computing Despite the popularity and success of SI and bio-inspired computing, there remain many challenging issues. Here we highlight five issues: gaps between theory and practice, classifications, parameter tuning, lack of truly large-scale real-world applications, and choices of algorithms. 1.2.1 Gaps Between Theory and Practice There is a significant gap between theory and practice in bio-inspired computing. Nature-inspired metaheuristic algorithms work almost magically in practice, but it is not well understood why these algorithms work. For example, except for a few cases such as genetic algorithms, simulated annealing, and particle swarm optimization, there are not many good results concerning the convergence analysis and stability of metaheuristic algorithms. The lack of theoretical understanding may lead to slow progress or even resistance to the wider applications of metaheuristics. There are three main methods for theoretical analysis of algorithms: complexity theory, dynamical systems, and Markov chains. On the one hand, metaheuristic algorithms tend to have low algorithm complexity, but they can solve highly complex problems. Complexity analysis is an active research area and requires more in-depth analysis (Hopper and Turton, 2000; Yang, 2011c). 
On the other hand, convergence analysis typically uses dynamical systems and statistical methods based on Markov chains. For example, particle swarm optimization was analyzed by Clerc and Kennedy (2002) using simple dynamical systems, whereas genetic algorithms were analyzed intensively in a few theoretical studies (Aytug et al., 1996; Greenhalgh and Marshall, 2000; Gutjahr, 2010; Villalobos-Arias et al., 2005). For example, for a given mutation rate (µ), string length (L), and population size (n), the number of iterations in genetic algorithms can be estimated by

t ≥ ⌈ ln(1 − p) / ln{1 − min[(1 − µ)^{Ln}, µ^{Ln}]} ⌉   (1.4)

where ⌈u⌉ means rounding u up to the nearest integer, and p is a function of µ, L, and n (Gutjahr, 2010; Villalobos-Arias et al., 2005). Theoretical understanding lags behind, and thus there is a strong need for further studies in this area. There is no doubt that any new understanding will provide greater insight into the working mechanisms of metaheuristic algorithms.

1.2.2 Classifications and Terminology

There are many ways to classify optimization algorithms; one of the most widely used is based on the number of agents, and another is based on the iteration procedure. The former leads to two categories: single-agent and multiple-agent algorithms. Simulated annealing is a single-agent algorithm with a zigzag, piecewise trajectory, whereas genetic algorithms, particle swarm optimization, and the firefly algorithm are population-based algorithms. These algorithms often have multiple agents interacting in a nonlinear manner, and a subset of them is called SI-based algorithms. For example, particle swarm optimization and the firefly algorithm are swarm based, inspired by the swarming behavior of birds, fish, and fireflies, or by SI in general. Another way of classifying algorithms is based on their core procedure.
If the procedure is fixed without any randomness, an algorithm that starts from a given initial value will always reach the same final value, no matter when you run it. We call such an algorithm deterministic. For example, the classic Newton–Raphson method is a deterministic algorithm, and so is the hill-climbing method. On the other hand, if an algorithm contains some randomness in its procedure, it is called stochastic, evolutionary, heuristic, or even metaheuristic. For example, genetic algorithms with mutation and crossover components can be called evolutionary algorithms, stochastic algorithms, or metaheuristic algorithms. These different names for algorithms with stochastic components reflect the fact that there is still some confusion in the terminology used in the current literature. Algorithms such as genetic algorithms, developed before the 1980s, were called evolutionary algorithms; now they can be called both evolutionary-based and metaheuristic. Depending on the sources of inspiration, there are bio-inspired algorithms, nature-inspired algorithms, and metaheuristics in general, though the recent trend is to call such algorithms "metaheuristic." Briefly speaking, heuristic means "by trial and error," and a metaheuristic can be considered a higher-level method that uses certain selection mechanisms and information sharing. It is believed that the word "metaheuristic" was coined by Glover (1986). The multiple names and inconsistent terminology in the literature require efforts from the research communities to agree on common terminology and to systematically classify and analyze algorithms. The current dramatic expansion of the literature makes this an even more challenging task. In this book, we have tried to use metaheuristic, SI, and bio-inspired computation in the right context, with a focus on metaheuristics.

It is worth pointing out that, from the mobility point of view, algorithms can be classified as local or global.
Local search algorithms typically converge toward a local optimum, not necessarily (and often not) the global optimum, and such algorithms are often deterministic, with no ability to escape local optima. The simple hill-climbing algorithm is an example. On the other hand, we always try to find the global optimum for a given problem, and a robust global optimum is often the most desirable solution, though it is not always possible to find it. For global optimization, local search algorithms are not suitable, and we have to use a global search algorithm. In most cases, modern metaheuristic algorithms are intended for global optimization, though they are not always successful or efficient.

1.2.3 Tuning of Algorithm-Dependent Parameters

All metaheuristic algorithms have algorithm-dependent parameters, and the appropriate setting of these parameter values will largely affect the performance of an algorithm. One of the most challenging issues is deciding what parameter values to use in an algorithm. How can these parameters be tuned so that they maximize the performance of the algorithm of interest? Parameter tuning itself is a tough optimization problem. In the literature, there are two main approaches. One approach is to run an algorithm with some trial values of its key parameters, where the aim of the test runs is to find a good setting of these parameters; these values are then fixed for more extensive runs involving the same type of problems or larger problems. The other approach is to use one algorithm (which may be well tested and well established) to tune the parameters of another, relatively new algorithm. Then an important issue arises: if we use algorithm A (or tool A) to tune algorithm B, what tool or algorithm should be used to tune algorithm A? If we use, say, algorithm C to tune algorithm A, then what tool should be used to tune algorithm C?
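The first approach above, trial runs over candidate parameter values, can be sketched as follows. This is an illustrative sketch only: the `sphere` objective, the candidate step sizes, and the deliberately simple random-search optimizer are all hypothetical stand-ins, not methods prescribed by the text.

```python
import random

def sphere(x):
    """Toy objective: sum of squares, with its minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, step, iters, seed=0):
    """A deliberately simple stochastic optimizer with one tunable parameter, `step`."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = f(best)
    for _ in range(iters):
        cand = [xi + rng.gauss(0, step) for xi in best]
        cf = f(cand)
        if cf < best_f:          # keep the best solution found so far
            best, best_f = cand, cf
    return best_f

# Trial runs: try a few candidate values of `step` on a small instance,
# then fix the best-performing value for more extensive runs.
trial_steps = [0.01, 0.1, 0.5, 1.0]
scores = {s: random_search(sphere, dim=5, step=s, iters=200) for s in trial_steps}
best_step = min(scores, key=scores.get)
```

In practice, the trial instances should be representative of the larger problems the tuned parameters will be used on, otherwise the tuning itself can mislead.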
In fact, these key issues are still under active research.

1.2.4 Necessity for Large-Scale and Real-World Applications

SI and bio-inspired computation have been very successful in solving many practical problems. However, the sizes of these problems, in terms of the number of design variables, are relatively small or moderate. In the current literature, studies have focused on design problems with about a dozen variables, or at most about a hundred; it is rare to see studies with several hundred variables. In contrast, in linear programming, it is routine to solve design problems with half a million to several million design variables. Therefore, it remains a huge challenge to apply SI-based algorithms to real-world, large-scale problems. Accompanying this challenge is a methodological issue: nobody is sure whether we can directly apply the same methods that work well for small, toy problems to large-scale problems. Apart from the difference in size, there may be other issues, such as memory capacity, computational efficiency, and computing resources, that need special care. If existing methods cannot be extended to deal with large-scale problems effectively, and often they cannot, then what are the options? After all, real-world problems are typically nonlinear and often very large scale. Further detailed studies are needed in this area.

1.2.5 Choice of Algorithms

Even with all the knowledge and all the books written on optimization and algorithms, most readers are still not sure what algorithms to choose. It is similar to visiting a shopping mall to choose a certain product: there are often so many different choices that making the right choice is again an optimization problem. In the literature, there is no agreed guideline for choosing algorithms, though there are specific instructions on how to use a specific algorithm and what types of problems it can solve.
Therefore, the issue of choice still remains: the decision is partly experience based and partly made by trial and error. Sometimes, even with the best possible intention, the availability of an algorithm and the expertise of the decision makers are the ultimate defining factors for choosing an algorithm. Even if some algorithm is better for the given problem at hand, we may not have that algorithm implemented in our system, or we may not have access to it, which limits our choices. For example, Newton's method, hill-climbing, the Nelder–Mead downhill simplex, trust-region methods (Conn et al., 2000), and interior-point methods are implemented in many software packages, which may also increase their popularity in applications. In practice, even with the best possible algorithms and well-crafted implementations, we may still not get the desired solutions. This is the nature of nonlinear global optimization, as most such problems are NP-hard, and no efficient algorithm (in the polynomial-time sense) exists for a given problem. Thus, the challenge of research in computational optimization and applications is to find the algorithms most suitable for a given problem so as to obtain good solutions, hopefully also the globally best solutions, in a reasonable timescale with a limited amount of resources.

1.3 Search for the Magic Formulas for Optimization

1.3.1 Essence of an Algorithm

Mathematically speaking, an algorithm is a procedure to generate outputs for given inputs. From the optimization point of view, an optimization algorithm generates a new solution x^{t+1} to a given problem from a known solution x^t at iteration or time t. That is,

x^{t+1} = A(x^t, p(t))   (1.5)

where A is a nonlinear mapping from a given solution, or d-dimensional vector, x^t to a new solution vector x^{t+1}. The algorithm A has k algorithm-dependent parameters p(t) = (p_1, ..., p_k), which can be time dependent and can thus be tuned if necessary.

1.3.2 What Is an Ideal Algorithm?
In an ideal world, we would hope to start from any initial guess and obtain the best solution in a single step, that is, with minimal computational effort. In other words, the algorithm would simply have to tell us the best answer to any given problem in a single step! You may wonder whether such an algorithm exists. In fact, the answer is yes, for one very specific type of problem: quadratic convex problems. We know that the Newton–Raphson method is a root-finding algorithm; it can find the roots of f(x) = 0. As the minimum or maximum of a function f(x) has to satisfy the critical condition f'(x) = 0, this optimization problem becomes a problem of finding the roots of f'(x). The Newton–Raphson method provides the following iteration formula:

x_{i+1} = x_i − f'(x_i)/f''(x_i)   (1.6)

For a quadratic function, say, f(x) = x^2, if we start from a fixed location, say, x_0 = a at i = 0, we have f'(a) = 2a and f''(a) = 2. Then we get

x_1 = x_0 − f'(x_0)/f''(x_0) = a − 2a/2 = 0

which is exactly the optimal solution f_min = 0 at x = 0, which is also globally optimal. We have found the global optimum in one step. In fact, for quadratic functions that are also convex, Newton–Raphson is an ideal algorithm. However, the world is not convex and certainly not quadratic; real-world problems are often highly nonlinear, and there is no ideal algorithm. For NP-hard problems, no efficient algorithm is known at all. Such hard problems require a huge amount of research effort to search for specific techniques, which are still not satisfactory in practice. These challenges can also be a driving force for active research.

1.3.3 Algorithms and Self-Organization

Self-organization exists in many systems, from physical and chemical to biological and artificial systems.
Emergent phenomena such as Rayleigh–Bénard convection, Turing pattern formation, and organisms and thunderstorms can all be called self-organization (Ashby, 1962; Keller, 2009). Although there is no universal theory for self-organizing processes, some aspects of self-organization can partly be understood using theories based on nonlinear dynamical systems, far-from-equilibrium interactions of multiple agents, and closed systems under unchanging laws (Prigogine and Nicolis, 1967). As pointed out by the cyberneticist and mathematician Ashby (1962), every isolated determinate dynamic system obeying unchanging laws will ultimately develop some sort of "organisms" that are adapted to their "environments." For simple systems, going to equilibrium is trivial; however, for a complex system, if its size is so large that its equilibrium states are just a fraction of the vast number of possible states, and if the system is allowed to evolve long enough, some self-organized structures may emerge. Changes in the environment can apply pressure on the system to reorganize and adapt to such changes. If the system has sufficient perturbations or noise, often working at the edge of chaos, some spontaneous formation of structures will emerge as the system moves far from equilibrium and selects some states, thus reducing the uncertainty or entropy. The state set S of a complex system such as a machine may change from initial states S(ψ) to other states S(φ), subject to the change of a parameter set α(t), which can be time dependent. That is,

S(ψ) →^{α(t)} S(φ)   (1.7)

where α(t) must come from external conditions, such as the heat flow in Rayleigh–Bénard convection, not from the states S themselves. Obviously, S + α(t) can be considered as a larger, closed system (Ashby, 1962; Keller, 2009). In this sense, self-organization is equivalent to a mapping from some high-entropy states to low-entropy states.
An optimization algorithm can be viewed as a complex, dynamical system. If we consider the convergence process as a self-organizing process, then there are strong similarities and links between self-organizing systems and optimization algorithms. Having discussed the essence of an optimization algorithm above, we can now examine these links.

1.3.4 Links Between Algorithms and Self-Organization

To find the optimal solution x* to a given optimization problem S, often with an infinite number of states, is to select some desired states φ from all states ψ, according to some predefined criterion D. We have

S(ψ) →^{A(t)} S(φ(x*))   (1.8)

where the final converged state φ corresponds to an optimal solution x* of the problem of interest. The selection of the system states in the design space is carried out by running the optimization algorithm A. The behavior of the algorithm is controlled by its parameters p(t), the initial solution x^{t=0}, and the stopping criterion D. We can view the combined S + A(t) as a complex system with a self-organizing capability. The change of states, or of solutions to the problem of interest, is controlled by the algorithm A. In many classical algorithms such as hill-climbing, gradient information is often used to select states, say, the minimum value of the landscape, and the stopping criterion can be a given tolerance, accuracy, or zero gradient. Alternatively, an algorithm can act like a tool to tune a complex system. If an algorithm does not use any state information of the problem, then it is more likely to be versatile enough to deal with many types of problems. However, such black-box approaches can also imply that the algorithm may not be as efficient as it could be for a given type of problem. For example, if the optimization problem is convex, algorithms that use such convexity information will be more efficient than those that do not.
In order to select states/solutions efficiently, the information from the search process should be used to enhance the search. In many cases, such information is fed into the selection mechanism of an algorithm. By far the most widely used selection mechanism is to select or keep the best solution found so far; that is, some form of "survival of the fittest" is used. From the schematic representation [see Eq. (1.8)] of an optimization process, we can see that the performance of an algorithm may depend on the type of problem S it solves. Whether the final, global optimality is achievable or not (within a given number of iterations) will also depend on the algorithm used. This may be another way of stating the so-called no-free-lunch theorems. Optimization algorithms can be very diverse, with several dozen widely used algorithms. The main characteristics of different algorithms depend only on the actual, often highly nonlinear or implicit, forms of A(t) and their parameters p(t).

1.3.5 The Magic Formulas

The ultimate aim is to find a magic formula or method that works for many problems, such as the Newton–Raphson method for quadratic functions. We wish it could work like magic to provide the best solution for any problem in a few steps. However, such formulas may never exist. As optimization algorithms are iterative, an algorithm to solve a given problem P can be written in the following generic form:

x^{t+1} = g(x^t; α, P)   (1.9)

which forms a piecewise trajectory in the search space. This algorithm depends on a parameter α and starts with an initial guess x^0. The iterative path will depend on the problem P, or its objective function f(x). However, as algorithms nowadays tend to use multiple agents, Eq. (1.9) can be extended to

[x_1, x_2, ..., x_n]^{t+1} = g([x_1, ..., x_n]^t; [α_1, ..., α_k]^t; P)   (1.10)

which has a population size of n and depends on k different algorithm-dependent parameters.
Each iteration will produce n different solutions [x_1, ..., x_n]. Modern metaheuristic algorithms have stochastic components, which means some of these k parameters can be drawn from probability distributions. If we wish to express the randomness more explicitly, we can rewrite Eq. (1.10) as

[x_1, x_2, ..., x_n]^{t+1} = g([x_1, ..., x_n]^t; [α_1, ..., α_k]^t; [ε_1, ..., ε_m]^t; P)   (1.11)

with m random variables ε that are often drawn from uniform or Gaussian distributions. In some cases, such as in cuckoo search, these random variables can also be drawn from a Lévy distribution (Yang and Deb, 2009).

Although there is no magic formula, each algorithm strives to use as few iterations t as possible. The only difference among algorithms is the exact form of g(·). In fact, the procedure g(·) can sometimes be divided into many substeps or procedures with different branches, so that these branches can be used in a random manner during iterations. This is the essence of all contemporary SI and bio-inspired metaheuristic algorithms.

1.4 Characteristics of Metaheuristics

1.4.1 Intensification and Diversification

Intensification and diversification are two key components of any metaheuristic algorithm (Blum and Roli, 2003; Yang, 2008). Intensification, also called exploitation, uses the local information in the search process to generate better solutions; such local information can be the derivative of the objective or the variations of the cost landscape. Diversification, also called exploration, intends to explore the search space more thoroughly and to help generate diverse solutions. Too much intensification will make the optimization process converge quickly, but it may lead to premature convergence, often to a local optimum or even a wrong solution. It will also reduce the probability of finding the true global optimum.
On the other hand, too much diversification will increase the probability of finding the true global optimum, but it will often slow down the process with a much lower convergence rate. Therefore, there is a fine balance or trade-off between intensification and diversification, or between exploitation and exploration. Furthermore, exploitation and exploration alone are not enough: during the search, we have to use a proper mechanism or criterion to select the best solutions. The most common criterion is survival of the fittest, i.e., to keep updating the current best solution found so far. In addition, certain elitism is often used to ensure that the best or fittest solutions are not lost and are passed on to the next generations (Fogel et al., 1966; Goldberg, 1989; Holland, 1975). For any algorithm to be efficient, it must somehow provide a mechanism to balance the aforementioned two key components properly. It is worth pointing out that a naïve 50-50 balance is not optimal (Yang, 2011c; Yang and He, 2013). More research in this area is needed.

1.4.2 Randomization Techniques

On analyzing bio-inspired algorithms in more detail, we can single out the type of randomness that a particular algorithm employs. For example, the simplest and yet often very efficient method is to introduce a random starting point for a deterministic algorithm; the well-known hill-climbing with random restart is a good example. This simple strategy is, in most cases, both efficient and easy to implement in practice. A more elaborate way to introduce randomness into an algorithm is to use randomness inside its different components, and various probability distributions, such as uniform, Gaussian, and Lévy distributions, can be used for randomization (Talbi, 2009; Yang, 2008, 2010b). In essence, randomization is an efficient component for global search algorithms.
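These distribution choices can be sketched as interchangeable perturbation generators. This is a hedged illustration, not from the text: the function names are ours, and the Lévy sampler uses Mantegna's algorithm, one common practical recipe for generating heavy-tailed Lévy-stable steps.

```python
import math
import random

rng = random.Random(42)

def uniform_step(scale=1.0):
    """Uniform perturbation in [-scale, scale]."""
    return rng.uniform(-scale, scale)

def gaussian_step(scale=1.0):
    """Gaussian perturbation with standard deviation `scale`."""
    return rng.gauss(0.0, scale)

def levy_step(lam=1.5):
    """Heavy-tailed step via Mantegna's algorithm (an assumption: one common recipe)."""
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / lam)

# The heavy tail shows up as occasional very large steps that a Gaussian
# of comparable scale would almost never produce.
steps = [levy_step() for _ in range(10000)]
largest = max(abs(s) for s in steps)
```

Swapping one generator for another inside the same algorithm is a simple way to study how the randomization choice affects the exploration behavior.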
Obviously, what the best way is to provide sufficient randomness without slowing down the convergence of an algorithm still remains an open question. In fact, metaheuristic algorithms form a hot research topic, with new algorithms appearing almost yearly and new techniques being explored (Yang, 2008, 2010b).

1.5 Swarm-Intelligence-Based Algorithms

Metaheuristic algorithms are often nature-inspired, and they are now among the most widely used algorithms for optimization. They have many advantages over conventional algorithms, as we can see from the many case studies presented in the later chapters of this book. A few recent books are solely dedicated to metaheuristic algorithms (Talbi, 2009; Yang, 2008, 2010a,b). Metaheuristic algorithms are very diverse, including genetic algorithms, simulated annealing, differential evolution, ant and bee algorithms, particle swarm optimization, harmony search, firefly algorithm, and cuckoo search. Here we briefly introduce some of these algorithms, especially those based on SI.

1.5.1 Ant Algorithms

Ant algorithms, especially ant colony optimization (Dorigo and Stützle, 2004), mimic the foraging behavior of social ants. Ants primarily use pheromone as a chemical messenger, and the pheromone concentration can be considered an indicator of the quality of solutions to a problem of interest. As the solution is often linked with the pheromone concentration, the search algorithms often produce routes and paths marked by the higher pheromone concentrations; therefore, ant-based algorithms are particularly suitable for discrete optimization problems. The movement of an ant is controlled by pheromone, which evaporates over time. Without such time-dependent evaporation, ant algorithms would converge prematurely to local (often wrong) solutions. With proper pheromone evaporation, they usually behave very well.
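The evaporation-plus-reinforcement mechanism just described can be sketched as a simple update rule. This is a minimal sketch under stated assumptions: the evaporation rate `rho`, the deposit amount, and the dictionary-based edge representation are illustrative choices of ours, not values from the text.

```python
def update_pheromone(pheromone, visits, rho=0.1, deposit=1.0):
    """One pheromone update: evaporate on every edge, then deposit on used edges.

    pheromone: dict mapping edge -> current concentration
    visits:    dict mapping edge -> quality weight of the ants that used it
    rho:       evaporation rate in (0, 1); an illustrative value
    """
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)           # time-dependent evaporation
    for edge, quality in visits.items():
        pheromone[edge] += deposit * quality     # reinforcement by ants
    return pheromone

tau = {("A", "B"): 1.0, ("A", "C"): 1.0}
tau = update_pheromone(tau, {("A", "B"): 2.0})   # only edge A-B was used this step
```

After one update, the unused edge A-C has only evaporated, while A-B has been reinforced, so the quality difference between routes is gradually amplified without ever freezing the search.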
There are two important issues here: the probability of choosing a route and the evaporation rate of the pheromone. There are a few ways of addressing these issues, though this is still an area of active research. For a network routing problem, the probability that an ant at node i chooses the route from node i to node j is given by

p_{ij} = φ_{ij}^α d_{ij}^β / Σ_{i,j=1}^n φ_{ij}^α d_{ij}^β   (1.12)

where α > 0 and β > 0 are the influence parameters, with typical values α ≈ β ≈ 2. Here, φ_{ij} is the pheromone concentration of the route between i and j, and d_{ij} the desirability of the same route. Some a priori knowledge about the route, such as the distance s_{ij}, is often used so that d_{ij} ∝ 1/s_{ij}, which implies that shorter routes will be selected due to their shorter traveling time, and thus the pheromone concentrations on these routes are higher. This is because the traveling time is shorter, and thus less pheromone has evaporated during this period.

1.5.2 Bee Algorithms

Bee-inspired algorithms are more diverse: some use pheromone, but most do not. Almost all bee algorithms are inspired by the foraging behavior of honeybees in nature. Interesting characteristics such as the waggle dance, polarization, and nectar maximization are often used to simulate the allocation of forager bees along flower patches, and thus across different regions in the search space. For a more comprehensive review, refer to Yang (2010a) and Parpinelli and Lopes (2011). Different variants of bee algorithms use slightly different characteristics of the behavior of bees. For example, in the honeybee-based algorithms, forager bees are allocated to different food sources (or flower patches) to maximize the total nectar intake (Karaboga, 2005; Nakrani and Tovey, 2004; Pham et al., 2006; Yang, 2005). In the virtual bee algorithm (VBA), developed by Yang (2005), pheromone concentrations can be linked with the objective functions more directly.
On the other hand, the artificial bee colony (ABC) optimization algorithm was first developed by Karaboga (2005). In the ABC algorithm, the bees in a colony are divided into three groups: employed bees (forager bees), onlooker bees (observer bees), and scouts. Unlike the honeybee algorithm, which has two groups of bees (forager bees and observer bees), the bees in ABC are more specialized (Afshar et al., 2007; Karaboga, 2005). Similar to the ant-based algorithms, bee algorithms are also very flexible in dealing with discrete optimization problems. Combinatorial optimization problems such as routing and finding optimal paths have been successfully solved by ant and bee algorithms. In principle, these algorithms can solve both continuous and discrete optimization problems; however, they should not be the first choice for continuous problems.

1.5.3 Bat Algorithm

The bat algorithm is a relatively new metaheuristic, developed by Yang (2010c). It was inspired by the echolocation behavior of microbats. Microbats use a type of sonar, called echolocation, to detect prey, avoid obstacles, and locate their roosting crevices in the dark. These bats emit a very loud sound pulse and listen for the echoes that bounce back from the surrounding objects. Depending on the species, their pulses vary in properties and can be correlated with their hunting strategies. Most bats use short, frequency-modulated signals to sweep through about an octave, while others more often use constant-frequency signals for echolocation. Their signal bandwidth varies depending on the species and is often increased by using more harmonics. The bat algorithm uses three idealized rules: (i) All bats use echolocation to sense distance, and they also "know" the difference between food/prey and background barriers in some magical way.
(ii) A bat roams randomly with velocity v_i at position x_i with a fixed frequency range [f_min, f_max], varying its pulse emission rate r ∈ [0, 1] and loudness A_0 to search for prey, depending on the proximity of its target. (iii) Although the loudness can vary in many ways, we assume that it varies from a large (positive) A_0 to a minimum constant value A_min. These rules can be translated into the following formulas:

f_i = f_min + (f_max − f_min) ε,   v_i^{t+1} = v_i^t + (x_i^t − x*) f_i,   x_i^{t+1} = x_i^t + v_i^{t+1}   (1.13)

where ε is a random number drawn from a uniform distribution, and x* is the current best solution found so far during the iterations. The loudness and pulse rate can vary with iteration t in the following way:

A_i^{t+1} = α A_i^t,   r_i^t = r_i^0 [1 − exp(−β t)]   (1.14)

Here α and β are constants. In fact, α is similar to the cooling factor of a cooling schedule in simulated annealing, which is discussed later. In the simplest case, we can use α = β, and we have in fact used α = β = 0.9 in most simulations. The bat algorithm has been extended to the multiobjective bat algorithm (MOBA) by Yang (2011a), and preliminary results suggest that it is very efficient (Yang and Gandomi, 2012). A few other important applications of the bat algorithm can be found in other chapters of this book.

1.5.4 Particle Swarm Optimization

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart (1995) based on swarm behavior such as that of fish and bird schooling in nature. Since then, PSO has generated much wider interest and forms an exciting, ever-expanding research subject called swarm intelligence. This algorithm searches the space of an objective function by adjusting the trajectories of individual agents, called particles, as the piecewise paths formed by positional vectors in a quasi-stochastic manner. The movement of a swarming particle consists of two major components: a stochastic component and a deterministic component.
Each particle is attracted toward the position of the current global best g* and its own best location x_i* in history, while at the same time it has a tendency to move randomly. Let x_i and v_i be the position vector and velocity of particle i, respectively. The new velocity vector is determined by the following formula:

v_i^{t+1} = v_i^t + α ε_1 [g* − x_i^t] + β ε_2 [x_i* − x_i^t]   (1.15)

where ε_1 and ε_2 are two random vectors whose entries take values between 0 and 1. The parameters α and β are the learning parameters or acceleration constants, which can typically be taken as, say, α ≈ β ≈ 2. The initial locations of all particles should be distributed relatively uniformly so that they can sample over most regions, which is especially important for multimodal problems. The initial velocity of a particle can be taken as zero, i.e., v_i^{t=0} = 0. The new positions can then be updated by

x_i^{t+1} = x_i^t + v_i^{t+1}   (1.16)

Although v_i can take any value, it is usually bounded in some range [0, v_max]. There are many variants that extend the standard PSO algorithm (Kennedy et al., 2001; Yang, 2008, 2010b), and the most noticeable improvement is probably to use an inertia function θ(t) so that v_i^t is replaced by θ(t) v_i^t:

v_i^{t+1} = θ v_i^t + α ε_1 [g* − x_i^t] + β ε_2 [x_i* − x_i^t]   (1.17)

where θ takes values between 0 and 1. In the simplest case, the inertia function can be taken as a constant, typically θ ≈ 0.5-0.9. This is equivalent to introducing a virtual mass to stabilize the motion of the particles, and thus the algorithm is expected to converge more quickly. Another efficient variant, called accelerated particle swarm optimization (APSO), has proved efficient in solving business optimization problems (Yang et al., 2011).

1.5.5 Firefly Algorithm

The firefly algorithm (FA) was first developed by Yang in 2007 (Yang, 2008, 2009) and was based on the flashing patterns and behavior of fireflies.
In essence, FA uses the following three idealized rules: 1. Fireflies are unisexual, so one firefly will be attracted to other fireflies regardless of their sex. 2. The attractiveness is proportional to the brightness, and both decrease as the distance between fireflies increases. Thus, for any two flashing fireflies, the less bright one will move toward the brighter one; if no firefly is brighter than a particular firefly, it will move randomly. 3. The brightness of a firefly is determined by the landscape of the objective function. The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

x_i^{t+1} = x_i^t + β_0 e^{−γ r_{ij}^2} (x_j^t − x_i^t) + α ε_i^t   (1.18)

where β_0 is the attractiveness at distance r = 0, and the second term is due to the attraction. The third term is randomization, with α being the randomization parameter and ε_i^t a vector of random numbers drawn from a Gaussian or uniform distribution at time t. If β_0 = 0, the movement becomes a simple random walk. Furthermore, the randomization ε_i^t can easily be extended to other distributions such as Lévy flights. A Lévy flight essentially provides a random walk whose random step length is drawn from a Lévy distribution

L(s, λ) ∼ s^{−(1+λ)},   (0 < λ ≤ 2)   (1.19)

which has an infinite variance with an infinite mean. Here the steps essentially form a random walk process with a power-law step-length distribution with a heavy tail. Some of the new solutions should be generated by a Lévy walk around the best solution obtained so far; this will often speed up the local search (Pavlyukevich, 2007). A demo version of the firefly algorithm implementation, without Lévy flights, can be found at the Mathworks file exchange web site.1 The firefly algorithm has attracted much attention (Apostolopoulos and Vlachos, 2011; Gandomi et al., 2011b; Sayadi et al., 2010).
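The firefly movement rule in Eq. (1.18) can be sketched in code as follows. This is a minimal illustration: the parameter values for β_0, γ, and α are typical textbook-style choices of ours, not values prescribed here, and the Gaussian randomization is one of the options the text mentions.

```python
import math
import random

rng = random.Random(1)

def move_firefly(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """One move of firefly i toward a brighter firefly j, following Eq. (1.18):
    x_i^{t+1} = x_i^t + beta0 * exp(-gamma * r_ij^2) * (x_j^t - x_i^t) + alpha * eps
    """
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared distance r_ij^2
    beta = beta0 * math.exp(-gamma * r2)             # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * rng.gauss(0.0, 1.0)
            for a, b in zip(xi, xj)]

xi_new = move_firefly([0.0, 0.0], [1.0, 1.0])
```

Setting alpha to 0 isolates the deterministic attraction term, which is a convenient way to check an implementation: with beta0 = gamma = 1 and unit separation in each coordinate, the pull is exactly exp(-r_ij^2) times the displacement.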
A discrete version of FA can efficiently solve NP-hard scheduling problems (Sayadi et al., 2010), while a detailed analysis has demonstrated the efficiency of FA over a wide range of test problems, including multiobjective load dispatch problems (Apostolopoulos and Vlachos, 2011). A chaos-enhanced firefly algorithm with a basic method for automatic parameter tuning has also been developed (Yang, 2011b).

1.5.6 Cuckoo Search

Cuckoo search (CS) is one of the latest nature-inspired metaheuristic algorithms, developed by Yang and Deb (2009). CS is based on the brood parasitism of some cuckoo species. In addition, the algorithm is enhanced by so-called Lévy flights (Pavlyukevich, 2007) rather than by simple isotropic random walks. Recent studies show that CS is potentially far more efficient than PSO and genetic algorithms (Yang and Deb, 2010).

Cuckoos are fascinating birds, not only because of the beautiful sounds they can make but also because of their aggressive reproduction strategy. Some species, such as the ani and guira cuckoos, lay their eggs in communal nests, though they may remove others' eggs to increase the hatching probability of their own. Quite a number of species engage in obligate brood parasitism by laying their eggs in the nests of host birds (often of other species). For simplicity in describing the standard cuckoo search, we use the following three idealized rules:

1. Each cuckoo lays one egg at a time and dumps it in a randomly chosen nest.
2. The best nests with high-quality eggs will be carried over to the next generations.
3. The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability p_a ∈ [0, 1]. In this case, the host bird can either get rid of the egg or simply abandon the nest and build a completely new one.

1 http://www.mathworks.com/matlabcentral/fileexchange/29693-firefly-algorithm.
As a further approximation, this last assumption can be implemented by replacing a fraction p_a of the n host nests with new nests (with new random solutions). For a maximization problem, the quality or fitness of a solution can simply be proportional to the value of the objective function. Other forms of fitness can be defined in a similar way to the fitness function in genetic algorithms.

From the implementation point of view, we can use the following simple representation: each egg in a nest represents a solution, and each cuckoo can lay only one egg (thus representing one solution); the aim is to use the new and potentially better solutions (cuckoos) to replace not-so-good solutions in the nests. Obviously, this algorithm can be extended to the more complicated case where each nest has multiple eggs representing a set of solutions. For the present introduction, we use the simplest approach, where each nest has only a single egg. In this case, there is no distinction between egg, nest, and cuckoo, as each nest corresponds to one egg, which also represents one cuckoo.

This algorithm uses a balanced combination of a local random walk and a global explorative random walk, controlled by a switching parameter p_a. The local random walk can be written as

x_i^{t+1} = x_i^t + α s H(p_a − ε) (x_j^t − x_k^t)   (1.20)

where x_j^t and x_k^t are two different solutions selected randomly by random permutation, H(u) is a Heaviside function, ε is a random number drawn from a uniform distribution, and s is the step size. On the other hand, the global random walk is carried out using Lévy flights

x_i^{t+1} = x_i^t + α L(s, λ)   (1.21)

where

L(s, λ) = [λ Γ(λ) sin(πλ/2) / π] · 1/s^{1+λ},  (s ≫ s_0 > 0)   (1.22)

Here α > 0 is the step-size-scaling factor, which should be related to the scales of the problem of interest.
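The two walks of Eqs. (1.20) and (1.21) can be combined into a compact sketch. This is an illustration, not the downloadable Matlab code mentioned below: the Lévy steps are drawn with Mantegna's algorithm (a standard way to realize the distribution of Eq. (1.22)), and the population size, iteration count, greedy replacement rule, and step scale of roughly L/100 are assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(lam, size, rng):
    """Mantegna's algorithm: heavy-tailed steps following Eq. (1.22) with exponent lam."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2)
             / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_search(f, bounds, n=25, n_iter=300, pa=0.25, lam=1.5, seed=0):
    """Sketch of CS: global Lévy walk (Eq. (1.21)) plus local walk (Eq. (1.20))."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    d = lo.size
    nests = rng.uniform(lo, hi, (n, d))
    fit = np.array([f(p) for p in nests])
    alpha = 0.01 * (hi - lo)                     # step scale ~ O(L/100), cf. the text
    for _ in range(n_iter):
        # Global random walk via Levy flights, Eq. (1.21), with greedy replacement.
        new = np.clip(nests + alpha * levy_step(lam, (n, d), rng), lo, hi)
        new_fit = np.array([f(p) for p in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # Local random walk, Eq. (1.20): pairs chosen by random permutation,
        # applied with probability pa (the Heaviside factor H(pa - eps)).
        jdx, kdx = rng.permutation(n), rng.permutation(n)
        mask = rng.random((n, 1)) < pa
        step = rng.random((n, 1)) * (nests[jdx] - nests[kdx])
        new = np.clip(nests + mask * step, lo, hi)
        new_fit = np.array([f(p) for p in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
    return nests[fit.argmin()], fit.min()
```

The occasional very large Lévy step supplies the far-field exploration discussed next, while the pairwise local walk refines solutions around the good nests.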
In most cases, we can use α = O(L/10), where L is the characteristic scale of the problem of interest, while in some cases α = O(L/100) can be more effective and can avoid flying too far.

Equation (1.21) is essentially the stochastic equation for a random walk. In general, a random walk is a Markov chain whose next status/location depends only on the current location (the first term in Eq. (1.21)) and the transition probability (the second term). However, a substantial fraction of the new solutions should be generated by far-field randomization, with locations far enough from the current best solution. This ensures that the system does not get trapped in a local optimum (Yang and Deb, 2010).

A Matlab implementation is given by the author and can be downloaded.2 Cuckoo search is very efficient in solving engineering optimization problems (Gandomi et al., 2013; Yang and Deb, 2009).

1.5.7 Flower Pollination Algorithm

The flower pollination algorithm (FPA) was developed by Yang (2012), inspired by the flower pollination process of flowering plants:

1. Biotic cross-pollination can be considered as a process of global pollination, in which pollen-carrying pollinators move in a way that obeys Lévy flights (Rule 1).
2. For local pollination, abiotic pollination and self-pollination are used (Rule 2).
3. Pollinators such as insects can develop flower constancy, which is equivalent to a reproduction probability that is proportional to the similarity of the two flowers involved (Rule 3).
4. The interaction or switching between local pollination and global pollination can be controlled by a switch probability p ∈ [0, 1], with a slight bias toward local pollination (Rule 4).

In order to formulate updating formulas, we have to convert the aforementioned rules into updating equations.
For example, in the global pollination step, flower pollen gametes are carried by pollinators such as insects, and pollen can travel over long distances because insects can often fly and move over a much longer range. Therefore, Rule 1 and flower constancy can be represented mathematically as

x_i^{t+1} = x_i^t + L(λ) (x_i^t − g*)   (1.23)

where x_i^t is the pollen i or solution vector at iteration t, and g* is the current best solution found among all solutions at the current generation/iteration. Here L(λ) is a parameter that corresponds to the strength of the pollination, which is essentially also a step size. Since insects may move over long distances with various step lengths, we can use a Lévy flight to mimic this characteristic efficiently. That is, we draw L > 0 from a Lévy distribution as described in Eq. (1.22).

For the local pollination, both Rule 2 and Rule 3 can be represented as

x_i^{t+1} = x_i^t + ε (x_j^t − x_k^t)   (1.24)

where x_j^t and x_k^t are pollen from different flowers of the same plant species. This essentially mimics flower constancy in a limited neighborhood. Mathematically, if x_j^t and x_k^t come from the same species or are selected from the same population, this equivalently becomes a local random walk if we draw ε from a uniform distribution in [0, 1].

In principle, flower pollination activities can occur at all scales, both local and global. But in reality, adjacent flower patches or flowers in the not-so-far-away neighborhood are more likely to be pollinated by local flower pollen than by those far away. To mimic this feature, we can effectively use a switch probability (Rule 4), or proximity probability p, to switch between common global pollination and intensive local pollination. To start with, we can use a naive value of p = 0.5 as an initial value.

2 www.mathworks.com/matlabcentral/fileexchange/29809-cuckoo-search-cs-algorithm.
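Putting Eqs. (1.23) and (1.24) together with the switch probability of Rule 4 gives the following sketch. Again this is an illustration under stated assumptions: the Lévy strength L is drawn via Mantegna's algorithm, greedy replacement is assumed, and the branch taken with probability p is the local one, matching the text's bias toward local pollination.

```python
import numpy as np
from math import gamma, pi, sin

def levy(lam, size, rng):
    """Heavy-tailed draw ~ s^{-(1+lam)} (Mantegna's algorithm, cf. Eq. (1.22))."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2)
             / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return rng.normal(0.0, sigma, size) / np.abs(rng.normal(0.0, 1.0, size)) ** (1 / lam)

def flower_pollination(f, bounds, n=25, n_iter=300, p=0.8, lam=1.5, seed=0):
    """Sketch of FPA: global step Eq. (1.23), local step Eq. (1.24), switch prob. p."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    d = lo.size
    pop = rng.uniform(lo, hi, (n, d))
    fit = np.array([f(s) for s in pop])
    for _ in range(n_iter):
        g = pop[fit.argmin()]                    # current best g*
        for i in range(n):
            if rng.random() > p:
                # Global pollination, Eq. (1.23): Levy-distributed strength L.
                cand = pop[i] + levy(lam, d, rng) * (pop[i] - g)
            else:
                # Local pollination, Eq. (1.24): uniform eps, two random flowers.
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                      # greedy replacement (assumed)
                pop[i], fit[i] = cand, fc
    return pop[fit.argmin()], fit.min()
```

Changing the default p here simply shifts the balance between the two pollination modes.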
A preliminary parametric study showed that p = 0.8 may work better for most applications (Yang, 2012).

1.5.8 Other Algorithms

There are many other metaheuristic algorithms, which may be equally popular and powerful; these include Tabu search (Glover and Laguna, 1997), the artificial immune system (Farmer et al., 1986), and others (Koziel and Yang, 2011; Wolpert and Macready, 1997; Yang, 2010a,b). For example, the wolf search algorithm (WSA) was developed recently by Rui et al. (2012), based on the predatory behavior of wolf packs. Preliminary results show that WSA is a very promising algorithm with convincing performance. Other algorithms, such as the krill herd algorithm and the artificial plant algorithm, are described in detail in the relevant chapters of this book.

The efficiency of metaheuristic algorithms can be attributed to the fact that they try to imitate the best features in nature, especially the selection of the fittest in biological systems, which have evolved by natural selection over millions of years.

1.6 Open Problems and Further Research Topics

We have seen that SI and bio-inspired computing have demonstrated great success in solving various tough optimization problems, yet some challenging issues remain, some of which have been discussed in Section 1.2. In order to inspire further research in this area, we now summarize some of the key open problems:

Theoretical analysis of algorithm convergence: Up to now, only a small fraction of metaheuristic algorithms have received even limited mathematical analysis in terms of convergence. More studies are necessary to gain insight into the various new algorithms, and a framework for theoretical algorithm analysis is badly needed.

Classification and terminology: There is still some confusion in the classifications and terminologies used in the current literature. Further studies should try to classify all known algorithms by agreed criteria and, ideally, to unify the use of key terminologies.
This requires the effort of all researchers in the wider community to participate and to adopt consistent usage in future publications.

Parameter tuning: The efficiency of an algorithm may depend on its algorithm-dependent parameters, and the optimal parameter setting of any algorithm is itself an optimization problem. Finding the best methods to tune these parameters is still an active research question. It can be expected that algorithms with automatic parameter tuning will mark a paradigm shift in the near future.

Large-scale problems: Most current studies in bio-inspired computing have focused on toy problems and small-scale problems. For large-scale problems, it remains untested whether the same methodology used for toy problems can yield solutions efficiently. For linear programming this was basically the case; nonlinearity, however, often poses greater challenges.

Truly intelligent algorithms: Researchers have striven for many years to develop better and smarter algorithms, but truly intelligent algorithms have yet to emerge. This could be the holy grail of optimization and computational intelligence.

Obviously, challenges also bring opportunities. It is no exaggeration to say that this is a golden time for bio-inspired computing, a time for researchers to rethink existing methodologies and approaches more deeply and perhaps differently. It is possible that significant progress will be made in the next 10 years. Any progress in theory and in large-scale practice will provide great insight and may ultimately alter the research landscape in metaheuristics. It can be expected that some truly intelligent, self-evolving algorithms will appear to solve a wide range of tough optimization and classification problems efficiently in the not-so-distant future.

References

Afshar, A., Haddad, O.B., Marino, M.A., Adams, B.J., 2007.
Honey-bee mating optimization (HBMO) algorithm for optimal reservoir operation. J. Franklin Inst. 344, 452–462.
Apostolopoulos, T., Vlachos, A., 2011. Application of the firefly algorithm for solving the economic emissions load dispatch problem. Int. J. Comb. 2011, Article ID 523806. <http://www.hindawi.com/journals/ijct/2011/523806.html>.
Ashby, W.R., 1962. Principles of the self-organizing system. In: Von Foerster, H., Zopf Jr., G.W. (Eds.), Principles of Self-Organization: Transactions of the University of Illinois Symposium. Pergamon Press, London, UK, pp. 255–278.
Aytug, H., Bhattacharrya, S., Koehler, G.J., 1996. A Markov chain analysis of genetic algorithms with power of 2 cardinality alphabets. Eur. J. Oper. Res. 96, 195–201.
Blum, C., Roli, A., 2003. Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput. Surv. 35, 268–308.
Clerc, M., Kennedy, J., 2002. The particle swarm—explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6 (1), 58–73.
Conn, A.R., Gould, N.I.M., Toint, P.L., 2000. Trust-Region Methods. SIAM Press, Philadelphia.
Dorigo, M., Stützle, T., 2004. Ant Colony Optimization. MIT Press, Cambridge, MA.
Farmer, J.D., Packard, N., Perelson, A., 1986. The immune system, adaptation and machine learning. Physica D 2, 187–204.
Fogel, L.J., Owens, A.J., Walsh, M.J., 1966. Artificial Intelligence Through Simulated Evolution. John Wiley & Sons, New York, NY.
Gandomi, A.H., Yang, X.S., Alavi, A.H., 2011. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 89 (23/24), 2325–2336.
Gandomi, A.H., Yang, X.S., Alavi, A.H., 2013. Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng. Comput. 29 (1), 17–35.
Glover, F., 1986. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13, 533–549.
Glover, F., Laguna, M., 1997. Tabu Search. Kluwer Academic Publishers, Boston.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
Greenhalgh, D., Marshal, S., 2000. Convergence criteria for genetic algorithms. SIAM J. Comput. 30, 269–282.
Gutjahr, W.J., 2010. Convergence analysis of metaheuristics. Ann. Inf. Syst. 10, 159–187.
Holland, J., 1975. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI.
Hopper, E., Turton, B.C.H., 2000. An empirical investigation of meta-heuristic and heuristic algorithms for a 2D packing problem. Eur. J. Oper. Res. 128 (1), 34–57.
Karaboga, D., 2005. An Idea Based on Honey Bee Swarm for Numerical Optimization. Technical Report TR06, Erciyes University, Turkey.
Keller, E.F., 2009. Organisms, machines, and thunderstorms: a history of self-organization, part two. Complexity, emergence, and stable attractors. Hist. Stud. Nat. Sci. 39, 1–31.
Kennedy, J., Eberhart, R.C., 1995. Particle swarm optimisation. In: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942–1948.
Kennedy, J., Eberhart, R.C., Shi, Y., 2001. Swarm Intelligence. Morgan Kaufmann Publishers, San Diego, CA.
Koziel, S., Yang, X.S., 2011. Computational Optimization, Methods and Algorithms. Springer, Germany.
Nakrani, S., Tovey, C., 2004. On honey bees and dynamic server allocation in internet hosting centers. Adapt. Behav. 12 (3–4), 223–240.
Parpinelli, R.S., Lopes, H.S., 2011. New inspirations in swarm intelligence: a survey. Int. J. Bio-Inspired Comput. 3, 1–16.
Pavlyukevich, I., 2007. Lévy flights, non-local search and simulated annealing. J. Comput. Phys. 226, 1830–1844.
Pham, D.T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., Zaidi, M., 2006. The bees algorithm: a novel tool for complex optimisation problems. In: Proceedings of the IPROMS 2006 Conference, pp. 454–461.
Prigogine, I., Nicolis, G., 1967. On symmetry-breaking instabilities in dissipative systems. J. Chem. Phys. 46, 3542–3550.
Rui, T., Fong, S., Yang, X.S., Deb, S., 2012. Wolf search algorithm with ephemeral memory. In: Fong, S., Pichappan, P., Mohammed, S., Hung, P., Asghar, S. (Eds.), Proceedings of the Seventh International Conference on Digital Information Management (ICDIM 2012), August 22–24, Macau, pp. 165–172.
Sayadi, M.K., Ramezanian, R., Ghaffari-Nasab, N., 2010. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems. Int. J. Ind. Eng. Comput. 1, 1–10.
Talbi, E.G., 2009. Metaheuristics: From Design to Implementation. John Wiley & Sons, New Jersey.
Villalobos-Arias, M., Coello Coello, C.A., Hernández-Lerma, O., 2005. Asymptotic convergence of metaheuristics for multiobjective optimization problems. Soft Comput. 10, 1001–1005.
Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82.
Yang, X.S., 2005. Engineering optimization via nature-inspired virtual bee algorithms. In: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach. Lecture Notes in Computer Science, vol. 3562. Springer, Berlin/Heidelberg, pp. 317–323.
Yang, X.S., 2008. Nature-Inspired Metaheuristic Algorithms, first ed. Luniver Press, UK.
Yang, X.S., 2009. Firefly algorithms for multimodal optimisation. In: Watanabe, O., Zeugmann, T. (Eds.), Fifth Symposium on Stochastic Algorithms, Foundations and Applications (SAGA 2009). LNCS, vol. 5792, pp. 169–178.
Yang, X.S., 2010a. Nature-Inspired Metaheuristic Algorithms, second ed. Luniver Press, UK.
Yang, X.S., 2010b. Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley & Sons, New Jersey.
Yang, X.S., 2010c. A new metaheuristic bat-inspired algorithm.
In: Cruz, C., Gonzalez, J.R., Pelta, D.A., Terrazas, G. (Eds.), Nature-Inspired Cooperative Strategies for Optimization (NICSO 2010). Studies in Computational Intelligence, vol. 284. Springer, Berlin, pp. 65–74.
Yang, X.S., 2011a. Bat algorithm for multi-objective optimisation. Int. J. Bio-Inspired Comput. 3 (5), 267–274.
Yang, X.S., 2011b. Chaos-enhanced firefly algorithm with automatic parameter tuning. Int. J. Swarm Intell. Res. 2 (4), 1–11.
Yang, X.S., 2011c. Metaheuristic optimization: algorithm analysis and open problems. In: Pardalos, P.M., Rebennack, S. (Eds.), Proceedings of the Tenth International Symposium on Experimental Algorithms (SEA 2011), 5–7 May 2011, Kolimpari, Chania, Greece. Lecture Notes in Computer Science, vol. 6630, pp. 21–32.
Yang, X.S., 2012. Flower pollination algorithm for global optimisation. In: Durand-Lose, J., Jonoska, N. (Eds.), Proceedings of the 11th International Conference on Unconventional Computation and Natural Computation (UCNC 2012), 3–7 September 2012, Orléans, France. Lecture Notes in Computer Science, vol. 7445. Springer, pp. 240–249.
Yang, X.S., Deb, S., 2009. Cuckoo search via Lévy flights. In: Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009). IEEE Publications, USA, pp. 210–214.
Yang, X.S., Deb, S., 2010. Engineering optimization by cuckoo search. Int. J. Math. Model. Num. Opt. 1 (4), 330–343.
Yang, X.S., Gandomi, A.H., 2012. Bat algorithm: a novel approach for global engineering optimization. Eng. Comput. 29 (5), 1–18.
Yang, X.S., He, X.S., 2013. Firefly algorithm: recent advances and applications. Int. J. Swarm Intell. 1 (1), 1–14.
Yang, X.S., Koziel, S., 2011. Computational Optimization and Applications in Engineering and Industry. Springer, Germany.
Yang, X.S., Deb, S., Fong, S., 2011. Accelerated particle swarm optimization and support vector machine for business optimization and applications.
In: Fong, S., Pichappan, P. (Eds.), Proceedings of NDT 2011, July 2011. Communications in Computer and Information Science, vol. 136. Springer, Heidelberg, pp. 53–66.