Abstract
Ant colony optimization (ACO) is an optimization technique inspired by the foraging behaviour of real ant colonies. The method was originally introduced for discrete optimization problems. Recently we proposed a first ACO variant for continuous optimization. In this work we choose the training of feed-forward neural networks for pattern classification as a test case for this algorithm. In addition, we propose hybrid algorithm variants that incorporate short runs of classical gradient techniques such as backpropagation. To evaluate our algorithms we apply them to classification problems from the medical field and compare the results to those of some basic algorithms from the literature. The results show, first, that the best of our algorithms are comparable to gradient-based algorithms for neural network training, and second, that our algorithms compare favorably with a basic genetic algorithm.
Notes
Note that this paper is an extension of the work published in [7, 32]. The extension consists of a more detailed explanation of the algorithm itself, a fourfold cross-validation for all applications to test instances, and tests for the statistical significance of the obtained results.
Note that k cannot be smaller than the number of dimensions of the problem being solved. This is due to the explicit handling of correlation among variables as explained in Sect. 3: in order to be able to rotate the coordinate system properly, the number of solutions available has to be at least equal to the number of dimensions.
Such pseudo-random number generators are routinely available for most programming languages.
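For instance, normally distributed deviates can be produced from uniform ones with the Box–Muller transform [22]; a minimal sketch (the function name is ours):

```python
import math
import random

def box_muller(rng=random):
    """Box-Muller transform: turn two uniform deviates on (0, 1]
    into two independent standard-normal deviates."""
    u1 = 1.0 - rng.random()  # shift to (0, 1] to avoid log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)
```

In practice a library generator such as Python's `random.gauss` would be used directly; the transform above is what many such generators implement internally.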
At step i, only dimensions i through n are used.
Due to the limited resources for tuning, the chosen configuration for each race is not necessarily significantly better than all the others. The limit of 100 experiments per race sometimes did not allow reaching that level of assurance. However, the chosen configuration was definitely not significantly worse than any of the others.
Note that Alba and Chicano did not perform a fourfold cross-validation. They only performed the first one of our four cross-validation experiments. Therefore, the results of our ACO algorithms in these tables refer to the results of the first of our four cross-validation experiments.
References
Alba E, Chicano JF (2004) Training neural networks with GA hybrid algorithms. In: Deb K et al. (ed) Proceedings of the genetic and evolutionary computation conference—GECCO 2004, volume 3102 of Lecture Notes in Computer Science. Springer, Berlin, pp 852–863
Alba E, Martí R (eds) (2006) Metaheuristic procedures for training neural networks. Springer, Berlin
Bilchev B, Parmee IC (1995) The ant colony metaphor for searching continuous design spaces. In: Proceedings of the AISB workshop on evolutionary computation, volume 993 of Lecture Notes in Computer Science, pp 25–39
Birattari M (2005) The problem of tuning metaheuristics as seen from a machine learning perspective. PhD thesis, volume 292 of Dissertationen zur Künstlichen Intelligenz. Akademische Verlagsgesellschaft Aka GmbH, Berlin, Germany
Birattari M, Stützle T, Paquete L, Varrentrapp K (2002) A racing algorithm for configuring metaheuristics. In: Langdon WB et al. (eds) Proceedings of the genetic and evolutionary computation conference. Morgan Kaufman, San Francisco, pp 11–18
Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, Oxford
Blum C, Socha K (2005) Training feed-forward neural networks with ant colony optimization: An application to pattern classification. In: Nedjah N, Mourelle LM, Vellasco MMBR, Abraham A, Köppen M (eds) Proceedings of the Fifth International Conference on Hybrid Intelligent Systems (HIS). IEEE Computer Society, pp 233–238
Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York
Bosman PAN, Thierens D (2000) Continuous iterated density estimation evolutionary algorithms within the IDEA framework. In: Pelikan M, Mühlenbein H, Rodriguez AO (eds) Proceedings of the OBUPM workshop at GECCO-2000. Morgan Kaufmann, San Francisco, pp 197–200
Box GEP, Muller ME (1958) A note on the generation of random normal deviates. Ann Math Stat 29(2):610–611
Cotta C, Alba E, Sagarna R, Larrañaga P (2001) Adjusting weights in artificial neural networks using evolutionary algorithms. In: Larrañaga P, Lozano JA (eds) Estimation of distribution algorithms: a new tool for evolutionary computation. Kluwer Academic Publishers, Boston, pp 361–378
Deneubourg J-L, Aron S, Goss S, Pasteels J-M (1990) The self-organizing exploratory pattern of the argentine ant. J Insect Behav 3:159–168
Dorigo M (1992) Optimization, Learning and Natural Algorithms (in Italian). PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy
Dorigo M, Maniezzo V, Colorni A (1996) Ant System: Optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybernetics – Part B 26(1):29–41
Dorigo M, Stützle T (2004) Ant Colony Optimization. MIT Press, Cambridge
Dréo J, Siarry P (2002) A new ant colony algorithm using the heterarchical concept aimed at optimization of multiminima continuous functions. In: Dorigo M, Di Caro G, Sampels M (eds) Proceedings of ANTS 2002 – from ant colonies to artificial ants: third international workshop on ant algorithms, vol 2463 of lecture notes in computer science, Springer, Berlin, pp 216–221
García-Pedrajas N, Hervás-Martínez C, Muñoz-Pérez J (2003) COVNET: a cooperative coevolutionary model for evolving artificial neural networks. IEEE Trans Neural Netw 14(3):575–596
Golub GH, van Loan CF (1989) Matrix computations, 2nd edn. The Johns Hopkins University Press, Baltimore
Guntsch M, Middendorf M (2003) Solving multi-objective permutation problems with population based ACO. In: Fonseca CM, Fleming PJ, Zitzler E, Deb K, Thiele L (eds) Proceedings of the second international conference on evolutionary multi-criterion optimization (EMO 2003), vol 2636 of lecture notes in computer science. Springer, Berlin, pp 464–478
Hagan MT, Menhaj MB (1994) Training feedforward networks with the Marquardt algorithm. IEEE Trans Neural Netw 5(6):989–993
Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evol Comput 9(2):159–195
Hastie T, Tibshirani R, Friedman J (2001) The elements of statistical learning. Springer, Berlin
Larrañaga P, Lozano JA (eds) (2001) Estimation of distribution algorithms: a new tool for evolutionary computation. Kluwer Academic Publishers, Boston
Mandischer M (2002) A comparison of evolution strategies and backpropagation for neural network training. Neurocomputing 42(1):87–117
McGill R, Tukey JW, Larsen WA (1978) Variations of box plots. Am Stat 32:12–16
Mendes R, Cortez P, Rocha M, Neves J (2002) Particle swarms for feedforward neural network training. In: Proceedings of the 2002 international joint conference on neural networks (IJCNN’02), vol 2. IEEE press, pp 1895–1899
Monmarché N, Venturini G, Slimane M (2000) On how Pachycondyla apicalis ants suggest a new search algorithm. Future Gener Comput Syst 16:937–946
Montana D, Davis L (1989) Training feedforward neural networks using genetic algorithms. In: Proceedings of the eleventh international joint conference on artificial intelligence (IJCAI). Morgan Kaufmann, San Mateo, pp 762–767
Prechelt L (1994) Proben1—a set of neural network benchmark problems and benchmarking rules. Technical Report 21, Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany
Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323:533–536
Socha K (2004) Extended ACO for continuous and mixed-variable optimization. In: Dorigo M, Birattari M, Blum C, Gambardella LM, Mondada F, Stützle T (eds) Proceedings of ANTS 2004 – fourth international workshop on ant algorithms and swarm intelligence. Lecture Notes in Computer Science. Springer, Berlin
Socha K, Blum C (2006) Ant colony optimization. In: Alba E, Martí R (eds) Metaheuristic procedures for training neural networks. Springer, Berlin (in press)
Socha K, Dorigo M (2006) Ant colony optimization for continuous domains. Eur J Oper Res (in press)
Socha K (2003) The influence of run-time limits on choosing ant system parameters. In: Cantu-Paz E et al. (eds) Proceedings of GECCO 2003—genetic and evolutionary computation conference, vol 2723 of LNCS. Springer, Berlin, pp 49–60
Stanley KO, Miikkulainen R (2002) Evolving neural networks through augmenting topologies. Evol Comput 10(2):99–127
Stützle T, Hoos HH (2000) MAX-MIN Ant System. Future Gener Comput Syst 16(8):889–914
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Yao X (1999) Evolving artificial neural networks. Proc IEEE 87(9):1423–1447
Acknowledgments
This work was supported by the Spanish CICYT project OPLINK (grant TIN-2005-08818-C04), and by the Ramón y Cajal program of the Spanish Ministry of Science and Technology of which Christian Blum is a research fellow. This work was also partially supported by the ANTS project, an Action de Recherche Concertée funded by the Scientific Research Directorate of the French Community of Belgium.
Cite this article
Socha, K., Blum, C. An ant colony optimization algorithm for continuous optimization: application to feed-forward neural network training. Neural Comput & Applic 16, 235–247 (2007). https://doi.org/10.1007/s00521-007-0084-z