
CIXL2: a crossover operator for evolutionary algorithms based on population features

Published: 01 July 2005

Abstract

In this paper we propose a crossover operator for real-coded evolutionary algorithms that is based on the statistical theory of population distributions. The operator builds on the theoretical distribution of the gene values of the best individuals in the population, taking into account their localization and dispersion features so that these features are inherited by the offspring. Our aim is to optimize the balance between exploration and exploitation in the search process.
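The abstract does not spell out the operator's formulas, but the core idea of extracting per-gene localization and dispersion statistics from the best individuals can be sketched as follows. This is an illustrative assumption, not the authors' exact formulation: a fixed normal quantile stands in for a proper t-distribution quantile, minimization is assumed, and the offspring rule (clipping a parent into the best individuals' confidence interval) is a deliberate simplification.

```python
import numpy as np

def cixl2_sketch(population, fitness, n_best=5, z=1.96, rng=None):
    """Hypothetical sketch of a confidence-interval-based crossover.

    population : (pop_size, n_genes) real-coded individuals
    fitness    : (pop_size,) values; lower is better (an assumption here)
    z          : ~95% normal quantile, standing in for the t-quantile
                 a more careful treatment would use
    """
    rng = np.random.default_rng() if rng is None else rng
    best = population[np.argsort(fitness)[:n_best]]
    mean = best.mean(axis=0)                           # localization of the best genes
    sem = best.std(axis=0, ddof=1) / np.sqrt(n_best)   # dispersion (standard error)
    lower, upper = mean - z * sem, mean + z * sem      # per-gene confidence interval
    parent = population[rng.integers(len(population))]
    # Simplified offspring rule: pull the parent gene-wise into the interval
    # spanned by the best individuals, so their localization and dispersion
    # are passed on to the child.
    offspring = np.clip(parent, lower, upper)
    return offspring, (lower, mean, upper)
```

The returned triple of interval bounds and mean corresponds to the "virtual parents" style of construction common in confidence-interval crossovers.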
In order to test the efficiency and robustness of this crossover, we optimize a set of functions chosen with regard to different criteria, such as multimodality, separability, regularity, and epistasis. With this set of functions we can draw conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple-comparison statistical tests.
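Benchmark suites of this kind typically mix functions along the criteria just named. As a hedged illustration (these particular functions are standard in the evolutionary-computation literature, though the paper's exact suite is not listed on this page):

```python
import numpy as np

def sphere(x):
    # Unimodal and separable: the easiest case for most operators.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Highly multimodal but separable: many regularly spaced local optima.
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def rosenbrock(x):
    # Non-separable, with strong epistasis: each gene's optimal value
    # depends on its neighbor's value.
    x = np.asarray(x, dtype=float)
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))
```

All three have a known global minimum of 0 (at the origin for sphere and Rastrigin, at the all-ones vector for Rosenbrock), which is what makes them convenient yardsticks for comparing operators.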
As an example of how our crossover can be used to solve artificial intelligence problems, we apply the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained outperform standard methods.
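The ensemble-weighting setup can be framed as a fitness function over a real-valued chromosome of weights. The sketch below is an assumed formulation (weighted average of already-trained networks' outputs, scored by mean squared error), not necessarily the paper's exact objective:

```python
import numpy as np

def ensemble_mse(weights, predictions, targets):
    # predictions: (n_networks, n_samples) outputs of already-trained networks.
    # The evolutionary algorithm's chromosome is the weight vector; this
    # function serves as its fitness (lower is better).
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize to a convex combination
    combined = w @ predictions         # weighted ensemble output per sample
    return float(np.mean((combined - np.asarray(targets)) ** 2))
```

Because the weights are real-valued and interact non-separably through the combined prediction, this is exactly the kind of search space where a real-coded crossover operator applies.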



    Published In

    Journal of Artificial Intelligence Research  Volume 24, Issue 1
    July 2005
    892 pages

    Publisher

    AI Access Foundation

    El Segundo, CA, United States

    Publication History

    Published: 01 July 2005
    Received: 01 November 2004
    Published in JAIR Volume 24, Issue 1



    Cited By

    • (2021) On the Evaluation of Competence Measures for Time Series Forecasting. 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1527-1532. doi:10.1109/SMC52423.2021.9659086. Online: 17 Oct 2021.
    • (2021) On the relationship of degree of separability with depth of evolution in decomposition for cooperative coevolution. 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 4823-4830. doi:10.1109/CEC.2016.7744408. Online: 11 Mar 2021.
    • (2020) On the Selection of the Competence Measure for Dynamic Regressor Selection. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1630-1637. doi:10.1109/SMC42975.2020.9282872. Online: 11 Oct 2020.
    • (2019) New feature selection and voting scheme to improve classification accuracy. Soft Computing, 23(22), 12017-12030. doi:10.1007/s00500-019-03757-2. Online: 1 Nov 2019.
    • (2018) Using Social Information to Compose a Similarity Function Based on Friends Attendance at Events. 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8. doi:10.1109/CEC.2018.8477864. Online: 8 Jul 2018.
    • (2017) An empirical evaluation of mutation and crossover operators for multi-objective uncertainty-wise test minimization. Proceedings of the 10th International Workshop on Search-Based Software Testing, pp. 21-27. doi:10.5555/3105427.3105432. Online: 20 May 2017.
    • (2017) Modified Nelder-Mead self organizing migrating algorithm for function optimization and its application. Applied Soft Computing, 51(C), 341-350. doi:10.1016/j.asoc.2016.11.043. Online: 1 Feb 2017.
    • (2017) Ant colony optimization with different crossover schemes for global optimization. Cluster Computing, 20(2), 1247-1257. doi:10.1007/s10586-017-0793-8. Online: 1 Jun 2017.
    • (2017) A genetic algorithm with multi-parent crossover using quaternion representation for numerical function optimization. Applied Intelligence, 46(4), 810-826. doi:10.1007/s10489-016-0867-y. Online: 1 Jun 2017.
    • (2016) A hybrid approach to constrained global optimization. Applied Soft Computing, 47(C), 281-294. doi:10.1016/j.asoc.2016.05.021. Online: 1 Oct 2016.
