Abstract
A back-propagation (BP) neural network has good self-learning, self-adaptation, and generalization ability, but it is easily trapped in local minima and converges slowly. To address these weaknesses, a method is proposed that optimizes the BP algorithm with a genetic algorithm (GA): the GA is used to speed up BP training and to reduce the chance of the network becoming stuck in a local minimum. Experiments on UCI data sets show that, compared with the plain BP algorithm and with a method that uses the GA alone to learn the connection weights, the combined GA–BP training method performs better: it is less prone to local minima, the trained network generalizes better, and the results are more stable.
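The abstract describes a two-stage scheme: a GA first searches the weight space globally for a good starting point, and BP then refines those weights by gradient descent. The paper's exact encoding and operators are not given here, so the following is a minimal sketch under assumed choices (a 2-3-1 sigmoid network on XOR, real-valued chromosomes, one-point crossover, Gaussian mutation, truncation selection); all function names and parameters are illustrative, not the authors' implementation.

```python
import math
import random

random.seed(0)

# Toy data: XOR, a classic case where BP from a bad start can stall.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

N_HID = 3                              # hidden units (assumed for illustration)
N_W = 2 * N_HID + N_HID + N_HID + 1    # input weights + hidden biases + output weights + output bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """Return hidden activations and network output for weight vector w."""
    h = []
    for j in range(N_HID):
        s = w[2 * j] * x[0] + w[2 * j + 1] * x[1] + w[2 * N_HID + j]
        h.append(sigmoid(s))
    o = sum(w[3 * N_HID + j] * h[j] for j in range(N_HID)) + w[-1]
    return h, sigmoid(o)

def mse(w):
    return sum((forward(w, x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

# Stage 1: GA searches globally for good initial weights (fitness = low MSE).
def ga_init(pop_size=30, gens=40):
    pop = [[random.uniform(-2, 2) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)                          # truncation selection
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_W)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:              # Gaussian mutation
                child[random.randrange(N_W)] += random.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=mse)

# Stage 2: BP (batch gradient descent) refines the GA's best individual.
def bp_train(w, lr=0.5, epochs=2000):
    w = list(w)
    for _ in range(epochs):
        grad = [0.0] * N_W
        for x, y in zip(X, Y):
            h, o = forward(w, x)
            d_o = (o - y) * o * (1 - o)            # output-layer delta
            for j in range(N_HID):
                d_h = d_o * w[3 * N_HID + j] * h[j] * (1 - h[j])
                grad[2 * j] += d_h * x[0]
                grad[2 * j + 1] += d_h * x[1]
                grad[2 * N_HID + j] += d_h
                grad[3 * N_HID + j] += d_o * h[j]
            grad[-1] += d_o
        for k in range(N_W):
            w[k] -= lr * grad[k]
    return w

w0 = ga_init()          # global search
w = bp_train(w0)        # local refinement
print("final MSE:", mse(w))
```

The division of labor is the point: the GA's population-based search is unlikely to commit to one basin of attraction, while BP's gradient steps converge quickly once a good basin has been found, which is the complementary behavior the abstract claims for the combined method.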
Ding, S., Su, C. & Yu, J. An optimizing BP neural network algorithm based on genetic algorithm. Artif Intell Rev 36, 153–162 (2011). https://doi.org/10.1007/s10462-011-9208-z