Abstract
In this work, a hybrid neural network model (HNNM) is proposed that combines the advantages of genetic algorithms, multi-agent systems, and reinforcement learning. To generate networks with few connections and high classification performance, the HNNM can dynamically prune or add hidden neurons at different stages of the training process. Experimental results show that the proposed model outperforms the most commonly used optimization techniques.
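The paper itself does not include code, so the sketch below is only an illustration of the general idea described in the abstract: a network whose hidden layer can be pruned or grown during training. Every name and threshold in it (GrowPruneMLP, prune_threshold, the stagnation check) is a hypothetical choice made for demonstration and is not the authors' HNNM, which additionally uses genetic and multi-agent components not shown here.

```python
# Minimal sketch (assumption-based, not the paper's HNNM): a one-hidden-layer
# sigmoid network that prunes low-contribution hidden neurons and adds fresh
# ones when training stagnates.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GrowPruneMLP:
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))

    def forward(self, X):
        self.H = sigmoid(X @ self.W1)      # hidden activations
        return sigmoid(self.H @ self.W2)   # network output

    def train_step(self, X, Y, lr=0.5):
        out = self.forward(X)
        err = Y - out
        # Standard backpropagation for a one-hidden-layer sigmoid network.
        d_out = err * out * (1 - out)
        d_hid = (d_out @ self.W2.T) * self.H * (1 - self.H)
        self.W2 += lr * self.H.T @ d_out / len(X)
        self.W1 += lr * X.T @ d_hid / len(X)
        return np.mean(err ** 2)

    def prune(self, threshold=0.05):
        # Hypothetical criterion: drop hidden neurons whose outgoing weights
        # are all small, i.e. neurons contributing little to the output.
        keep = np.abs(self.W2).max(axis=1) > threshold
        if keep.sum() >= 2:                # always keep at least two neurons
            self.W1, self.W2 = self.W1[:, keep], self.W2[keep, :]

    def grow(self, n_new=1):
        # Add freshly initialised hidden neurons when learning stagnates.
        self.W1 = np.hstack([self.W1, rng.normal(0, 0.5, (self.W1.shape[0], n_new))])
        self.W2 = np.vstack([self.W2, rng.normal(0, 0.5, (n_new, self.W2.shape[1]))])

# Toy usage on XOR: prune periodically, grow if the error plateaus.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
net, prev_mse = GrowPruneMLP(2, 6, 1), np.inf
for epoch in range(5000):
    mse = net.train_step(X, Y)
    if epoch % 500 == 499:
        net.prune()
        if prev_mse - mse < 1e-4:          # stagnation check (hypothetical)
            net.grow()
        prev_mse = mse
print("final hidden size:", net.W1.shape[1], "mse:", round(mse, 4))
```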
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Gao, P., Chen, C., Zhang, K., Hu, Y., Li, D. (2010). A Hybrid Neural Network Model Based Reinforcement Learning Agent. In: Zhang, L., Lu, B.L., Kwok, J. (eds) Advances in Neural Networks - ISNN 2010. ISNN 2010. Lecture Notes in Computer Science, vol 6063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13278-0_56
DOI: https://doi.org/10.1007/978-3-642-13278-0_56
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-13277-3
Online ISBN: 978-3-642-13278-0
eBook Packages: Computer Science, Computer Science (R0)