Abstract
In this paper, a tabular reinforcement learning (RL) method based on an improved fuzzy min-max (FMM) neural network, named FMM-RL, is proposed. The FMM neural network is used to partition the state space of the RL problem, with the aim of mitigating the "curse of dimensionality" while markedly improving the speed of convergence. Regions of the state space are represented as FMM hyperboxes, whose minimal and maximal points define the partition boundaries. During training of the FMM neural network, the state space is partitioned through operations on hyperboxes, which yields good generalization over the state space. Finally, the proposed method is applied to behavior learning for a reactive robot. Experiments show that the algorithm can effectively solve the navigation problem in a complicated unknown environment.
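To make the abstract's hyperbox idea concrete, the following is a minimal sketch, not the authors' implementation: it follows the spirit of Simpson's fuzzy min-max networks, mapping each continuous observation to the index of a hyperbox that then serves as one discrete state for a tabular learner. The class name FMMStatePartition, the per-dimension expansion test, and the parameter values theta and gamma are illustrative assumptions, not values from the paper.

```python
import numpy as np

class FMMStatePartition:
    """Hyperbox state-space partitioner in the spirit of Simpson's
    fuzzy min-max (FMM) networks. Each hyperbox is a (min, max) point
    pair in [0, 1]^n, and each hyperbox index doubles as one discrete
    state for a tabular RL method such as Q-learning."""

    def __init__(self, n_dims, theta=0.2, gamma=4.0):
        self.n = n_dims
        self.theta = theta              # maximum allowed hyperbox edge length
        self.gamma = gamma              # sensitivity of the fuzzy membership
        self.v = np.empty((0, n_dims))  # minimal points, one row per hyperbox
        self.w = np.empty((0, n_dims))  # maximal points

    def membership(self, x):
        """Fuzzy membership of observation x in every hyperbox (in [0, 1]);
        1 means x lies inside the box, decaying with distance outside it."""
        if self.v.shape[0] == 0:
            return np.empty(0)
        below = np.maximum(0.0, 1.0 - np.maximum(
            0.0, self.gamma * np.minimum(1.0, self.v - x)))
        above = np.maximum(0.0, 1.0 - np.maximum(
            0.0, self.gamma * np.minimum(1.0, x - self.w)))
        return (below + above).sum(axis=1) / (2.0 * self.n)

    def state_of(self, x):
        """Map a continuous observation to a discrete state index, expanding
        the best-matching hyperbox when the size bound theta permits,
        otherwise creating a new point-sized hyperbox."""
        x = np.asarray(x, dtype=float)
        m = self.membership(x)
        if m.size:
            j = int(np.argmax(m))
            new_v = np.minimum(self.v[j], x)
            new_w = np.maximum(self.w[j], x)
            if np.all(new_w - new_v <= self.theta):  # expansion criterion
                self.v[j], self.w[j] = new_v, new_w
                return j
        self.v = np.vstack([self.v, x])
        self.w = np.vstack([self.w, x])
        return self.v.shape[0] - 1
```

A tabular learner would then keep one row of Q-values per hyperbox index, growing its table whenever state_of creates a new box; this is how a coarse, data-driven partition keeps the Q-table small in place of a fixed fine grid.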
Copyright information
© 2007 Springer Berlin Heidelberg
About this paper
Cite this paper
Duan, Y., Cui, B., Xu, X. (2007). State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network. In: Liu, D., Fei, S., Hou, Z., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4492. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72393-6_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-72392-9
Online ISBN: 978-3-540-72393-6