Learning to learn: automatic adaptation of learning bias
Pages 871–876
Abstract
Traditionally, large areas of machine learning research have concentrated on pattern recognition and its application to a wide variety of problems, both within AI and outside of it. Over several decades of intensive research, an array of learning methodologies has been proposed, accompanied by attempts to evaluate these methods against one another on small sets of real-world problems. Unfortunately, little emphasis has been placed on the problem of learning bias, which is common to all learning algorithms and a major obstacle to constructing a universal pattern recognizer. State-of-the-art learning algorithms exploit some inherent bias when classifying previously unseen patterns, but the ability to automatically adapt this bias, depending on the types of classification problems encountered over time, is largely lacking. In this paper, weaknesses of traditional one-shot learning environments are pointed out, and a move is made towards a learning method that exhibits the ability to learn about learning. Trans-dimensional learning is introduced as a means of automatically adjusting learning bias, and empirical evidence is provided showing that in some instances learning the whole can be simpler than learning a part of it.
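The abstract argues for learners that adjust their own inductive bias as classification tasks accumulate, rather than fixing it in advance. As a rough illustration only (this is not the paper's trans-dimensional learning algorithm), the sketch below treats the neighbourhood size k of a toy nearest-neighbour classifier as the learner's bias and adapts it across a stream of synthetic tasks; the task generator, the candidate biases, and the scoring rule are all assumptions made for the example.

```python
# Hypothetical sketch of bias adaptation across tasks; not the paper's method.
import random

def knn_predict(train, x, k):
    """Classify x by majority vote among its k nearest training points."""
    neighbours = sorted(train, key=lambda p: (p[0] - x) ** 2)[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if 2 * votes >= k else 0

def task_error(train, test, k):
    """Held-out error of a learner whose inductive bias is the choice of k."""
    wrong = sum(knn_predict(train, x, k) != y for x, y in test)
    return wrong / len(test)

def make_task(noise, n=40):
    """A toy 1-D threshold task: label = [x > 0], with label noise."""
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        y = int(x > 0)
        if random.random() < noise:
            y = 1 - y
        data.append((x, y))
    return data[: n // 2], data[n // 2:]

random.seed(0)
candidate_ks = [1, 3, 7, 15]             # the space of biases to adapt over
scores = {k: 0.0 for k in candidate_ks}  # cumulative held-out error per bias

for t in range(10):
    train, test = make_task(noise=0.2)
    chosen_k = min(scores, key=scores.get)  # bias favoured by past tasks
    print(f"task {t}: chose k={chosen_k}, "
          f"error={task_error(train, test, chosen_k):.2f}")
    for k in candidate_ks:                  # update experience with every bias
        scores[k] += task_error(train, test, k)
```

The meta-level here is deliberately simple: biases that performed well on past tasks are preferred for new ones, which captures the adaptation-over-time idea in the abstract without implying anything about the paper's actual mechanism.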
Information

Published: 01 August 1994 (proceedings volume, 1508 pages)
Publisher: AAAI Press
Sponsor: Association for the Advancement of Artificial Intelligence