Abstract
Neural networks are widely used as a tool for regression: they can approximate any continuous function to arbitrary accuracy and require no assumptions about the distribution of the data. The most commonly used architectures for regression are feedforward networks with one or more hidden layers. In this paper, we present a network pruning algorithm that determines the number of units in the input and hidden layers of the network. We compare the performance of the pruned networks to four regression methods: linear regression (LR), Naive Bayes (NB), k-nearest-neighbor (kNN), and the decision tree predictor M5′. On the 32 publicly available data sets tested, the neural network method outperforms NB and kNN when prediction errors are measured by the root mean squared error, and under this metric it performs as well as LR and M5′. Measured by the mean absolute error, the neural network method outperforms all four other regression methods.
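The abstract's conclusions differ depending on whether errors are measured by root mean squared error (RMSE) or mean absolute error (MAE), because RMSE penalizes large deviations quadratically while MAE weights all deviations linearly. A minimal sketch of the two metrics (illustrative only, not the authors' evaluation code):

```python
import math

def rmse(y_true, y_pred):
    # Root mean squared error: squaring amplifies large deviations.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: all deviations contribute linearly,
    # so occasional large errors are penalized less heavily.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Two hypothetical predictors with identical MAE but different RMSE:
y      = [0.0, 0.0, 0.0, 0.0]
pred_a = [1.0, 1.0, 1.0, 1.0]   # uniform error of 1
pred_b = [2.0, 2.0, 0.0, 0.0]   # same total error, concentrated

print(mae(y, pred_a), mae(y, pred_b))    # both 1.0
print(rmse(y, pred_a), rmse(y, pred_b))  # 1.0 vs. ~1.414
```

This is why two methods can swap ranks between the two metrics: a method whose errors are spread evenly can beat one with occasional large misses under RMSE, yet tie or lose under MAE.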
References
Ash, T. (1989) Dynamic node creation in backpropagation networks. Connection Science, 1(4), 365–375.
Belue, L.M. and Bauer Jr., K.W. (1995) Determining input features for multilayer perceptrons. Neurocomputing, 7(2), 111–121.
Cybenko, G. (1989) Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2, 303–314.
Dennis Jr., J.E. and Schnabel, R.B. (1983) Numerical methods for unconstrained optimization and nonlinear equations. Englewood Cliffs, New Jersey: Prentice Hall.
Frank, E., Trigg, L., Holmes, G. and Witten, I.H. (1998) Naive Bayes for regression. Working Paper 98/15, Dept. of Computer Science, University of Waikato, New Zealand.
Gelenbe, E., Mao, Z.-H. and Li, Y.-D. (1999) Function approximation with spiked random networks. IEEE Trans. on Neural Networks, 10(1), 3–9.
Hornik, K. (1991) Approximation capabilities of multilayer feedforward networks. Neural Networks, 4, 251–257.
Kwok, T.Y. and Yeung, D.Y. (1997) Constructive algorithms for structure learning in feedforward neural networks. IEEE Trans. on Neural Networks, 8(3), 630–645.
Kwok, T.Y. and Yeung, D.Y. (1997) Objective functions for training new hidden units in constructive neural networks. IEEE Trans. on Neural Networks, 8(5), 1131–1148.
Mak, B. and Blanning, R.W. (1998) An empirical measure of element contribution in neural networks. IEEE Trans. on Systems, Man, and Cybernetics-Part C, 28(4), 561–564.
Mozer, M.C. and Smolensky, P. (1989) Using relevance to reduce network size automatically. Connection Science, 1(1), 3–16.
Quinlan, R. (1992) Learning with continuous classes. In Proc. of the Australian Joint Conference on Artificial Intelligence, 343–348, Singapore.
Steppe, J.M. and Bauer Jr., K.W. (1996) Improved feature screening in feedforward neural networks. Neurocomputing, 13(1), 47–58.
Setiono, R. and Hui, L.C.K. (1995) Use of a quasi-Newton method in a feedforward neural network construction algorithm. IEEE Trans. on Neural Networks, 6(1), 273–277.
Setiono, R. and Liu, H. (1997) Neural network feature selector. IEEE Trans. on Neural Networks, 8(3), 654–662.
Zurada, J.M., Malinowski, A. and Usui, S. (1997) Perturbation method for deleting redundant inputs of perceptron networks. Neurocomputing, 14(2), 177–193.
Wang, Y. and Witten, I.H. (1997) Induction of model trees for predicting continuous classes. In Proc. of the Poster Papers of the European Conference on Machine Learning. Prague: University of Economics, Faculty of Informatics and Statistics.
Yoon, Y., Guimaraes, T. and Swales, G. (1994) Integrating artificial neural networks with rule-based expert systems. Decision Support Systems, 11, 497–507.
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Setiono, R., Leow, W.K. (2000). Pruned Neural Networks for Regression. In: Mizoguchi, R., Slaney, J. (eds) PRICAI 2000 Topics in Artificial Intelligence. PRICAI 2000. Lecture Notes in Computer Science(), vol 1886. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44533-1_51
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67925-7
Online ISBN: 978-3-540-44533-3