
Pruned Neural Networks for Regression

  • Conference paper
PRICAI 2000 Topics in Artificial Intelligence (PRICAI 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1886)

Abstract

Neural networks have been widely used as a tool for regression. They are capable of approximating any function and they require no assumptions about the distribution of the data. The most commonly used architectures for regression are feedforward neural networks with one or more hidden layers. In this paper, we present a network pruning algorithm that determines the number of units in the input and hidden layers of the network. We compare the performance of the pruned networks with four regression methods, namely linear regression (LR), Naive Bayes (NB), k-nearest-neighbor (kNN), and the decision tree predictor M5′. On the 32 publicly available data sets tested, the neural network method outperforms NB and kNN when prediction error is measured by root mean squared error. Under this metric, it also performs as well as LR and M5′. Measured by mean absolute error, however, the neural network method outperforms all four other regression methods.
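
The abstract does not spell out the pruning criterion, so the following is only a minimal sketch of the general idea, not the authors' algorithm: train an oversized one-hidden-layer regression network, repeatedly remove the hidden unit with the smallest outgoing weight magnitude, retrain, and stop once error degrades. The magnitude criterion, the tolerance, and all names below are illustrative assumptions; the `rmse_mae` function matches the two error metrics used in the comparison.

```python
# Hypothetical sketch of magnitude-based unit pruning for regression.
# NOT the paper's algorithm: the criterion (smallest outgoing weight
# magnitude) and the 10% error tolerance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x1) + 0.5*x2 + noise.
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)

def init(n_in, n_hidden):
    W1 = 0.5 * rng.standard_normal((n_in, n_hidden))  # input-to-hidden weights
    W2 = 0.5 * rng.standard_normal(n_hidden)          # hidden-to-output weights
    return W1, W2

def forward(X, W1, W2):
    H = np.tanh(X @ W1)            # hidden-layer activations
    return H, H @ W2               # network output

def train(X, y, W1, W2, epochs=2000, lr=0.05):
    """Plain batch gradient descent on mean squared error."""
    for _ in range(epochs):
        H, out = forward(X, W1, W2)
        err = out - y
        gW2 = H.T @ err / len(y)
        gH = np.outer(err, W2) * (1.0 - H**2)   # backprop through tanh
        gW1 = X.T @ gH / len(y)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def rmse_mae(y_true, y_pred):
    """The two error metrics compared in the paper."""
    return (np.sqrt(np.mean((y_true - y_pred) ** 2)),
            np.mean(np.abs(y_true - y_pred)))

# Train an oversized network, then prune hidden units one at a time.
W1, W2 = train(X, y, *init(2, 8))
base_rmse, _ = rmse_mae(y, forward(X, W1, W2)[1])

while W2.shape[0] > 1:
    k = np.argmin(np.abs(W2))                # weakest hidden unit (assumption)
    W1_try = np.delete(W1, k, axis=1)
    W2_try = np.delete(W2, k)
    W1_try, W2_try = train(X, y, W1_try, W2_try, epochs=500)
    rmse, mae = rmse_mae(y, forward(X, W1_try, W2_try)[1])
    if rmse > 1.1 * base_rmse:               # stop once accuracy degrades
        break
    W1, W2 = W1_try, W2_try
    print(f"{W2.shape[0]} hidden units: RMSE={rmse:.3f}, MAE={mae:.3f}")
```

Input units could be pruned the same way by deleting rows of W1 instead of columns; the stopping tolerance is the knob that trades network size against accuracy.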

References

  1. Ash, T. (1989) Dynamic node creation in backpropagation networks. Connection Science, 1(4), 365–375.

  2. Belue, L.M. and Bauer, Jr., K.W. (1995) Determining input features for multilayer perceptrons. Neurocomputing, 7(2), 111–121.

  3. Cybenko, G. (1989) Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2, 303–314.

  4. Dennis, Jr., J.E. and Schnabel, R.B. (1983) Numerical methods for unconstrained optimization and nonlinear equations. Englewood Cliffs, New Jersey: Prentice-Hall.

  5. Frank, E., Trigg, L., Holmes, G. and Witten, I.H. (1998) Naive Bayes for regression. Working Paper 98/15, Dept. of Computer Science, University of Waikato, New Zealand.

  6. Gelenbe, E., Mao, Z.-H. and Li, Y.-D. (1999) Function approximation with spiked random networks. IEEE Trans. on Neural Networks, 10(1), 3–9.

  7. Hornik, K. (1991) Approximation capabilities of multilayer feedforward networks. Neural Networks, 4, 251–257.

  8. Kwok, T.Y. and Yeung, D.Y. (1997) Constructive algorithms for structure learning in feedforward neural networks for regression problems. IEEE Trans. on Neural Networks, 8(3), 630–645.

  9. Kwok, T.Y. and Yeung, D.Y. (1997) Objective functions for training new hidden units in constructive neural networks. IEEE Trans. on Neural Networks, 8(5), 1131–1148.

  10. Mak, B. and Blanning, R.W. (1998) An empirical measure of element contribution in neural networks. IEEE Trans. on Systems, Man, and Cybernetics, Part C, 28(4), 561–564.

  11. Mozer, M.C. and Smolensky, P. (1989) Using relevance to reduce network size automatically. Connection Science, 1(1), 3–16.

  12. Quinlan, J.R. (1992) Learning with continuous classes. In Proc. of the Australian Joint Conference on Artificial Intelligence, 343–348, Singapore.

  13. Steppe, J.M. and Bauer, Jr., K.W. (1996) Improved feature screening in feedforward neural networks. Neurocomputing, 13(1), 47–58.

  14. Setiono, R. and Hui, L.C.K. (1995) Use of a quasi-Newton method in a feedforward neural network construction algorithm. IEEE Trans. on Neural Networks, 6(1), 273–277.

  15. Setiono, R. and Liu, H. (1997) Neural network feature selector. IEEE Trans. on Neural Networks, 8(3), 654–662.

  16. Zurada, J.M., Malinowski, A. and Usui, S. (1997) Perturbation method for deleting redundant inputs of perceptron networks. Neurocomputing, 14(2), 177–193.

  17. Wang, Y. and Witten, I.H. (1997) Induction of model trees for predicting continuous classes. In Proc. of the Poster Papers of the European Conference on Machine Learning. Prague: University of Economics, Faculty of Informatics and Statistics.

  18. Yoon, Y., Guimaraes, T. and Swales, G. (1994) Integrating artificial neural networks with rule-based expert systems. Decision Support Systems, 11, 497–507.

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Setiono, R., Leow, W.K. (2000). Pruned Neural Networks for Regression. In: Mizoguchi, R., Slaney, J. (eds) PRICAI 2000 Topics in Artificial Intelligence. PRICAI 2000. Lecture Notes in Computer Science, vol 1886. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44533-1_51

  • DOI: https://doi.org/10.1007/3-540-44533-1_51

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67925-7

  • Online ISBN: 978-3-540-44533-3
