
Aggregating Regressive Estimators: Gradient-Based Neural Network Ensemble

  • Conference paper
MICAI 2006: Advances in Artificial Intelligence (MICAI 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4293)


Abstract

A gradient-based algorithm for modifying ensemble weights is presented and applied to regression tasks. Simulation results show that the method produces an ensemble of estimators with better generalization than bagging or a single neural network. Like GASEN, it selects a subset of the trained networks, yet it outperforms GASEN, bagging, and the best individual regression estimator.
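The paper's exact update rule is behind the paywall, but the idea the abstract describes can be sketched under stated assumptions: learn combination weights for the trained networks by gradient descent on the ensemble's mean-squared error (weights kept on the simplex via a softmax parameterization, an assumption of this sketch), then keep only the networks whose learned weight exceeds a threshold, mirroring GASEN's selection of "many subnets from all trained networks". All function names here are illustrative, not from the paper.

```python
import numpy as np

def learn_ensemble_weights(preds, y, lr=0.5, steps=500):
    """Gradient descent on the MSE of the weighted ensemble.

    preds: (n_estimators, n_samples) array of individual predictions.
    y:     (n_samples,) regression targets.
    Returns normalized combination weights (they sum to 1).
    """
    theta = np.zeros(preds.shape[0])              # unconstrained parameters
    for _ in range(steps):
        w = np.exp(theta) / np.exp(theta).sum()   # softmax -> simplex
        err = w @ preds - y                       # ensemble residual
        g_w = 2.0 * preds @ err / y.size          # dMSE/dw
        grad = w * (g_w - w @ g_w)                # chain rule through softmax
        theta -= lr * grad
    return np.exp(theta) / np.exp(theta).sum()

def select_subnets(w, threshold=None):
    """GASEN-style selection: keep estimators whose weight beats 1/N."""
    if threshold is None:
        threshold = 1.0 / len(w)
    return np.where(w > threshold)[0]
```

For example, combining two accurate networks with one biased one should drive the third weight toward zero, so `select_subnets` discards it; the surviving subset is then averaged (or re-weighted) as the final regressor.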




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Meng, J., An, K. (2006). Aggregating Regressive Estimators: Gradient-Based Neural Network Ensemble. In: Gelbukh, A., Reyes-Garcia, C.A. (eds) MICAI 2006: Advances in Artificial Intelligence. MICAI 2006. Lecture Notes in Computer Science, vol 4293. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11925231_30

  • DOI: https://doi.org/10.1007/11925231_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-49026-5

  • Online ISBN: 978-3-540-49058-6

  • eBook Packages: Computer Science, Computer Science (R0)
