Abstract
In this paper we present a new method for creating neural network ensembles. In an ensemble method such as bagging, one must train multiple neural networks to form the ensemble. Here we present a scheme that generates different copies of a network from a single trained network and uses those copies to build the ensemble. The copies are produced by adding controlled noise to the trained base network. We provide a preliminary theoretical justification for our method and validate it experimentally on several standard data sets. Our method can improve on the accuracy of the base network while yielding considerable savings in training time compared to bagging.
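As an illustration of the idea sketched in the abstract, the minimal example below trains one base network and then builds an ensemble from noise-perturbed copies of it. The specific choices here are assumptions made only for illustration and are not taken from the paper: scikit-learn's MLPClassifier as the base network, zero-mean Gaussian noise of scale sigma added to the weights, an ensemble of 15 copies, the Iris data set, and a majority-vote combiner.

# Illustrative sketch: one trained base network, an ensemble of noise-perturbed copies.
# The Gaussian weight noise (scale `sigma`) and majority voting are assumptions made
# for this example; the paper's own perturbation and combination schemes may differ.
import copy
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train a single base network (the only training step needed).
base = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
base.fit(X_tr, y_tr)

def perturbed_copy(model, sigma, rng):
    """Return a copy of `model` with Gaussian noise added to its weights and biases."""
    clone = copy.deepcopy(model)
    clone.coefs_ = [W + rng.normal(0.0, sigma, W.shape) for W in clone.coefs_]
    clone.intercepts_ = [b + rng.normal(0.0, sigma, b.shape) for b in clone.intercepts_]
    return clone

rng = np.random.default_rng(0)
ensemble = [perturbed_copy(base, sigma=0.05, rng=rng) for _ in range(15)]

# Combine the copies by majority vote over their predicted labels.
votes = np.stack([m.predict(X_te) for m in ensemble])  # shape: (n_members, n_samples)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print("base accuracy:    ", (base.predict(X_te) == y_te).mean())
print("ensemble accuracy:", (majority == y_te).mean())

Because only the base network is trained, building the ensemble costs essentially one training run plus a few weight perturbations, which is the source of the training-time savings over bagging claimed in the abstract.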
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Vázquez-Santacruz, E., Chakraborty, D. (2011). An Ensemble of Degraded Neural Networks. In: Martínez-Trinidad, J.F., Carrasco-Ochoa, J.A., Ben-Youssef Brants, C., Hancock, E.R. (eds) Pattern Recognition. MCPR 2011. Lecture Notes in Computer Science, vol 6718. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21587-2_30