Abstract
Two key factors in the design of an ensemble of neural networks are how to train the individual networks and how to combine their outputs into a single output. In this paper we focus on the combination module. We propose two methods based on Stacked Generalization as the combination module of an ensemble of neural networks, and we compare these two versions of Stacked Generalization with six statistical combination methods in order to determine the best combiner. The comparison uses the mean increase of performance and the mean percentage of error reduction. The results show that the methods based on Stacked Generalization outperform the classical combiners.
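To make the combination module concrete, the following minimal sketch illustrates Stacked Generalization as an ensemble combiner. It is not the paper's exact setup: scikit-learn's MLPClassifier stands in for the multilayer feedforward networks, the iris dataset stands in for the experimental databases, and, for brevity, the level-1 combiner is trained on in-sample level-0 outputs rather than on the held-out predictions that Wolpert's original scheme prescribes.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Level-0: several independently initialized feedforward networks.
level0 = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=s).fit(X_tr, y_tr)
          for s in range(3)]

def level0_outputs(X):
    # Concatenate the class-probability outputs of every level-0 network;
    # this vector is the input to the combination module.
    return np.hstack([net.predict_proba(X) for net in level0])

# Level-1: the combiner is itself a network, trained on the level-0 outputs
# instead of the raw features.
combiner = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
combiner.fit(level0_outputs(X_tr), y_tr)

print("stacked ensemble accuracy:", combiner.score(level0_outputs(X_te), y_te))

In contrast to a statistical combiner such as averaging or majority voting, the level-1 network here learns how much to trust each level-0 network, which is the property the paper's comparison is designed to measure.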
Cite this paper
Torres-Sospedra, J., Hernández-Espinosa, C., Fernández-Redondo, M. (2006). Combining MF Networks: A Comparison Among Statistical Methods and Stacked Generalization. In: Schwenker, F., Marinai, S. (eds) Artificial Neural Networks in Pattern Recognition. ANNPR 2006. Lecture Notes in Computer Science, vol. 4087. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11829898_19