Abstract
Reducing generalization error is one of the principal motivations of research in machine learning. Consequently, a great deal of work has been devoted to classifier aggregation methods that improve, generally by voting techniques, the performance of a single classifier. Among these aggregation methods, Boosting is one of the most practical thanks to its adaptive update of the distribution over the examples, which increases the weight of misclassified examples exponentially. However, the method is criticized for overfitting and for its convergence speed, especially in the presence of noise. In this study, we propose a new approach and modifications to the AdaBoost algorithm. We show that it is possible to improve the performance of Boosting by exploiting the hypotheses generated in former iterations to correct the weights of the examples. An experimental study shows the interest of this new approach, called the hybrid approach.
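To make the update rule concrete, the sketch below shows a minimal AdaBoost loop in Python with decision stumps as weak learners. It is only an illustration under stated assumptions: the use of scikit-learn stumps, the function names, and the commented "hybrid" correction step are not the paper's exact algorithm. The exponential update increases the weight of misclassified examples; the hybrid idea described in the abstract would intervene at that step by consulting hypotheses from former iterations.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=10):
    # y is assumed to take values in {-1, +1}; returns weak hypotheses and their vote weights.
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)            # uniform initial distribution over the examples
    hypotheses, alphas = [], []
    for t in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()       # weighted training error of the weak hypothesis h_t
        if err == 0 or err >= 0.5:     # stop when h_t is perfect or no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        # Exponential update: misclassified examples gain weight, correct ones lose it.
        w *= np.exp(-alpha * y * pred)
        # Hypothetical place for the hybrid correction sketched in the abstract: before
        # renormalizing, the increase could be tempered for examples that the former
        # hypotheses h_1..h_{t-1} already classify correctly, to limit overfitting on noise.
        w /= w.sum()                   # renormalize to a probability distribution
        hypotheses.append(stump)
        alphas.append(alpha)
    return hypotheses, alphas

def adaboost_predict(hypotheses, alphas, X):
    # Weighted majority vote over the weak hypotheses.
    scores = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.sign(scores)

Under these assumptions, the difference from standard AdaBoost would sit entirely in the weight-correction step; the weighted majority vote itself is unchanged.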
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Bahri, E., Nicoloyannis, N., Maddouri, M. (2008). Improving Boosting by Exploiting Former Assumptions. In: Raś, Z.W., Tsumoto, S., Zighed, D. (eds) Mining Complex Data. MCD 2007. Lecture Notes in Computer Science, vol. 4944. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-68416-9_11
DOI: https://doi.org/10.1007/978-3-540-68416-9_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-68415-2
Online ISBN: 978-3-540-68416-9