MultiBoosting: A Technique for Combining Boosting and Wagging

Published: 01 August 2000

Abstract

MultiBoosting is an extension to the highly successful AdaBoost technique for forming decision committees. MultiBoosting can be viewed as combining AdaBoost with wagging, harnessing both AdaBoost's high bias and variance reduction and wagging's superior variance reduction. Using C4.5 as the base learning algorithm, MultiBoosting is demonstrated to produce decision committees with lower error than either AdaBoost or wagging significantly more often than the reverse, over a large and representative cross-section of UCI data sets. It offers the further advantage over AdaBoost of being amenable to parallel execution.
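
For readers who want a concrete picture of the combination the abstract describes, the sketch below casts it as wagging applied over boosted sub-committees: each sub-committee is an AdaBoost run whose initial instance weights are drawn at random, and the sub-committees then vote as equals. This is an illustration in Python with scikit-learn, not a transcription of the paper's pseudocode; the class name, the parameter defaults, the CART tree standing in for C4.5, and the standard-exponential ("continuous Poisson") weight draws for the wagging step are all assumptions of this sketch.

```python
# A minimal sketch of the MultiBoosting idea, viewed as wagging applied over
# boosted AdaBoost sub-committees. Assumptions of this illustration (not
# taken from the paper's pseudocode): scikit-learn >= 1.2 (for the
# `estimator` parameter), a CART decision tree as a stand-in for C4.5, and
# standard-exponential ("continuous Poisson") instance weights for wagging.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier


class MultiBoostSketch:
    def __init__(self, n_subcommittees=10, boosts_per_subcommittee=10,
                 random_state=None):
        self.n_subcommittees = n_subcommittees
        self.boosts_per_subcommittee = boosts_per_subcommittee
        self.rng = np.random.default_rng(random_state)
        self.subcommittees_ = []

    def fit(self, X, y):
        self.subcommittees_ = []
        for _ in range(self.n_subcommittees):
            # Wagging step: draw fresh random instance weights from a
            # standard exponential distribution and rescale to mean 1.
            weights = self.rng.exponential(scale=1.0, size=len(y))
            weights *= len(y) / weights.sum()
            # Boosting step: run AdaBoost within the sub-committee,
            # starting from the wagged weights.
            booster = AdaBoostClassifier(
                estimator=DecisionTreeClassifier(),
                n_estimators=self.boosts_per_subcommittee,
            )
            booster.fit(X, y, sample_weight=weights)
            self.subcommittees_.append(booster)
        return self

    def predict(self, X):
        # Combine sub-committees by averaging their class-probability
        # estimates; each sub-committee already weights its own members
        # internally as AdaBoost prescribes.
        proba = np.mean([c.predict_proba(X) for c in self.subcommittees_],
                        axis=0)
        return self.subcommittees_[0].classes_[np.argmax(proba, axis=1)]
```

Because each sub-committee is trained independently of the others, the outer loop can be farmed out across processors, which is the suitability for parallel execution that the abstract notes as an advantage over strictly sequential AdaBoost.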

Publisher

Kluwer Academic Publishers, United States

Author Tags

1. aggregation
2. bagging
3. boosting
4. decision committee
5. decision tree
6. wagging
