A comparison of decision tree ensemble creation techniques

IEEE Trans Pattern Anal Mach Intell. 2007 Jan;29(1):173-80. doi: 10.1109/tpami.2007.250609.

Abstract

We experimentally evaluate bagging and seven other randomization-based approaches to creating an ensemble of decision tree classifiers. Statistical tests were performed on experimental results from 57 publicly available data sets. When cross-validation comparisons were tested for statistical significance, the best method was statistically more accurate than bagging on only eight of the 57 data sets. Alternatively, examining the average ranks of the algorithms across the group of data sets, we find that boosting, random forests, and randomized trees are statistically significantly better than bagging. Because our results suggest that using an appropriate ensemble size is important, we introduce an algorithm that decides when a sufficient number of classifiers has been created for an ensemble. Our algorithm uses the out-of-bag error estimate, and is shown to result in an accurate ensemble for those methods that incorporate bagging into the construction of the ensemble.
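The paper's stopping algorithm monitors the out-of-bag (OOB) error as trees are added and halts once enough classifiers have been created. The exact procedure is in the paper; the following is only a minimal illustrative sketch of the general idea, using decision stumps as a stand-in base learner and an assumed plateau rule (stop after `patience` rounds with less than `tol` improvement) rather than the authors' specific criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Fit a depth-1 decision tree: best single-feature threshold split."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            for pol in (1, 0):                 # try both label polarities
                p = pred if pol else 1 - pred
                err = np.mean(p != y)
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best

def predict_stump(stump, X):
    j, t, pol = stump
    pred = (X[:, j] > t).astype(int)
    return pred if pol else 1 - pred

def bagged_ensemble_oob(X, y, max_trees=200, tol=1e-3, patience=10):
    """Grow a bagged ensemble, stopping when the OOB error plateaus.

    Each round draws a bootstrap sample, fits a stump on it, and votes
    with that stump only on the examples left out of its bootstrap
    (the out-of-bag examples), giving an unbiased running error estimate.
    """
    n = len(y)
    votes = np.zeros((n, 2))                   # OOB vote counts per class
    stumps, best_oob, stale = [], np.inf, 0
    for _ in range(max_trees):
        idx = rng.integers(0, n, n)            # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)  # ~37% of examples are out-of-bag
        s = fit_stump(X[idx], y[idx])
        stumps.append(s)
        votes[oob, predict_stump(s, X[oob])] += 1
        scored = votes.sum(axis=1) > 0         # examples with at least one OOB vote
        oob_err = np.mean(votes[scored].argmax(axis=1) != y[scored])
        if oob_err < best_oob - tol:
            best_oob, stale = oob_err, 0
        else:
            stale += 1
            if stale >= patience:              # no meaningful improvement: stop
                break
    return stumps, best_oob
```

On easily separable data (e.g. two well-spread Gaussian blobs) this typically stops well before `max_trees`, which is the point of the technique: the OOB estimate lets the ensemble size be chosen from the training data alone, without a held-out validation set. As the abstract notes, this applies only to methods that incorporate bagging, since non-bagged methods have no out-of-bag examples.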

Publication types

  • Comparative Study
  • Evaluation Study
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Algorithms*
  • Artificial Intelligence*
  • Decision Support Techniques*
  • Information Storage and Retrieval / methods*
  • Pattern Recognition, Automated / methods*