Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?
manuel.fernandez.delgado@usc.es
eva.cernadas@usc.es
senen.barro@usc.es
CITIUS: Centro de Investigación en Tecnoloxías da Información da USC
University of Santiago de Compostela
Campus Vida, 15872, Santiago de Compostela, Spain
Dinani Amorim
dinaniamorim@gmail.com
Departamento de Tecnologia e Ciencias Sociais- DTCS
Universidade do Estado da Bahia
Av. Edgard Chastinet S/N - São Geraldo - Juazeiro-BA, CEP: 48.305-680, Brasil
Abstract
We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods), implemented in Weka, R (with and without the caret package), C and Matlab, including all the relevant classifiers available today. We use 121 data sets, which represent the whole UCI data base (excluding the large-scale problems) plus other real problems of our own, in order to reach significant conclusions about classifier behavior that do not depend on the data set collection. The classifiers most likely to be the best are the random forest (RF) versions, the best of which (implemented in R and accessed via caret) achieves 94.1% of the maximum accuracy and exceeds 90% of it in 84.3% of the data sets. However, the difference with the second best, the SVM with Gaussian kernel implemented in C using LibSVM (which achieves 92.3% of the maximum accuracy), is not statistically significant. A few models are clearly better than the remaining ones: random forest, SVM with Gaussian and polynomial kernels, extreme learning machine with Gaussian kernel, C5.0 and avNNet (a committee of multi-layer perceptrons implemented in R with the caret package). Random forest is clearly the best family of classifiers (3 out of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top-10), neural networks and boosting ensembles (5 and 3 members in the top-20, respectively).
Keywords: classification, UCI data base, random forest, support vector machine, neural networks, decision trees, ensembles, rule-based classifiers, discriminant analysis, Bayesian classifiers, generalized linear models, partial least squares and principal component regression, multiple adaptive regression splines, nearest-neighbors, logistic and multinomial regression
©2014 Manuel Fernández-Delgado, Eva Cernadas, Senén Barro and Dinani Amorim.
1. Introduction
When a researcher or data analyst faces the classification of a data set, he/she usually applies the classifier which he/she expects to be the best one. This expectation is conditioned by the (often partial) researcher's knowledge about the available classifiers. One reason is that they arise from different fields within computer science and mathematics, i.e., they belong to different classifier families. For example, some classifiers (linear discriminant analysis or generalized linear models) come from statistics, while others come from symbolic artificial intelligence and data mining (rule-based classifiers or decision trees), some others are connectionist approaches (neural networks), and others are ensembles or use regression or clustering approaches, etc. A researcher may not be able to use classifiers arising from areas in which he/she is not an expert (for example, to develop parameter tuning), and is often limited to the methods within his/her domain of expertise. However, there is no certainty that they work better, for a given data set, than other classifiers which seem more exotic to him/her. The lack of available implementations for many classifiers is a major drawback, although it has been partially reduced by the large amount of classifiers implemented in R1 (mainly from Statistics), Weka2 (from the data mining field) and, to a lesser extent,
in Matlab using the Neural Network Toolbox3. Besides, the R package caret (Kuhn, 2008) provides a very easy interface for the execution of many classifiers, allowing automatic parameter tuning and reducing the requirements on the researcher's knowledge (about the tunable parameter values, among other issues). Of course, the researcher can review the literature to learn about classifiers in families outside his/her domain of expertise and, if they work better, use them instead of his/her preferred classifier. However, the papers which propose a new classifier usually compare it only to classifiers within the same family, excluding families outside the authors' area of expertise. Thus, the researcher does not know whether these classifiers work better or not than the ones that he/she already knows. On the other hand, these comparisons are usually developed over a few, although expectedly relevant, data sets. Given that all the classifiers (even the good ones) show strong variations in their results among data sets, the average accuracy (over all the data sets) might be of limited significance if a reduced collection of data sets is used (Macià and Bernadó-Mansilla, 2014). Specifically, some classifiers with a good average performance over a reduced data
set collection could achieve significantly worse results when the collection is extended, and
conversely classifiers with sub-optimal performance on the reduced data collection could be
not so bad when more data sets are included. There are useful guidelines (Hothorn et al.,
2005; Eugster et al., 2014) to analyze and design benchmark exploratory and inferential
experiments, giving also a very useful framework to inspect the relationship between data
sets and classifiers.
Each time we find a new classifier or family of classifiers from areas outside our domain
of expertise, we ask ourselves whether that classifier will work better than the ones that we
use routinely. In order to have a clear idea of the capabilities of each classifier and family, it
would be useful to develop a comparison of a high number of classifiers arising from many
different families and areas of knowledge over a large collection of data sets. The objective
1. See http://www.r-project.org.
2. See http://www.cs.waikato.ac.nz/ml/weka.
3. See http://www.mathworks.es/products/neural-network.
is to select the classifier which most probably achieves the best performance for any data
set. In the current paper we use a large collection of classifiers with publicly available
implementations (in order to allow future comparisons), arising from a wide variety of
classifier families, in order to achieve significant conclusions not conditioned by the number
and variety of the classifiers considered. Using a high number of classifiers it is probable that
some of them will achieve the highest possible performance for each data set, which can
be used as reference (maximum accuracy) to evaluate the remaining classifiers. However,
according to the No-Free-Lunch theorem (Wolpert, 1996), the best classifier will not be the
same for all the data sets. Using classifiers from many families, we are not restricting the
significance of our comparison to one specific family among many available methods. Using
a high number of data sets, it is probable that each classifier will work well in some data
sets and not so well in others, increasing the evaluation significance. Finally, considering
the availability of several alternative implementations for the most popular classifiers, their
comparison may also be interesting. The current work pursues: 1) to select the globally
best classifier for the selected data set collection; 2) to rank each classifier and family
according to its accuracy; 3) to determine, for each classifier, its probability of achieving
the best accuracy, and the difference between its accuracy and the best one; 4) to evaluate
the classifier behavior varying the data set properties (complexity, #patterns, #classes and
#inputs).
Some recent papers have analyzed the comparison of classifiers over large collections of data sets. OpenML (Vanschoren et al., 2012) is a complete web interface4 to anonymously access an experiment data base including 86 data sets from the UCI machine learning data base (Bache and Lichman, 2013) and 93 classifiers implemented in Weka. Although plugins for R, Knime and RapidMiner are under development, currently it only allows the use of Weka classifiers. This environment allows queries about the classifier behavior with respect to tunable parameters, considering several common performance measures, feature selection techniques and bias-variance analysis. There is also an interesting analysis (Macià and Bernadó-Mansilla, 2014) about the use of the UCI repository, raising several interesting criticisms about the usual practice in experimental comparisons. In the following, we synthesize these criticisms (the italicized sentences are literal cites) and describe how we tried to avoid them in our paper:
1. The criterion used to select the data set collection (which is usually reduced) may bias the comparison results. The same authors stated (Macià et al., 2013) that the superiority of a classifier may be restricted to a given domain characterized by some complexity measures, studying why and how the data set selection may change the results of classifier comparisons. Following these suggestions, we use all the data sets in the UCI classification repository, in order to avoid that a small data collection invalidates the conclusions of the comparison. This paper also emphasizes that the UCI repository was not designed to be a complete, reliable framework composed of standardized real samples.
2. The issue about (1) whether the selection of learners is representative enough and (2)
whether the selected learners are properly configured to work at their best performance
4. See http://expdb.cs.kuleuven.be/expdb.
suggests that proposals of new classifiers usually design and tune them carefully, while the reference classifiers are run using a baseline configuration. This issue is also related to the lack of deep knowledge and experience about the details of all the classifiers with available implementations, so that researchers usually do not pay much attention to the selected reference algorithms, which may consequently bias the results in favour of the proposed algorithm. With respect to this criticism, in the current paper we do not propose any new classifier nor changes to existing approaches, so we are not interested in favouring any specific classifier, although we are more experienced with some classifiers than with others (for example, with respect to the tunable parameter values). In this work we develop parameter tuning for the majority of the classifiers used (see below), selecting the best available configuration over a training set. Specifically, the classifiers implemented in R using caret tune these parameters automatically and, even more important, using pre-defined (and supposedly meaningful) values. This fact should compensate for our lack of experience about some classifiers and reduce its relevance on the results.
3. It is still impossible to determine the maximum attainable accuracy for a data set, so that it is difficult to evaluate the true quality of each classifier. In our paper, we use a large amount of classifiers (179) from many different families, so we hypothesize that the maximum accuracy achieved by some classifier is the maximum attainable accuracy for that data set: i.e., we suppose that if no classifier in our collection is able to reach a higher accuracy, none will. We cannot test the validity of this hypothesis, but it seems reasonable that, as the number of classifiers increases, some of them will achieve the largest possible accuracy.
4. Since the data set complexity (measured somehow by the maximum attainable accuracy) is unknown, we do not know if the classification error is caused by unfitted classifier design (learner limitation) or by intrinsic difficulties of the problem (data limitation). In our work, since we consider that the attainable accuracy is the maximum accuracy achieved by some classifier in our collection, we can consider that low accuracies (with respect to this maximum accuracy) achieved by other classifiers are always caused by classifier limitations.
5. The lack of standard data partitioning, defining training and testing data for cross-validation trials. Simply using different data partitionings will eventually bias the results and make the comparison between experiments impossible, something which is also emphasized by other researchers (Vanschoren et al., 2012). In the current paper, each data set uses the same partitioning for all the classifiers, so this issue cannot bias the results in favour of any classifier. Besides, the partitions are publicly available (see Section 2.1), in order to make the replication of the experiments possible (a minimal sketch of how such a fixed partition can be generated and stored is given below).
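As an illustration of this last point, the following R fragment is a minimal sketch (not the script actually used to generate the published partitions) of how a fixed partition can be created once per data set, stored, and then reused by every classifier; the data set, file name and 50/50 proportion are only assumptions for the example.

    # Sketch: build one reproducible train/test partition per data set and store it,
    # so that every classifier is evaluated on exactly the same patterns.
    set.seed(1234)                                   # fixed seed makes the split reproducible
    dat <- iris                                      # stand-in for any of the 121 data sets
    n <- nrow(dat)
    train_idx <- sort(sample(n, round(0.5 * n)))     # 50/50 split assumed here for illustration
    partition <- list(train = train_idx, test = setdiff(seq_len(n), train_idx))
    saveRDS(partition, "iris_partition.rds")         # the stored partition is shared by all classifiers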
The paper is organized as follows: Section 2 describes the collection of data sets and classifiers considered in this work; Section 3 discusses the results of the experiments, and Section 4 compiles the conclusions of the research developed.
Data set              #pat.  #inp.  #cl.  %Maj. | Data set              #pat.  #inp.  #cl.  %Maj.
abalone                4177      8     3   34.6 | energy-y1               768      8     3   46.9
ac-inflam               120      6     2   50.8 | energy-y2               768      8     3   49.9
acute-nephritis         120      6     2   58.3 | fertility               100      9     2   88.0
adult                 48842     14     2   75.9 | flags                   194     28     8   30.9
annealing               798     38     6   76.2 | glass                   214      9     6   35.5
arrhythmia              452    262    13   54.2 | haberman-survival       306      3     2   73.5
audiology-std           226     59    18   26.3 | hayes-roth              132      3     3   38.6
balance-scale           625      4     3   46.1 | heart-cleveland         303     13     5   54.1
balloons                 16      4     2   56.2 | heart-hungarian         294     12     2   63.9
bank                  45211     17     2   88.5 | heart-switzerland       123     12     2   39.0
blood                   748      4     2   76.2 | heart-va                200     12     5   28.0
breast-cancer           286      9     2   70.3 | hepatitis               155     19     2   79.3
bc-wisc                 699      9     2   65.5 | hill-valley             606    100     2   50.7
bc-wisc-diag            569     30     2   62.7 | horse-colic             300     25     2   63.7
bc-wisc-prog            198     33     2   76.3 | ilpd-indian-liver       583      9     2   71.4
breast-tissue           106      9     6   20.7 | image-segmentation      210     19     7   14.3
car                    1728      6     4   70.0 | ionosphere              351     33     2   64.1
ctg-10classes          2126     21    10   27.2 | iris                    150      4     3   33.3
ctg-3classes           2126     21     3   77.8 | led-display            1000      7    10   11.1
chess-krvk            28056      6    18   16.2 | lenses                   24      4     3   62.5
chess-krvkp            3196     36     2   52.2 | letter                20000     16    26    4.1
congress-voting         435     16     2   61.4 | libras                  360     90    15    6.7
conn-bench-sonar        208     60     2   53.4 | low-res-spect           531    100     9   51.9
conn-bench-vowel        528     11    11    9.1 | lung-cancer              32     56     3   40.6
connect-4             67557     42     2   75.4 | lymphography            148     18     4   54.7
contrac                1473      9     3   42.7 | magic                 19020     10     2   64.8
credit-approval         690     15     2   55.5 | mammographic            961      5     2   53.7
cylinder-bands          512     35     2   60.9 | miniboone            130064     50     2   71.9
dermatology             366     34     6   30.6 | molec-biol-promoter     106     57     2   50.0
echocardiogram          131     10     2   67.2 | molec-biol-splice      3190     60     3   51.9
ecoli                   336      7     8   42.6 | monks-1                 124      6     2   50.0

Table 1: Collection of 121 data sets from the UCI data base and our real problems. It shows the number of patterns (#pat.), inputs (#inp.), classes (#cl.) and the percentage of the majority class (%Maj.) for each data set. Continued in Table 2. Some keys are: ac-inflam=acute-inflammation, bc=breast-cancer, congress-vot=congressional-voting, ctg=cardiotocography, conn-bench-sonar/vowel=connectionist-benchmark-sonar-mines-rocks/vowel-deterding, pb=pittsburg-bridges, st=statlog, vc=vertebral-column.
Data set              #pat.  #inp.  #cl.  %Maj. | Data set              #pat.  #inp.  #cl.  %Maj.
monks-2                 169      6     2   62.1 | soybean                 307     35    18   13.0
monks-3                3190      6     2   50.8 | spambase               4601     57     2   60.6
mushroom               8124     21     2   51.8 | spect                    80     22     2   67.1
musk-1                  476    166     2   56.5 | spectf                   80     44     2   50.0
musk-2                 6598    166     2   84.6 | st-australian-credit    690     14     2   67.8
nursery               12960      8     5   33.3 | st-german-credit       1000     24     2   70.0
oocMerl2F              1022     25     3   67.0 | st-heart                270     13     2   55.6
oocMerl4D              1022     41     2   68.7 | st-image               2310     18     7   14.3
oocTris2F               912     25     2   57.8 | st-landsat             4435     36     6   24.2
oocTris5B               912     32     3   57.6 | st-shuttle            43500      9     7   78.4
optical                3823     62    10   10.2 | st-vehicle              846     18     4   25.8
ozone                  2536     72     2   97.1 | steel-plates           1941     27     7   34.7
page-blocks            5473     10     5   89.8 | synthetic-control       600     60     6   16.7
parkinsons              195     22     2   75.4 | teaching                151      5     3   34.4
pendigits              7494     16    10   10.4 | thyroid                3772     21     3   92.5
pima                    768      8     2   65.1 | tic-tac-toe             958      9     2   65.3
pb-MATERIAL             106      4     3   74.5 | titanic                2201      3     2   67.7
pb-REL-L                103      4     3   51.5 | trains                   10     28     2   50.0
pb-SPAN                  92      4     3   52.2 | twonorm                7400     20     2   50.0
pb-T-OR-D               102      4     2   86.3 | vc-2classes             310      6     2   67.7
pb-TYPE                 105      4     6   41.9 | vc-3classes             310      6     3   48.4
planning                182     12     2   71.4 | wall-following         5456     24     4   40.4
plant-margin           1600     64   100    1.0 | waveform               5000     21     3   33.9
plant-shape            1600     64   100    1.0 | waveform-noise         5000     40     3   33.8
plant-texture          1600     64   100    1.0 | wine                    179     13     3   39.9
post-operative           90      8     3   71.1 | wine-quality-red       1599     11     6   42.6
primary-tumor           330     17    15   25.4 | wine-quality-white     4898     11     7   44.9
ringnorm               7400     20     2   50.5 | yeast                  1484      8    10   31.2
seeds                   210      7     3   33.3 | zoo                     101     16     7   40.6
semeion                1593    256    10   10.2 |

Table 2: Continuation of Table 1 (collection of 121 data sets from the UCI data base and our real problems).
must be defined by grouping several values (as in the abalone data set), we follow the instructions in the data set description (file data.names). Given that our classifiers are not designed for data with missing features, the missing inputs are treated as zero, which should not bias the comparison results. For each data set (e.g., abalone) two data files are created: abalone R.dat, designed to be read by the R, C and Matlab classifiers, and abalone.arff, designed to be read by the Weka classifiers.
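The following R fragment sketches this kind of preparation under stated assumptions (the raw file name and column layout are hypothetical, and the class-grouping step for abalone is omitted); it illustrates the described conversion, not the authors' actual script.

    library(foreign)                                    # provides write.arff() for the Weka format
    dat <- read.csv("abalone.data", header = FALSE, stringsAsFactors = FALSE)  # hypothetical raw UCI file
    # (for abalone the class attribute would first be built by grouping values, as described above)
    dat[is.na(dat)] <- 0                                # missing inputs treated as zero
    write.table(dat, "abalone R.dat", row.names = FALSE, col.names = FALSE)    # for the R, C and Matlab classifiers
    write.arff(dat, "abalone.arff")                     # for the Weka classifiers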
2.2 Classifiers
We use 179 classifiers implemented in C/C++, Matlab, R and Weka. Except for the Matlab classifiers, all of them are free software. We only developed our own versions in C for the classifiers proposed by us (see below). Some of the R programs directly use the package that provides the classifier, but others use the classifier through the interface train provided by the caret7 package. This function develops the parameter tuning, selecting the values which maximize the accuracy according to the selected validation procedure (leave-one-out, k-fold, etc.). The caret package also allows defining the number of values used for each tunable parameter, although the specific values cannot be selected. We used all the classifiers provided by Weka, running the command-line version of the Java class for each classifier.
OpenML uses 93 Weka classifiers, of which we included 84. We could not include in our collection the remaining 9 classifiers: ADTree, alternating decision tree (Freund and Mason, 1999); AODE, aggregating one-dependence estimators (Webb et al., 2005); Id3 (Quinlan, 1986); LBR, lazy Bayesian rules (Zheng and Webb, 2000); M5Rules (Holmes et al., 1999); Prism (Cendrowska, 1987); ThresholdSelector; VotedPerceptron (Freund and Schapire, 1998) and Winnow (Littlestone, 1988). The reason is that they only accept nominal (not numerical) inputs, while we converted all the inputs to numeric values. Besides, we did not use the classifiers ThresholdSelector, VotedPerceptron and Winnow, included in OpenML, because they only accept two-class problems. Note that the classifiers LocallyWeightedLearning and RippleDownRuleLearner (Vanschoren et al., 2012) are included in our collection as LWL and Ridor respectively. Furthermore, we also included another 36 classifiers implemented in R, 48 classifiers in R using the caret package, as well as 6 classifiers implemented in C and another 5 in Matlab, summing up to 179 classifiers.
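As an illustration of the caret-based procedure just described, the fragment below is a minimal sketch of how one classifier could be tuned and run through the train interface; the data set, validation scheme and number of candidate values are illustrative, not the exact experimental settings.

    library(caret)
    dat <- iris                                          # stand-in data set
    ctrl <- trainControl(method = "cv", number = 5)      # validation used to pick parameter values
    fit <- train(Species ~ ., data = dat, method = "rf", # e.g., a random forest, as in rf t
                 tuneLength = 5, trControl = ctrl)       # 5 candidate values per tunable parameter
    fit$bestTune                                         # parameter values selected by caret
    pred <- predict(fit, newdata = dat)                  # predictions with the tuned classifier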
In the following, we briefly describe the 179 classifiers of the different families, identified by acronyms (DA, BY, etc., see below), their names and implementations, coded as name implementation, where implementation can be C, m (Matlab), R, t (in R using caret) or w (Weka), and their tunable parameter values (the notation A:B:C means from A to C in steps of B). We found errors using several classifiers accessed via caret, so we used the corresponding R packages directly. This is the case of lvq, bdk, gaussprLinear, glmnet, kernelpls, widekernelpls, simpls, obliqueTree, spls, gpls, mars, multinom, lssvmRadial, partDSA, PenalizedLDA, qda, QdaCov, mda, rda, rpart, rrlda, sddaLDA, sddaQDA and sparseLDA. Some other classifiers, such as Linda, smda and xyf (not listed below), gave errors (both with and without caret) and could not be included in this work. In the R and caret implementations, we specify the function and, in typewriter font, the package which provides that classifier (the function name is omitted when it is equal to the classifier name).
7. See http://caret.r-forge.r-project.org.
14. fda R, flexible discriminant analysis (Hastie et al., 1993), with function fda in the
mda package and the default linear regression method.
15. fda t is the same FDA, also with linear regression but tuning the parameter nprune
with values 2:3:15 (5 values).
16. mda R, mixture discriminant analysis (Hastie and Tibshirani, 1996), with function
mda in the mda package.
17. mda t uses the caret package as interface to function mda, tuning the parameter
subclasses between 2 and 11.
18. pda t, penalized discriminant analysis, uses the function gen.ridge in the mda package, which develops PDA tuning the shrinkage penalty coefficient lambda with values from 1 to 10.
19. rda R, regularized discriminant analysis (Friedman, 1989), uses the function rda in the klaR package. This method uses a regularized group covariance matrix to avoid the problems in LDA derived from collinearity in the data. The parameters lambda and gamma (used in the calculation of the robust covariance matrices) are tuned with values 0:0.25:1 (see the usage sketch after item 20).
20. hdda R, high-dimensional discriminant analysis (Berge et al., 2012), assumes that
each class lives in a different Gaussian subspace much smaller than the input space,
calculating the subspace parameters in order to classify the test patterns. It uses the
hdda function in the HDclassif package, selecting the best of the 14 available models.
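As a usage sketch of the direct (non-caret) R calls in this family, the fragment below runs the rda classifier of item 19 over its 0:0.25:1 grid; evaluating on the training data is done here only to keep the example short, whereas the actual experiments select the parameters on a separate training partition.

    library(klaR)
    dat <- iris                                                             # stand-in data set
    grid <- expand.grid(lambda = seq(0, 1, 0.25), gamma = seq(0, 1, 0.25))  # the 0:0.25:1 grid
    acc <- apply(grid, 1, function(p) {
      fit <- rda(Species ~ ., data = dat, lambda = p["lambda"], gamma = p["gamma"])
      mean(predict(fit, dat)$class == dat$Species)      # resubstitution accuracy (illustration only)
    })
    grid[which.max(acc), ]                              # best (lambda, gamma) pair found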
Bayesian (BY): 6 classifiers.
21. naiveBayes R uses the function NaiveBayes in the R package klaR, with Gaussian kernel, bandwidth 1 and Laplace correction 2.
22. vbmpRadial t, variational Bayesian multinomial probit regression with Gaussian
process priors (Girolami and Rogers, 2006), uses the function vbmp from the vbmp
package, which fits a multinomial probit regression model with radial basis function
kernel and covariance parameters estimated from the training patterns.
23. NaiveBayes w (John and Langley, 1995) uses estimator precision values chosen from
the analysis of the training data.
24. NaiveBayesUpdateable w uses estimator precision values updated iteratively using the training patterns and starting from scratch.
25. BayesNet w is an ensemble of Bayes classifiers. It uses the K2 search method, which develops hill climbing restricted by the input order, using one parent and scores of type Bayes. It also uses the SimpleEstimator method, which estimates the conditional probability tables of the Bayesian network from the training patterns once it has been learnt, with α = 0.5 (initial count).
26. NaiveBayesSimple w is a simple naive Bayes classifier (Duda et al., 2001) which
uses a normal distribution to model numeric features.
38. MultilayerPerceptron w is an MLP network with sigmoid hidden neurons, unthresholded linear output neurons, learning rate 0.3, momentum 0.2, 500 training epochs, and #hidden neurons = (#inputs + #classes)/2.
39. pnn m: probabilistic neural network (Specht, 1990) in Matlab (function newpnn),
tuning the Gaussian spread with 19 values in the range 0.01-10.
40. elm m, extreme learning machine (Huang et al., 2012) implemented in Matlab using
the code freely available9 . We try 6 activation functions (sine, sign, sigmoid, hardlimit,
triangular basis and radial basis) and 20 values for #hidden neurons between 3 and
200. As recommended, the inputs are scaled between [-1,1].
41. elm kernel m is the ELM with Gaussian kernel, which uses the code available from the previous site, tuning the regularization parameter and the kernel spread with values 2^-5..2^14 and 2^-16..2^8 respectively.
42. cascor C, cascade correlation neural network (Fahlman, 1988) implemented in C
using the FANN library (see classifier #32).
43. lvq R is the learning vector quantization (Ripley, 1996) implemented using the function lvq in the class package, with codebook of size 50, and k=5 nearest neighbors.
We selected the best results achieved using the functions lvq1, olvq2, lvq2 and lvq3.
44. lvq t uses caret as interface to function lvq1 in the class package tuning the parameters size and k (the values are specific for each data set).
45. bdk R, bi-directional Kohonen map (Melssen et al., 2006), with function bdk in the kohonen package, a kind of supervised self-organizing map for classification, which maps high-dimensional patterns to 2D.
46. dkp C (direct kernel perceptron) is a very simple and fast kernel-based classifier proposed by us (Fernández-Delgado et al., 2014) which achieves competitive results compared to SVM. The DKP requires the tuning of the kernel spread in the same range 2^-16..2^8 as the SVM.
47. dpp C (direct parallel perceptron) is a small and efficient parallel perceptron network proposed by us (Fernández-Delgado et al., 2011), based on the parallel-delta rule (Auer et al., 2008) with n = 3 perceptrons. The codes for DKP and DPP are freely available10.
62. C5.0Tree t creates a single C5.0 decision tree (Quinlan, 1993) using the function
C5.0 in the homonymous package without parameter tuning.
63. ctree t uses the function ctree in the party package, which creates conditional inference trees by recursively making binary splittings on the variables with the highest association to the class (measured by a statistical test). The threshold in the association
measure is given by the parameter mincriterion, tuned with the values 0.1:0.11:0.99
(10 values).
64. ctree2 t uses the function ctree tuning the maximum tree depth with values up to
10.
65. J48 w is a pruned C4.5 decision tree (Quinlan, 1993) with pruning confidence threshold C=0.25 and at least 2 training patterns per leaf.
66. J48 t uses the function J48 in the RWeka package, which learns pruned or unpruned C4.5 trees with C=0.25.
67. RandomSubSpace w (Ho, 1998) trains multiple REPTree classifiers on randomly selected subsets of inputs (random subspaces). Each REPTree is learnt using information gain/variance and error-based pruning with backfitting. Each subspace includes 50% of the inputs. The minimum variance for splitting is 10^-3, with at least 2 patterns per leaf.
68. NBTree w (Kohavi, 1996) is a decision tree with naive Bayes classifiers at the leaves.
69. RandomTree w is a non-pruned tree where each leaf tests ⌊log2(#inputs + 1)⌋ randomly chosen inputs, with at least 2 instances per leaf, unlimited tree depth, without backfitting and allowing unclassified patterns.
70. REPTree w learns a pruned decision tree using information gain and reduced error
pruning (REP). It uses at least 2 training patterns per leaf, 3 folds for reduced error
pruning and unbounded tree depth. A split is executed when the class variance is
more than 0.001 times the train variance.
71. DecisionStump w is a one-node decision tree which develops classification or regression based on just one input using entropy.
75. JRip t uses the function JRip in the RWeka package, which learns a repeated incremental pruning to produce error reduction (RIPPER) classifier (Cohen, 1995),
tuning the number of optimization runs (numOpt) from 1 to 5.
76. JRip w learns a RIPPER classifier with 2 optimization runs and minimal weights of
instances equal to 2.
77. OneR t (Holte, 1993) uses function OneR in the RWeka package, which classifies using
1-rules applied on the input with the lowest error.
78. OneR w creates a OneR classifier in Weka with at least 6 objects in a bucket.
79. DTNB w learns a decision table/naive-Bayes hybrid classifier (Hall and Frank, 2008),
using simultaneously both decision table and naive Bayes classifiers.
80. Ridor w implements the ripple-down rule learner (Gaines and Compton, 1995) with
at least 2 instance weights.
81. ZeroR w predicts the mean class (i.e., the most populated class in the training data)
for all the test patterns. Obviously, this classifier gives low accuracies, but it serves
to give a lower limit on the accuracy.
82. DecisionTable w (Kohavi, 1995) is a simple decision table majority classifier which
uses BestFirst as search method.
83. ConjunctiveRule w uses a single rule whose antecedent is the AND of several antecedents, and whose consequent is the distribution of available classes. It uses the antecedent information gain to classify each test pattern, and 3-fold REP (see classifier #70) to remove unnecessary rule antecedents.
89. AdaBoostM1 J48 w is an Adaboost.M1 ensemble which combines J48 base classifiers.
90. C5.0 t creates a boosting ensemble of C5.0 decision trees and rule models (function C5.0 in the homonymous package), with and without winnow (feature selection), tuning the number of boosting trials in {1, 10, 20}.
91. MultiBoostAB DecisionStump w (Webb, 2000) is a MultiBoost ensemble, which
combines Adaboost and Wagging using DecisionStump base classifiers, 3 sub-committees,
10 training iterations and 100% of the weight mass to base training on. The same
options are used in the following MultiBoostAB ensembles.
92. MultiBoostAB DecisionTable w combines MultiBoost and DecisionTable, both
with the same options as above.
93. MultiBoostAB IBk w uses MultiBoostAB with IBk base classifiers (see classifier
#157).
94. MultiBoostAB J48 w trains an ensemble of J48 decision trees, using pruning confidence C=0.25 and 2 training patterns per leaf.
95. MultiBoostAB LibSVM w uses LibSVM base classifiers with the optimal C and Gaussian kernel spread selected by the svm C classifier (see classifier #48). We included it for comparison with previous papers (Vanschoren et al., 2012), although a strong classifier such as LibSVM is in principle not recommended as a base classifier.
96. MultiBoostAB Logistic w combines Logistic base classifiers (see classifier #86).
97. MultiBoostAB MultilayerPerceptron w uses MLP base classifiers with the same
options as MultilayerPerceptron w (which is another strong classifier).
98. MultiBoostAB NaiveBayes w uses NaiveBayes base classifiers.
99. MultiBoostAB OneR w uses OneR base classifiers.
100. MultiBoostAB PART w combines PART base classifiers.
101. MultiBoostAB RandomForest w combines RandomForest base classifiers. We tried this classifier for comparison with previous papers (Vanschoren et al., 2012), although RandomForest is itself an ensemble, so it seems of limited use to learn a MultiBoostAB ensemble of RandomForest ensembles.
102. MultiBoostAB RandomTree w uses RandomTrees with the same options as above.
103. MultiBoostAB REPTree w uses REPTree base classifiers.
105. treebag t trains a bagging ensemble of classification trees using the caret interface
to function bagging in the ipred package.
106. ldaBag R creates a bagging ensemble of LDAs, using the function bag of the caret
package (instead of the function train) with option bagControl=ldaBag.
107. plsBag R is the previous one with bagControl=plsBag.
108. nbBag R creates a bagging of naive Bayes classifiers using the previous bag function
with bagControl=nbBag.
109. ctreeBag R uses the same function bag with bagControl=ctreeBag (conditional inference tree base classifiers).
110. svmBag R trains a bagging of SVMs, with bagControl=svmBag.
111. nnetBag R learns a bagging of MLPs with bagControl=nnetBag.
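As an illustration of how these bag-based ensembles (items 106-111) can be built with the caret bag function, a minimal sketch follows; the data set and number of bags are illustrative assumptions, not the experimental configuration.

    library(caret)
    library(MASS)                                  # ldaBag relies on lda() from MASS
    dat <- iris                                    # stand-in data set
    x <- dat[, 1:4]; y <- dat$Species
    fit <- bag(x, y, B = 10,                       # 10 bootstrap bags of LDA base classifiers
               bagControl = bagControl(fit = ldaBag$fit,
                                       predict = ldaBag$pred,
                                       aggregate = ldaBag$aggregate))
    pred <- predict(fit, x)                        # aggregated predictions of the ensemble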
112. MetaCost w (Domingos, 1999) is based on bagging but using cost-sensitive ZeroR
base classifiers and bags of the same size as the training set (the following bagging
ensembles use the same configuration). The diagonal of the cost matrix is null and
the remaining elements are one, so that each type of error is equally weighted.
113. Bagging DecisionStump w uses DecisionStump base classifiers with 10 bagging
iterations.
114. Bagging DecisionTable w uses DecisionTable with BestFirst and forward search,
leave-one-out validation and accuracy maximization for the input selection.
115. Bagging HyperPipes w with HyperPipes base classifiers.
116. Bagging IBk w uses IBk base classifiers, which develop KNN classification tuning
K using cross-validation with linear neighbor search and Euclidean distance.
117. Bagging J48 w with J48 base classifiers.
118. Bagging LibSVM w, with Gaussian kernel for LibSVM and the same options as
the single LibSVM w classifier.
119. Bagging Logistic w, with unlimited iterations and log-likelihood ridge 10^-8 in the
Logistic base classifier.
120. Bagging LWL w uses LocallyWeightedLearning base classifiers (see classifier #148)
with linear weighted kernel shape and DecisionStump base classifiers.
121. Bagging MultilayerPerceptron w with the same configuration as the single MultilayerPerceptron w.
122. Bagging NaiveBayes w with NaiveBayes classifiers.
123. Bagging OneR w uses OneR base classifiers with at least 6 objects per bucket.
124. Bagging PART w with at least 2 training patterns per leaf and pruning confidence
C=0.25.
125. Bagging RandomForest w with forests of 500 trees, unlimited tree depth and ⌊log(#inputs + 1)⌋ inputs.
126. Bagging RandomTree w with RandomTree base classifiers without backfitting, investigating ⌊log2(#inputs) + 1⌋ random inputs, with unlimited tree depth and 2 training patterns per leaf.
127. Bagging REPTree w uses REPTree with 2 patterns per leaf, minimum class variance 0.001, 3 folds for reduced error pruning and unlimited tree depth.
137. RotationForest w (Rodríguez et al., 2006) uses J48 as the base classifier, a principal component analysis filter, groups of 3 inputs, pruning confidence C=0.25 and 2 patterns per leaf.
150. glmnet R trains a GLM via penalized maximum likelihood, with Lasso or elastic-net regularization (Friedman et al., 2010) (function glmnet in the glmnet package). We use the binomial and multinomial distributions for two-class and multi-class problems respectively.
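A minimal sketch of this binomial/multinomial distinction with the glmnet package follows; the use of cv.glmnet to pick the regularization strength is an assumption made here only for illustration, not necessarily the setting used in the experiments.

    library(glmnet)
    dat <- iris                                            # stand-in multi-class data set
    x <- as.matrix(dat[, 1:4]); y <- dat$Species
    cvfit <- cv.glmnet(x, y, family = "multinomial")       # "binomial" would be used for two classes
    pred <- predict(cvfit, newx = x, s = "lambda.min", type = "class")
    mean(pred == y)                                        # training accuracy of the selected model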
151. mlm R (Multi-Log Linear Model) uses the function multinom in the nnet package,
fitting the multi-log model with MLP neural networks.
152. bayesglm t, Bayesian GLM (Gelman et al., 2009), with function bayesglm in the arm package. It creates a GLM using Bayesian functions, an approximate expectation-maximization method, and augmented regression to represent the prior probabilities.
153. glmStepAIC t performs model selection by Akaike information criterion (Venables
and Ripley, 2002) using the function stepAIC in the MASS package.
164. widekernelpls R fits a PLSR model with the function plsr and method = widekernelpls, faster when #inputs is larger than #patterns.
177. ClassificationViaRegression w (Frank et al., 1998) binarizes each class and learns
its corresponding M5P tree/rule regression model (Quinlan, 1992), with at least 4
training patterns per leaf.
178. KStar w (Cleary and Trigg, 1995) is an instance-based classifier which uses an entropy-based similarity to assign a test pattern to the class of its nearest training patterns.
179. gaussprRadial t uses the function gausspr in the kernlab package, which trains a Gaussian process-based classifier, with kernel = rbfdot and kernel spread (parameter sigma) tuned with values {10^i, i = -7,...,2}.
Rank   Acc.   κ      Classifier                      | Rank   Acc.   κ      Classifier
32.9   82.0   63.5   parRF t (RF)                    | 67.3   77.7   55.6   pda t (DA)
33.1   82.3   63.6   rf t (RF)                       | 67.6   78.7   55.2   elm m (NNET)
36.8   81.8   62.2   svm C (SVM)                     | 67.6   77.8   54.2   SimpleLogistic w (LMR)
38.0   81.2   60.1   svmPoly t (SVM)                 | 69.2   78.3   57.4   MAB J48 w (BST)
39.4   81.9   62.5   rforest R (RF)                  | 69.8   78.8   56.7   BG REPTree w (BAG)
39.6   82.0   62.0   elm kernel m (NNET)             | 69.8   78.1   55.4   SMO w (SVM)
40.3   81.4   61.1   svmRadialCost t (SVM)           | 70.6   78.3   58.0   MLP w (NNET)
42.5   81.0   60.0   svmRadial t (SVM)               | 71.0   78.8   58.23  BG RandomTree w (BAG)
42.9   80.6   61.0   C5.0 t (BST)                    | 71.0   77.1   55.1   mlm R (GLM)
44.1   79.4   60.5   avNNet t (NNET)                 | 71.0   77.8   56.2   BG J48 w (BAG)
45.5   79.5   61.0   nnet t (NNET)                   | 72.0   75.7   52.6   rbf t (NNET)
47.0   78.7   59.4   pcaNNet t (NNET)                | 72.1   77.1   54.8   fda R (DA)
47.1   80.8   53.0   BG LibSVM w (BAG)               | 72.4   77.0   54.7   lda R (DA)
47.3   80.3   62.0   mlp t (NNET)                    | 72.4   79.1   55.6   svmlight C (NNET)
47.6   80.6   60.0   RotationForest w (RF)           | 72.6   78.4   57.9   AdaBoostM1 J48 w (BST)
50.1   80.9   61.6   RRF t (RF)                      | 72.7   78.4   56.2   BG IBk w (BAG)
51.6   80.7   61.4   RRFglobal t (RF)                | 72.9   77.1   54.6   ldaBag R (BAG)
52.5   80.6   58.0   MAB LibSVM w (BST)              | 73.2   78.3   56.2   BG LWL w (BAG)
52.6   79.9   56.9   LibSVM w (SVM)                  | 73.7   77.9   56.0   MAB REPTree w (BST)
57.6   79.1   59.3   adaboost R (BST)                | 74.0   77.4   52.6   RandomSubSpace w (DT)
58.5   79.7   57.2   pnn m (NNET)                    | 74.4   76.9   54.2   lda2 t (DA)
58.9   78.5   54.7   cforest t (RF)                  | 74.6   74.1   51.8   svmBag R (BAG)
59.9   79.7   42.6   dkp C (NNET)                    | 74.6   77.5   55.2   LibLINEAR w (SVM)
60.4   80.1   55.8   gaussprRadial R (OM)            | 75.9   77.2   55.6   rbfDDA t (NNET)
60.5   80.0   57.4   RandomForest w (RF)             | 76.5   76.9   53.8   sda t (DA)
62.1   78.7   56.0   svmLinear t (SVM)               | 76.6   78.1   56.5   END w (OEN)
62.5   78.4   57.5   fda t (DA)                      | 76.6   77.3   54.8   LogitBoost w (BST)
62.6   78.6   56.0   knn t (NN)                      | 76.6   78.2   57.3   MAB RandomTree w (BST)
62.8   78.5   58.1   mlp C (NNET)                    | 77.1   78.4   54.0   BG RandomForest w (BAG)
63.0   79.9   59.4   RandomCommittee w (OEN)         | 78.5   76.5   53.7   Logistic w (LMR)
63.4   78.7   58.4   Decorate w (OEN)                | 78.7   76.6   50.5   ctreeBag R (BAG)
63.6   76.9   56.0   mlpWeightDecay t (NNET)         | 79.0   76.8   53.5   BG Logistic w (BAG)
63.8   78.7   56.7   rda R (DA)                      | 79.1   77.4   53.0   lvq t (NNET)
64.0   79.0   58.6   MAB MLP w (BST)                 | 79.1   74.4   50.7   pls t (PLSR)
64.1   79.9   56.9   MAB RandomForest w (BST)        | 79.8   76.9   54.7   hdda R (DA)
65.0   79.0   56.8   knn R (NN)                      | 80.6   75.9   53.3   MCC w (OEN)
65.2   77.9   56.2   multinom t (LMR)                | 80.9   76.9   54.5   mda R (DA)
65.5   77.4   56.6   gcvEarth t (MARS)               | 81.4   76.7   55.2   C5.0Rules t (RL)
65.5   77.8   55.7   glmnet R (GLM)                  | 81.6   78.3   55.8   lssvmRadial t (SVM)
65.6   78.6   58.4   MAB PART w (BST)                | 81.7   75.6   50.9   JRip t (RL)
66.0   78.5   56.5   CVR w (OM)                      | 82.0   76.1   53.3   MAB Logistic w (BST)
66.4   79.2   58.9   treebag t (BAG)                 | 84.2   75.8   53.9   C5.0Tree t (DT)
66.6   78.2   56.8   BG PART w (BAG)                 | 84.6   75.7   50.8   BG DecisionTable w (BAG)
66.7   75.5   55.2   mda t (DA)                      | 84.9   76.5   53.4   NBTree w (DT)

Table 3: Friedman ranking, average accuracy and Cohen κ (accuracy and κ in %) for each classifier, ordered by increasing Friedman ranking. Continued in Table 4. BG = Bagging, MAB = MultiBoostAB.
Rank    Acc.   κ      Classifier                     | Rank    Acc.   κ      Classifier
86.4    76.3   52.6   ASC w (OM)                     | 110.4   71.6   46.5   BG NaiveBayes w (BAG)
87.2    77.1   54.2   KStar w (OM)                   | 111.3   62.5   38.4   widekernelpls R (PLSR)
87.2    74.6   50.3   MAB DecisionTable w (BST)      | 111.9   63.3   43.7   mars R (MARS)
87.6    76.4   51.3   J48 t (DT)                     | 111.9   62.2   39.6   simpls R (PLSR)
87.9    76.2   55.0   J48 w (DT)                     | 112.6   70.1   38.0   sddaLDA R (DA)
88.0    76.0   51.7   PART t (DT)                    | 113.1   61.0   38.2   kernelpls R (PLSR)
89.0    76.1   52.4   DTNB w (RL)                    | 113.3   68.2   39.5   sparseLDA R (DA)
89.5    75.8   54.8   PART w (DT)                    | 113.5   70.1   46.5   NBUpdateable w (BY)
90.2    76.6   48.5   RBFNetwork w (NNET)            | 113.5   70.7   39.9   stepLDA t (DA)
90.5    67.5   45.8   bagging R (BAG)                | 114.8   58.1   32.4   bayesglm t (GLM)
91.2    74.0   50.9   rpart t (DT)                   | 115.8   70.6   46.4   QdaCov t (DA)
91.5    74.0   48.9   ctree t (DT)                   | 116.0   69.5   39.6   stepQDA t (DA)
91.7    76.6   54.1   NNge w (NN)                    | 118.3   67.5   34.3   sddaQDA R (DA)
92.4    72.8   48.5   ctree2 t (DT)                  | 118.9   72.0   45.9   NaiveBayesSimple w (BY)
93.0    74.7   50.1   FilteredClassifier w (OM)      | 120.1   55.3   33.3   gpls R (PLSR)
93.1    74.8   51.4   JRip w (RL)                    | 120.8   57.6   32.5   glmStepAIC t (GLM)
93.6    75.3   51.1   REPTree w (DT)                 | 122.2   63.5   35.1   AdaBoostM1 w (BST)
93.6    74.7   52.3   rpart2 t (DT)                  | 122.7   68.3   39.4   LWL w (OEN)
94.3    75.1   50.7   BayesNet w (BY)                | 126.1   50.8   30.5   glm R (GLM)
94.4    73.5   49.5   rpart R (DT)                   | 126.2   65.7   44.7   dpp C (NNET)
94.5    76.4   54.5   IB1 w (NN)                     | 129.6   62.3   31.8   MAB w (BST)
94.6    76.5   51.6   Ridor w (RL)                   | 130.9   64.2   33.2   BG OneR w (BAG)
95.1    71.8   48.7   lvq R (NNET)                   | 130.9   62.1   29.6   MAB IBk w (BST)
95.3    76.0   53.9   IBk w (NN)                     | 132.1   63.3   36.2   OneR t (RL)
95.3    73.9   45.8   Dagging w (OEN)                | 133.2   64.2   34.3   MAB OneR w (BST)
96.0    74.4   50.7   qda t (DA)                     | 133.4   63.3   33.3   OneR w (RL)
96.5    71.9   48.1   obliqueTree R (DT)             | 133.7   61.8   28.3   BG DecisionStump w (BAG)
97.0    68.9   42.0   plsBag R (BAG)                 | 135.5   64.9   42.4   VFI w (OM)
97.2    73.9   52.1   OCC w (OEN)                    | 136.6   60.4   27.7   ConjunctiveRule w (RL)
99.5    71.3   44.9   mlp m (NNET)                   | 137.5   60.3   26.5   DecisionStump w (DT)
99.6    74.4   51.6   cascor C (NNET)                | 138.0   56.6   15.1   RILB w (BST)
99.8    75.3   52.7   bdk R (NNET)                   | 138.6   60.3   26.1   BG HyperPipes w (BAG)
100.8   73.8   48.9   nbBag R (BAG)                  | 143.3   53.2   17.9   spls R (PLSR)
101.6   73.6   49.3   naiveBayes R (BY)              | 143.8   57.8   24.3   HyperPipes w (OM)
103.2   72.2   44.5   slda t (DA)                    | 145.8   53.9   15.3   BG MLP w (BAG)
103.6   72.8   41.3   pam t (OM)                     | 154.0   49.3    3.2   Stacking w (STC)
104.5   62.6   33.1   nnetBag R (BAG)                | 154.0   49.3    3.2   Grading w (OEN)
105.5   72.1   46.7   DecisionTable w (RL)           | 154.0   49.3    3.2   CVPS w (OM)
106.2   72.7   48.0   MAB NaiveBayes w (BST)         | 154.1   49.3    3.2   StackingC w (STC)
106.6   59.3   71.7   logitboost R (BST)             | 154.5   49.2    7.6   MetaCost w (BAG)
106.8   68.1   41.5   PenalizedLDA R (DA)            | 154.6   49.2    2.7   ZeroR w (RL)
107.5   72.5   48.3   NaiveBayes w (BY)              | 154.6   49.2    2.7   MultiScheme w (OEN)
108.1   69.4   44.6   rbf m (NNET)                   | 154.6   49.2    5.6   CSC w (OEN)
108.2   71.5   49.8   rrlda R (DA)                   | 154.6   49.2    2.7   Vote w (OEN)
109.4   65.2   46.5   vbmpRadial t (BY)              | 157.4   52.1   25.13  CVC w (OM)
110.0   73.9   51.0   RandomTree w (DT)              |

Table 4: Continuation of Table 3: Friedman ranking, average accuracy and Cohen κ (accuracy and κ in %) for each classifier, ordered by increasing Friedman ranking. BG = Bagging, MAB = MultiBoostAB.
set may vary with respect to previous papers in the literature due to resampling differences.
Although a leave-one-out validation might be more adequate (because it does not depend
on the data partitioning), especially for the small data sets, it would not be feasible for some other, larger data sets included in this study.

Figure 1: Left: maximum accuracy (blue) and majority class percentage (red), both in %, ordered by increasing %Maj. for each data set. Right: histogram of the accuracy achieved by parRF t (measured as a percentage of the best accuracy for each data set).
3.1 Average Accuracy and Friedman Ranking
Given its huge size (21,659 entries), the table with the complete results11 is not included in the paper. Taking into account all the trials developed for parameter tuning in many classifiers (number of tunable parameters and number of values used for tuning), the total number of experiments is 241,637. The average accuracy for each classifier is calculated excluding the data sets in which that classifier found errors (denoted as -- in the complete table). Figure 1 (left panel) plots, for each data set, the percentage of the majority class (see columns %Maj. in Tables 1 and 2) and the maximum accuracy achieved by some classifier, ordered by increasing %Maj. Except for very few unbalanced data sets (with very populated majority classes), the best accuracy is much higher than %Maj. (which is the accuracy achieved by the classifier ZeroR w). The Friedman ranking (Sheskin, 2006) was also computed to statistically sort the classifiers (this rank increases with the classifier error), taking into account the whole data set collection. Given that this test requires the same number of accuracy values for all the classifiers, in the error cases we use (only for this test) the average accuracy for that data set over all the classifiers.
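The per-data-set mean ranks behind this ordering can be computed as sketched below in R; the accuracy matrix here is synthetic and only fixes the layout (in the experiments it has 121 rows and 179 columns, with the error cases filled as just described).

    # acc: one row per data set, one column per classifier; NA marks error cases (synthetic example)
    set.seed(1)
    acc <- matrix(runif(121 * 5, 50, 100), nrow = 121, dimnames = list(NULL, paste0("clf", 1:5)))
    acc[sample(length(acc), 20)] <- NA
    row_avg <- rowMeans(acc, na.rm = TRUE)        # average accuracy of each data set over all classifiers
    acc_filled <- acc
    acc_filled[is.na(acc_filled)] <- row_avg[row(acc)[is.na(acc)]]  # fill error cases (only for this test)
    ranks <- t(apply(-acc_filled, 1, rank))       # rank 1 = highest accuracy on that data set
    sort(colMeans(ranks))                         # Friedman (mean) rank: lower is better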
Tables 3 and 4 report the Friedman ranking, the average accuracy and the Cohen κ (Carletta, 1996), which excludes the probability of classifier success by chance, for the 179 classifiers, ordered following the Friedman ranking. The best classifier is parRF t (parallel random forest implemented in R using the randomForest and caret packages), with rank 32.9, average accuracy 82.0% (16.3) and κ = 63.5% (30.6), followed by rf t (random forest using the randomForest package and tuned with caret), with rank 33.1 and the highest accuracy, 82.3% (15.3), and κ = 63.6% (30.0). This result is
11. See http://persoal.citius.usc.es/manuel.fernandez.delgado/papers/jmlr/results.txt.
somehow surprising, because random forest is an old method, but it works better than other, newer classifiers.

Figure 2: Left: for each percentage of the maximum accuracy on the horizontal axis, the vertical axis shows the percentage of data sets for which parRF t exceeds that percentage of the maximum accuracy. Right: accuracy (in %) achieved by parRF t (red) and maximum accuracy (blue) for each data set (ordered by increasing maximum accuracy).

The high deviations in accuracies and κ are expected, due to the
large amount and variability of the data sets. Since parRF t is a parallel version of rf t using different random seeds, the difference between both can be considered not significant: parRF t achieves a better Friedman ranking, while rf t achieves better accuracy and κ. Similar situations arise with other pairs of classifiers within the same family, which are slightly different versions of the same classifier or versions with/without parameter tuning (svmRadial t and svmRadialCost t, lda R and lda2 t, among others), with similar results, the difference between them being caused by noise, random initializations, etc. The parRF t is the best classifier in 12 out of 121 data sets, and its average accuracy is 4.9% below the maximum average accuracy (i.e., the maximum accuracy over all the classifiers for each data set, averaged over all the data sets), which is 86.9%. It is very significant (and it cannot be casual) that, among so many classifiers (179), the two best ones (parRF t and rf t, according both to average accuracy and Friedman rank) are random forests implemented with the randomForest package and tuned with caret: this fact shows a clear superiority with respect to the remaining classifiers. It is also interesting that an old classifier such as RF works better than many other, more recent, approaches. Figure 1 (right panel) shows that for the majority of the data sets (specifically for 102 out of 121, which represents 84.3%), parRF t achieves more than 90% of the maximum accuracy, being very near to the best accuracy for almost all the data sets. Figure 2 (left panel) plots, for each % of the maximum accuracy on the horizontal axis, the % of data sets for which parRF exceeds that percentage: for 93% (resp. 84.3%) of the data sets parRF achieves more than 80% (resp. 90%) of the maximum accuracy. In this figure, the areas under the curve (AUC) of the three best classifiers (parRF t, rf t and svm C) are 0.9349, 0.9382 and 0.9312 respectively, rf t being slightly better than parRF t (as with the accuracy in Table 3) and
svm C slightly worse. As we commented in the introduction, given the large number of classifiers used in this work, it is reasonable to estimate the maximum attainable accuracy for a data set as the maximum accuracy achieved by some classifier. Therefore, although the No-Free-Lunch theorem states that no classifier can always be the best, in practice parRF t is very near to the best attainable accuracy for almost all the data sets. Specifically, Figure 2 (right panel) shows that parRF is very near to the maximum accuracy for almost all the data sets, except for three of them: #41 (image-segmentation, 33.6%), #70 (audiology-std, 13.0%) and #114 (balloons, 66.7%).
The third best classifier is svm C (LibSVM with Gaussian kernel), with rank 36.8 (3.9 points above parRF t) and average accuracy 81.8% (16.2). The following classifiers are: svmPoly t (SVM with polynomial kernel, rank 38.0), rforest R (random forest without mtry tuning, rank 39.4), elm kernel m (extreme learning machine, rank 39.6), svmRadialCost t (40.3), svmRadial t (42.5), C5.0 t (42.9) and avNNet t (44.1). It may not be a coincidence that three RF and two SVM classifiers are among the five best ones, identifying both classifier families as the best. Besides, there are also two neural networks and one boosting ensemble (C5.0 t) among the top-10. Figure 3 shows the 25 classifiers with the lowest Friedman ranks (upper panel) and the classifiers with the highest average accuracies (lower panel): parRF t and rf t have ranks clearly lower than svm C and the following classifiers. In fact, the highest increment (3.7) between two consecutive classifier ranks is between rf t and svm C, which shows that parRF t and rf t are clearly better than the remaining classifiers in the plot. Besides, rf t, parRF t, svm C, rforest R and elm kernel m have higher accuracies than the others (the largest accuracy reduction, 0.37, is between svm C and svmRadialCost t). Our proposal dkp C is in the 23rd (resp. 21st) position according to the Friedman ranking (resp. to the accuracy, 79.7%), but this apparently good result is somehow obscured by the low value of κ (42.6%). It is caused by some data sets where dkp C assigns all the patterns to the most populated class: for these data sets κ = 0, which reduces the average over all the data sets.
We developed paired T-tests comparing the accuracies of parRF t and the following 9 classifiers in Table 3 (the null hypothesis is that the two accuracies compared are not significantly different, so that, within a tolerance α = 0.05, parRF t is significantly better than the other classifier when p < 0.05). Figure 4 (left panel) plots the T-statistic, 95% limits and p-values, showing that parRF t is only significantly better (high T-statistic, p < 0.05) than C5.0 t and avNNet t. Although parRF t is better than svm C in 56 of the 121 data sets, worse than svm C in 55 sets, and equal in 10 sets, Figure 4 (right panel) compares their percentages of the maximum accuracy for each data set (ordered by increasing percentages): for the majority of the data sets both are almost 100% (i.e., parRF t and svm C are near to the maximum accuracy). Besides, svm C is never much better than parRF t: when svm C outperforms parRF t, the difference is small, but when parRF t outperforms svm C, the difference is higher (data sets 1-20). In fact, calculating for each data set the difference between the accuracies of parRF t and svm C, the positive differences (parRF is better) sum to 193.8, while the negative ones (svm C better) sum to 139.8.
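Each of these comparisons is a standard paired T-test over the 121 per-data-set accuracies; a sketch in R follows, using synthetic accuracy vectors as placeholders for the real per-data-set results.

    set.seed(2)
    acc_parRF <- runif(121, 60, 100)                  # synthetic stand-in for parRF t accuracies
    acc_other <- acc_parRF + rnorm(121, 0, 3)         # synthetic stand-in for a competing classifier
    tt <- t.test(acc_parRF, acc_other, paired = TRUE, conf.level = 0.95)
    c(statistic = tt$statistic, p.value = tt$p.value) # quantities plotted in Figure 4 (left panel)
    tt$conf.int                                       # 95% confidence interval of the mean difference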
Figure 3: Friedman rank (upper panel, increasing order) and average accuracies (lower panel, decreasing order) for the 25 best classifiers.

All the classifiers of the random forest and SVM families are included among the 25 best classifiers, with accuracies above 79% (while the best is 82.3%), which identifies both families as the best ones. Other classifiers included among the top-20, not belonging to the RF
and SVM families, are nnet t (MLP network, rank 45.5), pcaNNet t (MLP + PCA network, rank 47.0), Bagging LibSVM w (ensemble of Gaussian LibSVMs, rank 47.1), mlp t (RSNNS MLP with tunable network size, rank 47.3), MultiBoostAB LibSVM w (MultiBoostAB ensemble of Gaussian LibSVMs, rank 52.5) and adaboost R (Adaboost.M1 ensemble of decision trees, rank 57.6). Beyond the 20th position are pnn m (probabilistic neural network with tunable Gaussian spread, rank 58.5) and our proposal dkp C (rank 59.9). Besides, note that 12 classifiers in the top-20 use caret, which might be due to the automatic parameter tuning (only rforest R and adaboost R have no tunable parameter). We must emphasize that, since parameter tuning and testing use different data sets, the final result cannot be biased by parameter optimization, because the set of parameter values selected in the tuning stage is not necessarily the best on the test set. In some cases, the tuning is not relevant: for C5.0 t the differences among the performances using different parameter values are low, so it would work similarly without parameter tuning.
Figure 4: Left panel: T-statistics (points), confidence intervals and p-values (above the upper interval limits) of the T-tests comparing parRF t and the remaining 9 best classifiers. Right panel: percentage of the maximum accuracy achieved by parRF t (blue) and svm C (red) for the 121 data sets (ordered by increasing percentage).

OpenML (Vanschoren et al., 2012) uses only 86 data sets and 93 classifiers, while our work is much wider (121 and 179, respectively), and includes the Weka classifiers in a later version (3.6.9); for example, OpenML uses Bagging 1.31.2.2, while we use Bagging version 6502. Besides, as we commented above, we do not use 9 of the 93 classifiers included in the previous reference. The results in Figure 17 of that paper rank Bagging-NBayesTree as the best classifier, followed by Bagging-PART, SVM-Polynomial, MultilayerPerceptron, Boosting-NBayesTree, RandomForest, Boosting-PART, Bagging-C45, Boosting-C45 and SVM-RBF. However, in our results the best Weka classifiers (in the top-20) are Bagging LibSVM w, RotationForest w, MultiBoostAB LibSVM w and LibSVM w, i.e., a random forest, an SVM and two
ensembles of SVMs. This is expected, because it is known that ensembles of strong classifiers do not work better than the single classifier. Therefore, Bagging and MultiBoostAB of LibSVM do not work better than LibSVM w, although the three are worse than svm C (the same Gaussian LibSVM in C) and the caret SVM versions (svmPoly t, svmRadialCost t and svmRadial t). Besides, similarly to OpenML, in our work svmPoly t (polynomial kernel) is near to svm C (Gaussian kernel). However, in our results Bagging NaiveBayes w works very badly (rank 110.4, Table 4), while other ensembles and related Weka classifiers rank better: Bagging PART w (66.6), MultilayerPerceptron w (70.6), MultiBoostAB NaiveBayes w (equivalent to Boosting-NaiveBayes in OpenML, rank 106.2) and MultiBoostAB PART w (65.6). Overall, in our experiments the Bagging and MultiBoostAB ensembles (except for those of LibSVM w) do not work well. We use the same configurations for Bagging (10 bagging iterations, 100% of the training set for bag size, changing only the base learner) and MultiBoostAB (3 sub-committees, 10 boost iterations and 100% of the weight mass used to build classifiers) as OpenML, so these bad results cannot be caused by an improper configuration (or parameter tuning) of the ensemble or base classifier. Therefore, they might be caused by the larger number of data sets, or by the inclusion in our collection of other classifiers and implementations (in R, caret, C and Matlab), with better accuracies, not considered by OpenML.
No.  Classifier               PAMA | No.  Classifier               PAMA
1    elm kernel m             13.2 | 11   mlp t                     5.0
2    svm C                    10.7 | 12   pnn m                     5.0
3    parRF t                   9.9 | 13   dkp C                     5.0
4    C5.0 t                    9.1 | 14   LibSVM w                  5.0
5    adaboost R                9.1 | 15   svmPoly t                 5.0
6    rforest R                 8.3 | 16   treebag t                 5.0
7    nnet t                    6.6 | 17   RRFglobal t               5.0
8    svmRadialCost t           6.6 | 18   svmlight C                5.0
9    rf t                      5.8 | 19   Bagging RandomForest w    4.1
10   RRF t                     5.8 | 20   mda t                     4.1

No.  Classifier               P95  | No.  Classifier               P95
1    parRF t                  71.1 | 11   elm kernel m             60.3
2    svm C                    70.2 | 12   MAB-LibSVM w             60.3
3    rf t                     68.6 | 13   RandomForest w           57.0
4    rforest R                65.3 | 14   RRF t                    56.2
5    Bagging-LibSVM w         63.6 | 15   pcaNNet t                55.4
6    svmRadialCost t          63.6 | 16   RotationForest w         54.5
7    svmRadial t              62.8 | 17   avNNet t                 53.7
8    svmPoly t                62.8 | 18   nnet t                   53.7
9    LibSVM w                 62.0 | 19   RRFglobal t              53.7
10   C5.0 t                   61.2 | 20   mlp t                    52.1

No.  Classifier               PMA  | No.  Classifier               PMA
1    parRF t                  94.1 | 11   RandomCommittee w        91.4
2    rf t                     93.6 | 12   nnet t                   91.3
3    rforest R                93.3 | 13   avNNet t                 91.1
4    C5.0 t                   92.5 | 14   RRFglobal t              91.0
5    RotationForest w         92.5 | 15   knn R                    90.5
6    svm C                    92.3 | 16   Bagging-LibSVM w         90.5
7    mlp t                    92.1 | 17   Bagging REPTree w        90.4
8    LibSVM w                 91.7 | 18   MAB MLP w                90.4
9    RRF t                    91.4 | 19   elm m                    90.3
10   dkp C                    91.4 | 20   rda R                    90.3

Table 5: Up: the 20 classifiers with the highest Probability of Achieving the Maximum Accuracy (PAMA, in %). Middle: the 20 classifiers with the highest probability of achieving 95% of the maximum accuracy (P95, in %) over all the data sets. Down: the 20 classifiers with the highest Percentage of the Maximum Accuracy (PMA, in %), averaged over all the data sets. MAB means MultiBoostAB.
The PAMA values (upper part of Table 5) show that no classifier is the best for most data sets (in line with the No-Free-Lunch theorem): the highest value, achieved by elm kernel m, is only 13.2%, followed by svm C (10.7%) and parRF t (9.9%). The C5.0 t and adaboost R have about 9%. The remaining classifiers are about 4-8%, so many classifiers are the best for only a few data sets. There are 5 classifiers of the RF family, 5 SVMs, 5 NNETs and 4 ensembles among the 20 classifiers with the highest probabilities of being the best. Our proposal dkp C achieves the 13th position.
The PAMA does not take into account that a classifier may be very near to the best accuracy without being the best one. Therefore, an alternative, more significant measure is the probability of achieving more than 95% of the maximum accuracy (P95), shown in the middle part of Table 5 for the best 20 values. This probability (in %) is estimated, for a given classifier, by dividing the number of data sets in which it achieves 95% or more of the maximum accuracy (achieved by any classifier on that data set) by the total number of data sets. The ten classifiers with the highest P95 are almost the same as in the Friedman ranking, although in a different order. In this table, parRF t achieves more than 95% of the maximum accuracy for 71.1% of the data sets (again far from 100%), followed by svm C (70.2%) and rf t (68.6%). The other classifiers have P95 below 65%. The relatively low P95 of elm kernel m (60.3%, 11th position), despite being the best classifier for the highest number of data sets, shows a behavior less stable than those of rf t, parRF t and svm C, because its accuracy on the other data sets is lower on average.
Another interesting measurement is the Percentage of the Maximum Accuracy (PMA) achieved by each classifier, averaged over the whole collection of data sets (the first 20 are shown in the lower part of Table 5). Again, parRF t is the best, achieving 94.1% (11.3) of the maximum accuracy, followed by two other Random Forests: rf t and rforest R (93.6% and 93.3% respectively). The svm C is in the 6th position, with PMA 92.3% (15.9). Note that six out of the eight Random Forest classifiers are in the top-20. The PMA values are high, very near to, but below, the threshold of 95% used in the middle part of Table 5. This explains the low values of P95: the best classifiers have PMA about 94%, so their probability of achieving 95% or more of the maximum accuracy is low (about 70%). Setting the threshold at 90% of the maximum accuracy, the corresponding probabilities would be much higher. The elm kernel m is not included in this table, which confirms its unstable behavior: on average it does not achieve a PMA above 90.3% (even elm m, without kernels, has a better PMA). The mlp t (92.2%) also has a good value. The dkp C is in the 10th position, achieving on average 91.4%, only 2.7 points below the best. The 20 classifiers lie in a narrow margin between 90% and 94% of the maximum accuracy, so many classifiers achieve a high percentage of the maximum accuracy. Figure 5 shows that the three Random Forests (parRF t, rf t and rforest R) achieve PMAs clearly higher than the remaining classifiers (including svm C), the greatest gap (0.8) being between rforest R and C5.0 t.
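To make these three measures concrete, the following R sketch (illustrative code written for this description, not the code used in the experiments; the accuracy matrix and classifier names are synthetic) computes PAMA, P95 and PMA from a data sets x classifiers matrix of test accuracies.

# acc: accuracies (in %) of each classifier (columns) on each data set (rows).
# Filled with random values here only to make the sketch runnable.
set.seed(1)
acc <- matrix(50 + 50 * runif(121 * 5), nrow = 121,
              dimnames = list(NULL, c("clfA", "clfB", "clfC", "clfD", "clfE")))

best <- apply(acc, 1, max)                   # maximum accuracy achieved on each data set

pama <- 100 * colMeans(acc == best)          # % of data sets where the classifier is the best
p95  <- 100 * colMeans(acc >= 0.95 * best)   # % of data sets reaching 95% of the maximum
pma  <- 100 * colMeans(acc / best)           # average % of the maximum accuracy

round(rbind(PAMA = pama, P95 = p95, PMA = pma), 1)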
Figure 5: Twenty classifiers with the highest percentages of the maximum accuracy. MAB means MultiBoostAB, BG means Bagging.

3.3 Discussion by Classifier Family

Figure 6 compares the classifier families, showing in the upper panel the error bars with the mean (blue square), minimum and maximum values of the Friedman ranks for each family. The lower panel shows the minimum rank (corresponding to the best classifier) of each family, in ascending order. The RF family has the lowest minimum rank (32.9) and mean (46.7), and also a narrow interval (up to 60.5), which means that all the RF classifiers work very well. The SVM family has the next lowest minimum (36.8), but its mean is much higher
(55.4), and the interval is also much wider (up to 81.6). The third best family is NNET, whose minimum and mean ranks are 39.6 (elm kernel m) and 73.8 respectively. The DT family has the next minimum (42.9), followed by BAG (47.1), BST (52.5), OM (other methods, specifically gaussprRadial, 60.4), DA (62.5), NN (62.6), OEN (other ensembles, specifically RandomCommittee w, 63.0), LMR (65.2), MARS and GLM (65.5), PLSR (79.1), RL (81.4), BY (94.3) and STC (154.0). We can identify three groups of families in the lower panel of Figure 6: a) the best ones (RF, SVM, NNET, DT, BAG and BST), with the lowest ranks (about 30-50); b) the intermediate families (OM, DA, NN, OEN, LMR, MARS and GLM), about 60-70; and c) the worst families (PLSR, RL, BY and STC), with ranks above 80.

Figure 6: Friedman rank interval for the classifiers of each family (upper panel) and minimum rank (by ascending order) for each family (lower panel).
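As an illustration of how the family summary in Figure 6 can be derived from the raw accuracies, the sketch below (our own illustrative R code on synthetic data, not the experimental pipeline; the classifier and family labels are only examples) computes the Friedman rank of each classifier, i.e., its accuracy rank on each data set averaged over the data sets, and then the minimum and mean rank per family.

set.seed(2)
acc <- matrix(50 + 50 * runif(121 * 6), nrow = 121,
              dimnames = list(NULL, c("parRF_t", "rf_t", "svm_C",
                                      "svmPoly_t", "nnet_t", "mlp_t")))
family <- c("RF", "RF", "SVM", "SVM", "NNET", "NNET")   # family of each column

# Rank 1 = most accurate classifier on that data set (ties get their average rank).
rank_per_set <- t(apply(-acc, 1, rank))
friedman     <- colMeans(rank_per_set)    # Friedman rank: average rank over the data sets

data.frame(min_rank  = tapply(friedman, family, min),
           mean_rank = tapply(friedman, family, mean))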
Now we discuss the results for each classifier family (see Tables 6 and 7). The discriminant analysis (DA) classifiers work relatively well, with fda t being the best one, followed by rda R, mda t and pda t. The lda R works better than the caret version lda2 t (74.4), even though the latter tunes the number of retained components. For other DA classifiers (fda and mda) the parameter tuning performed in the caret versions allows them to achieve better accuracies than their R counterparts (without tuning). It is surprising that more sophisticated versions of LDA are worse: slda t, PenalizedLDA t, rrlda R, sddaLDA R, sparseLDA R and stepLDA t. Finally, the QDA classifiers are very bad, with the classical qda t again achieving better results than the more advanced versions (QdaCov t, stepQDA t and sddaQDA R).
The Bayesian methods (BY) are clearly worse than DA, and they are not competitive at all with the globally best classifiers, with the best of them (BayesNet w) achieving a high rank (94.3).
Among the neural networks (NNET), the elm kernel m is the best one, followed by
several caret MLP implementations (avNNet t, nnet t, pcaNNet t and mlp t), included in
the top-20, better than other MLP implementations: mlp C (LibFANN), MultilayerPerceptron w (Weka) and mlp m (Matlab). The good result of avNNet (an ensemble of 4 small
MLPs with up to 9 hidden neurons whose weights are randomly initialized), compared to
greater MLPs such as mlp C and mlp m (with up to 30 hidden neurons), is due to its ensemble nature, because mlpWeightDecay t also has up to 9 hidden neurons but gives worse results.
The rule used by MultilayerPerceptron w to select the network size, (#inputs + #classes)/2, does not achieve good results. The pnn m (probabilistic neural network) and our proposal dkp C (direct kernel perceptron) are very near to the top-20. The bad results of elm m (67.6) are surprising, taking into account the good behavior of the Gaussian elm kernel m. Similarly, the LVQ versions are not good: lvq t, which tunes the size and k, works much better than lvq R, with bdk R (99.8) being the worst one. The cascor C (cascade correlation), which uses LibFANN, is also worse (99.6) than the best MLP version (avNNet t). Finally, the RBF networks are also bad, although the caret versions outperform the Weka and Matlab versions. The dpp C is not competitive at all with the other networks.
The svm C, with Gaussian kernel using LibSVM, is the best support vector machine (SVM), followed by the caret versions svmPoly t (polynomial kernel), svmRadialCost t and svmRadial t (Gaussian kernel), which are better than the Weka versions LibSVM w and SMO w and than svmlight C. The linear kernel versions (svmLinear t and LibLINEAR w) are clearly worse, and lssvmRadial t is the worst one. Overall, the ten SVM classifiers achieve very good results, with ranks in the (relatively narrow) interval 36.8-72.4 (excluding the linear kernels).
[Table 6: Friedman ranks of the classifiers in each family (continued in Table 7). In this extraction only fragments of the DA, BY, NNET and SVM blocks survived, so the individual rank/classifier pairings are not reproduced here.]

RandomSubSpace w is the best decision tree (DT), with a bad rank (74.0); both J48 t and J48 w achieve similar results (the former runs the latter, from the RWeka package, tuned with caret). The best rule-based (RL) classifiers are C5.0Rules t and JRip t, slightly worse than the best DT. The difference between JRip t and JRip w suggests that tuning the number of optimization runs, done in the caret version but not in the Weka one, is important. ZeroR w is among the worst classifiers, because it always predicts the majority class for every test pattern: we included it to define the zero level for the accuracy (49.2%; no classifier has a lower accuracy).
Among the boosting (BST) ensembles, C5.0 t is the best (position 9), followed by MultiBoostAB LibSVM (position 18), adaboost R (position 19) and other MultiBoostAB ensembles with strong base classifiers (MultilayerPerceptron, RandomForest, PART and J48), while the ones with weak base classifiers (OneR, IBk, DecisionStump and NaiveBayes, among others) are worse. Figure 7 (upper panel; only Weka ensembles and base classifiers are plotted) shows that the MultiBoostAB ensembles achieve much lower ranks than their corresponding base classifiers, except for LibSVM, RandomForest and Logistic, where both are similar, and IBk, where the base classifier works much better. The same happens with
AdaBoostM1 (J48 much better than DecisionStump). The adaboost R (AdaBoost.M1 with classification trees) works very well (it is included in the top-20), while AdaBoostM1 J48 w and AdaBoostM1 DecisionStump w work much worse: this big difference might lie in the AdaBoostM1 implementation or in the base classifiers. There is also a difference between LogitBoost w and logitboost R, despite both using the same base classifier (DecisionStump).

Decision trees (DT) and rule-based classifiers (RL): only part of these blocks survived the extraction. The DT classifiers that appear are RandomSubSpace w, C5.0Tree t, NBTree w, J48 t, J48 w, rpart t and ctree t, and the RL classifiers are C5.0Rules t, JRip t, PART t, DTNB w, PART w and JRip w; their individual Friedman ranks could not be recovered.

Boosting (BST)
 1  C5.0 t                        42.9    11  MAB RandomTree w              76.6
 2  MAB LibSVM w                  52.5    12  MAB Logistic w                82.0
 3  adaboost R                    57.9    13  MAB DecisionTable w           87.2
 4  MAB MultilayerPerceptron w    64.0    14  MAB NaiveBayes w             106.2
 5  MAB RandomForest w            64.1    15  logitboost R                 106.6
 6  MAB PART w                    65.6    16  AdaBoostM1 DecisionStump w   122.2
 7  MAB J48 w                     69.2    17  MAB DecisionStump w          129.6
 8  AdaBoostM1 J48 w              72.6    18  MAB IBk w                    130.9
 9  MAB REPTree w                 73.7    19  MAB OneR w                   133.2
10  LogitBoost w                  76.6    20  RILB w                       138.0

Bagging (BAG)
 1  BG LibSVM w                   47.1    13  BG Logistic w                 79.0
 2  treebag t                     66.4    14  BG DecisionTable w            84.6
 3  BG PART w                     66.6    15  bagging R                     90.5
 4  Bagging REPTree w             69.8    16  plsBag R                      97.0
 5  BG RandomTree w               71.0    17  nbBag R                      100.8
 6  BG J48 w                      71.0    18  nnetBag R                    104.5
 7  BG IBk w                      72.7    19  BG NaiveBayes w              110.4
 8  ldaBag R                      72.9    20  BG OneR w                    130.9
 9  BG LWL w                      73.2    21  BG DecisionStump w           133.7
10  svmBag R                      74.6    22  BG HyperPipes w              143.8
11  BG RandomForest w             77.1    23  BG MLP w                     145.8
12  ctreeBag R                    78.7    24  MetaCost w                   154.5

Table 7: Continuation of Table 6. Friedman ranks of the classifiers in the DT, RL, BST and BAG families. MAB means MultiBoostAB, RILB means RacedIncrementalLogitBoost and BG means Bagging. Continued in Table 8.
Figure 7: Upper panel: Friedman rank (ordered increasingly) of each Weka MultiBoostAB
ensemble (blue squares) and its corresponding Weka base classifier (red circles).
Lower panel: the same for Weka bagging ensembles (blue squares) and base
classifiers (red circles).
[Table 8 (continuation of Table 7): Friedman ranks of the classifiers in the remaining families (STC, RF, GLM, NN and others). The extraction of this table is too fragmentary to recover the rank/classifier pairings, so it is not reproduced here.]
The GLM classifiers are divided into two groups: glmnet R and mlm R, with relatively good ranks (60-70), and the others, with much worse results. Something similar happens with NN, where the R and caret versions (knn t and knn R) are about 70, while the Weka variants NNge w, IBk w and IB1 w are much worse (about 90). With respect to the PLSR classifiers, the simplest one (pls t) is the best, while the remaining, more sophisticated, versions are much worse. The three LMR classifiers achieve ranks of about 65-75, with multinom t being the best one. The original MARS classifier (mars R) is very bad, while the fast MARS version (gcvEarth t) works much better. Finally, only gaussprRadial R and ClassificationViaRegression w achieve good results among the other methods (OM), while the remaining ones have ranks of about 90 (AttributeSelectedClassifier w, KStar w and FilteredClassifier w) or more, including some of the worst classifiers in the collection (ClassificationViaClustering w).
Rank  Classifier           Acc. (%)   Rank  Classifier           Acc. (%)
36.2  avNNet t               83.0     50.0  mlp t                  82.2
39.9  svmPoly t              79.9     51.4  elm kernel m           77.5
41.0  pcaNNet t              82.9     54.1  RotationForest w       82.0
42.2  svmRadialCost t        80.0     54.9  rforest R              80.9
44.2  parRF t                82.6     57.6  mlpWeightDecay t       79.7
44.7  rf t                   81.2     57.7  svmBag R               78.8
47.1  C5.0 t                 82.0     59.7  fda t                  81.0
47.2  svm C                  79.0     60.8  cforest t              74.7
47.5  nnet t                 82.1     61.5  Bagging LibSVM w       77.9
48.0  svmRadial t            79.4     62.9  knn t                  80.4

No. Classifier              P95    No. Classifier              P95
 1  svmRadialCost t         78.2    11 MultiBoostAB LibSVM w   65.5
 2  svm C                   74.5    12 pcaNNet t               63.6
 3  svmPoly t               74.5    13 svmBag R                63.6
 4  svmRadial t             72.7    14 elm kernel m            61.8
 5  Bagging LibSVM w        70.9    15 nnet t                  61.8
 6  avNNet t                69.1    16 RotationForest w        61.8
 7  parRF t                 69.1    17 fda t                   60.0
 8  LibSVM w                67.3    18 mlp t                   60.0
 9  C5.0 t                  67.3    19 MultiBoostAB REPTree w  58.2
10  rf t                    67.3    20 RandomForest w          58.2

No. Classifier              PMA    No. Classifier              PMA
 1  avNNet t                95.0    11 pda t                   92.8
 2  pcaNNet t               94.9    12 mlm R                   92.7
 3  parRF t                 94.3    13 fda t                   92.7
 4  nnet t                  94.1    14 MAB MLP w               92.7
 5  mlp t                   94.1    15 bayesglm t              92.6
 6  C5.0 t                  93.8    16 simpls R                92.5
 7  RotationForest w        93.7    17 rforest R               92.5
 8  glmnet R                93.5    18 MultiBoostAB PART w     92.5
 9  rda R                   93.2    19 fda R                   92.3
10  rf t                    92.8    20 nnetBag R               92.2
Table 9: Results for two class data sets. Up: Friedman rank and average accuracies
for the 20 best classifiers. RF w = RotationForest w. MWD t = mlpWeightDecay t. Middle: Probability (in %) of achieving 95% or more of the maximum
accuracy. Down: 20 classifiers with the highest average Percentage of the Maximum Accuracy (PMA) over the two-class data sets. MAB MLP w means MultiBoostAB MultilayerPerceptron w.
In order to study how the classifiers behave with respect to the data set properties (complexity, #patterns, #classes and #inputs), each data set can be weighted according to each property as $\Lambda_j = \frac{1}{N_d}\sum_{i=1}^{N_d} w_i A_{ij}$, $j = 1,\ldots,N_c$, where $w_i$ is the weight measuring the property for data set $i$ ($0 \le w_i \le N_d$), defined below for each property; $N_d = 121$ is the number of data sets; $A_{ij}$ is the accuracy (in %) achieved by classifier $j$ on data set $i$; and $N_c = 179$ is the number of classifiers. The classifier behavior with the data complexity is difficult to evaluate, because the data set complexity itself is hard to define (Ho and Basu, 2002), and it may be relative to the classifier used. In our case, since we are trying a large number of classifiers, we can suppose that some of them achieve the highest possible accuracy for each data set. Since this maximum accuracy is higher for some data sets than for others, we can assume that some data sets are harder, independently of the classifier used. Therefore, we can calculate the weighted average accuracy $\Lambda_j^C$ (the C superscript denotes complexity) of classifier $j$ using the weights $w_i^C$ (which evaluate the complexity of data set $i$) defined as $w_i^C = N_d(1 - M_i) / (N_d - \sum_{k=1}^{N_d} M_k)$, $i = 1,\ldots,N_d$, where $M_i = \max_{j=1,\ldots,N_c}\{A_{ij}/100\}$ is the maximum accuracy for data set $i$ divided by 100. Note that $\sum_{i=1}^{N_d} w_i^C = N_d$. The weighted accuracy $\Lambda_j^C$ defined above gives more weight to the data sets $i$ with low maximum accuracy $M_i$, which are expected to be more complex. Table 10 (upper part) shows the 20 classifiers with the highest $\Lambda^C$, which exhibit the best behavior when the hardest data sets (those with low maximum accuracy $M_i$) have stronger weight. The parRF t is the best one, and the three best classifiers (and 5 in the top-10) belong to the RF family. Two other classifiers are neural networks (mlp t and avNNet t), C5.0 t is the 4th, and two SVMs (svm C and LibSVM w) are 6th and 9th respectively. Our proposal dkp C exhibits a good behavior (12th position), while other classifiers in the top-20 of Table 3, such as nnet t, Bagging LibSVM w and RRFglobal t, are also included. The 20 classifiers lie in a narrow range between 70.0% and 66.9% (3.1 points), so the differences among them are not too high.
In order to study the classifier behavior with increasing #patterns, the weighted accuracy $\Lambda^P$ uses the weights $w_i^P = N_d N_i / \sum_{k=1}^{N_d} N_k$, $i = 1,\ldots,N_d$, where $N_i$ is the #patterns (population) of data set $i$. The middle part of Table 10 shows the weighted accuracy $\Lambda^P$ (the two largest data sets, connect-4 and miniboone, give errors for some classifiers which would disturb this measure, so they are excluded). Although the range is narrow (89.4%-91.1%), again rf t and parRF t are the best, and svm C is the 3rd. There are six random forests in the top-10. The LibSVM w, C5.0 t and treebag t are also in the top-10. The positions 11-20 are completely filled by ensembles: Bagging, MultiBoostAB and AdaBoostM1.
The classifier behavior with decreasing #patterns can be analyzed by calculating the weighted accuracy with weights $w_i^D$ that decrease with the #patterns: $w_i^D = N_d(N_m - N_i) / (N_d N_m - \sum_{k=1}^{N_d} N_k)$, $i = 1,\ldots,N_d$, where $N_m = \max_{k=1,\ldots,N_d} N_k$ is the maximum #patterns over all the data sets.
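Under the definitions above, the weighted average accuracies can be computed as in the following R sketch (illustrative code with synthetic data; acc and n_pat stand in for the real accuracy matrix and data set sizes, which are not reproduced here).

set.seed(3)
acc   <- matrix(50 + 50 * runif(121 * 5), nrow = 121)     # accuracies (%) per data set/classifier
n_pat <- sample(10:130064, 121)                            # #patterns of each data set

weighted_accuracy <- function(acc, w) colMeans(acc * w)    # Lambda_j = (1/Nd) sum_i w_i * A_ij

Nd <- nrow(acc)
M  <- apply(acc, 1, max) / 100                             # maximum accuracy per data set (0..1)

wC <- Nd * (1 - M) / sum(1 - M)                            # complexity: harder data sets weigh more
wP <- Nd * n_pat / sum(n_pat)                              # increasing with #patterns
wD <- Nd * (max(n_pat) - n_pat) / sum(max(n_pat) - n_pat)  # decreasing with #patterns

lambdaC <- weighted_accuracy(acc, wC)
lambdaP <- weighted_accuracy(acc, wP)
lambdaD <- weighted_accuracy(acc, wD)
round(rbind(lambdaC, lambdaP, lambdaD), 1)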
No. Classifier              Λ^C    No. Classifier              Λ^C
 1  parRF t                 69.9    11 nnet t                  67.7
 2  rf t                    69.6    12 dkp C                   67.6
 3  rforest R               69.3    13 RRFglobal t             67.4
 4  C5.0 t                  69.0    14 Bagging LibSVM w        67.3
 5  RotationForest w        68.6    15 Decorate w              67.1
 6  svm C                   68.4    16 knn t                   67.1
 7  mlp t                   68.4    17 Bagging REPTree w       67.0
 8  RRF t                   68.1    18 elm m                   67.0
 9  LibSVM w                67.8    19 pda t                   67.0
10  avNNet t                67.8    20 RandomCommittee w       66.9

No. Classifier              Λ^P    No. Classifier              Λ^P
 1  rf t                    91.1    11 Bagging LibSVM w        89.9
 2  parRF t                 91.1    12 RandomCommittee w       89.9
 3  svm C                   90.7    13 Bagging RandomTree w    89.8
 4  RRF t                   90.6    14 MultiBoostAB RandomTree w  89.8
 5  RRFglobal t             90.6    15 MultiBoostAB LibSVM w   89.8
 6  LibSVM w                90.6    16 MultiBoostAB PART w     89.7
 7  RotationForest w        90.5    17 Bagging PART w          89.7
 8  C5.0 t                  90.5    18 AdaBoostM1 J48 w        89.5
 9  rforest R               90.3    19 Bagging REPTree w       89.5
10  treebag t               90.2    20 MultiBoostAB J48 w      89.4

No. Classifier              Λ^D    No. Classifier              Λ^D
 1  rf t                    82.1    11 MultiBoostAB LibSVM w   79.7
 2  rforest R               81.8    12 LibSVM w                79.6
 3  svm C                   81.6    13 RandomCommittee w       79.5
 4  parRF t                 81.6    14 dkp C                   79.5
 5  RRF t                   80.8    15 nnet t                  79.3
 6  RotationForest w        80.3    16 elm kernel m            79.2
 7  C5.0 t                  80.2    17 avNNet t                79.2
 8  mlp t                   80.0    18 treebag t               79.0
 9  Bagging LibSVM w        80.0    19 MAB MLP w               78.8
10  RRFglobal t             79.8    20 knn R                   78.7
Table 10: Twenty best classifiers depending on the data set complexity and population. Up: average accuracy Λ^C (in %), weighting each data set decreasingly with its maximum accuracy (i.e., increasingly with its complexity). Middle: accuracy Λ^P, weighting the data sets increasingly with their #patterns. Down: average accuracy Λ^D, weighted decreasingly with the #patterns.
Table 11 (upper part) shows the accuracy Λ^L, weighted by the #classes, for the 20 best classifiers. The best classifiers are svm C and rf t (with the same accuracy), followed by rforest R, Bagging LibSVM w, parRF t and others, only 1% below the best. There are 4 Random Forests and 2 SVMs in the top-10. The Bagging LibSVM w, MultiBoostAB LibSVM w and MultiBoostAB MultilayerPerceptron w ensembles are also included in the top-10. The best neural networks are dkp C (9th position), MultilayerPerceptron w and elm m. Two DA classifiers (rda R and hdda R) and two NN classifiers (knn R and IBk w) are also included.
No. Classifier              Λ^L    No. Classifier              Λ^L
 1  svm C                   80.5    11 RotationForest w        76.6
 2  rf t                    80.5    12 RRFglobal t             76.1
 3  rforest R               79.8    13 MultilayerPerceptron w  76.1
 4  Bagging LibSVM w        79.7    14 rda R                   76.0
 5  parRF t                 79.5    15 knn R                   75.9
 6  MultiBoostAB LibSVM w   79.5    16 SMO w                   75.6
 7  LibSVM w                79.5    17 hdda R                  75.4
 8  RRF t                   77.9    18 KStar w                 75.3
 9  dkp C                   77.7    19 elm m                   75.1
10  MAB MLP w               76.9    20 RandomCommittee w       75.1

No. Classifier              Λ^I    No. Classifier              Λ^I
 1  parRF t                 84.0    11 mlp t                   81.5
 2  rf t                    83.3    12 SMO w                   81.3
 3  rforest R               82.9    13 Bagging RandomTree w    81.3
 4  RotationForest w        82.8    14 elm kernel m            81.1
 5  MAB MLP w               82.5    15 mlp C                   81.0
 6  LibSVM w                82.4    16 dkp C                   80.8
 7  MultilayerPerceptron w  82.0    17 fda t                   80.8
 8  svm C                   82.0    18 rda R                   80.8
 9  RandomCommittee w       81.8    19 SimpleLogistic w        80.7
10  C5.0 t                  81.6    20 RRF t                   80.4

Table 11: Up: average accuracy Λ^L, weighted using the #classes (w^L); only the 20 best classifiers are shown. Down: average accuracy Λ^I, weighted with the #inputs (w^I). MAB means MultiBoostAB.
With respect to the number of inputs, the weighted average accuracy $\Lambda^I$ according to the #inputs $N_i^I$ can be calculated using the weights $w_i^I = N_d N_i^I / \sum_{k=1}^{N_d} N_k^I$, $i = 1,\ldots,N_d$. The lower part of Table 11 shows $\Lambda^I$ for the 20 best classifiers: parRF t and rf t are the best, with 4 random forests among the top-5 (the other classifier is MultiBoostAB MultilayerPerceptron w), while svm C falls to the 8th position, below LibSVM w (6th). The MultilayerPerceptron w and mlp t are
also included in the top-10. The dkp C is again in the top-20. Considering jointly the four dependencies (complexity, population, #classes and #inputs), parRF t and rf t are always in the first positions, while svm C is not so regular: it behaves well with increasing #classes and #patterns, but not so well with complexity and #inputs (6th and 8th positions). The svm C and parRF t are worse than rf t with decreasing #patterns. Besides, the averages of $\Lambda^C$, $\Lambda^P$, $\Lambda^D$, $\Lambda^L$ and $\Lambda^I$ are 81.3%, 81.2% and 80.8% for rf t, parRF t and svm C respectively, which shows the similarity between rf t and parRF t, and their difference with respect to svm C. Most of the random forest versions (rforest R, RotationForest w, RRF t and RRFglobal t), and LibSVM w, appear in all five rankings. Apart from the RF and SVM classifiers, which fill most of the 10 best positions in the five rankings, the good behavior of C5.0 t (family DT) is remarkable: it is included in four of the rankings and three times in the top-10. Among the neural networks, the dkp C appears most often (in four of the five rankings); in fact, the Λ^P ranking does not include any neural network, showing a bad behavior of the neural networks on highly populated data sets. The Bagging LibSVM w is also the first bagging classifier in four rankings, while MultiBoostAB of LibSVM or MLP is the best boosting classifier, also appearing in four rankings. The RandomCommittee w (the best classifier of family OEN) is also included in the five rankings, and in the top-10 for Λ^I. On the other hand, three of the five rankings include a classifier of family NN (knn t or knn R). The DA classifiers show a bad behavior with population: only pda t is included in Λ^C, rda R and hdda R in Λ^L, and fda t and rda R in Λ^I.
4. Conclusion
This paper presents an exhaustive evaluation of 179 classifiers belonging to a wide collection
of 17 families over the whole UCI machine learning classification database, discarding the
large-scale data sets due to technical reasons, plus 4 real data sets of our own, summing up to 121 data
sets from 10 to 130,064 patterns, from 3 to 262 inputs and from 2 to 100 classes. The
best results are achieved by the parallel random forest (parRF t), implemented in
R with caret, tuning the parameter mtry. The parRF t achieves on average 94.1% of the maximum accuracy over all the data sets (Table 5, lower part), and exceeds 90% of the maximum accuracy in 102 out of 121 data sets. Its average accuracy over all the data
sets is 82.0%, while the maximum average accuracy (achieved by the best classifier for each
data set) is 86.9%. The random forest in R and tuned with caret (rf t) is slightly worse
(93.6% of the maximum accuracy), although it achieves slightly better average accuracy
(82.3%) than parRF t. The LibSVM implementation of SVM in C with Gaussian kernel
(svm C), tuning the regularization and kernel spread, achieves 92.3% of the maximum
accuracy. Six RFs and five SVMs are included among the 20 best classifiers, which are the best families. The parRF t may be considered as a reference (a gold standard) against which new classifier proposals can be compared in order to assess their performance for general classification (problems not requiring special treatment such as large-scale or on-line learning, non-stationary data, etc.). Other classifiers with good results are the extreme learning machine with Gaussian kernel, the C5.0 decision tree and the multi-layer perceptron (avNNet t, a committee of 5 multi-layer perceptrons, randomly initialized, which tunes the size and decay rate). The best boosting and bagging ensembles use LibSVM as base classifier (in Weka), being slightly better than the single LibSVM classifier, together with adaboost R (an ensemble of decision trees trained using AdaBoost.M1). For two-class data sets, avNNet t is the best (95.0% of the maximum accuracy), with parRF t also being very good (94.3%). The parRF t is also the best when the complexity, #patterns and #inputs of the data sets increase, and it is also good when the #patterns decrease (where rf t is the best) and the #classes increase (where svm C is the best). The probabilistic neural network
in Matlab, tuning the Gaussian kernel spread (pnn m), and the direct kernel perceptron in C
(dkp C), a very simple and fast neural network proposed by us (Fernandez-Delgado et al.,
2014), are also very near to the top-20. The remaining families of classifiers, including
other neural networks (radial basis functions, learning vector quantization and cascade
correlation), discriminant analysis, decision trees other than C5.0, rule-based classifiers,
other bagging and boosting ensembles, nearest neighbors, Bayesian, GLM, PLSR, MARS,
etc., are not competitive at all. Most of the best classifiers are implemented in R and tuned
using caret, which seems the best alternative to select a classifier implementation.
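As a hedged illustration of this recommendation (not the exact experimental protocol, data or tuning grids of this paper), the following R sketch trains the parallel random forest (caret method "parRF", which tunes mtry) and a Gaussian-kernel SVM (caret method "svmRadialCost", which tunes the cost) with cross-validation on a toy data set; the randomForest, foreach and kernlab packages are assumed to be installed.

library(caret)

data(iris)
ctrl <- trainControl(method = "cv", number = 5)

# Parallel random forest; it runs sequentially if no parallel backend is registered.
rf_fit  <- train(Species ~ ., data = iris, method = "parRF",
                 tuneLength = 3, trControl = ctrl)           # tunes mtry

# Gaussian-kernel SVM tuning the cost parameter C.
svm_fit <- train(Species ~ ., data = iris, method = "svmRadialCost",
                 tuneLength = 3, trControl = ctrl)

summary(resamples(list(parRF = rf_fit, svmRadialCost = svm_fit)))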
Acknowledgments
We would like to acknowledge support from the Spanish Ministry of Science and Innovation
(MICINN), which supported this work under projects TIN2011-22935 and TIN2012-32262.
References
David W. Aha, Dennis Kibler, and Marc K. Albert. Instance-based learning algorithms. Machine Learning, 6:37–66, 1991.
Miika Ahdesmäki and Korbinian Strimmer. Feature selection in omics prediction problems using cat scores and false non-discovery rate control. Annals of Applied Stat., 4:503–519, 2010.
Esteban Alfaro, Matías Gámez, and Noelia García. Multiclass corporate failure prediction by Adaboost.M1. Int. Advances in Economic Research, 13:301–312, 2007.
Peter Auer, Harald Burgsteiner, and Wolfgang Maass. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Neural Networks, 1(21):786–795, 2008.
Kevin Bache and Moshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
Laurent Bergé, Charles Bouveyron, and Stéphane Girard. HDclassif: an R package for model-based clustering and discriminant analysis of high-dimensional data. J. Stat. Softw., 46(6):1–29, 2012.
Michael R. Berthold and Jay Diamond. Boosting the performance of RBF networks with dynamic decay adjustment. In Advances in Neural Information Processing Systems, pages 521–528. MIT Press, 1995.
Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
Leo Breiman, Jerome Friedman, R.A. Olshen, and Charles J. Stone. Classification and Regression Trees. Wadsworth and Brooks, 1984.
Jean Carletta. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249–254, 1996.
Jadzia Cendrowska. PRISM: an algorithm for inducing modular rules. Int. J. of Man-Machine Studies, 27(4):349–370, 1987.
S. Le Cessie and J.C. Van Houwelingen. Ridge estimators in logistic regression. Applied Stat., 41(1):191–201, 1992.
Chih-Chung Chang and Chih-Jen Lin. LibSVM: a library for support vector machines, 2008. URL http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Hyonho Chun and Sunduz Keles. Sparse partial least squares for simultaneous dimension reduction and variable selection. J. of the Royal Stat. Soc. - Series B, 72:3–25, 2010.
John G. Cleary and Leonard E. Trigg. K*: an instance-based learner using an entropic distance measure. In Int. Conf. on Machine Learning, pages 108–114, 1995.
Line H. Clemensen, Trevor Hastie, Daniela Witten, and Bjarne Ersboll. Sparse discriminant analysis. Technometrics, 53(4):406–413, 2011.
William W. Cohen. Fast effective rule induction. In Int. Conf. on Machine Learning, pages 115–123, 1995.
Bhupinder S. Dayal and John F. MacGregor. Improved PLS algorithms. J. of Chemometrics, 11:73–85, 1997.
Gülsen Demiröz and H. Altay Güvenir. Classification by voting feature intervals. In European Conf. on Machine Learning, pages 85–92. Springer, 1997.
Houtao Deng and George Runger. Feature selection via regularized trees. In Int. Joint Conf. on Neural Networks, pages 1–8, 2012.
Beijing Ding and Robert Gentleman. Classification using generalized partial least squares. J. of Computational and Graphical Stat., 14(2):280–298, 2005.
Annette J. Dobson. An Introduction to Generalized Linear Models. Chapman and Hall, 1990.
Pedro Domingos. MetaCost: a general method for making classifiers cost-sensitive. In Int. Conf. on Knowledge Discovery and Data Mining, pages 155–164, 1999.
Richard Duda, Peter Hart, and David Stork. Pattern Classification. Wiley, 2001.
Manuel J.A. Eugster, Torsten Hothorn, and Friedrich Leisch. Domain-based benchmark experiments: exploratory and inferential analysis. Austrian J. of Stat., 41:5–26, 2014.
Scott E. Fahlman. Faster-learning variations on back-propagation: an empirical study. In 1988 Connectionist Models Summer School, pages 38–50. Morgan-Kaufmann, 1988.
Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res., 9:1871–1874, 2008.
Manuel Fernández-Delgado, Jorge Ribeiro, Eva Cernadas, and Senén Barro. Direct parallel perceptrons (DPPs): fast analytical calculation of the parallel perceptrons weights with margin control for classification tasks. IEEE Trans. on Neural Networks, 22:1837–1848, 2011.
Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, Jorge Ribeiro, and José Neves. Direct kernel perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation. Neural Networks, 50:60–71, 2014.
Eibe Frank and Mark Hall. A simple approach to ordinal classification. In European Conf. on Machine Learning, pages 145–156, 2001.
Eibe Frank and Stefan Kramer. Ensembles of nested dichotomies for multi-class problems. In Int. Conf. on Machine Learning, pages 305–312. ACM, 2004.
Eibe Frank and Ian H. Witten. Generating accurate rule sets without global optimization. In Int. Conf. on Machine Learning, pages 144–151, 1999.
Eibe Frank, Yong Wang, Stuart Inglis, Geoffrey Holmes, and Ian H. Witten. Using model trees for classification. Machine Learning, 32(1):63–76, 1998.
Eibe Frank, Geoffrey Holmes, Richard Kirkby, and Mark Hall. Racing committees for large datasets. In Int. Conf. on Discovery Science, pages 153–164, 2002.
Eibe Frank, Mark Hall, and Bernhard Pfahringer. Locally weighted naive Bayes. In Conf. on Uncertainty in Artificial Intelligence, pages 249–256, 2003.
Yoav Freund and Llew Mason. The alternating decision tree learning algorithm. In Int. Conf. on Machine Learning, pages 124–133, 1999.
Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In Int. Conf. on Machine Learning, pages 148–156. Morgan Kaufmann, 1996.
Yoav Freund and Robert E. Schapire. Large margin classification using the perceptron algorithm. In Conf. on Computational Learning Theory, pages 209–217, 1998.
Jerome Friedman. Regularized discriminant analysis. J. of the American Stat. Assoc., 84:165–175, 1989.
Jerome Friedman. Multivariate adaptive regression splines. Annals of Stat., 19(1):1–141, 1991.
Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Stat., 28:2000, 1998.
Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization paths for generalized linear models via coordinate descent. J. of Stat. Softw., 33(1):1–22, 2010.
Brian R. Gaines and Paul Compton. Induction of ripple-down rules applied to modeling large databases. J. Intell. Inf. Syst., 5(3):211–228, 1995.
Andrew Gelman, Aleks Jakulin, Maria G. Pittau, and Yu-Sung Su. A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Stat., 2(4):1360–1383, 2009.
Mark Girolami and Simon Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18:1790–1817, 2006.
Ekkehard Glimm, Siegfried Kropf, and Jürgen Läuter. Multivariate tests based on left-spherically distributed linear scores. The Annals of Stat., 26(5):1972–1988, 1998.
Encarnación González-Rufino, Pilar Carrión, Eva Cernadas, Manuel Fernández-Delgado, and Rosario Domínguez-Petit. Exhaustive comparison of colour texture features and classification methods to discriminate cells categories in histological images of fish ovary. Pattern Recognition, 46:2391–2407, 2013.
Mark Hall. Correlation-Based Feature Subset Selection for Machine Learning. PhD thesis, University of Waikato, 1998.
Mark Hall and Eibe Frank. Combining naive Bayes and decision tables. In Florida Artificial Intel. Soc. Conf., pages 318–319. AAAI Press, 2008.
Trevor Hastie and Robert Tibshirani. Discriminant analysis by Gaussian mixtures. J. of the Royal Stat. Soc. Series B, 58:158–176, 1996.
Trevor Hastie, Robert Tibshirani, and Andreas Buja. Flexible discriminant analysis by optimal scoring. J. of the American Stat. Assoc., 89:1255–1270, 1993.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2009.
Tin Kam Ho. The random subspace method for constructing decision forests. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(8):832–844, 1998.
Tin Kam Ho and Mitra Basu. Complexity measures of supervised classification problems. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(3):289–300, 2002.
Geoffrey Holmes, Mark Hall, and Eibe Frank. Generating rule sets from model trees. In Australian Joint Conf. on Artificial Intelligence, pages 1–12, 1999.
Robert C. Holte. Very simple classification rules perform well on most commonly used datasets. Machine Learning, 11:63–91, 1993.
Torsten Hothorn, Friedrich Leisch, Achim Zeileis, and Kurt Hornik. The design and analysis of benchmark experiments. J. Computational and Graphical Stat., 14:675–699, 2005.
Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. - Part B: Cybernetics, 42:513–529, 2012.
Thorsten Joachims. Making large-scale support vector machine learning practical. In Bernhard Schölkopf, Christopher J.C. Burges, and Alexander Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169–184. MIT Press, 1999.
George H. John and Pat Langley. Estimating continuous distributions in Bayesian classifiers. In Conf. on Uncertainty in Artificial Intelligence, pages 338–345, 1995.
Sijmen De Jong. SIMPLS: an alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18:251–263, 1993.
Josef Kittler, Mohammad Hatef, Robert P.W. Duin, and Jiri Matas. On combining classifiers. IEEE Trans. on Pat. Anal. and Machine Intel., 20:226–239, 1998.
Ron Kohavi. The power of decision tables. In European Conf. on Machine Learning, pages 174–189. Springer, 1995.
Ron Kohavi. Scaling up the accuracy of naive-Bayes classifiers: a decision-tree hybrid. In Int. Conf. on Knowledge Discovery and Data Mining, pages 202–207, 1996.
Max Kuhn. Building predictive models in R using the caret package. J. Stat. Softw., 28(5):1–26, 2008.
Max Kuhn and Kjell Johnson. Applied Predictive Modeling. Springer, New York, 2013.
Niels Landwehr, Mark Hall, and Eibe Frank. Logistic model trees. Machine Learning, 95(1-2):161–205, 2005.
Nick Littlestone. Learning quickly when irrelevant attributes abound: a new linear threshold algorithm. Machine Learning, 2:285–318, 1988.
Nuria Macià and Ester Bernadó-Mansilla. Towards UCI+: a mindful repository design. Information Sciences, 261(10):237–262, 2014.
Nuria Macià, Ester Bernadó-Mansilla, Albert Orriols-Puig, and Tin Kam Ho. Learner excellence biased by data set selection: a case for data characterisation and artificial data sets. Pattern Recognition, 46:1054–1066, 2013.
Harald Martens. Multivariate Calibration. Wiley, 1989.
Brent Martin. Instance-Based Learning: Nearest Neighbor with Generalization. PhD thesis, Univ. of Waikato, Hamilton, New Zealand, 1995.
Willem Melssen, Ron Wehrens, and Lutgarde Buydens. Supervised Kohonen networks for classification problems. Chemom. Intell. Lab. Syst., 83:99–113, 2006.
Prem Melville and Raymond J. Mooney. Creating diversity in ensembles using artificial data. Information Fusion: Special Issue on Diversity in Multiclassifier Systems, 6(1):99–111, 2004.
John C. Platt. Fast training of support vector machines using sequential minimal optimization. In Bernhard Schölkopf, Christopher J.C. Burges, and Alexander Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185–208. MIT Press, 1998.
Ross Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
Ross Quinlan. Learning with continuous classes. In Australian Joint Conf. on Artificial Intelligence, pages 343–348, 1992.
Ross Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
Brian D. Ripley. Pattern Recognition and Neural Networks. Cambridge Univ. Press, 1996.
Juan J. Rodríguez, Ludmila I. Kuncheva, and Carlos J. Alonso. Rotation forest: a new classifier ensemble method. IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(10):1619–1630, 2006.
Alexander K. Seewald. How to make stacking better and faster while also taking care of an unknown weakness. In Int. Conf. on Machine Learning, pages 554–561. Morgan Kaufmann Publishers, 2002.
Alexander K. Seewald and Johannes Fuernkranz. An evaluation of grading classifiers. In Int. Conf. on Advances in Intelligent Data Analysis, pages 115–124, 2001.
David J. Sheskin. Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press, 2006.
Donald F. Specht. Probabilistic neural networks. Neural Networks, 3(1):109–118, 1990.
Johan A.K. Suykens and Joos Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293–300, 1999.
Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, and Gilbert Chu. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. of the National Academy of Sciences, 99(10):6567–6572, 2002.
Kai M. Ting and Ian H. Witten. Stacking bagged and dagged models. In Int. Conf. on Machine Learning, pages 367–375, 1997.
Valentin Todorov and Peter Filzmoser. An object-oriented framework for robust multivariate analysis. J. Stat. Softw., 32(3):1–47, 2009.
Alfred Truong. Fast Growing and Interpretable Oblique Trees via Probabilistic Models. PhD thesis, Univ. Oxford, 2009.
Joaquin Vanschoren, Hendrik Blockeel, Bernhard Pfahringer, and Geoffrey Holmes. Experiment databases. A new way to share, organize and learn from experiments. Machine Learning, 87(2):127–158, 2012.
William N. Venables and Brian D. Ripley. Modern Applied Statistics with S. Springer, 2002.
Geoffrey Webb, Janice Boughton, and Zhihai Wang. Not so naive Bayes: aggregating one-dependence estimators. Machine Learning, 58(1):5–24, 2005.
Geoffrey I. Webb. Multiboosting: a technique for combining boosting and wagging. Machine Learning, 40(2):159–196, 2000.
Daniela M. Witten and Robert Tibshirani. Penalized classification using Fisher's linear discriminant. J. of the Royal Stat. Soc. Series B, 73(5):753–772, 2011.
David H. Wolpert. Stacked generalization. Neural Networks, 5:241–259, 1992.
David H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 9:1341–1390, 1996.
Zijian Zheng and Geoffrey I. Webb. Lazy learning of Bayesian rules. Machine Learning, 4(1):53–84, 2000.