[Figure: classification process flow showing extract data set, preprocessing, model, classifier rules and classes.]
algorithms are very similar. The goal of a decision tree is to predict the response on a categorical dependent variable from one or more predictor variables. The WEATHER NOMINAL data set used here has 5 attributes and 14 instances, as shown in Table 1.

Table 1. The data set used in our analysis

    Weather Nominal Data Set
    Attributes        5
    Instances         14
    Sum of Weights    14
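As an illustration, the following minimal sketch loads the Weather Nominal data into WEKA and prints the counts reported in Table 1. The file path is an assumption for illustration; weather.nominal.arff ships with WEKA, but its location depends on the installation.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadWeatherNominal {
    public static void main(String[] args) throws Exception {
        // Illustrative path; adjust to the local WEKA installation.
        DataSource source = new DataSource("data/weather.nominal.arff");
        Instances data = source.getDataSet();
        // The class attribute (play) is the last attribute in this data set.
        data.setClassIndex(data.numAttributes() - 1);
        System.out.println("Attributes: " + data.numAttributes()); // 5
        System.out.println("Instances:  " + data.numInstances());  // 14
    }
}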
2.1 Decision Trees
A decision tree is a flow-chart-like tree structure. Each internal node denotes a test on an attribute, each branch represents an outcome of the test, and leaf nodes represent classes or class distributions [4][9]. The topmost node in the tree, shown as an oval, is the root node. The remaining internal nodes are represented by rectangles, and leaf nodes are denoted by circles, as depicted in Figure 3.
2.1.1 J48 Algorithm
J48 is a tree-based learning approach. It was developed by Ross Quinlan and is based on the Iterative Dichotomiser (ID3) algorithm [1]. J48 uses a divide-and-conquer strategy to split a root node into subsets of two or more partitions until leaf nodes (target nodes) occur in the tree. Given a set T of training instances, the following steps are used to construct the tree structure; a short usage sketch follows the steps.

Step 1: If all the instances in T belong to the same class, or T contains only a few instances, then the tree is a leaf labelled with the most frequent class in T.

Step 2: Otherwise, select a test based on a single attribute with two or more possible outcomes. Make this test the root node of the tree, with one branch for each outcome of the test, partition T into subsets T1, T2, T3, ... according to the outcome for each case, and apply the same procedure recursively to each subset.

Step 3: The candidate tests are ranked by J48 using two heuristic criteria, information gain and (by default) gain ratio.
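For concreteness, a minimal sketch of building a J48 tree on the Weather Nominal data with the WEKA API is given below; apart from the WEKA classes themselves, the class name and file path are illustrative.

import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class J48Example {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/weather.nominal.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();        // C4.5-style learner; builds a pruned tree by default
        tree.buildClassifier(data);  // divide-and-conquer induction (Steps 1-3 above)
        System.out.println(tree);    // prints the induced tree structure
    }
}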
2.1.2 Random Forest Algorithm
The Random Forest algorithm was initially developed by Leo Breiman, a statistician at the University of California, Berkeley [2]. Random Forests is a method by which one can obtain better accuracy rates. Some attributes of Random Forest are mentioned below [7]:

1) RF may produce a highly accurate classifier for many data sets.
2) RF is relatively simple.
3) RF provides a fast learning approach.
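A corresponding minimal sketch with WEKA's RandomForest class, which builds an ensemble of random trees on bootstrap samples of the training data, is shown below; the file path is again an illustrative assumption.

import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RandomForestExample {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/weather.nominal.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        RandomForest forest = new RandomForest(); // default number of trees and random feature subsets
        forest.buildClassifier(data);
        System.out.println(forest);               // summary of the built ensemble
    }
}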
2.1.3 Reduced Error Pruning
This method was introduced by Quinlan [11]. It is the simplest and most understandable method of decision tree pruning. For every non-leaf subtree of the original decision tree, the change in misclassification over the test set is examined. The REP tree learner with incremental pruning, described by Witten and Frank in 1999, is a fast decision/regression tree learner that uses information gain or variance reduction and splits the data into a training set and a prune set. Traversing the tree from bottom to top, the procedure checks each internal node and replaces it with the most frequent class, provided the accuracy of the tree is not reduced; the node is then pruned. This procedure continues until any further pruning would decrease the accuracy.
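In WEKA this kind of fast tree building with reduced-error pruning is available as the REPTree learner; a minimal usage sketch, under the same illustrative assumptions about the data location, is:

import weka.classifiers.trees.REPTree;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class REPTreeExample {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/weather.nominal.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        REPTree tree = new REPTree();  // builds the tree, then prunes it with reduced-error pruning
        tree.buildClassifier(data);
        System.out.println(tree);
    }
}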
2.1.4 Logistic Model Tree
The Logistic Model Tree (LMT) algorithm [12] builds a tree that can handle binary and multiclass target variables, numeric attributes and missing values. The technique combines a decision tree with logistic regression models. LMT produces a single tree containing binary splits on numeric attributes.
2.2 Cross-Validation Test
The cross-validation (CV) method is used to validate the predicted model. A CV test divides the training data into a number of partitions or folds. The classifier is evaluated by its accuracy on one fold after being learned from the remaining folds. This process is repeated until every partition has been used for evaluation, and the results are combined into a single estimate [13]. The most common variants are 10-fold CV, n-fold CV and the bootstrap.
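The following sketch runs a 10-fold cross-validation of J48, RF, REP and LMT on the Weather Nominal data using WEKA's Evaluation class; the random seed and the file path are assumptions made for illustration.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.LMT;
import weka.classifiers.trees.REPTree;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationExample {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/weather.nominal.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] classifiers = { new J48(), new RandomForest(), new REPTree(), new LMT() };
        String[] names = { "J48", "RF", "REP", "LMT" };

        for (int i = 0; i < classifiers.length; i++) {
            Evaluation eval = new Evaluation(data);
            // 10-fold cross-validation with a fixed seed for repeatability.
            eval.crossValidateModel(classifiers[i], data, 10, new Random(1));
            System.out.printf("%-4s accuracy: %.2f%%  error rate: %.2f%%%n",
                    names[i], eval.pctCorrect(), eval.pctIncorrect());
        }
    }
}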
3. PERFORMANCE MEASURES FOR CLASSIFICATION
One can use the following performance measures for the classification and prediction of fault-prone modules, according to one's own needs.

3.1 Confusion Matrix
The confusion matrix is used to measure the performance of a two-class problem for the given data set (Table 2). The diagonal elements TP (true positives) and TN (true negatives) count correctly classified instances, while FP (false positives) and FN (false negatives) count incorrectly classified instances.

Table 2. Confusion matrix

                  Predicted
    Actual        Yes     No
    Yes           TP      FN
    No            FP      TN
3.4 Recall
Recall is the ratio of modules correctly classified as fault-prone to the total number of faulty modules:

    Recall = TP / (TP + FN)

3.5 Precision

    Precision = TP / (TP + FP)

3.6 F-Measure
The F-measure (FM) is a combination of recall and precision; it is defined as the harmonic mean of the two:

    FM = (2 x Precision x Recall) / (Precision + Recall)
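As a worked sketch of these measures, the following computes accuracy, precision, recall and F-measure from a confusion matrix, using the J48 counts reported later for this data set (TP = 4, FN = 5, FP = 3, TN = 2, with Play as the positive class); the class name is illustrative.

public class ClassifierMetrics {
    public static void main(String[] args) {
        // Confusion-matrix counts for J48 on the Weather Nominal data (Play = positive class).
        double tp = 4, fn = 5, fp = 3, tn = 2;

        double accuracy  = (tp + tn) / (tp + tn + fp + fn);               // 6 / 14 = 0.4286
        double precision = tp / (tp + fp);                                // 4 / 7  = 0.571
        double recall    = tp / (tp + fn);                                // 4 / 9  = 0.444
        double fMeasure  = 2 * precision * recall / (precision + recall); // about 0.5

        System.out.printf("Accuracy:  %.2f%%%n", 100 * accuracy);
        System.out.printf("Precision: %.3f%n", precision);
        System.out.printf("Recall:    %.3f%n", recall);
        System.out.printf("F-Measure: %.3f%n", fMeasure);
    }
}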
The rules obtained from the decision tree for the Weather Nominal data set are:

If outlook is sunny and humidity is normal, then play the game; otherwise do not play.

If outlook is overcast, then the game will definitely be played.

If outlook is rainy and it is not windy, then the game will be played; otherwise it may not be played.

[Figure: decision tree for the Weather Nominal data set. Root node "outlook" with branches sunny, overcast and rainy; internal nodes "humidity" (high / normal) and "windy" (false / true); leaf nodes "play" / "not-play".]

Confusion matrices obtained for each classifier (rows: actual class, columns: predicted class):

                 J48              RF               REP              LMT
    Actual       Play  Not-Play   Play  Not-Play   Play  Not-Play   Play  Not-Play
    Play         4     5          5     4          7     2          6     3
    Not-Play     3     2          2     3          5     0          4     1
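As a check on these matrices, the accuracy of each classifier follows directly as (TP + TN) / 14: for RF this gives (5 + 3) / 14 = 57.14%, while for REP and LMT it gives (7 + 0) / 14 = (6 + 1) / 14 = 50.00%, in agreement with the accuracy values reported below.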
Per-class measures and overall accuracy for each classifier (P = Play, N = Not-Play; TPR = true-positive rate, FPR = false-positive rate, Prec = precision, Rec = recall, F-M = F-measure):

    Algorithm   TPR(P)  FPR(P)  TPR(N)  FPR(N)  Prec(P)  Prec(N)  Rec(P)  Rec(N)  F-M(P)  Accuracy (%)
    J48         0.444   0.6     0.4     0.556   0.571    0.286    0.444   0.4     0.5     42.85
    RF          0.556   0.4     0.6     0.444   0.714    0.429    0.556   0.6     0.625   57.14
    LMT         0.667   0.8     0.2     0.333   0.6      0.25     0.667   0.2     0.632   50.00
RF: 8 instances correctly classified, 6 incorrectly classified (57.14% accuracy), 0.08.
[Figure: Accuracy of Classifier. Bar chart comparing J48, RF, REP and LMT; y-axis 0% to 60%.]

[Figure: Error Rate. Bar chart comparing J48, RF, REP and LMT; y-axis 0.00% to 60.00%.]
5. CONCLUSION AND FUTURE WORK
In this paper the authors have examined the J48, RF, REP and LMT methods of classification and observed that RF has the maximum accuracy and the minimum error rate. On the basis of the accuracy measures of the classifiers, one can provide guidelines regarding fault-prone prediction for any given data set in the respective situations.

More similar studies on different data sets and machine learning approaches are needed to confirm the above findings.

6. REFERENCES
[2]. L. Breiman, "Random Forests", Machine Learning, vol. 45(1), pp. 5-32, 2001.
[3]. F. Esposito, D. Malerba, and G. Semeraro, "A Comparative Analysis of Methods for Pruning Decision Trees", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19(5), pp. 476-491, 1997.
[4]. J. Han and M. Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann Publishers, 2004.
[5]. WEKA: http://www.cs.waikato.ac.nz/ml/weka.
[7]. Random Forests by Leo Breiman and Adele Cutler: http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm.
[8]. G. Biau, L. Devroye, and G. Lugosi, "Consistency of Random Forests and Other Averaging Classifiers", Journal of Machine Learning Research, 2008.
[9]. J.R. Quinlan, "Induction of Decision Trees", Machine Learning, vol. 1, pp. 81-106, 1986.
[10]. F. Livingston, "Implementation of Breiman's Random Forest Machine Learning Algorithm", Machine Learning Journal, 2008.
[11]. J.R. Quinlan, "Simplifying Decision Trees", International Journal of Human-Computer Studies, vol. 51, pp. 497-510, 1999.
[12]. N. Landwehr, M. Hall, and E. Frank, "Logistic Model Trees", Machine Learning, vol. 59(1-2), pp. 161-205, 2005.
[13]. N. Lavesson and P. Davidsson, "A Multi-Dimensional Measure Function for Classifier Performance", 2nd IEEE International Conference on Intelligent Systems, pp. 508-513, 2004.