20 Cost Sensitive Learning
compute estimated probabilities p̂' that are correct given a different base rate b'. From this point of view, the theorem has a remarkable aspect. It lets us use a classifier learned from a training set drawn from one probability distribution on a test set drawn from a different probability distribution. The theorem thus relaxes one of the most fundamental assumptions of almost all research on machine learning, that training and test sets are drawn from the same population.

The insight in the proof is the introduction of the variable e that is the ratio of P(x|j = 0) and P(x|j = 1). If we try to compute the actual values of these probabilities, we find that we have more variables to solve for than we have simultaneous equations. Fortunately, all we need to know for any particular example x is the ratio e.

The special case of Theorem 2 where p' = 0.5 was recently worked out independently by Weiss and Provost [2001]. The case where b = 0.5 is also interesting. Suppose that we do not know the base rate of positive examples at the time we learn a classifier. Then it is reasonable to use a training set with b = 0.5. Theorem 2 says how to compute probabilities p' later that are correct given that the population of test examples has base rate b'. Specifically,

    p' = b'(p − p/2) / (1/2 − p/2 + b'p − b'/2) = p / (p + (1 − p)(1 − b')/b').

This function of p and b' is plotted in Figure 1.

[Figure 1: p' as a function of p and b', when b = 0.5.]
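As a concrete illustration of this adjustment (not part of the paper's presentation; the function name and the use of NumPy are choices of this sketch), the formula above can be applied directly to a vector of estimates produced by a classifier trained on a balanced set:

```python
import numpy as np

def adjust_for_base_rate(p, b_new):
    """Correct probability estimates p from a classifier trained with base
    rate b = 0.5 so that they are valid when the true base rate of positive
    examples is b_new, using p' = p / (p + (1 - p)(1 - b')/b')."""
    p = np.asarray(p, dtype=float)
    return p / (p + (1.0 - p) * (1.0 - b_new) / b_new)

# An uninformative estimate of 0.5 maps to the new base rate itself:
print(adjust_for_base_rate([0.2, 0.5, 0.9], b_new=0.1))
```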
Using Theorem 2 as a lemma, we can now prove Theorem 1 with a slight change of notation.

Theorem 1: To make a target probability threshold p* correspond to a given probability threshold p0, the number of negative training examples should be multiplied by

    (p* / (1 − p*)) · ((1 − p0) / p0).

Proof: We want to compute an adjusted base rate b' such that for a classifier trained using this base rate, an estimated probability p' = p0 is equivalent to the estimated probability p = p* obtained with the original base rate b. Solving the equation of Theorem 2 for b' gives

    b' = p'b(1 − p) / (p − pb − p'p + p'b)

so

    (1 − b')/b' = (p − pb − p'p + p'b − p'b + p'bp) / (p'b(1 − p)) = p(1 − b)(1 − p') / (p'b(1 − p)).

Let n be the original number of negative training examples and n' the new number, with the number of positive examples left unchanged. Therefore

    n'/n = [p(1 − b)(1 − p') / (p'b(1 − p))] · [b / (1 − b)] = p(1 − p') / (p'(1 − p)).

Setting p = p* and p' = p0 gives the multiplier stated in the theorem. Note that the effective cardinality of the subset of negative training examples must be changed in a way that does not change the distribution of examples within this subset.
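A minimal sketch of how Theorem 1 might be applied in practice, assuming label 1 marks the positive class and that uniform random subsampling or duplication of negatives is acceptable (these choices, and the function names, are illustrative rather than taken from the paper):

```python
import numpy as np

def negative_multiplier(p_star, p0=0.5):
    """Theorem 1: factor by which to multiply the number of negative
    training examples so that threshold p0 on the modified training set
    corresponds to the target threshold p_star on the original one."""
    return (p_star / (1.0 - p_star)) * ((1.0 - p0) / p0)

def resample_negatives(X, y, p_star, p0=0.5, seed=None):
    """Keep all positives (label 1); subsample or duplicate negatives
    (label 0) uniformly at random by the Theorem 1 multiplier, so the
    distribution within the negative subset is preserved in expectation."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    n_new = int(round(negative_multiplier(p_star, p0) * len(neg)))
    if n_new <= len(neg):
        keep = rng.choice(neg, size=n_new, replace=False)
    else:
        extra = rng.choice(neg, size=n_new - len(neg), replace=True)
        keep = np.concatenate([neg, extra])
    idx = np.concatenate([pos, keep])
    return X[idx], y[idx]
```

For example, with p* = 0.2 and p0 = 0.5 the multiplier is 0.25, so three quarters of the negative examples would be discarded.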
4 Effects of changing base rates
Changing the training set prevalence of positive and negative examples is a common method of making a learning algorithm cost-sensitive. A natural question is what effect such a change has on the behavior of standard learning algorithms. Separately, many researchers have proposed duplicating or discarding examples when one class of examples is rare, on the assumption that standard learning methods perform better when the prevalence of different classes is approximately equal [Kubat and Matwin, 1997; Japkowicz, 2000]. The purpose of this section is to investigate this assumption.

4.1 Changing base rates and Bayesian learning
Given an example x, a Bayesian classifier applies Bayes' rule to compute the probability of each class j as P(j|x) = P(x|j)P(j)/P(x). Typically P(x|j) is computed by a function learned from a training set, P(j) is estimated as the training set frequency of class j, and P(x) is computed indirectly by solving the equation Σ_j P(j|x) = 1.
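The indirect computation of P(x) is just a normalization. A small sketch, with the learned class-conditional models passed in as callables (an illustrative interface, not one defined in the paper):

```python
def posterior(x, likelihoods, priors):
    """Bayesian classification: P(j|x) is proportional to P(x|j) * P(j).
    P(x) is handled indirectly, by normalizing so the posteriors sum to 1.

    likelihoods: dict mapping class j to a callable returning P(x|j)
    priors:      dict mapping class j to its base rate P(j)
    """
    scores = {j: likelihoods[j](x) * priors[j] for j in priors}
    z = sum(scores.values())                 # plays the role of P(x)
    return {j: s / z for j, s in scores.items()}

# Changing the class frequencies in the training set would only change
# `priors`; the learned class models passed in as `likelihoods` are untouched.
```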
A Bayesian learning method essentially learns a model P(x|j) of each class j separately. If the frequency of a class is changed in the training set, the only change is to the estimated base rate P(j) of each class. Therefore there is little reason to expect the accuracy of decision-making with a Bayesian classifier to be higher with any particular base rates.

Naive Bayesian classifiers are the most important special case of Bayesian classification. A naive Bayesian classifier is based on the assumption that within each class, the values of the attributes of examples are independent. It is well-known that these classifiers tend to give inaccurate probability estimates [Domingos and Pazzani, 1996].
Given an example x, suppose that a naive Bayesian classifier computes N(x) as its estimate of P(j = 1|x). Usually N(x) is too extreme: for most x, either N(x) is close to 0 and then N(x) < P(j = 1|x), or N(x) is close to 1 and then N(x) > P(j = 1|x).

However, the ranking of examples by naive Bayesian classifiers tends to be correct: if N(x) < N(y) then P(j = 1|x) < P(j = 1|y). This fact suggests that given a cost-sensitive application where optimal decision-making uses the probability threshold p*, one should empirically determine a different threshold p' such that N(x) ≥ p' is equivalent to P(j = 1|x) ≥ p*. This procedure is likely to improve the accuracy of decision-making, while changing the proportion of negative examples using Theorem 1 in order to use the threshold 0.5 is not.
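One way to determine such a threshold empirically, assuming a labeled validation set and taking minimum empirical cost as the selection rule (both assumptions of this sketch, not prescriptions of the paper), is:

```python
import numpy as np

def empirical_threshold(scores, labels, cost_fp, cost_fn):
    """Choose the cutoff t on naive Bayes scores N(x) that minimizes the
    empirical cost on a labeled validation set, so that deciding 'positive'
    when N(x) >= t approximates deciding P(j=1|x) >= p*."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    best_t, best_cost = 0.5, np.inf
    for t in np.unique(np.concatenate(([0.0, 1.0], scores))):
        decide_pos = scores >= t
        cost = cost_fp * np.sum(decide_pos & (labels == 0)) \
             + cost_fn * np.sum(~decide_pos & (labels == 1))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```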
4.2 Decision tree growing
We turn our attention now to standard decision tree learning methods, which have two phases. In the first phase a tree is grown top-down, while in the second phase nodes are pruned from the tree. We discuss separately the effect on each phase of changing the proportion of negative and positive training examples.

A splitting criterion is a metric applied to an attribute that measures how homogeneous the induced subsets are if a training set is partitioned based on the values of this attribute. Consider a discrete attribute A that has values A = a_1 through A = a_m for some m ≥ 2. In the two-class case, standard splitting criteria have the form

    I(A) = Σ_{k=1}^{m} P(A = a_k) f(p_k, 1 − p_k)

where p_k = P(j = 1|A = a_k) and all probabilities are frequencies in the training set to be split based on A. The function f(p, 1 − p) measures the impurity or heterogeneity of each subset of training examples. All such functions are qualitatively similar, with a unique maximum at p = 0.5 and equal minima at p = 0 and p = 1.
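A short sketch of this criterion for a discrete attribute, with two of the impurity functions discussed below passed in as f (the representation of the data as parallel lists is an illustrative choice):

```python
import math
from collections import defaultdict

def split_criterion(attr_values, labels, impurity):
    """I(A) = sum_k P(A = a_k) * f(p_k, 1 - p_k), where p_k is the fraction
    of positive examples (label 1) among training examples with A = a_k."""
    by_value = defaultdict(list)
    for a, y in zip(attr_values, labels):
        by_value[a].append(y)
    n = len(labels)
    return sum((len(ys) / n) * impurity(sum(ys) / len(ys))
               for ys in by_value.values())

def km_impurity(p):      # 2 * sqrt(p(1 - p)), suggested by Kearns and Mansour
    return 2.0 * math.sqrt(p * (1.0 - p))

def gini_impurity(p):    # 2 * p(1 - p), the two-class Gini index
    return 2.0 * p * (1.0 - p)
```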
Drummond and Holte [2000] have shown that for two-valued attributes the impurity function 2√(p(1 − p)) suggested by Kearns and Mansour [1996] is invariant to changes in the proportion of different classes in the training data. We prove here a more general result that applies to all discrete-valued attributes and that shows that related impurity functions, including the Gini index [Breiman et al., 1984], are not invariant to base rate changes.

Theorem 3: Suppose f(p, 1 − p) = α(p(1 − p))^β where α > 0 and β > 0. For any collection of discrete-valued attributes, the attribute that minimizes I(A) using f is the same regardless of changes in the base rate P(j = 1) of the training set if β = 0.5, and not otherwise in general.
Proof: For any attribute A, by definition

    I(A) = α Σ_{k=1}^{m} P(a_k) (P(j = 1|a_k) P(j = 0|a_k))^β

where a_1 through a_m are the possible values of A. So by Bayes' rule, I(A)/α is

    Σ_k P(a_k) (P(a_k|j = 1) P(j = 1) / P(a_k))^β (P(a_k|j = 0) P(j = 0) / P(a_k))^β.

Grouping the P(a_k) factors for each k gives that I(A)/α is

    Σ_k P(a_k)^{1 − 2β} (P(a_k|j = 1) P(j = 1) P(a_k|j = 0) P(j = 0))^β.

Now the base rate factors can be brought outside the sum, so I(A) is α(P(j = 1) P(j = 0))^β times the sum

    Σ_k P(a_k)^{1 − 2β} (P(a_k|j = 1) P(a_k|j = 0))^β.        (3)

Because α(P(j = 1) P(j = 0))^β is constant for all attributes, the attribute A for which I(A) is minimum is determined by the minimum of (3). If 2β = 1 then (3) depends only on P(a_k|j = 1) and P(a_k|j = 0), which do not depend on the base rates. Otherwise, (3) is different for different base rates because

    P(a_k) = P(a_k|j = 1) P(j = 1) + P(a_k|j = 0) P(j = 0)

unless the attribute A is independent of the class j, that is P(a_k|j = 1) = P(a_k|j = 0) for 1 ≤ k ≤ m.

The sum (3) has its maximum value 1 if A is independent of j. As desired, the sum is smaller otherwise, if A and j are correlated and hence splitting on A is reasonable.

Theorem 3 implies that changing the proportion of positive or negative examples in the training set has no effect on the structure of the tree if the decision tree growing method uses the 2√(p(1 − p)) impurity criterion. If the algorithm uses a different criterion, such as the C4.5 entropy measure, the effect is usually small, because all impurity criteria are similar. The experimental results of Drummond and Holte [2000] and Dietterich et al. [1996] show that the 2√(p(1 − p)) criterion normally leads to somewhat smaller unpruned decision trees, sometimes leads to more accurate trees, and never leads to much less accurate trees. Therefore we can recommend its use, and we can conclude that regardless of the impurity criterion, applying Theorem 1 is not likely to have much influence on the growing phase of decision tree learning.
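A quick numerical illustration of Theorem 3, using made-up class-conditional distributions for two binary attributes (the numbers are arbitrary): with β = 0.5 the sum (3) is the same under any base rate, while with β = 1 (a Gini-style criterion) it changes.

```python
def criterion_sum(p_a_given_pos, p_a_given_neg, base_rate, beta):
    """The sum (3): sum_k P(a_k)^(1 - 2*beta) * (P(a_k|j=1) * P(a_k|j=0))^beta,
    which determines the ranking of attributes under I(A)."""
    total = 0.0
    for p1, p0 in zip(p_a_given_pos, p_a_given_neg):
        p_a = base_rate * p1 + (1.0 - base_rate) * p0
        total += (p_a ** (1.0 - 2.0 * beta)) * ((p1 * p0) ** beta)
    return total

# Two made-up binary attributes, described by (P(a_k|j=1), P(a_k|j=0)).
attr_A = ([0.8, 0.2], [0.3, 0.7])
attr_B = ([0.6, 0.4], [0.2, 0.8])
for beta in (0.5, 1.0):
    for b in (0.5, 0.1):                     # two different base rates P(j=1)
        sums = [criterion_sum(p1, p0, b, beta) for p1, p0 in (attr_A, attr_B)]
        print(f"beta={beta} base_rate={b} sums={sums}")
# With beta = 0.5 the sums do not change with the base rate;
# with beta = 1.0 they do, so the comparison between attributes can change.
```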
4.3 Decision tree pruning
Standard methods for pruning decision trees are highly sensitive to the prevalence of different classes among training examples. If all classes except one are rare, then C4.5 often prunes the decision tree down to a single node that classifies all examples as members of the common class. Such a classifier is useless for decision-making if failing to recognize an example in a rare class is an expensive error.

Several papers have examined recently the issue of how to obtain good probability estimates from decision trees [Bradford et al., 1998; Provost and Domingos, 2000; Zadrozny and Elkan, 2001b]. It is clear that it is necessary to use a smoothing method to adjust the probability estimates at each leaf of a decision tree. It is not so clear what pruning methods are best.

The experiments of Bauer and Kohavi [1999] suggest that no pruning is best when using a decision tree with probability smoothing. The overall conclusion of Bradford et al. [1998] is that the best pruning is either no pruning or what they call "Laplace pruning." The idea of Laplace pruning is:

1. Do Laplace smoothing: If n training examples reach a node, of which k are positive, let the estimate at this node of P(j = 1|x) be (k + 1)/(n + 2).

2. Compute the expected loss at each node using the smoothed probability estimates, the cost matrix, and the training set.

3. If the expected loss at a node is less than the sum of the expected losses at its children, prune the children.
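A compact sketch of these three steps for a two-class tree; the node representation, the bottom-up order of application, and the weighting of each node's loss by the number of training examples reaching it are interpretations made for this sketch rather than details given in the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    n: int                                   # training examples reaching the node
    k: int                                   # of which k are positive
    children: List["Node"] = field(default_factory=list)

def smoothed_p(node):
    """Step 1, Laplace smoothing: the estimate of P(j=1|x) at this node."""
    return (node.k + 1) / (node.n + 2)

def expected_loss(node, cost):
    """Step 2: expected loss of the cheaper prediction at this node, weighted
    by node.n; cost[i][j] is the cost of predicting i when the true class is j."""
    p1 = smoothed_p(node)
    loss_if_pred_0 = cost[0][0] * (1 - p1) + cost[0][1] * p1
    loss_if_pred_1 = cost[1][0] * (1 - p1) + cost[1][1] * p1
    return node.n * min(loss_if_pred_0, loss_if_pred_1)

def laplace_prune(node, cost):
    """Step 3, applied bottom-up: prune the children when the parent's
    expected loss is lower than the sum of the children's."""
    for child in node.children:
        laplace_prune(child, cost)
    if node.children and expected_loss(node, cost) < sum(
            expected_loss(c, cost) for c in node.children):
        node.children = []
    return node
```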
We can show intuitively that Laplace pruning is similar to no pruning. In the absence of probability smoothing, the expected loss at a node is always greater than or equal to the sum of the expected losses at its children. Equality holds only if the optimal predicted class at each child is the same as the optimal predicted class at the parent. Therefore, in the absence of smoothing, step (3) cannot change the meaning of a decision tree, i.e. the classes predicted by the tree, so Laplace pruning is equivalent to no pruning.

With probability smoothing, if the expected loss at a node is less than the sum of the expected losses at its children, the difference must be caused by smoothing, so without smoothing there would presumably be equality. So pruning the children is still only a simplification that leaves the meaning of the tree unchanged. Note that the effect of Laplace smoothing is small at internal tree nodes, because at these nodes typically k >> 1 and n >> 2.

In summary, growing a decision tree can be done in a cost-insensitive way. When using a decision tree to estimate probabilities, it is preferable to do no pruning. If costs are example-dependent, then decisions should be made using smoothed probability estimates and Equation (1). If costs are fixed, i.e. there is a single well-defined cost matrix, then each node in the unpruned decision tree can be labeled with the optimal predicted class for that node. If all the leaves under a certain node are labeled with the same class, then the subtree under that node can be eliminated. This simplification makes the tree smaller but does not change its predictions.
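The simplification just described is a post-order traversal; a sketch under an illustrative node representation in which each node stores its optimal predicted class:

```python
class TreeNode:
    def __init__(self, label, children=None):
        self.label = label                   # optimal predicted class here
        self.children = children or []

def collapse_pure_subtrees(node):
    """Post-order traversal: if every leaf below `node` predicts the same
    class, delete the subtree below it. The tree shrinks, but the class it
    predicts for any example does not change. Returns the set of leaf labels."""
    if not node.children:
        return {node.label}
    labels = set()
    for child in node.children:
        labels |= collapse_pure_subtrees(child)
    if len(labels) == 1:
        node.label = next(iter(labels))
        node.children = []
    return labels
```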
5 Conclusions
This paper has reviewed the basic concepts behind optimal learning and decision-making when different misclassification errors cause different losses. For the two-class case, we have shown rigorously how to increase or decrease the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non cost-sensitive learning method. However, we have investigated the behavior of Bayesian and decision tree learning methods, and concluded that changing the balance of negative and positive training examples has little effect on learned classifiers. Accordingly, the recommended way of using one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to use Equation (1) or Equation (2) directly, after smoothing probability estimates and/or adjusting the threshold of Equation (2) empirically if necessary.

References
[Bauer and Kohavi, 1999] Eric Bauer and Ron Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36:105–139, 1999.
[Bradford et al., 1998] J. Bradford, C. Kunz, R. Kohavi, C. Brunk, and C. Brodley. Pruning decision trees with misclassification costs. In Proceedings of the European Conference on Machine Learning, pages 131–136, 1998.
[Breiman et al., 1984] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, Belmont, California, 1984.
[Dietterich et al., 1996] T. G. Dietterich, M. Kearns, and Y. Mansour. Applying the weak learning framework to understand and improve C4.5. In Proceedings of the Thirteenth International Conference on Machine Learning, pages 96–104. Morgan Kaufmann, 1996.
[Domingos and Pazzani, 1996] Pedro Domingos and Michael Pazzani. Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In Proceedings of the Thirteenth International Conference on Machine Learning, pages 105–112. Morgan Kaufmann, 1996.
[Drummond and Holte, 2000] Chris Drummond and Robert C. Holte. Exploiting the cost (in)sensitivity of decision tree splitting criteria. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 239–246, 2000.
[Japkowicz, 2000] N. Japkowicz. The class imbalance problem: Significance and strategies. In Proceedings of the International Conference on Artificial Intelligence, Las Vegas, June 2000.
[Kearns and Mansour, 1996] M. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In Proceedings of the Annual ACM Symposium on the Theory of Computing, pages 459–468. ACM Press, 1996.
[Kubat and Matwin, 1997] M. Kubat and S. Matwin. Addressing the curse of imbalanced training sets: One-sided sampling. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 179–186. Morgan Kaufmann, 1997.
[Margineantu, 2000] Dragos Margineantu. On class probability estimates and cost-sensitive evaluation of classifiers. In Workshop Notes, Workshop on Cost-Sensitive Learning, International Conference on Machine Learning, June 2000.
[Michie et al., 1994] D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.
[Provost and Domingos, 2000] Foster Provost and Pedro Domingos. Well-trained PETs: Improving probability estimation trees. Technical Report CDER #00-04-IS, Stern School of Business, New York University, 2000.
[Weiss and Provost, 2001] Gary M. Weiss and Foster Provost. The effect of class distribution on classifier learning. Technical Report ML-TR 43, Department of Computer Science, Rutgers University, 2001.
[Zadrozny and Elkan, 2001a] Bianca Zadrozny and Charles Elkan. Learning and making decisions when costs and probabilities are both unknown. Technical Report CS2001-0664, Department of Computer Science and Engineering, University of California, San Diego, January 2001.
[Zadrozny and Elkan, 2001b] Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, 2001. To appear.