Location via proxy:   [ UP ]  
[Report a bug]   [Manage cookies]                

Stability and Generalization: CMAP, Ecole Polytechnique F-91128 Palaiseau, FRANCE

Download as pdf or txt
Download as pdf or txt
You are on page 1of 28

Journal of Machine Learning Research 2 (2002) 499-526

Submitted 7/01; Published 3/02

Stability and Generalization


Olivier Bousquet
CMAP, Ecole Polytechnique F-91128 Palaiseau, FRANCE bousquet@cmapx.polytechnique.fr

Andr Elissee e
BIOwulf Technologies 305 Broadway, New-York, NY 10007

andre.elisseeff@biowulf.com

Editor: Dana Ron

Abstract
We dene notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classication one when the classier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classication.

1. Introduction
A key issue in the design of ecient Machine Learning systems is the estimation of the accuracy of learning algorithms. Among the several approaches that have been proposed to this problem, one of the most prominent is based on the theory of uniform convergence of empirical quantities to their mean (see e.g. Vapnik, 1982). This theory provides ways to estimate the risk (or generalization error) of a learning system based on an empirical measurement of its accuracy and a measure of its complexity, such as the Vapnik-Chervonenkis (VC) dimension or the fat-shattering dimension (see e.g. Alon et al., 1997). We explore here a dierent approach which is based on sensitivity analysis. Sensitivity analysis aims at determining how much the variation of the input can inuence the output of a system.1 It has been applied to many areas such as statistics and mathematical programming. In the latter domain, it is often referred to as perturbation analysis (see Bonnans and Shapiro, 1996, for a survey). The motivation for such an analysis is to design robust systems that will not be aected by noise corrupting the inputs. In this paper, the objects of interest are learning algorithms. They take as input a learning set made of instance-label pairs and output a function that maps instances to the corresponding labels. The sensitivity in that case is thus related to changes of the outcome of the algorithm when the learning set is changed. There are two sources of randomness such algorithms have to cope with: the rst one comes from the sampling mechanism used to generate the learning set and the second one is due to noise in the measurements (on the instance and/or label). In contrast to standard approaches to sensitivity analysis, we mainly focus on the sampling randomness and we thus are interested in how changes in the composition of the learning set inuence the function produced by the algorithm. The outcome of such an approach is a principled way of getting bounds on the dierence between
1. For a qualitative discussion about sensitivity http://sensitivity-analysis.jrc.cec.eu.int/ analysis with links to other resources see e.g.

c 2002 Olivier Bousquet and Andr Elissee. e

Bousquet and Elisseeff

empirical and generalization error. These bounds are obtained using powerful statistical tools known as concentration inequalities. The latter are the mathematical device corresponding to the following statement, from Talagrand (1996): A random variable that depends (in a smooth way) on the inuence of many independent variables (but not too much on any of them) is essentially constant. The expression essentially constant actually means that the random variable will have, with high probability, a value close to its expected value. We will apply these inequalities to the random variable we are interested in, that is, the dierence between an empirical measure of error and the true generalization error. We will see that this variable has either a zero expectation or it has a nice property: the condition under which it concentrates around its expectation implies that its expectation is close to zero. That means that if we impose conditions on the learning system such that the dierence between the empirical error and the true generalization error is roughly constant, then this constant is zero. This observation and the existence of concentration inequalities will allow us to state exponential bounds on the generalization error of a stable learning system. The outline of the paper is as follows: after reviewing previous work in the area of stability analysis of learning algorithms, we introduce three notions of stability (Section 3) and derive bounds on the generalization error of stable learning systems (Section 4). In Section 5, we show that many existing algorithms such as SVM for classication and regression, ridge regression or variants of maximum relative entropy discrimination do satisfy the stability requirements. For each of these algorithms, it is then possible to derive original bounds which have many attractive properties. Previous work It has long been known that when trying to estimate an unknown function from data, one needs to nd a tradeo between bias and variance.2 Indeed, on one hand, it is natural to use the largest model in order to be able to approximate any function, while on the other hand, if the model is too large, then the estimation of the best function in the model will be harder given a restricted amount of data. Several ideas have been proposed to ght against this phenomenon. One of them is to perform estimation in several models of increasing size and then to choose the best estimator based on a complexity penalty (e.g. Structural Risk Minimization). This allows to control the complexity while allowing to use a large model. This technique is somewhat related to regularization procedures that we will study in greater detail in subsequent sections. Another idea is to use statistical procedures to reduce the variance without altering the bias. One such technique is the bagging approach of Breiman (1996a) which consists in averaging several estimators built from random subsamples of the data. Although it is generally accepted that having a low variance (or a high stability in our terminology) is a desirable property for a learning algorithm, there are few quantitative results relating the generalization error to the stability of the algorithm with respect to changes in the training set. The rst such results were obtained by Devroye, Rogers and Wagner in the seventies (see Rogers and Wagner, 1978, Devroye and Wagner, 1979a,b). 
Rogers and Wagner (1978) rst showed that the variance of the leave-one-out error can be upper bounded by what Kearns and Ron (1999) later called hypothesis stability. This quantity measures how much the function learned by the algorithm will change when one point in the training set is removed. The main distinctive feature of their approach is that, unlike VC-theory based approaches where the only property of the algorithm that matters is the size of the space to be searched, it focuses on how the algorithm searches the space. This explains why it has been successfully applied to the k-Nearest Neighbors algorithm (k-NN) whose search space is known to have an innite VC-dimension. Indeed, results from VC-theory
2. We deliberately do not provide a precise denition of bias and variance and resort to common intuition about these notions. In broad terms, the bias is the best error that can be achieved and the variance is the dierence between the typical error and the best error.

500

Stability and Generalization

would not be of any help in that case since they are meaningful when the learning algorithm performs minimization of the empirical error in the full function space. However, the k-NN algorithm is very stable because of its locality. This allowed Rogers and Wagner to get an upper bound on the dierence between the leave-one-out error and the generalization error of such a classier. These results were later extended to obtain bounds on the generalization error of k-local rules in Devroye and Wagner (1979a), and of potential rules in Devroye and Wagner (1979b). In the early nineties, concentration inequalities became popular in the probabilistic analysis of algorithms, due to the work of McDiarmid (1989) and started to be used as tools to derive generalization bounds for learning algorithms by Devroye (1991). Building on this technique, Lugosi and Pawlak (1994) obtained new bounds for the k-NN, kernel rules and histogram rules. These bounds used smoothed estimates of the error which estimate the posterior probability of error instead of simply counting the errors. This smoothing is very much related to the use of realvalued classiers and we will see that it is at the heart of the applicability of stability analysis to classication algorithms. A comprehensive account of the application of McDiarmids inequality to obtain bounds for the leave-one-out error or the smoothed error of local classiers can be found in Devroye et al. (1996). Independently from this theoretical analysis, practical methods have been developed to deal with instability of learning algorithms. In particular, Breiman (1996a,b) introduced the Bagging technique which is presented as a method to combine single classiers in such a way that the variance of the overall combination is decreased. However, there is no theoretical guarantee that this variance reduction will bring an improvement on the generalization error. Finally, a more recent work has shown an interesting connection between stability and VCtheory. Kearns and Ron (1999) derived what they called sanity-check bounds. In particular, they proved that an algorithm having a search space of nite VC-dimension, is stable in the sense that its stability (in a sense to be dened later) is bounded by its VC-dimension. Thus using the stability as a complexity measure does not give worse bounds than using the VC-dimension. The work presented here follows and extends the stability approach of Lugosi and Pawlak (1994) in that we derive exponential upper bounds on the generalization error based on notions of stability. It is based on earlier results presented in Bousquet and Elissee (2001). We consider both the leaveone-out error and the empirical error as possible estimates of the generalization error. We prove stability bounds for a large class of algorithms which includes the Support Vector Machines, both in the regression and in the classication cases. Also we generalize some earlier results from Devroye and Wagner.

2. Preliminaries
We rst introduce some notation and then the main tools we will use to derive inequalities. 2.1 Notations X and Y R being respectively an input and an output space, we consider a training set S = {z1 = (x1 , y1 ), .., zm = (xm , ym )} , of size m in Z = X Y drawn i.i.d. from an unknown distribution D. A learning algorithm is a function A from Z m into F Y X which maps a learning set S onto a function AS from X to Y. To avoid complex notation, we consider only deterministic algorithms. It is also assumed that the algorithm A is symmetric with respect to S, i.e. it does not depend on the order of the elements in the training set. Furthermore, we assume that all functions are measurable and all sets are countable which does not limit the interest of the results presented here. Given a training set S of size m, we will build, for all i = 1. . . . , m, modied training sets as follows:
501

Bousquet and Elisseeff

By removing the i-th element S \i = {z1 , . . . , zi1 , zi+1 , . . . , zm } . By replacing the i-th element S i = {z1 , . . . , zi1 , zi , zi+1 , . . . , zm } . where the replacement example zi is assumed to be drawn from D and is independent from S. Unless they are clear from context, the random variables over which we take probabilities and expectation will be specied in subscript. We thus introduce the notation PS [.] and ES [.] to denote respectively the probability and the expectation with respect to the random draw of the sample S of size m (drawn according to Dm ). Similarly, Pz [.] and Ez [.] will denote the probability and expectation when z is sampled according to D. In order to measure the accuracy of the predictions of the algorithm, we will use a cost function c : Y Y R+ . The loss of an hypothesis f with respect to an example z = (x, y) is then dened as (f, z) = c(f (x), y) . We will consider several measures of the performance of an algorithm. The main quantity we are interested in is the risk or generalization error. This is a random variable depending on the training set S and it is dened as R(A, S) = Ez [ (AS , z)] . Unfortunately, R cannot be computed since D is unknown. We thus have to estimate it from the available data S. We will consider several estimators for this quantity. The simplest estimator is the so-called empirical error (also known as resubstitution estimate) dened as Remp (A, S) = 1 m
m

(AS , zi ) .
i=1

Another classical estimator is the leave-one-out error (also known as deleted estimate) dened as Rloo (A, S) = 1 m
m

(AS \i , zi ) .
i=1

When the algorithm is clear from context, we will simply write R(S), Remp (S) and Rloo (S). We will often simplify further the notations when the training sample is clear from context. In particular, we will use the following shorthand notations R R(A, S), Remp Remp (A, S), and Rloo Rloo (A, S). 2.2 Main Tools The study we describe here intends to bound the dierence between empirical and generalization error for specic algorithms. For any > 0 our goal is to bound the term PS [|Remp (A, S) R(A, S)| > ] , which diers from what is usually studied in learning theory PS sup |Remp (f ) R(f )| >
f F

(1)

(2)

502

Stability and Generalization

Indeed, we do not want to have a bound that holds uniformly over the whole space of possible functions since we are interested in algorithms that may not explore it. Moreover we may not even have a way to describe this space and assess its size. This explains why we want to focus on (1). Our approach is based on inequalities that relate moments of multi-dimensional random functions to their rst order nite dierences. The rst one is due to Steele (1986) and provides bounds for the variance. The second one is a version of Azumas inequality due to McDiarmid (1989) and provides exponential bounds but its assumptions are more restrictive. Theorem 1 (Steele, 1986) Let S and S i dened as above, let F : Z m R be any measurable function, then m 1 2 2 ES (F (S) ES [F (S)]) ES,zi F (S) F (S i ) 2 i=1 Theorem 2 (McDiarmid, 1989) Let S and S i dened as above, let F : Z m R be any measurable function for which there exists constants ci (i = 1, . . . , m) such that sup
SZ m ,zi Z

F (S) F (S i ) ci ,
2

then PS [F (S) ES [F (S)] ] e2

Pn
i=1

c2 i

3. Dening the Stability of a Learning Algorithm


There are many ways to dene and quantify the stability of a learning algorithm. The natural way of making such a denition is to start from the goal: we want to get bounds on the generalization error of specic learning algorithm and we want these bounds to be tight when the algorithm satises the stability criterion. As one may expect, the more restrictive a stability criterion is, the tighter the corresponding bound will be. In the learning model we consider, the randomness comes from the sampling of the training set. We will thus consider stability with respect to changes in the training set. Moreover, we need an easy to check criterion so that we will consider only restricted changes such as the removal or the replacement of one single example in the training set. Although not explicitly mentioned in their work, the rst such notion was used by Devroye and Wagner (1979a) in order to get bounds on the variance of the error of local learning algorithms. Later, Kearns and Ron (1999) stated it as a denition and gave it a name. We give here a slightly modied version of Kearns and Rons denition that suits our needs. Denition 3 (Hypothesis Stability) An algorithm A has hypothesis stability with respect to the loss function if the following holds i {1, . . . , m}, ES,z [| (AS , z) (AS \i , z)|] . Note that this is the L1 norm with respect to D, so that we can rewrite the above as ES [ (AS , .) (AS \i , .) 1 ] We will also use a variant of the above denition in which instead of measuring the average change, we measure the change at one of the training points. Denition 4 (Pointwise Hypothesis Stability) An algorithm A has pointwise hypothesis stability with respect to the loss function if the following holds i {1, . . . , m}, ES [| (AS , zi ) (AS \i , zi )|] .
503

(3)

(4)

Bousquet and Elisseeff

Another, weaker notion of stability was introduced by Kearns and Ron. It consists of measuring the change in the expected error of the algorithm instead of the average pointwise change. Denition 5 (Error Stability) An algorithm A has error stability with respect to the loss function if the following holds S Z m , i {1, . . . , m}, |Ez [ (AS , z)] Ez [ (AS \i , z)] | , which can also be written S Z m , i {1, . . . , m}, |R(S) R\i (S)| . (6) (5)

Finally, we introduce a stronger notion of stability which will allow to get tight bounds. Moreover we will show that it can be applied to large classes of algorithms. Denition 6 (Uniform Stability) An algorithm A has uniform stability with respect to the loss function if the following holds S Z m , i {1, . . . , m}, (AS , .) (AS \i , .)

(7)

Notice that (3) implies (5) and (7) implies (3) so that uniform stability is the strongest notion. Considered as a function of m, the term will sometimes be denoted by m . We will say that 1 an algorithm is stable when the value of m decreases as m . An algorithm with uniform stability has also the following property: S, zi , | (AS , z) (AS i , z)| | (AS , z) (AS \i , z))| + | (AS i , z) (AS \i , z)| 2 . In other words, stability with respect to the exclusion of one point implies stability with respect to changes of one point. We will assume further that as a function of the sample size, the stability is non-increasing. This will be the case in all our examples. This assumption is not restrictive since its only purpose is to simplify the statement of the theorems (we will always upper bound m1 by m ).

4. Generalization Bounds for Stable Learning Algorithms


We start this section by introducing a useful lemma about the bias of the estimators we study. Lemma 7 For any symmetric learning algorithm A, we have i {1, .., m}: ES [R(A, S) Remp (A, S)] = ES,zi [ (AS , zi ) (AS i , zi )] , and ES R(A, S \i ) Rloo (A, S) = 0 , and ES [R(A, S) Rloo (A, S)] = ES,z [ (AS , z) (AS \i , z)] , Proof For the rst equality, we just need to compute the expectation of Remp (A, S). We have ES [Remp (S)] = 1 m
m m

ES [ (AS , zj )] =
j=1

1 m

ES,zi [ (AS , zj )] ,
j=1

and renaming zj as zi we get, i {1, .., m} ES [Remp (S)] = ES,zi [ (AS i , zi )] ,


504

Stability and Generalization

by the i.i.d. and the symmetry assumptions. This proves the rst equality. Similarly we have ES [Rloo (S)] = 1 m
m

ES [ (AS \i , zi )] =
i=1

1 m

ES,z [ (AS \i , z)] ,


i=1

from which we deduce the second and third equalities.

Remark 8 We notice from the above lemma, comparing the rst and last equalities, that the empirical error and the leave-one-out error dier from the true error in a similar way. It is usually accepted that the empirical error is very much optimistically biased while the leave-one-out error is almost unbiased (due to the second equation of the lemma). However, we will see that for the particular algorithms we have in mind (which display high stability), the two estimators are very close to each other. The similarity of the bounds we will derive for both estimators will be striking. This can be explained intuitively by the fact that we are considering algorithms that do not directly minimize the empirical error but rather a regularized version of it, so that the bias in the empirical error will be reduced. 4.1 Polynomial Bounds with Hypothesis Stability In this section we generalize a lemma from Devroye and Wagner (1979b). Their approach consists in bounding the second order moment of the estimators with the hypothesis stability of the algorithm. For this purpose, one could simply use Theorem 1. However this theorem gives a bound on the variance and we need here the second order moment of the dierence between the error (leave-oneout or empirical) and the generalization error. It turns out that a direct study of this quantity leads to better constants than the use of Theorem 1. Lemma 9 For any learning algorithm A and loss function for any i, j {1, . . . , m}, i = j for the empirical error, ES (R Remp )2 and ES (R Remp )2 M2 + M ES,zi ,z [| (AS , z) (AS i , z)|] 2m +M ES,zi [| (AS , zj ) (AS i , zj )|] +M ES,zi [| (AS , zi ) (AS i , zi )|] , such that 0 c(y, y ) M we have (8)

M2 + 3M ES,zi [| (AS , zi ) (AS i , zi )|] , 2m

and for the leave-one-out error, ES (R Rloo )2 and M2 + 2M ES,zi ,z [| (AS , z) (AS i , z)| + | (AS , z) (AS \i , z)|] . 2m The proof of this lemma is given in the appendix. ES (R Rloo )2 (10) M2 + 3M ES,z [| (AS , z) (AS \i , z)|] , 2m (9)

Remark 10 Notice that Devroye and Wagners work focused on the leave-one-out estimator and on classication. We extend it to regression and to the empirical estimator, which they treated with the following easy-to-prove inequality ES (R Remp )2 2ES (R Rloo )2 + 2M ES [| (AS , zi ) (AS \i , zi )|] , which gives a similar result but with worse constants.
505

Bousquet and Elisseeff

Lets try to explain the various quantities that appear in the upper bounds of the above lemma. We 2 notice that the term M is always present and it cannot be avoided even for a very stable algorithm 2m and somehow corresponds to the bias of the estimator. In Inequality (8), the expectation in the right-hand side corresponds to the following situation: starting from training set S we measure the error at point zi S, then we replace zi S by zi and we again measure the error at zi which is no longer in the training set. Then, in the second inequality of Lemma 9 several dierent quantities appear. They all correspond to comparing the algorithm trained on S and on S i (where zi is replaced by zi ) but the comparison point diers: it is either z, a point which is not part of the training set, or zj , a point of the training set dierent from zi or nally zi . For the leave-one-out error, in (9) we consider the average dierence in error when trained on S and on S \i (where zi has been removed) and in (10), the rst expectation in the right hand side corresponds to the average dierence in error when one point is changed while the second one is the average dierence in error when one point is removed. All these quantities capture a certain aspect of the stability of the algorithm. In order to use the lemma, we need to bound them for specic algorithms. Instead of using all these dierent quantities, we will rather focus on the few notions of stability we introduced and see how they are related. We will see later how they can be computed (or upper bounded) in particular cases. Now that we have a bound on the expected squared deviation of the estimator to the true error, the next step is to use Chebyshevs inequality in order to get a bound which holds with high probability on the deviation. Theorem 11 For any learning algorithm A with hypothesis stability 1 and pointwise hypothesis stability 2 with respect to a loss function such that 0 c(y, y ) M , we have with probability 1 , M 2 + 12M m2 , R(A, S) Remp (A, S) + 2m and M 2 + 6M m1 R(A, S) Rloo (A, S) + . 2m Proof First, notice that for all S and all z, | (AS , z) (AS i , z)| | (AS , z) (AS \i , z)| + | (AS \i , z) (AS i , z)| , so that we get ES,zi [| (AS , zi ) (AS i , zi )|] ES [| (AS , zi ) (AS \i , zi )|] + ES,zi [| (AS \i , zi ) (AS i , zi )|] 22 . We thus get by (8) ES (R Remp )2 Also, we have by (10) M2 + 3M 1 . 2m Now, recall that Chebyshevs inequality gives for a random variable X ES (R Rloo )2 P [X ] E X2
2

M2 + 6M 2 . 2m

which in turn gives that for all > 0, with probability at least 1 , X E [X 2 ] .
506

Stability and Generalization

Applying this to R Remp and R Rloo respectively give the result. As pointed out earlier, there is a striking similarity between the above bounds which seems to support the fact that for a stable algorithm, the two estimators that we are considering have a closely related behavior. In the next section we will see how to use the exponential inequality of Theorem 2 to get better bounds. 4.2 Exponential Bounds with Uniform Stability Devroye and Wagner (1979a) rst proved exponential bounds for k-local algorithms. However, the question of whether their technique can be extended to more general classes of algorithms is a topic for further research. In Devroye et al. (1996) another, more general technique is introduced which relies on concentration inequalities. Inspired by this approach, we will derive exponential bounds for algorithms based on their uniform stability. We will study separately the regression and the classication cases for reasons that will be made clear. 4.2.1 Regression Case A stable algorithm has the property that removing one element in its learning set does not change much of its outcome. As a consequence, the dierence between empirical and generalization error, if thought as a random variable, should have a small variance. If its expectation is small, stable algorithms should then be good candidates for their empirical error to be close to their generalization error. This assertion is formulated in the following theorem: Theorem 12 Let A be an algorithm with uniform stability with respect to a loss function such that 0 (AS , z) M , for all z Z and all sets S. Then, for any m 1, and any (0, 1), the following bounds hold (separately) with probability at least 1 over the random draw of the sample S, ln 1/ R Remp + 2 + (4m + M ) , (11) 2m and ln 1/ R Rloo + + (4m + M ) (12) . 2m Remark 13 This theorem gives tight bounds when the stability scales as 1/m. We will prove that this is the case for several known algorithms in later sections. Proof Lets prove that the conditions of Theorem 2 are veried by the random variables of interest. First we study how these variables change when one training example is removed. We have |R R\i | Ez [| (AS , z) (AS \i , z)|] , and
\i |Remp Remp |

(13)

1 m

j=i

| (AS , zj ) (AS \i , zj )| +

1 | (AS , zi )| m

M . m

507

Bousquet and Elisseeff

Then we upper bound the variation when one training example is changed: |R Ri | |R R\i | + |R\i Ri | 2 . Similarly we can write
i \i i \i |Remp Remp | |Remp Remp | + |Remp Remp | 2 + 2

M . m

however, a closer look reveals that the second factor of 2 is not needed. Indeed, we have
i |Remp Remp |

1 m 1 m +

j=i

| (AS , zj ) (AS i , zj )| +

1 | (AS , zi ) (AS i , zi )| m 1 m | (AS \i , zj ) (AS i , zj )|

j=i

| (AS , zj ) (AS \i , zj )| +

j=i

1 | (AS , zi ) (AS i , zi )| m M 2 + . m

Thus the random variable R Remp satises the conditions of Theorem 2 with ci = 4 + M . m It thus remains to bound the expectation of this random variable which can be done using Lemma 7 and the -stability property: ES [R Remp ] Which yields PS [R Remp > + 2m ] exp ES,zi [| (AS , zi ) (AS i , zi )|]

2 .

ES,zi [| (AS i , zi ) (AS \i , zi )|] + ES,zi [| (AS \i , zi ) (AS , zi )|]

2m 2 (4mm + M )2

Thus, setting the right hand side to , we obtain that with probability at least 1 , R Remp + 2m + (4mm + M ) and thus R Remp + 2m 1 + 2m ln 1/ + M ln 1/ , 2m ln 1/ , 2m

which gives, (11) For the leave-one-out error, we proceed similarly. We have |Rloo Rloo |
\i

1 m

j=i

| (AS \j , zj ) (AS \i,j , zj )| + M , m

1 | (AS \i , zi )| m

m1 + and also

i |Rloo Rloo | 2m1 +

M M 2m + . m m

508

Stability and Generalization

So that Theorem 2 can be applied to R Rloo with ci = 4m + with (13) to deduce PS [R Rloo > + m ] exp 2m

M m.

Then we use Lemma 7 along

2 2

(m(4m ) + M )

which gives (12) by setting the right hand side to and using e1 . Once again, we notice that the bounds for the empirical error and for the leave-one-out error are very similar. As we will see in later sections, this clearly indicates that our method is not at all suited to the analysis of algorithms which simply perform the minimization of the empirical error (which are not stable in the sense dened above). 4.2.2 Classification Case In this section we consider the case where Y = {1, 1} and the algorithm A returns a function AS that maps instances in X to labels in {1, 1}. The cost function is then simply c(AS (x), y) = 1{yAS (x)0} . Thus we see that because of the discrete nature of the cost function, the Uniform Stability of an algorithm with respect to such a cost function can only be = 0 or = 1. In the rst case, it means that the algorithm is always returning the same function. In the second case there is no hope of 1 obtaining interesting bounds since we saw that we need = O( m ) for our bounds to give interesting results. We thus have to proceed in a dierent way. One possible approach is to modify our error estimates so that they become smoother and have higher stability. The idea to smooth error estimators to decrease their variance is not new and it has even been used in conjunction with McDiarmids inequality by Lugosi and Pawlak (1994) in order to derive error bounds for certain algorithms. Lugosi and Pawlak studied algorithms which produce estimates for the distributions P (X|Y = 1) and P (X|Y = +1) and dened analogues of the resubstitution and leave-one-out estimates of the error suited to these algorithms. Here we will take a related, though slightly dierent route. Indeed, we will consider algorithm having a real-valued output. However, we do not require this output to correspond to a posterior probability but it should simply have the correct sign. That is, the label predicted by such an algorithm is the sign of its real-valued output. Of course, a good algorithm will produce outputs whose absolute value somehow represents the condence it has in the prediction. In order to apply the results obtained so far to this setting, we need to introduce some denitions. Denition 14 A real-valued classication algorithm A is a learning algorithm that maps training sets S to functions AS : X R such that the label predicted on an instance x is the sign of AS (x). This class of algorithm includes for instance the classiers produced by SVM or by ensemble methods such as boosting. Notice that the cost function dened above extends to the case where the rst argument is a real number and have the desired properties: it is zero when the algorithm does predict the right label and 1 otherwise. Denition 15 (Classication Stability) A real-valued classication algorithm A has classication stability if the following holds S Z m , i {1, . . . , m}, AS (.) AS \i (.)
509

(14)

Bousquet and Elisseeff

and we denote

We introduce a modied cost function: 1 1 yy / c (y, y ) = 0


(f, z)

for yy 0 for 0 yy for yy

= c (f (x), y) .
m (AS , zi ) , i=1 m (AS \i , zi ) . i=1

Accordingly, we dene the following error estimates


Remp (A, S) =

1 m

and similarly,
Rloo (A, S) =

1 m

The loss will count an error each time the function f gives an output close to zero, the closeness being controlled by . Lemma 16 A real-valued classication algorithm A with classication stability has uniform stability / with respect to the loss function . Proof It is easy to see that c is 1/-Lipschitz with respect to its rst argument and so does denition. Thus we have for all i, all training set S, and all z, |l (AS , z) l (AS \i , z)| = |c (AS (x), y) c (AS \i (x), y)| We can thus apply Theorem 12 with the loss function 1 |AS (x) AS \i (x)| / .

by

and get the following theorem.

Theorem 17 Let A be a real-valued classication algorithm with stability . Then, for all > 0, any m 1, and any (0, 1), with probability at least 1 over the random draw of the sample S,
R Remp + 2

+ 4m + 1

ln 1/ , 2m

(15)

and with probability at least 1 over the random draw of the sample S,
R Rloo +

+ 4m + 1

ln 1/ . 2m

(16)

Proof We apply Theorem 12 to A with the loss function which is bounded by M = 1 and for which the algorithm is /-stable. Moreover, we use the fact that R(AS ) R = Ez [l (AS , z)]. In order to make this result more practically useful, we need a statement that would hold uniformly for all values . The same techniques as in Bartlett (1996) lead to the following result: Theorem 18 Let A be a real-valued classication algorithm with stability and B be some real number. Then, for any m 1, and any (0, 1), with probability at least 1 over the random draw of the sample S,
(0, B], R Remp + 2

e +

4me +1
510

1 2m

ln 1/ +

2 ln ln

eB

(17)

Stability and Generalization

and
(0, B], R Rloo +

e +

4me +1

1 2m

ln 1/ +

2 ln ln

eB

(18)

We defer the proof of this theorem to the appendix. We can thus apply Theorem 18 with a value of which is optimized after having seen the data.

5. Stable Learning Algorithms


As seen in previous sections, our approach allowed to derive bounds on the generalization error from the empirical and leave-one-out errors which depend on the stability of the algorithm. However, we noticed that the bounds we obtain for the two estimators are very similar. This readily implies that the method is suited to the study of algorithms for which the empirical error is close to the leave-one-out error. There is thus no hope to get good bounds for algorithms which simply minimize the empirical error since their empirical error will be very much optimistically biased compared to their leave-one-out error. This means that, in order to be stable in the sense dened above, a learning algorithm has to signicantly depart from an empirical risk minimizer. It thus has to accept a signicant number of training errors (which should however not be larger that the noise level). In order to generalize, these extra training errors will thus be compensated by a decrease of the complexity of the learned function. In some sense, this is exactly what regularization-based algorithm do: they minimize an objective function which is the sum of an empirical error term and a regularizing term which penalizes the complexity of the solution. This explains why our approach is particularly well suited for the analysis of such algorithms. 5.1 Previous Results for k-Local Rules As an illustration of the various notions of stability, we will rst study the case of k-Local Rules for which a large number of results were obtained. A k-Local Rule is a classication algorithm that determines the label of an instance x based on the k closest instances in the training set. The simplest example of such a rule is the k-Nearest Neighbors (k-NN) algorithm which computes the label by a majority vote among the labels of the k nearest instances in the training set. Such an algorithm can be studied as a {0, 1}-valued classier or as a [0, 1]-valued classier if we take into account the result of the vote. We will consider the real-valued version of the k-NN classier and give a result about its stability with respect to dierent loss functions. 1. With respect to the {0, 1}-loss function, the k-NN classier has hypothesis stability 4 m k . 2

This was proven in Devroye and Wagner (1979a). We will not reproduce the proof which is quite technical but notice that a symmetry argument readily gives P [AS (z) = AS \i (z)] k . m

2. With respect to the absolute loss function (c(y, y ) = |y y |), the k-NN classier has only a trivial uniform stability which is the bound on the values of y.

511

Bousquet and Elisseeff

The polynomial bound that can be obtained from hypothesis stability suggests that k should be small if one wants a good bound. This is somehow counter-intuitive since the decision seems more robust to noise when many points are involved in the vote. There exist exponential bounds on the leave-one-out estimate of k-NN for the {0, 1}-loss obtained by Devroye and Wagner (1979a) and for the smoothed error estimate (i.e. with respect to the absolute loss) obtained by Lugosi and Pawlak (1994), and these bounds do not depend on the parameter k (due to a more careful application of McDiarmids inequality suited to the algorithm). We may then wonder in that case whether the polynomial bounds are interesting compared to exponential ones since the latter are sharper and are closer to intuitive interpretation. Despite this example, we believe that in general polynomial bounds could give relevant hints about which feature of the learning algorithm leads to good generalization. In the remainder, we will consider several algorithms that have not been studied from a stability perspective and we will focus on their uniform stability only, which turns out to be quite good. Obtaining results directly for their hypothesis stability remains an open problem. 5.2 Stability of Regularization Algorithms Uniform stability may appear as a strict condition. Actually, we will see in this section that many existing learning methods exhibit a uniform stability which is controlled by the regularization parameter and can thus be very small. 5.2.1 Stability for General Regularizers Recall that (f, z) = c(f (x), y). We assume in this section that F is a convex subset of a linear space. Denition 19 A loss function dened on F Y is -admissible with respect to F if the associated cost function c is convex with respect to its rst argument and the following condition holds y1 , y2 D, y Y, |c(y1 , y ) c(y2 , y )| |y1 y2 | , where D = {y : f F , x X , f (x) = y} is the domain of the rst argument of c. Thus in the case of the quadratic loss for example, this condition is veried if Y is bounded and F is totally bounded, that is there exists M < such that f F , f

M and y Y, |y| M .

We introduce the objective function that the algorithm will minimize: let N : F R+ be a function on F, m 1 (g, zj ) + N (g) , Rr (g) := (19) m j=1 and a modied version (based on a truncated training set),
\i Rr (g) :=

1 m

(g, zj ) + N (g) .
j=i

(20)

Depending on the algorithm N will take dierent forms. To derive stability bounds, we need some general results about the minimizers of (19) and (20). Lemma 20 Let be -admissible with respect to F, and N a functional dened on F such that \i for all training sets S, Rr and Rr have a minimum (not necessarily unique) in F. Let f denote a

512

Stability and Generalization

minimizer in F of Rr , and for i = 1, . . . , m, let f \i denote a minimizer in F of Rr . We have for any t [0, 1], t (21) N (f ) N (f + tf ) + N (f \i ) N (f \i tf ) |f (xi )| , m where f = f \i f . Proof Let us introduce the notation
\i Remp (f ) :=

\i

1 m

(f, zj ) .
j=i

Recall that a convex function g veries: x, y, t [0, 1]


\i

g(x + t(y x)) g(x) t(g(y) g(x)) .

Since c is convex, Remp is convex too and thus, t [0, 1]


\i \i \i \i Remp (f + tf ) Remp (f ) t(Remp (f \i ) Remp (f )) .

We can also get (switching the role of f and f \i ):


\i \i \i \i Remp (f \i tf ) Remp (f \i ) t(Remp (f ) Remp (f \i )) .

Summing the two preceding inequalities yields


\i \i \i \i Remp (f + tf ) Remp (f ) + Remp (f \i tf ) Remp (f \i ) 0 .

(22)

Now, by assumption we have Rr (f ) Rr (f \i \i \i Rr (f ) Rr (f \i + tf ) tf ) 0 0, (23) (24)

so that, summing the two previous inequalities and using (22), we get c(f (xi ), yi ) c((f + tf )(xi ), yi ) + m N (f ) N (f + tf ) + N (f \i ) N (f \i tf ) 0 , and thus, by the -admissibility condition, we get N (f ) N (f + tf ) + N (f \i ) N (f \i tf ) t |f (xi )| . m

In the above lemma, there is no assumption about the space F (apart from being a convex linear \i space) and the regularizer N apart from the existence of minima for Rr and Rr . However, most of the practical regularization-based algorithms work with a space F that is a vector space and with a convex regularizer. We will thus rene our previous result in this particular setting. In order to do this, we need some standard denitions about convex functions which we deferred to Appendix C where most of the material can be found in Rockafellar (1970) and in Gordon (1999). Lemma 21 Under the conditions of Lemma 20, when F is a vector space and N is a proper closed convex function from F to R {, +}, we have dN (f, f \i ) + dN (f \i , f ) when N and are dierentiable.
513

1 m

(f \i , zi ) (f, zi ) d

(.,zi ) (f

\i

, f)

|f (xi )| , m

Bousquet and Elisseeff

Proof We start with the dierentiable case and work with regular divergences. By denition of f and f \i , we have, using (30),
\i \i dRr (f \i , f ) + dR\i (f, f \i ) = Rr (f \i ) Rr (f ) + Rr (f ) Rr (f \i ) =
r

1 1 (f \i , zi ) (f, zi ) . m m

Moreover, by the nonnegativity of divergences, we have dR\i (f, f \i ) + dR\i (f \i , f ) 0 ,


emp emp

which, with the previous equality and the fact that dA+B = dA + dB , gives 1 (f \i , zi ) (f, zi ) d (.,zi ) (f \i , f ) , m and we obtain the rst part of the result. For the second part, we notice that dN (f, f \i ) + dN (f \i , f ) (f \i , zi ) (f, zi ) d (f \i , zi ) (f, zi ) d by the -admissibility condition. The results in this section can be used to derive bounds on the stability of many learning algorithms. Each procedure that can be interpreted as the minimization of a regularized functional can be analyzed with these lemmas. The only thing that will change from one procedure to another is the regularizer N and the cost function c. In the following, we show how to apply these theorems to dierent learning algorithms. 5.2.2 Application to Regularization in Hilbert Spaces Many algorithms such as Support Vector Machines (SVM) or classical regularization networks introduced by Poggio and Girosi (1990) perform the minimization of a regularized objective function where the regularizer is a norm in a reproducing kernel Hilbert space (RKHS): N (f ) = f
2 k (.,zi ) (f \i

, f ) (f \i , zi ) (f, zi ) , , f ) |f \i (xi ) f (xi )| ,

by the nonnegativity of the divergence and thus


(.,zi ) (f \i

where k refers to the kernel (see e.g. Wahba, 2000, or Evgeniou et al., 1999, for denitions). The fundamental property of a RKHS F is the so-called reproducing property which writes f F , x X , f (x) = f, k(x, .) . In particular this gives by Cauchy-Schwarz inequality f F , x X , |f (x)| f
k

k(x, x) .

(25)

We now state a result about the uniform stability of RKHS learning. Theorem 22 Let F be a reproducing kernel Hilbert space with kernel k such that x X , k(x, x) 2 < . Let be -admissible with respect to F. The learning algorithm A dened by AS = arg min
gF

1 m

(g, zi ) + g
i=1

2 k

(26)

has uniform stability with respect to

with 2 2 . 2m

514

Stability and Generalization

Proof We use the proof technique described in previous section. It can be easily checked that when 2 N (.) = . k we have dN (g, g ) = g g 2 . k Thus, Lemma 20 gives 2 f Using (25), we get |f (xi )| f so that f Now we have, by the -admissibility of | (f, z) (f \i , z)| |f (x) f \i (x)| = |f (x)| , which, using (25) again, gives the result. We are now one step away from being able to apply Theorem 12. The only thing that we need is to bound the loss function. Indeed, the -admissibility condition does not ensure the boundedness. However, since we are in a RKHS, we can use the following simple lemma which ensures that if we have an a priori bound on the target values y, then the boundedness condition is satised. Lemma 23 Let A be the algorithm of Theorem 22 where is a loss function associated to a convex cost function c(., .). We denote by B(.) a positive non-decreasing real-valued function such that for all y D. y Y, c(y, y ) B(y) For any training set S, we have f and also z Z, 0 (AS , z) B Moreover, is -admissible where can be taken as = sup
y Y 2 k k k 2 k

|f (xi )| . m k(xi , xi ) f . 2m
k

B(0) , B(0) .

 q
|y|B

sup

B(0)

c (y, y ) . y

Proof We have for f = AS , Rr (f ) Rr (0) = 1 m


m i=1

(0, zi ) B(0) ,

and also Rr (f ) f 2 which gives the rst inequality. The second inequality follows from (25). k The last one is a consequence of the denition of -admissibility.

Example 1 (Stability of bounded SVM regression) Assume k is a bounded kernel, that is k(x, x) 2 and Y = [0, B]. Consider the loss function (f, z) = |f (x) y| = 0 |f (x) y|
515

if |f (x) y| otherwise

Bousquet and Elisseeff

This function is 1-admissible and we can state B(y) = B. The SVM algorithm for regression with a kernel k can be dened as m 1 AS = arg min (g, zi ) + g 2 , k gF m i=1 and we thus get the following stability bound Moreover, by Lemma 23 we have z Z, 0 (AS , z) Plugging the above into Theorem 12 gives the following bound R Remp + 2 + m 22 + B ln 1/ . 2m B 2 . 2m

Note that we consider here SVM without the bias b, which is strictly speaking dierent from the true denition of SVM. The question whether b can be included in such a setting remains open. Example 2 (Stability of soft margin SVM classication) We have Y = {1, 1}. We consider the following loss function (f, z) = (1 yf (x))+ = 1 yf (x) 0 if 1 yf (x) 0 otherwise

which is 1-admissible. From Lemma 20, we deduce that the real-valued classication obtained by the SVM optimization procedure has classication stability with We use Theorem 17 with = 1 and thus get
1 R Remp +

2 . 2m

22 2 + 1+ m

ln 1/ , 2m
m m

1 1 1 1 where Remp is the clipped error. It can be seen that Remp m i=1 (f, zi ) = m i=1 i , where the are the Lagrange multipliers that appear in the dual formulation of the soft-margin SVM. Note that the same remark as in the previous example holds here: there is no bias b in the denition of the SVM.

Example 3 (Stability of Regularized Least Squares Regression) Again we will consider the bounded case Y = [0, B]. The regularized least squares regression algorithm is dened by AS = arg min
gF

1 m

(g, zi ) + g
i=1

2 k

where (f, z) = (f (x) y)2 . We can state B(y) = B 2 so that we have z Z, 0 (AS , z)
516

is 2B-admissible by Lemma 23. Also B .

Stability and Generalization

The stability bound for this algorithm is thus so that we have the generalization error bound R Remp + 42 B 2 + m 82 B 2 + 2B ln 1/ . 2m 22 B 2 m

5.2.3 Regularization by the Relative Entropy In this section we consider algorithms that build a mixture or a weighted combination of base hypotheses. Lets consider a set H of functions h : X Y parameterized by some parameter : H = {h : } . This set is the base class from which the learning algorithm will form mixtures by averaging the predictions of base hypotheses. More precisely, we assume that is a measurable space where a reference measure is dened. The output of our algorithm is a mixture of element from , in other words, it is a probability distribution over . We will thus choose F as the set of all such probability distributions (dominated by the reference measure), dened by their density with respect to the reference measure. Once an element f F is chosen by the algorithm, the predictions are computed as follows y (x) =

h (x)f ()d ,

which means that the prediction produced by the algorithm is indeed a weighted combination of the predictions of the base hypotheses, weighted by the density f . In Bayesian terms, AS would be a posterior on computed from the observation of S and y (x) is the corresponding Bayes prediction. By some abuse of notation, we will denote by AS both the element f F that is used by the algorithm to weigh the base hypotheses (which can be considered as a function R) and the prediction function x X y (x). Now we need to dene a loss function on F Z. This can be done by extending a loss function r dened on H Z with associated cost function s (r(h, z) = s(h(x), y)). There are two ways of deriving a loss function on F. We can simply use s to compute the discrepancy between the predicted and true labels (27) (g, z) = s((x), y) , y or we can average the loss over , (g, z) =

r(h , z)g()d .

(28)

The rst loss is the one used when one is doing Bayesian averaging of hypotheses. The second loss corresponds to the expected loss of a randomized algorithm that would sample h H according to the posterior AS to perform the predictions. In the remainder, we will focus on the second type of loss since it is easier to analyze. Note however, that this loss will be used only to dene a regularization algorithm and that the loss that is used to measure its error may be dierent. Our goal is to choose the posterior f via the minimization of a regularized objective function. We choose some xed density f0 and dene the regularizer as N (g) = K(g, f0 ) =

g() ln

g() d , f0 ()

517

Bousquet and Elisseeff

K being the Kullback-Leibler divergence or the relative entropy. In Bayesian terms, f0 would be our prior. Now, the goal is to minimize the following objective function Rr (g) = 1 m
m

(g, z) + K(g, f0 ) ,
i=1

where is given by (28). We can interpret the minimization of this objective function as the computation of the Maximum A Posteriori (MAP) estimate. Lets analyze this algorithm. We will assume that we know a bound M on the loss r(h , z). First, notice that is linear in g and is thus convex and M -Lipschitz with respect to the L1 norm | (g, z) (g , z)| M

|g() g ()|d .

Thus is M -admissible with respect to F. We can now state the following result on the uniform stability of the algorithm dened above. Theorem 24 Let F dened as above and let r be any loss function dened on H Z, bounded by M . Let f0 be a xed member of F. When is dened by (28), the learning algorithm A dened by AS = arg min
gF

1 m

(g, zi ) + K(g, f0 ) ,
i=1

(29)

has uniform stability with respect to

with M2 . m

Proof Recall the following property of the relative entropy (see e.g. Cover and Thomas 1991), for any g, g , 2 1 |g() g ()|d K(g, g ) . 2 Moreover, the Bregman divergence associated to the relative entropy to f0 is dK(.,f0 ) (g, g ) = K(g, g ) . We saw that is M -admissible thus, by Lemma 21 we get
2

|f () f \i ()|d

M m

|f () f \i ()|d ,

hence

M , m and thus, using again the M -admissibility of , we get for all z Z, |f () f \i ()|d | (f, z) (f \i , z)| which concludes the proof. Now, lets consider the case of classication where Y = {1, 1}. If we use base hypotheses h that return values in {1, 1}, it is easy to see from the proof of the above theorem that algorithm A has M classication stability m . Indeed, we have |AS (x) AS \i (x)| =

M2 , m

h (x)(AS () AS \i ())d

|AS () AS \i ()|d

M , m

where the last inequality is derived in the proof of Theorem 24.


518

Stability and Generalization

Example 4 (Maximum Entropy Discrimination) Jaakola et al. (1999) introduce the Minimum Relative Entropy (MRE) algorithm which is a real-valued classier obtained by minimizing Rr (g) = 1 m
m

(g, z) + K(g, f0 ) ,
i=1

where the base class has two parameters H = {h, : , R} (with h, = h ) and the loss is dened by (g, z) =
,R

( yh (x))g()dd

.
+

If we have a bound B on the quantity yh (x), we see that this loss function is B-admissible and thus by Theorem 24 (and the remark about the classication stability) we deduce that the MRE algorithm has classication stability bounded by B m

6. Discussion
For regularization algorithms, we obtained bounds on the uniform stability of the order of = 1 O( m ). Plugging this result into our main theorem, we obtained bounds on the generalization error of the following type 1 R Remp + O , m so that we obtain non trivial results only if we can guarantee that >> 1 . This is likely to depend m on the noise in the data and no theoretical results exist that guarantee that does not decrease too fast when m is increased. However, it should be possible to rene our results which used sometimes quite crude bounds. It seems reasonable that a bound like R Remp + O 1 m ,

could be possible to obtain. This remains an open problem. In order to better understand the distinctive feature of our bounds, we can compare them to bounds from Structural Risk Minimization (SRM) for example on the SVM algorithm. The SVM algorithm can be presented using the two equivalent formulations min
f F

1 m
m i=1

m i=1

(1 yi f (xi ))+ + f

or min
f F

1 m

(1 yi f (xi ))+ with f

The equivalence of those two problems comes from the fact that for any , there exists a such that the solution of the two problems are the same. The SRM principle consists in solving the second problem for several values of and then choosing the value that minimizes a bound that depends on the VC-dimension of the set {f : f 2 }. However, this quantity is usually not easy to compute and only loose upper bounds can be found. Moreover, since minimization under a constraint on the norm is not easy to perform, one typically

519

Bousquet and Elisseeff

performs the rst minimization for a particular value of (chosen by cross-validation) and then uses SRM bounds with = f 2 . This requires the SRM bounds to hold uniformly for all values of . This approach has led to bound which were quite predictive of the behavior but that were quantitatively very loose. In contrast, our approach directly focuses on the actual minimization that is performed (the rst one) and does not require the computation of a complexity measure. Indeed, the complexity is implicitly evaluated by the actual parameter .

7. Conclusion
We explored the possibility of obtaining generalization bounds for specic algorithms from stability properties. We introduced several notions of stability and obtained corresponding generalization bounds with either the empirical error or the leave-one-out error. Our main result is an exponential bound for algorithms that have good uniform stability. We then proved that regularization algorithms have such a property and that their stability is controlled by the regularization parameter . This allowed us to obtained bounds on the generalization error of Support Vector Machines both in the classication and in the regression framework that do not depend on the implicit VC-dimension but rather depend explicitly on the tradeo parameter C. Further directions of research include the question of obtaining better bounds via uniform stability and the use of less restrictive notions of stability. Of great practical interest would be to design algorithms that maximize their own stability.

Acknowledgements
The authors wish to thank Ryan Rifkin and Ralf Herbrich for fruitful comments that helped improved the readability and Alex Smola, Gbor Lugosi, Stphane Boucheron and Sayan Mukherjee a e for stimulating discussions.

Appendix A. Proof of Lemma 9


Lets start with a generalized version of a lemma from Rogers and Wagner (1978). Lemma 25 For any learning algorithm A, any i, j {1, . . . , m} such that i = j, we have ES (R Remp )2 ES,z,z [ (AS , z) (AS , z )] 2ES,z [ (AS , z) (AS , zi )] M +ES [ (AS \i , zi ) (AS \j , zj )] + ES [ (AS , zi )] m 1 ES [ (AS , zi ) (AS , zj )] , m

and ES (R Rloo )2 ES,z,z [ (AS , z) (AS , z )] 2ES,z [ (AS , z) (AS \i , zi )] M +ES [ (AS \i , zi ) (AS \j , zj )] + ES R\i m 1 ES [ (AS \i , zi ) (AS \j , zj )] , m = ES Ez [ (AS , z)]
2

Proof We have ES R2 = ES [Ez [ (AS , z)] Ez [ (AS , z )]] = ES [Ez,z [ (AS , z) (AS , z )]] ,
520

Stability and Generalization

and also ES [RRemp ] = = = = and also ES [RRloo ] = = = = for any xed i by symmetry. Also we have
2 ES Remp

ES R 1 m 1 m
m

1 m

(AS , zi )
i=1

ES [R (AS , zi )]
i=1 m

ES,z [ (AS , z) (AS , zi )]


i=1

ES,z [ (AS , z) (AS , zi )] ,

ES R 1 m 1 m
m

1 m

(AS \i , zi )
i=1

ES [R (AS \i , zi )]
i=1 m

ES,z [ (AS , z) (AS \i , zi )]


i=1

ES,z [ (AS , z) (AS \i , zi )] ,

1 m2

ES
i=1

(AS , zi )2 +
m

1 m2

ES [ (AS , zi ) (AS , zj )]
i=j

= and
2 ES Rloo

M 1 ES m m

(AS , zi ) +
i=1

m1 ES [ (AS , zi ) (AS , zj )] m

M m1 ES [ (AS , zi )] + ES [ (AS , zi ) (AS , zj )] , m m


m

1 m2

ES
i=1

(AS \i , zi )2 +
m

1 m2

ES [ (AS \i , zi ) (AS \j , zj )]
i=j

= which concludes the proof.

M 1 ES m m

(AS \i , zi ) +
i=1

m1 ES [ (AS \i , zi ) (AS \j , zj )] m

M m1 ES R\i + ES [ (AS \i , zi ) (AS \j , zj )] . m m

Now let's prove Lemma 9. We will use several times the fact that the random variables $z_1,\ldots,z_m,z,z'$ are i.i.d., so that we can interchange them without modifying expectations (it is just a matter of renaming them). We introduce the notation $T = S^{\setminus i,j}$ and denote by $A_{T,z,z'}$ the result of training on the set $T \cup \{z,z'\}$. Let's first reformulate the first inequality of Lemma 25 as

$$\mathbb{E}_S\left[(R - R_{emp})^2\right] \le \frac{1}{m}\,\mathbb{E}_S\left[\ell(A_S,z_i)\left(M - \ell(A_S,z_j)\right)\right] + \mathbb{E}_{S,z,z'}\left[\ell(A_S,z)\ell(A_S,z') - \ell(A_S,z)\ell(A_S,z_i)\right] + \mathbb{E}_{S,z,z'}\left[\ell(A_S,z_i)\ell(A_S,z_j) - \ell(A_S,z)\ell(A_S,z_i)\right] =: I_1 + I_2 + I_3\,.$$

Using Schwarz's inequality and $\ell^2 \le M\ell$, we have

$$\mathbb{E}_S\left[\ell(A_S,z_i)(M - \ell(A_S,z_j))\right]^2 \le \mathbb{E}_S\left[\ell(A_S,z_i)^2\right]\mathbb{E}_S\left[(M - \ell(A_S,z_j))^2\right] \le M^2\,\mathbb{E}_S\left[\ell(A_S,z_i)\right]\left(M - \mathbb{E}_S\left[\ell(A_S,z_i)\right]\right) \le \frac{M^4}{4}\,,$$

so that we conclude $I_1 \le \frac{M^2}{2m}$. Now we rewrite $I_2$ as

$$I_2 = \mathbb{E}_{S,z,z'}\left[\ell(A_{T,z_i,z_j},z)\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z_i,z_j},z)\ell(A_{T,z_i,z_j},z_i)\right]$$
$$= \mathbb{E}_{S,z,z'}\left[\ell(A_{T,z_i,z_j},z)\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z_j,z'},z)\ell(A_{T,z_j,z'},z')\right] \quad \text{(renaming $z_i$ as $z'$ in the second term)}$$
$$= \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z) - \ell(A_{T,z,z_j},z)\right)\ell(A_{T,z_i,z_j},z')\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z,z_j},z) - \ell(A_{T,z_j,z'},z)\right)\ell(A_{T,z_i,z_j},z')\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z_j,z'},z')\right)\ell(A_{T,z_j,z'},z)\right].$$

Next we rewrite $I_3$ as

$$I_3 = \mathbb{E}_{S,z,z'}\left[\ell(A_{T,z_i,z_j},z_i)\ell(A_{T,z_i,z_j},z_j) - \ell(A_{T,z_i,z_j},z)\ell(A_{T,z_i,z_j},z_i)\right]$$
$$= \mathbb{E}_{S,z,z'}\left[\ell(A_{T,z,z'},z)\ell(A_{T,z,z'},z') - \ell(A_{T,z',z_i},z)\ell(A_{T,z',z_i},z')\right]$$

(renaming $z_j$ as $z$ and $z_i$ as $z'$ in the first term; exchanging $z_i$ and $z_j$, then renaming $z_j$ as $z'$, in the second term)

$$= \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z,z'},z') - \ell(A_{T,z,z_i},z')\right)\ell(A_{T,z,z'},z)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z,z'},z) - \ell(A_{T,z_i,z'},z)\right)\ell(A_{T,z,z_i},z')\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z,z_i},z') - \ell(A_{T,z',z_i},z')\right)\ell(A_{T,z_i,z'},z)\right]$$
$$= \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_j,z'},z') - \ell(A_{T,z_j,z_i},z')\right)\ell(A_{T,z_j,z'},z_j)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z,z_j},z) - \ell(A_{T,z_i,z_j},z)\right)\ell(A_{T,z,z_i},z_j)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z',z_j},z) - \ell(A_{T,z,z_j},z)\right)\ell(A_{T,z_j,z},z')\right],$$

where in the last line we replaced $z$ by $z_j$ in the first term, $z'$ by $z_j$ in the second term, and exchanged $z$ with $z'$ and $z_i$ with $z_j$ in the last term. Summing $I_2$ and $I_3$, the terms pair up (using the symmetry of $A$) and we obtain

$$I_2 + I_3 = \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z) - \ell(A_{T,z,z_j},z)\right)\left(\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z,z_i},z_j)\right)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z,z_j},z) - \ell(A_{T,z_j,z'},z)\right)\left(\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z_j,z},z')\right)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z_j,z'},z')\right)\left(\ell(A_{T,z_j,z'},z) - \ell(A_{T,z_j,z'},z_j)\right)\right].$$

In each summand the second factor is a difference of losses and is thus bounded by $M$ in absolute value, while the first factors all have, after renaming of the i.i.d. variables, the same expected absolute value. Hence

$$I_2 + I_3 \le 3M\,\mathbb{E}_{S,z}\left[\left|\ell(A_{T,z_i,z_j},z) - \ell(A_{T,z,z_j},z)\right|\right] = 3M\,\mathbb{E}_{S,z_i'}\left[\left|\ell(A_S,z_i) - \ell(A_{S^i},z_i)\right|\right],$$

where $S^i$ denotes $S$ with $z_i$ replaced by an independent copy $z_i'$. This proves the first part of the bound.

For the second part, we use the same technique and slightly vary the algebra. We rewrite $I_2$ as

$$I_2 = \mathbb{E}_{S,z,z'}\left[\ell(A_{T,z_i,z_j},z)\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z,z_j},z')\ell(A_{T,z,z_j},z)\right] \quad \text{(renaming $z_i$ as $z$ and $z$ as $z'$ in the second term)}$$
$$= \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z,z_j},z')\right)\ell(A_{T,z_i,z_j},z)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z) - \ell(A_{T,z,z_j},z)\right)\ell(A_{T,z,z_j},z')\right].$$

Next we rewrite $I_3$ as

$$I_3 = \mathbb{E}_{S,z,z'}\left[\ell(A_{T,z_i,z},z_i)\ell(A_{T,z_i,z},z) - \ell(A_{T,z_i,z_j},z)\ell(A_{T,z_i,z_j},z_i)\right] \quad \text{(renaming $z_j$ as $z$ in the first term)}$$
$$= \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z},z_i) - \ell(A_{T,z_i,z_j},z_i)\right)\ell(A_{T,z_i,z},z)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z},z) - \ell(A_{T,z_i,z_j},z)\right)\ell(A_{T,z_i,z_j},z_i)\right]$$
$$= \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z},z_i) - \ell(A_{T,z_i,z_j},z_i)\right)\ell(A_{T,z_i,z},z)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_j,z},z) - \ell(A_{T,z_i,z_j},z)\right)\ell(A_{T,z_i,z_j},z_j)\right] \quad \text{(exchanging $z_i$ and $z_j$ in the second term)}.$$

Summing $I_2$ and $I_3$, the second terms combine and we obtain

$$I_2 + I_3 = \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z') - \ell(A_{T,z,z_j},z')\right)\ell(A_{T,z_i,z_j},z)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z},z_i) - \ell(A_{T,z_i,z_j},z_i)\right)\ell(A_{T,z_i,z},z)\right] + \mathbb{E}_{S,z,z'}\left[\left(\ell(A_{T,z_i,z_j},z) - \ell(A_{T,z_j,z},z)\right)\left(\ell(A_{T,z,z_j},z') - \ell(A_{T,z_i,z_j},z_j)\right)\right]$$
$$\le M\,\mathbb{E}_{S,z_i',z}\left[\left|\ell(A_S,z) - \ell(A_{S^i},z)\right|\right] + M\,\mathbb{E}_{S,z_i'}\left[\left|\ell(A_S,z_j) - \ell(A_{S^i},z_j)\right|\right] + M\,\mathbb{E}_{S,z_i'}\left[\left|\ell(A_S,z_i) - \ell(A_{S^i},z_i)\right|\right],$$

bounding each second factor by $M$ and renaming the i.i.d. variables in each first factor. The above concludes the proof of the bound for the empirical error. We now turn to the leave-one-out error. The bound can be obtained in a similar way. Indeed, if we rewrite the derivation for the empirical error, we simply have to remove from the training set the point at which the loss is computed, that is, to replace all quantities of the form $\ell(A_{T,z,z'},z)$ by $\ell(A_{T,z'},z)$. It is easy to see that the above results are modified in a way that gives the correct bound for the leave-one-out error.
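The same toy setup as before gives a quick illustration of the first bound proved above, $\mathbb{E}_S[(R-R_{emp})^2] \le \frac{M^2}{2m} + 3M\,\mathbb{E}_{S,z_i'}[|\ell(A_S,z_i)-\ell(A_{S^i},z_i)|]$, where the stability term is estimated by actually replacing one training point. Again, this is our own sketch under our own assumptions, not part of the paper.

```python
import numpy as np

# Monte Carlo check of the pointwise-stability bound (toy setup, ours):
# A_S = mean of S, l(f, y) = (f - y)^2, M = 1.
rng = np.random.default_rng(1)
m, n_trials, M = 20, 200_000, 1.0

S = rng.uniform(size=(n_trials, m))
zi_new = rng.uniform(size=n_trials)           # z_i' replacing z_1
f = S.mean(axis=1)
f_rep = f + (zi_new - S[:, 0]) / m            # mean after the replacement

loss_S = (f - S[:, 0]) ** 2                   # l(A_S, z_1)
loss_Si = (f_rep - S[:, 0]) ** 2              # l(A_{S^1}, z_1)
beta_hat = np.mean(np.abs(loss_S - loss_Si))  # pointwise stability estimate

R = (f - 0.5) ** 2 + 1.0 / 12.0
R_emp = ((f[:, None] - S) ** 2).mean(axis=1)
lhs = np.mean((R - R_emp) ** 2)
bound = M ** 2 / (2 * m) + 3 * M * beta_hat
print(f"lhs = {lhs:.6f} <= M^2/(2m) + 3M*beta = {bound:.6f}")
```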

Appendix B. Proof of Theorem 18


First we rewrite Inequality (15) in Theorem 17 as

$$\mathbb{P}_S\left[R - R_{emp}^\gamma > \frac{2\beta}{\gamma} + \epsilon\left(\frac{4m\beta}{\gamma}+1\right)\sqrt{\frac{1}{2m}}\,\right] \le e^{-\epsilon^2}\,.$$

We introduce the quantity

$$u(\epsilon,\gamma) = \frac{2\beta}{\gamma} + \epsilon\left(\frac{4m\beta}{\gamma}+1\right)\sqrt{\frac{1}{2m}}$$

and rewrite the above bound as

$$\mathbb{P}_S\left[R - R_{emp}^\gamma > u(\epsilon,\gamma)\right] \le e^{-\epsilon^2}\,.$$

We define a sequence $(\gamma_k)_{k\ge 0}$ of real numbers by $\gamma_k = Be^{-k}$, and we define $\epsilon_k = t + \sqrt{2\ln k}$. Now we use the union bound to get a statement that holds for all values in the sequence $(\gamma_k)_{k\ge 1}$:

$$\mathbb{P}_S\left[\exists k \ge 1,\; R - R_{emp}^{\gamma_k} > u(\epsilon_k,\gamma_k)\right] \le \sum_{k\ge 1} \mathbb{P}_S\left[R - R_{emp}^{\gamma_k} > u(\epsilon_k,\gamma_k)\right] \le \sum_{k\ge 1} e^{-\epsilon_k^2} \le \sum_{k\ge 1} \frac{1}{k^2}\,e^{-t^2} \le 2e^{-t^2}\,,$$

where we used $e^{-(t+\sqrt{2\ln k})^2} \le e^{-t^2}e^{-2\ln k}$. For a given $\gamma \in (0,B]$, consider the unique value $k \ge 1$ such that $\gamma_k \le \gamma \le \gamma_{k-1}$. We thus have $\gamma \le e\gamma_k$. The following inequalities follow from the definition of $\gamma_k$:

$$\frac{1}{\gamma_k} \le \frac{e}{\gamma}\,,\qquad R_{emp}^{\gamma_k} \le R_{emp}^{\gamma}\,,\qquad \sqrt{2\ln k} = \sqrt{2\ln\ln\frac{B}{\gamma_k}} \le \sqrt{2\ln\ln\frac{eB}{\gamma}}\,,$$

so that we have

$$u\left(t + \sqrt{2\ln k},\,\gamma_k\right) \le \frac{2\beta e}{\gamma} + \left(t + \sqrt{2\ln\ln\frac{eB}{\gamma}}\right)\left(\frac{4m\beta e}{\gamma}+1\right)\sqrt{\frac{1}{2m}} =: v(\gamma,t)\,.$$

We thus get the following implication:

$$R - R_{emp}^\gamma > v(\gamma,t) \;\Rightarrow\; R - R_{emp}^{\gamma_k} > u\left(t + \sqrt{2\ln k},\,\gamma_k\right)\,.$$

This reasoning thus proves that

$$\mathbb{P}_S\left[\exists \gamma \in (0,B],\; R - R_{emp}^\gamma > v(\gamma,t)\right] \le \mathbb{P}_S\left[\exists k \ge 1,\; R - R_{emp}^{\gamma_k} > u\left(t + \sqrt{2\ln k},\,\gamma_k\right)\right]\,,$$

and thus

$$\mathbb{P}_S\left[\exists \gamma \in (0,B],\; R - R_{emp}^\gamma > v(\gamma,t)\right] \le 2e^{-t^2}\,,$$

which can be written as

$$\mathbb{P}_S\left[\exists \gamma \in (0,B],\; R - R_{emp}^\gamma > \frac{2\beta e}{\gamma} + \left(t + \sqrt{2\ln\ln\frac{eB}{\gamma}}\right)\left(\frac{4m\beta e}{\gamma}+1\right)\sqrt{\frac{1}{2m}}\,\right] \le 2e^{-t^2}\,,$$

and gives, setting $\delta = 2e^{-t^2}$, with probability at least $1 - \delta$,

$$\forall \gamma \in (0,B]\,,\quad R \le R_{emp}^\gamma + \frac{2\beta e}{\gamma} + \left(\sqrt{\ln(2/\delta)} + \sqrt{2\ln\ln\frac{eB}{\gamma}}\right)\left(\frac{4m\beta e}{\gamma}+1\right)\sqrt{\frac{1}{2m}}\,,$$

which gives the first inequality. The second inequality can be proven in the same way.
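The only delicate step is the weighted union bound over the geometric grid $\gamma_k = Be^{-k}$. The snippet below (our own check, not from the paper) verifies numerically that $\sum_{k\ge1} e^{-\epsilon_k^2} \le 2e^{-t^2}$ for $\epsilon_k = t + \sqrt{2\ln k}$, which is the inequality used above.

```python
import numpy as np

# Check sum_k exp(-(t + sqrt(2 ln k))^2) <= 2 exp(-t^2): since
# exp(-(t + a)^2) <= exp(-t^2) exp(-a^2) for t, a >= 0, the terms are
# dominated by exp(-t^2)/k^2 and sum(1/k^2) = pi^2/6 < 2.
for t in (0.5, 1.0, 2.0):
    k = np.arange(1, 100_001, dtype=float)
    eps_k = t + np.sqrt(2 * np.log(k))
    series = np.exp(-eps_k ** 2).sum()        # tail beyond 1e5 is negligible
    print(f"t={t}: sum = {series:.6f} <= {2 * np.exp(-t ** 2):.6f}")
```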

Appendix C. Convexity
For more details see Gordon (1999) or Rockafellar (1970). A convex function $F$ is any function from a vector space $\mathcal{F}$ to $\mathbb{R} \cup \{-\infty,+\infty\}$ which satisfies

$$\lambda F(g) + (1-\lambda)F(g') \ge F\left(\lambda g + (1-\lambda)g'\right),$$

for all $g, g' \in \mathcal{F}$ and $\lambda \in [0,1]$. A proper convex function is one that is always greater than $-\infty$ and not uniformly $+\infty$. The domain of $F$ is the set of points where $F$ is finite. A convex function is closed if its epigraph $\{(g,y) : y \ge F(g)\}$ is closed. The subgradient of a convex function at a point $g$, written $\partial F(g)$, is the set of vectors $a$ such that

$$F(g') \ge F(g) + \langle g' - g, a\rangle\,,$$

for all $g'$. Convex functions are continuous on the interior of their domain and differentiable on the interior of their domain except on a set of measure zero. For a convex function $F$ we define the dual of $F$, noted $F^*$, by

$$F^*(a) = \sup_g\; \langle a, g\rangle - F(g)\,.$$

Denoting by $\nabla F(g')$ a subgradient of $F$ at $g'$ (i.e. a member of $\partial F(g')$), we can define the Bregman divergence associated to $F$ of $g$ to $g'$ by

$$d_F(g,g') = F(g) - F(g') - \langle g - g', \nabla F(g')\rangle\,.$$

When $F$ is everywhere differentiable, this is well defined (since the subgradient is unique) and nonnegative (by the definition of the subgradient). Otherwise, we can define the generalized divergence as

$$d_F(g,a) = F(g) + F^*(a) - \langle g, a\rangle\,,$$

where $a \in \partial F(g')$. Notice that this divergence is also nonnegative. Moreover, the fact that $f$ is a minimum of $F$ in $\mathcal{F}$ is equivalent to

$$0 \in \partial F(f)\,,$$

which, with the relationship

$$a \in \partial F(g) \;\Leftrightarrow\; F(g) + F^*(a) = \langle g, a\rangle\,,$$

gives $F(f) + F^*(0) = 0$ when $f$ is a minimum of $F$ in $\mathcal{F}$. When $F$ is everywhere differentiable, it is easy to get

$$\forall g \in \mathcal{F}\,,\quad d_F(g,f) = F(g) - F(f)\,, \tag{30}$$

otherwise, using generalized divergences, we have

$$\forall g \in \mathcal{F}\,,\quad d_F(g,0) = F(g) - F(f)\,. \tag{31}$$
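These relations are easy to check numerically. The following sketch is our own example, not from the paper: it takes the quadratic $F(g) = \frac12 g^\top A g - b^\top g$ with $A$ positive definite, whose conjugate has the closed form $F^*(a) = \frac12 (a+b)^\top A^{-1}(a+b)$ and whose minimizer is $f = A^{-1}b$.

```python
import numpy as np

# Numeric check of the conjugate relations and identity (30) for a
# positive definite quadratic F (our own example).
rng = np.random.default_rng(2)
A = np.array([[2.0, 0.5], [0.5, 1.0]])        # positive definite
b = np.array([1.0, -1.0])
A_inv = np.linalg.inv(A)

F = lambda g: 0.5 * g @ A @ g - b @ g
F_star = lambda a: 0.5 * (a + b) @ A_inv @ (a + b)   # closed-form conjugate
grad_F = lambda g: A @ g - b

f = A_inv @ b                                  # minimizer of F
g = rng.normal(size=2)

# a in dF(g)  <=>  F(g) + F*(a) = <g, a>, here with a = grad F(g)
a = grad_F(g)
print(np.isclose(F(g) + F_star(a), g @ a))     # True

# F(f) + F*(0) = 0 at the minimum
print(np.isclose(F(f) + F_star(np.zeros(2)), 0.0))   # True

# Identity (30): d_F(g, f) = F(g) - F(f), since grad_F(f) = 0
d_F = F(g) - F(f) - (g - f) @ grad_F(f)
print(np.isclose(d_F, F(g) - F(f)))            # True
```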

References
N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44(4):615–631, 1997.
P. Bartlett. For valid generalization, the size of the weights is more important than the size of the network. In Advances in Neural Information Processing Systems, 1996.


J.F. Bonnans and A. Shapiro. Optimization problems with perturbation, a guided tour. Technical Report 2872, INRIA, April 1996.
O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In Neural Information Processing Systems 14, 2001.
L. Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996a.
L. Breiman. Heuristics of instability and stabilization in model selection. Annals of Statistics, 24(6):2350–2383, 1996b.
T.M. Cover and J.A. Thomas. Elements of Information Theory. John Wiley, 1991.
L. Devroye. Exponential inequalities in nonparametric estimation. In Nonparametric Functional Estimation and Related Topics, pages 31–44. Kluwer Academic Publishers, 1991.
L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, 1996.
L. Devroye and T. Wagner. Distribution-free inequalities for the deleted and holdout error estimates. IEEE Transactions on Information Theory, 25(2):202–207, 1979a.
L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory, 25(5):601–604, 1979b.
T. Evgeniou, M. Pontil, and T. Poggio. A unified framework for regularization networks and support vector machines. A.I. Memo 1654, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, December 1999.
G. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999.
T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In Neural Information Processing Systems 12, 1999.
M. Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427–1453, 1999.
G. Lugosi and M. Pawlak. On the posterior-probability estimate of the error of nonparametric classification rules. IEEE Transactions on Information Theory, 40(2):475–481, 1994.
C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148–188. Cambridge University Press, Cambridge, 1989.
T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247(2):978–982, 1990.
R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
W. Rogers and T. Wagner. A finite sample distribution-free performance bound for local discrimination rules. Annals of Statistics, 6(3):506–514, 1978.
J.M. Steele. An Efron-Stein inequality for nonsymmetric statistics. Annals of Statistics, 14:753–758, 1986.
M. Talagrand. A new look at independence. Annals of Probability, 24:1–34, 1996.
V.N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
G. Wahba. An introduction to model building with reproducing kernel Hilbert spaces. Technical Report TR 1020, Statistics Department, University of Wisconsin, Madison, 2000.
