Understanding Machine Learning
From Theory to Algorithms
Shai Shalev-Shwartz
The Hebrew University, Jerusalem
Shai Ben-David
University of Waterloo, Canada
Contents

Preface

1  Introduction
   1.1  What Is Learning?
   1.2  When Do We Need Machine Learning?
   1.3  Types of Learning
   1.4  Relations to Other Fields
   1.5  How to Read This Book
   1.6  Notation

Part 1  Foundations

2  A Gentle Start
3  A Formal Learning Model
   3.1  PAC Learning
   3.2  A More General Learning Model
   3.3  Summary
   3.4  Bibliographic Remarks
   3.5  Exercises
4  Learning via Uniform Convergence
5  The Bias-Complexity Tradeoff
6  The VC-Dimension
7  Nonuniform Learnability
   7.1  Nonuniform Learnability
   7.2  Structural Risk Minimization
   7.3  Minimum Description Length and Occam's Razor
   7.4  Other Notions of Learnability - Consistency
   7.5  Discussing the Different Notions of Learnability
   7.6  Summary
   7.7  Bibliographic Remarks
   7.8  Exercises
8  The Runtime of Learning

Part 2  From Theory to Algorithms

9  Linear Predictors
   9.1  Halfspaces
   9.2  Linear Regression
   9.3  Logistic Regression
   9.4  Summary
   9.5  Bibliographic Remarks
   9.6  Exercises
10  Boosting
   10.1  Weak Learnability
   10.2  AdaBoost
   10.3  Linear Combinations of Base Hypotheses
   10.4  AdaBoost for Face Recognition
   10.5  Summary
   10.6  Bibliographic Remarks
   10.7  Exercises
11  Model Selection and Validation
12  Convex Learning Problems
13  Regularization and Stability
14  Stochastic Gradient Descent
   14.1  Gradient Descent
   14.2  Subgradients
   14.3  Stochastic Gradient Descent (SGD)
   14.4  Variants
   14.5  Learning with SGD
   14.6  Summary
   14.7  Bibliographic Remarks
   14.8  Exercises
15  Support Vector Machines
   15.4  Duality*
   15.5  Implementing Soft-SVM Using SGD
   15.6  Summary
   15.7  Bibliographic Remarks
   15.8  Exercises
16  Kernel Methods
   16.1  Embeddings into Feature Spaces
   16.2  The Kernel Trick
   16.3  Implementing Soft-SVM with Kernels
   16.4  Summary
   16.5  Bibliographic Remarks
   16.6  Exercises
17  Multiclass, Ranking, and Complex Prediction Problems
18  Decision Trees
   18.1  Sample Complexity
   18.2  Decision Tree Algorithms
   18.3  Random Forests
   18.4  Summary
   18.5  Bibliographic Remarks
   18.6  Exercises
19  Nearest Neighbor
   19.1  k Nearest Neighbors
   19.2  Analysis
   19.3  Efficient Implementation*
   19.4  Summary
   19.5  Bibliographic Remarks
   19.6  Exercises
20  Neural Networks
   20.1  Feedforward Neural Networks
   20.2  Learning Neural Networks
   20.3  The Expressive Power of Neural Networks
   20.4  The Sample Complexity of Neural Networks
   20.5  The Runtime of Learning Neural Networks
   20.6  SGD and Backpropagation
   20.7  Summary
   20.8  Bibliographic Remarks
   20.9  Exercises

22  Clustering
23  Dimensionality Reduction
   23.1  Principal Component Analysis (PCA)
   23.2  Random Projections
   23.3  Compressed Sensing
   23.4  PCA or Compressed Sensing?
   23.5  Summary
   23.6  Bibliographic Remarks
   23.7  Exercises
24  Generative Models
   24.1  Maximum Likelihood Estimator
   24.2  Naive Bayes
   24.3  Linear Discriminant Analysis
   24.4  Latent Variables and the EM Algorithm
   24.5  Bayesian Reasoning
   24.6  Summary
   24.7  Bibliographic Remarks
   24.8  Exercises
25  Feature Selection and Generation
   25.4  Summary
   25.5  Bibliographic Remarks
   25.6  Exercises

Part 4  Advanced Theory

26  Rademacher Complexities
   26.1  The Rademacher Complexity
   26.2  Rademacher Complexity of Linear Classes
   26.3  Generalization Bounds for SVM
   26.4  Generalization Bounds for Predictors with Low ℓ1 Norm
   26.5  Bibliographic Remarks
27  Covering Numbers
   27.1  Covering
   27.2  From Covering to Rademacher Complexity via Chaining
   27.3  Bibliographic Remarks
28  Proof of the Fundamental Theorem of Learning Theory
29  Multiclass Learnability
   29.1  The Natarajan Dimension
   29.2  The Multiclass Fundamental Theorem
   29.3  Calculating the Natarajan Dimension
   29.4  On Good and Bad ERMs
   29.5  Bibliographic Remarks
   29.6  Exercises
30  Compression Bounds
   30.1  Compression Bounds
   30.2  Examples
   30.3  Bibliographic Remarks
31  PAC-Bayes

Appendix B  Measure Concentration
   B.1  Markov's Inequality
   B.2  Chebyshev's Inequality
   B.3  Chernoff's Bounds
   B.4  Hoeffding's Inequality

Appendix C  Linear Algebra
   C.1  Basic Definitions
   C.2  Eigenvalues and Eigenvectors
   C.3  Positive definite matrices
   C.4  Singular Value Decomposition (SVD)

References
Index
Preface
The term machine learning refers to the automated detection of meaningful patterns
in data. In the past couple of decades it has become a common tool in almost any
task that requires information extraction from large data sets. We are surrounded
by machine learning based technology: Search engines learn how to bring us the
best results (while placing profitable ads), antispam software learns to filter our e-mail messages, and credit card transactions are secured by software that learns
how to detect fraud. Digital cameras learn to detect faces and intelligent personal
assistance applications on smart-phones learn to recognize voice commands. Cars
are equipped with accident prevention systems that are built using machine learning
algorithms. Machine learning is also widely used in scientific applications such as
bioinformatics, medicine, and astronomy.
One common feature of all of these applications is that, in contrast to more traditional uses of computers, in these cases, due to the complexity of the patterns that
need to be detected, a human programmer cannot provide an explicit, fine-detailed
specification of how such tasks should be executed. Taking example from intelligent
beings, many of our skills are acquired or refined through learning from our experience (rather than following explicit instructions given to us). Machine learning tools
are concerned with endowing programs with the ability to learn and adapt.
The first goal of this book is to provide a rigorous, yet easy to follow, introduction
to the main concepts underlying machine learning: What is learning? How can a
machine learn? How do we quantify the resources needed to learn a given concept?
Is learning always possible? Can we know whether the learning process succeeded or
failed?
The second goal of this book is to present several key machine learning algorithms. We chose to present algorithms that on one hand are successfully used in
practice and on the other hand give a wide spectrum of different learning techniques. Additionally, we pay specific attention to algorithms appropriate for large
scale learning (a.k.a. Big Data), since in recent years, our world has become
increasingly digitized and the amount of data available for learning is dramatically increasing. As a result, in many applications data is plentiful and computation
time is the main bottleneck. We therefore explicitly quantify both the amount of
data and the amount of computation time needed to learn a given concept.
The book is divided into four parts. The first part aims at giving an initial rigorous answer to the fundamental questions of learning. We describe a generalization
of Valiant's Probably Approximately Correct (PAC) learning model, which is a first
solid answer to the question "What is learning?" We describe the Empirical Risk
Minimization (ERM), Structural Risk Minimization (SRM), and Minimum Description Length (MDL) learning rules, which show how a machine can learn. We
quantify the amount of data needed for learning using the ERM, SRM, and MDL
rules and show how learning might fail by deriving a no-free-lunch theorem. We
also discuss how much computation time is required for learning. In the second part
of the book we describe various learning algorithms. For some of the algorithms,
we first present a more general learning principle, and then show how the algorithm
follows the principle. While the first two parts of the book focus on the PAC model,
the third part extends the scope by presenting a wider variety of learning models.
Finally, the last part of the book is devoted to advanced theory.
We made an attempt to keep the book as self-contained as possible. However,
the reader is assumed to be comfortable with basic notions of probability, linear
algebra, analysis, and algorithms. The first three parts of the book are intended
for first year graduate students in computer science, engineering, mathematics, or
statistics. It can also be accessible to undergraduate students with the adequate
background. The more advanced chapters can be used by researchers intending to
gather a deeper theoretical understanding.
ACKNOWLEDGMENTS
The book is based on Introduction to Machine Learning courses taught by Shai
Shalev-Shwartz at the Hebrew University and by Shai Ben-David at the University
of Waterloo. The first draft of the book grew out of the lecture notes for the course
that was taught at the Hebrew University by Shai Shalev-Shwartz during 2010–2013.
We greatly appreciate the help of Ohad Shamir, who served as a TA for the course
in 2010, and of Alon Gonen, who served as a TA for the course in 2011–2013. Ohad
and Alon prepared a few lecture notes and many of the exercises. Alon, to whom
we are indebted for his help throughout the entire making of the book, has also
prepared a solution manual.
We are deeply grateful for the most valuable work of Dana Rubinstein. Dana
has scientifically proofread and edited the manuscript, transforming it from lecture-based chapters into fluent and coherent text.
Special thanks to Amit Daniely, who helped us with a careful read of the
advanced part of the book and wrote the advanced chapter on multiclass learnability. We are also grateful for the members of a book reading club in Jerusalem who
have carefully read and constructively criticized every line of the manuscript. The
members of the reading club are Maya Alroy, Yossi Arjevani, Aharon Birnbaum,
Alon Cohen, Alon Gonen, Roi Livni, Ofer Meshi, Dan Rosenbaum, Dana Rubinstein, Shahar Somin, Alon Vinnikov, and Yoav Wald. We would also like to thank
Gal Elidan, Amir Globerson, Nika Haghtalab, Shie Mannor, Amnon Shashua, Nati
Srebro, and Ruth Urner for helpful discussions.
1
Introduction
The subject of this book is automated learning, or, as we will more often call it,
Machine Learning (ML). That is, we wish to program computers so that they can
learn from input available to them. Roughly speaking, learning is the process of
converting experience into expertise or knowledge. The input to a learning algorithm is training data, representing experience, and the output is some expertise,
which usually takes the form of another computer program that can perform some
task. Seeking a formal-mathematical understanding of this concept, we'll have to
be more explicit about what we mean by each of the involved terms: What is the
training data our programs will access? How can the process of learning be automated? How can we evaluate the success of such a process (namely, the quality of
the output of a learning program)?
A naive spam-filtering program could simply memorize all spam e-mails that were previously seen; then, when a
new e-mail arrives, the machine will search for it in the set of previous spam e-mails.
If it matches one of them, it will be trashed. Otherwise, it will be moved to the user's
inbox folder.
While the preceding learning by memorization approach is sometimes useful,
it lacks an important aspect of learning systems: the ability to label unseen e-mail
messages. A successful learner should be able to progress from individual examples
to broader generalization. This is also referred to as inductive reasoning or inductive
inference. In the bait shyness example presented previously, after the rats encounter
an example of a certain type of food, they apply their attitude toward it on new,
unseen examples of food of similar smell and taste. To achieve generalization in the
spam filtering task, the learner can scan the previously seen e-mails, and extract a set
of words whose appearance in an e-mail message is indicative of spam. Then, when
a new e-mail arrives, the machine can check whether one of the suspicious words
appears in it, and predict its label accordingly. Such a system would potentially be
able correctly to predict the label of unseen e-mails.
However, inductive reasoning might lead us to false conclusions. To illustrate
this, let us consider again an example from animal learning.
Pigeon Superstition: In an experiment performed by the psychologist
B. F. Skinner, he placed a bunch of hungry pigeons in a cage. An automatic mechanism had been attached to the cage, delivering food to the pigeons at regular
intervals with no reference whatsoever to the birds' behavior. The hungry pigeons
went around the cage, and when food was first delivered, it found each pigeon
engaged in some activity (pecking, turning the head, etc.). The arrival of food reinforced each bird's specific action, and consequently, each bird tended to spend some
more time doing that very same action. That, in turn, increased the chance that the
next random food delivery would find each bird engaged in that activity again. What
results is a chain of events that reinforces the pigeons' association of the delivery of
the food with whatever chance actions they had been performing when it was first
delivered. They subsequently continue to perform these same actions diligently.1
What distinguishes learning mechanisms that result in superstition from useful
learning? This question is crucial to the development of automated learners. While
human learners can rely on common sense to filter out random meaningless learning
conclusions, once we export the task of learning to a machine, we must provide
well defined crisp principles that will protect the program from reaching senseless
or useless conclusions. The development of such principles is a central goal of the
theory of machine learning.
What, then, made the rats' learning more successful than that of the pigeons?
As a first step toward answering this question, let us have a closer look at the bait
shyness phenomenon in rats.
Bait Shyness revisited: rats fail to acquire conditioning between food and electric
shock or between sound and nausea: The bait shyness mechanism in rats turns out to
be more complex than what one may expect. In experiments carried out by Garcia
(Garcia & Koelling 1996), it was demonstrated that if the unpleasant stimulus that
follows food consumption is replaced by, say, electrical shock (rather than nausea),
then no conditioning occurs. Even after repeated trials in which the consumption
1  See: http://psychclassics.yorku.ca/Skinner/Pigeon
Analysis of very large and complex data sets: With more and more available digitally recorded data, it becomes obvious that there are treasures of meaningful information buried in data archives that are way too large and too
complex for humans to make sense of. Learning to detect meaningful patterns in large and complex data sets is a promising domain in which the
combination of programs that learn with the almost unlimited memory
capacity and ever increasing processing speed of computers opens up new
horizons.
Adaptivity. One limiting feature of programmed tools is their rigidity: once the
program has been written down and installed, it stays unchanged. However,
many tasks change over time or from one user to another. Machine learning
tools (programs whose behavior adapts to their input data) offer a solution to
such issues; they are, by nature, adaptive to changes in the environment they
interact with. Typical successful applications of machine learning to such problems include programs that decode handwritten text, where a fixed program can
adapt to variations between the handwriting of different users; spam detection
programs, adapting automatically to changes in the nature of spam e-mails; and
speech recognition programs.
In unsupervised learning there is no distinction between training and test data; the learner processes input data with the goal of coming up with some summary, or compressed version, of that data. Clustering a data set into subsets of similar objects is a typical example of such a
task.
There is also an intermediate learning setting in which, while the training examples contain more information than the test examples, the learner is
required to predict even more information for the test examples. For example, one may try to learn a value function that describes for each setting of a
chess board the degree by which White's position is better than Black's.
Yet, the only information available to the learner at training time is positions
that occurred throughout actual chess games, labeled by who eventually won
that game. Such learning frameworks are mainly investigated under the title of
reinforcement learning.
Active versus Passive Learners Learning paradigms can vary by the role played
by the learner. We distinguish between active and passive learners. An
active learner interacts with the environment at training time, say, by posing
queries or performing experiments, while a passive learner only observes the
information provided by the environment (or the teacher) without influencing or directing it. Note that the learner of a spam filter is usually passive,
waiting for users to mark the e-mails coming to them. In an active setting, one could imagine asking users to label specific e-mails chosen by the
learner, or even composed by the learner, to enhance its understanding of what
spam is.
Helpfulness of the Teacher When one thinks about human learning, of a baby at
home or a student at school, the process often involves a helpful teacher, who
is trying to feed the learner with the information most useful for achieving
the learning goal. In contrast, when a scientist learns about nature, the environment, playing the role of the teacher, can be best thought of as passive
apples drop, stars shine, and the rain falls without regard to the needs of the
learner. We model such learning scenarios by postulating that the training data
(or the learners experience) is generated by some random process. This is the
basic building block in the branch of statistical learning. Finally, learning also
occurs when the learners input is generated by an adversarial teacher. This
may be the case in the spam filtering example (if the spammer makes an effort
to mislead the spam filtering designer) or in learning to detect fraud. One also
uses an adversarial teacher model as a worst-case scenario, when no milder
setup can be safely assumed. If you can learn against an adversarial teacher,
you are guaranteed to succeed when interacting with any other teacher.
Online versus Batch Learning Protocol The last parameter we mention is the distinction between situations in which the learner has to respond online, throughout the learning process, and settings in which the learner has to engage the
acquired expertise only after having a chance to process large amounts of data.
For example, a stockbroker has to make daily decisions, based on the experience collected so far. He may become an expert over time, but might have
made costly mistakes in the process. In contrast, in many data mining settings,
the learner (the data miner) has large amounts of training data to play with
before having to output conclusions.
In this book we shall discuss only a subset of the possible learning paradigms.
Our main focus is on supervised statistical batch learning with a passive learner
(for example, trying to learn how to generate patients' prognoses, based on large
archives of records of patients that were independently collected and are already
labeled by the fate of the recorded patients). We shall also briefly discuss online
learning and batch unsupervised learning (in particular, clustering).
There are further differences between these two disciplines, of which we shall
mention only one more here. While in statistics it is common to work under the
assumption of certain presubscribed data models (such as assuming the normality of data-generating distributions, or the linearity of functional dependencies), in
machine learning the emphasis is on working under a distribution-free setting,
where the learner assumes as little as possible about the nature of the data distribution and allows the learning algorithm to figure out which models best approximate
the data-generating process. A precise discussion of this issue requires some technical preliminaries, and we will come back to it later in the book, and in particular in
Chapter 5.
1. Chapters 2–4.
2. Chapter 9 (without the VC calculation).
3. Chapters 5–6 (without proofs).
4. Chapter 10.
5. Chapters 7, 11 (without proofs).
6. Chapters 12, 13 (with some of the easier proofs).
7. Chapter 14 (with some of the easier proofs).
8. Chapter 15.
9. Chapter 16.
10. Chapter 18.
11. Chapter 22.
12. Chapter 23 (without proofs for compressed sensing).
13. Chapter 24.
14. Chapter 25.
1.6 NOTATION
Most of the notation we use throughout the book is either standard or defined on
the spot. In this section we describe our main conventions and provide a table summarizing our notation (Table 1.1). The reader is encouraged to skip this section and
return to it if during the reading of the book some notation is unclear.
We denote scalars and abstract objects with lowercase letters (e.g., $x$ and $\lambda$).
Often, we would like to emphasize that some object is a vector and then we use
boldface letters (e.g., $\mathbf{x}$ and $\boldsymbol{\lambda}$). The $i$th element of a vector $\mathbf{x}$ is denoted by $x_i$. We use
uppercase letters to denote matrices, sets, and sequences. The meaning should be
clear from the context. As we will see momentarily, the input of a learning algorithm
is a sequence of training examples. We denote by $z$ an abstract example and by
$S = z_1, \ldots, z_m$ a sequence of $m$ examples. Historically, $S$ is often referred to as a
training set; however, we will always assume that $S$ is a sequence rather than a set.
A sequence of $m$ vectors is denoted by $\mathbf{x}_1, \ldots, \mathbf{x}_m$. The $i$th element of $\mathbf{x}_t$ is denoted
by $x_{t,i}$.
Throughout the book, we make use of basic notions from probability. We denote
by $\mathcal{D}$ a distribution over some set,2 for example, $Z$. We use the notation $z \sim \mathcal{D}$ to
denote that $z$ is sampled according to $\mathcal{D}$. Given a random variable $f : Z \to \mathbb{R}$, its
expected value is denoted by $\mathbb{E}_{z\sim\mathcal{D}}[f(z)]$. We sometimes use the shorthand $\mathbb{E}[f]$
when the dependence on $z$ is clear from the context. For $f : Z \to \{\text{true}, \text{false}\}$ we
also use $\mathbb{P}_{z\sim\mathcal{D}}[f(z)]$ to denote $\mathcal{D}(\{z : f(z) = \text{true}\})$. In the next chapter we will also
introduce the notation $\mathcal{D}^m$ to denote the probability over $Z^m$ induced by sampling each of the $m$ elements independently according to $\mathcal{D}$.
2  To be mathematically precise, $\mathcal{D}$ should be defined over some $\sigma$-algebra of subsets of $Z$. The user who
is not familiar with measure theory can skip the few footnotes and remarks regarding more formal
measurability definitions and assumptions.
Table 1.1. Summary of notation

   symbol                                        meaning
   $\mathbb{R}$                                  the set of real numbers
   $\mathbb{R}^d$                                the set of $d$-dimensional vectors over $\mathbb{R}$
   $\mathbb{R}_+$                                the set of non-negative real numbers
   $\mathbb{N}$                                  the set of natural numbers
   $O, o, \Theta, \omega, \Omega, \tilde{O}$     asymptotic notation (see the following text)
   $\mathbb{1}_{[\text{Boolean expression}]}$    indicator function (equals 1 if the expression is true and 0 otherwise)
   $[a]_+$                                       $= \max\{0, a\}$
   $[n]$                                         the set $\{1, \ldots, n\}$ (for $n \in \mathbb{N}$)
   $\mathbf{x}, \mathbf{v}, \mathbf{w}$          (column) vectors
   $x_i, v_i, w_i$                               the $i$th element of a vector
   $\langle \mathbf{x}, \mathbf{v} \rangle$      inner product
   $\|\mathbf{x}\|_2$ or $\|\mathbf{x}\|$        the $\ell_2$ (Euclidean) norm of $\mathbf{x}$
   $\|\mathbf{x}\|_1$                            the $\ell_1$ norm of $\mathbf{x}$
   $\|\mathbf{x}\|_\infty$                       the $\ell_\infty$ norm of $\mathbf{x}$
   $\|\mathbf{x}\|_0$                            the number of nonzero elements of $\mathbf{x}$
   $A \in \mathbb{R}^{d,k}$                      a $d \times k$ matrix over $\mathbb{R}$
   $A^\top$                                      the transpose of $A$
   $A_{i,j}$                                     the $(i,j)$ element of $A$
   $\mathbf{x}\,\mathbf{x}^\top$                 the $d \times d$ matrix $A$ such that $A_{i,j} = x_i x_j$ (for $\mathbf{x} \in \mathbb{R}^d$)
   $\mathbf{x}_1, \ldots, \mathbf{x}_m$          a sequence of $m$ vectors
   $x_{i,j}$                                     the $j$th element of the $i$th vector in the sequence
   $\mathbf{w}^{(1)}, \ldots, \mathbf{w}^{(T)}$  the values of a vector $\mathbf{w}$ during an iterative algorithm
   $w_i^{(t)}$                                   the $i$th element of the vector $\mathbf{w}^{(t)}$
   $\mathcal{X}$                                 instances domain (a set)
   $\mathcal{Y}$                                 labels domain (a set)
   $Z$                                           examples domain (a set)
   $\mathcal{H}$                                 hypothesis class (a set)
   $\ell : \mathcal{H} \times Z \to \mathbb{R}_+$   loss function
   $\mathcal{D}$                                 a distribution over some set (usually over $Z$ or over $\mathcal{X}$)
   $\mathcal{D}(A)$                              the probability of a set $A \subseteq Z$ according to $\mathcal{D}$
   $z \sim \mathcal{D}$                          sampling $z$ according to $\mathcal{D}$
   $S = z_1, \ldots, z_m$                        a sequence of $m$ examples
   $S \sim \mathcal{D}^m$                        sampling $S = z_1, \ldots, z_m$ i.i.d. according to $\mathcal{D}$
   $\mathbb{P}, \mathbb{E}$                      probability and expectation of an event
   $\mathbb{P}_{z\sim\mathcal{D}}[f(z)]$         $= \mathcal{D}(\{z : f(z) = \text{true}\})$ for $f : Z \to \{\text{true}, \text{false}\}$
   $\mathbb{E}_{z\sim\mathcal{D}}[f(z)]$         expectation of the random variable $f : Z \to \mathbb{R}$
   $N(\boldsymbol{\mu}, C)$                      Gaussian distribution with expectation $\boldsymbol{\mu}$ and covariance $C$
   $f'(x)$                                       the derivative of a function $f : \mathbb{R} \to \mathbb{R}$ at $x$
   $f''(x)$                                      the second derivative of a function $f : \mathbb{R} \to \mathbb{R}$ at $x$
   $\frac{\partial f(\mathbf{w})}{\partial w_i}$    the partial derivative of $f : \mathbb{R}^d \to \mathbb{R}$ at $\mathbf{w}$ with respect to $w_i$
   $\nabla f(\mathbf{w})$                        the gradient of $f : \mathbb{R}^d \to \mathbb{R}$ at $\mathbf{w}$
   $\partial f(\mathbf{w})$                      the differential set (set of subgradients) of $f : \mathbb{R}^d \to \mathbb{R}$ at $\mathbf{w}$
   $\min_{x \in C} f(x)$                         $= \min\{f(x) : x \in C\}$ (minimal value of $f$ over $C$)
   $\max_{x \in C} f(x)$                         $= \max\{f(x) : x \in C\}$ (maximal value of $f$ over $C$)
   $\operatorname*{argmin}_{x \in C} f(x)$       the set $\{x \in C : f(x) = \min_{x' \in C} f(x')\}$
   $\operatorname*{argmax}_{x \in C} f(x)$       the set $\{x \in C : f(x) = \max_{x' \in C} f(x')\}$
   $\log$                                        the natural logarithm

We use the standard asymptotic notation; in particular, $f = \tilde{O}(g)$ means that there exists $k \in \mathbb{N}$ such that $f(x) = O(g(x)\log^k(g(x)))$.
The inner product between vectors $\mathbf{x}$ and $\mathbf{w}$ is denoted by $\langle \mathbf{x}, \mathbf{w} \rangle$. Whenever we
do not specify the vector space we assume that it is the $d$-dimensional Euclidean
space and then $\langle \mathbf{x}, \mathbf{w} \rangle = \sum_{i=1}^d x_i w_i$. The Euclidean (or $\ell_2$) norm of a vector $\mathbf{w}$ is
$\|\mathbf{w}\|_2 = \sqrt{\langle \mathbf{w}, \mathbf{w} \rangle}$. We omit the subscript from the $\ell_2$ norm when it is clear from the
context. We also use other $\ell_p$ norms, $\|\mathbf{w}\|_p = \left(\sum_i |w_i|^p\right)^{1/p}$, and in particular
$\|\mathbf{w}\|_1 = \sum_i |w_i|$ and $\|\mathbf{w}\|_\infty = \max_i |w_i|$.
We use the notation $\min_{x \in C} f(x)$ to denote the minimum value of the set
$\{f(x) : x \in C\}$. To be mathematically more precise, we should use $\inf_{x \in C} f(x)$ whenever the minimum is not achievable. However, in the context of this book the
distinction between infimum and minimum is often of little interest. Hence, to simplify the presentation, we sometimes use the min notation even when inf is more
adequate. An analogous remark applies to max versus sup.
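As a small illustration of these conventions (an aside, assuming NumPy is available), the following snippet evaluates the inner product and the various norms for a concrete vector:

    import numpy as np

    x = np.array([3.0, -4.0, 0.0])
    w = np.array([1.0, 2.0, -2.0])

    inner = np.dot(x, w)                  # <x, w> = sum_i x_i * w_i
    l2 = np.linalg.norm(x)                # ||x||_2 (Euclidean norm)
    l1 = np.linalg.norm(x, ord=1)         # ||x||_1 = sum_i |x_i|
    linf = np.linalg.norm(x, ord=np.inf)  # ||x||_inf = max_i |x_i|
    l0 = np.count_nonzero(x)              # ||x||_0 = number of nonzero entries

    print(inner, l2, l1, linf, l0)        # -5.0 5.0 7.0 4.0 2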
PART 1
Foundations
2
A Gentle Start
Let us begin our mathematical analysis by showing how successful learning can be
achieved in a relatively simplified setting. Imagine you have just arrived in some
small Pacific island. You soon find out that papayas are a significant ingredient in the
local diet. However, you have never before tasted papayas. You have to learn how
to predict whether a papaya you see in the market is tasty or not. First, you need
to decide which features of a papaya your prediction should be based on. On the
basis of your previous experience with other fruits, you decide to use two features:
the papaya's color, ranging from dark green, through orange and red to dark brown,
and the papaya's softness, ranging from rock hard to mushy. Your input for figuring
out your prediction rule is a sample of papayas that you have examined for color
and softness and then tasted and found out whether they were tasty or not. Let
us analyze this task as a demonstration of the considerations involved in learning
problems.
Our first step is to describe a formal model aimed to capture such learning tasks.
learner has access to (like a set of papayas that have been tasted and their
color, softness, and tastiness). Such labeled examples are often called
training examples. We sometimes also refer to S as a training set.1
The learners output: The learner is requested to output a prediction rule,
h : X Y. This function is also called a predictor, a hypothesis, or a classifier.
The predictor can be used to predict the label of new domain points. In our
papayas example, it is a rule that our learner will employ to predict whether
future papayas he examines in the farmers' market are going to be tasty or not.
We use the notation A(S) to denote the hypothesis that a learning algorithm,
A, returns upon receiving the training sequence S.
A simple data-generation model We now explain how the training data is generated. First, we assume that the instances (the papayas we encounter) are
generated by some probability distribution (in this case, representing the
environment). Let us denote that probability distribution over X by D. It is
important to note that we do not assume that the learner knows anything about
this distribution. For the type of learning tasks we discuss, this could be any
arbitrary probability distribution. As to the labels, in the current discussion
we assume that there is some correct labeling function, f : X Y, and that
yi = f (x i ) for all i . This assumption will be relaxed in the next chapter. The
labeling function is unknown to the learner. In fact, this is just what the learner
is trying to figure out. In summary, each pair in the training data S is generated
by first sampling a point x i according to D and then labeling it by f .
Measures of success: We define the error of a classifier to be the probability that
it does not predict the correct label on a random data point generated by the
aforementioned underlying distribution. That is, the error of h is the probability to draw a random instance x, according to the distribution D, such that
h(x) does not equal f (x).
Formally, given a domain subset,2 $A \subseteq \mathcal{X}$, the probability distribution, $\mathcal{D}$,
assigns a number, $\mathcal{D}(A)$, which determines how likely it is to observe a point
$x \in A$. In many cases, we refer to $A$ as an event and express it using a function
$\pi : \mathcal{X} \to \{0,1\}$, namely, $A = \{x \in \mathcal{X} : \pi(x) = 1\}$. In that case, we also use the
notation $\mathbb{P}_{x\sim\mathcal{D}}[\pi(x)]$ to express $\mathcal{D}(A)$.
We define the error of a prediction rule, $h : \mathcal{X} \to \mathcal{Y}$, to be
$$L_{\mathcal{D},f}(h) \;\stackrel{\text{def}}{=}\; \mathbb{P}_{x\sim\mathcal{D}}[h(x) \neq f(x)] \;\stackrel{\text{def}}{=}\; \mathcal{D}(\{x : h(x) \neq f(x)\}). \tag{2.1}$$
That is, the error of such $h$ is the probability of randomly choosing an example
$x$ for which $h(x) \neq f(x)$. The subscript $(\mathcal{D}, f)$ indicates that the error is measured with respect to the probability distribution $\mathcal{D}$ and the correct labeling
function $f$. We omit this subscript when it is clear from the context. $L_{(\mathcal{D},f)}(h)$
has several synonymous names such as the generalization error, the risk, or
the true error of h, and we will use these names interchangeably throughout
1  Despite the set notation, S is a sequence. In particular, the same example may appear twice in S and
some algorithms can take into account the order of examples in S.
2  Strictly speaking, we should be more careful and require that A is a member of some σ-algebra of
subsets of X, over which D is defined. We will formally define our measurability assumptions in the
next chapter.
the book. We use the letter L for the error, since we view this error as the loss
of the learner. We will later also discuss other possible formulations of such
loss.
A note about the information available to the learner The learner is blind to the
underlying distribution D over the world and to the labeling function f. In our
papayas example, we have just arrived in a new island and we have no clue
as to how papayas are distributed and how to predict their tastiness. The only
way the learner can interact with the environment is through observing the
training set.
In the next section we describe a simple learning paradigm for the preceding
setup and analyze its performance.
2.2 EMPIRICAL RISK MINIMIZATION
A natural learning paradigm is to search for a predictor that works well on the training data. The training error, that is, the error the classifier incurs over the training sample, is defined as
$$L_S(h) \;\stackrel{\text{def}}{=}\; \frac{|\{i \in [m] : h(x_i) \neq y_i\}|}{m}, \tag{2.2}$$
where $[m] = \{1, \ldots, m\}$. This learning paradigm, coming up with a predictor $h$ that minimizes $L_S(h)$, is called Empirical Risk Minimization, or ERM for short.
Assume that the probability distribution D is such that instances are distributed
uniformly within the gray square and the labeling function, f , determines the label
to be 1 if the instance is within the inner square, and 0 otherwise. The area of the
gray square in the picture is 2 and the area of the inner square is 1. Consider the
following predictor:
$$h_S(x) = \begin{cases} y_i & \text{if } \exists\, i \in [m] \text{ s.t. } x_i = x \\ 0 & \text{otherwise.} \end{cases} \tag{2.3}$$
While this predictor might seem rather artificial, in Exercise 2.1 we show a natural representation of it using polynomials. Clearly, no matter what the sample is,
L S (h S ) = 0, and therefore this predictor may be chosen by an ERM algorithm (it is
one of the empirical-minimum-cost hypotheses; no classifier can have smaller error).
On the other hand, the true error of any classifier that predicts the label 1 only on a
finite number of instances is, in this case, 1/2. Thus, L D (h S ) = 1/2. We have found
a predictor whose performance on the training set is excellent, yet its performance
on the true world is very poor. This phenomenon is called overfitting. Intuitively,
overfitting occurs when our hypothesis fits the training data too well (perhaps like
the everyday experience that a person who provides a perfect detailed explanation
for each of his single actions may raise suspicion).
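The overfitting just described is easy to reproduce numerically. The sketch below (an aside using NumPy, with a made-up region of area 2 standing in for the gray square and a subregion of area 1 for the inner square) implements the memorization predictor $h_S$ of Equation (2.3); its training error is zero while its estimated true error is about 1/2:

    import numpy as np

    rng = np.random.default_rng(1)

    def sample(m):
        """Sample m points uniformly from a region of area 2 (the 'gray square');
        label 1 iff the point falls in the subregion of area 1 (the 'inner square')."""
        x = rng.uniform([0, 0], [2, 1], size=(m, 2))
        y = (x[:, 0] <= 1).astype(int)
        return x, y

    x_train, y_train = sample(100)

    def h_S(x):
        """Memorization predictor of Eq. (2.3): return the stored label if x was
        seen in training, and 0 otherwise."""
        for xi, yi in zip(x_train, y_train):
            if np.array_equal(xi, x):
                return yi
        return 0

    train_err = np.mean([h_S(x) != y for x, y in zip(x_train, y_train)])
    x_test, y_test = sample(5000)          # fresh points: almost surely unseen
    test_err = np.mean([h_S(x) != y for x, y in zip(x_test, y_test)])
    print(train_err, test_err)             # 0.0 and roughly 0.5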
Applying the ERM rule over a restricted hypothesis class $\mathcal{H}$ yields the learning rule $\mathrm{ERM}_{\mathcal{H}}(S) \in \operatorname*{argmin}_{h \in \mathcal{H}} L_S(h)$, where argmin stands for the set of hypotheses in $\mathcal{H}$ that achieve the minimum value
of $L_S(h)$ over $\mathcal{H}$. By restricting the learner to choosing a predictor from $\mathcal{H}$, we bias it
toward a particular set of predictors. Such restrictions are often called an inductive
bias. Since the choice of such a restriction is determined before the learner sees the
training data, it should ideally be based on some prior knowledge about the problem
to be learned. For example, for the papaya taste prediction problem we may choose
the class H to be the set of predictors that are determined by axis aligned rectangles
(in the space determined by the color and softness coordinates). We will later show
that ERMH over this class is guaranteed not to overfit. On the other hand, the
example of overfitting that we have seen previously, demonstrates that choosing H
to be a class of predictors that includes all functions that assign the value 1 to a finite
set of domain points does not suffice to guarantee that ERMH will not overfit.
A fundamental question in learning theory is, over which hypothesis classes
ERMH learning will not result in overfitting. We will study this question later in
the book.
Intuitively, choosing a more restricted hypothesis class better protects us against
overfitting but at the same time might cause us a stronger inductive bias. We will get
back to this fundamental tradeoff later.
Mathematically speaking, this holds with probability 1. To simplify the presentation, we sometimes omit the "with probability 1" specifier.
The i.i.d. assumption: The examples in the training set are independently and
identically distributed (i.i.d.) according to the distribution D. That is, every
x i in S is freshly sampled according to D and then labeled according to the
labeling function, $f$. We denote this assumption by $S \sim \mathcal{D}^m$ where $m$ is the
size of $S$, and $\mathcal{D}^m$ denotes the probability over $m$-tuples induced by applying $\mathcal{D}$
to pick each element of the tuple independently of the other members of the
tuple.
Intuitively, the training set S is a window through which the learner gets
partial information about the distribution D over the world and the labeling
function, f . The larger the sample gets, the more likely it is to reflect more
accurately the distribution and labeling used to generate it.
Since L (D, f ) (h S ) depends on the training set, S, and that training set is picked by
a random process, there is randomness in the choice of the predictor h S and, consequently, in the risk L (D, f ) (h S ). Formally, we say that it is a random variable. It is not
realistic to expect that with full certainty S will suffice to direct the learner toward
a good classifier (from the point of view of D), as there is always some probability
that the sampled training data happens to be very nonrepresentative of the underlying D. If we go back to the papaya tasting example, there is always some (small)
chance that all the papayas we have happened to taste were not tasty, in spite of the
fact that, say, 70% of the papayas in our island are tasty. In such a case, ERMH (S)
may be the constant function that labels every papaya as not tasty (and has 70%
error on the true distribution of papayas in the island). We will therefore address
the probability to sample a training set for which $L_{(\mathcal{D},f)}(h_S)$ is not too large. Usually, we denote the probability of getting a nonrepresentative sample by $\delta$, and call
$(1 - \delta)$ the confidence parameter of our prediction.
On top of that, since we cannot guarantee perfect label prediction, we introduce
another parameter for the quality of prediction, the accuracy parameter, commonly
denoted by $\epsilon$. We interpret the event $L_{(\mathcal{D},f)}(h_S) > \epsilon$ as a failure of the learner, while
if $L_{(\mathcal{D},f)}(h_S) \le \epsilon$ we view the output of the algorithm as an approximately correct
predictor. Therefore (fixing some labeling function $f : \mathcal{X} \to \mathcal{Y}$), we are interested
in upper bounding the probability to sample an $m$-tuple of instances that will lead to
failure of the learner. Formally, let $S|_x = (x_1, \ldots, x_m)$ be the instances of the training
set. We would like to upper bound
$$\mathcal{D}^m(\{S|_x : L_{(\mathcal{D},f)}(h_S) > \epsilon\}).$$
Let $\mathcal{H}_B$ be the set of bad hypotheses, that is,
$$\mathcal{H}_B = \{h \in \mathcal{H} : L_{(\mathcal{D},f)}(h) > \epsilon\}.$$
In addition, let
$$M = \{S|_x : \exists h \in \mathcal{H}_B, L_S(h) = 0\}$$
be the set of misleading samples: Namely, for every $S|_x \in M$, there is a bad hypothesis, $h \in \mathcal{H}_B$, that looks like a good hypothesis on $S|_x$. Now, recall that we would
like to bound the probability of the event $L_{(\mathcal{D},f)}(h_S) > \epsilon$. But, since the realizability assumption implies that $L_S(h_S) = 0$, it follows that the event $L_{(\mathcal{D},f)}(h_S) > \epsilon$ can
only happen if for some $h \in \mathcal{H}_B$ we have $L_S(h) = 0$. In other words, this event will
only happen if our sample is in the set of misleading samples, $M$. Formally, we have
shown that
$$\{S|_x : L_{(\mathcal{D},f)}(h_S) > \epsilon\} \subseteq M.$$
Note that we can rewrite $M$ as
$$M = \bigcup_{h \in \mathcal{H}_B} \{S|_x : L_S(h) = 0\}. \tag{2.5}$$
Hence,
$$\mathcal{D}^m(\{S|_x : L_{(\mathcal{D},f)}(h_S) > \epsilon\}) \le \mathcal{D}^m(M) = \mathcal{D}^m\big(\textstyle\bigcup_{h \in \mathcal{H}_B} \{S|_x : L_S(h) = 0\}\big). \tag{2.6}$$
Next, we upper bound the right-hand side of the preceding equation using the
union bound, a basic property of probabilities.
Lemma 2.2 (Union Bound). For any two sets $A, B$ and a distribution $\mathcal{D}$ we have
$$\mathcal{D}(A \cup B) \le \mathcal{D}(A) + \mathcal{D}(B).$$
Applying the union bound to the right-hand side of Equation (2.6) yields
$$\mathcal{D}^m(\{S|_x : L_{(\mathcal{D},f)}(h_S) > \epsilon\}) \le \sum_{h \in \mathcal{H}_B} \mathcal{D}^m(\{S|_x : L_S(h) = 0\}). \tag{2.7}$$
Next, let us bound each summand of the right-hand side of the preceding inequality.
Fix some bad hypothesis $h \in \mathcal{H}_B$. The event $L_S(h) = 0$ is equivalent to the event
$\forall i, h(x_i) = f(x_i)$. Since the examples in the training set are sampled i.i.d. we get that
$$\mathcal{D}^m(\{S|_x : L_S(h) = 0\}) = \mathcal{D}^m(\{S|_x : \forall i, h(x_i) = f(x_i)\}) = \prod_{i=1}^m \mathcal{D}(\{x_i : h(x_i) = f(x_i)\}). \tag{2.8}$$
For each individual sampling of an element of the training set we have $\mathcal{D}(\{x_i : h(x_i) = f(x_i)\}) = 1 - L_{(\mathcal{D},f)}(h) \le 1 - \epsilon$, where the last inequality follows from the fact that $h \in \mathcal{H}_B$. Combining this with Equation (2.8) and using the inequality $1 - \epsilon \le e^{-\epsilon}$, we obtain that for every $h \in \mathcal{H}_B$,
$$\mathcal{D}^m(\{S|_x : L_S(h) = 0\}) \le (1 - \epsilon)^m \le e^{-\epsilon m}. \tag{2.9}$$
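As a quick numerical sanity check of the last step (an aside), the inequality $(1-\epsilon)^m \le e^{-\epsilon m}$ can be verified for a few illustrative values of $\epsilon$ and $m$:

    import math

    for eps in (0.01, 0.1, 0.3):
        for m in (10, 100, 1000):
            lhs = (1 - eps) ** m        # probability that one fixed bad hypothesis
                                        # looks perfect on all m i.i.d. examples
            rhs = math.exp(-eps * m)    # the upper bound used in Equation (2.9)
            assert lhs <= rhs
            print(f"eps={eps}, m={m}: (1-eps)^m={lhs:.3e} <= exp(-eps*m)={rhs:.3e}")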
Figure 2.1. Each point in the large circle represents a possible m-tuple of instances. Each
colored oval represents the set of misleading m-tuple of instances for some bad predictor h H B . The ERM can potentially overfit whenever it gets a misleading training set
S. That is, for some h H B we have L S (h) = 0. Equation (2.9) guarantees that for each
individual bad hypothesis, $h \in \mathcal{H}_B$, at most a $(1-\epsilon)^m$-fraction of the training sets would be
misleading. In particular, the larger m is, the smaller each of these colored ovals becomes.
The union bound formalizes the fact that the area representing the training sets that are
misleading with respect to some h H B (that is, the training sets in M) is at most the
sum of the areas of the colored ovals. Therefore, it is bounded by |H B | times the maximum
size of a colored oval. Any sample S outside the colored ovals cannot cause the ERM rule
to overfit.
Corollary 2.3. Let $\mathcal{H}$ be a finite hypothesis class. Let $\delta \in (0,1)$ and $\epsilon > 0$ and let $m$ be an integer that satisfies
$$m \ge \frac{\log(|\mathcal{H}|/\delta)}{\epsilon}.$$
Then, for any labeling function, $f$, and for any distribution, $\mathcal{D}$, for which the realizability assumption holds (that is, for some $h \in \mathcal{H}$, $L_{(\mathcal{D},f)}(h) = 0$), with probability of
at least $1-\delta$ over the choice of an i.i.d. sample $S$ of size $m$, we have that for every
ERM hypothesis, $h_S$, it holds that
$$L_{(\mathcal{D},f)}(h_S) \le \epsilon.$$
The preceding corollary tells us that for a sufficiently large $m$, the $\mathrm{ERM}_{\mathcal{H}}$ rule
over a finite hypothesis class will be probably (with confidence $1-\delta$) approximately
(up to an error of $\epsilon$) correct. In the next chapter we formally define the model of
Probably Approximately Correct (PAC) learning.
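As a small illustration of the bound in Corollary 2.3 (an aside with arbitrary example numbers), the following Python sketch computes the number of examples the corollary requires:

    import math

    def realizable_sample_complexity(H_size, eps, delta):
        """Smallest integer m satisfying m >= log(|H|/delta)/eps (Corollary 2.3)."""
        return math.ceil(math.log(H_size / delta) / eps)

    # Example: a class of 1000 hypotheses, accuracy 0.01, confidence 0.95.
    print(realizable_sample_complexity(1000, eps=0.01, delta=0.05))   # 991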
2.4 EXERCISES
2.1 Overfitting of polynomial matching: We have shown that the predictor defined in
Equation (2.3) leads to overfitting. While this predictor seems to be very unnatural,
the goal of this exercise is to show that it can be described as a thresholded polynomial. That is, show that given a training set $S = \{(\mathbf{x}_i, f(\mathbf{x}_i))\}_{i=1}^m \in (\mathbb{R}^d \times \{0,1\})^m$,
there exists a polynomial $p_S$ such that $h_S(\mathbf{x}) = 1$ if and only if $p_S(\mathbf{x}) \ge 0$, where $h_S$
is as defined in Equation (2.3). It follows that learning the class of all thresholded
polynomials using the ERM rule may lead to overfitting.
2.2 Let H be a class of binary classifiers over a domain X . Let D be an unknown distribution over X , and let f be the target hypothesis in H. Fix some h H. Show that
the expected value of $L_S(h)$ over the choice of $S|_x$ equals $L_{(\mathcal{D},f)}(h)$, namely,
$$\mathop{\mathbb{E}}_{S|_x \sim \mathcal{D}^m}[L_S(h)] = L_{(\mathcal{D},f)}(h).$$
2.3 Axis aligned rectangles: An axis aligned rectangle classifier in the plane is a classifier that assigns the value 1 to a point if and only if it is inside a certain rectangle.
[Figure 2.2. The axis aligned rectangles referred to in the exercise: $R^*$ (the rectangle generating the labels), $R(S)$ (the rectangle returned by the algorithm A), and the strip $R_1$; positive examples are marked +.]
Formally, given real numbers $a_1 \le b_1$, $a_2 \le b_2$, define the classifier $h_{(a_1,b_1,a_2,b_2)}$ by
$$h_{(a_1,b_1,a_2,b_2)}(x_1, x_2) = \begin{cases} 1 & \text{if } a_1 \le x_1 \le b_1 \text{ and } a_2 \le x_2 \le b_2 \\ 0 & \text{otherwise.} \end{cases} \tag{2.10}$$
The class of all axis aligned rectangles in the plane is defined as
$$\mathcal{H}_{\mathrm{rec}}^2 = \{h_{(a_1,b_1,a_2,b_2)} : a_1 \le b_1 \text{ and } a_2 \le b_2\}.$$
Note that this is an infinite size hypothesis class. Throughout this exercise we rely
on the realizability assumption.
1. Let A be the algorithm that returns the smallest rectangle enclosing all positive
examples in the training set. Show that A is an ERM (a minimal sketch of this algorithm appears after this exercise).
2. Show that if A receives a training set of size $\ge \frac{4\log(4/\delta)}{\epsilon}$ then, with probability of
at least $1-\delta$ it returns a hypothesis with error of at most $\epsilon$.
Hint: Fix some distribution $\mathcal{D}$ over $\mathcal{X}$, let $R^* = R(a_1^*, b_1^*, a_2^*, b_2^*)$ be the rectangle that generates the labels, and let $f$ be the corresponding hypothesis. Let
$a_1 \ge a_1^*$ be a number such that the probability mass (with respect to $\mathcal{D}$) of the
rectangle $R_1 = R(a_1^*, a_1, a_2^*, b_2^*)$ is exactly $\epsilon/4$. Similarly, let $b_1, a_2, b_2$ be numbers
such that the probability masses of the rectangles $R_2 = R(b_1, b_1^*, a_2^*, b_2^*)$, $R_3 = R(a_1^*, b_1^*, a_2^*, a_2)$, $R_4 = R(a_1^*, b_1^*, b_2, b_2^*)$ are all exactly $\epsilon/4$. Let $R(S)$ be the
rectangle returned by A. See illustration in Figure 2.2.
Show that $R(S) \subseteq R^*$.
Show that if $S$ contains (positive) examples in all of the rectangles
$R_1, R_2, R_3, R_4$, then the hypothesis returned by A has error of at most $\epsilon$.
For each $i \in \{1, \ldots, 4\}$, upper bound the probability that $S$ does not contain
an example from $R_i$.
Use the union bound to conclude the argument.
3. Repeat the previous question for the class of axis aligned rectangles in $\mathbb{R}^d$.
4. Show that the runtime of applying the algorithm A mentioned earlier is polynomial in $d$, $1/\epsilon$, and in $\log(1/\delta)$.
3
A Formal Learning Model
In this chapter we define our main formal learning model, the PAC learning model,
and its extensions. We will consider other notions of learnability in Chapter 7.
Sample Complexity
The function $m_{\mathcal{H}} : (0,1)^2 \to \mathbb{N}$ determines the sample complexity of learning $\mathcal{H}$: that
is, how many examples are required to guarantee a probably approximately correct
solution. The sample complexity is a function of the accuracy ($\epsilon$) and confidence ($\delta$)
parameters. It also depends on properties of the hypothesis class $\mathcal{H}$; for example,
for a finite class we showed that the sample complexity depends on the log of the size of $\mathcal{H}$.
Note that if $\mathcal{H}$ is PAC learnable, there are many functions $m_{\mathcal{H}}$ that satisfy the
requirements given in the definition of PAC learnability. Therefore, to be precise,
we will define the sample complexity of learning $\mathcal{H}$ to be the minimal function,
in the sense that for any $\epsilon, \delta$, $m_{\mathcal{H}}(\epsilon, \delta)$ is the minimal integer that satisfies the
requirements of PAC learning with accuracy $\epsilon$ and confidence $\delta$.
Let us now recall the conclusion of the analysis of finite hypothesis classes from
the previous chapter. It can be rephrased as stating:
Corollary 3.2. Every finite hypothesis class is PAC learnable with sample complexity
$$m_{\mathcal{H}}(\epsilon, \delta) \le \left\lceil \frac{\log(|\mathcal{H}|/\delta)}{\epsilon} \right\rceil.$$
There are infinite classes that are learnable as well (see, for example, Exercise
3.3). Later on we will show that what determines the PAC learnability of a class is
not its finiteness but rather a combinatorial measure called the VC dimension.
In the agnostic setting, where the labels are generated by a joint distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$, we redefine the true error (or risk) of a prediction rule $h$ to be
$$L_{\mathcal{D}}(h) \;\stackrel{\text{def}}{=}\; \mathbb{P}_{(x,y)\sim\mathcal{D}}[h(x) \neq y] \;\stackrel{\text{def}}{=}\; \mathcal{D}(\{(x,y) : h(x) \neq y\}). \tag{3.1}$$
We would like to find a predictor, h, for which that error will be minimized.
However, the learner does not know the data-generating distribution $\mathcal{D}$. What the learner does
have access to is the training data, $S$. The definition of the empirical risk remains
the same as before, namely,
$$L_S(h) \;\stackrel{\text{def}}{=}\; \frac{|\{i \in [m] : h(x_i) \neq y_i\}|}{m}.$$
Given $S$, a learner can compute $L_S(h)$ for any function $h : \mathcal{X} \to \{0,1\}$. Note that
$L_S(h) = L_{\mathcal{D}(\text{uniform over } S)}(h)$.
The Goal
We wish to find some hypothesis, h : X Y, that (probably approximately)
minimizes the true risk, L D (h).
Clearly, if the realizability assumption holds, agnostic PAC learning provides the
same guarantee as PAC learning. In that sense, agnostic PAC learning generalizes
the definition of PAC learning. When the realizability assumption does not hold, no
learner can guarantee an arbitrarily small error. Nevertheless, under the definition
of agnostic PAC learning, a learner can still declare success if its error is not much
larger than the best error achievable by a predictor from the class H. This is in
contrast to PAC learning, in which the learner is required to achieve a small error in
absolute terms and not relative to the best error achievable by the hypothesis class.
Multiclass Classification Our classification does not have to be binary. Take, for
example, the task of document classification: We wish to design a program that
will be able to classify given documents according to topics (e.g., news, sports,
biology, medicine). A learning algorithm for such a task will have access to
examples of correctly classified documents and, on the basis of these examples,
should output a program that can take as input a new document and output
a topic classification for that document. Here, the domain set is the set of all
potential documents. Once again, we would usually represent documents by a
set of features that could include counts of different key words in the document,
as well as other possibly relevant features like the size of the document or its origin. The label set in this task will be the set of possible document topics (so Y will
be some large finite set). Once we determine our domain and label sets, the other
components of our framework look exactly the same as in the papaya tasting
example; Our training sample will be a finite sequence of (feature vector, label)
pairs, the learners output will be a function from the domain set to the label
set, and, finally, for our measure of success, we can use the probability, over
(document, topic) pairs, of the event that our predictor suggests a wrong label.
Regression In this task, one wishes to find some simple pattern in the data a
functional relationship between the X and Y components of the data. For example, one wishes to find a linear function that best predicts a babys birth weight
on the basis of ultrasound measures of his head circumference, abdominal circumference, and femur length. Here, our domain set X is some subset of R3 (the
three ultrasound measurements), and the set of labels, Y, is the set of real
numbers (the weight in grams). In this context, it is more adequate to call Y the
target set. Our training data as well as the learners output are as before (a finite
sequence of (x, y) pairs, and a function from X to Y respectively). However,
our measure of success is different. We may evaluate the quality of a hypothesis
function, h : X Y, by the expected square difference between the true labels
and their predicted values, namely,
$$L_{\mathcal{D}}(h) \;\stackrel{\text{def}}{=}\; \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}\left[(h(x) - y)^2\right]. \tag{3.2}$$
To accommodate a wide range of learning tasks, we generalize our formalism: given a hypothesis class $\mathcal{H}$, a domain $Z$ of examples, and a loss function $\ell : \mathcal{H} \times Z \to \mathbb{R}_+$, we define the risk function to be the expected loss of a hypothesis $h \in \mathcal{H}$ with respect to a probability distribution $\mathcal{D}$ over $Z$, namely,
$$L_{\mathcal{D}}(h) \;\stackrel{\text{def}}{=}\; \mathop{\mathbb{E}}_{z\sim\mathcal{D}}[\ell(h, z)]. \tag{3.3}$$
That is, we consider the expectation of the loss of $h$ over objects $z$ picked randomly according to $\mathcal{D}$. Similarly, we define the empirical risk to be the expected loss
over a given sample $S = (z_1, \ldots, z_m) \in Z^m$, namely,
$$L_S(h) \;\stackrel{\text{def}}{=}\; \frac{1}{m}\sum_{i=1}^m \ell(h, z_i). \tag{3.4}$$
The loss functions used in the preceding examples of classification and regression
tasks are as follows:
0-1 loss: Here, our random variable $z$ ranges over the set of pairs $\mathcal{X} \times \mathcal{Y}$ and the
loss function is
$$\ell_{0\text{-}1}(h, (x, y)) \;\stackrel{\text{def}}{=}\; \begin{cases} 0 & \text{if } h(x) = y \\ 1 & \text{if } h(x) \neq y. \end{cases}$$
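To make Equations (3.3) and (3.4) concrete, here is a short Python sketch (an aside with made-up data and hypothetical predictors) that computes the empirical risk $L_S(h)$ under the 0-1 loss above and under the squared loss of Equation (3.2):

    import numpy as np

    def empirical_risk(loss, h, S):
        """L_S(h) = (1/m) * sum_i loss(h, z_i), Equation (3.4)."""
        return np.mean([loss(h, z) for z in S])

    zero_one_loss = lambda h, z: 0.0 if h(z[0]) == z[1] else 1.0
    squared_loss = lambda h, z: (h(z[0]) - z[1]) ** 2

    # Made-up sample S of (x, y) pairs and two simple predictors.
    S = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
    h_classify = lambda x: int(x > 0.5)       # thresholding classifier
    h_regress = lambda x: 1.2 * x - 0.1       # a linear predictor

    print(empirical_risk(zero_one_loss, h_classify, S))   # 0.0 on this sample
    print(empirical_risk(squared_loss, h_regress, S))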
Remark 3.2 (Proper vs. Representation-Independent Learning*). In the preceding definition, we required that the algorithm will return a hypothesis from H. In
some situations, $\mathcal{H}$ is a subset of a set $\mathcal{H}'$, and the loss function can be naturally
extended to be a function from $\mathcal{H}' \times Z$ to the reals. In this case, we may allow
the algorithm to return a hypothesis $h' \in \mathcal{H}'$, as long as it satisfies the requirement
$L_{\mathcal{D}}(h') \le \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \epsilon$. Allowing the algorithm to output a hypothesis from
$\mathcal{H}'$ is called representation independent learning, while proper learning occurs when
the algorithm must output a hypothesis from $\mathcal{H}$. Representation independent learning is sometimes called "improper learning," although there is nothing improper in
representation independent learning.
3.3 SUMMARY
In this chapter we defined our main formal learning model, PAC learning. The
basic model relies on the realizability assumption, while the agnostic variant does
not impose any restrictions on the underlying distribution over the examples. We
also generalized the PAC model to arbitrary loss functions. We will sometimes refer
to the most general model simply as PAC learning, omitting the agnostic prefix
and letting the reader infer what the underlying loss function is from the context.
When we would like to emphasize that we are dealing with the original PAC setting
we mention that the realizability assumption holds. In Chapter 7 we will discuss
other notions of learnability.
3.5 EXERCISES
3.1 Monotonicity of Sample Complexity: Let H be a hypothesis class for a binary classification task. Suppose that H is PAC learnable and its sample complexity is given
by $m_{\mathcal{H}}(\epsilon, \delta)$. Show that $m_{\mathcal{H}}$ is monotonically nonincreasing in each of its parameters. That is, show that given $\delta \in (0,1)$, and given $0 < \epsilon_1 \le \epsilon_2 < 1$, we have that
$m_{\mathcal{H}}(\epsilon_1, \delta) \ge m_{\mathcal{H}}(\epsilon_2, \delta)$. Similarly, show that given $\epsilon \in (0,1)$, and given $0 < \delta_1 \le \delta_2 < 1$,
we have that $m_{\mathcal{H}}(\epsilon, \delta_1) \ge m_{\mathcal{H}}(\epsilon, \delta_2)$.
3.2 Let $\mathcal{X}$ be a discrete domain, and let $\mathcal{H}_{\mathrm{Singleton}} = \{h_z : z \in \mathcal{X}\} \cup \{h^-\}$, where for each
$z \in \mathcal{X}$, $h_z$ is the function defined by $h_z(x) = 1$ if $x = z$ and $h_z(x) = 0$ if $x \neq z$. $h^-$
is simply the all-negative hypothesis, namely, $\forall x \in \mathcal{X}, h^-(x) = 0$. The realizability
assumption here implies that the true hypothesis $f$ labels negatively all examples in
the domain, perhaps except one.
1. Describe an algorithm that implements the ERM rule for learning HSingleton in
the realizable setup.
2. Show that HSingleton is PAC learnable. Provide an upper bound on the sample
complexity.
3.3 Let $\mathcal{X} = \mathbb{R}^2$, $\mathcal{Y} = \{0,1\}$, and let $\mathcal{H}$ be the class of concentric circles in the plane, that
is, $\mathcal{H} = \{h_r : r \in \mathbb{R}_+\}$, where $h_r(x) = \mathbb{1}_{[\|x\| \le r]}$. Prove that $\mathcal{H}$ is PAC learnable (assume
realizability), and its sample complexity is bounded by
$$m_{\mathcal{H}}(\epsilon, \delta) \le \left\lceil \frac{\log(1/\delta)}{\epsilon} \right\rceil.$$
3.4 In this question, we study the hypothesis class of Boolean conjunctions defined as
follows. The instance space is $\mathcal{X} = \{0,1\}^d$ and the label set is $\mathcal{Y} = \{0,1\}$. A literal over
the variables $x_1, \ldots, x_d$ is a simple Boolean function that takes the form $f(x) = x_i$, for
some $i \in [d]$, or $f(x) = 1 - x_i$ for some $i \in [d]$. We use the notation $\bar{x}_i$ as a shorthand
for $1 - x_i$. A conjunction is any product of literals. In Boolean logic, the product is
denoted using the $\wedge$ sign. For example, the function $h(x) = x_1 \cdot (1 - x_2)$ is written as
$x_1 \wedge \bar{x}_2$.
We consider the hypothesis class of all conjunctions of literals over the d variables. The empty conjunction is interpreted as the all-positive hypothesis (namely,
the function that returns $h(x) = 1$ for all $x$). The conjunction $x_1 \wedge \bar{x}_1$ (and similarly
any conjunction involving a literal and its negation) is allowed and interpreted as
the all-negative hypothesis (namely, the conjunction that returns h(x) = 0 for all x).
We assume realizability: Namely, we assume that there exists a Boolean conjunction
that generates the labels. Thus, each example (x, y) X Y consists of an assignment to the d Boolean variables x1 , . . ., xd , and its truth value (0 for false and 1 for
true).
For instance, let $d = 3$ and suppose that the true conjunction is $x_1 \wedge \bar{x}_2$. Then, the
training set $S$ might contain the following instances:
$$((1,1,1),0),\ ((1,0,1),1),\ ((0,1,0),0),\ ((1,0,0),1).$$
Prove that the hypothesis class of all conjunctions over $d$ variables is PAC learnable and bound its sample complexity. Propose an algorithm that implements the
ERM rule, whose runtime is polynomial in $d \cdot m$ (one possible such algorithm is sketched after this exercise).
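For the algorithmic part of the last exercise, one standard ERM implementation (sketched below as an illustration; it is not necessarily the intended solution) starts from the conjunction of all $2d$ literals and deletes every literal contradicted by a positive example; negative examples are never used, and the runtime is $O(d \cdot m)$:

    def erm_conjunction(examples, d):
        """ERM for Boolean conjunctions (realizable case).
        examples: list of (x, y) with x a 0/1 tuple of length d and y in {0, 1}.
        Returns the index sets of literals kept as x_i and as bar{x}_i."""
        pos_literals = set(range(d))      # start with x_1, ..., x_d
        neg_literals = set(range(d))      # and with bar{x}_1, ..., bar{x}_d
        for x, y in examples:
            if y == 1:                    # positive example: drop literals it violates
                for i in range(d):
                    if x[i] == 0:
                        pos_literals.discard(i)
                    else:
                        neg_literals.discard(i)
        return pos_literals, neg_literals

    # The training set from the example above (true conjunction x_1 and bar{x}_2).
    S = [((1, 1, 1), 0), ((1, 0, 1), 1), ((0, 1, 0), 0), ((1, 0, 0), 1)]
    print(erm_conjunction(S, d=3))        # ({0}, {1}), i.e., x_1 and bar{x}_2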
3.5 Let X be a domain and let D1 , D2 , . . ., Dm be a sequence of distributions over X . Let
H be a finite class of binary classifiers over X and let f H. Suppose we are getting
a sample S of m examples, such that the instances are independent but are not identically distributed; the i th instance is sampled from Di and then yi is set to be f (xi ).
Let $\bar{\mathcal{D}}_m$ denote the average distribution, that is, $\bar{\mathcal{D}}_m = (\mathcal{D}_1 + \cdots + \mathcal{D}_m)/m$.
Fix an accuracy parameter $\epsilon \in (0,1)$. Show that
$$\mathbb{P}\left[\exists h \in \mathcal{H} \text{ s.t. } L_{(\bar{\mathcal{D}}_m, f)}(h) > \epsilon \text{ and } L_{(S,f)}(h) = 0\right] \le |\mathcal{H}|\, e^{-\epsilon m}.$$
3.6 Let H be a hypothesis class of binary classifiers. Show that if H is agnostic PAC
learnable, then H is PAC learnable as well. Furthermore, if A is a successful agnostic
PAC learner for H, then A is also a successful PAC learner for H.
3.7 (*) The Bayes optimal predictor: Show that for every probability distribution D, the
Bayes optimal predictor $f_{\mathcal{D}}$ is optimal, in the sense that for every classifier $g$ from
$\mathcal{X}$ to $\{0,1\}$, $L_{\mathcal{D}}(f_{\mathcal{D}}) \le L_{\mathcal{D}}(g)$.
3.8 (*) We say that a learning algorithm A is better than B with respect to some
probability distribution, D, if
$$L_{\mathcal{D}}(A(S)) \le L_{\mathcal{D}}(B(S))$$
for all samples $S \in (\mathcal{X} \times \{0,1\})^m$. We say that a learning algorithm $A$ is better than $B$,
if it is better than B with respect to all probability distributions D over X {0, 1}.
1. A probabilistic label predictor is a function that assigns to every domain point
x a probability value, h(x) [0, 1], that determines the probability of predicting
the label 1. That is, given such an h and an input, x, the label for x is predicted by
tossing a coin with bias h(x) toward Heads and predicting 1 iff the coin comes up
Heads. Formally, we define a probabilistic label predictor as a function, $h : \mathcal{X} \to [0,1]$. The loss of such $h$ on an example $(x,y)$ is defined to be $|h(x) - y|$, which is
exactly the probability that the prediction of $h$ will not be equal to $y$. Note that
if $h$ is deterministic, that is, returns values in $\{0,1\}$, then $|h(x)-y| = \mathbb{1}_{[h(x) \neq y]}$.
Prove that for every data-generating distribution $\mathcal{D}$ over $\mathcal{X} \times \{0,1\}$, the Bayes
optimal predictor has the smallest risk (w.r.t. the loss function $\ell(h,(x,y)) = |h(x)-y|$, among all possible label predictors, including probabilistic ones).
2. Let X be a domain and {0, 1} be a set of labels. Prove that for every distribution
$\mathcal{D}$ over $\mathcal{X} \times \{0,1\}$, there exists a learning algorithm $A_{\mathcal{D}}$ that is better than any
other learning algorithm with respect to D.
3. Prove that for every learning algorithm $A$ there exists a probability distribution,
$\mathcal{D}$, and a learning algorithm $B$ such that $A$ is not better than $B$ w.r.t. $\mathcal{D}$.
3.9 Consider a variant of the PAC model in which there are two example oracles: one
that generates positive examples and one that generates negative examples, both
according to the underlying distribution D on X . Formally, given a target function
$f : \mathcal{X} \to \{0,1\}$, let $\mathcal{D}^+$ be the distribution over $\mathcal{X}^+ = \{x \in \mathcal{X} : f(x) = 1\}$ defined by
$\mathcal{D}^+(A) = \mathcal{D}(A)/\mathcal{D}(\mathcal{X}^+)$, for every $A \subseteq \mathcal{X}^+$. Similarly, $\mathcal{D}^-$ is the distribution over $\mathcal{X}^-$
induced by $\mathcal{D}$.
The definition of PAC learnability in the two-oracle model is the same as the
standard definition of PAC learnability except that here the learner has access to
$m_{\mathcal{H}}^+(\epsilon, \delta)$ i.i.d. examples from $\mathcal{D}^+$ and $m_{\mathcal{H}}^-(\epsilon, \delta)$ i.i.d. examples from $\mathcal{D}^-$. The learner's
goal is to output $h$ s.t. with probability at least $1 - \delta$ (over the choice of the two
training sets, and possibly over the nondeterministic decisions made by the learning
algorithm), both $L_{(\mathcal{D}^+, f)}(h) \le \epsilon$ and $L_{(\mathcal{D}^-, f)}(h) \le \epsilon$.
1. (*) Show that if H is PAC learnable (in the standard one-oracle model), then H
is PAC learnable in the two-oracle model.
2. (**) Define $h^+$ to be the always-plus hypothesis and $h^-$ to be the always-minus
hypothesis. Assume that $h^+, h^- \in \mathcal{H}$. Show that if $\mathcal{H}$ is PAC learnable in the
two-oracle model, then H is PAC learnable in the standard one-oracle model.
4
Learning via Uniform Convergence
The first formal learning model that we have discussed was the PAC model. In
Chapter 2 we have shown that under the realizability assumption, any finite hypothesis class is PAC learnable. In this chapter we will develop a general tool, uniform
convergence, and apply it to show that any finite class is learnable in the agnostic PAC model with general loss functions, as long as the range of the loss function is
bounded.
Corollary 4.4. If a class $\mathcal{H}$ has the uniform convergence property with a function $m_{\mathcal{H}}^{\mathrm{UC}}$
then the class is agnostically PAC learnable with the sample complexity $m_{\mathcal{H}}(\epsilon, \delta) \le m_{\mathcal{H}}^{\mathrm{UC}}(\epsilon/2, \delta)$. Furthermore, in that case, the $\mathrm{ERM}_{\mathcal{H}}$ paradigm is a successful agnostic
PAC learner for $\mathcal{H}$.
Writing
$$\{S : \exists h \in \mathcal{H}, |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\} = \bigcup_{h \in \mathcal{H}} \{S : |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\},$$
and applying the union bound (Lemma 2.2) we obtain
$$\mathcal{D}^m(\{S : \exists h \in \mathcal{H}, |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\}) \le \sum_{h \in \mathcal{H}} \mathcal{D}^m(\{S : |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\}). \tag{4.1}$$
Our second step will be to argue that each summand of the right-hand side of
this inequality is small enough (for a sufficiently large m). That is, we will show that
for any fixed hypothesis, h, (which is chosen in advance prior to the sampling of the
training set), the gap between the true and empirical risks, $|L_S(h) - L_{\mathcal{D}}(h)|$, is likely
to be small.
Recall that $L_{\mathcal{D}}(h) = \mathbb{E}_{z\sim\mathcal{D}}[\ell(h,z)]$ and that $L_S(h) = \frac{1}{m}\sum_{i=1}^m \ell(h, z_i)$. Since each $z_i$
is sampled i.i.d. from $\mathcal{D}$, the expected value of the random variable $\ell(h, z_i)$ is $L_{\mathcal{D}}(h)$.
By the linearity of expectation, it follows that L D (h) is also the expected value of
$L_S(h)$. Hence, the quantity $|L_{\mathcal{D}}(h) - L_S(h)|$ is the deviation of the random variable
L S (h) from its expectation. We therefore need to show that the measure of L S (h) is
concentrated around its expected value.
A basic statistical fact, the law of large numbers, states that when m goes to
infinity, empirical averages converge to their true expectation. This is true for L S (h),
since it is the empirical average of m i.i.d random variables. However, since the law
of large numbers is only an asymptotic result, it provides no information about the
gap between the empirically estimated error and its true value for any given, finite,
sample size.
Instead, we will use a measure concentration inequality due to Hoeffding, which
quantifies the gap between empirical averages and their expected value.
Lemma 4.5 (Hoeffding's Inequality). Let $\theta_1, \ldots, \theta_m$ be a sequence of i.i.d. random
variables and assume that for all $i$, $\mathbb{E}[\theta_i] = \mu$ and $\mathbb{P}[a \le \theta_i \le b] = 1$. Then, for any
$\epsilon > 0$
$$\mathbb{P}\left[\left|\frac{1}{m}\sum_{i=1}^m \theta_i - \mu\right| > \epsilon\right] \le 2\exp\left(-2m\epsilon^2/(b-a)^2\right).$$
Applying Hoeffding's inequality with $\theta_i = \ell(h, z_i)$, whose expectation is $L_{\mathcal{D}}(h)$ and whose range is $[0,1]$, we obtain that for each fixed $h$, $\mathcal{D}^m(\{S : |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\}) \le 2\exp(-2m\epsilon^2)$. Summing over all $h \in \mathcal{H}$,
$$\mathcal{D}^m(\{S : \exists h \in \mathcal{H}, |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\}) \le \sum_{h \in \mathcal{H}} 2\exp(-2m\epsilon^2) = 2|\mathcal{H}|\exp(-2m\epsilon^2).$$
Finally, if we choose
$$m \ge \frac{\log(2|\mathcal{H}|/\delta)}{2\epsilon^2}$$
then
$$\mathcal{D}^m(\{S : \exists h \in \mathcal{H}, |L_S(h) - L_{\mathcal{D}}(h)| > \epsilon\}) \le \delta.$$
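Before stating the resulting corollary, a brief aside: the concentration that Hoeffding's inequality guarantees is easy to observe numerically. The following Python sketch (using NumPy, with made-up Bernoulli variables, so $a = 0$ and $b = 1$) compares the simulated probability of a deviation larger than $\epsilon$ with the bound $2\exp(-2m\epsilon^2)$:

    import numpy as np

    rng = np.random.default_rng(3)
    m, eps, trials = 200, 0.1, 20000
    p = 0.3                                  # Bernoulli(p) variables, so mu = p

    samples = rng.binomial(1, p, size=(trials, m))
    deviations = np.abs(samples.mean(axis=1) - p)
    empirical_prob = np.mean(deviations > eps)
    hoeffding_bound = 2 * np.exp(-2 * m * eps ** 2)

    print(empirical_prob, hoeffding_bound)   # the simulated probability stays below the bound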
Corollary 4.6. Let $\mathcal{H}$ be a finite hypothesis class, let $Z$ be a domain, and let $\ell : \mathcal{H} \times Z \to [0,1]$ be a loss function. Then, $\mathcal{H}$ enjoys the uniform convergence property with
sample complexity
$$m_{\mathcal{H}}^{\mathrm{UC}}(\epsilon, \delta) \le \left\lceil \frac{\log(2|\mathcal{H}|/\delta)}{2\epsilon^2} \right\rceil.$$
Furthermore, the class is agnostically PAC learnable using the ERM algorithm with
sample complexity
$$m_{\mathcal{H}}(\epsilon, \delta) \le m_{\mathcal{H}}^{\mathrm{UC}}(\epsilon/2, \delta) \le \left\lceil \frac{2\log(2|\mathcal{H}|/\delta)}{\epsilon^2} \right\rceil.$$
Remark 4.1 (The Discretization Trick). While the preceding corollary only
applies to finite hypothesis classes, there is a simple trick that allows us to get a
very good estimate of the practical sample complexity of infinite hypothesis classes.
Consider a hypothesis class that is parameterized by $d$ parameters. For example, let $\mathcal{X} = \mathbb{R}$, $\mathcal{Y} = \{\pm 1\}$, and the hypothesis class, $\mathcal{H}$, be all functions of the form $h_\theta(x) = \mathrm{sign}(x - \theta)$. That is, each hypothesis is parameterized by one parameter, $\theta \in \mathbb{R}$, and the hypothesis outputs $1$ for all instances larger than $\theta$ and outputs $-1$ for instances smaller than $\theta$. This is a hypothesis class of infinite size. However, if we are going to learn this hypothesis class in practice, using a computer, we will probably maintain real numbers using floating point representation, say, of 64 bits. It follows that in practice, our hypothesis class is parameterized by the set of scalars that can be represented using a 64 bit floating point number. There are at most $2^{64}$ such numbers; hence the actual size of our hypothesis class is at most $2^{64}$. More generally, if our hypothesis class is parameterized by $d$ numbers, in practice we learn a hypothesis class of size at most $2^{64d}$. Applying Corollary 4.6 we obtain that the sample complexity of such classes is bounded by $\frac{128d + 2\log(2/\delta)}{\epsilon^2}$. This upper bound on the sample complexity has the deficiency of being dependent on the specific representation of real numbers used by our machine. In Chapter 6 we will introduce
a rigorous way to analyze the sample complexity of infinite size hypothesis classes.
Nevertheless, the discretization trick can be used to get a rough estimate of the
sample complexity in many practical situations.
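The following small helper (an illustration of ours, not part of the text) evaluates the finite-class uniform convergence bound of Corollary 4.6 and applies it to the discretization trick, treating a class parameterized by $d$ 64-bit floats as having size at most $2^{64d}$; the function names are ours.

```python
import math

def finite_class_sample_size(log2_size, eps, delta):
    # Corollary 4.6: m >= log(2|H|/delta) / (2 eps^2), with |H| passed via its
    # base-2 logarithm so that huge classes such as |H| = 2^(64d) do not overflow.
    log_H = log2_size * math.log(2.0)
    return math.ceil((log_H + math.log(2.0 / delta)) / (2 * eps ** 2))

def discretized_sample_size(d, eps, delta):
    # Discretization trick: a class parameterized by d 64-bit floats has |H| <= 2^(64d).
    return finite_class_sample_size(64 * d, eps, delta)

print(finite_class_sample_size(log2_size=10, eps=0.05, delta=0.01))  # |H| = 2^10
print(discretized_sample_size(d=3, eps=0.1, delta=0.05))
```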
4.3 SUMMARY
If the uniform convergence property holds for a hypothesis class H then in most
cases the empirical risks of hypotheses in H will faithfully represent their true
risks. Uniform convergence suffices for agnostic PAC learnability using the ERM
rule. We have shown that finite hypothesis classes enjoy the uniform convergence
property and are hence agnostic PAC learnable.
4.5 EXERCISES
4.1 In this exercise, we show that the $(\epsilon,\delta)$ requirement on the convergence of errors in our definitions of PAC learning is, in fact, quite close to a simpler looking requirement about averages (or expectations). Prove that the following two statements are equivalent (for any learning algorithm $A$, any probability distribution $D$, and any loss function whose range is $[0,1]$):
1. For every $\epsilon, \delta > 0$, there exists $m(\epsilon,\delta)$ such that for all $m \ge m(\epsilon,\delta)$,
$$\mathbb{P}_{S\sim D^m}[L_D(A(S)) > \epsilon] < \delta.$$
2. $$\lim_{m\to\infty}\ \mathbb{E}_{S\sim D^m}[L_D(A(S))] = 0.$$
5
The Bias-Complexity Tradeoff
In Chapter 2 we saw that unless one is careful, the training data can mislead the
learner, and result in overfitting. To overcome this problem, we restricted the search
space to some hypothesis class H. Such a hypothesis class can be viewed as reflecting some prior knowledge that the learner has about the task: a belief that one of the members of the class H is a low-error model for the task. For example, in our papayas taste problem, on the basis of our previous experience with other fruits, we may assume that some rectangle in the color-hardness plane predicts (at least approximately) the papaya's tastiness.
Is such prior knowledge really necessary for the success of learning? Maybe
there exists some kind of universal learner, that is, a learner who has no prior knowledge about a certain task and is ready to be challenged by any task? Let us elaborate
on this point. A specific learning task is defined by an unknown distribution D over
X Y, where the goal of the learner is to find a predictor h : X Y, whose risk,
L D (h), is small enough. The question is therefore whether there exist a learning
algorithm A and a training set size m, such that for every distribution D, if A receives
m i.i.d. examples from D, there is a high chance it outputs a predictor h that has a
low risk.
The first part of this chapter addresses this question formally. The No-Free-Lunch theorem states that no such universal learner exists. To be more precise, the theorem states that for binary classification prediction tasks, for every learner there exists a distribution on which it fails. We say that the learner fails if, upon receiving i.i.d. examples from that distribution, its output hypothesis is likely to have a large risk, say, 0.3, whereas for the same distribution, there exists another learner that will output a hypothesis with a small risk. In other words, the theorem states that no learner can succeed on all learnable tasks: every learner has tasks on which it fails while other learners succeed.
Therefore, when approaching a particular learning problem, defined by some
distribution D, we should have some prior knowledge on D. One type of such prior
knowledge is that D comes from some specific parametric family of distributions.
We will study learning under such assumptions later on in Chapter 24. Another type
of prior knowledge on D, which we assumed when defining the PAC learning model,
is that there exists some $h$ in a predefined hypothesis class $\mathcal{H}$ such that $L_D(h) = 0$. A softer type of prior knowledge on $D$ is assuming that $\min_{h\in\mathcal{H}} L_D(h)$ is small. In a sense, this weaker assumption on $D$ is a prerequisite for using the agnostic PAC model, in which we require that the risk of the output hypothesis will not be much larger than $\min_{h\in\mathcal{H}} L_D(h)$.
In the second part of this chapter we study the benefits and pitfalls of using a
hypothesis class as a means of formalizing prior knowledge. We decompose the
error of an ERM algorithm over a class H into two components. The first component reflects the quality of our prior knowledge, measured by the minimal risk of a
hypothesis in our hypothesis class, $\min_{h\in\mathcal{H}} L_D(h)$. This component is also called the
approximation error, or the bias of the algorithm toward choosing a hypothesis from
H. The second component is the error due to overfitting, which depends on the size
or the complexity of the class H and is called the estimation error. These two terms
imply a tradeoff between choosing a more complex H (which can decrease the bias
but increases the risk of overfitting) or a less complex H (which might increase the
bias but decreases the potential overfitting).
The proof of the No-Free-Lunch theorem constructs such a hard distribution explicitly. Let $C$ be a subset of $\mathcal{X}$ of size $2m$, let $T = 2^{2m}$, and let $f_1,\ldots,f_T$ be all the possible functions from $C$ to $\{0,1\}$. For each such function, let $D_i$ be the distribution over $C \times \{0,1\}$ that is concentrated on the graph of $f_i$. That is, the probability to choose a pair $(x,y)$ is $1/|C|$ if the label $y$ is indeed the true label according to $f_i$, and the probability is $0$ if $y \ne f_i(x)$. Clearly, $L_{D_i}(f_i) = 0$.
We will show that for every algorithm, $A$, that receives a training set of $m$ examples from $C \times \{0,1\}$ and returns a function $A(S) : C \to \{0,1\}$, it holds that
$$\max_{i\in[T]}\ \mathbb{E}_{S\sim D_i^m}[L_{D_i}(A(S))] \ge 1/4. \qquad (5.1)$$
Clearly, this means that for every algorithm, $A'$, that receives a training set of $m$ examples from $\mathcal{X} \times \{0,1\}$ there exist a function $f : \mathcal{X} \to \{0,1\}$ and a distribution $D$ over $\mathcal{X} \times \{0,1\}$, such that $L_D(f) = 0$ and
$$\mathbb{E}_{S\sim D^m}[L_D(A'(S))] \ge 1/4. \qquad (5.2)$$
It is easy to verify that the preceding suffices for showing that $\mathbb{P}[L_D(A'(S)) \ge 1/8] \ge 1/7$, which is what we need to prove (see Exercise 5.1).
We now turn to proving that Equation (5.1) holds. There are $k = (2m)^m$ possible sequences of $m$ examples from $C$. Denote these sequences by $S_1,\ldots,S_k$. Also, if $S_j = (x_1,\ldots,x_m)$ we denote by $S^i_j$ the sequence containing the instances in $S_j$ labeled by the function $f_i$, namely, $S^i_j = ((x_1, f_i(x_1)),\ldots,(x_m, f_i(x_m)))$. If the distribution is $D_i$ then the possible training sets $A$ can receive are $S^i_1,\ldots,S^i_k$, and all these training sets have the same probability of being sampled. Therefore,
$$\mathbb{E}_{S\sim D_i^m}[L_{D_i}(A(S))] = \frac{1}{k}\sum_{j=1}^{k} L_{D_i}(A(S^i_j)). \qquad (5.3)$$
Using the facts that maximum is larger than average and that average is larger than minimum, we have
$$\max_{i\in[T]}\ \frac{1}{k}\sum_{j=1}^{k} L_{D_i}(A(S^i_j)) \ \ge\ \frac{1}{T}\sum_{i=1}^{T}\frac{1}{k}\sum_{j=1}^{k} L_{D_i}(A(S^i_j)) \ =\ \frac{1}{k}\sum_{j=1}^{k}\frac{1}{T}\sum_{i=1}^{T} L_{D_i}(A(S^i_j)) \ \ge\ \min_{j\in[k]}\ \frac{1}{T}\sum_{i=1}^{T} L_{D_i}(A(S^i_j)). \qquad (5.4)$$
Next, fix some $j \in [k]$. Denote $S_j = (x_1,\ldots,x_m)$ and let $v_1,\ldots,v_p$ be the examples in $C$ that do not appear in $S_j$. Clearly, $p \ge m$. Therefore, for every function $h : C \to \{0,1\}$ and every $i$ we have
$$L_{D_i}(h) = \frac{1}{2m}\sum_{x\in C}\mathbb{1}_{[h(x)\ne f_i(x)]} \ \ge\ \frac{1}{2m}\sum_{r=1}^{p}\mathbb{1}_{[h(v_r)\ne f_i(v_r)]} \ \ge\ \frac{1}{2p}\sum_{r=1}^{p}\mathbb{1}_{[h(v_r)\ne f_i(v_r)]}. \qquad (5.5)$$
Hence,
$$\frac{1}{T}\sum_{i=1}^{T} L_{D_i}(A(S^i_j)) \ \ge\ \frac{1}{T}\sum_{i=1}^{T}\frac{1}{2p}\sum_{r=1}^{p}\mathbb{1}_{[A(S^i_j)(v_r)\ne f_i(v_r)]} \ =\ \frac{1}{2p}\sum_{r=1}^{p}\frac{1}{T}\sum_{i=1}^{T}\mathbb{1}_{[A(S^i_j)(v_r)\ne f_i(v_r)]} \ \ge\ \frac{1}{2}\,\min_{r\in[p]}\ \frac{1}{T}\sum_{i=1}^{T}\mathbb{1}_{[A(S^i_j)(v_r)\ne f_i(v_r)]}. \qquad (5.6)$$
Next, fix some $r \in [p]$. We can partition all the functions in $f_1,\ldots,f_T$ into $T/2$ disjoint pairs, where for a pair $(f_i, f_{i'})$ we have that for every $c \in C$, $f_i(c) \ne f_{i'}(c)$ if and only if $c = v_r$. Since for such a pair we must have $S^i_j = S^{i'}_j$, it follows that
$$\mathbb{1}_{[A(S^i_j)(v_r)\ne f_i(v_r)]} + \mathbb{1}_{[A(S^{i'}_j)(v_r)\ne f_{i'}(v_r)]} = 1,$$
which yields
$$\frac{1}{T}\sum_{i=1}^{T}\mathbb{1}_{[A(S^i_j)(v_r)\ne f_i(v_r)]} = \frac{1}{2}.$$
Combining this with Equation (5.6), Equation (5.4), and Equation (5.3), we obtain that Equation (5.1) holds, which concludes our proof.
The risk of an ERM hypothesis $h_S$ thus decomposes into the approximation error $\epsilon_{\text{app}} = \min_{h\in\mathcal{H}} L_D(h)$ and the estimation error
$$\epsilon_{\text{est}} = L_D(h_S) - \epsilon_{\text{app}}. \qquad (5.7)$$
In fact, it always includes the error of the Bayes optimal predictor (see Chapter 3), the minimal yet inevitable error, because of the possible nondeterminism of the world in this model. Sometimes in the literature the term approximation error refers not to $\min_{h\in\mathcal{H}} L_D(h)$, but rather to the excess error over that of the Bayes optimal predictor, namely, $\min_{h\in\mathcal{H}} L_D(h) - \epsilon_{\text{Bayes}}$.
5.3 SUMMARY
The No-Free-Lunch theorem states that there is no universal learner. Every learner
has to be specified to some task, and use some prior knowledge about that task, in
order to succeed. So far we have modeled our prior knowledge by restricting our
output hypothesis to be a member of a chosen hypothesis class. When choosing
this hypothesis class, we face a tradeoff, between a larger, or more complex, class
that is more likely to have a small approximation error, and a more restricted class
that would guarantee that the estimation error will be small. In the next chapter we
will study in more detail the behavior of the estimation error. In Chapter 7 we will
discuss alternative ways to express prior knowledge.
5.5 EXERCISES
5.1 Prove that Equation (5.2) suffices for showing that $\mathbb{P}[L_D(A(S)) \ge 1/8] \ge 1/7$.
Hint: Let $\theta$ be a random variable that receives values in $[0,1]$ and whose expectation satisfies $\mathbb{E}[\theta] \ge 1/4$. Use Lemma B.1 to show that $\mathbb{P}[\theta \ge 1/8] \ge 1/7$.
5.2 Assume you are asked to design a learning algorithm to predict whether patients
are going to suffer a heart attack. Relevant patient features the algorithm may have
access to include blood pressure (BP), body-mass index (BMI), age (A), level of
physical activity (P), and income (I).
You have to choose between two algorithms; the first picks an axis aligned rectangle in the two dimensional space spanned by the features BP and BMI and the
other picks an axis aligned rectangle in the five dimensional space spanned by all
the preceding features.
1. Explain the pros and cons of each choice.
2. Explain how the number of available labeled training samples will affect your
choice.
5.3 Prove that if $|\mathcal{X}| \ge km$ for a positive integer $k \ge 2$, then we can replace the lower bound of $1/4$ in the No-Free-Lunch theorem with $\frac{k-1}{2k} = \frac{1}{2} - \frac{1}{2k}$. Namely, let $A$ be a learning algorithm for the task of binary classification. Let $m$ be any number smaller than $|\mathcal{X}|/k$, representing a training set size. Then, there exists a distribution $D$ over $\mathcal{X} \times \{0,1\}$ such that:
There exists a function $f : \mathcal{X} \to \{0,1\}$ with $L_D(f) = 0$.
$\mathbb{E}_{S\sim D^m}[L_D(A(S))] \ge \frac{1}{2} - \frac{1}{2k}$.
6
The VC-Dimension
In the previous chapter, we decomposed the error of the $\mathrm{ERM}_{\mathcal{H}}$ rule into approximation error and estimation error. The approximation error depends on the fit of our prior knowledge (as reflected by the choice of the hypothesis class $\mathcal{H}$) to the underlying unknown distribution. In contrast, the definition of PAC learnability requires that the estimation error be bounded uniformly over all
distributions.
Our current goal is to figure out which classes H are PAC learnable, and to
characterize exactly the sample complexity of learning a given hypothesis class. So
far we have seen that finite classes are learnable, but that the class of all functions
(over an infinite size domain) is not. What makes one class learnable and the other
unlearnable? Can infinite-size classes be learnable, and, if so, what determines their
sample complexity?
We begin the chapter by showing that infinite classes can indeed be learnable, and thus, finiteness of the hypothesis class is not a necessary condition for
learnability. We then present a remarkably crisp characterization of the family of
learnable classes in the setup of binary valued classification with the zero-one loss.
This characterization was first discovered by Vladimir Vapnik and Alexey Chervonenkis in 1970 and relies on a combinatorial notion called the Vapnik-Chervonenkis
dimension (VC-dimension). We formally define the VC-dimension, provide several
examples, and then state the fundamental theorem of statistical learning theory,
which integrates the concepts of learnability, VC-dimension, the ERM rule, and
uniform convergence.
Example 6.1. Let $\mathcal{H}$ be the set of threshold functions over the real line, namely, $\mathcal{H} = \{h_a : a \in \mathbb{R}\}$, where $h_a : \mathbb{R} \to \{0,1\}$ is the function $h_a(x) = \mathbb{1}_{[x<a]}$. To remind the reader, $\mathbb{1}_{[x<a]}$ is 1 if $x < a$ and 0 otherwise. Clearly, $\mathcal{H}$ is of infinite size. Nevertheless, the following lemma shows that $\mathcal{H}$ is learnable in the PAC model using the ERM algorithm.
Lemma 6.1. Let $\mathcal{H}$ be the class of thresholds as defined earlier. Then, $\mathcal{H}$ is PAC learnable, using the ERM rule, with sample complexity of $m_{\mathcal{H}}(\epsilon,\delta) \le \lceil \log(2/\delta)/\epsilon \rceil$.
Proof. Let $a^*$ be a threshold such that the hypothesis $h^*(x) = \mathbb{1}_{[x<a^*]}$ achieves $L_D(h^*) = 0$. Let $D_x$ be the marginal distribution over the domain $\mathcal{X}$ and let $a_0 < a^* < a_1$ be such that
$$\mathbb{P}_{x\sim D_x}[x \in (a_0, a^*)] = \mathbb{P}_{x\sim D_x}[x \in (a^*, a_1)] = \epsilon.$$
(A figure in the original illustrates the two intervals of probability mass $\epsilon$ on either side of $a^*$.) Given a training set $S$, let $b_0$ be the largest positive instance in $S$ and let $b_1$ be the smallest negative instance; any ERM threshold lies in $(b_0, b_1)$, so a sufficient condition for $L_D(h_S) \le \epsilon$ is that both $b_0 \ge a_0$ and $b_1 \le a_1$. By the union bound,
$$\mathbb{P}_{S\sim D^m}[L_D(h_S) > \epsilon] \le \mathbb{P}_{S\sim D^m}[b_0 < a_0] + \mathbb{P}_{S\sim D^m}[b_1 > a_1]. \qquad (6.1)$$
The event $b_0 < a_0$ happens if and only if all examples in $S$ are not in the interval $(a_0, a^*)$, whose probability mass is defined to be $\epsilon$, namely,
$$\mathbb{P}_{S\sim D^m}[b_0 < a_0] = \mathbb{P}_{S\sim D^m}[\forall (x,y)\in S,\ x \notin (a_0, a^*)] = (1-\epsilon)^m \le e^{-\epsilon m}.$$
Since we assume $m > \log(2/\delta)/\epsilon$ it follows that this quantity is at most $\delta/2$. In the same way it is easy to see that $\mathbb{P}_{S\sim D^m}[b_1 > a_1] \le \delta/2$. Combining with Equation (6.1) we conclude our proof.
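The ERM rule used in this proof is easy to implement directly. The sketch below (ours, assuming the realizable case) returns a threshold lying between the largest positive instance and the smallest negative instance, which by the argument above has zero empirical risk.

```python
def erm_threshold(samples):
    """ERM for H = {h_a : h_a(x) = 1 if x < a else 0}, realizable case.
    samples: list of (x, y) pairs with y in {0, 1}."""
    positives = [x for x, y in samples if y == 1]          # instances with x < a*
    negatives = [x for x, y in samples if y == 0]          # instances with x >= a*
    b0 = max(positives) if positives else float("-inf")    # largest positive instance
    b1 = min(negatives) if negatives else float("inf")     # smallest negative instance
    a = (b0 + b1) / 2 if positives and negatives else (b1 if negatives else b0 + 1.0)
    return lambda x: 1 if x < a else 0

h = erm_threshold([(0.1, 1), (0.4, 1), (0.9, 0), (1.3, 0)])
print(h(0.2), h(1.0))  # -> 1 0
```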
Recall that in the proof of the No-Free-Lunch theorem (Theorem 5.1), we showed that without restricting the hypothesis class, for any learning algorithm an adversary can construct a distribution on which that algorithm performs poorly, while there is another learning algorithm that will succeed on the same distribution. To do so, the adversary used a finite set $C \subset \mathcal{X}$ and considered a family of distributions that are
concentrated on elements of C. Each distribution was derived from a true target function from C to {0, 1}. To make any algorithm fail, the adversary used the
power of choosing a target function from the set of all possible functions from C to
{0, 1}.
When considering PAC learnability of a hypothesis class H, the adversary is
restricted to constructing distributions for which some hypothesis h H achieves a
zero risk. Since we are considering distributions that are concentrated on elements
of C, we should study how H behaves on C, which leads to the following definition.
Definition 6.2 (Restriction of $\mathcal{H}$ to $C$). Let $\mathcal{H}$ be a class of functions from $\mathcal{X}$ to $\{0,1\}$ and let $C = \{c_1,\ldots,c_m\} \subset \mathcal{X}$. The restriction of $\mathcal{H}$ to $C$ is the set of functions from $C$ to $\{0,1\}$ that can be derived from $\mathcal{H}$. That is,
$$\mathcal{H}_C = \{(h(c_1),\ldots,h(c_m)) : h \in \mathcal{H}\},$$
where we represent each function from $C$ to $\{0,1\}$ as a vector in $\{0,1\}^{|C|}$.
If the restriction of $\mathcal{H}$ to $C$ is the set of all functions from $C$ to $\{0,1\}$, then we say that $\mathcal{H}$ shatters the set $C$. Formally:
Definition 6.3 (Shattering). A hypothesis class $\mathcal{H}$ shatters a finite set $C \subset \mathcal{X}$ if the restriction of $\mathcal{H}$ to $C$ is the set of all functions from $C$ to $\{0,1\}$. That is, $|\mathcal{H}_C| = 2^{|C|}$.
Example 6.2. Let $\mathcal{H}$ be the class of threshold functions over $\mathbb{R}$. Take a set $C = \{c_1\}$. Now, if we take $a = c_1 + 1$, then we have $h_a(c_1) = 1$, and if we take $a = c_1 - 1$, then we have $h_a(c_1) = 0$. Therefore, $\mathcal{H}_C$ is the set of all functions from $C$ to $\{0,1\}$, and $\mathcal{H}$ shatters $C$. Now take a set $C = \{c_1, c_2\}$, where $c_1 \le c_2$. No $h \in \mathcal{H}$ can account for the labeling $(0,1)$, because any threshold that assigns the label 0 to $c_1$ must assign the label 0 to $c_2$ as well. Therefore not all functions from $C$ to $\{0,1\}$ are included in $\mathcal{H}_C$; hence $C$ is not shattered by $\mathcal{H}$.
Getting back to the construction of an adversarial distribution as in the proof
of the No-Free-Lunch theorem (Theorem 5.1), we see that whenever some set C is
shattered by H, the adversary is not restricted by H, as they can construct a distribution over C based on any target function from C to {0, 1}, while still maintaining
the realizability assumption. This immediately yields:
Corollary 6.4. Let $\mathcal{H}$ be a hypothesis class of functions from $\mathcal{X}$ to $\{0,1\}$. Let $m$ be a training set size. Assume that there exists a set $C \subset \mathcal{X}$ of size $2m$ that is shattered by $\mathcal{H}$. Then, for any learning algorithm, $A$, there exist a distribution $D$ over $\mathcal{X} \times \{0,1\}$ and a predictor $h \in \mathcal{H}$ such that $L_D(h) = 0$ but with probability of at least $1/7$ over the choice of $S \sim D^m$ we have that $L_D(A(S)) \ge 1/8$.
Corollary 6.4 tells us that if H shatters some set C of size 2m then we cannot learn
H using m examples. Intuitively, if a set C is shattered by H, and we receive a sample
containing half the instances of C, the labels of these instances give us no information about the labels of the rest of the instances in C: every possible labeling of the
rest of the instances can be explained by some hypothesis in H. Philosophically,
6.3 EXAMPLES
In this section we calculate the VC-dimension of several hypothesis classes. To show
that VCdim(H) = d we need to show that
1. There exists a set C of size d that is shattered by H.
2. Every set C of size d + 1 is not shattered by H.
6.3.2 Intervals
Let $\mathcal{H}$ be the class of intervals over $\mathbb{R}$, namely, $\mathcal{H} = \{h_{a,b} : a, b \in \mathbb{R},\ a < b\}$, where $h_{a,b} : \mathbb{R} \to \{0,1\}$ is the function $h_{a,b}(x) = \mathbb{1}_{[x\in(a,b)]}$. Take the set $C = \{1, 2\}$. Then, $\mathcal{H}$ shatters $C$ (make sure you understand why) and therefore $\mathrm{VCdim}(\mathcal{H}) \ge 2$. Now take an arbitrary set $C = \{c_1, c_2, c_3\}$ and assume without loss of generality that $c_1 \le c_2 \le c_3$. Then, the labeling $(1, 0, 1)$ cannot be obtained by an interval and therefore $\mathcal{H}$ does not shatter $C$. We therefore conclude that $\mathrm{VCdim}(\mathcal{H}) = 2$.
Figure 6.1. Left: 4 points that are shattered by axis aligned rectangles. Right: Any axis aligned rectangle cannot label $c_5$ by 0 and the rest of the points by 1.
6.3.3 Axis Aligned Rectangles
Let $\mathcal{H}$ be the class of axis aligned rectangles in the plane, namely, $\mathcal{H} = \{h_{(a_1,a_2,b_1,b_2)} : a_1 \le a_2 \text{ and } b_1 \le b_2\}$, where
$$h_{(a_1,a_2,b_1,b_2)}(x_1,x_2) = \begin{cases} 1 & \text{if } a_1 \le x_1 \le a_2 \text{ and } b_1 \le x_2 \le b_2\\ 0 & \text{otherwise.}\end{cases} \qquad (6.2)$$
We shall show in the following that VCdim(H) = 4. To prove this we need to find
a set of 4 points that are shattered by H, and show that no set of 5 points can be
shattered by H. Finding a set of 4 points that are shattered is easy (see Figure 6.1).
Now, consider any set $C \subset \mathbb{R}^2$ of 5 points. In C, take a leftmost point (whose first
coordinate is the smallest in C), a rightmost point (first coordinate is the largest), a
lowest point (second coordinate is the smallest), and a highest point (second coordinate is the largest). Without loss of generality, denote C = {c1 , . . . , c5 } and let c5
be the point that was not selected. Now, define the labeling (1, 1, 1, 1, 0). It is impossible to obtain this labeling by an axis aligned rectangle. Indeed, such a rectangle
must contain c1 , . . . , c4 ; but in this case the rectangle contains c5 as well, because
its coordinates are within the intervals defined by the selected points. So, C is not
shattered by $\mathcal{H}$, and therefore $\mathrm{VCdim}(\mathcal{H}) = 4$.
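Shattering can also be checked mechanically on small examples. The following brute-force sketch (ours; it only examines a finite pool of candidate points and a finite pool of hypotheses, so it can certify lower bounds on the VC-dimension rather than prove upper bounds) reproduces $\mathrm{VCdim} = 2$ for intervals.

```python
from itertools import combinations, product

def shatters(hypotheses, points):
    # A set of points is shattered if the hypotheses realize all 2^|points| labelings.
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

def vc_dimension_on_pool(hypotheses, pool):
    # Largest k such that some k-point subset of the pool is shattered.
    d = 0
    for k in range(1, len(pool) + 1):
        if any(shatters(hypotheses, C) for C in combinations(pool, k)):
            d = k
    return d

# Intervals h_{a,b}(x) = 1 iff a < x < b, with endpoints taken from a small grid.
grid = [i / 4 for i in range(-8, 9)]
intervals = [(lambda x, a=a, b=b: int(a < x < b))
             for a, b in product(grid, grid) if a < b]

print(vc_dimension_on_pool(intervals, [-1.0, -0.3, 0.2, 0.8, 1.5]))  # -> 2
```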
6.4 THE FUNDAMENTAL THEOREM OF PAC LEARNING
Theorem 6.7 (The Fundamental Theorem of Statistical Learning). Let $\mathcal{H}$ be a hypothesis class of functions from a domain $\mathcal{X}$ to $\{0,1\}$ and let the loss function be the 0-1 loss. Then, the following are equivalent:
1. $\mathcal{H}$ has the uniform convergence property.
2. Any ERM rule is a successful agnostic PAC learner for $\mathcal{H}$.
3. $\mathcal{H}$ is agnostic PAC learnable.
4. $\mathcal{H}$ is PAC learnable.
5. Any ERM rule is a successful PAC learner for $\mathcal{H}$.
6. $\mathcal{H}$ has a finite VC-dimension.
The quantitative version of the fundamental theorem bounds the sample complexity in terms of $d = \mathrm{VCdim}(\mathcal{H})$: there are absolute constants $C_1, C_2$ such that
$$C_1\,\frac{d + \log(1/\delta)}{\epsilon^2} \ \le\ m^{UC}_{\mathcal{H}}(\epsilon,\delta) \ \le\ C_2\,\frac{d + \log(1/\delta)}{\epsilon^2},$$
$$C_1\,\frac{d + \log(1/\delta)}{\epsilon^2} \ \le\ m_{\mathcal{H}}(\epsilon,\delta) \ \le\ C_2\,\frac{d + \log(1/\delta)}{\epsilon^2} \quad \text{(agnostic PAC learning)},$$
$$C_1\,\frac{d + \log(1/\delta)}{\epsilon} \ \le\ m_{\mathcal{H}}(\epsilon,\delta) \ \le\ C_2\,\frac{d\log(1/\epsilon) + \log(1/\delta)}{\epsilon} \quad \text{(PAC learning in the realizable case)}.$$
A key quantity in the proof of these results is the growth function. The growth function of $\mathcal{H}$, denoted $\tau_{\mathcal{H}} : \mathbb{N} \to \mathbb{N}$, is defined as
$$\tau_{\mathcal{H}}(m) = \max_{C\subset\mathcal{X}:|C|=m} |\mathcal{H}_C|.$$
In words, $\tau_{\mathcal{H}}(m)$ is the number of different functions from a set $C$ of size $m$ to $\{0,1\}$ that can be obtained by restricting $\mathcal{H}$ to $C$.
Obviously, if $\mathrm{VCdim}(\mathcal{H}) = d$ then for any $m \le d$ we have $\tau_{\mathcal{H}}(m) = 2^m$. In such cases, $\mathcal{H}$ induces all possible functions from $C$ to $\{0,1\}$. The following beautiful lemma, proposed independently by Sauer, Shelah, and Perles, shows that when $m$ becomes larger than the VC-dimension, the growth function increases polynomially rather than exponentially with $m$.
Lemma 6.10 (Sauer-Shelah-Perles). Let $\mathcal{H}$ be a hypothesis class with $\mathrm{VCdim}(\mathcal{H}) \le d < \infty$. Then, for all $m$, $\tau_{\mathcal{H}}(m) \le \sum_{i=0}^{d}\binom{m}{i}$. In particular, if $m > d + 1$ then $\tau_{\mathcal{H}}(m) \le (em/d)^d$.
To prove the lemma it suffices to prove the following stronger claim: for any $C = \{c_1,\ldots,c_m\}$ we have
$$\forall \mathcal{H},\quad |\mathcal{H}_C| \le |\{B \subseteq C : \mathcal{H} \text{ shatters } B\}|. \qquad (6.3)$$
The reason why Equation (6.3) is sufficient to prove the lemma is that if $\mathrm{VCdim}(\mathcal{H}) \le d$ then no set whose size is larger than $d$ is shattered by $\mathcal{H}$ and therefore
$$|\{B \subseteq C : \mathcal{H} \text{ shatters } B\}| \le \sum_{i=0}^{d}\binom{m}{i}.$$
When $m > d + 1$ the right-hand side of the preceding is at most $(em/d)^d$ (see Lemma A.5 in Appendix A).
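A quick numerical check (ours, not from the text) of the two quantities in Sauer's lemma, $\sum_{i=0}^{d}\binom{m}{i}$ and the looser bound $(em/d)^d$ for $m > d+1$:

```python
from math import comb, e

def sauer_sum(m, d):
    # Exact bound from Sauer's lemma: sum_{i=0}^{d} C(m, i).
    return sum(comb(m, i) for i in range(d + 1))

for d in (3, 10):
    for m in (2 * d, 10 * d):
        print(f"d={d:2d} m={m:3d}  sum={sauer_sum(m, d):>12d}  (em/d)^d={(e * m / d) ** d:.3e}")
```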
We are left with proving Equation (6.3) and we do it using an inductive argument. For $m = 1$, no matter what $\mathcal{H}$ is, either both sides of Equation (6.3) equal 1 or both sides equal 2 (the empty set is always considered to be shattered by $\mathcal{H}$). Assume Equation (6.3) holds for sets of size $k < m$ and let us prove it for sets of size $m$. Fix $\mathcal{H}$ and $C = \{c_1,\ldots,c_m\}$. Denote $C' = \{c_2,\ldots,c_m\}$ and in addition, define the following two sets:
$$Y_0 = \{(y_2,\ldots,y_m) : (0,y_2,\ldots,y_m) \in \mathcal{H}_C \ \vee\ (1,y_2,\ldots,y_m) \in \mathcal{H}_C\},$$
and
$$Y_1 = \{(y_2,\ldots,y_m) : (0,y_2,\ldots,y_m) \in \mathcal{H}_C \ \wedge\ (1,y_2,\ldots,y_m) \in \mathcal{H}_C\}.$$
It is easy to verify that $|\mathcal{H}_C| = |Y_0| + |Y_1|$. Additionally, since $Y_0 = \mathcal{H}_{C'}$, using the induction assumption (applied on $\mathcal{H}$ and $C'$) we have that
$$|Y_0| = |\mathcal{H}_{C'}| \le |\{B \subseteq C' : \mathcal{H} \text{ shatters } B\}| = |\{B \subseteq C : c_1 \notin B \ \wedge\ \mathcal{H} \text{ shatters } B\}|.$$
Next, define $\mathcal{H}' \subseteq \mathcal{H}$ to be
$$\mathcal{H}' = \{h \in \mathcal{H} : \exists h' \in \mathcal{H} \text{ s.t. } (1-h'(c_1), h'(c_2),\ldots,h'(c_m)) = (h(c_1), h(c_2),\ldots,h(c_m))\},$$
namely, $\mathcal{H}'$ contains pairs of hypotheses that agree on $C'$ and differ on $c_1$. Using this definition, it is clear that if $\mathcal{H}'$ shatters a set $B \subseteq C'$ then it also shatters the set $B \cup \{c_1\}$ and vice versa. Combining this with the fact that $Y_1 = \mathcal{H}'_{C'}$ and using the inductive assumption (now applied on $\mathcal{H}'$ and $C'$) we obtain that
$$|Y_1| = |\mathcal{H}'_{C'}| \le |\{B \subseteq C' : \mathcal{H}' \text{ shatters } B\}| = |\{B \subseteq C' : \mathcal{H}' \text{ shatters } B \cup \{c_1\}\}|$$
$$= |\{B \subseteq C : c_1 \in B \ \wedge\ \mathcal{H}' \text{ shatters } B\}| \le |\{B \subseteq C : c_1 \in B \ \wedge\ \mathcal{H} \text{ shatters } B\}|.$$
Overall, we have shown that
$$|\mathcal{H}_C| = |Y_0| + |Y_1| \le |\{B \subseteq C : c_1 \notin B \ \wedge\ \mathcal{H} \text{ shatters } B\}| + |\{B \subseteq C : c_1 \in B \ \wedge\ \mathcal{H} \text{ shatters } B\}| = |\{B \subseteq C : \mathcal{H} \text{ shatters } B\}|,$$
which concludes our proof.
Theorem 6.11. Let $\mathcal{H}$ be a class and let $\tau_{\mathcal{H}}$ be its growth function. Then, for every $D$ and every $\delta \in (0,1)$, with probability of at least $1-\delta$ over the choice of $S \sim D^m$ we have, for every $h \in \mathcal{H}$,
$$|L_D(h) - L_S(h)| \le \frac{4 + \sqrt{\log(\tau_{\mathcal{H}}(2m))}}{\delta\sqrt{2m}}.$$
Before proving the theorem, let us first conclude the proof of Theorem 6.7.
Proof of Theorem 6.7. It suffices to prove that if the VC-dimension is finite then the uniform convergence property holds. We will prove that
$$m^{UC}_{\mathcal{H}}(\epsilon,\delta) \ \le\ 4\,\frac{16d}{(\delta\epsilon)^2}\log\!\left(\frac{16d}{(\delta\epsilon)^2}\right) + \frac{16d\log(2e/d)}{(\delta\epsilon)^2}.$$
From Sauer's lemma we have that for $m > d$, $\tau_{\mathcal{H}}(2m) \le (2em/d)^d$. Combining this with Theorem 6.11 we obtain that with probability of at least $1-\delta$,
$$|L_S(h) - L_D(h)| \ \le\ \frac{4 + \sqrt{d\log(2em/d)}}{\delta\sqrt{2m}}.$$
For simplicity assume that $\sqrt{d\log(2em/d)} \ge 4$; hence,
$$|L_S(h) - L_D(h)| \ \le\ \frac{1}{\delta}\sqrt{\frac{2d\log(2em/d)}{m}}.$$
To ensure that the preceding is at most $\epsilon$ it suffices to take a sample size $m$ satisfying the bound on $m^{UC}_{\mathcal{H}}(\epsilon,\delta)$ stated above, which completes the proof of Theorem 6.7.
Proof of Theorem 6.11. We will start by showing that
$$\mathbb{E}_{S\sim D^m}\left[\sup_{h\in\mathcal{H}} |L_D(h) - L_S(h)|\right] \ \le\ \frac{4 + \sqrt{\log(\tau_{\mathcal{H}}(2m))}}{\sqrt{2m}}. \qquad (6.4)$$
Since the random variable $\sup_{h\in\mathcal{H}} |L_D(h) - L_S(h)|$ is nonnegative, the proof of the theorem follows directly from the preceding using Markov's inequality (see Section B.1).
To bound the left-hand side of Equation (6.4) we first note that for every $h \in \mathcal{H}$, we can rewrite $L_D(h) = \mathbb{E}_{S'\sim D^m}[L_{S'}(h)]$, where $S' = z'_1,\ldots,z'_m$ is an additional i.i.d. sample. Therefore,
$$\mathbb{E}_{S\sim D^m}\Big[\sup_{h\in\mathcal{H}} |L_D(h) - L_S(h)|\Big] = \mathbb{E}_{S\sim D^m}\Big[\sup_{h\in\mathcal{H}} \big|\mathbb{E}_{S'\sim D^m}[L_{S'}(h) - L_S(h)]\big|\Big].$$
The absolute value of an expectation is at most the expectation of the absolute value, and the supremum of an expectation is at most the expectation of the supremum:
$$\sup_{h\in\mathcal{H}}\ \mathbb{E}_{S'\sim D^m}\big|L_{S'}(h) - L_S(h)\big| \ \le\ \mathbb{E}_{S'\sim D^m}\ \sup_{h\in\mathcal{H}}\big|L_{S'}(h) - L_S(h)\big|.$$
Formally, the previous two inequalities follow from Jensen's inequality. Combining all we obtain
$$\mathbb{E}_{S\sim D^m}\Big[\sup_{h\in\mathcal{H}} |L_D(h) - L_S(h)|\Big] \ \le\ \mathbb{E}_{S,S'\sim D^m}\Big[\sup_{h\in\mathcal{H}} |L_{S'}(h) - L_S(h)|\Big] = \mathbb{E}_{S,S'\sim D^m}\left[\sup_{h\in\mathcal{H}} \frac{1}{m}\left|\sum_{i=1}^{m}\big(\ell(h,z'_i) - \ell(h,z_i)\big)\right|\right]. \qquad (6.5)$$
The expectation on the right-hand side is over a choice of two i.i.d. samples $S = z_1,\ldots,z_m$ and $S' = z'_1,\ldots,z'_m$. Since all of these $2m$ vectors are chosen i.i.d., nothing will change if we replace the name of the random vector $z_i$ with the name of the random vector $z'_i$. If we do it, instead of the term $(\ell(h,z'_i) - \ell(h,z_i))$ in Equation (6.5) we will have the term $-(\ell(h,z'_i) - \ell(h,z_i))$. It follows that for every $\sigma \in \{\pm 1\}^m$ we have that Equation (6.5) equals
$$\mathbb{E}_{S,S'\sim D^m}\left[\sup_{h\in\mathcal{H}} \frac{1}{m}\left|\sum_{i=1}^{m}\sigma_i\big(\ell(h,z'_i) - \ell(h,z_i)\big)\right|\right].$$
Since this holds for every $\sigma \in \{\pm 1\}^m$, it also holds if we sample each component of $\sigma$ uniformly at random from the uniform distribution over $\{\pm 1\}$, denoted $U_\pm$. Hence, Equation (6.5) also equals
$$\mathbb{E}_{\sigma\sim U_\pm^m}\ \mathbb{E}_{S,S'\sim D^m}\left[\sup_{h\in\mathcal{H}} \frac{1}{m}\left|\sum_{i=1}^{m}\sigma_i\big(\ell(h,z'_i) - \ell(h,z_i)\big)\right|\right],$$
and by the linearity of expectation it also equals
$$\mathbb{E}_{S,S'\sim D^m}\ \mathbb{E}_{\sigma\sim U_\pm^m}\left[\sup_{h\in\mathcal{H}} \frac{1}{m}\left|\sum_{i=1}^{m}\sigma_i\big(\ell(h,z'_i) - \ell(h,z_i)\big)\right|\right].$$
Next, fix $S$ and $S'$, and let $C$ be the instances appearing in $S$ and $S'$. Then, we can take the supremum only over $h \in \mathcal{H}_C$. Therefore,
$$\mathbb{E}_{\sigma\sim U_\pm^m}\left[\sup_{h\in\mathcal{H}} \frac{1}{m}\left|\sum_{i=1}^{m}\sigma_i\big(\ell(h,z'_i) - \ell(h,z_i)\big)\right|\right] = \mathbb{E}_{\sigma\sim U_\pm^m}\left[\max_{h\in\mathcal{H}_C} \frac{1}{m}\left|\sum_{i=1}^{m}\sigma_i\big(\ell(h,z'_i) - \ell(h,z_i)\big)\right|\right].$$
Fix some $h \in \mathcal{H}_C$ and denote $\theta_h = \frac{1}{m}\sum_{i=1}^{m}\sigma_i(\ell(h,z'_i) - \ell(h,z_i))$. Since $\mathbb{E}[\theta_h] = 0$ and $\theta_h$ is an average of independent variables, each of which takes values in $[-1,1]$, we have by Hoeffding's inequality that for every $\rho > 0$,
$$\mathbb{P}[|\theta_h| > \rho] \le 2\exp(-2m\rho^2).$$
Applying the union bound over $h \in \mathcal{H}_C$, we obtain that for any $\rho > 0$,
$$\mathbb{P}\Big[\max_{h\in\mathcal{H}_C}|\theta_h| > \rho\Big] \le 2|\mathcal{H}_C|\exp(-2m\rho^2).$$
This implies (see Appendix A) that
$$\mathbb{E}\Big[\max_{h\in\mathcal{H}_C}|\theta_h|\Big] \le \frac{4 + \sqrt{\log(|\mathcal{H}_C|)}}{\sqrt{2m}}.$$
Combining all with the definition of $\tau_{\mathcal{H}}$, we have shown that
$$\mathbb{E}_{S\sim D^m}\Big[\sup_{h\in\mathcal{H}} |L_D(h) - L_S(h)|\Big] \le \frac{4 + \sqrt{\log(\tau_{\mathcal{H}}(2m))}}{\sqrt{2m}}.$$
6.6 SUMMARY
The fundamental theorem of learning theory characterizes PAC learnability of
classes of binary classifiers using VC-dimension. The VC-dimension of a class is a
combinatorial property that denotes the maximal sample size that can be shattered
by the class. The fundamental theorem states that a class is PAC learnable if and
only if its VC-dimension is finite and specifies the sample complexity required for
PAC learning. The theorem also shows that if a problem is at all learnable, then
uniform convergence holds and therefore the problem is learnable using the ERM
rule.
6.7 BIBLIOGRAPHIC REMARKS
Related combinatorial dimensions characterize learnability in other settings (Bartlett, Long & Williamson 1994; Anthony & Bartlett 1999), and the Natarajan dimension characterizes learnability of some multiclass learning problems (Natarajan 1989). However, in general, there is no equivalence between
learnability and uniform convergence. See (Shalev-Shwartz, Shamir, Srebro &
Sridharan 2010; Daniely, Sabato, Ben-David & Shalev-Shwartz 2011).
Sauer's lemma has been proved by Sauer in response to a problem of Erdős (Sauer 1972). Shelah (with Perles) proved it as a useful lemma for Shelah's theory of stable models (Shelah 1972). Gil Kalai tells us¹ that at some later time, Benjy
Weiss asked Perles about such a result in the context of ergodic theory, and Perles,
who forgot that he had proved it once, proved it again. Vapnik and Chervonenkis
proved the lemma in the context of statistical learning theory.
6.8 EXERCISES
6.1 Show the following monotonicity property of VC-dimension: For every two hypothesis classes, if $\mathcal{H}' \subseteq \mathcal{H}$ then $\mathrm{VCdim}(\mathcal{H}') \le \mathrm{VCdim}(\mathcal{H})$.
6.2 Given some finite domain set, $\mathcal{X}$, and a number $k \le |\mathcal{X}|$, figure out the VC-dimension of each of the following classes (and prove your claims):
1. $\mathcal{H}^{\mathcal{X}}_{=k} = \{h \in \{0,1\}^{\mathcal{X}} : |\{x : h(x) = 1\}| = k\}$: that is, the set of all functions that assign the value 1 to exactly $k$ elements of $\mathcal{X}$.
2. $\mathcal{H}_{at\text{-}most\text{-}k} = \{h \in \{0,1\}^{\mathcal{X}} : |\{x : h(x) = 1\}| \le k \ \text{or}\ |\{x : h(x) = 0\}| \le k\}$.
6.3 Let $\mathcal{X}$ be the Boolean hypercube $\{0,1\}^n$. For a set $I \subseteq \{1,2,\ldots,n\}$ we define a parity function $h_I$ as follows. On a binary vector $x = (x_1, x_2, \ldots, x_n) \in \{0,1\}^n$,
$$h_I(x) = \Big(\sum_{i\in I} x_i\Big) \bmod 2.$$
(That is, $h_I$ computes parity of bits in $I$.) What is the VC-dimension of the class of all such parity functions, $\mathcal{H}_{n\text{-parity}} = \{h_I : I \subseteq \{1,2,\ldots,n\}\}$?
6.4 We proved Sauer's lemma by proving that for every class $\mathcal{H}$ of finite VC-dimension $d$, and every subset $A$ of the domain,
$$|\mathcal{H}_A| \le |\{B \subseteq A : \mathcal{H} \text{ shatters } B\}| \le \sum_{i=0}^{d}\binom{|A|}{i}.$$
Show that there are cases in which the previous two inequalities are strict (namely, the $\le$ can be replaced by $<$) and cases in which they can be replaced by equalities. Demonstrate all four combinations of $=$ and $<$.
6.5 VC-dimension of axis aligned rectangles in $\mathbb{R}^d$: Let $\mathcal{H}^d_{rec}$ be the class of axis aligned rectangles in $\mathbb{R}^d$. We have already seen that $\mathrm{VCdim}(\mathcal{H}^2_{rec}) = 4$. Prove that in general, $\mathrm{VCdim}(\mathcal{H}^d_{rec}) = 2d$.
6.6 VC-dimension of Boolean conjunctions: Let $\mathcal{H}^d_{con}$ be the class of Boolean conjunctions over the variables $x_1,\ldots,x_d$ ($d \ge 2$). We already know that this class is finite and thus (agnostic) PAC learnable. In this question we calculate $\mathrm{VCdim}(\mathcal{H}^d_{con})$.
1. Show that $|\mathcal{H}^d_{con}| \le 3^d + 1$.
2. Conclude that $\mathrm{VCdim}(\mathcal{H}^d_{con}) \le d\log_2 3$.
3. Show that $\mathcal{H}^d_{con}$ shatters the set of unit vectors $\{e_i : i \le d\}$.
¹ http://gilkalai.wordpress.com/2008/09/28/extremal-combinatorics-iii-some-basic-theorems
1. Use the Dudley representation to figure out the VC-dimension of the class $P^d_1$, the class of all degree-$d$ polynomial classifiers over $\mathbb{R}$.
2. Prove that the class of all polynomial classifiers over $\mathbb{R}$ has infinite VC-dimension.
3. Use the Dudley representation to figure out the VC-dimension of the class $P^d_n$ (as a function of $d$ and $n$).
7
Nonuniform Learnability
The notions of PAC learnability discussed so far in the book allow the sample
sizes to depend on the accuracy and confidence parameters, but they are uniform
with respect to the labeling rule and the underlying data distribution. Consequently, classes that are learnable in that respect are limited (they must have a
finite VC-dimension, as stated by Theorem 6.7). In this chapter we consider more
relaxed, weaker notions of learnability. We discuss the usefulness of such notions
and provide characterization of the concept classes that are learnable using these
definitions.
We begin this discussion by defining a notion of nonuniform learnability that
allows the sample size to depend on the hypothesis to which the learner is compared. We then provide a characterization of nonuniform learnability and show that
nonuniform learnability is a strict relaxation of agnostic PAC learnability. We also
show that a sufficient condition for nonuniform learnability is that H is a countable union of hypothesis classes, each of which enjoys the uniform convergence
property. These results will be proved in Section 7.2 by introducing a new learning
paradigm, which is called Structural Risk Minimization (SRM). In Section 7.3 we
specify the SRM paradigm for countable hypothesis classes, which yields the Minimum Description Length (MDL) paradigm. The MDL paradigm gives a formal
justification to a philosophical principle of induction called Occam's razor. Next,
in Section 7.4 we introduce consistency as an even weaker notion of learnability. Finally, we discuss the significance and usefulness of the different notions of
learnability.
Recall that in Chapter 4 we have shown that uniform convergence is sufficient for
agnostic PAC learnability. Theorem 7.3 generalizes this result to nonuniform learnability. The proof of this theorem will be given in the next section by introducing a
new learning paradigm. We now turn to proving Theorem 7.2.
Proof of Theorem 7.2. First assume that $\mathcal{H} = \bigcup_{n\in\mathbb{N}} \mathcal{H}_n$ where each $\mathcal{H}_n$ is agnostic PAC learnable. Using the fundamental theorem of statistical learning, it follows that each $\mathcal{H}_n$ has the uniform convergence property. Therefore, using Theorem 7.3 we obtain that $\mathcal{H}$ is nonuniformly learnable.
For the other direction, assume that $\mathcal{H}$ is nonuniformly learnable using some algorithm $A$. For every $n \in \mathbb{N}$, let $\mathcal{H}_n = \{h \in \mathcal{H} : m^{NUL}_{\mathcal{H}}(1/8, 1/7, h) \le n\}$. Clearly, $\mathcal{H} = \bigcup_{n\in\mathbb{N}} \mathcal{H}_n$. In addition, using the definition of $m^{NUL}_{\mathcal{H}}$ we know that for any distribution $D$ that satisfies the realizability assumption with respect to $\mathcal{H}_n$, with probability of at least $6/7$ over $S \sim D^n$ we have that $L_D(A(S)) \le 1/8$. Using the fundamental theorem of statistical learning, this implies that the VC-dimension of $\mathcal{H}_n$ must be finite, and therefore $\mathcal{H}_n$ is agnostic PAC learnable.
The following example shows that nonuniform learnability is a strict relaxation of agnostic PAC learnability; namely, there are hypothesis classes that are
nonuniformly learnable but are not agnostic PAC learnable.
Example 7.1. Consider a binary classification problem with the instance domain being $\mathcal{X} = \mathbb{R}$. For every $n \in \mathbb{N}$ let $\mathcal{H}_n$ be the class of polynomial classifiers of degree $n$; namely, $\mathcal{H}_n$ is the set of all classifiers of the form $h(x) = \mathrm{sign}(p(x))$ where $p : \mathbb{R} \to \mathbb{R}$ is a polynomial of degree $n$. Let $\mathcal{H} = \bigcup_{n\in\mathbb{N}} \mathcal{H}_n$. Therefore, $\mathcal{H}$ is the class of all polynomial classifiers over $\mathbb{R}$. It is easy to verify that $\mathrm{VCdim}(\mathcal{H}) = \infty$ while $\mathrm{VCdim}(\mathcal{H}_n) = n + 1$ (see Exercise 7.12). Hence, $\mathcal{H}$ is not PAC learnable, while on the basis of Theorem 7.3, $\mathcal{H}$ is nonuniformly learnable.
For every $n$, define
$$\epsilon_n(m,\delta) = \min\{\epsilon \in (0,1) : m^{UC}_{\mathcal{H}_n}(\epsilon,\delta) \le m\}. \qquad (7.1)$$
In words, we have a fixed sample size m, and we are interested in the lowest possible
upper bound on the gap between empirical and true risks achievable by using a
sample of m examples.
From the definitions of uniform convergence and $\epsilon_n$, it follows that for every $m$ and $\delta$, with probability of at least $1-\delta$ over the choice of $S \sim D^m$ we have that
$$\forall h \in \mathcal{H}_n,\quad |L_D(h) - L_S(h)| \le \epsilon_n(m,\delta). \qquad (7.2)$$
Let $w : \mathbb{N} \to [0,1]$ be a function such that $\sum_{n=1}^{\infty} w(n) \le 1$. We refer to $w$ as a
weight function over the hypothesis classes H1 , H2 , . . .. Such a weight function can
reflect the importance that the learner attributes to each hypothesis class, or some
measure of the complexity of different hypothesis classes. If H is a finite union of N
hypothesis classes, one can simply assign the same weight of 1/N to all hypothesis
classes. This equal weighting corresponds to no a priori preference to any hypothesis
class. Of course, if one believes (as prior knowledge) that a certain hypothesis class is
more likely to contain the correct target function, then it should be assigned a larger
weight, reflecting this prior knowledge. When H is a (countable) infinite union of
hypothesis classes, a uniform weighting is not possible but many other weighting
schemes may work. For example, one can choose $w(n) = \frac{6}{\pi^2 n^2}$ or $w(n) = 2^{-n}$. Later
in this chapter we will provide another convenient way to define weighting functions
using description languages.
The SRM rule follows a bound minimization approach. This means that the
goal of the paradigm is to find a hypothesis that minimizes a certain upper bound
on the true risk. The bound that the SRM rule wishes to minimize is given in the
following theorem.
Theorem 7.4. Let $w : \mathbb{N} \to [0,1]$ be a function such that $\sum_{n=1}^{\infty} w(n) \le 1$. Let $\mathcal{H}$ be a hypothesis class that can be written as $\mathcal{H} = \bigcup_{n\in\mathbb{N}} \mathcal{H}_n$, where for each $n$, $\mathcal{H}_n$ satisfies the uniform convergence property with a sample complexity function $m^{UC}_{\mathcal{H}_n}$. Let $\epsilon_n$ be as defined in Equation (7.1). Then, for every $\delta \in (0,1)$ and distribution $D$, with probability of at least $1-\delta$ over the choice of $S \sim D^m$, the following bound holds (simultaneously) for every $n \in \mathbb{N}$ and $h \in \mathcal{H}_n$:
$$|L_D(h) - L_S(h)| \le \epsilon_n(m, w(n)\cdot\delta).$$
Therefore, for every $\delta \in (0,1)$ and distribution $D$, with probability of at least $1-\delta$ it holds that
$$\forall h \in \mathcal{H},\quad L_D(h) \le L_S(h) + \min_{n : h\in\mathcal{H}_n} \epsilon_n(m, w(n)\cdot\delta). \qquad (7.3)$$
Proof. For each $n$ define $\delta_n = w(n)\delta$. Applying the assumption that uniform convergence holds for all $n$ with the rate given in Equation (7.2), we obtain that if we fix $n$ in advance, then with probability of at least $1-\delta_n$ over the choice of $S \sim D^m$,
$$\forall h \in \mathcal{H}_n,\quad |L_D(h) - L_S(h)| \le \epsilon_n(m, \delta_n).$$
Applying the union bound over $n = 1, 2, \ldots$, we obtain that with probability of at least $1 - \sum_n \delta_n = 1 - \delta\sum_n w(n) \ge 1 - \delta$, the preceding holds for all $n$, which concludes our proof.
Denote
$$n(h) = \min\{n : h \in \mathcal{H}_n\}, \qquad (7.4)$$
and consider the Structural Risk Minimization (SRM) paradigm, which returns a hypothesis minimizing the bound of Equation (7.3):
Structural Risk Minimization (SRM)
prior knowledge:
$\mathcal{H} = \bigcup_n \mathcal{H}_n$, where each $\mathcal{H}_n$ has the uniform convergence property with sample complexity $m^{UC}_{\mathcal{H}_n}$
$w : \mathbb{N} \to [0,1]$, where $\sum_n w(n) \le 1$
input: A training set $S \sim D^m$, confidence $\delta$
output: $h \in \operatorname{argmin}_{h\in\mathcal{H}} \big[ L_S(h) + \epsilon_{n(h)}(m, w(n(h))\cdot\delta) \big]$
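The following Python sketch (ours) instantiates the SRM rule for a countable union of finite classes, using the finite-class uniform convergence rate $\epsilon_n(m,\delta) = \sqrt{\log(2|\mathcal{H}_n|/\delta)/(2m)}$ from Corollary 4.6 and the weights $w(n) = 6/(\pi^2 n^2)$; the function names and the choice of rate are ours.

```python
import math

def weight(n):
    return 6.0 / (math.pi ** 2 * n ** 2)          # sums to 1 over n = 1, 2, ...

def eps_n(class_size, m, delta):
    # Finite-class uniform convergence rate (Corollary 4.6).
    return math.sqrt(math.log(2 * class_size / delta) / (2 * m))

def srm(classes, empirical_risk, m, delta):
    """classes: list of finite hypothesis classes H_1, H_2, ... (lists of hypotheses).
    empirical_risk: callable mapping a hypothesis to its empirical risk on S."""
    best, best_bound = None, float("inf")
    for n, H_n in enumerate(classes, start=1):
        conf = weight(n) * delta                   # confidence budget for H_n
        for h in H_n:
            bound = empirical_risk(h) + eps_n(len(H_n), m, conf)
            if bound < best_bound:
                best, best_bound = h, bound
    return best, best_bound
```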
Theorem 7.5. Let $\mathcal{H}$ be a hypothesis class that can be written as $\mathcal{H} = \bigcup_{n\in\mathbb{N}} \mathcal{H}_n$, where each $\mathcal{H}_n$ has the uniform convergence property with sample complexity $m^{UC}_{\mathcal{H}_n}$, and let $w(n) = \frac{6}{(\pi n)^2}$. Then $\mathcal{H}$ is nonuniformly learnable using the SRM rule with rate
$$m^{NUL}_{\mathcal{H}}(\epsilon,\delta,h) \le m^{UC}_{\mathcal{H}_{n(h)}}\!\Big(\epsilon/2,\ \tfrac{6\delta}{(\pi\, n(h))^2}\Big).$$
Proof. Let $A$ be the SRM algorithm with respect to the weighting function $w$. For every $h \in \mathcal{H}$, $\epsilon$, and $\delta$, let $m \ge m^{UC}_{\mathcal{H}_{n(h)}}(\epsilon, w(n(h))\cdot\delta)$. Using the fact that $\sum_n w(n) = 1$, we can apply Theorem 7.4 to get that, with probability of at least $1-\delta$ over the choice of $S \sim D^m$, we have that for every $h' \in \mathcal{H}$,
$$L_D(h') \le L_S(h') + \epsilon_{n(h')}(m, w(n(h'))\cdot\delta).$$
The preceding holds in particular for the hypothesis $A(S)$ returned by the SRM rule. By the definition of SRM we obtain that
$$L_D(A(S)) \le \min_{h'}\Big[L_S(h') + \epsilon_{n(h')}(m, w(n(h'))\cdot\delta)\Big] \le L_S(h) + \epsilon_{n(h)}(m, w(n(h))\cdot\delta).$$
Finally, if $m \ge m^{UC}_{\mathcal{H}_{n(h)}}(\epsilon/2, w(n(h))\cdot\delta)$ then clearly $\epsilon_{n(h)}(m, w(n(h))\cdot\delta) \le \epsilon/2$. In addition, from the uniform convergence property of each $\mathcal{H}_n$ we have that with probability of more than $1-\delta$, $L_S(h) \le L_D(h) + \epsilon/2$. Combining the two bounds we obtain that $L_D(A(S)) \le L_D(h) + \epsilon$, which concludes the proof. In particular, when each $\mathcal{H}_n$ is finite, this relaxation costs an additive term on the order of $\frac{2\log(2n(h))}{\epsilon^2}$ relative to learning $\mathcal{H}_{n(h)}$ alone.
That is, the cost of relaxing the learner's prior knowledge from a specific Hn that
contains the target h to a countable union of classes depends on the log of the index
of the first class in which h resides. That cost increases with the index of the class,
which can be interpreted as reflecting the value of knowing a good priority order on
the hypotheses in H.
7.3 MINIMUM DESCRIPTION LENGTH AND OCCAM'S RAZOR
When $\mathcal{H}$ is countable we can view it as a union of singleton classes and assign a weight directly to each hypothesis: larger weights are given to hypotheses that we believe are more likely to be the correct one, and in the learning algorithm we prefer hypotheses that have higher weights.
In this section we discuss a particular convenient way to define a weight function
over H, which is derived from the length of descriptions given to hypotheses. Having a hypothesis class, one can wonder about how we describe, or represent, each
hypothesis in the class. We naturally fix some description language. This can be
English, or a programming language, or some set of mathematical formulas. In any
of these languages, a description consists of finite strings of symbols (or characters)
drawn from some fixed alphabet. We shall now formalize these notions.
Let $\mathcal{H}$ be the hypothesis class we wish to describe. Fix some finite set $\Sigma$ of symbols (or characters), which we call the alphabet. For concreteness, we let $\Sigma = \{0,1\}$. A string is a finite sequence of symbols from $\Sigma$; for example, $\sigma = (0,1,1,1,0)$ is a string of length 5. We denote by $|\sigma|$ the length of a string. The set of all finite length strings is denoted $\Sigma^*$. A description language for $\mathcal{H}$ is a function $d : \mathcal{H} \to \Sigma^*$, mapping each member $h$ of $\mathcal{H}$ to a string $d(h)$. $d(h)$ is called the description of $h$, and its length is denoted by $|h|$.
We shall require that description languages be prefix-free; namely, for every distinct $h, h'$, $d(h)$ is not a prefix of $d(h')$. That is, we do not allow that any string $d(h)$ is exactly the first $|h|$ symbols of any longer string $d(h')$. Prefix-free collections of strings enjoy the following combinatorial property:
Lemma 7.6 (Kraft Inequality). If $S \subseteq \{0,1\}^*$ is a prefix-free set of strings, then
$$\sum_{\sigma\in S} \frac{1}{2^{|\sigma|}} \le 1.$$
Proof. Define a probability distribution over the members of $S$ as follows: Repeatedly toss an unbiased coin, with faces labeled 0 and 1, until the sequence of outcomes is a member of $S$; at that point, stop. For each $\sigma \in S$, let $P(\sigma)$ be the probability that this process generates the string $\sigma$. Note that since $S$ is prefix-free, for every $\sigma \in S$, if the coin toss outcomes follow the bits of $\sigma$ then we will stop only once the sequence of outcomes equals $\sigma$. We therefore get that, for every $\sigma \in S$, $P(\sigma) = \frac{1}{2^{|\sigma|}}$. Since probabilities add up to at most 1, our proof is concluded.
In light of Kraft's inequality, any prefix-free description language of a hypothesis class, $\mathcal{H}$, gives rise to a weighting function $w$ over that hypothesis class: we will simply set $w(h) = \frac{1}{2^{|h|}}$. This observation immediately yields the following:
Theorem 7.7. Let $\mathcal{H}$ be a hypothesis class and let $d : \mathcal{H} \to \{0,1\}^*$ be a prefix-free description language for $\mathcal{H}$. Then, for every sample size, $m$, every confidence parameter, $\delta > 0$, and every probability distribution, $D$, with probability greater than $1-\delta$ over the choice of $S \sim D^m$ we have that,
$$\forall h \in \mathcal{H},\quad L_D(h) \le L_S(h) + \sqrt{\frac{|h| + \ln(2/\delta)}{2m}},$$
where $|h|$ is the length of $d(h)$.
Proof. Choose $w(h) = 1/2^{|h|}$, apply Theorem 7.4 with $\epsilon_n(m,\delta) = \sqrt{\frac{\ln(2/\delta)}{2m}}$, and note that $\ln(2^{|h|}) = |h|\ln(2) < |h|$.
As was the case with Theorem 7.4, this result suggests a learning paradigm for $\mathcal{H}$: given a training set, $S$, search for a hypothesis $h \in \mathcal{H}$ that minimizes the bound $L_S(h) + \sqrt{\frac{|h|+\ln(2/\delta)}{2m}}$. In particular, it suggests trading off empirical risk for saving description length. This yields the Minimum Description Length learning paradigm.
Minimum Description Length (MDL)
prior knowledge:
$\mathcal{H}$ is a countable hypothesis class
$\mathcal{H}$ is described by a prefix-free language over $\{0,1\}$
For every $h \in \mathcal{H}$, $|h|$ is the length of the representation of $h$
input: A training set $S \sim D^m$, confidence $\delta$
output: $h \in \operatorname{argmin}_{h\in\mathcal{H}} \Big[ L_S(h) + \sqrt{\frac{|h|+\ln(2/\delta)}{2m}} \Big]$
Example 7.3. Let H be the class of all predictors that can be implemented using
some programming language, say, C++. Let us represent each program using the
binary string obtained by running the gzip command on the program (this yields
a prefix-free description language over the alphabet {0, 1}). Then, |h| is simply
the length (in bits) of the output of gzip when running on the C++ program
corresponding to h.
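A sketch (ours, in Python rather than C++) of the MDL selection rule from the box above; compressing a hypothesis's source text with zlib is only a stand-in for the prefix-free description language discussed in the text, and all names are illustrative.

```python
import math
import zlib

def description_length_bits(source_code: str) -> int:
    # Stand-in for |h|: the length, in bits, of a compressed description of h.
    return 8 * len(zlib.compress(source_code.encode("utf-8")))

def mdl_bound(emp_risk, desc_len_bits, m, delta):
    return emp_risk + math.sqrt((desc_len_bits + math.log(2 / delta)) / (2 * m))

def mdl_select(candidates, S, delta):
    """candidates: list of (predict_fn, source_text); S: list of (x, y) pairs."""
    m = len(S)
    risk = lambda f: sum(f(x) != y for x, y in S) / m
    return min(((mdl_bound(risk(f), description_length_bits(src), m, delta), f)
                for f, src in candidates), key=lambda t: t[0])
```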
prefer one hypothesis over the other. But these are the same two hypotheses for which we argued two sentences ago that the opposite preference should hold. Where is the catch here?
Indeed, there is no inherent generalizability difference between hypotheses.
The crucial aspect here is the dependency order between the initial choice of language (or, preference over hypotheses) and the training set. As we know from
the basic Hoeffding's bound (Equation (4.2)), if we commit to any hypothesis before seeing the data, then we are guaranteed a rather small estimation error term: $L_D(h) \le L_S(h) + \sqrt{\frac{\ln(2/\delta)}{2m}}$. Choosing a description language (or, equivalently, some
weighting of hypotheses) is a weak form of committing to a hypothesis. Rather than
committing to a single hypothesis, we spread out our commitment among many. As
long as it is done independently of the training sample, our generalization bound
holds. Just as the choice of a single hypothesis to be evaluated by a sample can be
arbitrary, so is the choice of description language.
In the literature, consistency is often defined using the notion of either convergence in probability (corresponding to weak consistency) or almost sure convergence (corresponding to strong
consistency).
Formally, we assume that $Z$ is endowed with some sigma algebra of subsets, and by "all distributions" we mean all probability distributions that have this sigma algebra contained in their associated family of measurable subsets.
Consider, for example, the Memorize algorithm, which memorizes the training set and, given a test instance $x$, predicts the majority label among all labeled instances of $x$ that exist in the training sample (and some fixed default label if no instance of $x$ appears in the training set).
It is possible to show (see Exercise 7.6) that the Memorize algorithm is universally
consistent for every countable domain X and a finite label set Y (w.r.t. the zero-one
loss).
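For concreteness, here is a small sketch (ours) of the Memorize algorithm as described above.

```python
from collections import Counter, defaultdict

class Memorize:
    def __init__(self, default_label=0):
        self.default = default_label
        self.labels = defaultdict(Counter)          # instance -> label counts

    def fit(self, samples):                         # samples: iterable of (x, y)
        for x, y in samples:
            self.labels[x][y] += 1
        return self

    def predict(self, x):
        seen = self.labels.get(x)                   # None if x never appeared
        return seen.most_common(1)[0][0] if seen else self.default

h = Memorize().fit([("a", 1), ("a", 1), ("a", 0), ("b", 0)])
print(h.predict("a"), h.predict("b"), h.predict("c"))  # -> 1 0 0
```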
Intuitively, it is not obvious that the Memorize algorithm should be viewed as a
learner, since it lacks the aspect of generalization, namely, of using observed data to
predict the labels of unseen examples. The fact that Memorize is a consistent algorithm for the class of all functions over any countable domain set therefore raises
doubt about the usefulness of consistency guarantees. Furthermore, the sharp-eyed
reader may notice that the bad learner we introduced in Chapter 2, which led
to overfitting, is in fact the Memorize algorithm. In the next section we discuss the
significance of the different notions of learnability and revisit the No-Free-Lunch
theorem in light of the different definitions of learnability.
of the error that stems from estimation error and therefore know how much of the
error is attributed to approximation error. If the approximation error is large, we
know that we should use a different hypothesis class. Similarly, if a nonuniform
algorithm fails, we can consider a different weighting function over (subsets of)
hypotheses. However, when a consistent algorithm fails, we have no idea whether
this is because of the estimation error or the approximation error. Furthermore,
even if we are sure we have a problem with the estimation error term, we do not
know how many more examples are needed to make the estimation error small.
(Figure: polynomial fits of degree 3 and of degree 10 to the same training data.)
It is easy to see that the empirical risk decreases as we enlarge the degree. Therefore, if we choose H to be the class of all polynomials up to degree 10 then the
ERM rule with respect to this class would output a 10 degree polynomial and would
overfit. On the other hand, if we choose too small a hypothesis class, say, polynomials up to degree 2, then the ERM would suffer from underfitting (i.e., a large
approximation error). In contrast, we can use the SRM rule on the set of all polynomials, while ordering subsets of H according to their degree, and this will yield a 3rd
degree polynomial since the combination of its empirical risk and the bound on its
estimation error is the smallest. In other words, the SRM rule enables us to select
the right model on the basis of the data itself. The price we pay for this flexibility
(besides a slight increase of the estimation error relative to PAC learning w.r.t. the
optimal degree) is that we do not know in advance how many examples are needed
to compete with the best hypothesis in H.
Unlike the notions of PAC learnability and nonuniform learnability, the definition of consistency does not yield a natural learning paradigm or a way to encode
prior knowledge. In fact, in many cases there is no need for prior knowledge at all.
For example, we saw that even the Memorize algorithm, which intuitively should not
be called a learning algorithm, is a consistent algorithm for any class defined over
a countable domain and a finite label set. This hints that consistency is a very weak
requirement.
Which Learning Algorithm Should We Prefer?
One may argue that even though consistency is a weak requirement, it is desirable
that a learning algorithm will be consistent with respect to the set of all functions
from X to Y, which gives us a guarantee that for enough training examples, we will
always be as good as the Bayes optimal predictor. Therefore, if we have two algorithms, where one is consistent and the other one is not consistent, we should prefer
the consistent algorithm. However, this argument is problematic for two reasons.
First, maybe it is the case that for most natural distributions we will observe in
practice that the sample complexity of the consistent algorithm will be so large
that in every practical situation we will not obtain enough examples to enjoy this
guarantee. Second, it is not very hard to make any PAC or nonuniform learner consistent with respect to the class of all functions from X to Y. Concretely, consider
a countable domain, X , a finite label set Y, and a hypothesis class, H, of functions
from X to Y. We can make any nonuniform learner for H be consistent with respect
to the class of all classifiers from X to Y using the following simple trick: Upon
receiving a training set, we will first run the nonuniform learner over the training
set, and then we will obtain a bound on the true risk of the learned predictor. If this
bound is small enough we are done. Otherwise, we revert to the Memorize algorithm.
This simple modification makes the algorithm consistent with respect to all functions
from X to Y. Since it is easy to make any algorithm consistent, it may not be wise to
prefer one algorithm over the other just because of consistency considerations.
7.6 SUMMARY
We introduced nonuniform learnability as a relaxation of PAC learnability and consistency as a relaxation of nonuniform learnability. This means that even classes of
infinite VC-dimension can be learnable, in some weaker sense of learnability. We
discussed the usefulness of the different definitions of learnability.
For hypothesis classes that are countable, we can apply the Minimum Description Length scheme, where hypotheses with shorter descriptions are preferred,
following the principle of Occam's razor. An interesting example is the hypothesis class of all predictors we can implement in C++ (or any other programming
language), which we can learn (nonuniformly) using the MDL scheme.
Arguably, the class of all predictors we can implement in C++ is a powerful class
of functions and probably contains all that we can hope to learn in practice. The ability to learn this class is impressive, and, seemingly, this chapter should have been the
last chapter of this book. This is not the case, because of the computational aspect
of learning: that is, the runtime needed to apply the learning rule. For example, to
implement the MDL paradigm with respect to all C++ programs, we need to perform an exhaustive search over all C++ programs, which will take forever. Even the
implementation of the ERM paradigm with respect to all C++ programs of description length at most 1000 bits requires an exhaustive search over $2^{1000}$ hypotheses. While the sample complexity of learning this class is just $\frac{1000 + \log(2/\delta)}{\epsilon^2}$, the runtime is $2^{1000}$. This is a huge number, much larger than the number of atoms in the visible
universe. In the next chapter we formally define the computational complexity of
learning. In the second part of this book we will study hypothesis classes for which
the ERM or SRM schemes can be implemented efficiently.
7.8 EXERCISES
7.1 Prove that for any finite class $\mathcal{H}$, and any description language $d : \mathcal{H} \to \{0,1\}^*$, the VC-dimension of $\mathcal{H}$ is at most $2\sup\{|d(h)| : h \in \mathcal{H}\}$, twice the maximum description length of a predictor in $\mathcal{H}$. Furthermore, if $d$ is a prefix-free description then $\mathrm{VCdim}(\mathcal{H}) \le \sup\{|d(h)| : h \in \mathcal{H}\}$.
7.2 Let $\mathcal{H} = \{h_n : n \in \mathbb{N}\}$ be an infinite countable hypothesis class for binary classification. Show that it is impossible to assign weights to the hypotheses in $\mathcal{H}$ such that
$\mathcal{H}$ could be learned nonuniformly using these weights. That is, the weighting function $w : \mathcal{H} \to [0,1]$ should satisfy the condition $\sum_{h\in\mathcal{H}} w(h) \le 1$.
The weights would be monotonically nondecreasing. That is, if $i < j$, then $w(h_i) \le w(h_j)$.
7.3 Consider a hypothesis class $\mathcal{H} = \bigcup_{n=1}^{\infty} \mathcal{H}_n$, where for every $n \in \mathbb{N}$, $\mathcal{H}_n$ is finite.
Find a weighting function $w : \mathcal{H} \to [0,1]$ such that $\sum_{h\in\mathcal{H}} w(h) \le 1$ and so that for all $h \in \mathcal{H}$, $w(h)$ is determined by $n(h) = \min\{n : h \in \mathcal{H}_n\}$ and by $|\mathcal{H}_{n(h)}|$.
(*) Define such a function $w$ when for all $n$, $\mathcal{H}_n$ is countable (possibly infinite).
7.4 Let $\mathcal{H}$ be some hypothesis class. For any $h \in \mathcal{H}$, let $|h|$ denote the description length of $h$, according to some fixed description language. Consider the MDL learning paradigm in which the algorithm returns
$$h_S \in \arg\min_{h\in\mathcal{H}}\left[ L_S(h) + \sqrt{\frac{|h| + \ln(2/\delta)}{2m}} \right],$$
where $S$ is a sample of size $m$. For any $B > 0$, let $\mathcal{H}_B = \{h \in \mathcal{H} : |h| \le B\}$, and define
$$h_B = \arg\min_{h\in\mathcal{H}_B} L_D(h).$$
2. Given any $\epsilon > 0$ prove that there exists $\epsilon_D > 0$ such that
$$D(\{x \in \mathcal{X} : D(\{x\}) < \epsilon_D\}) < \epsilon.$$
3. Prove that for every $\eta > 0$, if $n$ is such that $D(\{x_i\}) < \eta$ for all $i > n$, then for every $m \in \mathbb{N}$,
$$\mathbb{P}_{S\sim D^m}\big[\exists x_i : \big(D(\{x_i\}) > \eta \ \text{and}\ x_i \notin S\big)\big] \le n e^{-\eta m}.$$
8
The Runtime of Learning
So far in the book we have studied the statistical perspective of learning, namely,
how many samples are needed for learning. In other words, we focused on the
amount of information learning requires. However, when considering automated
learning, computational resources also play a major role in determining the complexity of a task: that is, how much computation is involved in carrying out a learning
task. Once a sufficient training sample is available to the learner, there is some computation to be done to extract a hypothesis or figure out the label of a given test
instance. These computational resources are crucial in any practical application of
machine learning. We refer to these two types of resources as the sample complexity and the computational complexity. In this chapter, we turn our attention to the
computational complexity of learning.
The computational complexity of learning should be viewed in the wider context
of the computational complexity of general algorithmic tasks. This area has been
extensively investigated; see, for example, (Sipser 2006). The introductory comments that follow summarize the basic ideas of that general theory that are most
relevant to our discussion.
The actual runtime (in seconds) of an algorithm depends on the specific machine
the algorithm is being implemented on (e.g., what the clock rate of the machine's
CPU is). To avoid dependence on the specific machine, it is common to analyze
the runtime of algorithms in an asymptotic sense. For example, we say that the
computational complexity of the merge-sort algorithm, which sorts a list of n items,
is O(n log (n)). This implies that we can implement the algorithm on any machine
that satisfies the requirements of some accepted abstract model of computation,
and the actual runtime in seconds will satisfy the following: there exist constants c
and n 0 , which can depend on the actual machine, such that, for any value of n > n 0 ,
the runtime in seconds of sorting any n items will be at most c n log (n). It is common
to use the term feasible or efficiently computable for tasks that can be performed
by an algorithm whose running time is O( p(n)) for some polynomial function p.
One should note that this type of analysis depends on defining what is the input
size n of any instance to which the algorithm is expected to be applied. For purely
algorithmic tasks, as discussed in the common computational complexity literature,
this input size is clearly defined; the algorithm gets an input instance, say, a list to
be sorted, or an arithmetic operation to be calculated, which has a well defined
size (say, the number of bits in its representation). For machine learning tasks, the
notion of an input size is not so clear. An algorithm aims to detect some pattern in
a data set and can only access random samples of that data.
We start the chapter by discussing this issue and define the computational
complexity of learning. For advanced students, we also provide a detailed formal
definition. We then move on to consider the computational complexity of implementing the ERM rule. We first give several examples of hypothesis classes where
the ERM rule can be efficiently implemented, and then consider some cases where,
although the class is indeed efficiently learnable, ERM implementation is computationally hard. It follows that hardness of implementing ERM does not imply
hardness of learning. Finally, we briefly discuss how one can show hardness of a
given learning task, namely, that no learning algorithm can solve it efficiently.
The output of $A$ is probably approximately correct; namely, with probability of at least $1-\delta$ (over the random samples $A$ receives), $L_D(h_A) \le \min_{h'\in\mathcal{H}} L_D(h') + \epsilon$.
2. Consider a sequence of learning problems, $(Z_n, \mathcal{H}_n, \ell_n)_{n=1}^{\infty}$, where problem $n$ is defined by a domain $Z_n$, a hypothesis class $\mathcal{H}_n$, and a loss function $\ell_n$. Let $A$ be a learning algorithm designed for solving learning problems of this form. Given a function $g : \mathbb{N} \times (0,1)^2 \to \mathbb{N}$, we say that the runtime of $A$ with respect to the preceding sequence is $O(g)$, if for all $n$, $A$ solves the problem $(Z_n, \mathcal{H}_n, \ell_n)$ in time $O(f_n)$, where $f_n : (0,1)^2 \to \mathbb{N}$ is defined by $f_n(\epsilon,\delta) = g(n,\epsilon,\delta)$.
$$h_{(a_1,\ldots,a_n,b_1,\ldots,b_n)}(x) = \begin{cases} 1 & \text{if } \forall i,\ x_i \in [a_i, b_i]\\ 0 & \text{otherwise}\end{cases} \qquad (8.1)$$
The function that such a Boolean conjunction (over the literals $x_{i_1},\ldots,x_{i_k}$ and the negated literals $x_{j_1},\ldots,x_{j_r}$) defines is
$$h(x) = \begin{cases} 1 & \text{if } x_{i_1} = \cdots = x_{i_k} = 1 \ \text{ and } \ x_{j_1} = \cdots = x_{j_r} = 0\\ 0 & \text{otherwise.}\end{cases}$$
Let $\mathcal{H}^n_C$ be the class of all Boolean conjunctions over $\{0,1\}^n$. The size of $\mathcal{H}^n_C$ is at most $3^n + 1$ (since in a conjunction formula, each element of $x$ either appears, or appears with a negation sign, or does not appear at all, and we also have the all-negative formula). Hence, the sample complexity of learning $\mathcal{H}^n_C$ using the ERM rule is at most $n\log(3/\delta)/\epsilon$.
Efficiently Learnable in the Realizable Case
Next, we show that it is possible to solve the ERM problem for $\mathcal{H}^n_C$ in time polynomial in $n$ and $m$. The idea is to define an ERM conjunction by including in the hypothesis conjunction all the literals that do not contradict any positively labeled example. Let $v_1,\ldots,v_{m^+}$ be all the positively labeled instances in the input sample $S$. We define, by induction on $i \le m^+$, a sequence of hypotheses (or conjunctions). Let $h_0$ be the conjunction of all possible literals. That is, $h_0 = x_1 \wedge \neg x_1 \wedge x_2 \wedge \cdots \wedge x_n \wedge \neg x_n$. Note that $h_0$ assigns the label 0 to all the elements of $\mathcal{X}$. We obtain $h_{i+1}$ by deleting from the conjunction $h_i$ all the literals that are not satisfied by $v_{i+1}$. The algorithm outputs the hypothesis $h_{m^+}$. Note that $h_{m^+}$ labels positively all the positively labeled examples in $S$. Furthermore, for every $i \le m^+$, $h_i$ is the most restrictive conjunction that labels $v_1,\ldots,v_i$ positively. Now, since we consider learning in the realizable setup, there exists a conjunction hypothesis, $f \in \mathcal{H}^n_C$, that is consistent with all the examples in $S$. Since $h_{m^+}$ is the most restrictive conjunction that labels positively all the positively labeled members of $S$, any instance labeled 0 by $f$ is also labeled 0 by $h_{m^+}$. It follows that $h_{m^+}$ has zero training error (w.r.t. $S$) and is therefore a legal ERM hypothesis. Note that the running time of this algorithm is $O(mn)$.
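The algorithm just described translates directly into code. The sketch below (ours) represents a conjunction as a set of literals and deletes every literal contradicted by a positively labeled example, running in time $O(mn)$ as noted above.

```python
def erm_conjunction(samples, n):
    """samples: list of (x, y) with x a tuple in {0,1}^n and y in {0,1} (realizable)."""
    # A conjunction is a set of literals (i, v): (i, 1) requires x_i = 1, (i, 0) requires x_i = 0.
    literals = {(i, v) for i in range(n) for v in (0, 1)}   # h_0: contradictory, always outputs 0
    for x, y in samples:
        if y == 1:
            # Delete every literal not satisfied by this positively labeled example.
            literals = {(i, v) for (i, v) in literals if x[i] == v}
    return lambda x: int(all(x[i] == v for (i, v) in literals))

h = erm_conjunction([((1, 0, 1), 1), ((1, 0, 0), 1), ((0, 0, 1), 0)], n=3)
print(h((1, 0, 1)), h((0, 1, 1)))  # -> 1 0
```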
Not Efficiently Learnable in the Agnostic Case
As in the case of axis aligned rectangles, unless P = NP, there is no algorithm whose
running time is polynomial in m and n that is guaranteed to find an ERM hypothesis
for the class of Boolean conjunctions in the unrealizable case.
that unless RP = NP, there is no polynomial time algorithm that properly learns
a sequence of 3-term DNF learning problems in which the dimension of the nth
problem is n. By properly we mean that the algorithm should output a hypothesis
that is a 3-term DNF formula. In particular, since $\mathrm{ERM}_{\mathcal H^n_{3DNF}}$ outputs a 3-term DNF formula it is a proper learner and therefore it is hard to implement it. The proof
uses a reduction of the graph 3-coloring problem to the problem of PAC learning
3-term DNF. The detailed technique is given in Exercise 8.4. See also (Kearns and
Vazirani 1994, section 1.4).
$$A_1 \vee A_2 \vee A_3 \;=\; \bigwedge_{u \in A_1,\, v \in A_2,\, w \in A_3} (u \vee v \vee w).$$
Next, let us define $\psi: \{0,1\}^n \to \{0,1\}^{(2n)^3}$ such that for each triplet of literals $u, v, w$ there is a variable in the range of $\psi$ indicating if $u \vee v \vee w$ is true or false. So, for each 3-DNF formula over $\{0,1\}^n$ there is a conjunction over $\{0,1\}^{(2n)^3}$ with the same truth table. Since we assume that the data is realizable, we can solve the ERM problem with respect to the class of conjunctions over $\{0,1\}^{(2n)^3}$. Furthermore, the sample complexity of learning the class of conjunctions in the higher dimensional space is at most $n^3 \log(1/\delta)/\epsilon$. Thus, the overall runtime of this approach is polynomial in $n$.
Intuitively, the idea is as follows. We started with a hypothesis class for which
learning is hard. We switched to another representation where the hypothesis class
is larger than the original class but has more structure, which allows for a more
efficient ERM search. In the new representation, solving the ERM problem is easy.
[Figure: the class of conjunctions over $\{0,1\}^{(2n)^3}$.]
$$\cdots \;<\; \frac{1}{p(n)},$$
where the probability is taken over a random choice of x according to the uniform
distribution over {0, 1}n and the randomness of A.
A one way function, $f$, is called a trapdoor one way function if, for some polynomial function $p$, for every $n$ there exists a bit-string $s_n$ (called a secret key) of length $p(n)$, such that there is a polynomial time algorithm that, for every $n$ and every $\mathbf{x} \in \{0,1\}^n$, on input $(f(\mathbf{x}), s_n)$ outputs $\mathbf{x}$. In other words, although $f$ is hard to invert, once one has access to its secret key, inverting $f$ becomes feasible. Such functions are parameterized by their secret key.
Now, let $F_n$ be a family of trapdoor functions over $\{0,1\}^n$ that can be calculated by some polynomial time algorithm. That is, we fix an algorithm that, given a secret key (representing one function in $F_n$) and an input vector, calculates the value of the function corresponding to the secret key on the input vector in polynomial time. Consider the task of learning the class of the corresponding inverses, $H_{F_n} = \{f^{-1} : f \in F_n\}$. Since each function in this class can be inverted by some secret key $s_n$ of size polynomial in $n$, the class $H_{F_n}$ can be parameterized by these keys and its size is at most $2^{p(n)}$. Its sample complexity is therefore polynomial in $n$. We claim that there can be no efficient learner for this class. If there were such a learner, $L$, then by sampling uniformly at random a polynomial number of strings in $\{0,1\}^n$, and computing $f$ over them, we could generate a labeled training sample of pairs $(f(\mathbf{x}), \mathbf{x})$, which should suffice for our learner to figure out an $(\epsilon, \delta)$-approximation of $f^{-1}$ (w.r.t. the uniform distribution over the range of $f$), which would violate the one way property of $f$.
A more detailed treatment, as well as a concrete example, can be found in
(Kearns and Vazirani 1994, chapter 6). Using reductions, they also show that the
class of functions that can be calculated by small Boolean circuits is not efficiently
learnable, even in the realizable case.
8.5 SUMMARY
The runtime of learning algorithms is asymptotically analyzed as a function of different parameters of the learning problem, such as the size of the hypothesis class,
our measure of accuracy, our measure of confidence, or the size of the domain
set. We have demonstrated cases in which the ERM rule can be implemented
efficiently. For example, we derived efficient algorithms for solving the ERM problem for the class of Boolean conjunctions and the class of axis aligned rectangles,
under the realizability assumption. However, implementing ERM for these classes
in the agnostic case is NP-hard. Recall that from the statistical perspective, there
is no difference between the realizable and agnostic cases (i.e., a class is learnable in both cases if and only if it has a finite VC-dimension). In contrast, as we
saw, from the computational perspective the difference is immense. We have also
shown another example, the class of 3-term DNF, where implementing ERM is
hard even in the realizable case, yet the class is efficiently learnable by another
algorithm.
Hardness of implementing the ERM rule for several natural hypothesis classes
has motivated the development of alternative learning methods, which we will
discuss in the next part of this book.
8.7 EXERCISES
8.1 Let $\mathcal H$ be the class of intervals on the line (formally equivalent to axis aligned rectangles in dimension $n = 1$). Propose an implementation of the $\mathrm{ERM}_{\mathcal H}$ learning rule (in the agnostic case) that given a training set of size $m$, runs in time $O(m^2)$.
Hint: Use dynamic programming.
8.2 Let H1 , H2 , . . . be a sequence of hypothesis classes for binary classification. Assume
that there is a learning algorithm that implements the ERM rule in the realizable
case such that the output hypothesis of the algorithm for each class Hn only depends
on O(n) examples out of the training set. Furthermore, assume that such a hypothesis can be calculated given these O(n) examples in time O(n), and that the empirical
risk of each such hypothesis can be evaluated in time O(mn). For example, if Hn is
the class of axis aligned rectangles in Rn , we saw that it is possible to find an ERM
hypothesis in the realizable case that is defined by at most 2n examples. Prove that
in such cases, it is possible to find an ERM hypothesis for $\mathcal H_n$ in the unrealizable case in time $O\big(m^{O(n)} \cdot mn\big)$.
8.3 In this exercise, we present several classes for which finding an ERM classifier is computationally hard. First, we introduce the class of $n$-dimensional halfspaces, $HS_n$, for a domain $\mathcal X = \mathbb{R}^n$. This is the class of all functions of the form $h_{\mathbf{w},b}(\mathbf{x}) = \mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle + b)$ where $\mathbf{w}, \mathbf{x} \in \mathbb{R}^n$, $\langle \mathbf{w}, \mathbf{x}\rangle$ is their inner product, and $b \in \mathbb{R}$. See a detailed description in Chapter 9.
1. Show that $\mathrm{ERM}_{\mathcal H}$ over the class $\mathcal H = HS_n$ of linear predictors is computationally hard. More precisely, we consider the sequence of problems in which the dimension $n$ grows linearly and the number of examples $m$ is set to be some constant times $n$.
Hint: You can prove the hardness by a reduction from the following problem:
Max FS: Given a system of linear inequalities, $A\mathbf{x} > \mathbf{b}$ with $A \in \mathbb{R}^{m \times n}$ and $\mathbf{b} \in \mathbb{R}^m$ (that is, a system of $m$ linear inequalities in $n$ variables, $\mathbf{x} = (x_1, \ldots, x_n)$), find a subsystem containing as many inequalities as possible that has a solution (such a subsystem is called feasible).
It has been shown (Sankaran 1993) that the problem Max FS is NP-hard. Show that any algorithm that finds an $\mathrm{ERM}_{HS_n}$ hypothesis for any training sample $S \in (\mathbb{R}^n \times \{+1, -1\})^m$ can be used to solve the Max FS problem of size $m, n$.
Hint: Define a mapping that transforms linear inequalities in $n$ variables into labeled points in $\mathbb{R}^n$, and a mapping that transforms vectors in $\mathbb{R}^n$ to halfspaces, such that a vector $\mathbf{w}$ satisfies an inequality $q$ if and only if the labeled point that corresponds to $q$ is classified correctly by the halfspace corresponding to $\mathbf{w}$. Conclude that the problem of empirical risk minimization for halfspaces is also NP-hard (that is, if it can be solved in time polynomial in the sample size, $m$, and the Euclidean dimension, $n$, then every problem in the class NP can be solved in polynomial time).
2. Let $\mathcal X = \mathbb{R}^n$ and let $\mathcal H^k_n$ be the class of all intersections of $k$-many linear halfspaces in $\mathbb{R}^n$. In this exercise, we wish to show that $\mathrm{ERM}_{\mathcal H^k_n}$ is computationally hard for every $k \ge 3$. Precisely, we consider a sequence of problems where $k \ge 3$ is a constant and $n$ grows linearly. The training set size, $m$, also grows linearly with $n$.
Toward this goal, consider the $k$-coloring problem for graphs, defined as follows:
Given a graph $G = (V, E)$, and a number $k$, determine whether there exists a function $f: V \to \{1, \ldots, k\}$ so that for every $(u, v) \in E$, $f(u) \neq f(v)$.
The constant 1/2 in the definition can be replaced by any constant in (0, 1).
halfspaces discussed in the previous exercise), then, unless $NP = RP$, there exists no polynomial time proper PAC learning algorithm for $\mathcal H$.
Hint: Assume you have an algorithm $A$ that properly PAC learns a class $\mathcal H$ in time polynomial in some class parameter $n$ as well as in $1/\epsilon$ and $1/\delta$. Your goal is to use that algorithm as a subroutine to construct an algorithm $B$ for solving the $\mathrm{ERM}_{\mathcal H}$ problem in random polynomial time. Given a training set, $S \in (X \times \{\pm 1\})^m$, and some $h \in \mathcal H$ whose error on $S$ is zero, apply the PAC learning algorithm to the uniform distribution over $S$ and run it so that with probability of at least $0.3$ it finds a function $h \in \mathcal H$ that has error less than $\epsilon = 1/|S|$ (with respect to that uniform distribution). Show that the algorithm just described satisfies the requirements for being a RP solver for $\mathrm{ERM}_{\mathcal H}$.
PART 2
9
Linear Predictors
In this chapter we will study the family of linear predictors, one of the most useful
families of hypothesis classes. Many learning algorithms that are being widely used
in practice rely on linear predictors, first and foremost because of the ability to learn
them efficiently in many cases. In addition, linear predictors are intuitive, are easy
to interpret, and fit the data reasonably well in many natural learning problems.
We will introduce several hypothesis classes belonging to this family (halfspaces, linear regression predictors, and logistic regression predictors) and present relevant learning algorithms: linear programming and the Perceptron
algorithm for the class of halfspaces and the Least Squares algorithm for linear
regression. This chapter is focused on learning linear predictors using the ERM
approach; however, in later chapters we will see alternative paradigms for learning
these hypothesis classes.
First, we define the class of affine functions as
$$L_d \;=\; \{h_{\mathbf{w},b} : \mathbf{w} \in \mathbb{R}^d,\ b \in \mathbb{R}\},$$
where
$$h_{\mathbf{w},b}(\mathbf{x}) \;=\; \langle \mathbf{w}, \mathbf{x}\rangle + b \;=\; \left(\sum_{i=1}^d w_i x_i\right) + b.$$
9.1 HALFSPACES
The first hypothesis class we consider is the class of halfspaces, designed for binary classification problems, namely, $\mathcal X = \mathbb{R}^d$ and $\mathcal Y = \{-1, +1\}$. The class of halfspaces is defined as follows:
$$HS_d \;=\; \mathrm{sign} \circ L_d \;=\; \{\mathbf{x} \mapsto \mathrm{sign}(h_{\mathbf{w},b}(\mathbf{x})) : h_{\mathbf{w},b} \in L_d\}.$$
In other words, each halfspace hypothesis in $HS_d$ is parameterized by $\mathbf{w} \in \mathbb{R}^d$ and $b \in \mathbb{R}$ and upon receiving a vector $\mathbf{x}$ the hypothesis returns the label $\mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle + b)$.
To illustrate this hypothesis class geometrically, it is instructive to consider the case $d = 2$. Each hypothesis forms a hyperplane that is perpendicular to the vector $\mathbf{w}$ and intersects the vertical axis at the point $(0, -b/w_2)$. The instances that are "above" the hyperplane, that is, share an acute angle with $\mathbf{w}$, are labeled positively. Instances that are "below" the hyperplane, that is, share an obtuse angle with $\mathbf{w}$, are labeled negatively.
[Figure: a halfspace in $\mathbb{R}^2$; the normal vector $\mathbf{w}$ points toward the positively labeled region.]
will describe the logistic regression approach, which can be implemented efficiently
even in the nonseparable case. We will study surrogate loss functions in more detail
later on in Chapter 12.
$$\max_{\mathbf{w} \in \mathbb{R}^d} \ \langle \mathbf{u}, \mathbf{w}\rangle \quad \text{subject to} \quad A\mathbf{w} \ge \mathbf{v}.$$
$$y_i \langle \mathbf{w}, \mathbf{x}_i\rangle > 0, \qquad i = 1, \ldots, m.$$
Let $\mathbf{w}^*$ be a vector that satisfies this condition (it must exist since we assume realizability). Define $\gamma = \min_i (y_i \langle \mathbf{w}^*, \mathbf{x}_i\rangle)$ and let $\bar{\mathbf{w}} = \mathbf{w}^*/\gamma$. Therefore, for all $i$ we have
$$y_i \langle \bar{\mathbf{w}}, \mathbf{x}_i\rangle \;=\; \frac{1}{\gamma}\, y_i \langle \mathbf{w}^*, \mathbf{x}_i\rangle \;\ge\; 1.$$
We have thus shown that there exists a vector that satisfies
$$y_i \langle \mathbf{w}, \mathbf{x}_i\rangle \;\ge\; 1, \qquad i = 1, \ldots, m. \tag{9.1}$$
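To make the reduction to linear programming concrete, the following minimal sketch feeds the constraints of Equation (9.1) to a generic LP solver. The use of SciPy's `linprog` (with a zero objective, since only feasibility matters) is our own choice of solver, not something prescribed by the text:

```python
import numpy as np
from scipy.optimize import linprog

def halfspace_erm_lp(X, y):
    """Find w with y_i <w, x_i> >= 1 for all i via a generic LP solver.

    X: (m, d) array of instances; y: (m,) array with labels in {+1, -1}.
    Any feasible point of the constraints of Equation (9.1) is an ERM halfspace
    for separable data; the objective is irrelevant, so we pass a zero vector.
    """
    m, d = X.shape
    A_ub = -(y[:, None] * X)          # -(y_i x_i)^T w <= -1  <=>  y_i <w, x_i> >= 1
    b_ub = -np.ones(m)
    res = linprog(c=np.zeros(d), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * d)
    if not res.success:
        raise ValueError("LP infeasible: the sample is not linearly separable")
    return res.x
```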
the cosine of the angle between $\mathbf{w}^*$ and $\mathbf{w}^{(T+1)}$ is at least $\frac{\sqrt{T}}{R\,B}$. That is, we will show that
$$\frac{\langle \mathbf{w}^*, \mathbf{w}^{(T+1)}\rangle}{\|\mathbf{w}^*\|\,\|\mathbf{w}^{(T+1)}\|} \;\ge\; \frac{\sqrt{T}}{R\,B}. \tag{9.2}$$
By the Cauchy-Schwartz inequality, the left-hand side of Equation (9.2) is at most 1. Therefore, Equation (9.2) would imply that
$$1 \;\ge\; \frac{\sqrt{T}}{R\,B} \quad\Longrightarrow\quad T \;\le\; (R\,B)^2,$$
which will conclude our proof.
To show that Equation (9.2) holds, we first show that $\langle \mathbf{w}^*, \mathbf{w}^{(T+1)}\rangle \ge T$. Indeed, at the first iteration, $\mathbf{w}^{(1)} = (0, \ldots, 0)$ and therefore $\langle \mathbf{w}^*, \mathbf{w}^{(1)}\rangle = 0$, while on iteration $t$, if we update using example $(\mathbf{x}_i, y_i)$ we have that
$$\langle \mathbf{w}^*, \mathbf{w}^{(t+1)}\rangle - \langle \mathbf{w}^*, \mathbf{w}^{(t)}\rangle \;=\; \langle \mathbf{w}^*, \mathbf{w}^{(t+1)} - \mathbf{w}^{(t)}\rangle \;=\; \langle \mathbf{w}^*, y_i \mathbf{x}_i\rangle \;=\; y_i \langle \mathbf{w}^*, \mathbf{x}_i\rangle \;\ge\; 1.$$
Therefore, after performing $T$ iterations, we get
$$\langle \mathbf{w}^*, \mathbf{w}^{(T+1)}\rangle \;=\; \sum_{t=1}^T \big(\langle \mathbf{w}^*, \mathbf{w}^{(t+1)}\rangle - \langle \mathbf{w}^*, \mathbf{w}^{(t)}\rangle\big) \;\ge\; T, \tag{9.3}$$
as required.
Next, we upper bound $\|\mathbf{w}^{(T+1)}\|$. For each iteration $t$ we have that
$$\|\mathbf{w}^{(t+1)}\|^2 \;=\; \|\mathbf{w}^{(t)} + y_i \mathbf{x}_i\|^2 \;=\; \|\mathbf{w}^{(t)}\|^2 + 2 y_i \langle \mathbf{w}^{(t)}, \mathbf{x}_i\rangle + y_i^2 \|\mathbf{x}_i\|^2 \;\le\; \|\mathbf{w}^{(t)}\|^2 + R^2, \tag{9.4}$$
where the last inequality is due to the fact that example $i$ is necessarily such that $y_i \langle \mathbf{w}^{(t)}, \mathbf{x}_i\rangle \le 0$, and the norm of $\mathbf{x}_i$ is at most $R$. Now, since $\|\mathbf{w}^{(1)}\|^2 = 0$, if we use Equation (9.4) recursively for $T$ iterations, we obtain that $\|\mathbf{w}^{(T+1)}\|^2 \le T R^2$. Combining this with Equation (9.3) and with the fact that $\|\mathbf{w}^*\| \le B$, we get
$$\frac{\langle \mathbf{w}^{(T+1)}, \mathbf{w}^*\rangle}{\|\mathbf{w}^*\|\,\|\mathbf{w}^{(T+1)}\|} \;\ge\; \frac{T}{B\,\sqrt{T}\,R} \;=\; \frac{\sqrt{T}}{B\,R}.$$
We have thus shown that Equation (9.2) holds, and this concludes our proof.
Remark 9.1. The Perceptron is simple to implement and is guaranteed to converge.
However, the convergence rate depends on the parameter B, which in some situations might be exponentially large in d. In such cases, it would be better to
implement the ERM problem by solving a linear program, as described in the previous section. Nevertheless, for many natural data sets, the size of B is not too large,
and the Perceptron converges quite fast.
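For concreteness, here is a minimal NumPy sketch of the Batch Perceptron; the stopping guarantee in the docstring is the one of Theorem 9.1, while the function name and the iteration cap are our own additions:

```python
import numpy as np

def batch_perceptron(X, y, max_iter=100000):
    """Batch Perceptron for homogenous halfspaces on separable data.

    X: (m, d) array; y: (m,) array with labels in {+1, -1}.
    If the data are separable, Theorem 9.1 guarantees at most (R B)^2 updates,
    with R = max_i ||x_i|| and B the norm of the smallest vector satisfying
    y_i <w, x_i> >= 1 for all i. The iteration cap is only a safeguard.
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mistakes = np.where(y * (X @ w) <= 0)[0]
        if mistakes.size == 0:        # every example satisfies y_i <w, x_i> > 0
            return w
        i = mistakes[0]               # pick some misclassified example
        w = w + y[i] * X[i]           # Perceptron update rule
    raise RuntimeError("no separating halfspace found within max_iter updates")
```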
Now, suppose that $\mathbf{x}_1, \ldots, \mathbf{x}_{d+1}$ are shattered by the class of homogenous halfspaces. Then, there must exist a vector $\mathbf{w}$ such that $\langle \mathbf{w}, \mathbf{x}_i\rangle > 0$ for all $i \in I$ while $\langle \mathbf{w}, \mathbf{x}_j\rangle < 0$ for every $j \in J$. It follows that
$$0 \;<\; \Big\langle \sum_{i\in I} a_i \mathbf{x}_i,\ \mathbf{w}\Big\rangle \;=\; \Big\langle \sum_{j\in J} |a_j|\, \mathbf{x}_j,\ \mathbf{w}\Big\rangle \;=\; \sum_{j\in J} |a_j|\, \langle \mathbf{x}_j, \mathbf{w}\rangle \;<\; 0,$$
which leads to a contradiction. Finally, if $J$ (respectively, $I$) is empty then the right-most (respectively, left-most) inequality should be replaced by an equality, which still leads to a contradiction.
Theorem 9.3. The VC dimension of the class of nonhomogenous halfspaces in Rd is
d + 1.
Proof. First, as in the proof of Theorem 9.2, it is easy to verify that the set of vectors
0, e1 , . . . , ed is shattered by the class of nonhomogenous halfspaces. Second, suppose
that the vectors x1 , . . . , xd+2 are shattered by the class of nonhomogenous halfspaces.
But, using the reduction we have shown in the beginning of this chapter, it follows
that there are d + 2 vectors in Rd+1 that are shattered by the class of homogenous
halfspaces. But this contradicts Theorem 9.2.
Figure 9.1. Linear regression for d = 1. For instance, the x-axis may denote the age of the
baby, and the y-axis her weight.
the discrepancy between $h(\mathbf{x})$ and $y$. One common way is to use the squared-loss function, namely,
$$\ell(h, (\mathbf{x}, y)) \;=\; (h(\mathbf{x}) - y)^2.$$
For this loss function, the empirical risk function is called the Mean Squared Error, namely,
$$L_S(h) \;=\; \frac{1}{m}\sum_{i=1}^m (h(\mathbf{x}_i) - y_i)^2.$$
In the next subsection, we will see how to implement the ERM rule for linear regression with respect to the squared loss. Of course, there are a variety of
other loss functions that one can use, for example, the absolute value loss function, $\ell(h, (\mathbf{x}, y)) = |h(\mathbf{x}) - y|$. The ERM rule for the absolute value loss function can be implemented using linear programming (see Exercise 9.1).
Note that since linear regression is not a binary prediction task, we cannot analyze its sample complexity using the VC-dimension. One possible analysis of the
sample complexity of linear regression is by relying on the discretization trick
(see Remark 4.1 in Chapter 4); namely, if we are happy with a representation of
each element of the vector $\mathbf{w}$ and the bias $b$ using a finite number of bits (say a 64-bit floating point representation), then the hypothesis class becomes finite and its size is at most $2^{64(d+1)}$. We can now rely on sample complexity bounds for finite hypothesis classes as described in Chapter 4. Note, however, that to apply the sample complexity bounds from Chapter 4 we also need the loss function to be
bounded. Later in the book we will describe more rigorous means to analyze the
sample complexity of regression problems.
with respect to this class, given a training set $S$, and using the homogenous version of $L_d$, is to find
$$\operatorname*{argmin}_{\mathbf{w}}\ L_S(h_\mathbf{w}) \;=\; \operatorname*{argmin}_{\mathbf{w}}\ \frac{1}{m}\sum_{i=1}^m (\langle \mathbf{w}, \mathbf{x}_i\rangle - y_i)^2.$$
To solve the problem we calculate the gradient of the objective function and compare it to zero. That is, we need to solve
$$\frac{2}{m}\sum_{i=1}^m (\langle \mathbf{w}, \mathbf{x}_i\rangle - y_i)\,\mathbf{x}_i \;=\; 0.$$
We can rewrite this as the problem of solving $A\mathbf{w} = \mathbf{b}$, where
$$A \;=\; \left(\sum_{i=1}^m \mathbf{x}_i \mathbf{x}_i^\top\right) \quad \text{and} \quad \mathbf{b} \;=\; \sum_{i=1}^m y_i \mathbf{x}_i. \tag{9.6}$$
In matrix form, writing the instances as the columns of a $d \times m$ matrix,
$$A \;=\; \begin{pmatrix} \mathbf{x}_1 & \cdots & \mathbf{x}_m \end{pmatrix} \begin{pmatrix} \mathbf{x}_1^\top \\ \vdots \\ \mathbf{x}_m^\top \end{pmatrix} \tag{9.7}$$
$$\mathbf{b} \;=\; \begin{pmatrix} \mathbf{x}_1 & \cdots & \mathbf{x}_m \end{pmatrix} \begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix}, \tag{9.8}$$
and $\bar{\mathbf{w}} = A^{+}\mathbf{b}$ (here $A = VDV^\top$ denotes an eigenvalue decomposition of $A$ and $A^{+} = VD^{+}V^\top$ its generalized inverse). Then
$$A\bar{\mathbf{w}} \;=\; \sum_{i:\, D_{i,i} \neq 0} \mathbf{v}_i \mathbf{v}_i^\top\, \mathbf{b}.$$
That is, $A\bar{\mathbf{w}}$ is the projection of $\mathbf{b}$ onto the span of those vectors $\mathbf{v}_i$ for which $D_{i,i} \neq 0$. Since the linear span of $\mathbf{x}_1, \ldots, \mathbf{x}_m$ is the same as the linear span of those $\mathbf{v}_i$, and $\mathbf{b}$ is in the linear span of the $\mathbf{x}_i$, we obtain that $A\bar{\mathbf{w}} = \mathbf{b}$, which concludes our argument.
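A minimal sketch of this least squares solution in NumPy, using the pseudo-inverse for the case in which $A$ is not invertible (the function name and the data layout are our own choices):

```python
import numpy as np

def least_squares(X, y):
    """ERM for linear regression with the squared loss (homogenous predictors).

    X: (m, d) array whose rows are the instances; y: (m,) array of targets.
    Returns w = A^+ b with A = sum_i x_i x_i^T and b = sum_i y_i x_i; the
    pseudo-inverse also covers the case in which A is not invertible.
    """
    A = X.T @ X            # equals sum_i x_i x_i^T
    b = X.T @ y            # equals sum_i y_i x_i
    return np.linalg.pinv(A) @ b
```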
$$\phi_{\mathrm{sig}}(z) \;=\; \frac{1}{1 + \exp(-z)}. \tag{9.9}$$
The name sigmoid means S-shaped, referring to the plot of this function, shown
in the figure:
The hypothesis class is therefore (where for simplicity we are using homogenous linear functions):
$$H_{\mathrm{sig}} \;=\; \phi_{\mathrm{sig}} \circ L_d \;=\; \{\mathbf{x} \mapsto \phi_{\mathrm{sig}}(\langle \mathbf{w}, \mathbf{x}\rangle) : \mathbf{w} \in \mathbb{R}^d\}.$$
Note that when $\langle \mathbf{w}, \mathbf{x}\rangle$ is very large then $\phi_{\mathrm{sig}}(\langle \mathbf{w}, \mathbf{x}\rangle)$ is close to 1, whereas if $\langle \mathbf{w}, \mathbf{x}\rangle$ is very small then $\phi_{\mathrm{sig}}(\langle \mathbf{w}, \mathbf{x}\rangle)$ is close to 0. Recall that the prediction of the halfspace corresponding to a vector $\mathbf{w}$ is $\mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle)$. Therefore, the predictions of the halfspace hypothesis and the logistic hypothesis are very similar whenever $|\langle \mathbf{w}, \mathbf{x}\rangle|$ is large. However, when $|\langle \mathbf{w}, \mathbf{x}\rangle|$ is close to 0 we have that $\phi_{\mathrm{sig}}(\langle \mathbf{w}, \mathbf{x}\rangle) \approx \frac{1}{2}$. Intuitively, the logistic hypothesis is not sure about the value of the label so it guesses that the label is $\mathrm{sign}(\langle \mathbf{w}, \mathbf{x}\rangle)$ with probability slightly larger than 50%. In contrast, the halfspace hypothesis always outputs a deterministic prediction of either $1$ or $-1$, even if $|\langle \mathbf{w}, \mathbf{x}\rangle|$ is very close to 0.
Next, we need to specify a loss function. That is, we should define how bad it is to predict some $h_\mathbf{w}(\mathbf{x}) \in [0,1]$ given that the true label is $y \in \{\pm 1\}$. Clearly, we would like $h_\mathbf{w}(\mathbf{x})$ to be large if $y = 1$ and $1 - h_\mathbf{w}(\mathbf{x})$ (i.e., the probability of predicting $-1$) to be large if $y = -1$. Note that
$$1 - h_\mathbf{w}(\mathbf{x}) \;=\; 1 - \frac{1}{1+\exp(-\langle \mathbf{w}, \mathbf{x}\rangle)} \;=\; \frac{1}{1+\exp(\langle \mathbf{w}, \mathbf{x}\rangle)}.$$
The ERM problem associated with logistic regression is therefore
$$\operatorname*{argmin}_{\mathbf{w}\in\mathbb{R}^d}\ \frac{1}{m}\sum_{i=1}^m \log\!\big(1 + \exp(-y_i \langle \mathbf{w}, \mathbf{x}_i\rangle)\big). \tag{9.10}$$
The advantage of the logistic loss function is that it is a convex function with respect
to w; hence the ERM problem can be solved efficiently using standard methods.
We will study how to learn with convex functions, and in particular specify a simple
algorithm for minimizing convex functions, in later chapters.
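As a small illustration that the resulting convex problem can be attacked with standard methods, the following sketch minimizes the objective of Equation (9.10) with plain gradient descent; the step size and iteration count are arbitrary choices of ours, and gradient methods themselves are only treated in later chapters:

```python
import numpy as np

def logistic_loss(w, X, y):
    """Objective of Equation (9.10): the mean of log(1 + exp(-y_i <w, x_i>))."""
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def logistic_erm_gd(X, y, lr=0.1, n_steps=1000):
    """Minimize the convex logistic loss with plain gradient descent.

    X: (m, d) array; y: (m,) array with labels in {+1, -1}.
    """
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        z = -y * (X @ w)
        # gradient of log(1 + exp(z)) with respect to w is sigmoid(z) * (-y x)
        coef = -y / (1.0 + np.exp(-z))
        w -= lr * (X.T @ coef) / m
    return w
```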
The ERM problem associated with logistic regression (Equation (9.10)) is identical to the problem of finding a Maximum Likelihood Estimator, a well-known
statistical approach for finding the parameters that maximize the joint probability of
a given data set assuming a specific parametric probability function. We will study
the Maximum Likelihood approach in Chapter 24.
9.4 SUMMARY
The family of linear predictors is one of the most useful families of hypothesis
classes, and many learning algorithms that are being widely used in practice rely
on linear predictors. We have shown efficient algorithms for learning linear predictors with respect to the zero-one loss in the separable case and with respect to the
squared and logistic losses in the unrealizable case. In later chapters we will present
the properties of the loss function that enable efficient learning.
Naturally, linear predictors are effective whenever we assume, as prior knowledge, that some linear predictor attains low risk with respect to the underlying
distribution. In the next chapter we show how to construct nonlinear predictors by
composing linear predictors on top of simple classes. This will enable us to employ
linear predictors for a variety of prior knowledge assumptions.
9.6 EXERCISES
9.1 Show how to cast the ERM problem of linear regression with respect to the absolute value loss function, $\ell(h, (\mathbf{x}, y)) = |h(\mathbf{x}) - y|$, as a linear program; namely, show how to write the problem
$$\min_{\mathbf{w}}\ \sum_{i=1}^m |\langle \mathbf{w}, \mathbf{x}_i\rangle - y_i|$$
as a linear program.
Hint: Start with proving that for any $c \in \mathbb{R}$,
$$|c| \;=\; \min_{a \ge 0} a \quad \text{s.t.}\quad c \le a \ \text{ and } \ c \ge -a.$$
9.2 Show that the matrix $A$ defined in Equation (9.6) is invertible if and only if $\mathbf{x}_1, \ldots, \mathbf{x}_m$ span $\mathbb{R}^d$.
9.3 Show that Theorem 9.1 is tight in the following sense: For any positive integer $m$, there exist a vector $\mathbf{w}^* \in \mathbb{R}^d$ (for some appropriate $d$) and a sequence of examples $\{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$ such that the following hold:
- $R = \max_i \|\mathbf{x}_i\| \le 1$.
- $\|\mathbf{w}^*\|^2 = m$, and for all $i \le m$, $y_i \langle \mathbf{x}_i, \mathbf{w}^*\rangle \ge 1$. Note that, using the notation in Theorem 9.1, we therefore get
where
$$B_{\mathbf{v},r}(\mathbf{x}) \;=\; \begin{cases} 1 & \text{if } \|\mathbf{x} - \mathbf{v}\| \le r \\ 0 & \text{otherwise.} \end{cases}$$
1. Consider the mapping $\phi: \mathbb{R}^d \to \mathbb{R}^{d+1}$ defined by $\phi(\mathbf{x}) = (\mathbf{x}, \|\mathbf{x}\|^2)$. Show that if $\mathbf{x}_1, \ldots, \mathbf{x}_m$ are shattered by $B_d$ then $\phi(\mathbf{x}_1), \ldots, \phi(\mathbf{x}_m)$ are shattered by the class of halfspaces in $\mathbb{R}^{d+1}$ (in this question we assume that $\mathrm{sign}(0) = 1$). What does this tell us about $\mathrm{VCdim}(B_d)$?
2. (*) Find a set of $d+1$ points in $\mathbb{R}^d$ that is shattered by $B_d$. Conclude that
$$d + 1 \;\le\; \mathrm{VCdim}(B_d) \;\le\; d + 2.$$
10
Boosting
This definition is almost identical to the definition of PAC learning, which here
we will call strong learning, with one crucial difference: Strong learnability implies
the ability to find an arbitrarily good classifier (with error rate at most $\epsilon$ for an arbitrarily small $\epsilon > 0$). In weak learnability, however, we only need to output a hypothesis whose error rate is at most $1/2 - \gamma$, namely, whose error rate is slightly
better than what a random labeling would give us. The hope is that it may be easier
to come up with efficient weak learners than with efficient (full) PAC learners.
The fundamental theorem of learning (Theorem 6.8) states that if a hypothesis class $\mathcal H$ has a VC dimension $d$, then the sample complexity of PAC learning $\mathcal H$ satisfies $m_{\mathcal H}(\epsilon, \delta) \ge C_1 \frac{d + \log(1/\delta)}{\epsilon}$, where $C_1$ is a constant. Applying this with $\epsilon = 1/2 - \gamma$ we immediately obtain that if $d = \infty$ then $\mathcal H$ is not $\gamma$-weak-learnable. This implies that
from the statistical perspective (i.e., if we ignore computational complexity), weak
learnability is also characterized by the VC dimension of H and therefore is just as
hard as PAC (strong) learning. However, when we do consider computational complexity, the potential advantage of weak learning is that maybe there is an algorithm
that satisfies the requirements of weak learning and can be implemented efficiently.
One possible approach is to take a simple hypothesis class, denoted $B$, and to apply ERM with respect to $B$ as the weak learning algorithm. For this to work, we need $B$ to satisfy two requirements:
- $\mathrm{ERM}_B$ is efficiently implementable.
- For every sample that is labeled by some hypothesis from $\mathcal H$, any $\mathrm{ERM}_B$ hypothesis will have an error of at most $1/2 - \gamma$.
Then, the immediate question is whether we can boost an efficient weak learner
into an efficient strong learner. In the next section we will show that this is indeed
possible, but before that, let us show an example in which efficient weak learnability
of a class H is possible using a base hypothesis class B.
Example 10.1 (Weak Learning of 3-Piece Classifiers Using Decision Stumps). Let $\mathcal X = \mathbb{R}$ and let $\mathcal H$ be the class of 3-piece classifiers, namely, $\mathcal H = \{h_{\theta_1,\theta_2,b} : \theta_1, \theta_2 \in \mathbb{R},\ \theta_1 < \theta_2,\ b \in \{\pm 1\}\}$, where for every $x$,
$$h_{\theta_1,\theta_2,b}(x) \;=\; \begin{cases} +b & \text{if } x < \theta_1 \text{ or } x > \theta_2 \\ -b & \text{if } \theta_1 \le x \le \theta_2 \end{cases}$$
An example hypothesis (for $b = 1$) is illustrated as follows:
[Figure: the real line, labeled $+$ to the left of $\theta_1$, $-$ between $\theta_1$ and $\theta_2$, and $+$ to the right of $\theta_2$.]
$$\sum_{i=1}^m D_i\, 1_{[h(\mathbf{x}_i) \neq y_i]}.$$
$$\min_{j \in [d]}\ \min_{\theta \in \mathbb{R}}\ \left[\ \sum_{i:\, y_i = 1} D_i\, 1_{[x_{i,j} > \theta]} \;+\; \sum_{i:\, y_i = -1} D_i\, 1_{[x_{i,j} \le \theta]}\ \right]. \tag{10.1}$$
Therefore, we can calculate the objective at $\theta'$ in constant time, given the objective at the previous threshold, $\theta$. It follows that after a preprocessing step in which we
sort the examples with respect to each coordinate, the minimization problem can be
performed in time O(dm). This yields the following pseudocode.
ERM for Decision Stumps
input:
  training set $S = (\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)$
  distribution vector $D$
goal: Find $j^*, \theta^*$ that solve Equation (10.1)
initialize: $F^* = \infty$
for $j = 1, \ldots, d$
  sort $S$ using the $j$'th coordinate, and denote
    $x_{1,j} \le x_{2,j} \le \cdots \le x_{m,j} \le x_{m+1,j} \stackrel{\text{def}}{=} x_{m,j} + 1$
  $F = \sum_{i: y_i = 1} D_i$
  if $F < F^*$
    $F^* = F$, $\theta^* = x_{1,j} - 1$, $j^* = j$
  for $i = 1, \ldots, m$
    $F = F - y_i D_i$
    if $F < F^*$ and $x_{i,j} \neq x_{i+1,j}$
      $F^* = F$, $\theta^* = \frac{1}{2}(x_{i,j} + x_{i+1,j})$, $j^* = j$
output $j^*, \theta^*$
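A direct NumPy rendering of this pseudocode is given below; it handles a single stump polarity, exactly as Equation (10.1) does, and the variable names are our own:

```python
import numpy as np

def erm_decision_stump(X, y, D):
    """Weighted ERM for decision stumps, following the pseudocode above.

    X: (m, d) array; y: (m,) array in {+1, -1}; D: (m,) nonnegative weights summing to 1.
    Returns (j, theta) minimizing Equation (10.1); the corresponding stump predicts
    +1 when x[j] <= theta and -1 otherwise (one of the two stump polarities).
    """
    m, d = X.shape
    F_star, j_star, theta_star = np.inf, 0, 0.0
    for j in range(d):
        order = np.argsort(X[:, j])
        xs, ys, Ds = X[order, j], y[order], D[order]
        xs_next = np.append(xs[1:], xs[-1] + 1)     # sentinel value x_{m+1,j}
        F = np.sum(D[y == 1])                       # theta below all points: all predicted -1
        if F < F_star:
            F_star, theta_star, j_star = F, xs[0] - 1, j
        for i in range(m):
            F = F - ys[i] * Ds[i]                   # example i moves to the +1 side
            if F < F_star and xs[i] != xs_next[i]:
                F_star, theta_star, j_star = F, 0.5 * (xs[i] + xs_next[i]), j
        # a second pass with flipped labels would cover the opposite polarity
    return j_star, theta_star
```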
10.2 ADABOOST
AdaBoost (short for Adaptive Boosting) is an algorithm that has access to a weak learner and finds a hypothesis with a low empirical risk. The AdaBoost algorithm receives as input a training set of examples $S = (\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)$, where for each $i$, $y_i = f(\mathbf{x}_i)$ for some labeling function $f$. The boosting process proceeds in a sequence of consecutive rounds. At round $t$, the booster first defines a distribution over the examples in $S$, denoted $D^{(t)}$. That is, $D^{(t)} \in \mathbb{R}^m_+$ and $\sum_{i=1}^m D_i^{(t)} = 1$. Then, the booster passes the distribution $D^{(t)}$ and the sample $S$ to the weak learner. (That way, the weak learner can construct i.i.d. examples according to $D^{(t)}$ and $f$.) The weak learner is assumed to return a "weak" hypothesis, $h_t$, whose error,
$$\epsilon_t \;\stackrel{\text{def}}{=}\; L_{D^{(t)}}(h_t) \;\stackrel{\text{def}}{=}\; \sum_{i=1}^m D_i^{(t)}\, 1_{[h_t(\mathbf{x}_i) \neq y_i]},$$
is at most $\frac{1}{2} - \gamma$.
strong classifier that is based on a weighted sum of all the weak hypotheses. The
pseudocode of AdaBoost is presented in the following.

AdaBoost
input:
  training set $S = (\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)$
  weak learner WL
  number of rounds $T$
initialize $D^{(1)} = (\frac{1}{m}, \ldots, \frac{1}{m})$.
for $t = 1, \ldots, T$:
  invoke weak learner $h_t = \mathrm{WL}(D^{(t)}, S)$
  compute $\epsilon_t = \sum_{i=1}^m D_i^{(t)}\, 1_{[y_i \neq h_t(\mathbf{x}_i)]}$
  let $w_t = \frac{1}{2}\log\big(\frac{1}{\epsilon_t} - 1\big)$
  update $D_i^{(t+1)} = \dfrac{D_i^{(t)} \exp(-w_t y_i h_t(\mathbf{x}_i))}{\sum_{j=1}^m D_j^{(t)} \exp(-w_t y_j h_t(\mathbf{x}_j))}$ for all $i = 1, \ldots, m$
output the hypothesis $h_s(\mathbf{x}) = \mathrm{sign}\big(\sum_{t=1}^T w_t\, h_t(\mathbf{x})\big)$.
The following theorem shows that the training error of the output hypothesis
decreases exponentially fast with the number of boosting rounds.
Theorem 10.2. Let $S$ be a training set and assume that at each iteration of AdaBoost, the weak learner returns a hypothesis for which $\epsilon_t \le 1/2 - \gamma$. Then, the training error of the output hypothesis of AdaBoost is at most
$$L_S(h_s) \;=\; \frac{1}{m}\sum_{i=1}^m 1_{[h_s(\mathbf{x}_i) \neq y_i]} \;\le\; \exp(-2\gamma^2 T).$$
Proof. For each round $t$, denote $f_t = \sum_{p \le t} w_p h_p$, so that the output of AdaBoost is $f_T$. In addition, denote
$$Z_t \;=\; \frac{1}{m}\sum_{i=1}^m e^{-y_i f_t(\mathbf{x}_i)}.$$
Note that for any hypothesis we have that $1_{[h(\mathbf{x}) \neq y]} \le e^{-y h(\mathbf{x})}$. Therefore, $L_S(f_T) \le Z_T$, so it suffices to show that $Z_T \le e^{-2\gamma^2 T}$. To upper bound $Z_T$ we rewrite it as
$$Z_T \;=\; \frac{Z_T}{Z_0} \;=\; \frac{Z_T}{Z_{T-1}} \cdot \frac{Z_{T-1}}{Z_{T-2}} \cdots \frac{Z_2}{Z_1} \cdot \frac{Z_1}{Z_0}, \tag{10.2}$$
where we used the fact that $Z_0 = 1$ because $f_0 \equiv 0$. Therefore, it suffices to show that for every round $t$,
$$\frac{Z_{t+1}}{Z_t} \;\le\; e^{-2\gamma^2}. \tag{10.3}$$
To do so, we first note that using a simple inductive argument, for all $t$ and $i$,
$$D_i^{(t+1)} \;=\; \frac{e^{-y_i f_t(\mathbf{x}_i)}}{\sum_{j=1}^m e^{-y_j f_t(\mathbf{x}_j)}}.$$
Hence,
$$\begin{aligned}
\frac{Z_{t+1}}{Z_t} &= \frac{\frac{1}{m}\sum_{i=1}^m e^{-y_i f_{t+1}(\mathbf{x}_i)}}{\frac{1}{m}\sum_{j=1}^m e^{-y_j f_t(\mathbf{x}_j)}} \;=\; \frac{\sum_{i=1}^m e^{-y_i f_t(\mathbf{x}_i)}\, e^{-y_i w_{t+1} h_{t+1}(\mathbf{x}_i)}}{\sum_{j=1}^m e^{-y_j f_t(\mathbf{x}_j)}} \\
&= \sum_{i=1}^m D_i^{(t+1)}\, e^{-y_i w_{t+1} h_{t+1}(\mathbf{x}_i)} \\
&= e^{-w_{t+1}} \sum_{i:\, y_i h_{t+1}(\mathbf{x}_i)=1} D_i^{(t+1)} \;+\; e^{w_{t+1}} \sum_{i:\, y_i h_{t+1}(\mathbf{x}_i)=-1} D_i^{(t+1)} \\
&= e^{-w_{t+1}} (1-\epsilon_{t+1}) + e^{w_{t+1}} \epsilon_{t+1} \\
&= \frac{1}{\sqrt{1/\epsilon_{t+1} - 1}}\,(1-\epsilon_{t+1}) + \sqrt{1/\epsilon_{t+1} - 1}\;\epsilon_{t+1} \\
&= \sqrt{\frac{\epsilon_{t+1}}{1-\epsilon_{t+1}}}\,(1-\epsilon_{t+1}) + \sqrt{\frac{1-\epsilon_{t+1}}{\epsilon_{t+1}}}\;\epsilon_{t+1} \\
&= 2\sqrt{\epsilon_{t+1}\,(1-\epsilon_{t+1})}.
\end{aligned}$$
By our assumption, $\epsilon_{t+1} \le \frac{1}{2} - \gamma$, and since the function $a \mapsto a(1-a)$ is monotonically increasing in $[0, \frac{1}{2}]$,
$$2\sqrt{\epsilon_{t+1}\,(1-\epsilon_{t+1})} \;\le\; 2\sqrt{\left(\tfrac{1}{2}-\gamma\right)\left(\tfrac{1}{2}+\gamma\right)} \;=\; \sqrt{1 - 4\gamma^2}.$$
Finally, using the inequality $1 - a \le e^{-a}$ we have that $\sqrt{1-4\gamma^2} \le e^{-4\gamma^2/2} = e^{-2\gamma^2}$. This shows that Equation (10.3) holds and thus concludes our proof.
Each iteration of AdaBoost involves O(m) operations as well as a single call to
the weak learner. Therefore, if the weak learner can be implemented efficiently (as
happens in the case of ERM with respect to decision stumps) then the total training
process will be efficient.
Remark 10.2. Theorem 10.2 assumes that at each iteration of AdaBoost, the weak learner returns a hypothesis with weighted sample error of at most $1/2 - \gamma$. According to the definition of a weak learner, it can fail with probability $\delta$. Using the union bound, the probability that the weak learner will not fail at all of the iterations is at least $1 - \delta T$. As we show in Exercise 10.1, the dependence of the sample complexity on $\delta$ can always be logarithmic in $1/\delta$, and therefore invoking the weak learner with a very small $\delta$ is not problematic. We can therefore assume that $\delta T$ is also small. Furthermore, since the weak learner is only applied with distributions over the training set, in many cases we can implement the weak learner so that it will have a zero probability of failure (i.e., $\delta = 0$). This is the case, for example, in the weak
learner that finds the minimum value of L D (h) for decision stumps, as described in
the previous section.
Theorem 10.2 tells us that the empirical risk of the hypothesis constructed by
AdaBoost goes to zero as T grows. However, what we really care about is the true
risk of the output hypothesis. To argue about the true risk, we note that the output
of AdaBoost is in fact a composition of a halfspace over the predictions of the T
weak hypotheses constructed by the weak learner. In the next section we show that
if the weak hypotheses come from a base hypothesis class of low VC-dimension,
then the estimation error of AdaBoost will be small; namely, the true risk of the
output of AdaBoost would not be very far from its empirical risk.
$$g(x) \;=\; \sum_{i=1}^r \alpha_i\, 1_{[x \in (\theta_{i-1}, \theta_i]]}, \qquad \forall i,\ \alpha_i \in \{\pm 1\}.$$
Denote by $G_r$ the class of all such piece-wise constant classifiers with at most $r$ pieces.
In the following we show that $G_T \subseteq L(H_{DS1}, T)$; namely, the class of halfspaces over $T$ decision stumps yields all the piece-wise constant classifiers with at most $T$ pieces.
Indeed, without loss of generality consider any $g \in G_T$ with $\alpha_t = (-1)^t$. This implies that if $x$ is in the interval $(\theta_{t-1}, \theta_t]$, then $g(x) = (-1)^t$. For example:
$$h(x) \;=\; \mathrm{sign}\!\left(\sum_{t=1}^T w_t\, \mathrm{sign}(x - \theta_{t-1})\right), \tag{10.5}$$
by $\tilde{O}(\mathrm{VCdim}(B) \cdot T)$ (the $\tilde{O}$ notation ignores constants and logarithmic factors).
Lemma 10.3. Let $B$ be a base class and let $L(B,T)$ be as defined in Equation (10.4). Assume that both $T$ and $\mathrm{VCdim}(B)$ are at least 3. Then,
$$\mathrm{VCdim}(L(B,T)) \;\le\; T\,(\mathrm{VCdim}(B)+1)\,\big(3\log\!\big(T\,(\mathrm{VCdim}(B)+1)\big)+2\big).$$
Proof. Denote $d = \mathrm{VCdim}(B)$. Let $C = \{\mathbf{x}_1, \ldots, \mathbf{x}_m\}$ be a set that is shattered by $L(B,T)$. Each labeling of $C$ by $h \in L(B,T)$ is obtained by first choosing $h_1, \ldots, h_T \in B$ and then applying a halfspace hypothesis over the vector $(h_1(\mathbf{x}), \ldots, h_T(\mathbf{x}))$. By Sauer's lemma, there are at most $(em/d)^d$ different dichotomies (i.e., labelings) induced by $B$ over $C$. Therefore, we need to choose $T$ hypotheses, out of at most $(em/d)^d$ different hypotheses. There are at most $(em/d)^{dT}$ ways to do it. Next, for each such choice, we apply a linear predictor, which yields at most $(em/T)^T$ dichotomies. Therefore, the overall number of dichotomies we can construct is
upper bounded by
$$(em/d)^{dT}\,(em/T)^T \;\le\; m^{(d+1)T},$$
where we used the assumption that both $d$ and $T$ are at least 3. Since we assume that $C$ is shattered, we must have that the preceding is at least $2^m$, which yields
$$2^m \;\le\; m^{(d+1)T}.$$
Therefore,
$$m \;\le\; \log(m)\,\frac{(d+1)T}{\log(2)}.$$
Lemma A.1 in Appendix A tells us that a necessary condition for the preceding to hold is that
$$m \;\le\; 2\,\frac{(d+1)T}{\log(2)}\,\log\frac{(d+1)T}{\log(2)} \;\le\; (d+1)T\,\big(3\log((d+1)T)+2\big),$$
To calculate g we stretch the mask t to fit the rectangle R and then calculate the
sum of the pixels (that is, sum of their gray level values) that lie within the outer
rectangles and subtract it from the sum of pixels in the inner rectangles.
Since the number of such functions $g$ is at most $24^4 \cdot 4$, we can implement a weak
learner for the base hypothesis class by first calculating all the possible outputs of
g on each image, and then apply the weak learner of decision stumps described in
the previous subsection. It is possible to perform the first step very efficiently by
a preprocessing step in which we calculate the integral image of each image in the
training set. See Exercise 10.5 for details.
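For illustration, here is a sketch of the standard integral-image construction via cumulative sums, together with a constant-time rectangle sum; Exercise 10.5 asks you to work out these details yourself, so treat this only as a hint:

```python
import numpy as np

def integral_image(A):
    """Integral image: B[i, j] = sum of A[i', j'] over all i' <= i and j' <= j."""
    return A.cumsum(axis=0).cumsum(axis=1)

def rect_sum(B, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle [top..bottom] x [left..right],
    using at most four lookups in the integral image B."""
    total = B[bottom, right]
    if top > 0:
        total -= B[top - 1, right]
    if left > 0:
        total -= B[bottom, left - 1]
    if top > 0 and left > 0:
        total += B[top - 1, left - 1]
    return total
```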
In Figure 10.2 we depict the first two features selected by AdaBoost when
running it with the base features proposed by Viola and Jones.
Figure 10.1. The four types of functions, g, used by the base hypotheses for face recognition. The value of g for type A or B is the difference between the sum of the pixels within
two rectangular regions. These regions have the same size and shape and are horizontally
or vertically adjacent. For type C, the value of g is the sum within two outside rectangles
subtracted from the sum in a center rectangle. For type D, we compute the difference
between diagonal pairs of rectangles.
Figure 10.2. The first and second features selected by AdaBoost, as implemented by Viola
and Jones. The two features are shown in the top row and then overlaid on a typical training face in the bottom row. The first feature measures the difference in intensity between
the region of the eyes and a region across the upper cheeks. The feature capitalizes on
the observation that the eye region is often darker than the cheeks. The second feature
compares the intensities in the eye regions to the intensity across the bridge of the nose.
10.5 SUMMARY
Boosting is a method for amplifying the accuracy of weak learners. In this chapter
we described the AdaBoost algorithm. We have shown that after T iterations of
AdaBoost, it returns a hypothesis from the class L(B, T ), obtained by composing a
linear classifier on T hypotheses from a base class B. We have demonstrated how the
parameter T controls the tradeoff between approximation and estimation errors. In
the next chapter we will study how to tune parameters such as T, on the basis of the
data.
Boosting can be viewed from many perspectives. In the purely theoretical context, AdaBoost can be interpreted as a negative result: If strong learning of a
hypothesis class is computationally hard, so is weak learning of this class. This negative result can be useful for showing hardness of agnostic PAC learning of a class
B based on hardness of PAC learning of some other class H, as long as H is weakly
learnable using B. For example, Klivans and Sherstov (2006) have shown that PAC
learning of the class of intersection of halfspaces is hard (even in the realizable case).
This hardness result can be used to show that agnostic PAC learning of a single halfspace is also computationally hard (Shalev-Shwartz, Shamir & Sridharan 2010). The
idea is to show that an agnostic PAC learner for a single halfspace can yield a weak
learner for the class of intersection of halfspaces, and since such a weak learner can
be boosted, we will obtain a strong learner for the class of intersection of halfspaces.
AdaBoost also shows an equivalence between the existence of a weak learner
and separability of the data using a linear classifier over the predictions of base
hypotheses. This result is closely related to von Neumann's minimax theorem (von
Neumann 1928), a fundamental result in game theory.
AdaBoost is also related to the concept of margin, which we will study later
on in Chapter 15. It can also be viewed as a forward greedy selection algorithm, a
topic that will be presented in Chapter 25. A recent book by Schapire and Freund
(2012) covers boosting from all points of view and gives easy access to the wealth of
research that this field has produced.
10.7 EXERCISES
10.1 Boosting the Confidence: Let $A$ be an algorithm that guarantees the following: There exist some constant $\delta_0 \in (0,1)$ and a function $m_{\mathcal H}: (0,1) \to \mathbb{N}$ such that for every $\epsilon \in (0,1)$, if $m \ge m_{\mathcal H}(\epsilon)$ then for every distribution $\mathcal D$ it holds that with probability of at least $1 - \delta_0$, $L_{\mathcal D}(A(S)) \le \min_{h\in\mathcal H} L_{\mathcal D}(h) + \epsilon$.
Suggest a procedure that relies on $A$ and learns $\mathcal H$ in the usual agnostic PAC learning model and has a sample complexity of
$$m_{\mathcal H}(\epsilon, \delta) \;\le\; k\, m_{\mathcal H}(\epsilon) + \frac{2\log(4k/\delta)}{\epsilon^2},$$
where
$$k = \log(\delta)/\log(\delta_0).$$
Hint: Divide the data into $k+1$ chunks, where each of the first $k$ chunks is of size $m_{\mathcal H}(\epsilon)$ examples. Train the first $k$ chunks using $A$. Argue that the probability that for all of these chunks we have $L_{\mathcal D}(A(S)) > \min_{h\in\mathcal H} L_{\mathcal D}(h) + \epsilon$ is at most $\delta_0^k \le \delta/2$. Finally, use the last chunk to choose from the $k$ hypotheses that $A$ generated from the $k$ chunks (by relying on Corollary 4.6).
10.2 Prove that the function h given in Equation (10.5) equals the piece-wise constant
function defined according to the same thresholds as h.
10.3 We have informally argued that the AdaBoost algorithm uses the weighting mechanism to force the weak learner to focus on the problematic examples in the next
iteration. In this question we will find some rigorous justification for this argument.
Show that the error of $h_t$ w.r.t. the distribution $D^{(t+1)}$ is exactly $1/2$. That is, show that for every $t \in [T]$,
$$\sum_{i=1}^m D_i^{(t+1)}\, 1_{[y_i \neq h_t(\mathbf{x}_i)]} \;=\; 1/2.$$
10.4 In this exercise we discuss the VC-dimension of classes of the form L(B, T ). We
proved an upper bound of O(dT log (dT )), where d = VCdim(B). Here we wish to
prove an almost matching lower bound. However, that will not be the case for all
classes B.
1. Note that for every class $B$ and every number $T \ge 1$, $\mathrm{VCdim}(B) \le \mathrm{VCdim}(L(B,T))$. Find a class $B$ for which $\mathrm{VCdim}(B) = \mathrm{VCdim}(L(B,T))$ for every $T \ge 1$.
Hint: Take $\mathcal X$ to be a finite set.
2. Let $B_d$ be the class of decision stumps over $\mathbb{R}^d$. Prove that $\log(d) \le \mathrm{VCdim}(B_d) \le 5 + 2\log(d)$.
Hints:
- For the upper bound, rely on Exercise 10.11.
- For the lower bound, assume $d = 2^k$. Let $A$ be a $k \times d$ matrix whose columns are all the $d$ binary vectors in $\{\pm 1\}^k$. The rows of $A$ form a set of $k$ vectors in $\mathbb{R}^d$. Show that this set is shattered by decision stumps over $\mathbb{R}^d$.
3. Let $T \ge 1$ be any integer. Prove that $\mathrm{VCdim}(L(B_d, T)) \ge 0.5\, T \log(d)$.
Hint: Construct a set of $\frac{T}{2}\,k$ instances by taking the rows of the matrix $A$ from the previous question, and the rows of the matrices $2A, 3A, 4A, \ldots, \frac{T}{2}A$. Show that the resulting set is shattered by $L(B_d, T)$.
10.5 Efficiently Calculating the Viola and Jones Features Using an Integral Image: Let $A$ be a $24 \times 24$ matrix representing an image. The integral image of $A$, denoted by $I(A)$, is the matrix $B$ such that $B_{i,j} = \sum_{i' \le i,\, j' \le j} A_{i',j'}$.
- Show that $I(A)$ can be calculated from $A$ in time linear in the size of $A$.
- Show how every Viola and Jones feature can be calculated from $I(A)$ in a constant amount of time (that is, the runtime does not depend on the size of the rectangle defining the feature).
11
Model Selection and Validation
In the previous chapter we have described the AdaBoost algorithm and have shown
how the parameter T of AdaBoost controls the bias-complexity tradeoff. But how
do we set T in practice? More generally, when approaching some practical problem,
we usually can think of several algorithms that may yield a good solution, each of
which might have several parameters. How can we choose the best algorithm for the
particular problem at hand? And how do we set the algorithm's parameters? This
task is often called model selection.
To illustrate the model selection task, consider the problem of learning a one
dimensional regression function, h : R R. Suppose that we obtain a training set as
depicted in the figure.
We can consider fitting a polynomial to the data, as described in Chapter 9. However, we might be uncertain regarding which degree d would give the best results
for our data set: A small degree may not fit the data well (i.e., it will have a large
approximation error), whereas a high degree may lead to overfitting (i.e., it will have
a large estimation error). In the following we depict the result of fitting a polynomial of degrees 2, 3, and 10. It is easy to see that the empirical risk decreases as we
enlarge the degree. However, looking at the graphs, our intuition tells us that setting
the degree to 3 may be better than setting it to 10. It follows that the empirical risk
alone is not enough for model selection.
[Figure: polynomial fits of degrees 2, 3, and 10 to the training set.]
In this chapter we will present two approaches for model selection. The first
approach is based on the Structural Risk Minimization (SRM) paradigm we have
described and analyzed in Section 7.2. SRM is particularly useful when a learning
algorithm depends on a parameter that controls the bias-complexity tradeoff (such
as the degree of the fitted polynomial in the preceding example or the parameter T
in AdaBoost). The second approach relies on the concept of validation. The basic
idea is to partition the training set into two sets. One is used for training each of the
candidate models, and the second is used for deciding which of them yields the best
results.
In model selection tasks, we try to find the right balance between approximation and estimation errors. More generally, if our learning algorithm fails to find
a predictor with a small risk, it is important to understand whether we suffer from
overfitting or underfitting. In Section 11.3 we discuss how this can be achieved.
This bound, which follows directly from Theorem 7.4, shows that for every $d$ and every $h \in \mathcal{H}_d$, the true risk is bounded by two terms: the empirical risk, $L_S(h)$, and a complexity term that depends on $d$. The SRM rule will search for $d$ and $h \in \mathcal{H}_d$ that minimize the right-hand side of Equation (11.2).
Getting back to the example of polynomial regression described earlier, even
though the empirical risk of the 10th degree polynomial is smaller than that of the
3rd degree polynomial, we would still prefer the 3rd degree polynomial since its
complexity (as reflected by the value of the function g(d)) is much smaller.
While the SRM approach can be useful in some situations, in many practical
cases the upper bound given in Equation (11.2) is pessimistic. In the next section we
present a more practical approach.
11.2 VALIDATION
We would often like to get a better estimation of the true risk of the output predictor
of a learning algorithm. So far we have derived bounds on the estimation error of
a hypothesis class, which tell us that for all hypotheses in the class, the true risk
is not very far from the empirical risk. However, these bounds might be loose and
pessimistic, as they hold for all hypotheses and all possible data distributions. A
more accurate estimation of the true risk can be obtained by using some of the
training data as a validation set, over which one can evaluate the success of the algorithm's output predictor. This procedure is called validation.
Naturally, a better estimation of the true risk is useful for model selection, as we
will describe in Section 11.2.2.
To illustrate how validation is useful for model selection, consider again the
example of fitting a one dimensional polynomial as described in the beginning of
this chapter. In the following we depict the same training set, with ERM polynomials of degree 2, 3, and 10, but this time we also depict an additional validation set
(marked as red, unfilled circles). The polynomial of degree 10 has minimal training
error, yet the polynomial of degree 3 has the minimal validation error, and hence it
will be chosen as the best model.
[Figure: training and validation errors of the fitted polynomials, plotted against the degree (from 2 to 10).]
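A minimal sketch of this validation-based model selection for the polynomial example; the candidate degrees and the use of NumPy's `polyfit` as the ERM solver for polynomials are illustrative choices of ours:

```python
import numpy as np

def select_degree(x_train, y_train, x_val, y_val, degrees=(1, 2, 3, 10)):
    """Validation-based model selection for one dimensional polynomial regression.

    Fits one polynomial per candidate degree on the training set (np.polyfit as
    the ERM solver) and returns the degree with the smallest validation MSE.
    """
    best_degree, best_err = None, np.inf
    for d in degrees:
        coeffs = np.polyfit(x_train, y_train, deg=d)
        val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
        if val_err < best_err:
            best_degree, best_err = d, val_err
    return best_degree, best_err
```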
with a rough grid of values for the parameter(s) and plot the corresponding model-selection curve. On the basis of the curve we will zoom in to the correct regime
and employ a finer grid to search over. It is important to verify that we are in the
relevant regime. For example, in the polynomial fitting problem described, if we
start searching degrees from the set of values {1, 10, 20} and do not employ a finer
grid based on the resulting curve, we will end up with a rather poor model.
understanding the exact behavior of cross validation is still an open problem. Rogers
and Wagner (Rogers & Wagner 1978) have shown that for k local rules (e.g., k
Nearest Neighbor; see Chapter 19) the cross validation procedure gives a very good
estimate of the true error. Other papers show that cross validation works for stable
algorithms (we will study stability and its relation to learnability in Chapter 13).
In order to find the best remedy, it is essential first to understand the cause of
the bad performance. Recall that in Chapter 5 we decomposed the true error of
the learned predictor into approximation error and estimation error. The approximation error is defined to be $L_{\mathcal D}(h^*)$ for some $h^* \in \operatorname{argmin}_{h\in\mathcal H} L_{\mathcal D}(h)$, while the estimation error is defined to be $L_{\mathcal D}(h_S) - L_{\mathcal D}(h^*)$, where $h_S$ is the learned predictor (which is based on the training set $S$).
The approximation error of the class does not depend on the sample size or on
the algorithm being used. It only depends on the distribution D and on the hypothesis class H. Therefore, if the approximation error is large, it will not help us to
enlarge the training set size, and it also does not make sense to reduce the hypothesis class. What can be beneficial in this case is to enlarge the hypothesis class or
completely change it (if we have some alternative prior knowledge in the form of a
different hypothesis class). We can also consider applying the same hypothesis class
but on a different feature representation of the data (see Chapter 25).
The estimation error of the class does depend on the sample size. Therefore,
if we have a large estimation error we can make an effort to obtain more training
examples. We can also consider reducing the hypothesis class. However, it doesn't
make sense to enlarge the hypothesis class in that case.
Error Decomposition Using Validation
We see that understanding whether our problem is due to approximation error or
estimation error is very useful for finding the best remedy. In the previous section we
saw how to estimate L D (h S ) using the empirical risk on a validation set. However,
it is more difficult to estimate the approximation error of the class. Instead, we
give a different error decomposition, one that can be estimated from the train and
validation sets.
$$L_{\mathcal D}(h_S) \;=\; (L_{\mathcal D}(h_S) - L_V(h_S)) + (L_V(h_S) - L_S(h_S)) + L_S(h_S).$$
The first term, $(L_{\mathcal D}(h_S) - L_V(h_S))$, can be bounded quite tightly using Theorem 11.1. Intuitively, when the second term, $(L_V(h_S) - L_S(h_S))$, is large we say that our algorithm suffers from overfitting, while when the empirical risk term, $L_S(h_S)$, is large we say that our algorithm suffers from underfitting. Note that these two terms are not necessarily good estimates of the estimation and approximation errors. To illustrate this, consider the case in which $\mathcal H$ is a class of VC-dimension $d$, and $\mathcal D$ is a distribution such that the approximation error of $\mathcal H$ with respect to $\mathcal D$ is $1/4$. As long as the size of our training set is smaller than $d$ we will have $L_S(h_S) = 0$ for every ERM hypothesis. Therefore, the training risk, $L_S(h_S)$, and the approximation error, $L_{\mathcal D}(h^*)$, can be significantly different. Nevertheless, as we show later, the values of $L_S(h_S)$ and $(L_V(h_S) - L_S(h_S))$ still provide us with useful information.
Consider first the case in which $L_S(h_S)$ is large. We can write
$$L_S(h_S) \;=\; (L_S(h_S) - L_S(h^*)) + (L_S(h^*) - L_{\mathcal D}(h^*)) + L_{\mathcal D}(h^*).$$
When $h_S$ is an $\mathrm{ERM}_{\mathcal H}$ hypothesis we have that $L_S(h_S) - L_S(h^*) \le 0$. In addition, since $h^*$ does not depend on $S$, the term $(L_S(h^*) - L_{\mathcal D}(h^*))$ can be bounded quite tightly (as in Theorem 11.1). The last term is the approximation error. It follows that if $L_S(h_S)$ is large then so is the approximation error, and the remedy to the failure of our algorithm should be tailored accordingly (as discussed previously).
Remark 11.1. It is possible that the approximation error of our class is small, yet
the value of L S (h S ) is large. For example, maybe we had a bug in our ERM implementation, and the algorithm returns a hypothesis h S that is not an ERM. It may
also be the case that finding an ERM hypothesis is computationally hard, and our
algorithm applies some heuristic trying to find an approximate ERM. In some cases,
it is hard to know how good h S is relative to an ERM hypothesis. But, sometimes it
is possible at least to know whether there are better hypotheses. For example, in the
next chapter we will study convex learning problems in which there are optimality
conditions that can be checked to verify whether our optimization algorithm converged to an ERM solution. In other cases, the solution may depend on randomness
in initializing the algorithm, so we can try different randomly selected initial points
to see whether better solutions pop out.
Figure 11.1. Examples of learning curves. Left: This learning curve corresponds to the
scenario in which the number of examples is always smaller than the VC dimension of the
class. Right: This learning curve corresponds to the scenario in which the approximation
error is zero and the number of examples is larger than the VC dimension of the class.
Next consider the case in which L S (h S ) is small. As we argued before, this does
not necessarily imply that the approximation error is small. Indeed, consider two
scenarios, in both of which we are trying to learn a hypothesis class of VC-dimension
d using the ERM learning rule. In the first scenario, we have a training set of m < d
examples and the approximation error of the class is high. In the second scenario,
we have a training set of m > 2d examples and the approximation error of the class
is zero. In both cases L S (h S ) = 0. How can we distinguish between the two cases?
Learning Curves
One possible way to distinguish between the two cases is by plotting learning curves.
To produce a learning curve we train the algorithm on prefixes of the data of increasing sizes. For example, we can first train the algorithm on the first 10% of the
examples, then on 20% of them, and so on. For each prefix we calculate the training
error (on the prefix the algorithm is being trained on) and the validation error (on
a predefined validation set). Such learning curves can help us distinguish between
the two aforementioned scenarios. In the first scenario we expect the validation
error to be approximately 1/2 for all prefixes, as we didn't really learn anything.
In the second scenario the validation error will start as a constant but then should
start decreasing (it must start decreasing once the training set size is larger than the
VC-dimension). An illustration of the two cases is given in Figure 11.1.
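A sketch of how such learning curves can be produced programmatically; the prefix fractions and the generic `train` callable are our own illustrative choices:

```python
import numpy as np

def learning_curve(train, X, y, X_val, y_val,
                   fractions=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Train on growing prefixes of the data and record train/validation errors.

    train(X_tr, y_tr) should fit a predictor and return a function mapping
    instances to predicted labels; it stands in for the algorithm being diagnosed.
    Returns a list of (prefix size, train error, validation error) triples.
    """
    m = len(y)
    curve = []
    for frac in fractions:
        k = max(1, int(frac * m))
        h = train(X[:k], y[:k])
        train_err = np.mean(h(X[:k]) != y[:k])
        val_err = np.mean(h(X_val) != y_val)
        curve.append((k, train_err, val_err))
    return curve
```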
In general, as long as the approximation error is greater than zero we expect the
training error to grow with the sample size, as a larger amount of data points makes
it harder to provide an explanation for all of them. On the other hand, the validation
error tends to decrease with the increase in sample size. If the VC-dimension is
finite, when the sample size goes to infinity, the validation and train errors converge
to the approximation error. Therefore, by extrapolating the training and validation
curves we can try to guess the value of the approximation error, or at least to get a
rough estimate on an interval in which the approximation error resides.
Getting back to the problem of finding the best remedy for the failure of our
algorithm, if we observe that L S (h S ) is small while the validation error is large, then
in any case we know that the size of our training set is not sufficient for learning
the class H. We can then plot a learning curve. If we see that the validation error is
starting to decrease then the best solution is to increase the number of examples (if
we can afford to enlarge the data). Another reasonable solution is to decrease the
complexity of the hypothesis class. On the other hand, if we see that the validation
error is kept around 1/2 then we have no evidence that the approximation error of
H is good. It may be the case that increasing the training set size will not help us at
all. Obtaining more data can still help us, as at some point we can see whether the
validation error starts to decrease or whether the training error starts to increase.
But, if more data is expensive, it may be better first to try to reduce the complexity
of the hypothesis class.
To summarize the discussion, the following steps should be applied:
1. If learning involves parameter tuning, plot the model-selection curve to make
sure that you tuned the parameters appropriately (see Section 11.2.3).
2. If the training error is excessively large consider enlarging the hypothesis
class, completely changing it, or changing the feature representation of the data.
3. If the training error is small, plot learning curves and try to deduce from them
whether the problem is estimation error or approximation error.
4. If the approximation error seems to be small enough, try to obtain more data.
If this is not possible, consider reducing the complexity of the hypothesis class.
5. If the approximation error seems to be large as well, try to change the
hypothesis class or the feature representation of the data completely.
11.4 SUMMARY
Model selection is the task of selecting an appropriate model for the learning task
based on the data itself. We have shown how this can be done using the SRM learning paradigm or using the more practical approach of validation. If our learning
algorithm fails, a decomposition of the algorithms error should be performed using
learning curves, so as to find the best remedy.
11.5 EXERCISES
11.1 Failure of k-fold cross validation: Consider a case in which the label is chosen at random according to $\mathbb{P}[y=1] = \mathbb{P}[y=0] = 1/2$. Consider a learning algorithm that
outputs the constant predictor h(x) = 1 if the parity of the labels on the training set
is 1 and otherwise the algorithm outputs the constant predictor h(x) = 0. Prove that
the difference between the leave-one-out estimate and the true error in such a case
is always 1/2.
11.2 Let $\mathcal H_1, \ldots, \mathcal H_k$ be $k$ hypothesis classes. Suppose you are given $m$ i.i.d. training examples and you would like to learn the class $\mathcal H = \bigcup_{i=1}^k \mathcal H_i$. Consider two alternative approaches:
- Learn $\mathcal H$ on the $m$ examples using the ERM rule.
- Divide the $m$ examples into a training set of size $(1-\alpha)m$ and a validation set of size $\alpha m$, for some $\alpha \in (0,1)$. Then, apply the approach of model selection using validation. That is, first train each class $\mathcal H_i$ on the $(1-\alpha)m$ training examples using the ERM rule with respect to $\mathcal H_i$, and let $h_1, \ldots, h_k$ be the resulting hypotheses. Second, apply the ERM rule with respect to the finite class $\{h_1, \ldots, h_k\}$ on the $\alpha m$ validation examples.
Describe scenarios in which the first method is better than the second and vice versa.
12
Convex Learning Problems
Examples of convex and nonconvex sets in R2 are given in the following. For the
nonconvex sets, we depict two points in the set such that the line between the two
points is not contained in the set.
[Figure: a nonconvex set and a convex set in $\mathbb{R}^2$.]
[Figure: a convex function $f$; for every $\alpha \in [0,1]$ the value $f(\alpha u + (1-\alpha)v)$ lies below the chord value $\alpha f(u) + (1-\alpha) f(v)$.]
It is easy to verify that a function f is convex if and only if its epigraph is a convex
set. An illustration of a nonconvex function f : R R, along with its epigraph, is
given in the following.
$$f(\mathbf{u}) \;\le\; f(\mathbf{u} + \alpha(\mathbf{v} - \mathbf{u})). \tag{12.2}$$
$$f(\mathbf{u} + \alpha(\mathbf{v} - \mathbf{u})) \;=\; f(\alpha \mathbf{v} + (1-\alpha)\mathbf{u}) \;\le\; (1-\alpha) f(\mathbf{u}) + \alpha f(\mathbf{v}). \tag{12.3}$$
Combining these two equations and rearranging terms, we conclude that $f(\mathbf{u}) \le f(\mathbf{v})$. Since this holds for every $\mathbf{v}$, it follows that $f(\mathbf{u})$ is also a global minimum of $f$.
Another important property of convex functions is that for every $\mathbf{w}$ we can construct a tangent to $f$ at $\mathbf{w}$ that lies below $f$ everywhere. If $f$ is differentiable, this tangent is the linear function $l(\mathbf{u}) = f(\mathbf{w}) + \langle \nabla f(\mathbf{w}), \mathbf{u} - \mathbf{w}\rangle$, where $\nabla f(\mathbf{w})$ is the gradient of $f$ at $\mathbf{w}$, namely, the vector of partial derivatives of $f$, $\nabla f(\mathbf{w}) = \left(\frac{\partial f(\mathbf{w})}{\partial w_1}, \ldots, \frac{\partial f(\mathbf{w})}{\partial w_d}\right)$. That is, for convex differentiable functions,
$$f(\mathbf{u}) \;\ge\; f(\mathbf{w}) + \langle \nabla f(\mathbf{w}), \mathbf{u} - \mathbf{w}\rangle. \tag{12.4}$$
[Figure: a convex function lies above its tangent at $\mathbf{w}$.]
For a twice differentiable scalar function $f: \mathbb{R} \to \mathbb{R}$, the following are equivalent:
1. $f$ is convex.
2. $f'$ is monotonically nondecreasing.
3. $f''$ is nonnegative.
Example 12.1.
- The scalar function $f(x) = x^2$ is convex. To see this, note that $f'(x) = 2x$ and $f''(x) = 2 > 0$.
- The scalar function $f(x) = \log(1 + \exp(x))$ is convex. To see this, observe that $f'(x) = \frac{\exp(x)}{1+\exp(x)} = \frac{1}{\exp(-x)+1}$. This is a monotonically increasing function since the exponent function is a monotonically increasing function.
The following claim shows that the composition of a convex scalar function with
a linear function yields a convex vector-valued function.
Claim 12.4. Assume that $f: \mathbb{R}^d \to \mathbb{R}$ can be written as $f(\mathbf{w}) = g(\langle \mathbf{w}, \mathbf{x}\rangle + y)$, for some $\mathbf{x} \in \mathbb{R}^d$, $y \in \mathbb{R}$, and $g: \mathbb{R} \to \mathbb{R}$. Then, convexity of $g$ implies the convexity of $f$.
Proof. Let $\mathbf{w}_1, \mathbf{w}_2 \in \mathbb{R}^d$ and $\alpha \in [0,1]$. We have
$$\begin{aligned}
f(\alpha\mathbf{w}_1 + (1-\alpha)\mathbf{w}_2) &= g(\langle \alpha\mathbf{w}_1 + (1-\alpha)\mathbf{w}_2, \mathbf{x}\rangle + y) \\
&= g(\alpha\langle \mathbf{w}_1, \mathbf{x}\rangle + (1-\alpha)\langle \mathbf{w}_2, \mathbf{x}\rangle + y) \\
&= g\big(\alpha(\langle \mathbf{w}_1, \mathbf{x}\rangle + y) + (1-\alpha)(\langle \mathbf{w}_2, \mathbf{x}\rangle + y)\big) \\
&\le \alpha\, g(\langle \mathbf{w}_1, \mathbf{x}\rangle + y) + (1-\alpha)\, g(\langle \mathbf{w}_2, \mathbf{x}\rangle + y),
\end{aligned}$$
where the last inequality follows from the convexity of $g$.
Example 12.2.
- Given some $\mathbf{x} \in \mathbb{R}^d$ and $y \in \mathbb{R}$, let $f: \mathbb{R}^d \to \mathbb{R}$ be defined as $f(\mathbf{w}) = (\langle \mathbf{w}, \mathbf{x}\rangle - y)^2$. Then, $f$ is a composition of the function $g(a) = a^2$ onto a linear function, and hence $f$ is a convex function.
- Given some $\mathbf{x} \in \mathbb{R}^d$ and $y \in \{\pm 1\}$, let $f: \mathbb{R}^d \to \mathbb{R}$ be defined as $f(\mathbf{w}) = \log(1 + \exp(-y\langle \mathbf{w}, \mathbf{x}\rangle))$. Then, $f$ is a composition of the function $g(a) = \log(1 + \exp(a))$ onto a linear function, and hence $f$ is a convex function.
Finally, the following lemma shows that the maximum of convex functions is
convex and that a weighted sum of convex functions, with nonnegative weights, is
also convex.
Claim 12.5. For $i = 1, \ldots, r$, let $f_i: \mathbb{R}^d \to \mathbb{R}$ be a convex function. The following functions from $\mathbb{R}^d$ to $\mathbb{R}$ are also convex.
- $g(\mathbf{x}) = \max_{i \in [r]} f_i(\mathbf{x})$
- $g(\mathbf{x}) = \sum_{i=1}^r w_i f_i(\mathbf{x})$, where for all $i$, $w_i \ge 0$.
$$\le\; \alpha\, g(\mathbf{u}) + (1-\alpha)\, g(\mathbf{v}).$$
For the second claim,
$$\begin{aligned}
g(\alpha\mathbf{u} + (1-\alpha)\mathbf{v}) &= \sum_i w_i f_i(\alpha\mathbf{u} + (1-\alpha)\mathbf{v}) \\
&\le \sum_i w_i \big[\alpha f_i(\mathbf{u}) + (1-\alpha) f_i(\mathbf{v})\big] \\
&= \alpha \sum_i w_i f_i(\mathbf{u}) + (1-\alpha) \sum_i w_i f_i(\mathbf{v}) \\
&= \alpha\, g(\mathbf{u}) + (1-\alpha)\, g(\mathbf{v}).
\end{aligned}$$
Example 12.3. The function $g(x) = |x|$ is convex. To see this, note that $g(x) = \max\{x, -x\}$ and that both the function $f_1(x) = x$ and $f_2(x) = -x$ are convex.
12.1.2 Lipschitzness

The definition of Lipschitzness that follows is with respect to the Euclidean norm over ℝ^d. However, it is possible to define Lipschitzness with respect to any norm.

Definition 12.6 (Lipschitzness). Let C ⊆ ℝ^d. A function f : ℝ^d → ℝ^k is ρ-Lipschitz over C if for every w_1, w_2 ∈ C we have that ‖f(w_1) − f(w_2)‖ ≤ ρ‖w_1 − w_2‖.

Intuitively, a Lipschitz function cannot change too fast. Note that if f : ℝ → ℝ is differentiable, then by the mean value theorem we have
f(w_1) − f(w_2) = f′(u)(w_1 − w_2),
where u is some point between w_1 and w_2. It follows that if the derivative of f is everywhere bounded (in absolute value) by ρ, then the function is ρ-Lipschitz.
Example 12.4.
- The function f(x) = |x| is 1-Lipschitz over ℝ. This follows from the triangle inequality: For every x_1, x_2,
|x_1| − |x_2| = |x_1 − x_2 + x_2| − |x_2| ≤ |x_1 − x_2| + |x_2| − |x_2| = |x_1 − x_2|.
Since this holds for both the pair x_1, x_2 and the pair x_2, x_1, we obtain that ||x_1| − |x_2|| ≤ |x_1 − x_2|.
- The function f(x) = log(1 + exp(x)) is 1-Lipschitz over ℝ. To see this, observe that
|f′(x)| = exp(x)/(1 + exp(x)) = 1/(exp(−x) + 1) ≤ 1.
- The function f(x) = x² is not ρ-Lipschitz over ℝ for any ρ. To see this, take x_1 = 0 and x_2 = 1 + ρ; then
f(x_2) − f(x_1) = (1 + ρ)² > ρ(1 + ρ) = ρ|x_2 − x_1|.
However, this function is ρ-Lipschitz over the set C = {x : |x| ≤ ρ/2}. Indeed, for any x_1, x_2 ∈ C we have
|x_1² − x_2²| = |x_1 + x_2| |x_1 − x_2| ≤ 2(ρ/2)|x_1 − x_2| = ρ|x_1 − x_2|.
- The linear function f : ℝ^d → ℝ defined by f(w) = ⟨v, w⟩ + b, where v ∈ ℝ^d, is ‖v‖-Lipschitz. Indeed, using the Cauchy-Schwartz inequality,
|f(w_1) − f(w_2)| = |⟨v, w_1 − w_2⟩| ≤ ‖v‖ ‖w_1 − w_2‖.
12.1.3 Smoothness

The definition of a smooth function relies on the notion of gradient. Recall that the gradient of a differentiable function f : ℝ^d → ℝ at w, denoted ∇f(w), is the vector of partial derivatives of f, namely, ∇f(w) = (∂f(w)/∂w_1, ..., ∂f(w)/∂w_d).

Definition (Smoothness). A differentiable function f : ℝ^d → ℝ is β-smooth if its gradient is β-Lipschitz; namely, for all v, w, ‖∇f(v) − ∇f(w)‖ ≤ β‖v − w‖. It can be shown that smoothness implies that for all v, w,
f(v) ≤ f(w) + ⟨∇f(w), v − w⟩ + (β/2)‖v − w‖².   (12.5)

Recall that convexity of f implies that f(v) ≥ f(w) + ⟨∇f(w), v − w⟩. Therefore, when a function is both convex and smooth, we have both upper and lower bounds on the difference between the function and its first order approximation.
Setting v = w − (1/β)∇f(w) in the right-hand side of Equation (12.5) and rearranging terms, we obtain
(1/(2β)) ‖∇f(w)‖² ≤ f(w) − f(v).
If we further assume that f(v) ≥ 0 for all v we conclude that smoothness implies the following:
‖∇f(w)‖² ≤ 2β f(w).   (12.6)
A function that satisfies this property is also called self-bounded.
For example, the function f(x) = log(1 + exp(x)) is (1/4)-smooth, since
|f″(x)| = exp(−x)/(1 + exp(−x))² = 1/((1 + exp(−x))(1 + exp(x))) ≤ 1/4.

Example 12.6.
- For any x ∈ ℝ^d and y ∈ ℝ, let f(w) = (⟨w, x⟩ − y)². Then, f is (2‖x‖²)-smooth.
- For any x ∈ ℝ^d and y ∈ {±1}, let f(w) = log(1 + exp(−y⟨w, x⟩)). Then, f is (‖x‖²/4)-smooth.
Since, for a sample S = (z_1, ..., z_m), for every w, L_S(w) = (1/m) Σ_{i=1}^m ℓ(w, z_i), Claim 12.5 implies that L_S(w) is a convex function. Therefore, the ERM rule is a problem of minimizing a convex function subject to the constraint that the solution should be in a convex set.
Under mild conditions, such problems can be solved efficiently using generic optimization algorithms. In particular, in Chapter 14 we will present a very simple algorithm for minimizing convex functions.
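For illustration, a minimal sketch in Python of this idea: the ERM objective for the squared loss over linear predictors is convex, so a generic method, here plain gradient descent, reaches a global minimizer. The synthetic data, step size, and iteration count are assumptions chosen only for the example.

import numpy as np

# A sketch of ERM for the squared loss: L_S(w) = (1/m) * sum_i (<w, x_i> - y_i)^2
# is convex in w (Claim 12.5), so plain gradient descent finds a global minimizer.
def erm_squared_loss(X, y, step=0.1, iters=1000):
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = (2.0 / m) * X.T @ (X @ w - y)  # gradient of L_S at w
        w -= step * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=100)
w_hat = erm_squared_loss(X, y)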
It follows that
L_{D_1}(A(S)) − min_w L_{D_1}(w) ≥ (1 − ε)/4.
Namely, given S the output of A is determined. This requirement is for the sake of simplicity. A slightly
more involved argument will show that nondeterministic algorithms will also fail to learn the problem.
This example shows that we need additional assumptions on the learning problem, and this time the solution is in Lipschitzness or smoothness of the loss function. This motivates a definition of two families of learning problems, convex-Lipschitz-bounded and convex-smooth-bounded, which are defined later.
Note that we also required that the loss function is nonnegative. This is needed to ensure that the loss function is self-bounded, as described in the previous section.

Example 12.11. Let X = {x ∈ ℝ^d : ‖x‖ ≤ β/2} and Y = ℝ. Let H = {w ∈ ℝ^d : ‖w‖ ≤ B} and let the loss function be ℓ(w, (x, y)) = (⟨w, x⟩ − y)². This corresponds to a regression problem with the squared loss, where we assume that the instances are in a ball of radius β/2 and we restrict the hypotheses to be homogenous linear functions defined by a vector w whose norm is bounded by B. Then, the resulting problem is Convex-Smooth-Bounded with parameters β, B.
We claim that these two families of learning problems are learnable. That is, the
properties of convexity, boundedness, and Lipschitzness or smoothness of the loss
function are sufficient for learnability. We will prove this claim in the next chapters
by introducing algorithms that learn these problems successfully.
12.3 SURROGATE LOSS FUNCTIONS

As we have seen, the 0−1 loss is not convex. A common workaround is to replace it by a convex surrogate loss that upper bounds it, such as the hinge loss, ℓ^hinge(w, (x, y)) = max{0, 1 − y⟨w, x⟩}.

(Illustration: the hinge loss upper bounds the 0−1 loss, plotted as a function of y⟨w, x⟩.)
Once we have defined the surrogate convex loss, we can learn the problem with respect to it. The generalization requirement from a hinge loss learner will have the form
L_D^hinge(A(S)) ≤ min_{w∈H} L_D^hinge(w) + ε,
where L_D^hinge(w) = E_{(x,y)∼D}[ℓ^hinge(w, (x, y))]. Using the surrogate property, we can lower bound the left-hand side by L_D^{0−1}(A(S)), which yields
L_D^{0−1}(A(S)) ≤ min_{w∈H} L_D^hinge(w) + ε.
We can further rewrite this upper bound as
L_D^{0−1}(A(S)) ≤ min_{w∈H} L_D^{0−1}(w) + ( min_{w∈H} L_D^hinge(w) − min_{w∈H} L_D^{0−1}(w) ) + ε.
That is, the 0−1 error of the learned predictor is upper bounded by three terms:
- Approximation error: This is the term min_{w∈H} L_D^{0−1}(w), which measures how well the hypothesis class performs on the distribution. We already elaborated on this error term in Chapter 5.
- Estimation error: This is the error that results from the fact that we only receive a training set and do not observe the distribution D. We already elaborated on this error term in Chapter 5.
- The difference min_{w∈H} L_D^hinge(w) − min_{w∈H} L_D^{0−1}(w), which measures the price we pay for replacing the original 0−1 loss with the surrogate hinge loss.
12.4 SUMMARY
We introduced two families of learning problems: convex-Lipschitz-bounded and
convex-smooth-bounded. In the next two chapters we will describe two generic
learning algorithms for these families. We also introduced the notion of convex
surrogate loss function, which enables us also to utilize the convex machinery for
nonconvex problems.
12.6 EXERCISES
12.1 Construct an example showing that the 0−1 loss function may suffer from local minima; namely, construct a training sample S ∈ (X × {±1})^m (say, for X = ℝ²), for which there exist a vector w and some ε > 0 such that
1. For any w′ such that ‖w − w′‖ ≤ ε we have L_S(w) ≤ L_S(w′) (where the loss here is the 0−1 loss). This means that w is a local minimum of L_S.
2. There exists some w* such that L_S(w*) < L_S(w). This means that w is not a global minimum of L_S.
12.2 Consider the learning problem of logistic regression: Let H = X = {x ∈ ℝ^d : ‖x‖ ≤ B}, for some scalar B > 0, let Y = {±1}, and let the loss function ℓ be defined as ℓ(w, (x, y)) = log(1 + exp(−y⟨w, x⟩)). Show that the resulting learning problem is both convex-Lipschitz-bounded and convex-smooth-bounded. Specify the parameters of Lipschitzness and smoothness.
12.3 Consider the problem of learning halfspaces with the hinge loss. We limit our domain to the Euclidean ball with radius R. That is, X = {x : ‖x‖_2 ≤ R}. The label set is Y = {±1} and the loss function ℓ is defined by ℓ(w, (x, y)) = max{0, 1 − y⟨w, x⟩}. We already know that the loss function is convex. Show that it is R-Lipschitz.
12.4 (*) Convex-Lipschitz-Boundedness Is Not Sufficient for Computational Efficiency: In the next chapter we show that from the statistical perspective, all convex-Lipschitz-bounded problems are learnable (in the agnostic PAC model). However, our main motivation to learn such problems resulted from the computational perspective: convex optimization is often efficiently solvable. Yet the goal of this exercise is to show that convexity alone is not sufficient for efficiency. We show that even for the case d = 1, there is a convex-Lipschitz-bounded problem which cannot be learned by any computable learner.
Let the hypothesis class be H = [0, 1] and let the example domain, Z, be the set of all Turing machines. Define the loss function as follows. For every Turing machine T ∈ Z, let ℓ(0, T) = 1 if T halts on the input 0 and ℓ(0, T) = 0 if T doesn't halt on the input 0. Similarly, let ℓ(1, T) = 0 if T halts on the input 0 and ℓ(1, T) = 1 if T doesn't halt on the input 0. Finally, for h ∈ (0, 1), let ℓ(h, T) = hℓ(0, T) + (1 − h)ℓ(1, T).
1. Show that the resulting learning problem is convex-Lipschitz-bounded.
2. Show that no computable algorithm can learn the problem.
13
Regularization and Stability
the algorithm balances between low empirical risk and simpler, or less complex,
hypotheses.
There are many possible regularization functions one can use, reflecting some prior belief about the problem (similarly to the description language in Minimum Description Length). Throughout this section we will focus on one of the most simple regularization functions: R(w) = λ‖w‖², where λ > 0 is a scalar and the norm is the ℓ_2 norm, ‖w‖ = √(Σ_{i=1}^d w_i²). This yields the learning rule:
A(S) = argmin_w ( L_S(w) + λ‖w‖² ).   (13.2)
When the loss is the squared loss, the resulting rule is called ridge regression (Equation (13.3)). Comparing the gradient of its objective to zero yields the linear condition (2λm I + A) w = b, where
A = Σ_{i=1}^m x_i x_iᵀ  and  b = Σ_{i=1}^m y_i x_i.   (13.4)
Since A is a positive semidefinite matrix, the matrix 2λm I + A has all its eigenvalues bounded below by 2λm. Hence, this matrix is invertible and the solution to ridge regression becomes
w = (2λm I + A)^{−1} b.   (13.5)
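A minimal sketch of this closed-form solution, assuming the instances are stored as the rows of a matrix X and the targets in a vector y:

import numpy as np

# Ridge regression solution of Equation (13.5): w = (2*lam*m*I + A)^{-1} b,
# with A = sum_i x_i x_i^T and b = sum_i y_i x_i.
def ridge_regression(X, y, lam):
    m, d = X.shape
    A = X.T @ X                 # sum_i x_i x_i^T
    b = X.T @ y                 # sum_i y_i x_i
    return np.linalg.solve(2 * lam * m * np.eye(d) + A, b)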
In the next section we formally show how regularization stabilizes the algorithm
and prevents overfitting. In particular, the analysis presented in the next sections
(particularly, Corollary 13.11) will yield:
Remark 13.1. The preceding theorem tells us how many examples are needed to guarantee that the expected value of the risk of the learned predictor will be bounded by the approximation error of the class plus ε. In the usual definition of agnostic PAC learning we require that the risk of the learned predictor will be bounded with probability of at least 1 − δ. In Exercise 13.1 we show how an algorithm with a bounded expected risk can be used to construct an agnostic PAC learner.
Proof. Since S and z′ are both drawn i.i.d. from D, we have that for every i,
E_S[L_D(A(S))] = E_{S,z′}[ℓ(A(S), z′)] = E_{S,z′}[ℓ(A(S^{(i)}), z_i)].
On the other hand, E_S[L_S(A(S))] = E_{S,i}[ℓ(A(S), z_i)].
When the right-hand side of Equation (13.6) is small, we say that A is a stable algorithm: changing a single example in the training set does not lead to a significant change. Formally,

Definition 13.3 (On-Average-Replace-One-Stable). Let ε : ℕ → ℝ be a monotonically decreasing function. We say that a learning algorithm A is on-average-replace-one-stable with rate ε(m) if for every distribution D,
E_{(S,z′)∼D^{m+1}, i∼U(m)}[ ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ] ≤ ε(m).
Theorem 13.2 tells us that a learning algorithm does not overfit if and only if it is on-average-replace-one-stable. Of course, a learning algorithm that does not overfit is not necessarily a good learning algorithm; take, for example, an algorithm A that always outputs the same hypothesis. A useful algorithm should find a hypothesis that on one hand fits the training set (i.e., has a low empirical risk) and on the other hand does not overfit. Or, in light of Theorem 13.2, the algorithm should both fit the training set and at the same time be stable. As we shall see, the parameter λ of the RLM rule balances between fitting the training set and being stable.
Definition 13.4 (Strongly Convex Functions). A function f is λ-strongly convex if for every w, u and α ∈ (0, 1) we have
f(αw + (1 − α)u) ≤ αf(w) + (1 − α)f(u) − (λ/2) α(1 − α) ‖w − u‖².

(Illustration: for a strongly convex f, the value at αw + (1 − α)u lies below the chord by at least (λ/2)α(1 − α)‖u − w‖².)
The following lemma implies that the objective of RLM is (2λ)-strongly convex. In addition, it underscores an important property of strong convexity.

Lemma 13.5.
1. The function f(w) = λ‖w‖² is 2λ-strongly convex.
2. If f is λ-strongly convex and g is convex, then f + g is λ-strongly convex.
3. If f is λ-strongly convex and u is a minimizer of f, then, for any w,
f(w) − f(u) ≥ (λ/2) ‖w − u‖².

Proof. The first two items follow directly from the definition. To prove the last item, divide the definition of strong convexity (with w and u) by α and rearrange terms, to get
( f(u + α(w − u)) − f(u) ) / α ≤ f(w) − f(u) − (λ/2)(1 − α)‖w − u‖².
Taking the limit α → 0 we obtain that the right-hand side converges to f(w) − f(u) − (λ/2)‖w − u‖². On the other hand, the left-hand side becomes the derivative of the function g(α) = f(u + α(w − u)) at α = 0. Since u is a minimizer of f, it follows that α = 0 is a minimizer of g, and therefore the left-hand side of the preceding goes to zero in the limit α → 0, which concludes our proof.
We now turn to prove that RLM is stable. Let S = (z_1, ..., z_m) be a training set, let z′ be an additional example, and let S^{(i)} = (z_1, ..., z_{i−1}, z′, z_{i+1}, ..., z_m). Let A be the RLM rule, namely,
A(S) = argmin_w ( L_S(w) + λ‖w‖² ).
Denote f_S(w) = L_S(w) + λ‖w‖²; on the basis of Lemma 13.5 we know that f_S is (2λ)-strongly convex. Relying on part 3 of the lemma, it follows that for any v,
f_S(v) − f_S(A(S)) ≥ λ‖v − A(S)‖².   (13.7)
On the other hand, for any v and u, and for all i, we have
f_S(v) − f_S(u) = L_S(v) + λ‖v‖² − (L_S(u) + λ‖u‖²)
= L_{S^{(i)}}(v) + λ‖v‖² − (L_{S^{(i)}}(u) + λ‖u‖²) + (ℓ(v, z_i) − ℓ(u, z_i))/m + (ℓ(u, z′) − ℓ(v, z′))/m.   (13.8)
In particular, choosing v = A(S^{(i)}), u = A(S), and using the fact that v minimizes L_{S^{(i)}}(w) + λ‖w‖², we obtain that
f_S(A(S^{(i)})) − f_S(A(S)) ≤ (ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i))/m + (ℓ(A(S), z′) − ℓ(A(S^{(i)}), z′))/m.   (13.9)
Combining this with Equation (13.7), we obtain that
λ‖A(S^{(i)}) − A(S)‖² ≤ (ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i))/m + (ℓ(A(S), z′) − ℓ(A(S^{(i)}), z′))/m.   (13.10)
The two subsections that follow continue the stability analysis for either Lipschitz or smooth loss functions. For both families of loss functions we show that
RLM is stable and therefore it does not overfit.
If the loss function is ρ-Lipschitz, then
ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ≤ ρ ‖A(S^{(i)}) − A(S)‖.   (13.11)
Similarly,
ℓ(A(S), z′) − ℓ(A(S^{(i)}), z′) ≤ ρ ‖A(S^{(i)}) − A(S)‖.
Plugging these inequalities into Equation (13.10) we obtain
λ‖A(S^{(i)}) − A(S)‖² ≤ (2ρ/m) ‖A(S^{(i)}) − A(S)‖,
which yields
‖A(S^{(i)}) − A(S)‖ ≤ 2ρ/(λm).
Plugging this back into Equation (13.11) we conclude that
ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ≤ 2ρ²/(λm).   (13.12)
Since this holds for every S, z′, i, it follows that RLM with a convex ρ-Lipschitz loss is on-average-replace-one-stable with rate 2ρ²/(λm); together with Theorem 13.2, this implies
E_{S∼D^m}[ L_D(A(S)) − L_S(A(S)) ] ≤ 2ρ²/(λm).
Using the Cauchy-Schwartz inequality and Equation (12.6) we further obtain that
ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ≤ ‖∇ℓ(A(S), z_i)‖ ‖A(S^{(i)}) − A(S)‖ + (β/2) ‖A(S^{(i)}) − A(S)‖²
≤ √(2β ℓ(A(S), z_i)) ‖A(S^{(i)}) − A(S)‖ + (β/2) ‖A(S^{(i)}) − A(S)‖².   (13.14)
By a symmetric argument, the same bound holds with the roles of z_i and z′ interchanged. Plugging these inequalities into Equation (13.10) and rearranging, we obtain
‖A(S^{(i)}) − A(S)‖ ≤ (√(2β)/(λm − β)) ( √ℓ(A(S), z_i) + √ℓ(A(S^{(i)}), z′) ),
and using the assumption λ ≥ 2β/m (so that λm − β ≥ λm/2),
‖A(S^{(i)}) − A(S)‖ ≤ (√(8β)/(λm)) ( √ℓ(A(S), z_i) + √ℓ(A(S^{(i)}), z′) ).
Combining the preceding with Equation (13.14) and again using the assumption λ ≥ 2β/m yields
ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ≤ (8β/(λm)) ( √ℓ(A(S), z_i) + √ℓ(A(S^{(i)}), z′) )²
≤ (24β/(λm)) ( ℓ(A(S), z_i) + ℓ(A(S^{(i)}), z′) ),
where in the last step we used the inequality (a + b)² ≤ 3(a² + b²). Taking expectation with respect to S, z′, i and noting that E[ℓ(A(S), z_i)] = E[ℓ(A(S^{(i)}), z′)] = E[L_S(A(S))], we conclude that:
Corollary 13.7. Assume that the loss function is β-smooth and nonnegative. Then, the RLM rule with the regularizer λ‖w‖², where λ ≥ 2β/m, satisfies
E[ ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ] ≤ (48β/(λm)) E[ L_S(A(S)) ].

Note that if for all z we have ℓ(0, z) ≤ C, for some scalar C > 0, then for every S,
L_S(A(S)) ≤ L_S(A(S)) + λ‖A(S)‖² ≤ L_S(0) + λ‖0‖² = L_S(0) ≤ C.
Hence, Corollary 13.7 also implies that
E[ ℓ(A(S^{(i)}), z_i) − ℓ(A(S), z_i) ] ≤ 48βC/(λm).   (13.15)

To understand how the regularization parameter should be set, write
E_S[ L_D(A(S)) ] = E_S[ L_S(A(S)) ] + E_S[ L_D(A(S)) − L_S(A(S)) ].
The first term reflects how well A(S) fits the training set while the second term
reflects the difference between the true and empirical risks of A(S). As we have
shown in Theorem 13.2, the second term is equivalent to the stability of A. Since
our goal is to minimize the risk of the algorithm, we need that the sum of both terms
will be small.
In the previous section we have bounded the stability term. We have shown
that the stability term decreases as the regularization parameter, , increases. On
the other hand, the empirical risk increases with . We therefore face a tradeoff
between fitting and overfitting. This tradeoff is quite similar to the bias-complexity
tradeoff we discussed previously in the book.
Moreover, since A(S) minimizes the RLM objective, for every w* we also have
E_S[ L_D(A(S)) ] ≤ L_D(w*) + λ‖w*‖² + E_S[ L_D(A(S)) − L_S(A(S)) ].   (13.16)
For a convex, ρ-Lipschitz loss, the stability bound of the previous section lets us bound the last term by 2ρ²/(λm), so that for every w*,
E_{S∼D^m}[ L_D(A(S)) ] ≤ L_D(w*) + λ‖w*‖² + 2ρ²/(λm).
In particular, for a convex-Lipschitz-bounded problem with parameters ρ, B, setting λ = √(2ρ²/(B²m)) yields
E_{S∼D^m}[ L_D(A(S)) ] ≤ min_{w∈H} L_D(w) + ρB √(8/m).
The preceding corollary holds for Lipschitz loss functions. If instead the loss function is smooth and nonnegative, then we can combine Equation (13.16) with Corollary 13.7 to get:

Corollary 13.10. Assume that the loss function is convex, β-smooth, and nonnegative. Then, the RLM rule with the regularization function λ‖w‖², for λ ≥ 2β/m, satisfies the following for every w* ∈ H:¹

¹ Again, the bound below is on the expected risk, but using Exercise 13.1 it can be used to derive an agnostic PAC learning guarantee.
13.5 SUMMARY
We introduced stability and showed that if an algorithm is stable then it does not
overfit. Furthermore, for convex-Lipschitz-bounded or convex-smooth-bounded
problems, the RLM rule with Tikhonov regularization leads to a stable learning algorithm. We discussed how the regularization parameter, , controls the
tradeoff between fitting and overfitting. Finally, we have shown that all learning problems from the families of convex-Lipschitz-bounded and convex-smooth-bounded problems are learnable using the RLM rule. The RLM paradigm is the basis for many popular learning algorithms, including ridge regression (which we discussed in this chapter) and support vector machines (which will be discussed in Chapter 15).
In the next chapter we will present Stochastic Gradient Descent, which gives us a very practical alternative way to learn convex-Lipschitz-bounded and convex-smooth-bounded problems and can also be used for efficiently implementing the RLM rule.
13.7 EXERCISES
13.1 From Bounded Expected Risk to Agnostic PAC Learning: Let A be an algorithm that guarantees the following: If m ≥ m_H(ε) then for every distribution D it holds that
E_{S∼D^m}[ L_D(A(S)) ] ≤ min_{h∈H} L_D(h) + ε.
Show that for every δ ∈ (0, 1), if m ≥ m_H(ε·δ) then with probability of at least 1 − δ it holds that L_D(A(S)) ≤ min_{h∈H} L_D(h) + ε.
Hint: Observe that the random variable L_D(A(S)) − min_{h∈H} L_D(h) is nonnegative and rely on Markov's inequality.
For every δ ∈ (0, 1) let
m_H(ε, δ) = m_H(ε/2) ⌈log_2(1/δ)⌉ + ⌈( log(4/δ) + log(⌈log_2(1/δ)⌉) ) / ε²⌉.
Suggest a procedure that agnostic PAC learns the problem with sample complexity of m_H(ε, δ), assuming that the loss function is bounded by 1.
Hint: Let k = ⌈log_2(1/δ)⌉. Divide the data into k + 1 chunks, where each of the first k chunks is of size m_H(ε/2) examples. Train the first k chunks using A. On the basis of the previous question argue that the probability that for all of these chunks we have L_D(A(S)) > min_{h∈H} L_D(h) + ε is at most 2^{−k} ≤ δ/2. Finally, use the last chunk as a validation set.
13.2 Learnability without Uniform Convergence: Let B be the unit ball of ℝ^d, let H = B, let Z = B × {0, 1}^d, and let ℓ : Z × H → ℝ be defined as follows:
ℓ(w, (x, α)) = Σ_{i=1}^d α_i (x_i − w_i)².
We say that a learning rule A learns a class H with rate ε(m) if for every distribution D it holds that
E_{S∼D^m}[ L_D(A(S)) ] − min_{h∈H} L_D(h) ≤ ε(m).
Assume that for every z, the loss function ℓ(·, z) is ρ-Lipschitz with respect to the same norm, namely,
∀z, ∀w, v,  ℓ(w, z) − ℓ(v, z) ≤ ρ‖w − v‖.
Prove that A is on-average-replace-one-stable with rate 2ρ²/(λm).
4. (*) Let q ∈ (1, 2) and consider the ℓ_q-norm
‖w‖_q = ( Σ_{i=1}^d |w_i|^q )^{1/q}.
It can be shown (see, for example, Shalev-Shwartz (2007)) that the function
R(w) = (1/(2(q − 1))) ‖w‖_q²
is 1-strongly convex with respect to ‖w‖_q. Show that if q = log(d)/(log(d) − 1) then R(w) is (1/(3 log(d)))-strongly convex with respect to the ℓ_1 norm over ℝ^d.
14
Stochastic Gradient Descent
Recall that the goal of learning is to minimize the risk function, L_D(h) = E_{z∼D}[ℓ(h, z)]. We cannot directly minimize the risk function since it depends on
the unknown distribution D. So far in the book, we have discussed learning methods that depend on the empirical risk. That is, we first sample a training set S and
define the empirical risk function L S (h). Then, the learner picks a hypothesis based
on the value of L S (h). For example, the ERM rule tells us to pick the hypothesis
that minimizes L S (h) over the hypothesis class, H. Or, in the previous chapter, we
discussed regularized risk minimization, in which we pick a hypothesis that jointly
minimizes L S (h) and a regularization function over h.
In this chapter we describe and analyze a rather different learning approach,
which is called Stochastic Gradient Descent (SGD). As in Chapter 12 we will focus
on the important family of convex learning problems, and following the notation
in that chapter, we will refer to hypotheses as vectors w that come from a convex
hypothesis class, H. In SGD, we try to minimize the risk function L D (w) directly
using a gradient descent procedure. Gradient descent is an iterative optimization
procedure in which at each step we improve the solution by taking a step along the
negative of the gradient of the function to be minimized at the current point. Of
course, in our case, we are minimizing the risk function, and since we do not know
D we also do not know the gradient of L D (w). SGD circumvents this problem by
allowing the optimization procedure to take a step along a random direction, as
long as the expected value of the direction is the negative of the gradient. And, as
we shall see, finding a random direction whose expected value corresponds to the
gradient is rather simple even though we do not know the underlying distribution D.
The advantage of SGD, in the context of convex learning problems, over the
regularized risk minimization learning rule is that SGD is an efficient algorithm that
can be implemented in a few lines of code, yet still enjoys the same sample complexity as the regularized risk minimization rule. The simplicity of SGD also allows us
to use it in situations when it is not possible to apply methods that are based on the
empirical risk, but this is beyond the scope of this book.
We start this chapter with the basic gradient descent algorithm and analyze its
convergence rate for convex-Lipschitz functions. Next, we introduce the notion of
150
subgradient and show that gradient descent can be applied for nondifferentiable
functions as well. The core of this chapter is Section 14.3, in which we describe
the Stochastic Gradient Descent algorithm, along with several useful variants. We
show that SGD enjoys an expected convergence rate similar to the rate of gradient
descent. Finally, we turn to the applicability of SGD to learning problems.
Gradient descent starts at an initial point (we take w^{(1)} = 0) and at each iteration takes a step in the direction of the negative gradient:
w^{(t+1)} = w^{(t)} − η ∇f(w^{(t)}).   (14.1)
After T iterations, the algorithm outputs the averaged vector w̄ = (1/T) Σ_{t=1}^T w^{(t)}.
To analyze the convergence rate, note that by the convexity of f (Jensen's inequality),
f(w̄) − f(w*) = f( (1/T) Σ_{t=1}^T w^{(t)} ) − f(w*)
≤ (1/T) Σ_{t=1}^T f(w^{(t)}) − f(w*)
= (1/T) Σ_{t=1}^T ( f(w^{(t)}) − f(w*) ).   (14.2)
Furthermore, by the convexity of f, for every t,
f(w^{(t)}) − f(w*) ≤ ⟨w^{(t)} − w*, ∇f(w^{(t)})⟩,   (14.3)
and therefore
(1/T) Σ_{t=1}^T ( f(w^{(t)}) − f(w*) ) ≤ (1/T) Σ_{t=1}^T ⟨w^{(t)} − w*, ∇f(w^{(t)})⟩.   (14.4)
The following lemma bounds the right-hand side for an arbitrary sequence of vectors v_1, ..., v_T.

Lemma 14.1. Let v_1, ..., v_T be an arbitrary sequence of vectors. Any algorithm with an initialization w^{(1)} = 0 and an update rule of the form w^{(t+1)} = w^{(t)} − ηv_t satisfies
Σ_{t=1}^T ⟨w^{(t)} − w*, v_t⟩ ≤ ‖w*‖²/(2η) + (η/2) Σ_{t=1}^T ‖v_t‖².   (14.5)
In particular, for every B, ρ > 0, if for all t we have that ‖v_t‖ ≤ ρ and if we set η = √(B²/(ρ²T)), then for every w* with ‖w*‖ ≤ B we have
(1/T) Σ_{t=1}^T ⟨w^{(t)} − w*, v_t⟩ ≤ Bρ/√T.
Proof. Using algebraic manipulations (completing the square), for every t we can write
⟨w^{(t)} − w*, v_t⟩ = (1/η) ⟨w^{(t)} − w*, ηv_t⟩
= (1/(2η)) ( −‖w^{(t)} − w* − ηv_t‖² + ‖w^{(t)} − w*‖² + η²‖v_t‖² )
= (1/(2η)) ( −‖w^{(t+1)} − w*‖² + ‖w^{(t)} − w*‖² ) + (η/2) ‖v_t‖²,
where the last equality follows from the definition of the update rule. Summing the equality over t, we have
Σ_{t=1}^T ⟨w^{(t)} − w*, v_t⟩ = (1/(2η)) Σ_{t=1}^T ( −‖w^{(t+1)} − w*‖² + ‖w^{(t)} − w*‖² ) + (η/2) Σ_{t=1}^T ‖v_t‖².   (14.6)
The first sum on the right-hand side is a telescopic sum that collapses to
‖w^{(1)} − w*‖² − ‖w^{(T+1)} − w*‖².
Plugging this in Equation (14.6), we have
Σ_{t=1}^T ⟨w^{(t)} − w*, v_t⟩ = (1/(2η)) ( ‖w^{(1)} − w*‖² − ‖w^{(T+1)} − w*‖² ) + (η/2) Σ_{t=1}^T ‖v_t‖²
≤ (1/(2η)) ‖w^{(1)} − w*‖² + (η/2) Σ_{t=1}^T ‖v_t‖²
= (1/(2η)) ‖w*‖² + (η/2) Σ_{t=1}^T ‖v_t‖²,
where the last equality is due to the definition w^{(1)} = 0. This proves the first part of the lemma (Equation (14.5)). The second part follows by upper bounding ‖w*‖ by B, ‖v_t‖ by ρ, dividing by T, and plugging in the value of η.
Lemma 14.1 applies to the GD algorithm with v_t = ∇f(w^{(t)}). As we will show later in Lemma 14.7, if f is ρ-Lipschitz, then ‖∇f(w^{(t)})‖ ≤ ρ. We therefore satisfy the conditions of the lemma, and combining it with Equation (14.2) we obtain:

Corollary 14.2. Let f be a convex, ρ-Lipschitz function, and let w* ∈ argmin_{w:‖w‖≤B} f(w). If we run the GD algorithm on f for T steps with η = √(B²/(ρ²T)), then the output vector w̄ satisfies
f(w̄) − f(w*) ≤ Bρ/√T.
Furthermore, for every ε > 0, to achieve f(w̄) − f(w*) ≤ ε, it suffices to run the GD algorithm for a number of iterations that satisfies
T ≥ B²ρ²/ε².
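A minimal sketch of this GD variant: fixed step size η = √(B²/(ρ²T)) and averaged output. The example objective f(w) = ‖w − w*‖ (convex and 1-Lipschitz) and the parameter choices are assumptions made only for illustration.

import numpy as np

def gd_averaged(grad, T, B, rho, d):
    # GD with fixed step eta = sqrt(B^2 / (rho^2 * T)) and averaged output w_bar.
    eta = np.sqrt(B**2 / (rho**2 * T))
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for _ in range(T):
        w_sum += w
        w = w - eta * grad(w)
    return w_sum / T

# Assumed example: f(w) = ||w - w_star||, whose gradient away from w_star is (w - w_star)/||w - w_star||.
w_star = np.array([0.5, -0.25, 1.0])

def grad_f(w):
    diff = w - w_star
    nrm = np.linalg.norm(diff)
    return diff / nrm if nrm > 0 else np.zeros_like(diff)

w_bar = gd_averaged(grad_f, T=10000, B=2.0, rho=1.0, d=3)   # w_bar is within about B*rho/sqrt(T) of optimal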
14.2 SUBGRADIENTS
The GD algorithm requires that the function f be differentiable. We now generalize
the discussion beyond differentiable functions. We will show that the GD algorithm
can be applied to nondifferentiable functions by using a so-called subgradient of
f (w) at w(t) , instead of the gradient.
To motivate the definition of subgradients, recall that for a convex function f, the gradient at w defines the slope of a tangent that lies below f, that is,
∀u,  f(u) ≥ f(w) + ⟨u − w, ∇f(w)⟩.   (14.7)
An illustration is given on the left-hand side of Figure 14.2. The existence of such a tangent in fact characterizes convexity even without differentiability:

Lemma 14.3. Let S be an open convex set. A function f : S → ℝ is convex if and only if for every w ∈ S there exists v such that
∀u ∈ S,  f(u) ≥ f(w) + ⟨u − w, v⟩.   (14.8)

The proof of this lemma can be found in many convex analysis textbooks (e.g., (Borwein & Lewis 2006)). The preceding inequality leads us to the definition of subgradients.

Definition 14.4 (Subgradients). A vector v that satisfies Equation (14.8) is called a subgradient of f at w. The set of subgradients of f at w is called the differential set and denoted ∂f(w).
An illustration of subgradients is given on the right-hand side of Figure 14.2. For
scalar functions, a subgradient of a convex function f at w is a slope of a line that
touches f at w and is not above f elsewhere.
Figure 14.2. Left: The right-hand side of Equation (14.7) is the tangent of f at w. For a
convex function, the tangent lower bounds f. Right: Illustration of several subgradients
of a nondifferentiable convex function.
For example, for the absolute value function f(x) = |x|, the differential set is
∂f(x) = {1} if x > 0,  {−1} if x < 0,  and [−1, 1] if x = 0.
For many practical uses, we do not need to calculate the whole set of subgradients at a given point, as one member of this set would suffice. The following claim shows how to construct a subgradient for pointwise maximum functions.

Claim 14.6. Let g(w) = max_{i∈[r]} g_i(w) for r convex differentiable functions g_1, ..., g_r. Given some w, let j ∈ argmax_i g_i(w). Then ∇g_j(w) ∈ ∂g(w).

Proof. Since g_j is convex we have that for all u
g_j(u) ≥ g_j(w) + ⟨u − w, ∇g_j(w)⟩.
Since g(w) = g_j(w) and g(u) ≥ g_j(u) we obtain that
g(u) ≥ g(w) + ⟨u − w, ∇g_j(w)⟩,
which concludes our proof.

Example 14.2 (A Subgradient of the Hinge Loss). Recall the hinge loss function from Section 12.3, f(w) = max{0, 1 − y⟨w, x⟩} for some vector x and scalar y. To calculate a subgradient of the hinge loss at some w we rely on the preceding claim and obtain that the vector v defined in the following is a subgradient of the hinge loss at w:
v = 0 if 1 − y⟨w, x⟩ ≤ 0,  and  v = −yx if 1 − y⟨w, x⟩ > 0.
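The vector v above translates directly into code; a small sketch, assuming x is a NumPy vector and y ∈ {±1}:

import numpy as np

def hinge_subgradient(w, x, y):
    # Subgradient of f(w) = max{0, 1 - y<w, x>} at w, following the case analysis above.
    if 1 - y * np.dot(w, x) > 0:
        return -y * x           # the loss is active: v = -y*x
    return np.zeros_like(x)     # the loss is zero: v = 0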
Figure 14.3. An illustration of the gradient descent algorithm (left) and the stochastic gradient descent algorithm (right). The function to be minimized is 1.25(x + 6)² + (y − 8)². For the stochastic case, the solid line depicts the averaged value of w.
Theorem 14.8. Let B, ρ > 0. Let f be a convex function and let w* ∈ argmin_{w:‖w‖≤B} f(w). Assume that SGD is run for T iterations with η = √(B²/(ρ²T)), and that for all t, ‖v_t‖ ≤ ρ with probability 1. Then,
E[ f(w̄) ] − f(w*) ≤ Bρ/√T.
Therefore, for every ε > 0, to achieve E[ f(w̄) ] − f(w*) ≤ ε, it suffices to run the SGD algorithm for a number of iterations that satisfies
T ≥ B²ρ²/ε².
Proof. Let us introduce the notation v_{1:t} to denote the sequence v_1, ..., v_t. Taking expectation of Equation (14.2), we obtain
E_{v_{1:T}}[ f(w̄) − f(w*) ] ≤ E_{v_{1:T}}[ (1/T) Σ_{t=1}^T ( f(w^{(t)}) − f(w*) ) ].
Since Lemma 14.1 holds for any sequence v_1, v_2, ..., v_T, it applies to SGD as well. By taking expectation of the bound in the lemma we have
E_{v_{1:T}}[ (1/T) Σ_{t=1}^T ⟨w^{(t)} − w*, v_t⟩ ] ≤ Bρ/√T.   (14.9)
It therefore suffices to show that
E_{v_{1:T}}[ (1/T) Σ_{t=1}^T ( f(w^{(t)}) − f(w*) ) ] ≤ E_{v_{1:T}}[ (1/T) Σ_{t=1}^T ⟨w^{(t)} − w*, v_t⟩ ].   (14.10)
Next, we recall the law of total expectation: For every two random variables α, β, and a function g, E_α[g(α)] = E_β E_α[g(α) | β]. Setting α = v_{1:t} and β = v_{1:t−1} we get that
E_{v_{1:T}}[⟨w^{(t)} − w*, v_t⟩] = E_{v_{1:t}}[⟨w^{(t)} − w*, v_t⟩] = E_{v_{1:t−1}} E_{v_{1:t}}[⟨w^{(t)} − w*, v_t⟩ | v_{1:t−1}].
Once we know v_{1:t−1}, the value of w^{(t)} is not random any more and therefore
E_{v_{1:t−1}} E_{v_{1:t}}[⟨w^{(t)} − w*, v_t⟩ | v_{1:t−1}] = E_{v_{1:t−1}} ⟨w^{(t)} − w*, E_{v_t}[v_t | v_{1:t−1}]⟩.
Since w^{(t)} only depends on v_{1:t−1} and SGD requires that E_{v_t}[v_t | w^{(t)}] ∈ ∂f(w^{(t)}) we obtain that E_{v_t}[v_t | v_{1:t−1}] ∈ ∂f(w^{(t)}). Thus,
E_{v_{1:t−1}} ⟨w^{(t)} − w*, E_{v_t}[v_t | v_{1:t−1}]⟩ ≥ E_{v_{1:t−1}}[ f(w^{(t)}) − f(w*) ] = E_{v_{1:T}}[ f(w^{(t)}) − f(w*) ].
Summing over t, dividing by T , and using the linearity of expectation, we get that
Equation (14.10) holds, which concludes our proof.
14.4 VARIANTS
In this section we describe several variants of Stochastic Gradient Descent.
Adding a projection step. To enforce a constraint of the form w ∈ H, where H is a closed convex set, we replace the update step with the following two steps:
1. w^{(t+½)} = w^{(t)} − ηv_t
2. w^{(t+1)} = argmin_{w∈H} ‖w − w^{(t+½)}‖
The projection step replaces the current value of w by the vector in H closest to it. Clearly, the projection step guarantees that w^{(t)} ∈ H for all t. Since H is convex this also implies that w̄ ∈ H as required. We next show that the analysis of SGD with projections remains the same. This is based on the following lemma.

Lemma 14.9 (Projection Lemma). Let H be a closed convex set and let v be the projection of w onto H, namely,
v = argmin_{x∈H} ‖x − w‖².
Then, for every u ∈ H, ‖w − u‖² − ‖v − u‖² ≥ 0.
Proof. By the convexity of H and the definition of v, it holds that ⟨u − v, w − v⟩ ≤ 0 for every u ∈ H. Therefore,
‖w − u‖² = ‖w − v + v − u‖²
= ‖w − v‖² + ‖v − u‖² + 2⟨w − v, v − u⟩
≥ ‖v − u‖².

Equipped with the preceding lemma, we can easily adapt the analysis of SGD to the case in which we add projection steps on a closed and convex set. Simply note that for every t,
‖w^{(t+1)} − w*‖² − ‖w^{(t)} − w*‖² ≤ ‖w^{(t+½)} − w*‖² − ‖w^{(t)} − w*‖².
Therefore, Lemma 14.1 holds when we add projection steps and hence the rest of the analysis follows directly.
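For the common special case H = {w : ‖w‖ ≤ B} (an assumption made here for illustration; the analysis above holds for any closed convex H), the projection step has a simple closed form:

import numpy as np

def project_to_ball(w, B):
    # Euclidean projection of w onto the ball {w : ||w|| <= B}.
    nrm = np.linalg.norm(w)
    return w if nrm <= B else (B / nrm) * w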
Output averaging. We have set the output vector to be w̄ = (1/T) Σ_{t=1}^T w^{(t)}. There are alternative approaches, such as outputting w^{(t)} for some random t ∈ [T], or outputting the average of w^{(t)} over the last αT iterations, for some α ∈ (0, 1). One can also take a weighted average of the last few iterates. These more sophisticated averaging schemes can improve the convergence speed in some situations, such as in the case of strongly convex functions defined in the following.
For λ-strongly convex functions, SGD with step sizes η_t = 1/(λt) enjoys a faster rate:

Theorem 14.11. Assume that f is λ-strongly convex, that E[v_t | w^{(t)}] ∈ ∂f(w^{(t)}), and that E[‖v_t‖²] ≤ ρ². Let w* ∈ argmin_{w∈H} f(w). Then, running SGD with η_t = 1/(λt) for T iterations satisfies
E[ f(w̄) ] − f(w*) ≤ (ρ²/(2λT)) (1 + log(T)).
Proof. Let ∇^{(t)} = E[v_t | w^{(t)}]. Since f is strongly convex and ∇^{(t)} is in the subgradient set of f at w^{(t)} we have that
⟨w^{(t)} − w*, ∇^{(t)}⟩ ≥ f(w^{(t)}) − f(w*) + (λ/2) ‖w^{(t)} − w*‖².   (14.11)
Next, as in the proof of Lemma 14.1, the definition of the update rule implies
⟨w^{(t)} − w*, ∇^{(t)}⟩ ≤ E[ ‖w^{(t)} − w*‖² − ‖w^{(t+1)} − w*‖² ] / (2η_t) + (η_t/2) ρ².   (14.12)
(Since w^{(t+1)} is the projection of w^{(t+½)} onto H, and w* ∈ H, we have that ‖w^{(t+½)} − w*‖² ≥ ‖w^{(t+1)} − w*‖².) Therefore, combining Equations (14.11) and (14.12), summing over t, and rearranging, we obtain
Σ_{t=1}^T ( E[f(w^{(t)})] − f(w*) ) ≤ Σ_{t=1}^T ( E[‖w^{(t)} − w*‖² − ‖w^{(t+1)} − w*‖²]/(2η_t) − (λ/2) E[‖w^{(t)} − w*‖²] ) + (ρ²/2) Σ_{t=1}^T η_t.
Next, we use the definition η_t = 1/(λt) and note that the first sum on the right-hand side of the equation collapses to −(λT/2) E[‖w^{(T+1)} − w*‖²] ≤ 0. Thus,
Σ_{t=1}^T ( E[f(w^{(t)})] − f(w*) ) ≤ (ρ²/2) Σ_{t=1}^T η_t = (ρ²/(2λ)) Σ_{t=1}^T (1/t) ≤ (ρ²/(2λ)) (1 + log(T)).
The theorem follows from the preceding by dividing by T and using Jensens
inequality.
Remark 14.3. Rakhlin, Shamir, and Sridharan (2012) derived a convergence rate in which the log(T) term is eliminated for a variant of the algorithm in which we output the average of the last T/2 iterates, w̄ = (2/T) Σ_{t=T/2+1}^T w^{(t)}. Shamir and Zhang (2013) have shown that Theorem 14.11 holds even if we output w̄ = w^{(T)}.
14.5 LEARNING WITH SGD

We have seen the method of empirical risk minimization, where we minimize the
empirical risk, L S (w), as an estimate to minimizing L D (w). SGD allows us to take
a different approach and minimize L D (w) directly. Since we do not know D, we
cannot simply calculate ∇L_D(w^{(t)}) and minimize it with the GD method. With SGD, however, all we need is to find an unbiased estimate of the gradient of L_D(w), that is, a random vector whose conditional expected value is ∇L_D(w^{(t)}). We shall now
see how such an estimate can be easily constructed.
For simplicity, let us first consider the case of differentiable loss functions. Hence the risk function L_D is also differentiable. The construction of the random vector v_t will be as follows: First, sample z ∼ D. Then, define v_t to be the gradient of the function ℓ(w, z) with respect to w, at the point w^{(t)}. Then, by the linearity of the gradient we have
E[v_t | w^{(t)}] = E_{z∼D}[∇ℓ(w^{(t)}, z)] = ∇ E_{z∼D}[ℓ(w^{(t)}, z)] = ∇L_D(w^{(t)}).   (14.13)
The gradient of the loss function (w, z) at w(t) is therefore an unbiased estimate of
the gradient of the risk function L D (w(t) ) and is easily constructed by sampling a
single fresh example z D at each iteration t.
The same argument holds for nondifferentiable loss functions. We simply let v_t be a subgradient of ℓ(w, z) at w^{(t)}. Then, for every u we have
ℓ(u, z) − ℓ(w^{(t)}, z) ≥ ⟨u − w^{(t)}, v_t⟩.
Taking expectation on both sides with respect to z ∼ D and conditioned on the value of w^{(t)} we obtain
L_D(u) − L_D(w^{(t)}) = E[ℓ(u, z) − ℓ(w^{(t)}, z) | w^{(t)}]
≥ E[⟨u − w^{(t)}, v_t⟩ | w^{(t)}]
= ⟨u − w^{(t)}, E[v_t | w^{(t)}]⟩.
Hence, the SGD analysis applies directly to risk minimization: running SGD with η = √(B²/(ρ²T)) for T ≥ B²ρ²/ε² iterations guarantees an expected risk of at most min_{w:‖w‖≤B} L_D(w) + ε.
It is interesting to note that the required sample complexity is of the same order
of magnitude as the sample complexity guarantee we derived for regularized loss
minimization. In fact, the sample complexity of SGD is even better than what we
have derived for regularized loss minimization by a factor of 8.
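A minimal sketch of this use of SGD: each iteration draws a fresh example from a synthetic distribution (the distribution and all parameter choices below are assumptions made for the example), takes a step along a hinge-loss subgradient, and projects back onto a ball of radius B.

import numpy as np

def sgd_risk_minimization(sample_z, T, eta, B, d):
    # SGD minimizing L_D directly: v_t is a hinge-loss subgradient at a fresh example z ~ D.
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for _ in range(T):
        w_sum += w
        x, y = sample_z()
        v = -y * x if 1 - y * np.dot(w, x) > 0 else np.zeros(d)
        w = w - eta * v
        nrm = np.linalg.norm(w)
        if nrm > B:
            w = (B / nrm) * w          # projection onto {w : ||w|| <= B}
    return w_sum / T                    # averaged output w_bar

rng = np.random.default_rng(1)
w_true = np.array([1.0, -1.0, 0.5])

def sample_z():
    x = rng.normal(size=3)
    return x, np.sign(np.dot(w_true, x))

w_bar = sgd_risk_minimization(sample_z, T=20000, eta=0.01, B=5.0, d=3)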
Let f_t(w) = ℓ(w, z_t) denote the loss on the example sampled at iteration t, so that v_t ∈ ∂f_t(w^{(t)}). By the convexity of each f_t and Lemma 14.1,
Σ_{t=1}^T ( f_t(w^{(t)}) − f_t(w*) ) ≤ Σ_{t=1}^T ⟨v_t, w^{(t)} − w*⟩ ≤ ‖w*‖²/(2η) + (η/2) Σ_{t=1}^T ‖v_t‖².
If the loss is β-smooth and nonnegative, then it is self-bounded, ‖v_t‖² ≤ 2β f_t(w^{(t)}), and therefore
Σ_{t=1}^T ( f_t(w^{(t)}) − f_t(w*) ) ≤ ‖w*‖²/(2η) + ηβ Σ_{t=1}^T f_t(w^{(t)}).
Next, we take expectation of the two sides of the preceding equation with respect to z_1, ..., z_T. Clearly, E[f_t(w*)] = L_D(w*). In addition, using the same argument as in the proof of Theorem 14.8 we have that
E[ (1/T) Σ_{t=1}^T f_t(w^{(t)}) ] = E[ (1/T) Σ_{t=1}^T L_D(w^{(t)}) ] ≥ E[ L_D(w̄) ].
Finally, consider the regularized risk minimization problem
min_w ( (λ/2) ‖w‖² + L_S(w) ).   (14.14)
Since we are dealing with convex learning problems in which the loss function is
convex, the preceding problem is also a convex optimization problem that can be
solved using SGD as well, as we shall see in this section.
At each iteration we sample i uniformly from [m] and let v_t be a subgradient of the loss on z_i at w^{(t)}. With step size η_t = 1/(λt), the update takes the form
w^{(t+1)} = w^{(t)} − η_t ( λw^{(t)} + v_t ) = (1 − 1/t) w^{(t)} − (1/(λt)) v_t = ((t−1)/t) w^{(t)} − (1/(λt)) v_t,
which, by induction, can be rewritten as
w^{(t+1)} = − (1/(λt)) Σ_{j=1}^t v_j.   (14.15)
If we assume that the loss function is ρ-Lipschitz, it follows that for all t we have ‖v_t‖ ≤ ρ and therefore ‖λw^{(t)}‖ ≤ ρ, which yields
‖λw^{(t)} + v_t‖ ≤ 2ρ.
Theorem 14.11 therefore tells us that after performing T iterations we have that
E[ f(w̄) ] − f(w*) ≤ (4ρ²/(λT)) (1 + log(T)).
14.6 SUMMARY
We have introduced the Gradient Descent and Stochastic Gradient Descent algorithms, along with several of their variants. We have analyzed their convergence rate
and calculated the number of iterations that would guarantee an expected objective
of at most ε plus the optimal objective. Most importantly, we have shown that by using SGD we can directly minimize the risk function. We do so by sampling a point i.i.d. from D and using a subgradient of the loss of the current hypothesis w^{(t)}
at this point as an unbiased estimate of the gradient (or a subgradient) of the risk
function. This implies that a bound on the number of iterations also yields a sample complexity bound. Finally, we have also shown how to apply the SGD method
to the problem of regularized risk minimization. In future chapters we show how
this yields extremely simple solvers to some optimization problems associated with
regularized risk minimization.
14.8 EXERCISES
14.1 Prove Claim 14.10. Hint: Extend the proof of Lemma 13.5.
14.2 Prove Corollary 14.14.
14.3 Perceptron as a subgradient descent algorithm: Let S = ((x_1, y_1), ..., (x_m, y_m)) ∈ (ℝ^d × {±1})^m. Assume that there exists w ∈ ℝ^d such that for every i ∈ [m] we have y_i⟨w, x_i⟩ ≥ 1, and let w* be a vector that has the minimal norm among all vectors that satisfy the preceding requirement. Let R = max_i ‖x_i‖. Define a function
f(w) = max_{i∈[m]} ( 1 − y_i⟨w, x_i⟩ ).
Show that min_{w:‖w‖≤‖w*‖} f(w) = 0 and show that any w for which f(w) < 1 separates the examples in S.
Show how to calculate a subgradient of f .
Describe and analyze the subgradient descent algorithm for this case. Compare the algorithm and the analysis to the Batch Perceptron algorithm given in
Section 9.1.2.
14.4 Variable step size (*): Prove an analog of Theorem 14.8 for SGD with a variable step size, η_t = B/(ρ√t).
15
Support Vector Machines
In this chapter and the next we discuss a very useful machine learning tool: the
support vector machine paradigm (SVM) for learning linear predictors in high
dimensional feature spaces. The high dimensionality of the feature space raises both
sample complexity and computational complexity challenges.
The SVM algorithmic paradigm tackles the sample complexity challenge by
searching for large margin separators. Roughly speaking, a halfspace separates
a training set with a large margin if all the examples are not only on the correct
side of the separating hyperplane but also far away from it. Restricting the algorithm to output a large margin separator can yield a small sample complexity even
if the dimensionality of the feature space is high (and even infinite). We introduce
the concept of margin and relate it to the regularized loss minimization paradigm as
well as to the convergence rate of the Perceptron algorithm.
In the next chapter we will tackle the computational complexity challenge using
the idea of kernels.
While both the dashed and solid hyperplanes separate the four examples, our intuition would probably lead us to prefer the dashed hyperplane over the solid one.
One way to formalize this intuition is using the concept of margin.
The margin of a hyperplane with respect to a training set is defined to be the
minimal distance between a point in the training set and the hyperplane. If a hyperplane has a large margin, then it will still separate the training set even if we slightly
perturb each instance.
We will see later on that the true error of a halfspace can be bounded in terms
of the margin it has over the training sample (the larger the margin, the smaller the
error), regardless of the Euclidean dimension in which this halfspace resides.
Hard-SVM is the learning rule in which we return an ERM hyperplane that
separates the training set with the largest possible margin. To define Hard-SVM
formally, we first express the distance between a point x to a hyperplane using the
parameters defining the halfspace.
Claim 15.1. The distance between a point x and the hyperplane defined by (w, b), where ‖w‖ = 1, is |⟨w, x⟩ + b|.

Proof. The distance between a point x and the hyperplane is defined as
min{ ‖x − v‖ : ⟨w, v⟩ + b = 0 }.
Taking v = x − (⟨w, x⟩ + b)w we have that
⟨w, v⟩ + b = ⟨w, x⟩ − (⟨w, x⟩ + b)‖w‖² + b = 0,
and
‖x − v‖ = |⟨w, x⟩ + b| ‖w‖ = |⟨w, x⟩ + b|.
Hence, the distance is at most |⟨w, x⟩ + b|. Next, take any other point u on the hyperplane, thus ⟨w, u⟩ + b = 0. We have
‖x − u‖² = ‖x − v + v − u‖²
= ‖x − v‖² + ‖v − u‖² + 2⟨x − v, v − u⟩
≥ ‖x − v‖² + 2⟨x − v, v − u⟩
= ‖x − v‖² + 2(⟨w, x⟩ + b)⟨w, v − u⟩
= ‖x − v‖²,
where the last equality is because ⟨w, v⟩ = ⟨w, u⟩ = −b. Hence, the distance between x and u is at least the distance between x and v, which concludes our proof.
On the basis of the preceding claim, the distance between the separating hyperplane and the closest point in the training set is min_{i∈[m]} |⟨w, x_i⟩ + b|. Therefore, the Hard-SVM rule is
argmax_{(w,b):‖w‖=1} min_{i∈[m]} |⟨w, x_i⟩ + b|  s.t.  ∀i, y_i(⟨w, x_i⟩ + b) > 0.
Whenever there is a solution to the preceding problem (i.e., we are in the separable case), we can write an equivalent problem as follows (see Exercise 15.1):
argmax_{(w,b):‖w‖=1} min_{i∈[m]} y_i(⟨w, x_i⟩ + b).   (15.1)
Next, we give an equivalent quadratic programming formulation of Hard-SVM:

Hard-SVM
input: (x_1, y_1), ..., (x_m, y_m)
solve:
  (w_0, b_0) = argmin_{(w,b)} ‖w‖²  s.t.  ∀i, y_i(⟨w, x_i⟩ + b) ≥ 1   (15.2)
output: ŵ = w_0/‖w_0‖,  b̂ = b_0/‖w_0‖
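A sketch of the Hard-SVM rule of Equation (15.2) using a generic convex solver; the use of the cvxpy package here, and any solver details, are assumptions made for illustration rather than something prescribed by the text (the data are assumed separable, as Hard-SVM requires).

import cvxpy as cp
import numpy as np

def hard_svm(X, y):
    # Solve argmin ||w||^2 s.t. y_i(<w, x_i> + b) >= 1, then normalize as in (15.2).
    m, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    constraints = [cp.multiply(y, X @ w + b) >= 1]
    cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints).solve()
    w0, b0 = w.value, b.value
    nrm = np.linalg.norm(w0)
    return w0 / nrm, b0 / nrm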
The lemma that follows shows that the output of Hard-SVM is indeed the separating hyperplane with the largest margin. Intuitively, Hard-SVM searches for w of minimal norm among all the vectors that separate the data and for which |⟨w, x_i⟩ + b| ≥ 1 for all i. In other words, we enforce the margin to be 1, but now the units in which we measure the margin scale with the norm of w. Therefore, finding the largest margin halfspace boils down to finding w whose norm is minimal. Formally:

Lemma 15.2. The output of Hard-SVM is a solution of Equation (15.1).

Proof. Let (w*, b*) be a solution of Equation (15.1) and define the margin achieved by (w*, b*) to be γ* = min_{i∈[m]} y_i(⟨w*, x_i⟩ + b*). Therefore, for all i we have
y_i(⟨w*, x_i⟩ + b*) ≥ γ*,
or equivalently
y_i(⟨w*/γ*, x_i⟩ + b*/γ*) ≥ 1.
Hence, the pair (w*/γ*, b*/γ*) satisfies the conditions of the quadratic optimization problem given in Equation (15.2). Therefore, ‖w_0‖ ≤ ‖w*/γ*‖ = 1/γ*. It follows that for all i,
y_i(⟨ŵ, x_i⟩ + b̂) = (1/‖w_0‖) y_i(⟨w_0, x_i⟩ + b_0) ≥ 1/‖w_0‖ ≥ γ*.
Since ‖ŵ‖ = 1, the pair (ŵ, b̂) attains a margin of at least γ*, and since γ* is the largest attainable margin, (ŵ, b̂) is a solution of Equation (15.1).
For homogenous halfspaces (b = 0), the Hard-SVM rule can be written as
min_w ½‖w‖²  s.t.  ∀i, y_i⟨w, x_i⟩ ≥ 1.   (15.3)
In the advanced part of the book (Chapter 26), we will prove that the sample
complexity of Hard-SVM depends on (/ )2 and is independent of the dimension
d. In particular, Theorem 26.13 in Section 26.3 states the following:
Theorem 15.4. Let D be a distribution over ℝ^d × {±1} that satisfies the (γ, ρ)-separability with margin assumption using a homogenous halfspace. Then, with probability of at least 1 − δ over the choice of a training set of size m, the 0−1 error of the output of Hard-SVM is at most
√( 4(ρ/γ)² / m ) + √( 2 log(2/δ) / m ).
Remark 15.1 (Margin and the Perceptron). In Section 9.1.2 we have described and analyzed the Perceptron algorithm for finding an ERM hypothesis with respect to the class of halfspaces. In particular, in Theorem 9.1 we upper bounded the number of updates the Perceptron might make on a given training set. It can be shown (see Exercise 15.2) that the upper bound is exactly (ρ/γ)², where ρ is the radius of the examples and γ is the margin.
Soft-SVM relaxes the hard separability constraints by introducing nonnegative slack variables ξ_1, ..., ξ_m and jointly minimizing the norm of w and the average slack:
min_{w,b,ξ} ( λ‖w‖² + (1/m) Σ_{i=1}^m ξ_i )  s.t.  ∀i, y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i and ξ_i ≥ 0.   (15.4)
Considering homogenous halfspaces, Soft-SVM can equivalently be written as regularized hinge-loss minimization,
min_w ( λ‖w‖² + L_S^hinge(w) ),   (15.6)
where
L_S^hinge(w) = (1/m) Σ_{i=1}^m max{0, 1 − y_i⟨w, x_i⟩}.
The bounds for RLM with a convex-Lipschitz loss (Chapter 13) therefore give that for every u,
E_{S∼D^m}[ L_D^hinge(A(S)) ] ≤ L_D^hinge(u) + λ‖u‖² + 2ρ²/(λm).
Furthermore, since the hinge loss upper bounds the 0−1 loss we also have
E_{S∼D^m}[ L_D^{0−1}(A(S)) ] ≤ L_D^hinge(u) + λ‖u‖² + 2ρ²/(λm).
Last, if we set λ = √(2ρ²/(B²m)) then
E_{S∼D^m}[ L_D^{0−1}(A(S)) ] ≤ E_{S∼D^m}[ L_D^hinge(A(S)) ] ≤ min_{w:‖w‖≤B} L_D^hinge(w) + √(8ρ²B²/m).
We therefore see that we can control the sample complexity of learning a halfspace as a function of the norm of that halfspace, independently of the Euclidean dimension of the space over which the halfspace is defined. This becomes highly significant when we learn via embeddings into high dimensional feature spaces, as we will consider in the next chapter.
We therefore see that we can control the sample complexity of learning a halfspace as a function of the norm of that halfspace, independently of the Euclidean
dimension of the space over which the halfspace is defined. This becomes highly
significant when we learn via embeddings into high dimensional feature spaces, as
we will consider in the next chapter.
Remark 15.2. The condition that X will contain vectors with a bounded norm follows from the requirement that the loss function will be Lipschitz. This is not just
a technicality. As we discussed before, separation with large margin is meaningless
without imposing a restriction on the scale of the instances. Indeed, without a constraint on the scale, we can always enlarge the margin by multiplying all instances
by a large scalar.
decreases as √(d/m) does. We now give an example in which ρ²B² ≪ d; hence the bound given in Corollary 15.7 is much better than the VC bound.
Consider the problem of learning to classify a short text document according to its topic, say, whether the document is about sports or not. We first need to represent documents as vectors. One simple yet effective way is to use a bag-of-words representation. That is, we define a dictionary of words and set the dimension d to be the number of words in the dictionary. Given a document, we represent it as a vector x ∈ {0, 1}^d, where x_i = 1 if the i'th word in the dictionary appears in the document and x_i = 0 otherwise. Therefore, for this problem, the value of ρ² will be the maximal number of distinct words in a given document.
A halfspace for this problem assigns weights to words. It is natural to assume that by assigning positive and negative weights to a few dozen words we will be able to determine whether a given document is about sports or not with reasonable accuracy. Therefore, for this problem, the value of B² can be set to be less than 100. Overall, it is reasonable to say that the value of B²ρ² is smaller than 10,000.
On the other hand, a typical size of a dictionary is much larger than 10,000. For example, there are more than 100,000 distinct words in English. We have therefore
√(d/m). However, the approximation error in Corollary 15.7 is measured with respect to the hinge loss while the approximation error in VC bounds is measured with respect to the 0−1 loss. Since the hinge loss upper bounds the 0−1 loss, the approximation error with respect to the 0−1 loss will never exceed that of the hinge loss.
It is not possible to derive bounds that involve the estimation error term √(ρ²B²/m) for the 0−1 loss. This follows from the fact that the 0−1 loss is scale insensitive, and therefore there is no meaning to the norm of w or its margin when we measure error with the 0−1 loss. However, it is possible to define a loss function that on one hand is scale sensitive and thus enjoys the estimation error √(ρ²B²/m), while on the other hand is more similar to the 0−1 loss. One option is the ramp loss, defined as
ℓ^ramp(w, (x, y)) = min{1, ℓ^hinge(w, (x, y))} = min{1, max{0, 1 − y⟨w, x⟩}}.
The ramp loss penalizes mistakes in the same way as the 0−1 loss and does not penalize examples that are separated with margin. The difference between the ramp loss and the 0−1 loss is only with respect to examples that are correctly classified but not with a significant margin. Generalization bounds for the ramp loss are given in the advanced part of this book.
(Illustration: the hinge, ramp, and 0−1 losses as functions of y⟨w, x⟩.)
The reason SVM relies on the hinge loss and not on the ramp loss is that the
hinge loss is convex and, therefore, from the computational point of view, minimizing the hinge loss can be performed efficiently. In contrast, the problem of
minimizing the ramp loss is computationally intractable.
15.4 DUALITY*
Historically, many of the properties of SVM have been obtained by considering
the dual of Equation (15.3). Our presentation of SVM does not rely on duality. For
completeness, we present in the following how to derive the dual of Equation (15.3).
We start by rewriting the problem in an equivalent form as follows. Consider the function
g(w) = max_{α∈ℝ^m : α≥0} Σ_{i=1}^m α_i (1 − y_i⟨w, x_i⟩) = { 0 if ∀i, y_i⟨w, x_i⟩ ≥ 1;  ∞ otherwise }.
We can therefore rewrite Equation (15.3) as
min_w ( ½‖w‖² + g(w) ).   (15.7)
Rearranging the preceding we obtain that Equation (15.3) can be rewritten as the problem
min_w max_{α∈ℝ^m : α≥0} ( ½‖w‖² + Σ_{i=1}^m α_i (1 − y_i⟨w, x_i⟩) ).   (15.8)
Now suppose that we flip the order of min and max in the equation. This can only decrease the objective value (see Exercise 15.4), and we have
min_w max_{α∈ℝ^m : α≥0} ( ½‖w‖² + Σ_{i=1}^m α_i (1 − y_i⟨w, x_i⟩) )
≥ max_{α∈ℝ^m : α≥0} min_w ( ½‖w‖² + Σ_{i=1}^m α_i (1 − y_i⟨w, x_i⟩) ).
The preceding inequality is called weak duality. It turns out that in our case, strong duality also holds; namely, the inequality holds with equality. Therefore, the dual problem is
max_{α∈ℝ^m : α≥0} min_w ( ½‖w‖² + Σ_{i=1}^m α_i (1 − y_i⟨w, x_i⟩) ).   (15.9)
We can simplify the dual problem by noting that once α is fixed, the optimization problem with respect to w is unconstrained and the objective is differentiable; thus, at the optimum, the gradient equals zero:
w − Σ_{i=1}^m α_i y_i x_i = 0  ⟹  w = Σ_{i=1}^m α_i y_i x_i.
This shows us that the solution must be in the linear span of the examples, a fact we will use later to derive SVM with kernels. Plugging the preceding into Equation (15.9) we obtain that the dual problem can be rewritten as
max_{α∈ℝ^m : α≥0} ( ½ ‖ Σ_{i=1}^m α_i y_i x_i ‖² + Σ_{i=1}^m α_i ( 1 − y_i ⟨ Σ_{j=1}^m α_j y_j x_j, x_i ⟩ ) ).   (15.10)
Rearranging terms, the dual problem takes the equivalent form
max_{α∈ℝ^m : α≥0} ( Σ_{i=1}^m α_i − ½ Σ_{i=1}^m Σ_{j=1}^m α_i α_j y_i y_j ⟨x_j, x_i⟩ ).   (15.11)
Note that the dual problem only involves inner products between instances and
does not require direct access to specific elements within an instance. This property is important when implementing SVM with kernels, as we will discuss in the
next chapter.
15.5 IMPLEMENTING SOFT-SVM USING SGD

In this section we describe a simple algorithm for solving the Soft-SVM optimization problem for homogenous halfspaces, namely,
min_w ( (λ/2) ‖w‖² + (1/m) Σ_{i=1}^m max{0, 1 − y_i⟨w, x_i⟩} ).   (15.12)
We rely on the SGD framework for solving regularized loss minimization problems,
as described in Section 14.5.3.
Recall that, on the basis of Equation (14.15), we can rewrite the update rule of SGD as
w^{(t+1)} = − (1/(λt)) Σ_{j=1}^t v_j,
where v_j is a subgradient of the loss at w^{(j)} on the random example chosen at iteration j.
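A minimal sketch of such an SGD solver for Equation (15.12), maintaining θ^{(t)} = −Σ_{j<t} v_j so that w^{(t)} = θ^{(t)}/(λt); the data handling and parameter choices are assumptions made for the example.

import numpy as np

def soft_svm_sgd(X, y, lam, T, rng=np.random.default_rng(0)):
    # SGD for min_w (lam/2)||w||^2 + (1/m) sum_i max{0, 1 - y_i <w, x_i>}.
    m, d = X.shape
    theta = np.zeros(d)
    w_sum = np.zeros(d)
    for t in range(1, T + 1):
        w = theta / (lam * t)           # w^(t) = theta^(t) / (lam * t)
        w_sum += w
        i = rng.integers(m)
        if y[i] * np.dot(w, X[i]) < 1:  # hinge loss is active at (x_i, y_i)
            theta += y[i] * X[i]        # v_t = -y_i x_i, so theta accumulates -v_t
    return w_sum / T                    # averaged output w_bar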
15.6 SUMMARY
SVM is an algorithm for learning halfspaces with a certain type of prior knowledge,
namely, preference for large margin. Hard-SVM seeks the halfspace that separates
the data perfectly with the largest margin, whereas soft-SVM does not assume separability of the data and allows the constraints to be violated to some extent. The
sample complexity for both types of SVM is different from the sample complexity
of straightforward halfspace learning, as it does not depend on the dimension of the
domain but rather on parameters such as the maximal norms of x and w.
The importance of dimension-independent sample complexity will be realized
in the next chapter, where we will discuss the embedding of the given domain into
some high dimensional feature space as means for enriching our hypothesis class.
Such a procedure raises computational and sample complexity problems. The latter is solved by using SVM, whereas the former can be solved by using SVM with
kernels, as we will see in the next chapter.
15.8 EXERCISES
15.1 Show that the hard-SVM rule, namely,
argmax_{(w,b):‖w‖=1} min_{i∈[m]} |⟨w, x_i⟩ + b|  s.t.  ∀i, y_i(⟨w, x_i⟩ + b) > 0,
is equivalent in the separable case to the rule
argmax_{(w,b):‖w‖=1} min_{i∈[m]} y_i(⟨w, x_i⟩ + b).   (15.13)
15.2 Margin and the Perceptron: Consider a training set that is linearly separable with a margin γ and such that all the instances are within a ball of radius ρ. Prove that the maximal number of updates the Batch Perceptron algorithm given in Section 9.1.2 will make when running on this training set is (ρ/γ)².
15.3 Hard versus soft SVM: Prove or refute the following claim: There exists λ > 0 such that for every sample S of m > 1 examples, which is separable by the class of homogenous halfspaces, the hard-SVM and the soft-SVM (with parameter λ) learning rules return exactly the same weight vector.
15.4 Weak duality: Prove that for any function f of two vector variables x ∈ X, y ∈ Y, it holds that
min_{x∈X} max_{y∈Y} f(x, y) ≥ max_{y∈Y} min_{x∈X} f(x, y).
16
Kernel Methods
In the previous chapter we described the SVM paradigm for learning halfspaces in
high dimensional feature spaces. This enables us to enrich the expressive power of
halfspaces by first mapping the data into a high dimensional feature space, and then
learning a linear predictor in that space. This is similar to the AdaBoost algorithm,
which learns a composition of a halfspace over base hypotheses. While this approach
greatly extends the expressiveness of halfspace predictors, it raises both sample
complexity and computational complexity challenges. In the previous chapter we
tackled the sample complexity issue using the concept of margin. In this chapter we
tackle the computational complexity challenge using the method of kernels.
We start the chapter by describing the idea of embedding the data into a high
dimensional feature space. We then introduce the idea of kernels. A kernel is a
type of a similarity measure between instances. The special property of kernel similarities is that they can be viewed as inner products in some Hilbert space (or
Euclidean space of some high dimension) to which the instance space is virtually
embedded. We introduce the kernel trick that enables computationally efficient
implementation of learning, without explicitly handling the high dimensional representation of the domain instances. Kernel based learning algorithms, and in
particular kernel-SVM, are very useful and popular machine learning tools. Their
success may be attributed both to being flexible for accommodating domain specific prior knowledge and to having a well developed set of efficient implementation
algorithms.
As before, we can rewrite p(x) = ⟨w, ψ(x)⟩ where now ψ : ℝⁿ → ℝ^d is such that for every J ∈ [n]^r, r ≤ k, the coordinate of ψ(x) associated with J is the monomial Π_{i=1}^r x_{J_i}.¹
¹ A Hilbert space is a vector space with an inner product, which is also complete. A space is complete if all Cauchy sequences in the space converge. In our case, the norm ‖w‖ is defined by the inner product, ‖w‖ = √⟨w, w⟩. The reason we require the range of ψ to be in a Hilbert space is that projections in a Hilbert space are well defined. In particular, if M is a linear subspace of a Hilbert space, then every x in the Hilbert space can be written as a sum x = u + v where u ∈ M and ⟨v, w⟩ = 0 for all w ∈ M. We use this fact in the proof of the representer theorem given in the next section.
In the previous chapter we saw that regularizing the norm of w yields a small
sample complexity even if the dimensionality of the feature space is high. Interestingly, as we show later, regularizing the norm of w is also helpful in overcoming the
computational problem. To do so, first note that all versions of the SVM optimization problem we have derived in the previous chapter are instances of the following
general problem:
min_w ( f( ⟨w, ψ(x_1)⟩, ..., ⟨w, ψ(x_m)⟩ ) + R(‖w‖) ),   (16.2)
where f : ℝ^m → ℝ is an arbitrary function and R : ℝ₊ → ℝ is a monotonically nondecreasing function. For example, Soft-SVM for homogenous halfspaces (Equation (15.6)) can be derived from Equation (16.2) by letting R(a) = λa² and f(a_1, ..., a_m) = (1/m) Σ_i max{0, 1 − y_i a_i}. Similarly, Hard-SVM for nonhomogenous halfspaces (Equation (15.2)) can be derived from Equation (16.2) by letting R(a) = a² and letting f(a_1, ..., a_m) be 0 if there exists b such that y_i(a_i + b) ≥ 1 for all i, and f(a_1, ..., a_m) = ∞ otherwise.
The following theorem shows that there exists an optimal solution of Equation (16.2) that lies in the span of {ψ(x_1), ..., ψ(x_m)}.

Theorem 16.1 (Representer Theorem). Assume that ψ is a mapping from X to a Hilbert space. Then, there exists a vector α ∈ ℝ^m such that w = Σ_{i=1}^m α_i ψ(x_i) is an optimal solution of Equation (16.2).

Proof. Let w* be an optimal solution of Equation (16.2). Because w* is an element of a Hilbert space, we can rewrite w* as
w* = Σ_{i=1}^m α_i ψ(x_i) + u,
where ⟨u, ψ(x_i)⟩ = 0 for all i. Set w = w* − u. Clearly, ‖w*‖² = ‖w‖² + ‖u‖², thus ‖w‖ ≤ ‖w*‖ and, since R is nondecreasing, R(‖w‖) ≤ R(‖w*‖). In addition, for all i we have ⟨w, ψ(x_i)⟩ = ⟨w*, ψ(x_i)⟩, so the first term of the objective is unchanged. Hence the objective at w is at most the objective at w*, and w, which is in the span of {ψ(x_1), ..., ψ(x_m)}, is also an optimal solution.
To rewrite the problem in terms of the coefficients α, note that for w = Σ_{j=1}^m α_j ψ(x_j),
⟨w, ψ(x_i)⟩ = ⟨ Σ_{j=1}^m α_j ψ(x_j), ψ(x_i) ⟩ = Σ_{j=1}^m α_j ⟨ψ(x_j), ψ(x_i)⟩.
Similarly,
‖w‖² = ⟨ Σ_j α_j ψ(x_j), Σ_j α_j ψ(x_j) ⟩ = Σ_{i,j=1}^m α_i α_j ⟨ψ(x_i), ψ(x_j)⟩.
Let K(x, x′) = ⟨ψ(x), ψ(x′)⟩ be a function that implements the kernel function with respect to the embedding ψ. Instead of solving Equation (16.2) we can solve the equivalent problem
min_{α∈ℝ^m} ( f( Σ_{j=1}^m α_j K(x_j, x_1), ..., Σ_{j=1}^m α_j K(x_j, x_m) ) + R( √( Σ_{i,j=1}^m α_i α_j K(x_j, x_i) ) ) ).   (16.3)
To solve the optimization problem given in Equation (16.3), we do not need any
direct access to elements in the feature space. The only thing we should know is
how to calculate inner products in the feature space, or equivalently, to calculate
the kernel function. In fact, to solve Equation (16.3) we solely need to know the
value of the m × m matrix G s.t. G_{i,j} = K(x_i, x_j), which is often called the Gram matrix.
In particular, specifying the preceding to the Soft-SVM problem given in Equation (15.6), we can rewrite the problem as
min_{α∈ℝ^m} ( λ αᵀGα + (1/m) Σ_{i=1}^m max{0, 1 − y_i(Gα)_i} ),   (16.4)
where (Gα)_i is the i'th element of the vector obtained by multiplying the Gram matrix G by the vector α. Note that Equation (16.4) can be written as a quadratic program and hence can be solved efficiently. In the next section we describe an even simpler algorithm for solving Soft-SVM with kernels.
Once we learn the coefficients α we can calculate the prediction on a new instance by
⟨w, ψ(x)⟩ = Σ_{j=1}^m α_j ⟨ψ(x_j), ψ(x)⟩ = Σ_{j=1}^m α_j K(x_j, x).
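A small sketch of this machinery: building the Gram matrix for a chosen kernel and forming predictions from a coefficient vector α obtained, for example, by solving (16.4). The degree-3 polynomial kernel below is an assumption chosen for illustration.

import numpy as np

def gram_matrix(X, kernel):
    # G[i, j] = K(x_i, x_j) for the rows of X.
    m = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(m)] for i in range(m)])

def kernel_predict(alpha, X_train, kernel, x):
    # <w, psi(x)> = sum_j alpha_j K(x_j, x)
    return sum(a * kernel(xj, x) for a, xj in zip(alpha, X_train))

poly_kernel = lambda u, v: (1.0 + np.dot(u, v)) ** 3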
Example 16.1 (Polynomial Kernels). The k-degree polynomial kernel is defined to be K(x, x′) = (1 + ⟨x, x′⟩)^k. We will show a mapping ψ for which K(x, x′) = ⟨ψ(x), ψ(x′)⟩. For simplicity, denote x_0 = x′_0 = 1. Then, we have
K(x, x′) = (1 + ⟨x, x′⟩)^k = (1 + ⟨x, x′⟩) ⋯ (1 + ⟨x, x′⟩)
= ( Σ_{j=0}^n x_j x′_j ) ⋯ ( Σ_{j=0}^n x_j x′_j )
= Σ_{J∈{0,1,...,n}^k} Π_{i=1}^k x_{J_i} x′_{J_i}
= Σ_{J∈{0,1,...,n}^k} ( Π_{i=1}^k x_{J_i} ) ( Π_{i=1}^k x′_{J_i} ).
Now, if we define ψ : ℝⁿ → ℝ^{(n+1)^k} such that for J ∈ {0, 1, ..., n}^k there is an element of ψ(x) that equals Π_{i=1}^k x_{J_i}, we obtain that
K(x, x′) = ⟨ψ(x), ψ(x′)⟩.
Since contains all the monomials up to degree k, a halfspace over the range
of corresponds to a polynomial predictor of degree k over the original space.
Hence, learning a halfspace with a k degree polynomial kernel enables us to learn
polynomial predictors of degree k over the original space.
Note that here the complexity of implementing K is O(n) while the dimension
of the feature space is on the order of n k .
Example 16.2 (Gaussian Kernel). Let the original instance space be ℝ and consider the mapping ψ where for each nonnegative integer n ≥ 0 there exists an element ψ(x)_n that equals (1/√(n!)) e^{−x²/2} xⁿ. Then,
⟨ψ(x), ψ(x′)⟩ = Σ_{n=0}^∞ ( (1/√(n!)) e^{−x²/2} xⁿ ) ( (1/√(n!)) e^{−x′²/2} x′ⁿ )
= e^{−(x² + x′²)/2} Σ_{n=0}^∞ ( (x x′)ⁿ / n! )
= e^{−(x² + x′²)/2} e^{x x′}
= e^{−‖x − x′‖²/2}.
Here the feature space is of infinite dimension while evaluating the kernel is very simple. More generally, given a scalar σ > 0, the Gaussian kernel is defined to be
K(x, x′) = e^{−‖x − x′‖²/(2σ)}.
Intuitively, the Gaussian kernel sets the inner product in the feature space between x, x′ to be close to zero if the instances are far away from each other (in the original domain) and close to 1 if they are close. σ is a parameter that controls the scale determining what we mean by close. It is easy to verify that K implements an inner product in a space in which for any n and any monomial of order k
represent an inner product between (x) and (x ) for some feature mapping ?
The following lemma gives a sufficient and necessary condition.
Lemma 16.2. A symmetric function K : X X R implements an inner product in
some Hilbert space if and only if it is positive semidefinite; namely, for all x1 , . . . , xm ,
the Gram matrix, G i, j = K (xi , x j ), is a positive semidefinite matrix.
Proof. It is trivial to see that if K implements an inner product in some Hilbert space then the Gram matrix is positive semidefinite. For the other direction, define the space of functions over X as ℝ^X = {f : X → ℝ}. For each x ∈ X let ψ(x) be the function x ↦ K(·, x). Define a vector space by taking all linear combinations of elements of the form K(·, x). Define an inner product on this vector space to be
⟨ Σ_i α_i K(·, x_i), Σ_j β_j K(·, x_j) ⟩ = Σ_{i,j} α_i β_j K(x_i, x_j).
16.3 IMPLEMENTING SOFT-SVM WITH KERNELS

Next, we turn to solving Soft-SVM with an embedding ψ,
min_w ( (λ/2) ‖w‖² + (1/m) Σ_{i=1}^m max{0, 1 − y_i⟨w, ψ(x_i)⟩} ),   (16.5)
while only using kernel evaluations. The basic observation is that the vector w(t)
maintained by the SGD procedure we have described in Section 15.5 is always in
the linear span of {ψ(x_1), ..., ψ(x_m)}. Therefore, rather than maintaining w^{(t)} we can maintain the corresponding coefficients α.
Formally, let K be the kernel function, namely, for all x, x′, K(x, x′) = ⟨ψ(x), ψ(x′)⟩. We shall maintain two vectors in ℝ^m, corresponding to the two vectors θ^{(t)} and w^{(t)} defined in the SGD procedure of Section 15.5. That is, β^{(t)} will be a vector such that
θ^{(t)} = Σ_{j=1}^m β_j^{(t)} ψ(x_j)   (16.6)
and α^{(t)} will be a vector such that
w^{(t)} = Σ_{j=1}^m α_j^{(t)} ψ(x_j).   (16.7)
At each iteration, an index i is drawn uniformly from [m]. If y_i Σ_j α_j^{(t)} K(x_j, x_i) < 1, the algorithm sets β_i^{(t+1)} = β_i^{(t)} + y_i; otherwise it sets β_i^{(t+1)} = β_i^{(t)}. The output is w̄ = Σ_{j=1}^m ᾱ_j ψ(x_j), where ᾱ = (1/T) Σ_{t=1}^T α^{(t)}.
Note that y_i⟨w^{(t)}, ψ(x_i)⟩ = y_i Σ_j α_j^{(t)} K(x_j, x_i). Hence, the condition in the two algorithms is equivalent and if we update θ we have
θ^{(t+1)} = θ^{(t)} + y_i ψ(x_i) = Σ_{j=1}^m β_j^{(t)} ψ(x_j) + y_i ψ(x_i) = Σ_{j=1}^m β_j^{(t+1)} ψ(x_j),
which shows that Equation (16.6) continues to hold.
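A minimal sketch of the kernelized update just described, operating only on the Gram matrix G and the coefficients β; the parameter choices are assumptions made for the example.

import numpy as np

def kernel_soft_svm_sgd(G, y, lam, T, rng=np.random.default_rng(0)):
    # SGD for Soft-SVM with kernels: only the coefficient vector beta is maintained,
    # so that theta^(t) = sum_j beta_j psi(x_j) and alpha^(t) = beta^(t) / (lam * t).
    m = G.shape[0]
    beta = np.zeros(m)
    alpha_sum = np.zeros(m)
    for t in range(1, T + 1):
        alpha = beta / (lam * t)
        alpha_sum += alpha
        i = rng.integers(m)
        if y[i] * np.dot(alpha, G[:, i]) < 1:   # y_i * sum_j alpha_j K(x_j, x_i) < 1
            beta[i] += y[i]
    return alpha_sum / T                        # averaged coefficients: w_bar = sum_j alpha_bar_j psi(x_j)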
16.4 SUMMARY
Mappings from the given domain to some higher dimensional space, on which a
halfspace predictor is used, can be highly powerful. We benefit from a rich and
complex hypothesis class, yet need to solve the problems of high sample and computational complexities. In Chapter 10, we discussed the AdaBoost algorithm, which
faces these challenges by using a weak learner: Even though we're in a very high
dimensional space, we have an oracle that bestows on us a single good coordinate
to work with on each iteration. In this chapter we introduced a different approach,
the kernel trick. The idea is that in order to find a halfspace predictor in the high
dimensional space, we do not need to know the representation of instances in that
space, but rather the values of inner products between the mapped instances. Calculating inner products between instances in the high dimensional space without
using their representation in that space is done using kernel functions. We have also
shown how the SGD algorithm can be implemented using kernels.
The ideas of feature mapping and the kernel trick allow us to use the framework
of halfspaces and linear predictors for nonvectorial data. We demonstrated how
kernels can be used to learn predictors over the domain of strings.
We presented the applicability of the kernel trick in SVM. However, the kernel trick can be applied in many other algorithms. A few examples are given as
exercises.
This chapter ends the series of chapters on linear predictors and convex problems. The next two chapters deal with completely different types of hypothesis
classes.
16.6 EXERCISES
16.1 Consider the task of finding a sequence of characters in a file, as described in Section 16.2.1. Show that every member of the class H can be realized by composing a linear classifier over ψ(x), whose norm is 1 and that attains a margin of 1.
16.2 Kernelized Perceptron: Show how to run the Perceptron algorithm while only
accessing the instances via the kernel function. Hint: The derivation is similar to
the derivation of implementing SGD with kernels.
16.3 Kernel Ridge Regression: The ridge regression problem, with a feature mapping ψ, is the problem of finding a vector w that minimizes the function

f(w) = λ‖w‖² + (1/(2m)) Σ_{i=1}^m ( ⟨w, ψ(x_i)⟩ − y_i )².    (16.8)
1. Let G be the Gram matrix with regard to S and K. That is, G_{ij} = K(x_i, x_j). Define g : R^m → R by

g(α) = λ α^⊤ G α + (1/(2m)) Σ_{i=1}^m ( ⟨α, G_{·,i}⟩ − y_i )²,    (16.9)

where G_{·,i} is the i-th column of G. Show that if α★ minimizes Equation (16.9) then w★ = Σ_{i=1}^m α★_i ψ(x_i) is a minimizer of f.
2. Find a closed form expression for α★.
16.4 Let N be any positive integer. For every x, x′ ∈ {1, ..., N} define

K(x, x′) = min{x, x′}.

Prove that K is a valid kernel; namely, find a mapping ψ : {1, ..., N} → H where H is some Hilbert space, such that

∀x, x′ ∈ {1, ..., N},  K(x, x′) = ⟨ψ(x), ψ(x′)⟩.
16.5 A supermarket manager would like to learn which of his customers have babies on the basis of their shopping carts. Specifically, he sampled i.i.d. customers, where for customer i, let x_i ⊆ {1, ..., d} denote the subset of items the customer bought, and let y_i ∈ {±1} be the label indicating whether this customer has a baby. As prior knowledge, the manager knows that there are k items such that the label is determined to be 1 iff the customer bought at least one of these k items. Of course, the identity of these k items is not known (otherwise, there was nothing to learn). In addition, according to the store regulation, each customer can buy at most s items. Help the manager to design a learning algorithm such that both its time complexity and its sample complexity are polynomial in s, k, and 1/ε.
16.6 Let X be an instance set and let ψ be a feature mapping of X into some Hilbert feature space V. Let K : X × X → R be a kernel function that implements inner products in the feature space V.
Consider the binary classification algorithm that predicts the label of an unseen instance according to the class with the closest average. Formally, given a training sequence S = (x_1, y_1), ..., (x_m, y_m), for every y ∈ {±1} we define

c_y = (1/m_y) Σ_{i : y_i = y} ψ(x_i),

where m_y = |{i : y_i = y}|. We assume that m_+ and m_− are nonzero. Then, the algorithm outputs the following decision rule:

h(x) = 1 if ‖ψ(x) − c_+‖ ≤ ‖ψ(x) − c_−‖, and 0 otherwise.

1. Let w = c_+ − c_− and let b = ½( ‖c_−‖² − ‖c_+‖² ). Show that

h(x) = sign( ⟨w, ψ(x)⟩ + b ).

2. Show how to express h(x) on the basis of the kernel function, and without accessing individual entries of ψ(x) or w.
17
Multiclass, Ranking, and Complex
Prediction Problems
h(x) ∈ argmax_{i∈[k]} h_i(x).    (17.1)
When more than one binary hypothesis predicts 1 we should somehow decide
which class to predict (e.g., we can arbitrarily decide to break ties by taking the
minimal index in argmaxi h i (x)). A better approach can be applied whenever each
h_i hides additional information, which can be interpreted as the confidence in the prediction y = i. For example, this is the case in halfspaces, where the actual prediction is sign(⟨w, x⟩), but we can interpret ⟨w, x⟩ as the confidence in the prediction.
In such cases, we can apply the multiclass rule given in Equation (17.1) on the real
valued predictions. A pseudocode of the One-versus-All approach is given in the
following.
One-versus-All
input:
  training set S = (x_1, y_1), ..., (x_m, y_m)
  algorithm for binary classification A
foreach i ∈ Y
  let S_i = (x_1, (−1)^{1[y_1 ≠ i]}), ..., (x_m, (−1)^{1[y_m ≠ i]})
  let h_i = A(S_i)
output:
  the multiclass hypothesis defined by h(x) ∈ argmax_{i∈Y} h_i(x)
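As an illustration (not from the original text), the reduction can be written as follows; the binary learner is assumed to follow a scikit-learn-like interface with `fit` and `decision_function` (the latter providing the real-valued confidence scores discussed above).

```python
import numpy as np

def one_versus_all(X, y, classes, make_binary_learner):
    """Train one binary scorer per class; y[i] is assumed to lie in `classes`."""
    scorers = {}
    for c in classes:
        y_c = np.where(y == c, 1, -1)          # (-1)^{1[y_i != c]}
        scorers[c] = make_binary_learner().fit(X, y_c)
    return scorers

def ova_predict(scorers, X):
    # predict the class whose scorer reports the highest confidence
    classes = list(scorers)
    scores = np.column_stack([scorers[c].decision_function(X) for c in classes])
    return np.asarray(classes)[np.argmax(scores, axis=1)]
```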
Another popular reduction is the All-Pairs approach, in which all pairs of classes
are compared to each other. Formally, given a training set S = (x1 , y1 ), . . . , (xm , ym ),
where every y_i is in [k], for every 1 ≤ i < j ≤ k we construct a binary training sequence, S_{i,j}, containing all examples from S whose label is either i or j. For each such example, we set the binary label in S_{i,j} to be +1 if the multiclass label in S is i and −1 if the multiclass label in S is j. Next, we train a binary classification algorithm based on every S_{i,j} to get h_{i,j}. Finally, we construct a multiclass classifier
by predicting the class that had the highest number of wins. A pseudocode of the
All-Pairs approach is given in the following.
All-Pairs
input:
  training set S = (x_1, y_1), ..., (x_m, y_m)
  algorithm for binary classification A
foreach i, j ∈ Y s.t. i < j
  initialize S_{i,j} to be the empty sequence
  for t = 1, ..., m
    If y_t = i add (x_t, 1) to S_{i,j}
    If y_t = j add (x_t, −1) to S_{i,j}
  let h_{i,j} = A(S_{i,j})
output:
  the multiclass hypothesis defined by h(x) ∈ argmax_{i∈Y} ( Σ_{j∈Y} sign(j − i) h_{i,j}(x) )
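A corresponding sketch (again not from the original text, with the same assumed binary-learner interface, whose `predict` is assumed to return labels in {+1, −1}) of the All-Pairs reduction and its voting rule:

```python
import numpy as np
from itertools import combinations

def all_pairs(X, y, classes, make_binary_learner):
    """Train one binary classifier h_{i,j} for every pair of classes i < j."""
    models = {}
    for i, j in combinations(classes, 2):
        mask = (y == i) | (y == j)
        y_ij = np.where(y[mask] == i, 1, -1)   # +1 for class i, -1 for class j
        models[(i, j)] = make_binary_learner().fit(X[mask], y_ij)
    return models

def all_pairs_predict(models, X, classes):
    # each h_{i,j} votes for i (output +1) or for j (output -1)
    wins = {c: np.zeros(len(X)) for c in classes}
    for (i, j), h in models.items():
        pred = h.predict(X)
        wins[i] += (pred == 1)
        wins[j] += (pred == -1)
    scores = np.column_stack([wins[c] for c in classes])
    return np.asarray(classes)[np.argmax(scores, axis=1)]
```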
Although reduction methods such as the One-versus-All and All-Pairs are simple and easy to construct from existing algorithms, their simplicity has a price. The
binary learner is not aware of the fact that we are going to use its output hypotheses
for constructing a multiclass predictor, and this might lead to suboptimal results, as
illustrated in the following example.
Example 17.1. Consider a multiclass categorization problem in which the instance
space is X = R2 and the label set is Y = {1, 2, 3}. Suppose that instances of the
different classes are located in nonintersecting balls as depicted in the following.
Suppose that the probability masses of classes 1, 2, 3 are 40%, 20%, and 40%,
respectively. Consider the application of One-versus-All to this problem, and
assume that the binary classification algorithm used by One-versus-All is ERM with
respect to the hypothesis class of halfspaces. Observe that for the problem of discriminating between class 2 and the rest of the classes, the optimal halfspace would
be the all-negative classifier. Therefore, the multiclass predictor constructed by One-versus-All might err on all the examples from class 2 (this will be the case if the tie in the definition of h(x) is broken by the numerical value of the class label). In contrast, there is a choice of the vectors w_1, w_2, w_3 for which the classifier defined by h(x) = argmax_i ⟨w_i, x⟩ perfectly predicts all the examples. We see that even though the approximation error of the class of predictors of the form h(x) = argmax_i ⟨w_i, x⟩ is zero, the One-versus-All approach might fail to find a good predictor from this class.
That is, the prediction of h for the input x is the label that achieves the highest
weighted score, where weighting is according to the vector w.
Let W be some set of vectors in R^d, for example, W = {w ∈ R^d : ‖w‖ ≤ B}, for some scalar B > 0. Each pair (Ψ, W) defines a hypothesis class of multiclass predictors:

H_{Ψ,W} = { x ↦ argmax_{y∈Y} ⟨w, Ψ(x, y)⟩ : w ∈ W }.

Of course, the immediate question, which we discuss in the sequel, is how to construct a good Ψ. Note that if Y = {±1} and we set Ψ(x, y) = yx and W = R^d, then H_{Ψ,W} becomes the hypothesis class of homogeneous halfspace predictors for binary classification.
An important example is the multivector construction:

Ψ(x, y) = [ 0, ..., 0,  x_1, ..., x_n,  0, ..., 0 ] ∈ R^{kn},    (17.2)

where the first block of zeros is in R^{(y−1)n}, the nonzero block equals x ∈ R^n, and the last block of zeros is in R^{(k−y)n}.
TF-IDF:
The previous definition of (x, y) does not incorporate any prior knowledge about
the problem. We next describe an example of a feature function that does incorporate prior knowledge. Let X be a set of text documents and Y be a set of possible
topics. Let d be a size of a dictionary of words. For each word in the dictionary,
whose corresponding index is j , let T F( j , x) be the number of times the word corresponding to j appears in the document x. This quantity is called Term-Frequency.
Additionally, let D F( j , y) be the number of times the word corresponding to j
appears in documents in our training set that are not about topic y. This quantity
is called Document-Frequency and measures whether word j is frequent in other
topics. Now, define Ψ : X × Y → R^d to be such that

Ψ_j(x, y) = TF(j, x) · log( m / DF(j, y) ),
where m is the total number of documents in our training set. The preceding quantity
is called term-frequency-inverse-document-frequency or TF-IDF for short. Intuitively, j (x, y) should be large if the word corresponding to j appears a lot in the
document x but does not appear at all in documents that are not on topic y. If this
is the case, we tend to believe that the document x is on topic y. Note that unlike
the multivector construction described previously, in the current construction the
dimension of does not depend on the number of topics (i.e., the size of Y).
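For concreteness (a sketch, not from the original text), the TF-IDF feature map can be computed as follows for documents represented as lists of word indices; the dictionary size `d`, the training corpus `docs`/`topics`, and the fallback used when DF(j, y) = 0 are assumptions of the sketch.

```python
import numpy as np
from collections import Counter

def tfidf_features(doc, topic, docs, topics, d):
    """Psi_j(doc, topic) = TF(j, doc) * log(m / DF(j, topic)).

    doc    : list of word indices in {0, ..., d-1}
    topic  : candidate topic label
    docs   : list of training documents (each a list of word indices)
    topics : list of training topic labels, aligned with docs
    """
    m = len(docs)
    tf = Counter(doc)
    psi = np.zeros(d)
    for j, count in tf.items():
        # DF(j, topic): number of training documents not about `topic`
        # in which word j appears
        df = sum(1 for x, t in zip(docs, topics) if t != topic and j in x)
        # when DF = 0 the formula is undefined; we fall back to DF = 1
        # (a smoothing assumption of this sketch, not part of the text)
        psi[j] = count * np.log(m / max(df, 1))
    return psi
```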
17.2.3 ERM
We have defined the hypothesis class H_{Ψ,W} and specified a loss function Δ. To learn the class with respect to the loss function, we can apply the ERM rule with respect to this class. That is, we search for a multiclass hypothesis h ∈ H_{Ψ,W}, parameterized by a vector w, that minimizes the empirical risk with respect to Δ,

L_S(h) = (1/m) Σ_{i=1}^m Δ(h(x_i), y_i).
We now show that when W = R^d and we are in the realizable case, then it is possible to solve the ERM problem efficiently using linear programming. Indeed, in the realizable case, we need to find a vector w ∈ R^d that satisfies

∀i ∈ [m],  y_i = argmax_{y∈Y} ⟨w, Ψ(x_i, y)⟩.

Equivalently, we need that w will satisfy the following set of linear inequalities

∀i ∈ [m], ∀y ∈ Y \ {y_i},  ⟨w, Ψ(x_i, y_i)⟩ > ⟨w, Ψ(x_i, y)⟩.

Finding w that satisfies the preceding set of linear inequalities amounts to solving a linear program.
As in the case of binary classification, it is also possible to use a generalization
of the Perceptron algorithm for solving the ERM problem. See Exercise 17.2.
In the nonrealizable case, solving the ERM problem is in general computationally hard. We tackle this difficulty using the method of convex surrogate loss
functions (see Section 12.3). In particular, we generalize the hinge loss to multiclass
problems.
Recall that a surrogate convex loss should upper bound the original nonconvex loss, which in our case is Δ(h_w(x), y). To derive an upper bound on Δ(h_w(x), y) we first note that the definition of h_w(x) implies that

⟨w, Ψ(x, y)⟩ ≤ ⟨w, Ψ(x, h_w(x))⟩.

Therefore,

Δ(h_w(x), y) ≤ Δ(h_w(x), y) + ⟨w, Ψ(x, h_w(x)) − Ψ(x, y)⟩.
Since h_w(x) ∈ Y, we can upper bound the right-hand side of the preceding by

max_{y′∈Y} ( Δ(y′, y) + ⟨w, Ψ(x, y′) − Ψ(x, y)⟩ )  ≝  ℓ(w, (x, y)).    (17.3)
We use the term generalized hinge loss to denote the preceding expression. As we have shown, ℓ(w, (x, y)) ≥ Δ(h_w(x), y). Furthermore, equality holds whenever the score of the correct label is larger than the score of any other label, y′, by at least Δ(y′, y), namely,

∀y′ ∈ Y \ {y},  ⟨w, Ψ(x, y)⟩ ≥ ⟨w, Ψ(x, y′)⟩ + Δ(y′, y).

It is also immediate to see that ℓ(w, (x, y)) is a convex function with respect to w, since it is a maximum over linear functions of w (see Claim 12.5 in Chapter 12), and that ℓ(w, (x, y)) is ρ-Lipschitz with ρ = max_{y′∈Y} ‖Ψ(x, y′) − Ψ(x, y)‖.
Remark 17.2. We use the name generalized hinge loss since in the binary case, when Y = {±1}, if we set Ψ(x, y) = yx/2, then the generalized hinge loss becomes the vanilla hinge loss for binary classification,

ℓ(w, (x, y)) = max{0, 1 − y⟨w, x⟩}.
Geometric Intuition:
The feature function Ψ : X × Y → R^d maps each x into |Y| vectors in R^d, one for each y′ ∈ Y. The value of ℓ(w, (x, y)) will be zero if there exists a direction w such that, when projecting the |Y| vectors onto this direction, each vector is represented by the scalar ⟨w, Ψ(x, y′)⟩, and we can rank the different points on the basis of these scalars so that the point corresponding to the correct y is top-ranked and, for each y′ ≠ y, the difference between ⟨w, Ψ(x, y)⟩ and ⟨w, Ψ(x, y′)⟩ is larger than the loss of predicting y′ instead of y. The difference ⟨w, Ψ(x, y)⟩ − ⟨w, Ψ(x, y′)⟩ is also referred to as the margin (see Section 15.1). (The original figure illustrates this for three labels y, y′, y″: the points Ψ(x, y), Ψ(x, y′), Ψ(x, y″) are projected onto the direction w, with gaps of at least Δ(y, y′) and Δ(y, y″) separating the correct point from the other two.)
Multiclass SVM
input: (x_1, y_1), ..., (x_m, y_m)
parameters:
  regularization parameter λ > 0
  loss function Δ : Y × Y → R_+
  class-sensitive feature mapping Ψ : X × Y → R^d
solve:
  min_{w∈R^d}  λ‖w‖² + (1/m) Σ_{i=1}^m max_{y′∈Y} ( Δ(y′, y_i) + ⟨w, Ψ(x_i, y′) − Ψ(x_i, y_i)⟩ )
output the predictor h_w(x) = argmax_{y∈Y} ⟨w, Ψ(x, y)⟩
Corollary 17.1. Let D be a distribution over X × Y, let Ψ : X × Y → R^d, and assume that for all x ∈ X and y ∈ Y we have ‖Ψ(x, y)‖ ≤ ρ/2. Let B > 0. Consider running Multiclass SVM with λ = √( 2ρ² / (B² m) ) on a training set S ∼ D^m, and let h_w be the output of Multiclass SVM. Then,

E_{S∼D^m}[ L_D^Δ(h_w) ]  ≤  E_{S∼D^m}[ L_D^{g-hinge}(w) ]  ≤  min_{u : ‖u‖ ≤ B} L_D^{g-hinge}(u) + √( 8ρ²B² / m ),

where L_D^Δ(h) = E_{(x,y)∼D}[Δ(h(x), y)] and L_D^{g-hinge}(w) = E_{(x,y)∼D}[ℓ(w, (x, y))], with ℓ being the generalized hinge loss as defined in Equation (17.3).
SGD for Multiclass Learning
parameters:
  Scalar η > 0, integer T > 0
  loss function Δ : Y × Y → R_+
  class-sensitive feature mapping Ψ : X × Y → R^d
initialize: w^(1) = 0 ∈ R^d
for t = 1, 2, ..., T
  sample (x, y) ∼ D
  find ŷ ∈ argmax_{y′∈Y} ( Δ(y′, y) + ⟨w^(t), Ψ(x, y′) − Ψ(x, y)⟩ )
  set v_t = Ψ(x, ŷ) − Ψ(x, y)
  update w^(t+1) = w^(t) − η v_t
output w̄ = (1/T) Σ_{t=1}^T w^(t)
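A direct transcription of this pseudocode as a sketch (not from the original text); the feature map `psi(x, y)`, the loss `delta(y1, y2)`, and the use of resampled training examples as a stand-in for fresh draws from D are assumptions of the sketch.

```python
import numpy as np

def sgd_multiclass(X, y, labels, psi, delta, eta, T, d, seed=0):
    """SGD for multiclass learning with the generalized hinge loss.

    psi(x, label) -> numpy array in R^d; delta(y1, y2) -> nonnegative loss.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for _ in range(T):
        w_sum += w                      # accumulate w^(t) for the averaged output
        i = rng.integers(len(X))
        x, yi = X[i], y[i]
        # y_hat = argmax_{y'} delta(y', y) + <w, psi(x, y') - psi(x, y)>
        scores = [delta(yp, yi) + w @ (psi(x, yp) - psi(x, yi)) for yp in labels]
        y_hat = labels[int(np.argmax(scores))]
        w = w - eta * (psi(x, y_hat) - psi(x, yi))
    return w_sum / T

def predict(w, x, labels, psi):
    return labels[int(np.argmax([w @ psi(x, yp) for yp in labels]))]
```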
Our general analysis of SGD given in Corollary 14.12 immediately implies:
Corollary 17.2. Let D be a distribution over X × Y, let Ψ : X × Y → R^d, and assume that for all x ∈ X and y ∈ Y we have ‖Ψ(x, y)‖ ≤ ρ/2. Let B > 0. Then, for every ε > 0, if we run SGD for multiclass learning with a number of iterations (i.e., number of examples)

T ≥ B²ρ² / ε²

and with η = √( B² / (ρ² T) ), then the output of SGD satisfies

E_{S∼D^m}[ L_D^Δ(h_w̄) ]  ≤  E_{S∼D^m}[ L_D^{g-hinge}(w̄) ]  ≤  min_{u : ‖u‖ ≤ B} L_D^{g-hinge}(u) + ε.
Remark 17.3. It is interesting to note that the risk bounds given in Corollary 17.1 and Corollary 17.2 do not depend explicitly on the size of the label set Y, a fact we will rely on in the next section. However, the bounds may depend implicitly on the size of Y via the norm of Ψ(x, y) and the fact that the bounds are meaningful only when there exists some vector u, ‖u‖ ≤ B, for which L_D^{g-hinge}(u) is not excessively large.
Ψ_{i,j,1}(x, y) = (1/r) Σ_{t=1}^r x_{i,t} 1[y_t = j].

That is, we sum the value of the i-th pixel only over the images for which y assigns the letter j. The triple index (i, j, 1) indicates that we are dealing with feature (i, j) of type 1. Intuitively, such features can capture pixels in the image whose gray level
values are indicative of a certain letter. The second type of features take the form

Ψ_{i,j,2}(x, y) = (1/r) Σ_{t=2}^r 1[y_t = i] 1[y_{t−1} = j].
That is, we sum the number of times the letter i follows the letter j. Intuitively, these features can capture rules like "it is likely to see the pair 'qu' in a word" or "it is unlikely to see the pair 'rz' in a word." Of course, some of these features will not be very useful, so the goal of the learning process is to assign weights to features by learning the vector w, so that the weighted score will give us a good prediction via

h_w(x) = argmax_{y∈Y} ⟨w, Ψ(x, y)⟩.
It is left to show how to solve the optimization problem in the definition of h w (x)
efficiently, as well as how to solve the optimization problem in the definition of y in
the SGD algorithm. We can do this by applying a dynamic programming procedure.
We describe the procedure for solving the maximization in the definition of h w and
leave as an exercise the maximization problem in the definition of y in the SGD
algorithm.
To derive the dynamic programming procedure, let us first observe that we can write

Ψ(x, y) = Σ_{t=1}^r φ(x, y_t, y_{t−1}),

for an appropriate φ : X × [q] × ([q] ∪ {0}) → R^d, and for simplicity we assume that y_0 is always equal to 0. Indeed, each feature function Ψ_{i,j,1} can be written in terms of

φ_{i,j,1}(x, y_t, y_{t−1}) = x_{i,t} 1[y_t = j],

while the feature function Ψ_{i,j,2} can be written in terms of

φ_{i,j,2}(x, y_t, y_{t−1}) = 1[y_t = i] 1[y_{t−1} = j].
Therefore, the prediction can be written as

h_w(x) = argmax_{y∈Y} Σ_{t=1}^r ⟨w, φ(x, y_t, y_{t−1})⟩.    (17.4)

For each τ ∈ [r] and s ∈ [q], let M_{s,τ} be the maximum of Σ_{t=1}^τ ⟨w, φ(x, y_t, y_{t−1})⟩ over all sequences (y_1, ..., y_τ) whose last letter is y_τ = s. Clearly, the maximum of ⟨w, Ψ(x, y)⟩ equals max_s M_{s,r}. Furthermore, we can calculate M in a recursive manner:

M_{s,τ} = max_{s′} ( M_{s′,τ−1} + ⟨w, φ(x, s, s′)⟩ ).    (17.5)
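The recursion in Equation (17.5) is a standard dynamic program. Below is a small sketch (not from the original text); `score(s, s_prev, t)` stands for ⟨w, φ(x, s, s_prev)⟩ at position t and is an assumption of the sketch.

```python
import numpy as np

def dp_decode(r, q, score):
    """Maximize sum_t <w, phi(x, y_t, y_{t-1})> over label sequences y in [q]^r.

    score(s, s_prev, t): value of <w, phi(x, s, s_prev)> when y_t = s and
    y_{t-1} = s_prev (with s_prev = 0 meaning "start", i.e. t = 1).
    Labels are encoded as 1, ..., q; returns the maximizing sequence.
    """
    M = np.full((q + 1, r + 1), -np.inf)     # M[s, tau], 1-based in s and tau
    back = np.zeros((q + 1, r + 1), dtype=int)
    for s in range(1, q + 1):
        M[s, 1] = score(s, 0, 1)
    for tau in range(2, r + 1):
        for s in range(1, q + 1):
            vals = [M[sp, tau - 1] + score(s, sp, tau) for sp in range(1, q + 1)]
            back[s, tau] = 1 + int(np.argmax(vals))
            M[s, tau] = max(vals)
    # backtrack from the best final state
    y = [1 + int(np.argmax(M[1:, r]))]
    for tau in range(r, 1, -1):
        y.append(back[y[-1], tau])
    return list(reversed(y))
```

The same routine can be used, with a modified `score`, for the maximization defining ŷ in the SGD algorithm, since Δ(y′, y) for the sequence loss decomposes over positions.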
17.4 RANKING
Ranking is the problem of ordering a set of instances according to their relevance. A typical application is ordering results of a search engine according to their
relevance to the query. Another example is a system that monitors electronic transactions and should alert for possible fraudulent transactions. Such a system should
order transactions according to how suspicious they are.
Formally, let X★ = ∪_{n=1}^∞ X^n be the set of all sequences of instances from X of arbitrary length. A ranking hypothesis, h, is a function that receives a sequence of instances x̄ = (x_1, ..., x_r) ∈ X★, and returns a permutation of [r]. It is more convenient to let the output of h be a vector y ∈ R^r, where by sorting the elements of y we obtain the permutation over [r]. We denote by π(y) the permutation over [r] induced by y. For example, for r = 5, the vector y = (2, 1, 6, −1, 0.5) induces the permutation π(y) = (4, 3, 5, 1, 2). That is, if we sort y in ascending order, then we obtain the vector (−1, 0.5, 1, 2, 6). Now, π(y)_i is the position of y_i in the sorted vector (−1, 0.5, 1, 2, 6). This notation reflects that the top-ranked instances are those that achieve the highest values in π(y).
In the notation of our PAC learning model, the examples' domain is Z = ∪_{r=1}^∞ (X^r × R^r), and the hypothesis class, H, is some set of ranking hypotheses. We next turn to describe loss functions for ranking. There are many possible ways to define such loss functions, and here we list a few examples. In all the examples we define ℓ(h, (x̄, y)) = Δ(h(x̄), y), for some function Δ : ∪_{r=1}^∞ (R^r × R^r) → R_+.
0–1 Ranking loss: Δ(y′, y) is zero if y′ and y induce exactly the same ranking and Δ(y′, y) = 1 otherwise. That is, Δ(y′, y) = 1[π(y′) ≠ π(y)]. Such a loss function is almost never used in practice as it does not distinguish between the case in which π(y′) is almost equal to π(y) and the case in which π(y′) is completely different from π(y).
Kendall-Tau Loss: We count the number of pairs (i, j) that are in a different order in the two permutations. This can be written as

Δ(y′, y) = ( 2 / (r(r−1)) ) Σ_{i=1}^{r−1} Σ_{j=i+1}^r 1[ sign(y′_i − y′_j) ≠ sign(y_i − y_j) ].

This loss function is more useful than the 0–1 loss as it reflects the level of similarity between the two rankings.
Normalized Discounted Cumulative Gain (NDCG): This measure emphasizes the correctness at the top of the list by using a monotonically nondecreasing discount function D : N → R_+. We first define a discounted cumulative gain measure:

G(y′, y) = Σ_{i=1}^r D( π(y′)_i ) · y_i.

The corresponding normalized loss is

Δ(y′, y) = 1 − G(y′, y)/G(y, y) = (1 / G(y, y)) Σ_{i=1}^r ( D(π(y)_i) − D(π(y′)_i) ) · y_i.

We can easily see that Δ(y′, y) ∈ [0, 1] and that Δ(y′, y) = 0 whenever π(y′) = π(y).
A typical way to define the discount function is by

D(i) = 1 / log₂(r − i + 2)  if i ∈ {r − k + 1, ..., r},  and D(i) = 0 otherwise,
where k < r is a parameter. This means that we care more about elements that
are ranked higher, and we completely ignore elements that are not at the top-k
ranked elements. The NDCG measure is often used to evaluate the performance
of search engines since in such applications it makes sense completely to ignore
elements that are not at the top of the ranking.
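For concreteness, here is a sketch (not from the original text) of the Kendall tau loss and the NDCG loss with the truncated logarithmic discount; the permutation π(y) is computed exactly as defined above, and the sketch assumes G(y, y) > 0.

```python
import numpy as np

def pi(y):
    """pi(y)_i = position (1-based) of y_i in the ascending sort of y."""
    order = np.argsort(y)               # indices of y from smallest to largest
    pos = np.empty(len(y), dtype=int)
    pos[order] = np.arange(1, len(y) + 1)
    return pos

def kendall_tau_loss(y_pred, y):
    r = len(y)
    errs = sum(np.sign(y_pred[i] - y_pred[j]) != np.sign(y[i] - y[j])
               for i in range(r - 1) for j in range(i + 1, r))
    return 2.0 * errs / (r * (r - 1))

def ndcg_loss(y_pred, y, k):
    r = len(y)
    def D(i):                            # discount: only the top-k positions count
        return 1.0 / np.log2(r - i + 2) if i >= r - k + 1 else 0.0
    G = lambda perm: sum(D(perm[i]) * y[i] for i in range(r))
    return 1.0 - G(pi(y_pred)) / G(pi(y))   # assumes G(y, y) > 0
```

For instance, `pi(np.array([2, 1, 6, -1, 0.5]))` returns `[4, 3, 5, 1, 2]`, matching the example given above.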
Once we have a hypothesis class and a ranking loss function, we can learn a
ranking function using the ERM rule. However, from the computational point of
view, the resulting optimization problem might be hard to solve. We next discuss
how to learn linear predictors for ranking.
A linear ranking hypothesis is parameterized by a vector w ∈ R^d and is defined by the function

h_w((x_1, ..., x_r)) = ( ⟨w, x_1⟩, ..., ⟨w, x_r⟩ ).    (17.6)
As we discussed in Chapter 16, we can also apply a feature mapping that maps
instances into some feature space and then takes the inner products with w in the
feature space. For simplicity, we focus on the simpler form as in Equation (17.6).
Given some W ⊆ R^d, we can now define the hypothesis class H_W = {h_w : w ∈ W}. Once we have defined this hypothesis class, and have chosen a ranking loss function, we can apply the ERM rule as follows: Given a training set, S = (x̄_1, y_1), ..., (x̄_m, y_m), where each (x̄_i, y_i) is in (X^{r_i} × R^{r_i}) for some r_i ∈ N, we should search for w ∈ W that minimizes the empirical loss, Σ_{i=1}^m Δ(h_w(x̄_i), y_i). As in the case of binary classification, for many loss functions this problem is computationally hard, and we therefore turn to describe convex surrogate loss functions. We describe the surrogates for the Kendall tau loss and for the NDCG loss.
In our case, y′_i − y′_j = ⟨w, x_i − x_j⟩. It follows that we can use the hinge loss upper bound as follows:

1[ sign(y_i − y_j)(y′_i − y′_j) ≤ 0 ]  ≤  max{ 0, 1 − sign(y_i − y_j) ⟨w, x_i − x_j⟩ }.

Taking the average over the pairs we obtain the following surrogate convex loss for the Kendall tau loss function:

Δ(h_w(x̄), y)  ≤  ( 2 / (r(r−1)) ) Σ_{i=1}^{r−1} Σ_{j=i+1}^r max{ 0, 1 − sign(y_i − y_j) ⟨w, x_i − x_j⟩ }.

The right-hand side is convex with respect to w and upper bounds the Kendall tau loss. It is also a ρ-Lipschitz function with parameter ρ ≤ max_{i,j} ‖x_i − x_j‖.
A convex surrogate for the NDCG loss can be derived from the following observation. Let V be the set of all permutations of [r] encoded as vectors (that is, v ∈ V if v is a rearrangement of (1, ..., r)). Then sorting a score vector y′ amounts to the optimization problem

π(y′) ∈ argmax_{v∈V} Σ_{i=1}^r v_i y′_i.    (17.7)
Denote Ψ(x̄, v) = Σ_{i=1}^r v_i x_i; it follows that

π(h_w(x̄)) = argmax_{v∈V} Σ_{i=1}^r v_i ⟨w, x_i⟩ = argmax_{v∈V} ⟨w, Σ_{i=1}^r v_i x_i⟩ = argmax_{v∈V} ⟨w, Ψ(x̄, v)⟩.
On the basis of this observation, we can use the generalized hinge loss for cost-sensitive multiclass classification as a surrogate loss function for the NDCG loss as follows:

Δ(h_w(x̄), y) ≤ Δ(h_w(x̄), y) + ⟨w, Ψ(x̄, π(h_w(x̄)))⟩ − ⟨w, Ψ(x̄, π(y))⟩
            ≤ max_{v∈V} ( Δ(v, y) + ⟨w, Ψ(x̄, v)⟩ − ⟨w, Ψ(x̄, π(y))⟩ )
            = max_{v∈V} ( Δ(v, y) + Σ_{i=1}^r ( v_i − π(y)_i ) ⟨w, x_i⟩ ).    (17.8)
By the definition of the NDCG loss, the maximization over v ∈ V in Equation (17.8) amounts to solving a problem of the form

argmin_{v∈V} Σ_{i=1}^r ( α_i v_i + β_i D(v_i) ),

where α_i = −⟨w, x_i⟩ and β_i = y_i / G(y, y). We can think of this problem a little bit differently by defining a matrix A ∈ R^{r,r} where

A_{i,j} = j α_i + D(j) β_i.
Now, let us think about each j as a worker, each i as a task, and Ai, j as the cost
of assigning task i to worker j . With this view, the problem of finding v becomes
the problem of finding an assignment of the tasks to workers of minimal cost. This
problem is called the assignment problem and can be solved efficiently. One particular algorithm is the Hungarian method (Kuhn 1955). Another way to solve
the assignment problem is using linear programming. To do so, let us first write the
assignment problem as

argmin_{B ∈ R_+^{r,r}}  Σ_{i,j=1}^r A_{i,j} B_{i,j}    (17.9)
s.t.   ∀i ∈ [r],  Σ_{j=1}^r B_{i,j} = 1
       ∀j ∈ [r],  Σ_{i=1}^r B_{i,j} = 1
       ∀i, j,  B_{i,j} ∈ {0, 1}.
A matrix B that satisfies the constraints in the preceding optimization problem is called a permutation matrix. This is because the constraints guarantee that there is exactly one entry of each row that equals 1 and exactly one entry of each column that equals 1. Therefore, the matrix B corresponds to the permutation v ∈ V defined by v_i = j for the single index j that satisfies B_{i,j} = 1.
The preceding optimization is still not a linear program because of the combinatorial constraint B_{i,j} ∈ {0, 1}. However, as it turns out, this constraint is redundant: if we solve the optimization problem while simply omitting the combinatorial constraint, then we are still guaranteed that there is an optimal solution that will satisfy this constraint. This is formalized in what follows.
Denote ⟨A, B⟩ = Σ_{i,j} A_{i,j} B_{i,j}. Then, Equation (17.9) is the problem of minimizing ⟨A, B⟩ such that B is a permutation matrix.
A matrix B ∈ R^{r,r} is called doubly stochastic if all elements of B are nonnegative, the sum of each row of B is 1, and the sum of each column of B is 1. Therefore, solving Equation (17.9) without the constraints B_{i,j} ∈ {0, 1} is the problem

argmin_{B ∈ R^{r,r}} ⟨A, B⟩  s.t.  B is a doubly stochastic matrix.    (17.10)
The following claim states that every doubly stochastic matrix is a convex
combination of permutation matrices.
Claim 17.3 (Birkhoff 1946, Von Neumann 1953). The set of doubly stochastic
matrices in Rr,r is the convex hull of the set of permutation matrices in Rr,r .
On the basis of the claim, we easily obtain the following:
Lemma 17.4. There exists an optimal solution of Equation (17.10) that is also an
optimal solution of Equation (17.9).
Proof. Let B★ be a solution of Equation (17.10). Then, by Claim 17.3, we can write B★ = Σ_i γ_i C_i, where each C_i is a permutation matrix, each γ_i > 0, and Σ_i γ_i = 1. Since all the C_i are also doubly stochastic, we clearly have that ⟨A, B★⟩ ≤ ⟨A, C_i⟩ for every i. We claim that there is some i for which ⟨A, B★⟩ = ⟨A, C_i⟩. This must be true since otherwise, if for every i ⟨A, B★⟩ < ⟨A, C_i⟩, we would have that

⟨A, B★⟩ = ⟨ A, Σ_i γ_i C_i ⟩ = Σ_i γ_i ⟨A, C_i⟩ > Σ_i γ_i ⟨A, B★⟩ = ⟨A, B★⟩,
which cannot hold. We have thus shown that some permutation matrix, C_i, satisfies ⟨A, B★⟩ = ⟨A, C_i⟩. But, since for every other permutation matrix C we have ⟨A, B★⟩ ≤ ⟨A, C⟩, we conclude that C_i is an optimal solution of both Equation (17.9) and Equation (17.10).
Given a true label vector y ∈ {±1}^r and a prediction vector y′ ∈ {±1}^r, denote

a = |{i : y′_i = +1 ∧ y_i = +1}|,   b = |{i : y′_i = +1 ∧ y_i = −1}|,
c = |{i : y′_i = −1 ∧ y_i = +1}|,   d = |{i : y′_i = −1 ∧ y_i = −1}|.    (17.11)

The recall (a.k.a. sensitivity) of a prediction vector is the fraction of true positives it catches, namely, a/(a+c). The precision is the fraction of correct predictions among the instances we predict to be positive, namely, a/(a+b). Several performance measures combine the two.

Averaged sensitivity and specificity: This is the average of the accuracy on positive examples and the accuracy on negative examples, namely, ½( a/(a+c) + d/(d+b) ). Here, we set θ = 0, and the corresponding loss function is Δ(y′, y) = 1 − ½( a/(a+c) + d/(d+b) ).
F_1-score: The F_1 score is the harmonic mean of the precision and recall: 2 / ( 1/Precision + 1/Recall ). Its maximal value (of 1) is obtained when both precision and recall are 1, and its minimal value (of 0) is obtained whenever one of them is 0 (even if the other one is 1). The F_1 score can be written using the numbers a, b, c as follows: F_1 = 2a / (2a + b + c). Again, we set θ = 0, and the loss function becomes Δ(y′, y) = 1 − F_1.
F_β-score: It is like the F_1 score, but we attach β² times more importance to recall than to precision, that is, (1 + β²) / ( 1/Precision + β²/Recall ). It can also be written as F_β = (1 + β²)a / ( (1 + β²)a + b + β²c ). Again, we set θ = 0, and the loss function becomes Δ(y′, y) = 1 − F_β.
Recall at k: We measure the recall while the prediction must contain at most k positive labels. That is, we should set θ so that a + b ≤ k. This is convenient, for example, in the application of a fraud detection system, where a bank employee can only handle a small number of suspicious transactions.
Precision at k: We measure the precision while the prediction must contain at least k positive labels. That is, we should set θ so that a + b ≥ k.
The measures defined previously are often referred to as multivariate performance measures. Note that these measures are highly different from the average zero-one loss, which in the preceding notation equals (b + c)/(a + b + c + d). In the aforementioned example of fraud detection, when 99.9% of the examples are negatively labeled, the zero-one loss of predicting that all the examples are negative is 0.1%. In contrast, the recall of such a prediction is 0 and hence the F_1 score is also 0, which means that the corresponding loss will be 1.
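For reference, here is a small sketch (not from the original text) computing these measures directly from the four counts; it assumes the relevant denominators are nonzero.

```python
def counts(y_pred, y_true):
    """a, b, c, d for label vectors with entries in {+1, -1}."""
    a = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))    # true positives
    b = sum(p == 1 and t == -1 for p, t in zip(y_pred, y_true))   # false positives
    c = sum(p == -1 and t == 1 for p, t in zip(y_pred, y_true))   # false negatives
    d = sum(p == -1 and t == -1 for p, t in zip(y_pred, y_true))  # true negatives
    return a, b, c, d

def recall(a, b, c, d):
    return a / (a + c)

def precision(a, b, c, d):
    return a / (a + b)

def f_beta(a, b, c, d, beta=1.0):
    return (1 + beta ** 2) * a / ((1 + beta ** 2) * a + b + beta ** 2 * c)

def avg_sens_spec(a, b, c, d):
    return 0.5 * (a / (a + c) + d / (d + b))
```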
As in the previous section, to facilitate an efficient algorithm we derive a convex surrogate loss function on Δ. The derivation is similar to the derivation of the generalized hinge loss for the NDCG ranking loss, as described in the previous section.
Our first observation is that for all the values of θ defined before, there is some V ⊆ {±1}^r such that b(ȳ) can be rewritten as

b(ȳ) = argmax_{v∈V} Σ_{i=1}^r v_i ȳ_i.    (17.13)

This is clearly true for the case θ = 0 if we choose V = {±1}^r. The two measures for which θ is not taken to be 0 are precision at k and recall at k. For precision at k we can take V to be the set V_{≥k}, containing all vectors in {±1}^r whose number of ones is at least k. For recall at k, we can take V to be V_{≤k}, which is defined analogously. See Exercise 17.5.
Once we have defined b as in Equation (17.13), we can easily derive a convex surrogate loss as follows. Assuming that y ∈ V, we have that

Δ(h_w(x̄), y) = Δ( b(h_w(x̄)), y )
            ≤ Δ( b(h_w(x̄)), y ) + Σ_{i=1}^r ( b(h_w(x̄))_i − y_i ) ⟨w, x_i⟩
            ≤ max_{v∈V} ( Δ(v, y) + Σ_{i=1}^r ( v_i − y_i ) ⟨w, x_i⟩ ).    (17.14)
To solve the maximization in Equation (17.14), note that Δ(v, y) depends on v only through the numbers a, b, c, d. Hence, we can enumerate the possible values of a and b and, for each such pair, solve

argmax_{v ∈ V̄_{a,b}} Σ_{i=1}^r v_i ⟨w, x_i⟩,

where V̄_{a,b} denotes the set of vectors v ∈ V that induce these values of a and b.
Suppose the examples are sorted so that ⟨w, x_1⟩ ≥ ⋯ ≥ ⟨w, x_r⟩. Then, it is easy to verify that we would like to set v_i to be positive for the smallest indices i. Doing this, with the constraint on a, b, amounts to setting v_i = 1 for the a top-ranked positive examples and for the b top-ranked negative examples. This yields the following procedure.
Solving Equation (17.14)
input:
  (x_1, ..., x_r), (y_1, ..., y_r), w, V, Δ
assumptions:
  Δ is a function of a, b, c, d
  V contains all vectors for which f(a, b) = 1, for some function f
initialize:
  P = |{i : y_i = 1}|, N = |{i : y_i = −1}|
  μ = (⟨w, x_1⟩, ..., ⟨w, x_r⟩), α★ = −∞
  sort examples so that μ_1 ≥ μ_2 ≥ ⋯ ≥ μ_r
  let i_1, ..., i_P be the (sorted) indices of the positive examples
  let j_1, ..., j_N be the (sorted) indices of the negative examples
for a = 0, 1, ..., P
  c = P − a
  for b = 0, 1, ..., N such that f(a, b) = 1
    d = N − b
    calculate Δ using a, b, c, d
    set v_1, ..., v_r s.t. v_{i_1} = ⋯ = v_{i_a} = v_{j_1} = ⋯ = v_{j_b} = 1
      and the rest of the elements of v equal −1
    set α = Δ + Σ_{i=1}^r v_i μ_i
    if α ≥ α★ then α★ = α, v★ = v
output v★
17.6 SUMMARY
Many real world supervised learning problems can be cast as learning a multiclass
predictor. We started the chapter by introducing reductions of multiclass learning
to binary learning. We then described and analyzed the family of linear predictors
for multiclass learning. We have shown how this family can be used even if the
number of classes is extremely large, as long as we have an adequate structure on
the problem. Finally, we have described ranking problems. In Chapter 29 we study
the sample complexity of multiclass learning in more detail.
17.8 EXERCISES
17.1 Consider a set S of examples in R^n × [k] for which there exist vectors μ_1, ..., μ_k such that every example (x, y) ∈ S falls within a ball centered at μ_y whose radius is r ≤ 1. Assume also that for every i ≠ j, ‖μ_i − μ_j‖ ≥ 4r. Consider concatenating each instance with the constant 1 and then applying the multivector construction, namely,

Ψ(x, y) = [ 0, ..., 0,  x_1, ..., x_n, 1,  0, ..., 0 ],

where the first block of zeros is in R^{(y−1)(n+1)}, the middle block (x_1, ..., x_n, 1) is in R^{n+1}, and the last block of zeros is in R^{(k−y)(n+1)}.
Show that there exists a vector w ∈ R^{k(n+1)} such that ℓ(w, (x, y)) = 0 for every (x, y) ∈ S.
Hint: Observe that for every example (x, y) ∈ S we can write x = μ_y + v for some ‖v‖ ≤ r. Now, take w★ = [w_1, ..., w_k], where w_i = [μ_i, −‖μ_i‖²/2].
18
Decision Trees
(Figure: a decision tree for predicting whether a papaya is tasty, with internal nodes asking about the papaya's color and softness, and leaves labeled "Tasty" and "Not-tasty.")
To check whether a given papaya is tasty or not, the decision tree first examines the color of the papaya. If this color is not in the range pale green to pale yellow, then the tree immediately predicts that the papaya is not tasty without additional tests. Otherwise, the tree turns to examine the softness of the papaya. If the softness level of the papaya is such that it gives slightly to palm pressure, the decision tree predicts that the papaya is tasty. Otherwise, the prediction is "not-tasty." The preceding example underscores one of the main advantages of decision trees: the resulting classifier is very simple to understand and interpret.
Overall, there are d + 3 options, hence we need log2 (d + 3) bits to describe each
block.
Assuming each internal node has two children,1 it is not hard to show that this
is a prefix-free encoding of the tree, and that the description length of a tree with n
nodes is (n + 1) log2 (d + 3).
By Theorem 7.7 we have that, with probability of at least 1 − δ over a sample of size m, for every n and every decision tree h ∈ H with n nodes it holds that

L_D(h) ≤ L_S(h) + √( ( (n+1) log₂(d+3) + log(2/δ) ) / (2m) ).    (18.1)
This bound performs a tradeoff: on the one hand, we expect larger, more complex
decision trees to have a smaller training risk, L S (h), but the respective value of n
will be larger. On the other hand, smaller decision trees will have a smaller value of
n, but L S (h) might be larger. Our hope (or prior knowledge) is that we can find a
decision tree with both low empirical risk, L S (h), and a number of nodes n not too
high. Our bound indicates that such a tree will have low true risk, L D (h).
We may assume this without loss of generality, because if a decision node has only one child, we can
replace the node by its child without affecting the predictions of the decision tree.
More precisely, if NP ≠ P then no algorithm can solve Equation (18.1) in time polynomial in n, d, and m.
ID3(S, A)
INPUT: training set S, feature subset A ⊆ [d]
if all examples in S are labeled by 1, return a leaf 1
if all examples in S are labeled by 0, return a leaf 0
if A = ∅, return a leaf whose value = majority of labels in S
else:
  Let j = argmax_{i∈A} Gain(S, i)
  if all examples in S have the same label
    Return a leaf whose value = majority of labels in S
  else
    Let T_1 be the tree returned by ID3({(x, y) ∈ S : x_j = 1}, A \ {j}).
    Let T_2 be the tree returned by ID3({(x, y) ∈ S : x_j = 0}, A \ {j}).
    Return the tree whose root asks (x_j = 1?), with T_1 as the subtree for a positive answer and T_2 as the subtree for a negative answer.
Therefore, we can define Gain to be the difference between the two, namely,

Gain(S, i) := C( P_S[y = 1] ) − ( P_S[x_i = 1] · C( P_S[y = 1 | x_i = 1] ) + P_S[x_i = 0] · C( P_S[y = 1 | x_i = 0] ) ).
Information Gain: Another popular gain measure that is used in the ID3 and
C4.5 algorithms of Quinlan (1993) is the information gain. The information gain
is the difference between the entropy of the label before and after the split, and
is achieved by replacing the function C in the previous expression by the entropy
function,
C(a) = −a log(a) − (1 − a) log(1 − a).
Gini Index: Yet another definition of a gain, which is used by the CART
algorithm of Breiman, Friedman, Olshen, and Stone (1984), is the Gini index,
C(a) = 2a(1 − a).
Both the information gain and the Gini index are smooth and concave upper bounds
of the train error. These properties can be advantageous in some situations (see,
for example, Kearns & Mansour (1996)).
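As an illustration (a sketch, not from the original text), the gain of a binary feature under the entropy or Gini criterion can be computed directly from empirical probabilities; the array layout (rows are examples, labels in {0, 1}) is an assumption of the sketch.

```python
import numpy as np

def entropy(a):
    # C(a) = -a log(a) - (1 - a) log(1 - a), with 0 log 0 treated as 0
    if a in (0.0, 1.0):
        return 0.0
    return -a * np.log(a) - (1 - a) * np.log(1 - a)

def gini(a):
    return 2 * a * (1 - a)

def gain(X, y, i, C=entropy):
    """Gain(S, i) = C(P[y=1]) - (P[x_i=1] C(P[y=1|x_i=1]) + P[x_i=0] C(P[y=1|x_i=0]))."""
    xi = X[:, i].astype(bool)
    p1 = xi.mean()                      # P[x_i = 1]
    def pos_rate(mask):
        return y[mask].mean() if mask.any() else 0.0
    return C(y.mean()) - (p1 * C(pos_rate(xi)) + (1 - p1) * C(pos_rate(~xi)))
```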
18.2.2 Pruning
The ID3 algorithm described previously still suffers from a big problem: The
returned tree will usually be very large. Such trees may have low empirical risk,
but their true risk will tend to be high both according to our theoretical analysis,
and in practice. One solution is to limit the number of iterations of ID3, leading
to a tree with a bounded number of nodes. Another common solution is to prune
the tree after it is built, hoping to reduce it to a much smaller tree, but still with a
similar empirical error. Theoretically, according to the bound in Equation (18.1), if
we can make n much smaller without increasing L S (h) by much, we are likely to get
a decision tree with a smaller true risk.
Usually, the pruning is performed by a bottom-up walk on the tree. Each node
might be replaced with one of its subtrees or with a leaf, based on some bound or
estimate of L D (h) (for example, the bound in Equation (18.1)). A pseudocode of a
common template is given in the following.
Generic Tree Pruning Procedure
input:
function f (T , m) (bound/estimate for the generalization error
of a decision tree T , based on a sample of size m),
tree T .
foreach node j in a bottom-up walk on T (from leaves to root):
  find T′ which minimizes f(T′, m), where T′ is any of the following:
    the current tree after replacing node j with a leaf 1.
    the current tree after replacing node j with a leaf 0.
    the current tree after replacing node j with its left subtree.
    the current tree after replacing node j with its right subtree.
    the current tree.
  let T := T′.
The basic idea is to reduce the problem to the case of binary features as follows.
Let x_1, ..., x_m be the instances of the training set. For each real-valued feature i, sort the instances so that x_{1,i} ≤ ⋯ ≤ x_{m,i}. Define a set of thresholds θ_{0,i}, ..., θ_{m+1,i} such that θ_{j,i} ∈ (x_{j,i}, x_{j+1,i}) (where we use the convention x_{0,i} = −∞ and x_{m+1,i} = ∞). Finally, for each i and j we define the binary feature 1[x_i < θ_{j,i}]. Once we have
constructed these binary features, we can run the ID3 procedure described in the
previous section. It is easy to verify that for any decision tree with threshold-based
splitting rules over the original real-valued features there exists a decision tree over
the constructed binary features with the same training error and the same number
of nodes.
If the original number of real-valued features is d and the number of examples
is m, then the number of constructed binary features becomes dm. Calculating the
Gain of each feature might therefore take O(dm 2 ) operations. However, using a
more clever implementation, the runtime can be reduced to O(dm log (m)). The
idea is similar to the implementation of ERM for decision stumps as described in
Section 10.1.1.
18.4 SUMMARY
Decision trees are very intuitive predictors. Typically, if a human programmer
creates a predictor it will look like a decision tree. We have shown that the VC
dimension of decision trees with k leaves is k and proposed the MDL paradigm for
learning decision trees. The main problem with decision trees is that they are computationally hard to learn; therefore we described several heuristic procedures for
training them.
18.6 EXERCISES
18.1 1. Show that any binary classifier h : {0,1}^d → {0,1} can be implemented as a decision tree of height at most d + 1, with internal nodes of the form (x_i = 0?) for some i ∈ {1, ..., d}.
2. Conclude that the VC dimension of the class of decision trees over the domain {0,1}^d is 2^d.
18.2 (Suboptimality of ID3)
Consider the following training set, where X = {0, 1}3 and Y = {0, 1}:
((1, 1, 1), 1)
((1, 0, 0), 1)
((1, 1, 0), 0)
((0, 0, 1), 0)
Suppose we wish to use this training set in order to build a decision tree of depth
2 (i.e., for each input we are allowed to ask two questions of the form (xi = 0?)
before deciding on the label).
1. Suppose we run the ID3 algorithm up to depth 2 (namely, we pick the root
node and its children according to the algorithm, but instead of keeping on
with the recursion, we stop and pick leaves according to the majority label in
each subtree). Assume that the subroutine used to measure the quality of each
feature is based on the entropy function (so we measure the information gain),
and that if two features get the same score, one of them is picked arbitrarily.
Show that the training error of the resulting decision tree is at least 1/4.
2. Find a decision tree of depth 2 that attains zero training error.
19
Nearest Neighbor
Nearest Neighbor algorithms are among the simplest of all machine learning algorithms. The idea is to memorize the training set and then to predict the label of
any new instance on the basis of the labels of its closest neighbors in the training
set. The rationale behind such a method is based on the assumption that the features that are used to describe the domain points are relevant to their labelings in a
way that makes close-by points likely to have the same label. Furthermore, in some
situations, even when the training set is immense, finding a nearest neighbor can
be done extremely fast (for example, when the training set is the entire Web and
distances are based on links).
Note that, in contrast with the algorithmic paradigms that we have discussed
so far, like ERM, SRM, MDL, or RLM, that are determined by some hypothesis
class, H, the Nearest Neighbor method figures out a label on any test point without
searching for a predictor within some predefined class of functions.
In this chapter we describe Nearest Neighbor methods for classification and
regression problems. We analyze their performance for the simple case of binary
classification and discuss the efficiency of implementing these methods.
Figure 19.1. An illustration of the decision boundaries of the 1-NN rule. The points
depicted are the sample points, and the predicted label of any new point will be the
label of the sample point in the center of the cell it belongs to. These cells are called a
Voronoi Tessellation of the space.
For a number k, the k-NN rule for binary classification is defined as follows (here, for a point x, we let π_1(x), ..., π_m(x) denote a reordering of {1, ..., m} according to the distance from x, so that x_{π_1(x)} is the training point closest to x):

k-NN
input: a training sample S = (x_1, y_1), ..., (x_m, y_m)
output: for every point x ∈ X,
  return the majority label among {y_{π_i(x)} : i ≤ k}

When k = 1, we have the 1-NN rule:

h_S(x) = y_{π_1(x)}.

A geometric illustration of the 1-NN rule is given in Figure 19.1.
For regression problems, namely, Y = R, one can define the prediction to be the average target of the k nearest neighbors. That is, h_S(x) = (1/k) Σ_{i=1}^k y_{π_i(x)}. More generally, for some function φ : (X × Y)^k → Y, the k-NN rule with respect to φ is:

h_S(x) = φ( (x_{π_1(x)}, y_{π_1(x)}), ..., (x_{π_k(x)}, y_{π_k(x)}) ).    (19.1)
It is easy to verify that we can cast the prediction by majority of labels (for classification) or by the averaged target (for regression) as in Equation (19.1) by an appropriate choice of φ. The generality can lead to other rules; for example, if Y = R, we can take a weighted average of the targets according to the distance from x:

h_S(x) = Σ_{i=1}^k ( φ(x, x_{π_i(x)}) / Σ_{j=1}^k φ(x, x_{π_j(x)}) ) · y_{π_i(x)}.
19.2 ANALYSIS
Since the NN rules are such natural learning methods, their generalization properties have been extensively studied. Most previous results are asymptotic consistency
results, analyzing the performance of NN rules when the sample size, m, goes to
infinity, and the rate of convergence depends on the underlying distribution. As we
have argued in Section 7.4, this type of analysis is not satisfactory. One would like to
learn from finite training samples and to understand the generalization performance
as a function of the size of such finite training sets and clear prior assumptions on
the data distribution. We therefore provide a finite-sample analysis of the 1-NN rule,
showing how the error decreases as a function of m and how it depends on properties of the distribution. We will also explain how the analysis can be generalized to
k-NN rules for arbitrary values of k. In particular, the analysis specifies the number
of examples required to achieve a true error of 2L_D(h★) + ε, where h★ is the Bayes optimal hypothesis, assuming that the labeling rule is "well behaved" (in a sense we will define later).
Lemma 19.1. Let X = [0, 1]^d, Y = {0, 1}, and let D be a distribution over X × Y for which the conditional probability function, η, is a c-Lipschitz function. Let h_S denote the result of applying the 1-NN rule to a sample S ∼ D^m. Then,

E_{S∼D^m}[ L_D(h_S) ]  ≤  2 L_D(h★)  +  c · E_{S∼D^m, x∼D}[ ‖x − x_{π_1(x)}‖ ].

Proof. Since L_D(h_S) = E_{(x,y)∼D}[1[h_S(x) ≠ y]], we obtain that E_S[L_D(h_S)] is the probability to sample a training set S and an additional example (x, y), such that the label of x_{π_1(x)} is different from y. In other words, we can first sample m unlabeled examples, S_x = (x_1, ..., x_m), according to D_X, and an additional unlabeled example, x ∼ D_X, then find x_{π_1(x)} to be the nearest neighbor of x in S_x, and finally sample
y ∼ η(x) and y′ ∼ η(x_{π_1(x)}). Therefore,

E_{S∼D^m}[ L_D(h_S) ]  =  E_{S_x∼D_X^m, x∼D_X} [ P_{y∼η(x), y′∼η(x_{π_1(x)})}[ y ≠ y′ ] ].    (19.2)

Fix some x, x′ and note that P_{y∼η(x), y′∼η(x′)}[y ≠ y′] = η(x)(1 − η(x′)) + (1 − η(x))η(x′).
Using |2η(x) − 1| ≤ 1 and the assumption that η is c-Lipschitz, we obtain that the probability is at most:

P_{y∼η(x), y′∼η(x′)}[ y ≠ y′ ]  ≤  2η(x)(1 − η(x))  +  c‖x − x′‖.
The expectation of the term ‖x − x_{π_1(x)}‖ is controlled using the following lemma.

Lemma 19.2. Let C_1, ..., C_r be a collection of subsets of the domain X, and let S be a sequence of m points sampled i.i.d. according to a distribution D over X. Then,

E_{S∼D^m} [ Σ_{i : C_i ∩ S = ∅} P[C_i] ]  ≤  r / (m e).

Proof. By the linearity of expectation,

E_S [ Σ_{i : C_i ∩ S = ∅} P[C_i] ]  =  Σ_{i=1}^r P[C_i] · E_S[ 1[C_i ∩ S = ∅] ].
Next, for each i, E_S[1[C_i ∩ S = ∅]] = (1 − P[C_i])^m ≤ e^{−P[C_i] m}. Therefore,

E_S [ Σ_{i : C_i ∩ S = ∅} P[C_i] ]  ≤  Σ_{i=1}^r P[C_i] e^{−P[C_i] m}  ≤  r · max_i P[C_i] e^{−P[C_i] m},

and the proof follows from the inequality max_{a≥0} a e^{−ma} ≤ 1/(m e).
Equipped with the preceding lemmas we are now ready to state and prove the
main result of this section: an upper bound on the expected error of the 1-NN
learning rule.
Theorem 19.3. Let X = [0, 1]^d, Y = {0, 1}, and D be a distribution over X × Y for which the conditional probability function, η, is a c-Lipschitz function. Let h_S denote the result of applying the 1-NN rule to a sample S ∼ D^m. Then,

E_{S∼D^m}[ L_D(h_S) ]  ≤  2 L_D(h★)  +  4 c √d · m^{−1/(d+1)}.
Proof. Fix some ε = 1/T, for some integer T, let r = T^d and let C_1, ..., C_r be the cover of the set X using boxes of side length ε: Namely, for every (α_1, ..., α_d) ∈ [T]^d, there exists a set C_i of the form {x : ∀j, x_j ∈ [(α_j − 1)/T, α_j/T]}. (For d = 2 and T = 5, these are the 25 axis-aligned squares of side 1/5.)
For each x, x′ in the same box we have ‖x − x′‖ ≤ √d ε, while for any x, x′ ∈ [0, 1]^d we have ‖x − x′‖ ≤ √d. Therefore,

E_{x,S}[ ‖x − x_{π_1(x)}‖ ]  ≤  E_S [ Σ_{i : C_i ∩ S = ∅} P[C_i] · √d  +  Σ_{i : C_i ∩ S ≠ ∅} P[C_i] · √d ε ],

and combining this with Lemma 19.2 we obtain

E_{x,S}[ ‖x − x_{π_1(x)}‖ ]  ≤  √d ( r/(m e) + ε ).
19.4 SUMMARY
The k-NN rule is a very simple learning algorithm that relies on the assumption
that things that look alike must be alike. We formalized this intuition using the
Lipschitzness of the conditional probability. We have shown that with a sufficiently
large training set, the risk of the 1-NN is upper bounded by twice the risk of the
Bayes optimal rule. We have also derived a lower bound that shows the curse of dimensionality: the required sample size might increase exponentially with the
dimension. As a result, NN is usually performed in practice after a dimensionality
reduction preprocessing step. We discuss dimensionality reduction techniques later
on in Chapter 23.
19.6 EXERCISES
In this exercise we will prove the following theorem for the k-NN rule.
Theorem 19.5. Let X = [0, 1]d , Y = {0, 1}, and D be a distribution over X Y for
which the conditional probability function, , is a c-Lipschitz function. Let h S denote
the result of applying the k-NN rule to a sample S ∼ D^m, where k ≥ 10. Let h★ be the Bayes optimal hypothesis. Then,

E_S[ L_D(h_S) ]  ≤  ( 1 + √(8/k) ) L_D(h★)  +  ( 6 c √d + k ) m^{−1/(d+1)}.
19.1 Prove the following lemma.

Lemma 19.6. Let C_1, ..., C_r be a collection of subsets of some domain set, X. Let S be a sequence of m points sampled i.i.d. according to some probability distribution, D over X. Then, for every k ≥ 2,

E_{S∼D^m} [ Σ_{i : |C_i ∩ S| < k} P[C_i] ]  ≤  2rk / m.

Hints:
Show that

E_S [ Σ_{i : |C_i ∩ S| < k} P[C_i] ]  =  Σ_{i=1}^r P[C_i] · P_S[ |C_i ∩ S| < k ].

Fix some i and suppose that k < P[C_i] m / 2. Use Chernoff's bound to show that

P_S[ |C_i ∩ S| < k ]  ≤  P_S[ |C_i ∩ S| < P[C_i] m / 2 ]  ≤  e^{ −P[C_i] m / 8 }.

Use the inequality max_{a≥0} a e^{−ma} ≤ 1/(m e) to show that, for such i,

P[C_i] · P_S[ |C_i ∩ S| < k ]  ≤  P[C_i] e^{ −P[C_i] m / 8 }  ≤  8 / (m e).

Conclude the proof by using the fact that for the case k ≥ P[C_i] m / 2 we clearly have:

P[C_i] · P_S[ |C_i ∩ S| < k ]  ≤  P[C_i]  ≤  2k / m.
19.2 We use the notation y ∼ p as a shorthand for "y is a Bernoulli random variable with expected value p." Prove the following lemma:

Lemma 19.7. Let k ≥ 10 and let Z_1, ..., Z_k be independent Bernoulli random variables with P[Z_i = 1] = p_i. Denote p = (1/k) Σ_i p_i and p′ = (1/k) Σ_{i=1}^k Z_i. Show that

E_{Z_1,...,Z_k} P_{y∼p}[ y ≠ 1[p′ > 1/2] ]  ≤  ( 1 + √(8/k) ) P_{y∼p}[ y ≠ 1[p > 1/2] ].

Hints:
W.l.o.g. assume that p ≤ 1/2. Then, P_{y∼p}[y ≠ 1[p > 1/2]] = p. Let y′ = 1[p′ > 1/2]. Show that

E_{Z_1,...,Z_k} P_{y∼p}[ y ≠ y′ ] − p  ≤  (1 − 2p) e^{ −k p · h( 1/(2p) − 1 ) },

where

h(a) = (1 + a) log(1 + a) − a.

To conclude the proof of the lemma, you can rely on the following inequality (without proving it): For every p ∈ [0, 1/2] and k ≥ 10,

(1 − 2p) e^{ −k p + (k/2)( log(2p) + 1 ) }  ≤  √(8/k) · p.
19.3 Fix some p, p′ ∈ [0, 1] and y′ ∈ {0, 1}. Show that

P_{y∼p}[ y ≠ y′ ]  ≤  P_{y∼p′}[ y ≠ y′ ]  +  |p − p′|.
19.4 Conclude the proof of the theorem according to the following steps:
As in the proof of Theorem 19.3, fix some ε > 0 and let C_1, ..., C_r be the cover of the set X using boxes of side length ε. For each x, x′ in the same box we have ‖x − x′‖ ≤ √d ε. Otherwise, ‖x − x′‖ ≤ 2√d. Show that

E_S[ L_D(h_S) ]  ≤  E_S[ Σ_{i : |C_i ∩ S| < k} P[C_i] ]  +  max_i P_{S,(x,y)}[ h_S(x) ≠ y  |  ∀j ∈ [k], ‖x − x_{π_j(x)}‖ ≤ ε√d ].    (19.3)

Let p denote the average of η(x_{π_1(x)}), ..., η(x_{π_k(x)}) and, w.l.o.g., assume that p ≤ 1/2. Now use Lemma 19.7 to show that

P_{y_1,...,y_k, y}[ h_S(x) ≠ y ]  ≤  ( 1 + √(8/k) ) P_{y∼p}[ 1[p > 1/2] ≠ y ].

Show that

P_{y∼p}[ 1[p > 1/2] ≠ y ]  =  p  =  min{p, 1 − p}  ≤  min{η(x), 1 − η(x)}  +  |p − η(x)|.

Combine the preceding inequalities with the Lipschitzness of η to show that the second summand in Equation (19.3) is at most

( 1 + √(8/k) ) L_D(h★)  +  3 c ε √d.

Use r = (2/ε)^d to obtain that:

E_S[ L_D(h_S) ]  ≤  ( 1 + √(8/k) ) L_D(h★)  +  3 c ε √d  +  2 (2/ε)^d k / m.

Finally, set ε = 2 m^{−1/(d+1)} and conclude that

E_S[ L_D(h_S) ]  ≤  ( 1 + √(8/k) ) L_D(h★)  +  ( 6 c √d + k ) m^{−1/(d+1)}.
20
Neural Networks
A widely used heuristic for training neural networks relies on the SGD framework we studied in Chapter 14. There, we have shown that SGD is a successful
learner if the loss function is convex. In neural networks, the loss function is highly
nonconvex. Nevertheless, we can still implement the SGD algorithm and hope it will
find a reasonable solution (as happens to be the case in several practical tasks). In
Section 20.6 we describe how to implement SGD for neural networks. In particular,
the most complicated operation is the calculation of the gradient of the loss function with respect to the parameters of the network. We present the backpropagation
algorithm that efficiently calculates the gradient.
a_{t+1,j}(x)  =  Σ_{r : (v_{t,r}, v_{t+1,j}) ∈ E} w( (v_{t,r}, v_{t+1,j}) ) · o_{t,r}(x),

and

o_{t+1,j}(x)  =  σ( a_{t+1,j}(x) ).
That is, the input to vt+1, j is a weighted sum of the outputs of the neurons in Vt that
are connected to vt+1, j , where weighting is according to w, and the output of vt+1, j
is simply the application of the activation function on its input.
Layers V1 , . . . , VT 1 are often called hidden layers. The top layer, VT , is called
the output layer. In simple prediction problems the output layer contains a single
neuron whose output is the output of the network.
We refer to T as the number of layers in the network (excluding V0 ), or the
depth of the network. The size of the network is |V |. The width of the network
is maxt |Vt |. An illustration of a layered feedforward neural network of depth 2, size
10, and width 5, is given in the following. Note that there is a neuron in the hidden
layer that has no incoming edges. This neuron will output the constant (0).
(Illustration: a layered feedforward network with an input layer V_0 = {v_{0,1}, v_{0,2}, v_{0,3}, v_{0,4}} receiving x_1, x_2, x_3 and a constant input, a hidden layer V_1 = {v_{1,1}, ..., v_{1,5}}, and an output layer V_2 = {v_{2,1}} producing the network's output.)
Every assignment of weights to the edges, w : E → R, defines a hypothesis, which we denote h_{V,E,σ,w}, and the hypothesis class associated with the architecture (V, E, σ) is

H_{V,E,σ} = { h_{V,E,σ,w} : w is a mapping from E to R }.    (20.1)
That is, the parameters specifying a hypothesis in the hypothesis class are the
weights over the edges of the network.
We can now study the approximation error, estimation error, and optimization error of such hypothesis classes. In Section 20.3 we study the approximation error of H_{V,E,σ} by studying what type of functions hypotheses in H_{V,E,σ} can implement, in terms of the size of the underlying graph. In Section 20.4 we study the estimation error of H_{V,E,σ}, for the case of binary classification (i.e., |V_T| = 1 and σ is the sign function), by analyzing its VC dimension. Finally, in Section 20.5 we show that it is computationally hard to learn the class H_{V,E,σ}, even if the underlying graph is small, and in Section 20.6 we present the most commonly used heuristic for training H_{V,E,σ}.
next section. This implies that |V| = Ω(2^{n/3}), which concludes our proof for the
case of networks with the sign activation function. The proof for the sigmoid case is
analogous.
Remark 20.1. It is possible to derive a similar theorem for H_{V,E,σ} for any σ, as long
as we restrict the weights so that it is possible to express every weight using a number
of bits which is bounded by a universal constant. We can even consider hypothesis
classes where different neurons can employ different activation functions, as long as
the number of allowed activation functions is also finite.
Which functions can we express using a network of polynomial size? The preceding claim tells us that it is impossible to express all Boolean functions using a
network of polynomial size. On the positive side, in the following we show that all
Boolean functions that can be calculated in time O(T (n)) can also be expressed by
a network of size O(T (n)2 ).
Theorem 20.3. Let T : N → N and, for every n, let F_n be the set of functions that can be implemented using a Turing machine using runtime of at most T(n). Then, there exist constants b, c ∈ R_+ such that for every n, there is a graph (V_n, E_n) of size at most c·T(n)² + b such that H_{V_n, E_n, sign} contains F_n.
The proof of this theorem relies on the relation between the time complexity
of programs and their circuit complexity (see, for example, Sipser (2006)). In a
nutshell, a Boolean circuit is a type of network in which the individual neurons
implement conjunctions, disjunctions, and negation of their inputs. Circuit complexity measures the size of Boolean circuits required to calculate functions. The
relation between time complexity and circuit complexity can be seen intuitively as
follows. We can model each step of the execution of a computer program as a simple
operation on its memory state. Therefore, the neurons at each layer of the network
will reflect the memory state of the computer at the corresponding time, and the
translation to the next layer of the network involves a simple calculation that can
be carried out by the network. To relate Boolean circuits to networks with the sign
activation function, we need to show that we can implement the operations of conjunction, disjunction, and negation, using the sign activation function. Clearly, we
can implement the negation operator using the sign activation function. The following lemma shows that the sign activation function can also implement conjunctions
and disjunctions of its inputs.
Lemma 20.4. Suppose that a neuron v, that implements the sign activation function,
has k incoming edges, connecting it to neurons whose outputs are in {±1}. Then, by
adding one more edge, linking a constant neuron to v, and by adjusting the weights
on the edges to v, the output of v can implement the conjunction or the disjunction of
its inputs.
Proof. Simply observe that if f : {±1}^k → {±1} is the conjunction function, f(x) = ∧_i x_i, then it can be written as f(x) = sign( 1 − k + Σ_{i=1}^k x_i ). Similarly, the disjunction function, f(x) = ∨_i x_i, can be written as f(x) = sign( k − 1 + Σ_{i=1}^k x_i ).
So far we have discussed Boolean functions. In Exercise 20.1 we show that neural networks are universal approximators. That is, for every fixed precision parameter, ε > 0, and every Lipschitz function f : [−1, 1]^n → [−1, 1], it is possible to construct a network such that for every input x ∈ [−1, 1]^n, the network outputs a number between f(x) − ε and f(x) + ε. However, as in the case of Boolean functions, the size of the network here again cannot be polynomial in n. This is formalized in the following theorem, whose proof is a direct corollary of Theorem 20.2 and is left as an exercise.

Theorem 20.5. Fix some ε ∈ (0, 1). For every n, let s(n) be the minimal integer such that there exists a graph (V, E) with |V| = s(n) such that the hypothesis class H_{V,E,σ}, with σ being the sigmoid function, can approximate, to within precision of ε, every 1-Lipschitz function f : [−1, 1]^n → [−1, 1]. Then s(n) is exponential in n.
We have shown that a neuron in layer V2 can implement a function that indicates
whether x is in some convex polytope. By adding one more layer, and letting the
neuron in the output layer implement the disjunction of its inputs, we get a network
that computes the union of polytopes. An illustration of such a function is given in
the following.
The growth function of H = H_{V,E,sign} can be bounded by the product of the growth functions of the classes H^{(t)} computed by the individual layers,

τ_H(m) ≤ Π_{t=1}^T τ_{H^{(t)}}(m).

In addition, each H^{(t)} can be written as a product of function classes, H^{(t)} = H^{(t,1)} × ⋯ × H^{(t,|V_t|)}, where each H^{(t,j)} is all functions from layer t − 1 to {±1} that the j-th neuron of layer t can implement. In Exercise 20.3 we bound product classes, and this yields

τ_{H^{(t)}}(m) ≤ Π_{i=1}^{|V_t|} τ_{H^{(t,i)}}(m).

Let d_{t,i} be the number of edges that are headed to the i-th neuron of layer t. Since the neuron is a homogeneous halfspace hypothesis and the VC dimension of homogeneous halfspaces is the dimension of their input, we have by Sauer's lemma that

τ_{H^{(t,i)}}(m) ≤ ( em / d_{t,i} )^{d_{t,i}} ≤ (em)^{d_{t,i}}.

Overall, we obtained that

τ_H(m) ≤ (em)^{Σ_{t,i} d_{t,i}} = (em)^{|E|}.

Now, assume that there are m shattered points. Then, we must have τ_H(m) = 2^m, from which we obtain

2^m ≤ (em)^{|E|}   ⟹   m ≤ |E| log(em) / log(2).

The claim follows by Lemma A.2.
Next, we consider H_{V,E,σ}, where σ is the sigmoid function. Surprisingly, it turns out that the VC dimension of H_{V,E,σ} is lower bounded by Ω(|E|²) (see Exercise 20.5). That is, the VC dimension is the number of tunable parameters squared. It is also possible to upper bound the VC dimension by O(|V|²|E|²), but the proof is beyond the scope of this book. In any case, since in practice we only consider networks in which the weights have a short representation as floating point numbers with O(1) bits, by using the discretization trick we easily obtain that such networks have a VC dimension of O(|E|), even if we use the sigmoid activation function.
Recall the SGD algorithm for minimizing the risk function L D (w). We repeat
the pseudocode from Chapter 14 with a few modifications, which are relevant to the
neural network application because of the nonconvexity of the objective function.
First, while in Chapter 14 we initialized w to be the zero vector, here we initialize w
to be a randomly chosen vector with values close to zero. This is because an initialization with the zero vector will lead all hidden neurons to have the same weights
(if the network is a full layered network). In addition, the hope is that if we repeat
the SGD procedure several times, where each time we initialize the process with
a new random vector, one of the runs will lead to a good local minimum. Second,
while a fixed step size, , is guaranteed to be good enough for convex problems,
here we utilize a variable step size, t , as defined in Section 14.4.2. Because of the
nonconvexity of the loss function, the choice of the sequence t is more significant,
and it is tuned in practice by a trial and error manner. Third, we output the best
performing vector on a validation set. In addition, it is sometimes helpful to add regularization on the weights, with parameter λ. That is, we try to minimize L_D(w) + (λ/2)‖w‖². Finally, the gradient does not have a closed form solution. Instead, it is
implemented using the backpropagation algorithm, which will be described in the
sequel.
a few definitions from vector calculus. Each element of the gradient is the partial
derivative with respect to the variable in w corresponding to one of the edges of the
network. Recall the definition of a partial derivative. Given a function f : Rn R,
the partial derivative with respect to the i th variable at w is obtained by fixing the
values of w_1, ..., w_{i−1}, w_{i+1}, ..., w_n, which yields the scalar function g : R → R defined by g(a) = f((w_1, ..., w_{i−1}, w_i + a, w_{i+1}, ..., w_n)), and then taking the derivative of g
at 0. For a function with multiple outputs, f : Rn Rm , the Jacobian of f at w Rn ,
denoted Jw (f), is the m n matrix whose i , j element is the partial derivative of f i :
Rn R w.r.t. its j th variable at w. Note that if m = 1 then the Jacobian matrix is the
gradient of the function (represented as a row vector). Two examples of Jacobian
calculations, which we will later use, are as follows.
Let f(w) = Aw for A Rm,n . Then Jw (f) = A.
For every n, we use the notation to denote the function from Rn to Rn which
applies the sigmoid function element-wise. That is, = ( ) means that for
1
. It is easy to verify that J ( ) is a diagoevery i we have i = (i ) = 1+exp(
i)
nal matrix whose (i ,i ) entry is (i ), where is the derivative function of the
1
. We also use the
(scalar) sigmoid function, namely, (i ) = (1+exp( ))(1+exp(
i
i ))
notation diag( ( )) to denote this matrix.
The chain rule for taking the derivative of a composition of functions can be
written in terms of the Jacobian as follows. Given two functions f : Rn Rm and
g : Rk Rn , we have that the Jacobian of the composition function, (f g) : Rk Rm ,
at w, is
Jw (f g) = Jg(w) (f)Jw (g).
For example, for g(w) = Aw, where A Rn,k , we have that
Jw ( g) = diag( (Aw)) A.
To describe the backpropagation algorithm, let us first decompose V into the
T V . For every t, let us write V = {v , . . . , v
layers of the graph, V = t=0
t
t
t,kt }, where
t,1
kt = |Vt |. In addition, for every t denote Wt Rkt+1 ,kt a matrix which gives a weight to
every potential edge between Vt and Vt+1 . If the edge exists in E then we set Wt,i, j to
be the weight, according to w, of the edge (vt, j , vt+1,i ). Otherwise, we add a phantom edge and set its weight to be zero, Wt,i, j = 0. Since when calculating the partial
derivative with respect to the weight of some edge we fix all other weights, these
additional phantom edges have no effect on the partial derivative with respect
to existing edges. It follows that we can assume, without loss of generality, that all
edges exist, that is, E = t (Vt Vt+1 ).
Next, we discuss how to calculate the partial derivatives with respect to the edges
from Vt1 to Vt , namely, with respect to the elements in Wt1 . Since we fix all other
weights of the network, it follows that the outputs of all the neurons in Vt1 are fixed
numbers which do not depend on the weights in Wt1 . Denote the corresponding
vector by ot1 . In addition, let us denote by t : Rkt R the loss function of the
subnetwork defined by layers Vt , . . . , VT as a function of the outputs of the neurons
in Vt . The input to the neurons of Vt can be written as at = Wt1 ot1 and the output
of the neurons of Vt is ot = (at ). That is, for every j we have ot, j = (at, j ). We
ot1
0
0
0
o
t1
.
(20.2)
Ot1 = .
..
..
..
..
.
.
.
t1
(20.3)
t1 , . . . , t,kt (at,kt ) ot1 .
It is left to calculate the vector t = Jot (t ) for every t. This is the gradient of t
at ot . We calculate this in a recursive manner. First observe that for the last layer
we have that T (u) = (u, y), where is the loss function. Since we assume that
(u, y) = 12 u y2 we obtain that Ju (T ) = (u y). In particular, T = JoT (T ) =
(oT y). Next, note that
t (u) = t+1 ( (Wt u)).
Therefore, by the chain rule,
Ju (t ) = J (Wt u) (t+1 )diag( (Wt u))Wt .
In particular,
t = Jot (t ) = J (Wt ot ) (t+1 )diag( (Wt ot ))Wt
= Jot+1 (t+1 )diag( (at+1 ))Wt
= t+1 diag( (at+1 ))Wt .
In summary, we can first calculate the vectors {at , ot } from the bottom of the
network to its top. Then, we calculate the vectors { t } from the top of the network
back to its bottom. Once we have all of these vectors, the partial derivatives are
easily obtained using Equation (20.3). We have thus shown that the pseudocode of
backpropagation indeed calculates the gradient.
239
240
Neural Networks
20.7 SUMMARY
Neural networks over graphs of size s(n) can be used to describe
hypothesis classes
of all predictors that can be implemented in runtime of O( s(n)). We have also
shown that their sample complexity depends polynomially on s(n) (specifically,
it depends on the number of edges in the network). Therefore, classes of neural network hypotheses seem to be an excellent choice. Regrettably, the problem
of training the network on the basis of training data is computationally hard. We
have presented the SGD framework as a heuristic approach for training neural networks and described the backpropagation algorithm which efficiently calculates the
gradient of the loss function with respect to the weights over the edges.
20.9 EXERCISES
20.1 Neural Networks are universal approximators: Let f : [ 1, 1]n [ 1, 1] be a
-Lipschitz function. Fix some
> 0. Construct a neural network N : [ 1, 1]n
[ 1, 1], with the sigmoid activation function, such that for every x [ 1, 1]n it
holds that | f (x) N (x)|
.
Hint: Similarly to the proof of Theorem 19.3, partition [ 1, 1]n into small boxes.
Use the Lipschitzness of f to show that it is approximately constant at each box.
20.9 Exercises
20.2
20.3
20.4
20.5
Finally, show that a neural network can first decide which box the input vector
belongs to, and then predict the averaged value of f at that box.
Prove Theorem 20.5.
Hint: For every f : {1, 1}n {1, 1} construct a 1-Lipschitz function g :
[ 1, 1]n [ 1, 1] such that if you can approximate g then you can express f .
Growth function of product: For i = 1, 2, let Fi be a set of functions from X to Yi .
Define H = F1 F2 to be the Cartesian product class. That is, for every f 1 F1
and f2 F2 , there exists h H such that h(x) = ( f 1 (x), f 2 (x)). Prove that H (m)
F1 (m) F2 (m).
Growth function of composition: Let F1 be a set of functions from X to Z and let
F2 be a set of functions from Z to Y . Let H = F2 F1 be the composition class. That
is, for every f 1 F1 and f2 F2 , there exists h H such that h(x) = f 2 ( f 1 (x)). Prove
that H (m) F2 (m)F1 (m).
VC of sigmoidal networks: In this exercise we show that there is a graph (V , E)
such that the VC dimension of the class of neural networks over these graphs with
the sigmoid activation function is (|E|2 ). Note that for every
> 0, the sigmoid
activation function can approximate the threshold activation function, 1[i xi ] , up
to accuracy
. To simplify the presentation, throughout the exercise we assume
that we can exactly implement the activation function 1[i xi >0] using a sigmoid
activation function.
Fix some n.
1. Construct a network, N1 , with O(n) weights, which implements a function from
R to {0, 1}n and satisfies the following property. For every x {0, 1}n , if we feed
the network with the real number 0. x1 x2 . . . xn , then the output of the network
will be x.
Hint: Denote = 0. x1 x2 . . . xn and observe that 10k 0. 5 is at least 0. 5 if xk = 1
and is at most 0. 3 if xk = 1.
2. Construct a network, N2 , with O(n) weights, which implements a function from
[n] to {0, 1}n such that N2 (i ) = ei for all i . That is, upon receiving the input i , the
network outputs the vector of all zeros except 1 at the i th neuron.
(i) (i)
(i)
3. Let 1 , . . ., n be n real numbers such that every i is of the form 0. a1 a2 . . .an ,
(i)
with a j {0, 1}. Construct a network, N3 , with O(n) weights, which implements
a function from [n] to R, and satisfies N2 (i ) = i for every i [n].
4. Combine N1 , N3 to obtain a network that receives i [n] and output a(i) .
(i)
5. Construct a network N4 that receives (i , j ) [n] [n] and outputs a j .
Hint: Observe that the AND function over {0, 1}2 can be calculated using O(1)
weights.
6. Conclude that there is a graph with O(n) weights such that the VC dimension
of the resulting hypothesis class is n 2 .
20.6 Prove Theorem 20.7.
Hint: The proof is similar to the hardness of learning intersections of halfspaces
see Exercise 32 in Chapter 8.
241
PART 3
21
Online Learning
245
246
Online Learning
Theorem 21.3. Let H be a finite hypothesis class. The Halving algorithm enjoys the
mistake bound MHalving (H) log2 (|H|).
247
248
Online Learning
Proof. We simply note that whenever the algorithm errs we have |Vt+1 | |Vt |/2,
(hence the name Halving). Therefore, if M is the total number of mistakes, we have
1 |VT +1 | |H| 2M .
Rearranging this inequality we conclude our proof.
Of course, Halvings mistake bound is much better than Consistents mistake
bound. We already see that online learning is different from PAC learningwhile
in PAC, any ERM hypothesis is good, in online learning choosing an arbitrary ERM
hypothesis is far from being optimal.
v1
v2
v3
v3
h1
h2
h3
h4
0
0
0
1
Figure 21.1. An illustration of a shattered tree of depth 2. The dashed path corresponds
to the sequence of examples ((v1 , 1), (v3 , 0)). The tree is shattered by H = {h 1 , h 2 , h 3 , h 4 },
where the predictions of each hypothesis in H on the instances v1 , v2 , v3 is given in the
table (the * mark means that h j (vi ) can be either 1 or 0).
there exists h H such that for all t [d] we have h(vit ) = yt where i t = 2t1 +
t1
t1 j .
j =1 y j 2
An illustration of a shattered tree of depth 2 is given in Figure 21.1.
Definition 21.5 (Littlestones Dimension (Ldim)). Ldim(H) is the maximal integer
T such that there exists a shattered tree of depth T , which is shattered by H.
The definition of Ldim and the previous discussion immediately imply the
following:
Lemma 21.6. No algorithm can have a mistake bound strictly smaller than
Ldim(H); namely, for every algorithm, A, we have M A (H) Ldim(H).
Proof. Let T = Ldim(H) and let v1 , . . . , v2T 1 be a sequence that satisfies the
requirements in the definition of Ldim. If the environment sets xt = vit and yt =
1 pt for all t [T ], then the learner makes T mistakes while the definition of Ldim
implies that there exists a hypothesis h H such that yt = h(xt ) for all t.
Let us now give several examples.
Example 21.2. Let H be a finite hypothesis class. Clearly, any tree that is shattered
by H has depth of at most log2 (|H|). Therefore, Ldim(H) log2 (|H|). Another way
to conclude this inequality is by combining Lemma 21.6 with Theorem 21.3.
Example 21.3. Let X = {1, . . . , d} and H = {h 1 , . . . , h d } where h j (x) = 1 iff x = j .
Then, it is easy to show that Ldim(H) = 1 while |H| = d can be arbitrarily large.
Therefore, this example shows that Ldim(H) can be significantly smaller than
log2 (|H|).
Example 21.4. Let X = [0, 1] and H = {x 1[x<a] : a [0, 1]}; namely, H is the
class of thresholds on the interval [0, 1]. Then, Ldim(H) = . To see this, consider
the tree
1/2
1/4
1/8
3/4
3/8
5/8
7/8
249
250
Online Learning
This tree is shattered by H. And, because of the density of the reals, this tree can be
made arbitrarily deep.
Lemma 21.6 states that Ldim(H) lower bounds the mistake bound of any algorithm. Interestingly, there is a standard algorithm whose mistake bound matches this
lower bound. The algorithm is similar to the Halving algorithm. Recall that the prediction of Halving is made according to a majority vote of the hypotheses which are
consistent with previous examples. We denoted this set by Vt . Put another way, Halving partitions Vt into two sets: Vt+ = {h Vt : h(xt ) = 1} and Vt = {h Vt : h(xt ) = 0}.
It then predicts according to the larger of the two groups. The rationale behind this
prediction is that whenever Halving makes a mistake it ends up with |Vt+1 | 0. 5 |Vt |.
The optimal algorithm we present in the following uses the same idea, but
instead of predicting according to the larger class, it predicts according to the class
with larger Ldim.
Standard Optimal Algorithm (SOA)
input: A hypothesis class H
initialize: V1 = H
for t = 1, 2, . . .
receive xt
(r)
for r {0, 1} let Vt = {h Vt : h(xt ) = r }
(r)
predict pt = argmaxr{0,1} Ldim(Vt )
(in case of a tie predict pt = 1)
receive true label yt
update Vt+1 = {h Vt : h(xt ) = yt }
The following lemma formally establishes the optimality of the preceding
algorithm.
Lemma 21.7. SOA enjoys the mistake bound MSOA (H) Ldim(H).
Proof. It suffices to prove that whenever the algorithm makes a prediction mistake
we have Ldim(Vt+1 ) Ldim(Vt ) 1. We prove this claim by assuming the contrary,
that is, Ldim(Vt+1 ) = Ldim(Vt ). If this holds true, then the definition of pt implies
(r)
that Ldim(Vt ) = Ldim(Vt ) for both r = 1 and r = 0. But, then we can construct
a shaterred tree of depth Ldim(Vt ) + 1 for the class Vt , which leads to the desired
contradiction.
Combining Lemma 21.7 and Lemma 21.6 we obtain:
Corollary 21.8. Let H be any hypothesis class. Then, the standard optimal algorithm enjoys the mistake bound MSOA (H) = Ldim(H) and no other algorithm can
have M A (H) < Ldim(H).
Comparison to VC Dimension
In the PAC learning model, learnability is characterized by the VC dimension of
the class H. Recall that the VC dimension of a class H is the maximal number d
such that there are instances x1 , . . . , xd that are shattered by H. That is, for any
sequence of labels (y1 , . . . , yd ) {0, 1}d there exists a hypothesis h H that gives
exactly this sequence of labels. The following theorem relates the VC dimension to
the Littlestone dimension.
Theorem 21.9. For any class H, VCdim(H) Ldim(H), and there are classes for
which strict inequality holds. Furthermore, the gap can be arbitrarily larger.
Proof. We first prove that VCdim(H) Ldim(H). Suppose VCdim(H) = d and
let x1 , . . . , xd be a shattered set. We now construct a complete binary tree of
instances v1 , . . . , v2d 1 , where all nodes at depth i are set to be xi see the following
illustration:
x1
x2
x3
x2
x3
x3
x3
Now, the definition of a shattered set clearly implies that we got a valid shattered
tree of depth d, and we conclude that VCdim(H) Ldim(H). To show that the gap
can be arbitrarily large simply note that the class given in Example 21.4 has VC
dimension of 1 whereas its Littlestone dimension is infinite.
t=1
t=1
(21.2)
We restate the learners goal as having the lowest possible regret relative to H. An
interesting question is whether we can derive an algorithm with low regret, meaning
that Regret A (H, T ) grows sublinearly with the number of rounds, T , which implies
that the difference between the error rate of the learner and the best hypothesis in
H tends to zero as T goes to infinity.
We first show that this is an impossible missionno algorithm can obtain a
sublinear regret bound even if |H| = 2. Indeed, consider H = {h 0 , h 1 }, where h 0
is the function that always returns 0 and h 1 is the function that always returns 1. An
251
252
Online Learning
adversary can make the number of mistakes of any online algorithm be equal to T ,
by simply waiting for the learners prediction and then providing the opposite label
as the true label. In contrast, for any sequence of true labels, y1 , . . . , yT , let b be
the majority of labels in y1 , . . . , yT , then the number of mistakes of h b is at most T /2.
Therefore, the regret of any online algorithm might be at least T T /2 = T /2, which
is not sublinear in T . This impossibility result is attributed to Cover (Cover 1965).
To sidestep Covers impossibility result, we must further restrict the power of the
adversarial environment. We do so by allowing the learner to randomize his predictions. Of course, this by itself does not circumvent Covers impossibility result, since
in deriving this result we assumed nothing about the learners strategy. To make the
randomization meaningful, we force the adversarial environment to decide on yt
without knowing the random coins flipped by the learner on round t. The adversary
can still know the learners forecasting strategy and even the random coin flips of
previous rounds, but it does not know the actual value of the random coin flips used
by the learner on round t. With this (mild) change of game, we analyze the expected
number of mistakes of the algorithm, where the expectation is with respect to the
learners own randomization. That is, if the learner outputs yt where P [ yt = 1] = pt ,
then the expected loss he pays on round t is
P [ yt = yt ] = | pt yt |.
Put another way, instead of having the predictions of the learner being in {0, 1} we
allow them to be in [0, 1], and interpret pt [0, 1] as the probability to predict the
label 1 on round t.
With this assumption it is possible to derive a low regret algorithm. In particular,
we will prove the following theorem.
Theorem 21.10. For every hypothesis class H, there exists an algorithm for online
classification, whose predictions come from [0, 1], that enjoys the regret bound
h H,
T
t=1
| p t yt |
T
|h(xt ) yt |
2 min{log (|H|) , Ldim(H) log (eT )} T .
t=1
Furthermore,
no algorithm can achieve an expected regret bound smaller than
Ldim(H) T .
We will provide a constructive proof of the upper bound part of the preceding
theorem. The proof of the lower bound part can be found in (Ben-David, Pal, &
Shalev-Shwartz 2009).
The proof of Theorem 21.10 relies on the Weighted-Majority algorithm for learning with expert advice. This algorithm is important by itself and we dedicate the next
subsection to it.
21.2.1 Weighted-Majority
Weighted-majority is an algorithm for the problem of prediction with expert advice.
In this online learning problem, on round t the learner has to choose the advice
of d given experts. We also allow the learner to randomize his choice by defining a distribution over the d experts, that is, picking a vector w(t) [0, 1]d , with
(t)
(t)
Weighted-Majority
input: number of
experts, d ; number of rounds, T
parameter: = 2 log (d)/T
(1) = (1, . . . , 1)
initialize: w
for t = 1, 2, . . .
(t)
(t) /Z t where Z t = i w i
set w(t) = w
(t)
choose expert i at random according to P [i ] = wi
d
receive costs of all experts vt [0, 1]
pay cost w(t) , vt
(t+1)
(t)
= w i evt,i
update rule i , w i
The following theorem is key for analyzing the regret bound of WeightedMajority.
Theorem 21.11. Assuming that T > 2 log (d), the Weighted-Majority algorithm enjoys
the bound
T
T
w(t) , vt min
vt,i 2 log (d) T .
i[d]
t=1
t=1
Proof. We have:
w
(t)
Z t+1
i
= log
evt,i = log
wi evt,i .
Zt
Zt
(t)
log
Using the inequality ea 1 a + a 2 /2, which holds for all a (0, 1), and the fact
(t)
that i wi = 1, we obtain
log
(t)
Z t+1
2
log
wi 1 vt,i + 2 vt,i
/2
Zt
i
(t)
2
wi vt,i 2 vt,i
/2 ).
= log(1
9
:;
<
def
=b
Next, note that b (0, 1). Therefore, taking log of the two sides of the inequality
1 b eb we obtain the inequality log (1 b) b, which holds for all b 1,
253
254
Online Learning
and obtain
log
(t)
Z t+1
2
wi vt,i 2 vt,i
/2
Zt
i
(t)
2
= w(t) , vt + 2
wi vt,i
/2
i
w , vt + /2.
(t)
T
Z t+1
T 2
.
w(t) , vt +
Zt
2
T
log
t=1
(21.3)
t=1
(T +1)
Combining the preceding with Equation (21.3) and using the fact that log (Z 1 ) =
log (d) we get that
min
i
T
w(t) , vt +
t=1
T 2
,
2
w(t) , vt min
i
t=1
t
vt,i
log (d) T
+
.
i=1
(t)
d
(t)
wi |h i (xt ) yt | = w(t) , vt .
i=1
Furthermore, for each i , t vt,i is exactly the number of mistakes hypothesis h i
makes. Applying Theorem 21.11 we obtain
Corollary 21.12. Let H be a finite hypothesis class. There exists an algorithm for
online classification, whose predictions come from [0, 1], that enjoys the regret bound
T
t=1
| pt yt | min
hH
T
|h(xt ) yt |
2 log (|H|) T .
t=1
Next, we consider the case of a general hypothesis class. Previously, we constructed an expert for each individual hypothesis. However, if H is infinite this leads
to a vacuous bound. The main idea is to construct a set of experts in a more sophisticated way. The challenge is how to define a set of experts that, on one hand, is
not excessively large and, on the other hand, contains experts that give accurate
predictions.
We construct the set of experts so that for each hypothesis h H and every
sequence of instances, x1 , x2 , . . . , xT , there exists at least one expert in the set which
behaves exactly as h on these instances. For each L Ldim(H) and each sequence
1 i 1 < i 2 < < i L T we define an expert. The expert simulates the game between
SOA (presented in the previous section) and the environment on the sequence
of instances x1 , x2 , . . . , xT assuming that SOA makes a mistake precisely in rounds
i 1 ,i 2 , . . . ,i L . The expert is defined by the following algorithm.
Expert (i 1 ,i 2 , . . . ,i L )
input A hypothesis class H ; Indices i 1 < i 2 < < i L
initialize: V1 = H
for t = 1, 2, . . . , T
receive xt
(r)
for r {0, 1} let Vt = {h Vt : h(x
t) = r}
(r)
255
256
Online Learning
d=
L=0
T
.
L
(21.4)
It can be shown that when T Ldim(H) + 2, the right-hand side of the equation is
Ldim(H)
(the proof can be found in Lemma A.5).
bounded by eT /Ldim(H)
Theorem 21.11 tells us that the expected number of mistakes
of WeightedMajority is at most the number of mistakes of the best expert plus 2 log (d) T . We
will next show that the number of mistakes of the best expert is at most the number
of mistakes of the best hypothesis in H. The following key lemma shows that, on
any sequence of instances, for each hypothesis h H there exists an expert with the
same behavior.
Lemma 21.13. Let H be any hypothesis class with Ldim(H) < . Let x1 , x2 , . . . , xT
be any sequence of instances. For any h H, there exists L Ldim(H) and indices
1 i 1 < i 2 < < i L T such that when running Expert (i 1 ,i 2 , . . . ,i L ) on the sequence
x1 , x2 , . . . , xT , the expert predicts h(xt ) on each online round t = 1, 2, . . . , T .
Proof. Fix h H and the sequence x1 , x2 , . . . , xT . We must construct L and the
indices i 1 ,i 2 , . . . ,i L . Consider running SOA on the input (x1 , h(x1 )), (x2 , h(x2 )), . . .,
(xT , h(xT )). SOA makes at most Ldim(H) mistakes on such input. We define L to
be the number of mistakes made by SOA and we define {i 1 ,i 2 , . . . ,i L } to be the set
of rounds in which SOA made the mistakes.
Now, consider the Expert (i 1 ,i 2 , . . . ,i L ) running on the sequence x1 , x2 , . . . , xT .
By construction, the set Vt maintained by Expert (i 1 ,i 2 , . . . ,i L ) equals the set Vt
maintained by SOA when running on the sequence (x1 , h(x1 )), . . . , (xT , h(xT )). The
predictions of SOA differ from the predictions of h if and only if the round is
in {i 1 ,i 2 , . . . ,i L }. Since Expert (i 1 ,i 2 , . . . ,i L ) predicts exactly like SOA if t is not
in {i 1 ,i 2 , . . . ,i L } and the opposite of SOAs predictions if t is in {i 1 ,i 2 , . . . ,i L }, we
conclude that the predictions of the expert are always the same as the predictions of h.
The previous lemma holds in particular for the hypothesis in H that makes the
least number of mistakes on the sequence of examples, and we therefore obtain the
following:
Corollary 21.14. Let (x1 , y1 ), (x2 , y2 ), . . . , (xT , yT ) be a sequence of examples and let
H be a hypothesis class with Ldim(H) < . There exists L Ldim(H) and indices
1 i 1 < i 2 < < i L T , such that Expert (i 1 ,i 2 , . . . ,i L ) makes at most as many
mistakes as the best h H does, namely,
min
hH
T
|h(xt ) yt |
t=1
Regret A (w , T ) =
T
(w , z t )
(t)
t=1
T
(w , z t ).
(21.5)
t=1
1. w(t+ 2 ) = w(t) vt
1
257
258
Online Learning
Theorem 21.15. The Online Gradient Descent algorithm enjoys the following regret
bound for every w H,
Regret A (w , T )
w 2
+
vt 2 .
2
2
T
t=1
1
Regret A (w , T ) (w 2 + 2 ) T .
2
If we further assume that H is B-bounded and we set =
Regret A (H, T ) B
then
T.
Proof. The analysis is similar to the analysis of Stochastic Gradient Descent with
1
projections. Using the projection lemma, the definition of w(t+ 2 ) , and the definition
of subgradients, we have that for every t,
w(t+1) w 2 w(t) w 2
1
w(t+ 2 ) w 2 w(t) w 2
= w(t) vt w 2 w(t) w 2
= 2w(t) w , vt + 2 vt 2
2( f t (w(t) ) f t (w )) + 2 vt 2 .
Summing over t and observing that the left-hand side is a telescopic sum we
obtain that
w(T +1) w 2 w(1) w 2 2
T
( ft (w(t) ) f t (w )) + 2
T
t=1
vt 2 .
t=1
Rearranging the inequality and using the fact that w(1) = 0, we get that
T
( ft (w(t) ) f t (w ))
t=1
t=1
w 2
+
2
2
T
vt 2 .
t=1
This proves the first bound in the theorem. The second bound follows from the
assumption that f t is -Lipschitz, which implies that vt .
259
260
Online Learning
if yt w(t) , xt > 0
otherwise
This form implies that the predictions of the Perceptron algorithm and the set M
do not depend on the actual value of as long as > 0. We have therefore obtained
the Perceptron algorithm:
Perceptron
initialize: w1 = 0
for t = 1, 2, . . . , T
receive xt
predict pt = sign(w(t) , xt )
if yt w(t) , xt 0
w(t+1) = w(t) + yt xt
else
w(t+1) = w(t)
To analyze the Perceptron, we rely on the analysis of Online Gradient Descent
given in the previous section. In our case, the subgradient of f t we use in the
Perceptron is vt = 1[yt w(t) ,xt 0] yt xt . Indeed, the Perceptrons update is w(t+1) =
w(t) vt , and as discussed before this is equivalent to w(t+1) = w(t) vt for every
> 0. Therefore, Theorem 21.15 tells us that
T
f t (w(t) )
t=1
T
f t (w )
t=1
1
w 22 +
vt 22 .
2
2
T
t=1
T
t=1
Setting =
w
R | M|
f t (w )
T
t=1 f t (w
(t) )
|M|.
w 22 + |M| R 2
2
2
(21.6)
21.5 SUMMARY
In this chapter we have studied the online learning model. Many of the results we
derived for the PAC learning model have an analog in the online model. First, we
have shown that a combinatorial dimension, the Littlestone dimension, characterizes online learnability. To show this, we introduced the SOA algorithm (for the
realizable case) and the Weighted-Majority algorithm (for the unrealizable case).
We have also studied online convex optimization and have shown that online gradient descent is a successful online learner whenever the loss function is convex and
Lipschitz. Finally, we presented the online Perceptron algorithm as a combination
of online gradient descent and the concept of surrogate convex loss functions.
261
262
Online Learning
in (Abernethy, Bartlett, Rakhlin & Tewari 2008, Rakhlin, Sridharan & Tewari
2010, Daniely et al. 2011). The Weighted-Majority algorithm is due to (Littlestone
& Warmuth 1994) and (Vovk 1990).
The term online convex programming was introduced by Zinkevich (2003)
but this setting was introduced some years earlier by Gordon (1999). The Perceptron dates back to Rosenblatt (Rosenblatt 1958). An analysis for the realizable
case (with margin assumptions) appears in (Agmon 1954, Minsky & Papert 1969).
Freund and Schapire (Freund & Schapire 1999) presented an analysis for the
unrealizable case with a squared-hinge-loss based on a reduction to the realizable
case. A direct analysis for the unrealizable case with the hinge-loss was given by
Gentile (Gentile 2003).
For additional information we refer the reader to Cesa-Bianchi and Lugosi
(2006) and Shalev-Shwartz (2011).
21.7 EXERCISES
21.1 Find a hypothesis class H and a sequence of examples on which Consistent makes
|H| 1 mistakes.
21.2 Find a hypothesis class H and a sequence of examples on which the mistake bound
of the Halving algorithm is tight.
21.3 Let d 2, X = {1, . . . , d} and let H = {h j : j [d]}, where h j (x) = 1[x= j] . Calculate
MHalving (H) (i.e., derive lower and upper bounds on MHalving (H), and prove that
they are equal).
21.4 The Doubling Trick:
In Theorem 21.15, the parameter depends on the time horizon T . In this exercise
we show how to get rid of this dependence by a simple trick.
Consider an algorithm that enjoys a regret bound of the form T , but its
parameters require the knowledge of T . The doubling trick, described in the following, enables us to convert such an algorithm into an algorithm that does not need
to know the time horizon. The idea is to divide the time into periods of increasing
size and run the original algorithm on each period.
Show that if the regret of A on each period of 2m rounds is at most 2m , then the
total regret is at most
T.
21
21.5 Online-to-batch Conversions: In this exercise we demonstrate how a successful
online learning algorithm can be used to derive a successful PAC learner as well.
Consider a PAC learning problem for binary classification parameterized by an
instance domain, X , and a hypothesis class, H. Suppose that there exists an online
learning algorithm, A, which enjoys a mistake bound M A (H) < . Consider running this algorithm on a sequence of T examples which are sampled i.i.d. from a
distribution D over the instance space X , and are labeled by some h H. Suppose
21.7 Exercises
that for every round t, the prediction of the algorithm is based on a hypothesis
h t : X {0, 1}. Show that
M A (H)
,
E [L D (h r )]
T
where the expectation is over the random choice of the instances as well as a random choice of r according to the uniform distribution over [T ].
Hint: Use similar arguments to the ones appearing in the proof of Theorem 14.8.
263
22
Clustering
Clustering is one of the most widely used techniques for exploratory data analysis.
Across all disciplines, from social sciences to biology to computer science, people
try to get a first intuition about their data by identifying meaningful groups among
the data points. For example, computational biologists cluster genes on the basis of
similarities in their expression in different experiments; retailers cluster customers,
on the basis of their customer profiles, for the purpose of targeted marketing; and
astronomers cluster stars on the basis of their spacial proximity.
The first point that one should clarify is, naturally, what is clustering? Intuitively,
clustering is the task of grouping a set of objects such that similar objects end up in
the same group and dissimilar objects are separated into different groups. Clearly,
this description is quite imprecise and possibly ambiguous. Quite surprisingly, it is
not at all clear how to come up with a more rigorous definition.
There are several sources for this difficulty. One basic problem is that the two
objectives mentioned in the earlier statement may in many cases contradict each
other. Mathematically speaking, similarity (or proximity) is not a transitive relation,
while cluster sharing is an equivalence relation and, in particular, it is a transitive
relation. More concretely, it may be the case that there is a long sequence of objects,
x 1 , . . . , x m such that each x i is very similar to its two neighbors, x i1 and x i+1 , but x 1
and x m are very dissimilar. If we wish to make sure that whenever two elements
are similar they share the same cluster, then we must put all of the elements of
the sequence in the same cluster. However, in that case, we end up with dissimilar
elements (x 1 and x m ) sharing a cluster, thus violating the second requirement.
To illustrate this point further, suppose that we would like to cluster the points
in the following picture into two clusters.
A clustering algorithm that emphasizes not separating close-by points (e.g., the
Single Linkage algorithm that will be described in Section 22.1) will cluster this input
264
22.0 Clustering
In contrast, a clustering method that emphasizes not having far-away points share
the same cluster (e.g., the 2-means algorithm that will be described in Section 22.1)
will cluster the same input by dividing it vertically into the right-hand half and the
left-hand half:
Another basic problem is the lack of ground truth for clustering, which is a
common problem in unsupervised learning. So far in the book, we have mainly dealt
with supervised learning (e.g., the problem of learning a classifier from labeled training data). The goal of supervised learning is clear we wish to learn a classifier
which will predict the labels of future examples as accurately as possible. Furthermore, a supervised learner can estimate the success, or the risk, of its hypotheses
using the labeled training data by computing the empirical loss. In contrast, clustering is an unsupervised learning problem; namely, there are no labels that we
try to predict. Instead, we wish to organize the data in some meaningful way.
As a result, there is no clear success evaluation procedure for clustering. In fact,
even on the basis of full knowledge of the underlying data distribution, it is not
clear what is the correct clustering for that data or how to evaluate a proposed
clustering.
Consider, for example, the following set of points in R2 :
265
266
Clustering
and suppose we are required to cluster them into two clusters. We have two highly
justifiable solutions:
This phenomenon is not just artificial but occurs in real applications. A given set
of objects can be clustered in various different meaningful ways. This may be due
to having different implicit notions of distance (or similarity) between objects, for
example, clustering recordings of speech by the accent of the speaker versus clustering them by content, clustering movie reviews by movie topic versus clustering
them by the review sentiment, clustering paintings by topic versus clustering them
by style, and so on.
To summarize, there may be several very different conceivable clustering solutions for a given data set. As a result, there is a wide variety of clustering algorithms
that, on some input data, will output very different clusterings.
A Clustering Model:
Clustering tasks can vary in terms of both the type of input they have and the type
of outcome they are expected to compute. For concreteness, we shall focus on the
following common setup:
Input a set of elements, X , and a distance function over it. That is, a function
d : X X R+ that is symmetric, satisfies d(x, x) = 0 for all x X , and often
also satisfies the triangle inequality. Alternatively, the function could be a similarity function s : X X [0, 1] that is symmetric and satisfies s(x, x) = 1
for all x X . Additionally, some clustering algorithms also require an input
parameter k (determining the number of required clusters).
Output a partition of the domain set X into subsets. That is, C = (C1 , . . . Ck )
where ki=1 Ci = X and for all i = j , Ci C j = . In some situations the
clustering is soft, namely, the partition of X into the different clusters is
probabilistic where the output is a function assigning to each domain point,
x X , a vector ( p1 (x), . . . , pk (x)), where pi (x) = P [x Ci ] is the probability
that x belongs to cluster Ci . Another possible output is a clustering dendrogram (from Greek dendron = tree, gramma = drawing), which is a hierarchical
tree of domain subsets, having the singleton sets in its leaves, and the full
domain as its root. We shall discuss this formulation in more detail in the
following.
In the following we survey some of the most popular clustering methods. In the
last section of this chapter we return to the high level discussion of what is clustering.
start from the trivial clustering that has each data point as a single-point cluster.
Then, repeatedly, these algorithms merge the closest clusters of the previous clustering. Consequently, the number of clusters decreases with each such round. If kept
going, such algorithms would eventually result in the trivial clustering in which all of
the domain points share one large cluster. Two parameters, then, need to be determined to define such an algorithm clearly. First, we have to decide how to measure
(or define) the distance between clusters, and, second, we have to determine when
to stop merging. Recall that the input to a clustering algorithm is a between-points
distance function, d. There are many ways of extending d to a measure of distance
between domain subsets (or clusters). The most common ways are
1. Single Linkage clustering, in which the between-clusters distance is defined
by the minimum distance between members of the two clusters, namely,
def
D(A, B) = min{d(x, y) : x A, y B}
2. Average Linkage clustering, in which the distance between two clusters is
defined to be the average distance between a point in one of the clusters and
a point in the other, namely,
def
D(A, B) =
1
|A||B|
d(x, y)
x A, yB
3. Max Linkage clustering, in which the distance between two clusters is defined
as the maximum distance between their elements, namely,
def
a
e
{b, c}
d
c
b
{a}
{b}
{c}
{d, e}
{d}
{e}
The single linkage algorithm is closely related to Kruskals algorithm for finding
a minimal spanning tree on a weighted graph. Indeed, consider the full graph whose
vertices are elements of X and the weight of an edge (x, y) is the distance d(x, y).
Each merge of two clusters performed by the single linkage algorithm corresponds
to a choice of an edge in the aforementioned graph. It is also possible to show that
267
268
Clustering
the set of edges the single linkage algorithm chooses along its run forms a minimal
spanning tree.
If one wishes to turn a dendrogram into a partition of the space (a clustering),
one needs to employ a stopping criterion. Common stopping criteria include
Fixed number of clusters fix some parameter, k, and stop merging clusters as
soon as the number of clusters is k.
Distance upper bound fix some r R+ . Stop merging as soon as all
the between-clusters distances are larger than r . We can also set r to be
max{d(x, y) : x, y X } for some < 1. In that case the stopping criterion is
called scaled distance upper bound.
xCi
k
i=1 xCi
min
k
1 ,...k X
d(x, i )2 .
(22.1)
i=1 xCi
The k-means objective function is relevant, for example, in digital communication tasks, where the members of X may be viewed as a collection of signals
that have to be transmitted. While X may be a very large set of real valued vectors, digital transmission allows transmitting of only a finite number of bits for
each signal. One way to achieve good transmission under such constraints is to
represent each member of X by a close member of some finite set 1 , . . . k ,
and replace the transmission of any x X by transmitting the index of the
closest i . The k-means objective can be viewed as a measure of the distortion
created by such a transmission representation scheme.
The k-medoids objective function is similar to the k-means objective, except that
it requires the cluster centroids to be members of the input set. The objective
function is defined by
G Kmedoid ((X , d), (C1 , . . . , Ck )) =
k
min
1 ,...k X
d(x, i )2 .
i=1 xCi
min
k
1 ,...k X
d(x, i ).
i=1 xCi
An example where such an objective makes sense is the facility location problem. Consider the task of locating k fire stations in a city. One can model
houses as data points and aim to place the stations so as to minimize the
average distance between a house and its closest fire station.
The previous examples can all be viewed as center-based objectives. The solution to such a clustering problem is determined by a set of cluster centers, and the
clustering assigns each instance to the center closest to it. More generally, centerbased objective is determined by choosing some monotonic function f : R+ R+
and then defining
G f ((X , d), (C1 , . . . Ck )) =
min
k
1 ,...k X
f (d(x, i )),
i=1 xCi
k
i=1 x,yCi
d(x, y)
269
270
Clustering
and the MinCut objective that we shall discuss in Section 22.3 are not center-based
objectives.
min
k
1 ,...,k Rn
x i 2 .
(22.2)
i=1 xCi
It is convenient to define (Ci ) = |C1i | xCi x and note that (Ci ) =
argminRn xCi x 2 . Therefore, we can rewrite the k-means objective as
G(C1 , . . . , Ck ) =
k
x (Ci )2 .
(22.3)
i=1 xCi
(t1)
(t1)
(t)
k
(t)
(t1)
, . . . , Ck
(t)
G(C1 , . . . , Ck )
(t1) 2
x i
.
(22.4)
i=1 xC (t)
i
(t)
(t)
(C1 , . . . , Ck ). Hence,
k
(t1) 2
x i
i=1 xC (t)
k
(t1) 2
x i
.
(22.5)
i=1 xC (t1)
Using Equation (22.3) we have that the right-hand side of Equation (22.5) equals
(t1)
(t1)
, . . . , Ck
). Combining this with Equation (22.4) and Equation (22.5), we
G(C1
(t)
(t)
(t1)
(t1)
, . . . , Ck
While the preceding lemma tells us that the k-means objective is monotonically
nonincreasing, there is no guarantee on the number of iterations the k-means algorithm needs in order to reach convergence. Furthermore, there is no nontrivial lower
bound on the gap between the value of the k-means objective of the algorithms
output and the minimum possible value of that objective function. In fact, k-means
might converge to a point which is not even a local minimum (see Exercise 22.2).
To improve the results of k-means it is often recommended to repeat the procedure
several times with different randomly chosen initial centroids (e.g., we can choose
the initial centroids to be random points from the data).
k
Wr,s .
/ i
i=1 rCi ,s C
271
272
Clustering
k
1
|Ci |
i=1
Wr,s .
rCi ,s
/ Ci
The preceding objective assumes smaller values if the clusters are not too small.
Unfortunately, introducing this balancing makes the problem computationally hard
to solve. Spectral clustering is a way to relax the problem of minimizing RatioCut.
1
.
|C j | [iC j ]
2
2
v Lv =
Dr,r vr 2
vr vs Wr,s +
Ds,s vs =
Wr,s (vr vs )2 .
2
2
r
r,s
s
r,s
Applying this with v = hi and noting that (h i,r h i,s )2 is nonzero only if r Ci , s
/ Ci
or the other way around, we obtain that
h
i Lhi =
1
|Ci |
Wr,s .
rCi ,s C
/ i
relax the latter requirement and simply search an orthonormal matrix H Rm,k
that minimizes trace(H
L H ). As we will see in the next chapter about PCA (particularly, the proof of Theorem 23.2), the solution to this problem is to set U to
be the matrix whose columns are the eigenvectors corresponding to the k minimal eigenvalues of L. The resulting algorithm is called Unnormalized Spectral
Clustering.
p(C|x)
273
274
Clustering
where I (; ) is the mutual information between two random variables,1 is a parameter, and the minimization is over all possible probabilistic assignments of points to
clusters. Intuitively, we would like to achieve two contradictory goals. On one hand,
we would like the mutual information between the identity of the document and
the identity of the cluster to be as small as possible. This reflects the fact that we
would like a strong compression of the original data. On the other hand, we would
like high mutual information between the clustering variable and the identity of the
words, which reflects the goal that the relevant information about the document
(as reflected by the words that appear in the document) is retained. This generalizes
the classical notion of minimal sufficient statistics2 used in parametric statistics to
arbitrary distributions.
Solving the optimization problem associated with the information bottleneck
principle is hard in the general case. Some of the proposed methods are similar
to the EM principle, which we will discuss in Chapter 24.
p(a,b)
,
That is, given a probability function, p over the pairs (x, C), I (x; C) = a b p(a, b) log p(a)
p(b)
where the sum is over all values x can take and all values C can take.
A sufficient statistic is a function of the data which has the property of sufficiency with respect to a
statistical model and its associated unknown parameter, meaning that no other statistic which can be
calculated from the same sample provides any additional information as to the value of the parameter.
For example, if we assume that a variable is distributed normally with a unit variance and an unknown
expectation, then the average function is a sufficient statistic.
275
276
Clustering
requirement as well as Scale Invariance and Richness. Furthermore, one can come
up with many other, different, properties of clustering functions that sound intuitive
and desirable and are satisfied by some common clustering functions.
Furthermore, one can come up with many other, different, properties of clustering functions that sound intuitive and desirable and are satisfied by some common
clustering functions.
There are many ways to interpret these results. We suggest to view it as indicating that there is no ideal clustering function. Every clustering function will
inevitably have some undesirable properties. The choice of a clustering function
for any given task must therefore take into account the specific properties of that
task. There is no generic clustering solution, just as there is no classification algorithm that will learn every learnable task (as the No-Free-Lunch theorem shows).
Clustering, just like classification prediction, must take into account some prior
knowledge about the specific task at hand.
22.6 SUMMARY
Clustering is an unsupervised learning problem, in which we wish to partition a set
of points into meaningful subsets. We presented several clustering approaches
including linkage-based algorithms, the k-means family, spectral clustering, and
the information bottleneck. We discussed the difficulty of formalizing the intuitive
meaning of clustering.
22.8 EXERCISES
22.1 Suboptimality of k-Means: For every parameter t > 1, show that there exists an
instance of the k-means problem for which the k-means algorithm (might) find a
solution whose k-means objective is at least t OPT, where OPT is the minimum
k-means objective.
22.2 k-Means Might Not Necessarily Converge to a Local Minimum: Show that the kmeans algorithm might converge to a point which is not a local minimum. Hint:
Suppose that k = 2 and the sample points are {1, 2, 3, 4} R suppose we initialize
the k-means with the centers {2, 4}; and suppose we break ties in the definition of
Ci by assigning i to be the smallest value in argmin j x j .
22.3 Given a metric space (X , d), where |X | < , and k N, we would like to find a
partition of X into C1 , . . ., Ck which minimizes the expression
G kdiam ((X , d), (C1 , . . . , Ck )) = max diam(C j ),
j[d]
22.8 Exercises
where diam(C j ) = max x,x C j d(x, x ) (we use the convention diam(C j ) = 0 if
|C j | < 2).
Similarly to the k-means objective, it is NP-hard to minimize the k-diam objective. Fortunately, we have a very simple approximation algorithm: Initially, we pick
some x X and set 1 = x. Then, the algorithm iteratively sets
j {2, . . ., k}, j = argmax min d(x, i ).
xX
i[ j1]
Finally, we set
i [k], Ci = {x X : i = argmin d(x, j )}.
j[k]
min
1 ,...k X
k
f (d(x, i )),
i=1 xCi
277
23
Dimensionality Reduction
278
m
xi U W xi 22 .
(23.1)
i=1
To solve this problem we first show that the optimal solution takes a specific
form.
Lemma 23.1. Let (U , W ) be a solution to Equation (23.1). Then the columns of U
are orthonormal (namely, U
U is the identity matrix of Rn ) and W = U
.
Proof. Fix any U, W and consider the mapping x U W x. The range of this mapping, R = {U W x : x Rd }, is an n dimensional linear subspace of Rd . Let V Rd,n
be a matrix whose columns form an orthonormal basis of this subspace, namely, the
range of V is R and V
V = I . Therefore, each vector in R can be written as V y
where y Rn . For every x Rd and y Rn we have
x V y22 = x2 + y
V
V y 2y
V
x = x2 + y2 2y
(V
x),
where we used the fact that V
V is the identity matrix of Rn . Minimizing the preceding expression with respect to y by comparing the gradient with respect to y to
zero gives that y = V
x. Therefore, for each x we have that
V V
x = argmin x x 22 .
x R
xi U W xi 22
m
xi V V
xi 22 .
i=1
Since this holds for every U , W the proof of the lemma follows.
279
280
Dimensionality Reduction
On the basis of the preceding lemma, we can rewrite the optimization problem
given in Equation (23.1) as follows:
argmin
m
U Rd,n :U
U =I i=1
xi UU
xi 22 .
(23.2)
(23.3)
where the trace of a matrix is the sum of its diagonal entries. Since the trace is a
linear operator, this allows us to rewrite Equation (23.2) as follows:
m
argmax trace U
xi x
(23.4)
i U .
U Rd,n :U
U =I
i=1
m
Let A = i=1 xi x
d
j =1
D j, j
n
B 2j ,i .
i=1
Note that B
B = U
VV
U = U
U = I . Therefore, the columns of B are also
orthonormal, which implies that dj =1 ni=1 B 2j ,i = n. In addition, let B Rd,d be
a matrix such that its first n columns
are the columns of B and in addition
B
B = I .
n
d
2
2
follows that
trace(U AU)
max
[0,1]d : 1 n
d
D j, j j .
j =1
It is not hard to verify (see 23.2) that the right-hand side equals nj =1 D j , j . We have
therefore shown that for every matrix U Rd,n with orthonormal columns it holds
that trace(U
AU) nj =1 D j , j . On the other hand, if we set U to be the matrix
whose
columns are the n leading eigenvectors of A we obtain that trace(U
AU) =
n
j =1 D j , j , and this concludes our proof.
Remark 23.1. The proof
of Theorem 23.2 also tells us that the value of the objective
of Equation (23.4) is ni=1 Di,i . Combining this with Equation (23.3) and noting
2
A) = di=1 Di,i we obtain that the optimal objective value
that m
i=1 xi = trace(
d
of Equation (23.1) is i=n+1 Di,i .
Remark 23.2. It is a common practiceto center the examples before applying
PCA. That is, we first calculate = m1 m
i=1 xi and then apply PCA on the vectors
(x1 ), . . . , (xm ). This is also related to the interpretation of PCA as variance
maximization (see Exercise 23.4).
281
282
Dimensionality Reduction
PCA
input
A matrix of m examples X Rm,d
number of components n
if (m > d)
A = X
X
Let u1 , . . . , un be the eigenvectors of A with largest eigenvalues
else
B = X X
corresponding to the largest eigenvalue will be close to the vector (1/ 2, 1/ 2).
When projecting a point (x, x + y) on this principal component we will obtain the
. The reconstruction of the original vector will be ((x + y/2), (x + y/2)).
scalar 2x+y
2
In Figure 23.1 we depict the original versus reconstructed data.
1.5
0.5
0.5
1.5
1.5
0.5
0.5
1.5
Figure 23.1. A set of vectors in R2 (xs) and their reconstruction after dimensionality
reduction to R1 using PCA (circles).
o oo oo o
x
+
+++ +++
x xx x x
x
*
* *
*
**
Figure 23.2. Images of faces extracted from the Yale data set. Top-left: the original
images in R50x50 . Top-right: the images after dimensionality reduction to R10 and reconstruction. Middle row: an enlarged version of one of the images before and after PCA.
Bottom: the images after dimensionality reduction to R2 . The different marks indicate
different individuals.
283
284
Dimensionality Reduction
due to Johnson and Lindenstrauss, showing that random projections do not distort
Euclidean distances too much.
Let x1 , x2 be two vectors in Rd . A matrix W does not distort too much the
distance between x1 and x2 if the ratio
W x1 W x2
x1 x2
is close to 1. In other words, the distances between x1 and x2 before and after the
transformation are almost the same. To show that W x1 W x2 is not too far away
from x1 x2 it suffices to show that W does not distort the norm of the difference
x
vector x = x1 x2 . Therefore, from now on we focus on the ratio W
x .
We start with analyzing the distortion caused by applying a random projection
to a single vector.
Lemma 23.3. Fix some x Rd . Let W Rn,d be a random matrix such that each Wi, j
is an independent normal random variable. Then, for every
(0, 3) we have
0
0(1/ n)W x0
02
2
1
>
2 e
n/6 .
P
2
x
Proof. Without loss of generality we can assume that x2 = 1. Therefore, an
equivalent inequality is
2
P (1
)n W x2 (1 +
)n 1 2e
n/6 .
Let wi be the i th row of W . The random variable wi , x is a weighted sum of
d independent normal random variables and therefore it is normally distributed
2
2
with zero mean and variance
j x j = x = 1. Therefore, the random variable
n
W x2 = i=1 (wi , x)2 has a n2 distribution. The claim now follows directly from
a measure concentration property of 2 random variables stated in Lemma B.12
given in Section B.7.
The Johnson-Lindenstrauss lemma follows from this using a simple union bound
argument.
Lemma 23.4 (Johnson-Lindenstrauss Lemma). Let Q be a finite set of vectors in
Rd . Let (0, 1) and n be an integer such that
6 log (2|Q|/)
=
3.
n
Then, with probability of at least 1 over a choice of a random matrix W Rn,d
such that each element of W is distributed normally with zero mean and variance of
1/n we have
W x2
sup
1
<
.
2
x
xQ
Proof. Combining Lemma 23.3 and the union bound we have that for every
(0, 3):
W x2
2
1
>
2 |Q| e
n/6 .
P sup
2
x
xQ
Let denote the right-hand side of the inequality; thus we obtain that
6 log (2|Q|/)
.
=
n
Interestingly, the bound given in Lemma 23.4 does not depend on the original
dimension of x. In fact, the bound holds even if x is in an infinite dimensional Hilbert
space.
285
286
Dimensionality Reduction
example, a team led by Baraniuk and Kelly has proposed a camera architecture
that employs a digital micromirror array to perform optical calculations of a linear
transformation of an image. In this case, obtaining each compressed measurement
is as easy as obtaining a single raw measurement. Another important application
of compressed sensing is medical imaging, in which requiring fewer measurements
translates to less radiation for the patient.
Informally, the main premise of compressed sensing is the following three
surprising results:
1. It is possible to reconstruct any sparse signal fully if it was compressed by
x W x, where W is a matrix which satisfies a condition called the Restricted
Isoperimetric Property (RIP). A matrix that satisfies this property is guaranteed to have a low distortion of the norm of any sparse representable
vector.
2. The reconstruction can be calculated in polynomial time by solving a linear
program.
3. A random n d matrix is likely to satisfy the RIP condition provided that n is
greater than an order of s log (d).
Formally,
Definition 23.5 (RIP). A matrix W Rn,d is (
, s)-RIP if for all x = 0 s.t. x0 s we
have
W x2
2
1
.
x22
The first theorem establishes that RIP matrices yield a lossless compression
scheme for sparse vectors. It also provides a (nonefficient) reconstruction scheme.
Theorem 23.6. Let
< 1 and let W be a (
, 2s)-RIP matrix. Let x be a vector s.t.
x0 s, let y = W x be the compression of x, and let
x argmin v0
v:W v=y
v:W v=y
1
.
1+ 2
In fact, we will prove a stronger result, which holds even if x is not a sparse
vector.
Theorem 23.8. Let
<
vector and denote
1
1+ 2
That is, xs is the vector which equals x on the s largest elements of x and equals 0
elsewhere. Let y = W x be the compression of x and let
x argmin v1
v:W v=y
2 /(1 ).
1 + 1/2
s
x xs 1 ,
1
s log (40d/(
))
.
2
Let W Rn,d be a matrix s.t. each element of W is distributed normally with zero
mean and variance of 1/n. Then, with proabability of at least 1 over the choice of
W , the matrix W U is (
, s)-RIP.
23.3.1 Proofs*
Proof of Theorem 23.8
We follow a proof due to Cands (2008).
Let h = x x. Given a vector v and a set of indices I we denote by v I the vector
whose i th element is vi if i I and 0 otherwise.
The first trick we use is to partition the set of indices [d] = {1, . . . , d} into disjoint
sets of size s. That is, we will write [d] = T0 T1 T2 . . . Td/s1 where for all i , |Ti | =
s, and we assume for simplicity that d/s is an integer. We define the partition as
follows. In T0 we put the s indices corresponding to the s largest elements in absolute
287
288
Dimensionality Reduction
values of x (ties are broken arbitrarily). Let T0c = [d]\T0 . Next, T1 will be the s indices
corresponding to the s largest elements in absolute value of hT0c . Let T0,1 = T0 T1
c
and T0,1
= [d] \ T0,1 . Next, T2 will correspond to the s largest elements in absolute
c . And, we will construct T3 , T4 , . . . in the same way.
value of hT0,1
To prove the theorem we first need the following lemma, which shows that RIP
also implies approximate orthogonality.
Lemma 23.10 Let W be an (
, 2s)-RIP matrix. Then, for any two disjoint sets I , J ,
both of size at most s, and for any vector u we have that W u I , W u J
u I 2 u J 2 .
Proof. W.l.o.g. assume u I 2 = u J 2 = 1.
W u I , W u J =
W u I + W u J 22 W u I W u J 22
.
4
(23.5)
Claim 2: hT0,1 2
2 1/2
x xs 1 .
1 s
=2
1 + 1/2
x xs 1 ,
s
1
Next, we show that hT0c 1 cannot be large. Indeed, from the definition of x we
have that x1 x 1 = x + h1 . Thus, using the triangle inequality we obtain that
x1 x + h1 =
|x i + h i | +
|x i + h i | xT0 1 hT0 1 + hT0c 1 xT0c 1
iT0c
iT0
(23.7)
and since xT0c 1 = x xs 1 = x1 xT0 1 we get that
hT0c 1 hT0 1 + 2xT0c 1 .
(23.8)
(23.9)
j 2
From the RIP condition on inner products we obtain that for all i {1, 2} and j 2
we have
|W hTi , W hT j |
hTi 2 hT j 2 .
2
1/2
s
hT0c 1 .
hT0,1 2
1
Finally, using Equation (23.8) we get that
hT0,1 2 s 1/2 (hT0 1 + 2xT0c 1 ) hT0 2 + 2s 1/2 xT0c 1 ,
but since hT0 2 hT0,1 2 this implies
hT0,1 2
2 1/2
s
xT0c 1 ,
1
289
290
Dimensionality Reduction
(23.10)
Now lets us specify k. For each x B2 (1) let v Q be the vector whose i th element
is sign(x i ) |x i | k/k. Then, for each element we have that |x i vi | 1/k and thus
d
.
x v
k
To ensure that the right-hand side will be at most
we shall set k = d/
. Plugging
this value into Equation (23.10) we conclude that
d d
:=1 vQ
:=1 vQ
Applying Lemma 23.4 on the set {U I v : v Q} we obtain that for n satisfying the
condition given in the lemma, the following holds with probability of at least 1 :
W U v2
I
1
/2,
sup
2
U
v
I
vQ
This also implies that
W U I v
1
/2.
sup
U I v
vQ
W x
1 + a.
x
Clearly a < . Our goal is to show that a
. This follows from the fact that for any
x S of unit norm there exists v Q such that x U I v
/4 and therefore
W x W U I v + W (x U I v) 1 +
/2 + (1 + a)
/4.
Thus,
x S,
W x
1 +
/2 + (1 + a)
/4 .
x
W x
x
/2 +
/4
.
1
/4
W x W U I v W (x U I v) 1 /2 (1 + ) /4 1 .
291
292
Dimensionality Reduction
23.5 SUMMARY
We introduced two methods for dimensionality reduction using linear transformations: PCA and random projections. We have shown that PCA is optimal in the
sense of averaged squared reconstruction error, if we restrict the reconstruction procedure to be linear as well. However, if we allow nonlinear reconstruction, PCA is
not necessarily the optimal procedure. In particular, for sparse data, random projections can significantly outperform PCA. This fact is at the heart of the compressed
sensing method.
23.7 Exercises
Eugenio Beltrami (1873) and Camille Jordan (1874). It has been rediscovered many
times. In the statistical literature, it was introduced by Pearson (1901). Besides PCA
and SVD, there are additional names that refer to the same idea and are being
used in different scientific communities. A few examples are the Eckart-Young theorem (after Carl Eckart and Gale Young who analyzed the method in 1936), the
Schmidt-Mirsky theorem, factor analysis, and the Hotelling transform.
Compressed sensing was introduced in Donoho (2006) and in (Candes & Tao
2005). See also Candes (2006).
23.7 EXERCISES
23.1 In this exercise we show that in the general case, exact recovery of a linear
compression scheme is impossible.
1. let A Rn,d be an arbitrary compression matrix where n d 1. Show that there
exists u, v Rn , u = v such that Au = Av.
2. Conclude that exact recovery of a linear compression scheme is impossible.
23.2 Let Rd such that 1 2 d 0. Show that
max
[0,1]d :1 n
d
jj =
j=1
n
j.
j=1
Hint: Take every vector [0, 1]d such that 1 n. Let i be the minimal index
for which i < 1. If i = n + 1 we are done. Otherwise, show that we can increase i ,
while possibly decreasing j for some j > i , and obtain a better solution. This will
imply that the optimal solution is to set i = 1 for i n and i = 0 for i > n.
23.3 Kernel PCA: In this exercise we show how PCA can be used for constructing nonlinear dimensionality reduction on the basis of the kernel trick (see
Chapter 16).
Let X be some instance space and let S = {x1 , . . . , xm } be a set of points in X .
Consider a feature mapping : X V , where V is some Hilbert space (possibly of infinite dimension). Let K : X X be a kernel function, that is, k(x, x ) =
(x), (x ). Kernel PCA is the process of mapping the elements in S into V
using , and then applying PCA over {(x1 ), . . ., (xm )} into Rn . The output of
this process is the set of reduced elements.
Show how this process can be done in polynomial time in terms of m and n,
assuming that each evaluation of K (, ) can be calculated in a constant time. In
particular, if your implementation requires multiplication of two matrices A and
B, verify that their product can be computed. Similarly, if an eigenvalue decomposition of some matrix C is required, verify that this decomposition can be
computed.
23.4 An Interpretation of PCA as Variance Maximization:
Let x1 , . . ., xm be m vectors in Rd , and let x be a random vector distributed according
to the uniform distribution over x1 , . . ., xm . Assume that E [x] = 0.
1. Consider the problem of finding a unit vector, w Rd , such that the random variable w, x has maximal variance. That is, we would like to solve the
problem
argmax Var[w, x] = argmax
w:w=1
w:w=1
m
1
(w, xi )2 .
m
i=1
293
294
Dimensionality Reduction
Show that the solution of the problem is to set w to be the first principle vector
of x1 , . . ., xm .
2. Let w1 be the first principal component as in the previous question. Now, suppose we would like to find a second unit vector, w2 Rd , that maximizes the
variance of w2 , x, but is also uncorrelated to w1 , x. That is, we would like to
solve
Var[w, x].
argmax
w:w=1, E [(w1 ,x)(w,x)]=0
Show that the solution to this problem is to set w to be the second principal
component of x1 , . . ., xm .
Hint: Note that
where A = i xi x
24
Generative Models
We started this book with a distribution free learning framework; namely, we did not
impose any assumptions on the underlying distribution over the data. Furthermore,
we followed a discriminative approach in which our goal is not to learn the underlying distribution but rather to learn an accurate predictor. In this chapter we describe
a generative approach, in which it is assumed that the underlying distribution over
the data has a specific parametric form and our goal is to estimate the parameters of
the model. This task is called parametric density estimation.
The discriminative approach has the advantage of directly optimizing the quantity of interest (the prediction accuracy) instead of learning the underlying distribution. This was phrased as follows by Vladimir Vapnik in his principle for solving
problems using a restricted amount of information:
When solving a given problem, try to avoid a more general problem as an intermediate
step.
Of course, if we succeed in learning the underlying distribution accurately, we
are considered to be experts in the sense that we can predict by using the Bayes
optimal classifier. The problem is that it is usually more difficult to learn the underlying
distribution than to learn an accurate predictor. However, in some situations, it is
reasonable to adopt the generative learning approach. For example, sometimes it
is easier (computationally) to estimate the parameters of the model than to learn a
discriminative predictor. Additionally, in some cases we do not have a specific task at
hand but rather would like to model the data either for making predictions at a later
time without having to retrain a predictor or for the sake of interpretability of the data.
We start with a popular statistical method for estimating the parameters of the
data, which is called the maximum likelihood principle. Next, we describe two generative assumptions which greatly simplify the learning process. We also describe
the EM algorithm for calculating the maximum likelihood in the presence of latent
variables. We conclude with a brief description of Bayesian reasoning.
296
Generative Models
using the drug. To do so, the drug company sampled a training set of m people and
gave them the drug. Let S = (x 1 , . . . , x m ) denote the training set, where for each i ,
x i = 1 if the i th person survived and x i = 0 otherwise. We can model the underlying
distribution using a single parameter, [0, 1], indicating the probability of survival.
We now would like to estimate the parameter on the basis of the training set S.
A natural idea is to use the average number of 1s in S as an estimator. That is,
1
xi .
=
m
m
(24.1)
i=1
| |
.
(24.2)
2m
Another interpretation of is as the Maximum Likelihood Estimator, as we
formally explain now. We first write the probability of generating the sample S:
P [S = (x 1 , . . . , x m )] =
m
xi (1 )1xi =
i xi
(1 )
(1xi )
i=1
We define the log likelihood of S, given the parameter , as the log of the preceding
expression:
x i + log (1 )
(1 x i ).
L(S; ) = log P [S = (x 1 , . . . , x m )] = log ( )
i
The maximum likelihood estimator is the parameter that maximizes the likelihood
argmax L(S; ).
(24.3)
Next, we show that in our case, Equation (24.1) is a maximum likelihood estimator.
To see this, we take the derivative of L(S; ) with respect to and equate it to zero:
(1 x i )
i xi
i
= 0.
1
Solving the equation for we obtain the estimator given in Equation (24.1).
i=1
1
(x i )2 m log ( 2 ).
L( S; ) = 2
2
m
i=1
i=1
1
d
m
L( S; ) = 3
(x i )2 = 0
d
i=1
=
xi
and = 6
(x i )
2
m
m
i=1
i=1
Note that the maximum likelihood estimate is not always an unbiased estimator.
For example, while
is unbiased, it is possible to show that the estimate of the
variance is biased (Exercise 24.1).
Simplifying Notation
To simplify our notation, we use P[X = x] in this chapter to describe both the probability that X = x (for discrete random variables) and the density of the distribution
at x (for continuous variables).
(24.4)
297
298
Generative Models
That is, (, x) is the negation of the log-likelihood of the observation x, assuming the data is distributed according to P . This loss function is often referred to as
the log-loss. On the basis of this definition it is immediate that the maximum likelihood principle is equivalent to minimizing the empirical risk with respect to the loss
function given in Equation (24.4). That is,
argmin
m
i=1
m
log (P [x i ]).
i=1
9
P[x]
1
P[x] log
+
P[x] log
,
P [x]
P[x]
x
:;
< 9
:;
<
DRE [P ||P ]
(24.5)
H (P )
where DRE is called the relative entropy, and H is called the entropy function. The
relative entropy is a divergence measure between two probabilities. For discrete
variables, it is always nonnegative and is equal to 0 only if the two distributions are
the same. It follows that the true risk is minimal when P = P.
The expression given in Equation (24.5) underscores how our generative
assumption affects our density estimation, even in the limit of infinite data. It shows
that if the underlying distribution is indeed of a parametric form, then by choosing the correct parameter we can make the risk be the entropy of the distribution.
However, if the distribution is not of the assumed parametric form, even the best
parameter leads to an inferior model and the suboptimality is measured by the
relative entropy divergence.
parameter. Then,
P [x]
[(,
x) ( , x)] =
E
log
E
P [x]
x N( ,1)
x N( ,1)
1
1
2
2
=
E
(x ) + (x )
2
2
x N( ,1)
2 ( )2
+ ( )
E
[x]
2
2
x N( ,1)
2 ( )2
+ ( )
2
2
1
)2 .
= (
2
=
(24.6)
299
300
Generative Models
d
P[X i = x i |Y = y].
i=1
With this assumption and using the Bayes rule, the Bayes optimal classifier can be
further simplified:
h Bayes (x) = argmax P[Y = y|X = x]
y{0,1}
= argmax P[Y = y]
y{0,1}
d
P[X i = x i |Y = y].
(24.7)
i=1
That is, now the number of parameters we need to estimate is only 2d + 1. Here, the
generative assumption we made reduced significantly the number of parameters we
need to learn.
When we also estimate the parameters using the maximum likelihood principle,
the resulting classifier is called the Naive Bayes classifier.
1
2
0T 1 0 1T 1 1 .
(24.8)
1
1
T 1
(x
exp
)
(x
)
.
y
y
y
(2)d/2 | y |1/2
2
(24.9)
k
y=1
k
y=1
1
1
T 1
cy
exp (x y ) y (x y ) .
(2)d/2 | y |1/2
2
Note that Y is a hidden variable that we do not observe in our data. Nevertheless,
we introduce Y since it helps us describe a simple parametric form of the probability
of X.
More generally, let be the parameters of the joint distribution of X and Y (e.g.,
in the preceding example, consists of c y , y , and y , for all y = 1, . . . , k). Then, the
log-likelihood of an observation x can be written as
k
log P [X = x] = log
P [X = x, Y = y] .
y=1
301
302
Generative Models
m
P [X = xi ]
i=1
m
log P [X = xi ]
i=1
m
log
i=1
k
P [X = xi , Y = y] .
y=1
m
k
log
P [X = xi , Y = y] .
argmax L( ) = argmax
i=1
y=1
In many situations, the summation inside the log makes the preceding optimization problem computationally hard. The Expectation-Maximization (EM) algorithm, due to Dempster, Laird, and Rubin, is an iterative procedure for searching a
(local) maximum of L( ). While EM is not guaranteed to find the global maximum,
it often works reasonably well in practice.
EM is designed for those cases in which, had we known the values of the latent
variables Y , then the maximum likelihood optimization problem would have been
tractable. More precisely, define the following function over m k matrices and the
set of parameters :
F(Q, ) =
m
k
Q i,y log P [X = xi , Y = y] .
i=1 y=1
is tractable.
The intuitive idea of EM is that we have a chicken and egg problem. On one
hand, had we known Q, then by our assumption, the optimization problem of finding
the best is tractable. On the other hand, had we known the parameters we could
have set Q i,y to be the probability of Y = y given that X = xi . The EM algorithm
Q i,y
= P (t) [Y = y|X = xi ].
(24.10)
This step is called the Expectation step, because it yields a new probability over
the latent variables, which defines a new expected log-likelihood function over .
Maximization Step: Set (t+1) to be the maximizer of the expected loglikelihood, where the expectation is according to Q (t+1) :
(t+1) = argmax F(Q (t+1) , ).
(24.11)
m
k
i=1 y=1
The second term is the sum of the entropies of the rows of Q. Let
Q = Q [0, 1]m,k : i ,
Q i,y = 1
y=1
be the set of matrices whose rows define probabilities over [k]. The following lemma
shows that EM performs alternate maximization iterations for maximizing G.
Lemma 24.2. The EM procedure can be rewritten as
Q (t+1) = argmax G(Q, (t) )
QQ
303
304
Generative Models
Therefore, we only need to show that for any , the solution of argmax QQ G(Q, )
is to set Q i,y = P [Y = y|X = xi ]. Indeed, by Jensens inequality, for any Q Q we
have that
m
k
P [X = xi , Y = y]
G(Q, ) =
Q i,y log
Q i,y
i=1
y=1
m
k
P
[X
=
x
,
Y
=
y]
i
log
Q i,y
Q i,y
i=1
y=1
m
k
log
P [X = xi , Y = y]
=
i=1
m
y=1
log P [X = xi ] = L( ),
i=1
m
k
[X
=
x
,
Y
=
y]
P
G(Q, ) =
P [Y = y|X = xi ] log
P [Y = y|X = xi ]
i=1
y=1
m
k
P [Y = y|X = xi ] log P [X = xi ]
i=1 y=1
=
=
m
k
log P [X = xi ]
P [Y = y|X = xi ]
i=1
y=1
m
log P [X = xi ] = L( ).
i=1
This shows that setting Q i,y = P [Y = y|X = xi ] maximizes G(Q, ) over Q Q and
shows that G(Q (t+1) , (t) ) = L( (t) ).
The preceding lemma immediately implies:
Theorem 24.3. The EM procedure never decreases the log-likelihood; namely,
for all t,
L( (t+1) ) L( (t) ).
Proof. By the lemma we have
L( (t+1) ) = G(Q (t+2) , (t+1) ) G(Q (t+1) , (t) ) = L( (t) ).
1
P (t) [Y = y] P (t) [ X = xi |Y = y]
Zi
1 (t)
1
=
c y exp xi (yt) 2 ,
(24.12)
Zi
2
where Z i is a normalization factor which ensures that y P (t) [Y = y| X = xi ]
sums to 1.
Maximization step: We need to set t+1 to be a maximizer of Equation (24.11),
which in our case amounts to maximizing the following expression w.r.t. c and :
P (t) [Y = y| X = xi ] =
k
m
i=1 y=1
1
P (t) [Y = y| X = xi ] log (c y ) xi y 2 .
2
(24.13)
That is, y is a weighted average of the xi where the weights are according to the
probabilities calculated in the E step. To find the optimal c we need to be more
careful since we must ensure that c is a probability vector. In Exercise 24.3 we
show that the solution is
m
P (t) [Y = y|X = xi ]
c y = k i=1
.
(24.14)
m
y =1
i=1 P (t) [Y = y |X = xi ]
It is interesting to compare the preceding algorithm to the k-means algorithm
described in Chapter 22. In the k-means algorithm, we first assign each example to a
cluster according to the distance xi y . Then, we update each center y according to the average of the examples assigned to this cluster. In the EM approach,
however, we determine the probability that each example belongs to each cluster.
Then, we update the centers on the basis of a weighted sum over the entire sample.
For this reason, the EM approach for k-means is sometimes called soft k-means.
305
306
Generative Models
As an example, let us consider again the drug company which developed a new
drug. On the basis of past experience, the statisticians at the drug company believe
that whenever a drug has reached the level of clinic experiments on people, it is
likely to be effective. They model this prior belief by defining a density distribution
on such that
0. 8 if > 0. 5
P[ ] =
(24.15)
0. 2 if 0. 5
As before, given a specific value of , it is assumed that the conditional probability,
P[X = x| ], is known. In the drug company example, X takes values in {0, 1} and
P[X = x| ] = x (1 )1x .
Once the prior distribution over and the conditional distribution over X given
are defined, we again have complete knowledge of the distribution over X. This is
because we can write the probability over X as a marginal probability
P[X = x] =
P[X = x, ] =
P[ ]P[X = x| ],
where the last equality follows from the definition of conditional probability. If
is continuous we replace P[ ] with the density function and the sum becomes an
integral:
F
P[X = x] = P[ ]P[X = x| ] d .
The second inequality follows from the assumption that X and S are independent
when we condition on . Using the Bayes rule we have
P[ |S] =
P[S| ] P[ ]
,
P[S]
and together with the assumption that points are independent conditioned on , we
can write
m
P[S| ] P[ ]
1
P[ |S] =
=
P[X = x i | ] P[ ].
P[S]
P[S]
i=1
P[X = x|S] =
(24.16)
i=1
Getting back to our drug company example, we can rewrite P[X = x|S] as
F
1
P[X = x|S] =
x+ i xi (1 )1x+ i (1xi ) P[ ] d .
P[S]
It is interesting to note that when P[ ] is uniform we obtain that
F
P[X = x|S] x+ i xi (1 )1x+ i (1xi ) d .
Solving the preceding integral (using integration by parts) we obtain
( i xi ) + 1
.
P[X = 1|S] =
m +2
Recall that the prediction
according to the maximum likelihood principle in this
i xi
case is P[X = 1|] = m . The Bayesian prediction with uniform prior is rather
similar to the maximum likelihood prediction, except it adds pseudoexamples to
the training set, thus biasing the prediction toward the uniform prior.
Maximum A Posteriori
In many situations, it is difficult to find a closed form solution to the integral given
in Equation (24.16). Several numerical methods can be used to approximate this
integral. Another popular solution is to find a single which maximizes P[ |S].
The value of which maximizes P[ |S] is called the Maximum A Posteriori estimator. Once this value is found, we can calculate the probability that X = x given the
maximum a posteriori estimator and independently on S.
24.6 SUMMARY
In the generative approach to machine learning we aim at modeling the distribution
over the data. In particular, in parametric density estimation we further assume that
the underlying distribution over the data has a specific parametric form and our goal
is to estimate the parameters of the model. We have described several principles
for parameter estimation, including maximum likelihood, Bayesian estimation, and
maximum a posteriori. We have also described several specific algorithms for implementing the maximum likelihood under different assumptions on the underlying
data distribution, in particular, Naive Bayes, LDA, and EM.
307
308
Generative Models
There are many excellent books on the generative and Bayesian approaches
to machine learning. See, for example, (Bishop 2006, Koller & Friedman 2009b,
MacKay 2003, Murphy 2012, Barber 2012).
24.8 EXERCISES
24.1 Prove that the maximum likelihood estimator of the variance of a Gaussian variable
is biased.
24.2 Regularization for Maximum Likelihood: Consider the following regularized loss
minimization:
m
1
1
log (1/P [xi ]) +
log (1/ ) + log(1/(1 )) .
m
m
i=1
Show that the preceding objective is equivalent to the usual empirical error
had we added two pseudoexamples to the training set. Conclude that the
regularized maximum likelihood estimator would be
m
1
xi .
=
1+
m +2
i=1
k
y=1
c y = 1,
Using properties of the relative entropy, conclude that c is the solution to the
optimization problem.
25
Feature Selection and Generation
In the beginning of the book, we discussed the abstract model of learning, in which
the prior knowledge utilized by the learner is fully encoded by the choice of the
hypothesis class. However, there is another modeling choice, which we have so far
ignored: How do we represent the instance space X ? For example, in the papayas
learning problem, we proposed the hypothesis class of rectangles in the smoothnesscolor two dimensional plane. That is, our first modeling choice was to represent a
papaya as a two dimensional point corresponding to its smoothness and color. Only
after that did we choose the hypothesis class of rectangles as a class of mappings
from the plane into the label set. The transformation from the real world object
papaya into the scalar representing its smoothness or its color is called a feature
function or a feature for short; namely, any measurement of the real world object
can be regarded as a feature. If X is a subset of a vector space, each x X is sometimes referred to as a feature vector. It is important to understand that the way we
encode real world objects as an instance space X is by itself prior knowledge about
the problem.
Furthermore, even when we already have an instance space X which is represented as a subset of a vector space, we might still want to change it into a different
representation and apply a hypothesis class on top of it. That is, we may define a
hypothesis class on X by composing some class H on top of a feature function which
maps X into some other vector space X . We have already encountered examples
of such compositions in Chapter 15 we saw that kernel-based SVM learns a composition of the class of halfspaces over a feature mapping that maps each original
instance in X into some Hilbert space. And, indeed, the choice of is another form
of prior knowledge we impose on the problem.
In this chapter we study several methods for constructing a good feature set. We
start with the problem of feature selection, in which we have a large pool of features and our goal is to select a small number of features that will be used by our
predictor. Next, we discuss feature manipulations and normalization. These include
simple transformations that we apply on our original features. Such transformations may decrease the sample complexity of our learning algorithm, its bias, or its
309
310
25.1.1 Filters
Maybe the simplest approach for feature selection is the filter method, in which
we assess individual features, independently of other features, according to some
quality measure. We can then select the k features that achieve the highest score
(alternatively, decide also on the number of features to select according to the value
of their scores).
Many quality measures for features have been proposed in the literature. Maybe
the most straightforward approach is to set the score of a feature according to the
error rate of a predictor that is trained solely by that feature.
To illustrate this, consider a linear regression problem with the squared loss. Let
v = (x 1, j , . . . , x m, j ) Rm be a vector designating the values of the j th feature on a
training set of m examples and let y = ( y1 , . . . , ym ) Rm be the values of the target on
the same m examples. The empirical squared loss of an ERM linear predictor that
uses only the j th feature would be
min
a,bR
1
av + b y2 ,
m
(25.1)
Taking the derivative of the right-hand side objective with respect to b and comparing it to zero we obtain that b = 0. Similarly, solving for a (once we know that
b = 0) yields a = v v,
y y /v v
2 . Plugging this value back into the objective
we obtain the value
y y 2
(v v,
y y )2
.
v v
2
Ranking the features according to the minimal loss they achieve is equivalent to
ranking them according to the absolute value of the following score (where now a
higher score yields a better feature):
1
v v,
y y
v v,
y y
= m
.
v v
y y
1
1
2
2
v
v
y
m
m
(25.2)
The preceding expression is known as Pearsons correlation coefficient. The numerator is the empirical estimate of the covariance of the j th feature and the target
value, E [(v Ev)(y E y)], while the denominator is the squared root of the empirical estimate for the variance of the j th feature, E [(v E v)2 ], times the variance of
the target. Pearsons coefficient ranges from 1 to 1, where if the Pearsons coefficient is either 1 or 1, there is a linear mapping from v to y with zero empirical
risk.
If Pearsons coefficient equals zero it means that the optimal linear function from
v to y is the all-zeros function, which means that v alone is useless for predicting y.
However, this does not mean that v is a bad feature, as it might be the case that
together with other features v can perfectly predict y. Indeed, consider a simple
example in which the target is generated by the function y = x 1 + 2x 2 . Assume also
that x 1 is generated from the uniform distribution over {1}, and x 2 = 12 x 1 + 12 z,
311
312
where z is also generated i.i.d. from the uniform distribution over {1}. Then,
E [x 1 ] = E [x 2 ] = E [y] = 0, and we also have
E [yx 1] = E [x 12 ] + 2 E[x 2 x 1 ] = E [x 12 ] E[x 12 ] + E [zx 1 ] = 0.
Therefore, for a large enough training set, the first feature is likely to have a
Pearsons correlation coefficient that is close to zero, and hence it will most probably not be selected. However, no function can predict the target value well without
knowing the first feature.
There are many other score functions that can be used by a filter method.
Notable examples are estimators of the mutual information or the area under the
receiver operating characteristic (ROC) curve. All of these score functions suffer
from similar problems to the one illustrated previously. We refer the reader to
Guyon and Elisseeff (2003).
wR
We will maintain a vector t which minimizes the right-hand side of the equations.
Initially, we set I0 = , V0 = , and 1 to be the empty vector. At round t, for
X j is the projection of X j
every j , we decompose X j = v j + u j where v j = Vt1 Vt1
onto the subspace spanned by Vt1 and u j is the part of X j orthogonal to Vt1 (see
Appendix C). Then,
min Vt1 + u j y2
,
= min Vt1 y2 + 2 u j 2 + 2u j , Vt1 y
,
= min Vt1 y2 + 2 u j 2 + 2u j , y
,
= min Vt1 y2 + min 2 u j 2 2u j , y
= Vt1 t1 y2 + min 2 u j 2 2u j , y
= Vt1 t1 y2
(u j , y)2
.
u j 2
(u j , y)2
.
u j 2
u jt
,
u jt 2
u jt , y
t = t1 ;
.
u jt 2
(u j ,y)2
u j 2
313
314
where e j is the all zeros vector except 1 in the j th element. That is, we keep the
weights of the previously chosen coordinates intact and only optimize over the new
variable. Therefore, for each j we need to solve an optimization problem over a
single variable, which is a much easier task than optimizing over t.
An even simpler approach is to upper bound R(w) using a simple function and
then choose the feature which leads to the largest decrease in this upper bound. For
example, if R is a -smooth function (see Equation (12.5) in Chapter 12), then
R(w + e j ) R(w) +
R(w)
+ 2 /2.
w j
1
Minimizing the right-hand side over yields = R(w)
w j and plugging this value
into the inequality yields
1 R(w) 2
.
R(w)
2
w j
This value is minimized if the partial derivative of R(w) with respect to w j is maximal. We can therefore choose jt to be the index of the largest coordinate of the
gradient of R(w) at w.
Remark 25.3 (AdaBoost as a Forward Greedy Selection Procedure). It is possible
to interpret the AdaBoost algorithm from Chapter 10 as a forward greedy selection
procedure with respect to the function
m
d
exp yi
w j h j (x j ) .
(25.3)
R(w) = log
i=1
j =1
Backward Elimination
Another popular greedy selection approach is backward elimination. Here, we start
with the full set of features, and then we gradually remove one feature at a time
from the set of features. Given that our current set of selected features is I , we go
over all i I , and apply the learning algorithm on the set of features I \ {i }. Each
such application yields a different predictor, and we choose to remove the feature
i for which the predictor obtained from I \ {i } has the smallest risk (on the training
set or on a validation set).
Naturally, there are many possible variants of the backward elimination idea. It
is also possible to combine forward and backward greedy steps.
where1
w0 = |{i : wi = 0}|.
In other words, we want w to be sparse, which implies that we only need to measure
the features corresponding to nonzero elements of w.
Solving this optimization problem is computationally hard (Natarajan 1995,
Davis, Mallat & Avellaneda 1997). A possible
relaxation is to replace the nonconvex
function w0 with the 1 norm, w1 = di=1 |wi |, and to solve the problem
min L S (w) s.t. w1 k1 ,
w
(25.4)
where k1 is a parameter. Since the 1 norm is a convex function, this problem can
be solved efficiently as long as the loss function is convex. A related problem is
minimizing the sum of L S (w) plus an 1 norm regularization term,
min L S (w) + w1 ,
(25.5)
w
where is a regularization parameter. Since for any k1 there exists a such that
Equation (25.4) and Equation (25.5) lead to the same solution, the two problems
are in some sense equivalent.
The 1 regularization often induces sparse solutions. To illustrate this, let us start
with the simple optimization problem
min 12 w2 xw + |w| .
(25.6)
w R
It is easy to verify (see Exercise 25.2) that the solution to this problem is the soft
thresholding operator
w = sign(x) [|x| ]+ ,
(25.7)
def
where [a]+ = max{a, 0}. That is, as long as the absolute value of x is smaller than ,
the optimal solution will be zero.
Next, consider a one dimensional regression problem with respect to the squared
loss:
m
1
2
argmin
(x i w yi ) + |w| .
2m
w Rm
i=1
i=1
The function 0 is often referred to as the 0 norm. Despite the use of the norm notation, 0
is not really a norm; for example, it does not satisfy the positive homogeneity property of norms,
aw0 = |a| w0 .
315
316
For simplicity let us assume that m1 i x i2 = 1, and denote x, y = m
i=1 x i yi ; then the
optimal solution is
w = sign(x, y) [|x, y|/m ]+ .
That is, the solution will be zero unless the correlation between the feature x and
the labels vector y is larger than .
Remark 25.4. Unlike the 1 norm, the 2 norm does not induce sparse solutions.
Indeed, consider aforementioned problem with an 2 regularization, namely,
m
1
2
2
argmin
(x i w yi ) + w .
2m
w Rm
i=1
x, y/m
.
x2 /m + 2
This solution will be nonzero even if the correlation between x and y is very small. In
contrast, as we have shown before, when using 1 regularization, w will be nonzero
only if the correlation between x and y is larger than the regularization parameter .
Adding 1 regularization to a linear regression problem with the squared loss
yields the LASSO algorithm, defined as
1
argmin
Xw y2 + w1 .
(25.8)
2m
w
Under some assumptions on the distribution and the regularization parameter ,
the LASSO will find sparse solutions (see, for example, (Zhao & Yu 2006) and
the references therein). Another advantage of the 1 norm is that a vector with
low 1 norm can be sparsified (see, for example, (Shalev-Shwartz, Zhang, and
Srebro 2010) and the references therein).
More precisely, the bounds we derived in Chapter 13 for regularized loss minimization depend on
w 2 and on either the Lipschitzness or the smoothness of the loss function. For linear predictors
and loss functions of the form (w, (x, y)) = (w, x, y), where is convex and either 1-Lipschitz or
1-smooth with respect to its first argument, we have that is either x-Lipschitz or x2 -smooth. For
example, for the squared loss, (a, y) = 12 (a y)2 , and (w, (x, y)) = 12 (w, x y)2 is x2 -smooth with
respect to its first argument.
317
318
f i f min
the range of each feature be [ 1, 1] by the transformation f i 2 f max
f min 1. Of
course, it is easy to make the range [0, b] or [ b, b], where b is a user-specified
parameter.
Standardization:
This transformation
makes all2 features have a zero mean and unit variance. For
mally, let = m1 m
i=1 ( f i f ) be the empirical variance of the feature. Then, we
set fi
f
i f
Clipping:
This transformation clips high or low values of the feature. For example, f i
sign( fi ) max{b, | f i |}, where b is a user-specified parameter.
Sigmoidal Transformation:
As its name indicates, this transformation applies a sigmoid function on the fea1
ture. For example, f i 1+exp(b
f i ) , where b is a user-specified parameter. This
transformation can be thought of as a soft version of clipping: It has a small effect
on values close to zero and behaves similarly to clipping on values far away from
zero.
Logarithmic Transformation:
The transformation is f i log (b + f i ), where b is a user-specified parameter. This is
widely used when the feature is a counting feature. For example, suppose that the
feature represents the number of appearances of a certain word in a text document.
Then, the difference between zero occurrences of the word and a single occurrence
is much more important than the difference between 1000 occurrences and 1001
occurrences.
Remark 25.5. In the aforementioned transformations, each feature is transformed
on the basis of the values it obtains on the training set, independently of other
features values. In some situations we would like to set the parameter of the
transformation on the basis of other features as well. A notable example is a transformation in which one applies a scaling to the features so that the empirical average
of some norm of the instances becomes 1.
319
320
where each wi is a string representing a word in the dictionary, and given a document, ( p1 , . . . , pd ), where each pi is a word in the document, we represent the
document as a vector x {0, 1}k , where x i is 1 if wi = p j for some j [d], and
x i = 0 otherwise. It was empirically observed in many text processing tasks that linear predictors are quite powerful when applied on this representation. Intuitively,
we can think of each word as a feature that measures some aspect of the document. Given labeled examples (e.g., topics of the documents), a learning algorithm
searches for a linear predictor that weights these features so that a right combination
of appearances of words is indicative of the label.
While in text processing there is a natural meaning to words and to the dictionary, in other applications we do not have such an intuitive representation of an
instance. For example, consider the computer vision application of object recognition. Here, the instance is an image and the goal is to recognize which object appears
in the image. Applying a linear predictor on the pixel-based representation of the
image does not yield a good classifier. What we would like to have is a mapping
that would take the pixel-based representation of the image and would output a
bag of visual words, representing the content of the image. For example, a visual
word can be there is an eye in the image. If we had such representation, we could
have applied a linear predictor on top of this representation to train a classifier for,
say, face recognition. Our question is, therefore, How can we learn a dictionary
of visual words such that a bag-of-words representation of an image would be
helpful for predicting which object appears in the image?
A first naive approach for dictionary learning relies on a clustering algorithm
(see Chapter 22). Suppose that we learn a function c : X {1, . . . , k}, where c(x)
is the cluster to which x belongs. Then, we can think of the clusters as words,
and of instances as documents, where a document x is mapped to the vector
(x) {0, 1}k , where (x)i is 1 if and only if x belongs to the i th cluster. Now, it
is straightforward to see that applying a linear predictor on (x) is equivalent to
assigning the same target value to all instances that belong to the same cluster. Furthermore, if the clustering is based on distances from a class center (e.g., k-means),
then a linear predictor on (x) yields a piece-wise constant predictor on x.
Both the k-means and PCA approaches can be regarded as special cases of a
more general approach for dictionary learning which is called auto-encoders. In an
auto-encoder we learn a pair of functions: an encoder function, : Rd Rk , and
a decoder function, : Rk Rd . The goal of the learning process is to find a pair
of functions such that the reconstruction error, i xi ((xi ))2 , is small. Of
course, we can trivially set k = d and both , to be the identity mapping, which
yields a perfect reconstruction. We therefore must restrict and in some way. In
PCA, we constrain k < d and further restrict and to be linear functions. In kmeans, k is not restricted to be smaller than d, but now and rely on k centroids,
1 , . . . , k , and (x) returns an indicator vector in {0, 1}k that indicates the closest
centroid to x, while takes as input an indicator vector and returns the centroid
representing this vector.
An important property of the k-means construction, which is key in allowing
k to be larger than d, is that maps instances into sparse vectors. In fact, in kmeans only a single coordinate of (x) is nonzero. An immediate extension of the
k-means construction is therefore to restrict the range of to be vectors with at
where v0 = |{ j : v j = 0}|. Note that when s = 1 and we further restrict v1 = 1
then we obtain the k-means encoding function; that is, (x) is the indicator vector
of the centroid closest to x. For larger values of s, the optimization problem in the
preceding definition of becomes computationally difficult. Therefore, in practice,
we sometime use 1 regularization instead of the sparsity constraint and define
to be
(x) = argmin x (v)2 + v1 ,
v
25.4 SUMMARY
Many machine learning algorithms take the feature representation of instances for
granted. Yet the choice of representation requires careful attention. We discussed
approaches for feature selection, introducing filters, greedy selection algorithms,
and sparsity-inducing norms. Next we presented several examples for feature transformations and demonstrated their usefulness. Last, we discussed feature learning, and in particular dictionary learning. We have shown that feature selection,
manipulation, and learning all depend on some prior knowledge on the data.
321
322
25.6 EXERCISES
25.1 Prove the equality given in Equation (25.1). Hint: Let a , b be minimizers of the
left-hand side. Find a, b such that the objective value of the right-hand side is
smaller than that of the left-hand side. Do the same for the other direction.
25.2 Show that Equation (25.7) is the solution of Equation (25.6).
25.3 AdaBoost as a Forward Greedy Selection Algorithm: Recall the AdaBoost algorithm from Chapter 10. In this section we give another interpretation of AdaBoost
as a forward greedy selection algorithm.
Given a set of m instances x1 , . . ., xm , and a hypothesis class H of finite VC
dimension, show that there exist d and h 1 , . . ., h d such that for every h H
there exists i [d] with h i (x j ) = h(x j ) for every j [m].
Let R(w) be as defined in Equation (25.3). Given some w, define f w to be the
function
d
fw ( ) =
wi h i ( ).
i=1
exp ( yi f w (xi ))
,
Z
Furthermore, denoting j =
i=1
m
R(w)
wj
= 2 j 1.
Conclude that if
j 1/2 then
R(w)
w j
/2.
(t+1) ) R(w(t) )
Show
that the update of AdaBoost guarantees R(w
2
log ( 1 4 ). Hint: Use the proof of Theorem 10.2.
PART 4
Advanced Theory
26
Rademacher Complexities
def
F = H = {z (h, z) : h H},
and given f F , we define
1
f (z i ).
m
m
L D ( f ) = E [ f (z)],
zD
LS( f ) =
i=1
(26.1)
326
Rademacher Complexities
(26.2)
f F
(26.3)
i=1
The Rademacher complexity measure captures this idea by considering the expectation of the term appearing in Equation 26.3 with respect to a random choice of .
Formally, let F S be the set of all possible evaluations a function f F can achieve
on a sample S, namely,
F S = {( f (z 1 ), . . . , f (z m )) : f F }.
Let the variables in be distributed i.i.d. according to P [i = 1] = P [i = 1] = 12 .
Then, the Rademacher complexity of F with respect to S is defined as follows:
m
def 1
E
i f (z i ) .
(26.4)
sup
R(F S) =
m {1}m f F
i=1
Rm ,
we define
m
def 1
E sup
R(A) =
i ai .
m a A
(26.5)
i=1
SD m
SD
L D ( f ) L S ( f ) = E [L S ( f )] L S ( f ) = E [L S ( f ) L S ( f )].
S
Taking supremum over f F of both sides, and using the fact that the supremum
of expectation is smaller than expectation of the supremum we obtain
sup (L D ( f ) L S ( f )) = sup E [L S ( f ) L S ( f )]
f F
f F S
E
S
sup (L S ( f ) L S ( f )) .
f F
sup (L D ( f ) L S ( f )) E
sup (L S ( f ) L S ( f ))
S,S
f F
f F
1
E
=
m S,S
sup
m
f F i=1
(
f (z i )
f (z i )) .
(26.6)
Next, we note that for each j , z j and z j are i.i.d. variables. Therefore, we can replace
them without affecting the expectation:
E sup ( f (z j ) f (z j )) +
( f (z i ) f (z i ))
S,S
f F
i= j
= E sup ( f (z j ) f (z j )) +
S,S
f F
( f (z i ) f (z i )) .
(26.7)
i= j
E sup j ( f (z j ) f (z j )) +
( f (z i ) f (z i ))
S,S , j
f F
i= j
1
1
= (l.h.s. of Equation (26.7)) + (r.h.s. of Equation (26.7))
2
2
= E sup ( f (z j ) f (z j )) +
( f (z i ) f (z i )) .
(26.8)
(26.9)
S,S
S,S
f F
i= j
S,S ,
f F i=1
Finally,
sup
f F
i ( f (z i ) f (z i )) sup
f F i=1
f F
i f (z i ) + sup
f F
i f (z i )
and since the probability of is the same as the probability of , the right-hand
side of Equation (26.9) can be bounded by
E
i f (z i ) + sup
i f (z i )
sup
S,S ,
f F
f F
The lemma immediately yields that, in expectation, the ERM rule finds a
hypothesis which is close to the optimal hypothesis in H.
327
328
Rademacher Complexities
SD m
SD
SD m
SD
if h
Furthermore,
= argminh L D (h) then for each (0, 1) with probability of at least
1 over the choice of S we have
2 E S Dm R( H S )
.
Proof. The first inequality follows directly from Lemma 26.2. The second inequality
follows because for any fixed h ,
L D (ERMH (S)) L D (h )
L D (h ) = E [L S (h )] E [L S (ERMH (S))].
S
The third inequality follows from the previous inequality by relying on Markovs
inequality (note that the random variable L D (ERMH (S)) L D (h ) is nonnegative).
Next, we derive bounds similar to the bounds in Theorem 26.3 with a better dependence on the confidence parameter . To do so, we first introduce the
following bounded differences concentration inequality.
Lemma 26.4 (McDiarmids Inequality). Let V be some set and let f : V m R
be a function of m variables such that for some c > 0, for all i [m] and for all
x 1 , . . . , x m , x i V we have
| f (x 1 , . . . , x m ) f (x 1 , . . . , x i1 , x i , x i+1 , . . . , x m )| c.
Let X 1 , . . . , X m be m independent random variables taking values in V . Then, with
probability of at least 1 we have
| f (X 1 , . . . , X m ) E [ f (X 1 , . . . , X m )]| c
ln
m/2.
S Dm
R( H S ) + c
2 ln (2/)
.
m
2 ln (4/)
.
m
2 ln (8/)
.
m
Proof. First note that the random variable RepD (F , S) = suphH L D (h) L S (h)
satisfies the bounded differences condition of Lemma 26.4 with a constant 2c/m.
Combining the bounds in Lemma 26.4 with Lemma 26.2 we obtain that with
probability of at least 1 ,
2 ln (2/)
2 ln (2/)
2 E R( H S ) + c
.
RepD (F , S) E RepD (F , S) + c
S
m
m
The first inequality of the theorem follows from the definition of RepD (F , S). For
the second inequality we note that the random variable R( H S) also satisfies
the bounded differences condition of Lemma 26.4 with a constant 2c/m. Therefore,
the second inequality follows from the first inequality, Lemma 26.4, and the union
bound. Finally, for the last inequality, denote h S = ERMH (S) and note that
L D (h S ) L D (h )
= L D (h S ) L S (h S ) + L S (h S ) L S (h ) + L S (h ) L D (h )
L D (h S ) L S (h S ) + L S (h ) L D (h ) .
(26.10)
The first summand on the right-hand side is bounded by the second inequality of
the theorem. For the second summand, we use the fact that h does not depend on
S; hence by using Hoeffdings inequality we obtain that with probaility of at least
1 /2,
ln (4/)
L S (h ) L D (h ) c
.
(26.11)
2m
Combining this with the union bound we conclude our proof.
The preceding theorem tells us that if the quantity R( H S) is small then it
is possible to learn the class H using the ERM rule. It is important to emphasize
that the last two bounds given in the theorem depend on the specific training set S.
That is, we use S both for learning a hypothesis from H as well as for estimating the
quality of it. This type of bound is called a data-dependent bound.
329
330
Rademacher Complexities
N
j =1 j a
( j)
: N N, j , a( j )
Proof. The main idea follows from the fact that for any vector v we have
N
sup
0:1 =1 j =1
j v j = max v j .
j
Therefore,
m R(A ) = E
sup
m
sup
0: =1 (1)
1
a ,...,a(N) i=1
=E
N
sup
0: =1
1
j =1
= E sup
m
a A
i=1
j sup
a( j )
m
N
( j)
j ai
j =1
( j)
i ai
i=1
i ai
= m R(A),
and we conclude our proof.
The next lemma, due to Massart, states that the Rademacher complexity of a
finite set grows logarithmically with the size of the set.
Lemma 26.8 (Massart Lemma). Let A = {a1 , . . . , a N } be a finite set of vectors in Rm .
N
ai . Then,
Define a = N1 i=1
2 log (N)
.
R(A) max a a
a A
m
Proof. On the basis of Lemma 26.6, we can assume without loss of generality that
a = 0. Let > 0 and let A = {a1 , . . . , a N }. We upper bound the Rademacher
complexity as follows:
,a
m R(A ) = E max , a = E log max e
a A
E log
e ,a
a A
log E
= log
e
,a
a A
m
a A
i=1
a A
// Jensens inequality
E [ei ai ] ,
i
where the last equality occurs because the Rademacher variables are independent.
Next, using Lemma A.6 we have that for all ai R,
E e i a i =
and therefore
m R(A ) log
m
exp
a A i=1
ai2
2
= log
2
exp a /2
a A
2
log |A | max exp a /2
= log (|A |) + max (a2 /2).
a A
a A
2 log (|A|)/ maxa A a2 and rearranging terms we conclude our
The following lemma shows that composing A with a Lipschitz function does not
blow up the Rademacher complexity. The proof is due to Kakade and Tewari.
Lemma 26.9 (Contraction Lemma). For each i [m], let i : R R be a -Lipschitz
function; namely, for all , R we have |i () i ()| | |. For a Rm let
(a) denote the vector (1 (a1 ), . . . , m (ym )). Let A = {(a) : a A}. Then,
R( A) R(A).
Proof. For simplicity, we prove the lemma for the case = 1. The case =
1 will follow by defining = 1 and then using Lemma 26.6. Let Ai =
{(a1 , . . . , ai1 , i (ai ), ai+1 , . . . , am ) : a A}. Clearly, it suffices to prove that for any
set A and all i we have R(Ai ) R(A). Without loss of generality we will prove the
latter claim for i = 1 and to simplify notation we omit the subscript from 1 . We have
m
m R(A1 ) = E sup
i ai
a A1 i=1
= E sup 1 (a1 ) +
a A
m
i ai
i=2
m
m
1
=
i ai + sup (a1 ) +
i ai
sup (a1 ) +
E
2 2 ,...,m a A
a A
i=2
i=2
m
m
1
E
=
i ai +
i ai
sup (a1 ) (a1 ) +
2 2 ,...,m a,a A
i=2
i=2
m
m
1
E
i ai +
i ai
sup |a1 a1 | +
,
(26.12)
2 2 ,...,m a,a A
i=2
i=2
where in the last inequality we used the assumption that is Lipschitz. Next, we
note that the absolute value on |a1 a1 | in the preceding expression can be omitted
since both a and a are from the same set A and the rest of the expression in the
331
332
Rademacher Complexities
sup
a,a A
a1 a1 +
m
i ai +
i=2
m
i ai
(26.13)
i=2
But, using the same equalities as in Equation (26.12), it is easy to see that the righthand side of Equation (26.13) exactly equals m R(A), which concludes our proof.
(26.14)
.
m
R(H2 S)
m
sup
aH2 S i=1
=E
sup
i w, xi
w:w1 i=1
sup w,
=E
w:w1
E
i ai
m
m
m
i xi
i=1
i xi 2 .
(26.15)
i=1
0
0
02 1/2
02 1/2
0
0 m
m
m
0
0
0
0
0
0
0
0
0
0
0
0
E 0
i xi 0 = E 0
i xi 0 E 0
i xi 0 . (26.16)
0
0
0
0
0
0
i=1
i=1
i=1
m
E
i xi 22 = E
i j xi , x j
i=1
i, j
xi , x j E [i j ] +
i= j
m
m
xi , xi E i2
i=1
i=1
Combining this with Equation (26.15) and Equation (26.16) we conclude our proof.
Next we bound the Rademacher complexity of H1 S.
Lemma 26.11. Let S = (x1 , . . . , xm ) be vectors in Rn . Then,
2 log (2n)
.
R(H1 S) max xi
i
m
Proof. Using Holders inequality we know that for any vectors w, v we have w, v
w1 v . Therefore,
m
i ai
m R(H1 S) = E sup
=E
aH1 S i=1
sup
=E
w:w1 1 i=1
sup w,
w:w1 1
E
m
m
i w, xi
m
i xi
i=1
i xi .
(26.17)
i=1
333
334
Rademacher Complexities
(26.18)
where : R Y R is such that for all y Y, the scalar function a (a, y) is Lipschitz. For example, the hinge-loss function, (w, (x, y)) = max{0, 1 yw, x},
can be written as in Equation (26.18) using (a, y) = max{0, 1 ya}, and note
that is 1-Lipschitz for all y {1}. Another example is the absolute loss function, (w, (x, y)) = |w, x y|, which can be written as in Equation (26.18) using
(a, y) = |a y|, which is also 1-Lipschitz for all y R.
The following theorem bounds the generalization error of all predictors in H
using their empirical error.
Theorem 26.12. Suppose that D is a distribution over X Y such that with probability
1 we have that x2 R. Let H = {w : w2 B} and let : H Z R be a loss
function of the form given in Equation (26.18) such that for all y Y, a (a, y)
is a -Lipschitz function and such that maxa[B R,B R] |(a, y)| c. Then, for any
(0, 1), with probability of at least 1 over the choice of an i.i.d. sample of size m,
2 ln (2/)
2 B R
+c
.
w H, L D (w) L S (w) +
m
m
Proof. Let F = {(x, y) (w, x, y) : w H}. We will show that with probability 1,
R(F S) B R/ m and then the theorem will follow from Theorem 26.5. Indeed,
the set F S can be written as
F S = {((w, x1 , y1 ), . . . , (w, xm , ym )) : w H},
and the bound on R(F S) follows directly by combining Lemma 26.9, Lemma 26.10,
and the assumption that x2 R with probability 1.
We next derive a generalization bound for hard-SVM based on the previous
theorem. For simplicity, we do not allow a bias term and consider the hard-SVM
problem:
argmin w2 s.t. i , yi w, xi 1
(26.19)
w
Theorem 26.13. Consider a distribution D over X {1} such that there exists some
vector w with P(x,y)D [yw , x 1] = 1 and such that x2 R with probability 1.
Let w S be the output of Equation (26.19). Then, with probability of at least 1 over
the choice of S Dm , we have that
2 ln (2/)
2 R w
+ (1 + R w )
.
P [y = sign(w S , x)]
m
m
(x,y)D
Proof. Throughout the proof, let the loss function be the ramp loss (see
Section 15.2.3). Note that the range of the ramp loss is [0, 1] and that it is a
1-Lipschitz function. Since the ramp loss upper bounds the zero-one loss, we
have that
P [y = sign(w S , x)] L D (w S ).
(x,y)D
Let B = w 2 and consider the set H = {w : w2 B}. By the definition of hardSVM and our assumption on the distribution, we have that w S H with probability
1 and that L S (w S ) = 0. Therefore, using Theorem 26.12 we have that
2 ln (2/)
2B R
.
L D (w S ) L S (w S ) + +
m
m
Remark 26.1. Theorem 26.13 implies that the sample complexity of hard-SVM
2
2
grows like R w
. Using a more delicate analysis and the separability assumption,
2
R 2 w 2
.
preceding theorem depends on w ,
+
.
P [y = sign(w S , x)]
m
m
(x,y)D
Proof. For any integer i , let Bi = 2i , Hi = {w : w Bi }, and let i = 2i2 . Fix i , then
using Theorem 26.12 we have that with probability of at least 1 i
2 ln (2/i )
2Bi R
w Hi , L D (w) L S (w) + +
m
m
Applying the union bound and using i=1 i we obtain that with probability of
at least 1 this holds for all i . Therefore, for all w, if we let i = log2 (w) then
w Hi , Bi 2w, and
2
i
(2i)2
(4 log2 (w))2
.
Therefore,
2 ln (2/i )
m
4wR
4( ln (4 log2 (w)) + ln (1/))
L S (w) +
+
.
m
m
2Bi R
L D (w) L S (w) + +
m
335
336
Rademacher Complexities
27
Covering Numbers
In this chapter we describe another way to measure the complexity of sets, which is
called covering numbers.
27.1 COVERING
Definition 27.1 (Covering). Let A Rm be a set of vectors. We say that A is r covered by a set A , with respect to the Euclidean metric, if for all a A there exists
a A with a a r . We define by N(r , A) the cardinality of the smallest A that
r -covers A.
Example 27.1 (Subspace). Suppose that A Rm , let c = maxa A a,
andd assume
m
that A lies in a d-dimensional subspace of R . Then, N(r , A) (2c d/r ) . To see
this, let v1 , . . . , vd be an orthonormal basis of the subspace. Then, any a A can be
written as a = di=1 i vi with 2 = a2 c. Let
R and consider the set
A =
d
+
i vi : i , i {c, c +
, c + 2
, . . . , c} .
i=1
Given a A s.t. a =
d
i=1 i vi
a a 2 =
(i i )vi 2 2
vi 2 2 d.
N(r , A) |A | =
2c
d
=
d
2c d
.
r
337
338
Covering Numbers
27.1.1 Properties
The following lemma is immediate from the definition.
Lemma 27.2. For any A Rm , scalar c > 0, and vector a0 Rm , we have
r > 0, N(r , {c a + a0 : a A}) N(cr , A).
Next, we derive a contraction principle.
Lemma 27.3. For each i [m], let i : R R be a -Lipschitz function; namely, for
all , R we have |i () i ()| | |. For a Rm let (a) denote the vector
(1 (a1 ), . . . , m (am )). Let A = {(a) : a A}. Then,
N( r , A) N(r , A).
Proof. Define B = A. Let A be an r -cover of A and define B = A . Then, for
all a A there exists a A with a a r . So,
(a) (a )2 =
(i (ai ) i (ai ))2 2
(ai ai )2 (r )2 .
i
Hence,
is an ( r )-cover of B.
1
1
(M)
E sup , a .
E a b +
m
m
a B
R(A) =
k=1
Therefore,
c 2M 6c k
R(A) +
2
m
m
M
log (N(c2k , A)).
k=1
6c
( + 2).
m
Proof. The bound follows from Lemma 27.4 by taking M and noting that
k
k = 2.
= 1 and
k=1 2
k=1 k2
Example 27.2. Consider a set A which lies in a d dimensional subspace of Rm and
d
such that c = maxa A a. We have shown that N(r , A) 2cr d . Therefore, for
any k,
k
log (N(c2 , A)) d log 2k+1 d
d log (2 d) + k d
d log (2 d) + d k.
6c
c d log (d)
R(A)
d log (2 d) + 2 d = O
.
m
m
339
340
Covering Numbers
28
Proof of the Fundamental Theorem
of Learning Theory
In this chapter we prove Theorem 6.8 from Chapter 6. We remind the reader the
conditions of the theorem, which will hold throughout this chapter: H is a hypothesis
class of functions from a domain X to {0, 1}, the loss function is the 0 1 loss, and
VCdim(H) = d < .
We shall prove the upper bound for both the realizable and agnostic cases and
shall prove the lower bound for the agnostic case. The lower bound for the realizable
case is left as an exercise.
d + ln (1/)
.
2
yields an
, -learner for H. We prove this result on the basis of Theorem 26.5.
Let (x1 , y1 ), . . . , (xm , ym ) be a classification training set. Recall that the SauerShelah lemma tells us that if VCdim(H) = d then
d
{(h(x1 ), . . . , h(xm )) : h H}
e m .
d
341
342
Denote A = {(1[h(x1 )= y1 ] , . . . , 1[h(xm )= ym ] ) : h H}. This clearly implies that
e m d
|A|
.
d
Combining this with Lemma 26.8 we obtain the following bound on the Rademacher
complexity:
2d log (em/d)
R(A)
.
m
Using Theorem 26.5 we obtain that with probability of at least 1 , for every h H
we have that
8d log (em/d)
2 log (2/)
L D (h) L S (h)
+
.
m
m
Repeating the previous argument for minus the zero-one loss and applying the
union bound we obtain that with probability of at least 1 , for every h H it
holds that
8d log (em/d)
2 log (4/)
|L D (h) L S (h)|
+
m
m
8d log (em/d) + 2 log (4/)
.
2
m
To ensure that this is smaller than
we need
4
m 2 8d log (m) + 8d log (e/d) + 2 log (4/) .
Using Lemma A.2, a sufficient condition for the inequality to hold is that
32d
64d
8
m 4 2 log
+ 2 8d log (e/d) + 2 log (4/) .
2
We first show that for any
< 1/ 2 and any (0, 1), we have that m(
, )
0. 5 log (1/(4))/
2 . To do so, we show that for m 0. 5 log (1/(4))/
2 , H is not
learnable.
Choose one example that is shattered by H. That is, let c be an example such that
there are h + , h H for which h + (c) = 1 and h (c) = 1. Define two distributions,
1+yb
2
if x = c
otherwise.
That is, all the distribution mass is concentrated on two examples (c, 1) and (c, 1),
1b
where the probability of (c, b) is 1+b
2 and the probability of (c, b) is 2 .
Let A be an arbitrary algorithm. Any training set sampled from Db has the
form S = (c, y1 ), . . . , (c, ym ). Therefore, it is fully characterized by the vector y =
(y1 , . . . , ym ) {1}m . Upon receiving a training set S, the algorithm A returns a
hypothesis h : X {1}. Since the error of A w.r.t. Db only depends on h(c), we
can think of A as a mapping from {1}m into {1}. Therefore, we denote by A(y)
the value in {1} corresponding to the prediction of h(c), where h is the hypothesis
that A outputs upon receiving the training set S = (c, y1 ), . . . , (c, ym ).
Note that for any hypothesis h we have
L Db (h) =
1 h(c)b
.
2
=
2
2
if A(y) = b
otherwise.
Fix A. For b {1}, let Y b = {y {0, 1}m : A(y) = b}. The distribution Db induces
a probability Pb over {1}m . Hence,
P [L Db (A(y)) L Db (h b ) =
] = Db (Y b ) =
Pb [y]1[A(y)=b] .
Denote N + = {y : |{i : yi = 1}| m/2} and N = {1}m \ N + . Note that for any y N +
we have P+ [y] P [y] and for any y N we have P [y] P+ [y]. Therefore,
max P[L Db (A(y)) L Db (h b ) =
]
b{1}
= max
b{1}
Pb [y]1[A(y)=b]
1
1
P+ [y]1[A(y)=+] +
P [y]1[A(y)=]
2 y
2 y
1
(P+ [y]1[A(y)=+] + P [y]1[A(y)=] )
2
+
yN
1
(P+ [y]1[A(y)=+] + P [y]1[A(y)=] )
2
yN
343
344
1
(P [y]1[A(y)=+] + P [y]1[A(y)=] )
2
+
yN
1
(P+ [y]1[A(y)=+] + P+ [y]1[A(y)=] )
2
yN
1
1
=
P [y] +
P+ [y].
2
2
+
yN
yN
Next note that yN + P [y] = yN P+ [y], and both values are the probability that
a Binomial (m, (1
)/2) random variable will have value greater than m/2. Using
Lemma B.11, this probability is lower bounded by
1
1
1 1 exp( m
2 /(1
2 ))
1 1 exp( 2m
2 ) ,
2
2
where we used the assumption that
2 1/2. It follows that if m 0. 5 log (1/(4))/
2
then there exists b such that
P [L Db (A(y)) L Db (h b ) =
]
1 1 4 ,
2
where the last inequality follows by standard algebraic manipulations. This concludes our proof.
We shall now prove that for every
< 1/(8 2) we have that m(
, ) 8d
.
2
Let = 8
and note that (0, 1/ 2). We will construct a family of distributions
as follows. First, let C = {c1 , . . . , cd } be a set of d instances which are shattered by H.
Second, for each vector (b1 , . . . , bd ) {1}d , define a distribution Db such that
1 1+ybi
2
if i : x = ci
Db ({(x, y)}) = d
0
otherwise.
That is, to sample an example according to Db , we first sample an element ci C
uniformly at random, and then set the label to be bi with probability (1 + )/2 or
bi with probability (1 )/2.
It is easy to verify that the Bayes optimal predictor for Db is the hypothesis h H
such that h(ci ) = bi for all i [d], and its error is 1
2 . In addition, for any other
function f : X {1}, it is easy to verify that
L Db ( f ) =
.
2
d
2
d
Therefore,
L Db ( f ) min L Db (h) =
hH
(28.2)
hH
E m L Db (A(S)) min L Db (h)
hH
(28.4)
|{i [d] : A(S)(ci ) = bi |
Em
d
Db :bU ({1}d ) SDb
(28.5)
E
E 1[A(S)(ci )=bi ] ,
d
Db :bU ({1}d ) SDbm
(28.6)
i=1
where the first equality follows from Equation (28.2). In addition, using the definition of Db , to sample S Db we can first sample ( j1 , . . . , jm ) U ([d])m , set xr = c ji ,
and finally sample yr such that P [yr = b ji ] = (1 + )/2. Let us simplify the notation
and use y b to denote sampling according to P[y = b] = (1 + )/2. Therefore, the
right-hand side of Equation (28.6) equals
E
E
E 1[A(S)(ci )=bi ] .
d
j U ([d])m bU ({1}d ) r,yr b jr
d
(28.7)
i=1
We now proceed in two steps. First, we show that among all learning algorithms,
A, the one which minimizes Equation (28.7) (and hence also Equation (28.4)) is the
Maximum-Likelihood learning rule, denoted A M L . Formally, for each i , A M L (S)(ci )
is the majority vote among the set {yr : r [m], xr = ci }. Second, we lower bound
Equation (28.7) for A M L .
Lemma 28.1. Among all algorithms, Equation (28.4) is minimized for A being the
Maximum-Likelihood algorithm, A M L , defined as
yr .
i , A M L (S)(ci ) = sign
r:xr =ci
Proof. Fix some j [d]m . Note that given j and y {1}m , the training set S is fully
determined. Therefore, we can write A( j , y) instead of A(S). Let us also fix i [d].
Denote bi the sequence (b1 , . . . , bi1 , bi+1 , . . . , bm ). Also, for any y {1}m , let y I
denote the elements of y corresponding to indices for which jr = i and let y I be
the rest of the elements of y. We have
E
bU ({1}d ) r,yr b jr
1
2
1[A(S)(ci )=bi ]
E
bi U ({1}d1 ) y
bi {1}
bi U ({1}d1 ) I
y
1
P[y I |bi ]
P[y I |bi ]1[A( j ,y)(ci )=bi ] .
2 I
y
bi {1}
345
346
The sum within the parentheses is minimized when A( j , y)(ci ) is the maximizer of
P[y I |bi ] over bi {1}, which is exactly the Maximum-Likelihood rule. Repeating
the same argument for all i we conclude our proof.
Fix i . For every j , let n i ( j ) = {|t : jt = i |} be the number of instances in which the
instance is ci . For the Maximum-Likelihood rule, we have that the quantity
E
bU ({1}d ) r,yr b jr
i=1
d
2n ( j )
2
i
E
1 1e
2d
j U ([d])m
i=1
d
E
1 2 2 n i ( j ) ,
2d
j U ([d])m
i=1
1 2
2d
j U ([d])m
i=1
d
1 2 2 m/d
2d
i=1
2
=
1 2 m/d .
2
=
As long as m <
d
,
8 2
In summary, we have shown that if m < 8d 2 then for any algorithm there exists a
distribution such that
E m L D (A(S)) min L D (h) /4.
SD
hH
Finally, Let = 1 (L D (A(S)) minhH L D (h)) and note that [0, 1] (see
Equation (28.5)). Therefore, using Lemma B.1, we get that
P [L D (A(S)) min L D (h) >
] = P >
E []
hH
8d
,
2
1
.
4
d ln (1/
) + ln (1/)
.
(1/)
We do so by showing that for m C d ln (1/
)+ln
, H is learnable using the ERM
rule. We prove this claim on the basis of the notion of
-nets.
SD
T D
m
2 }.
347
348
Note that (S, T ) B implies S B and therefore 1[(S,T )B ] = 1[(S,T )B ] 1[SB] , which
gives
P [(S, T ) B ] = E m E m 1[(S,T )B ] 1[SB]
SD T D
= E m 1[SB] E m 1[(S,T )B ] .
T D
SD
Fix some S. Then, either 1[SB] = 0 or S B and then h S such that D(h S )
and
|h S S| = 0. It follows that a sufficient condition for (S, T ) B is that |T h S | >
m
2 .
Therefore, whenever S B we have
E 1[(S,T )B ]
T D m
P [|T h S | >
T D m
m
2 ].
m
2 ]
2
m (mm/2)2
Thus,
P [|T h S | >
m
2 ]
= 1 P[|T h S |
m
2 ]
1 P[|T h S |
m
2 ]
1/2.
AD 2m hH
AD 2m hH
AD 2m hH A
AD 2m
hH A
AD 2m j J
AD 2m
hH A
hH A
j J
Now, fix some A s.t. |h A| . Then, E j 1[|hAj |=0] is the probability that when
choosing m balls from a bag with at least red balls, we will never choose a red ball.
This probability is at most
(1 /(2m))m = (1
/4)m e
m/4 .
We therefore get that
P [ A B ]
AD 2m
e m/4 e m/4
hH A
AD 2m
|H A |.
Using the definition of the growth function we conclude the proof of Claim 2.
Completing the Proof: By Sauers lemma we know that H (2m) (2em/d)d .
Combining this with the two claims we obtain that
P [S B] 2(2em/d)d e
m/4 .
We would like the right-hand side of the inequality to be at most ; that is,
2(2em/d)d e
m/4 .
Rearranging, we obtain the requirement
m
4d
4
4
d log (2em/d) + log (2/) =
log (m) + (d log (2e/d) + log (2/).
Using Lemma A.2, a sufficient condition for the preceding to hold is that
8d
16d
8
log
m
+ (d log (2e/d) + log (2/).
A sufficient condition for this is that
8d
16d
16
log
m
+ (d log (2e/d) + 12 log (2/)
16d
8
8d 2e
=
log
+ log (2/)
d
8
16e
2
=
2d log
+ log
.
349
350
i.i.d. instances from X with labels according to c we have that any ERM hypothesis
has a true error of at most
.
Proof. Define the class Hc = {c h : h H}, where c h = (h \ c) (c \ h). It is
easy to verify that if some A X is shattered by H then it is also shattered by Hc
and vice versa. Hence, VCdim(H) = VCdim(Hc ). Therefore, using Theorem 28.3
we know that with probability
of at least 1 , the sample S is an
-net for Hc .
Note
that L D (h) = D(h c). Therefore, for any h H with L D (h)
we have that
|(h c) S| > 0, which implies that h cannot be an ERM hypothesis, which concludes
our proof.
29
Multiclass Learnability
In view of the fundamental theorem of learning theory (Theorem 6.8), it is natural to seek a generalization of the VC dimension to multiclass hypothesis classes.
In Section 29.1 we show such a generalization, called the Natarajan dimension, and
state a generalization of the fundamental theorem based on the Natarajan dimension. Then, we demonstrate how to calculate the Natarajan dimension of several
important hypothesis classes.
Recall that the main message of the fundamental theorem of learning theory is
that a hypothesis class of binary classifiers is learnable (with respect to the 0-1 loss)
if and only if it has the uniform convergence property, and then it is learnable by
any ERM learner. In Chapter 13, Exercise 29.2, we have shown that this equivalence breaks down for a certain convex learning problem. The last section of this
chapter is devoted to showing that the equivalence between learnability and uniform
convergence breaks down even in multiclass problems with the 0-1 loss, which are
very similar to binary classification. Indeed, we construct a hypothesis class which is
learnable by a specific ERM learner, but for which other ERM learners might fail
and the uniform convergence property does not hold.
352
Multiclass Learnability
d + log (1/)
d log (k) + log (1/)
m UC
.
H (
, ) C2
2
2
d + log (1/)
d log (k) + log (1/)
m H (
, ) C2
.
2
2
The proof of Natarajans lemma shares the same spirit of the proof of Sauers
lemma and is left as an exercise (see Exercise 29.3).
T (h)(x)
= argmax h i (x).
i[k]
If there are two labels that maximize h i (x), we choose the smaller one. Also, let
OvA,k
: h (Hbin )k }.
Hbin
= {T (h)
OvA,k
? Intuitively, to specify a
What should be the Natarajan dimension of Hbin
hypothesis in Hbin we need d = VCdim(Hbin ) parameters. To specify a hypotheOvA,k
, we need to specify k hypotheses in Hbin . Therefore, kd parameters
sis in Hbin
should suffice. The following lemma establishes this intuition.
OvA,k
Hbin
| |C|d .
2|C|
We conclude that
OvA,k
Hbin
|C|dk .
C
The proof follows by taking the logarithm and applying Lemma A.1.
353
354
Multiclass Learnability
How tight is Lemma 29.5? It is not hard to see that for some classes,
OvA,k
) can be much smaller than dk (see Exercise 29.1). However, there
Ndim(Hbin
OvA,k
are several natural binary classes, Hbin (e.g., halfspaces), for which Ndim(Hbin
)=
(dk) (see Exercise 29.6).
: h (Hbin )l }.
Hrbin = {R(h)
x argmaxw, (x,i ) : w Rd
(29.1)
i[k]
(x, f 0 (x)) (x, f1 (x)). We claim that the set (C) = {(x) : x C} consists of
|C| elements (i.e., is one to one) and is shattered by the binary hypothesis class of
homogeneous linear separators on Rd ,
H = {x sign(w, x) : w Rd }.
Rn
R(ky)n
355
356
Multiclass Learnability
Let A be some ERM algorithm for H. Assume that A operates on a sample labeled
by h A H. Since h A is the only hypothesis in H that might return the label A, if
A observes the label A, it knows" that the learned hypothesis is h A , and, as an
ERM, must return it (note that in this case the error of the returned hypothesis
is 0). Therefore, to specify an ERM, we should only specify the hypothesis it returns
upon receiving a sample of the form
S = {(x 1 , ), . . . , (x m , )}.
We consider two ERMs: The first, Agood , is defined by
Agood (S) = h ;
that is, it outputs the hypothesis which predicts * for every x X . The second
ERM, Abad , is defined by
Abad (S) = h {x1 ,...xm }c .
The following claim shows that the sample complexity of Abad is about |X |-times
larger than the sample complexity of Agood . This establishes a gap between different
ERMs. If X is infinite, we even obtain a learnable class that is not learnable by
every ERM.
Claim 29.9.
1. Let
, > 0, D a distribution
over X and h A H. Let S be an i.i.d. sample
1
consisting of m
log 1 examples, sampled according to D and labeled by
h A . Then, with probability of at least 1 , the hypothesis returned by Agood
will have an error of at most
.
2. There exists a constant a > 0 such that for every 0 <
< a there exists a distribution D over X and h A H such that the following holds. The hypothesis
|1
, sampled according
returned by Abad upon receiving a sample of size m |X6
1
Proof. Let D be a distribution over X and suppose that the correct labeling is h A .
For any sample, Agood returns either h or h A . If it returns h A then its true error is
zero. Thus, it returns a hypothesis with error
only if all the m examples in the
sample are from X \ A while the error of h , L D (h ) = PD [ A], is
. Assume m
1
1
m
m .
log ( ); then the probability of the latter event is no more than (1
) e
This establishes item 1.
Next we prove item 2. We restrict the proof to the case that |X | = d < . The
proof for infinite X is similar. Suppose that X = {x 0 , . . . , x d1 }.
Let a > 0 be small enough such that 1 2
e4
for every
< a and fix some
< a. Define a distribution on X by setting P [x 0 ] = 1 2
and for all 1 i d 1,
2
. Suppose that the correct hypothesis is h and let the sample size be m.
P [x i ] = d1
Clearly, the hypothesis returned by Abad will err on all the examples from X which
61
,
are not in the sample. By Chernoffs bound, if m d1
6
, then with probability e
the sample will include no more than d1
examples
from
X
.
Thus
the
returned
2
hypothesis will have error
.
29.6 Exercises
29.6 EXERCISES
29.1 Let d, k > 0. Show that there exists a binary hypothesis Hbin of VC dimension d
) = d.
such that Ndim(HOvA,k
bin
29.2 Prove Lemma 29.6.
29.3 Prove Natarajans lemma.
Hint: Fix some x0 X . For i , j [k], denote by Hi j all the functions f : X \{x0 } [k]
that can be extended to a function in Hboth by defining f (x0 ) = i and by defining
f (x0 ) = j . Show that |H| |HX \{x0 } | + i= j |Hi j | and use induction.
29.4 Adapt the proof of the binary fundamental theorem and Natarajans lemma to
prove that, for some universal constant C > 0 and for every hypothesis class of
Natarajan dimension d, the agnostic sample complexity of H is
d log kd
+ log (1/)
m H (
, ) C
.
2
29.5 Prove that, for some universal constant C > 0 and for every hypothesis class of
Natarajan dimension d, the agnostic sample complexity of H is
m H (
, ) C
d + log (1/)
.
2
357
358
Multiclass Learnability
30
Compression Bounds
Throughout the book, we have tried to characterize the notion of learnability using
different approaches. At first we have shown that the uniform convergence property of a hypothesis class guarantees successful learning. Later on we introduced
the notion of stability and have shown that stable algorithms are guaranteed to be
good learners. Yet there are other properties which may be sufficient for learning,
and in this chapter and its sequel we will introduce two approaches to this issue:
compression bounds and the PAC-Bayes approach.
In this chapter we study compression bounds. Roughly speaking, we shall see
that if a learning algorithm can express the output hypothesis using a small subset
of the training set, then the error of the hypothesis on the rest of the examples
estimates its true error. In other words, an algorithm that can compress its output
is a good learner.
*
4
log
(1/)
2L
(h
)
log
(1/)
V T
.
P L D (h T ) L V (h T )
+
|V |
|V |
To derive this bound, all we needed was independence between T and V .
Therefore, we can redefine the protocol as follows. First, we agree on a sequence
of k indices I = (i 1 , . . . ,i k ) [m]k . Then, we sample a sequence of m examples
S = (z 1 , . . . , z m ). Now, define T = SI = (z i1 , . . . , z ik ) and define V to be the rest of
359
360
Compression Bounds
the examples in S. Note that this protocol is equivalent to the protocol we defined
before hence Lemma 30.1 still holds.
Applying a union bound over the choice of the sequence of indices we obtain
the following theorem.
Theorem 30.2. Let k be an integer and let B : Z k H be a mapping from sequences
of k examples to the hypothesis class. Let m 2k be a training set size and let A :
Z m H be a learning rule that receives a training sequence S of size m and returns
a hypothesis such that A(S) = B(z i1 , . . . , z ik ) for some (i 1 , . . . ,i k ) [m]k . Let V = {z j :
j
/ (i 1 , . . . ,i k )} be the set of examples which were not selected for defining A(S). Then,
with probability of at least 1 over the choice of S we have
4k log (m/) 8k log(m/)
+
.
L D (A(S)) L V (A(S)) + L V (A(S))
m
m
Proof. For any I [m]k let h I = B(z i1 , . . . , z ik ). Let n = m k. Combining
Lemma 30.1 with the union bound we have
2L V (h I ) log (1/) 4 log (1/)
k
+
P I [m] s. t. L D (h I ) L V (h I )
n
n
2L V (h I ) log (1/) 4 log (1/)
+
P L D (h I ) L V (h I )
n
n
k
I [m]
m k .
Denote = m k . Using the assumption k m/2, which implies that n = m k m/2,
the above implies that with probability of at least 1 we have that
4k log (m/ ) 8k log (m/ )
+
,
L D (A(S)) L V (A(S)) + L V (A(S))
m
m
which concludes our proof.
As a direct corollary we obtain:
Corollary 30.3. Assuming the conditions of Theorem 30.2, and further assuming that
L V (A(S)) = 0, then, with probability of at least 1 over the choice of S we have
L D (A(S))
8k log (m/)
.
m
30.2 Examples
30.2 EXAMPLES
In the examples that follows, we present compression schemes for several hypothesis classes for binary classification. In light of Lemma 30.6 we focus on the realizable
case. Therefore, to show that a certain hypothesis class has a compression scheme,
it is necessary to show that there exist A, B, and k for which L S (h ) = 0.
30.2.2 Halfspaces
Let X = Rd and consider the class of homogenous halfspaces, {x sign(w, x) :
w Rd }.
A Compression Scheme:
W.l.o.g. assume all labels are positive (otherwise, replace xi by yi xi ). The compression scheme we propose is as follows. First, A finds the vector w which is in the
361
362
Compression Bounds
convex hull of {x1 , . . . , xm } and has minimal norm. Then, it represents it as a convex
combination of d points in the sample (it will be shown later that this is always possible). The output of A are these d points. The algorithm B receives these d points
and set w to be the point in their convex hull of minimal norm.
Next we prove that this indeed is a compression sceme. Since the data is linearly
separable, the convex hull of {x1 , . . . , xm } does not contain the origin. Consider the
point w in this convex hull closest to the origin. (This is a unique point which is the
Euclidean projection of the origin onto this convex hull.) We claim that w separates
the data.1 To see this, assume by contradiction that w, xi 0 for some i . Take
2
w = (1 )w + xi for = x w
2 +w2 (0, 1). Then w is also in the convex hull and
i
xi 2 w2
w2 + xi 2
= w2
1
w2 /xi 2 + 1
< w2 ,
which leads to a contradiction.
We have thus shown that w is also an ERM. Finally, since w is in the convex hull
of the examples, we can apply Caratheodorys theorem to obtain that w is also in the
convex hull of a subset of d + 1 points of the polygon. Furthermore, the minimality
of w implies that w must be on a face of the polygon and this implies it can be
represented as a convex combination of d points.
It remains to show that w is also the projection onto the polygon defined by the
d points. But this must be true: On one hand, the smaller polygon is a subset of the
larger one; hence the projection onto the smaller cannot be smaller in norm. On the
other hand, w itself is a valid solution. The uniqueness of projection concludes our
proof.
363
31
PAC-Bayes
The Minimum Description Length (MDL) and Occams razor principles allow a
potentially very large hypothesis class but define a hierarchy over hypotheses and
prefer to choose hypotheses that appear higher in the hierarchy. In this chapter we
describe the PAC-Bayesian approach that further generalizes this idea. In the PACBayesian approach, one expresses the prior knowledge by defining prior distribution
over the hypothesis class.
By the linearity of expectation, the generalization loss and training loss of Q can be
written as
def
def
L D (Q) = E [L D (h)] and L S (Q) = E [L S (h)].
hQ
hQ
The following theorem tells us that the difference between the generalization
loss and the empirical loss of a posterior Q is bounded by an expression that depends
on the Kullback-Leibler divergence between Q and the prior distribution P. The
Kullback-Leibler is a natural measure of the distance between two distributions.
The theorem suggests that if we would like to minimize the generalization loss of Q,
364
we should jointly minimize both the empirical loss of Q and the Kullback-Leibler
distance between Q and the prior distribution. We will later show how in some cases
this idea leads to the regularized risk minimization principle.
Theorem 31.1. Let D be an arbitrary distribution over an example domain Z . Let H
be a hypothesis class and let : H Z [0, 1] be a loss function. Let P be a prior
distribution over H and let (0, 1). Then, with probability of at least 1 over
the choice of an i.i.d. training set S = {z 1 , . . . , z m } sampled according to D, for all
distributions Q over H (even such that depend on S), we have
*
D(Q||P) + ln m/
,
2(m 1)
L D (Q) L S (Q) +
where
def
D(Q||P) = E [ ln (Q(h)/P(h))]
hQ
E S [e f (S) ]
.
e
(31.1)
Let (h) = L D (h) L S (h). We will apply Equation (31.1) with the function
2
f (S) = sup 2(m 1) E ((h)) D(Q||P) .
Q
hQ
We now turn to bound E S [e f (S)]. The main trick is to upper bound f (S) by using an
expression that does not depend on Q but rather depends on the prior probability
P. To do so, fix some S and note that from the definition of D(Q||P) we get that for
all Q,
2
hQ
ln E [e2(m1)(h) P(h)/Q(h)]
hQ
= ln E [e2(m1)(h) ],
hP
(31.2)
where the inequality follows from Jensens inequality and the concavity of the log
function. Therefore,
2
E [e f (S) ] E E [e2(m1)(h) ].
S
S hP
(31.3)
The advantage of the expression on the right-hand side stems from the fact that
we can switch the order of expectations (because P is a prior that does not depend
365
366
PAC-Bayes
E [e f ( S)] E E [e2(m1)(h) ].
S
(31.4)
h P S
Next, we claim that for all h we have E S [e2(m1)(h) ] m. To do so, recall that
Hoeffdings inequality tells us that
P [(h)
] e2m
.
2
This implies that E S [e2(m1)(h) ] m (see Exercise 31.1). Combining this with
Equation (31.4) and plugging into Equation (31.1) we get
P [ f (S)
]
S
m
.
e
(31.5)
Denote the right-hand side of the above , thus
= ln (m/), and we therefore obtain
that with probability of at least 1 we have that for all Q
2(m 1) E ((h))2 D(Q||P)
= ln (m/).
hQ
Rearranging the inequality and using Jensens inequality again (the function x 2 is
convex) we conclude that
2
ln (m/) + D(Q||P)
.
(31.6)
E (h) E ((h))2
hQ
hQ
2(m 1)
(31.7)
This rule is similar to the regularized risk minimization principle. That is, we jointly
minimize the empirical loss of Q on the sample and the Kullback-Leibler distance
between Q and P.
31.3 EXERCISES
31.1 Let X be a random variable that satisfies P [X
] e2m
. Prove that
2
E [e2(m1)X ] m.
2
31.3 Exercises
31.2 Suppose that H is a finite hypothesis class, set the prior to be uniform over H,
and set the posterior to be Q(h S ) = 1 for some h S and Q(h) = 0 for all other
h H. Show that
*
ln (|H|) + ln(m/)
L D (h S ) L S (h) +
.
2(m 1)
Compare to the bounds we derived using uniform convergence.
Derive a bound similar to the Occam bound given in Chapter 7 using the PACBayes bound
367
Appendix A
Technical Lemmas
Lemma A.1. Let a > 0. Then: x 2a log (a) x a log (x). It follows that a
necessary condition for the inequality x < a log (x) to hold is that x < 2a log (a).
Proof. First note that for a (0, e ] the inequality x a log (x) holds uncondition
ally and therefore the claim is trivial. From now on, assume that a > e. Consider
the function f (x) = x a log (x). The derivative is f (x) = 1 a/x. Thus, for x > a
the derivative is positive and the function increases. In addition,
f (2a log (a)) = 2a log (a) a log (2a log (a))
= 2a log (a) a log (a) a log (2 log (a))
= a log (a) a log (2 log (a)).
Since a 2 log (a) > 0 for all a > 0, the proof follows.
Lemma A.2.
Proof. It suffices to prove that x 4a log (2a) + 2b implies that both x 2a log (x)
and x 2b. Since we assume a 1 we clearly have that x 2b. In addition, since
b > 0 we have that x 4a log (2a) which using Lemma A.1 implies that x 2a log (x).
This concludes our proof.
Lemma A.3. Let X be a random variable and x R be a scalar and assume that
2 2
there exists a > 0 such that for all t 0 we have P [|X x | > t] 2et /a . Then,
E [|X x |] 4 a.
Proof. For all i = 0, 1, 2, . . . denote ti = a i . Since ti is monotonically increasing we
have that E [|X x |] is at most
i=1 ti P [|X x | > ti1 ]. Combining this with the
(i1)2 . The proof
assumption in the lemma we get that E [|X x |] 2 a
i=1 i e
now follows from the inequalities
i=1
i e(i1)
2
5
i=1
i e(i1) +
2
369
370
Technical Lemmas
Lemma A.4. Let X be a random variable and x R be a scalar and assume that
2 2
there exists a > 0 and b e such
that for all t 0 we have P [|X x | > t] 2b et /a .
Then, E [|X x |] a(2 + log (b)).
Proof. For all i = 0, 1, 2, . . . denote ti = a (i +
increasing we have that
log (b)). Since ti is monotonically
ti P [|X x | > ti1 ].
E [|X x |] a log (b) +
i=1
i=1
(i +
2
log (b))e(i1+ log (b))
i=1
2a b
1+
F
= 2a b
F
xe(x1) d x
2
log (b)
(y + 1)ey d y
2
log (b)
yey d y
2
4a b
log (b)
2
= 2 a b ey
log (b)
= 2 a b/b = 2 a.
Combining the preceding inequalities we conclude our proof.
Lemma A.5. Let m, d be two positive integers such that d m 2. Then,
d
m
k=0
e m d
d
Proof. We prove the claim by induction. For d = 1 the left-hand side equals 1 + m
while the right-hand side equals em; hence the claim is true. Assume that the claim
holds for d and let us prove it for d + 1. By the induction assumption we have
d+1
m
k=0
e m d
d
+
m
d +1
d d m(m 1)(m 2) (m d)
1+
=
d
em
(d + 1)d!
d
em d
d
(m d)
1+
.
d
e
(d + 1)d!
e m d
Technical Lemmas
1+
d
e
(d + 1) 2d(d/e)d
e m d
m d
=
1+
d
2d(d + 1)
e m d d + 1 + (m d)/2d
=
d
d +1
e m d d + 1 + (m d)/2
d
d +1
e m d d/2 + 1 + m/2
=
d
d +1
e m d m
d
d +1
where in the last inequality we used the assumption that d m 2. On the other
hand,
d
d
e m d+1 e m d em
d +1
d
d +1
d +1
e m d em
1
d
d + 1 (1 + 1/d)d
e m d em 1
d
d +1 e
e m d
m
=
,
d
d +1
which proves our inductive argument.
Lemma A.6.
an
n=0
n!
Therefore,
ea + ea a 2n
=
,
2
(2n)!
n=0
and
ea
2 /2
a 2n
.
2n n!
n=0
371
Appendix B
Measure Concentration
(B.2)
x=0
E [Z ]
.
a
(B.3)
For random variables that take value in [0, 1], we can derive from Markovs
inequality the following.
Lemma B.1. Let Z be a random variable that takes values in [0, 1]. Assume that
E [Z ] = . Then, for any a (0, 1),
P [Z > 1 a]
(1 a)
.
a
a
a.
1a
E [Y ] 1
=
.
a
a
Therefore,
P [Z > 1 a] 1
1 a+1
=
.
a
a
Var[Z ]
,
a2
(B.4)
.
2
m
ma
m a2
i=1
The proof follows by denoting the right-hand side and solving for a.
The deviation between the empirical average and the mean given previously
decreases polynomially with m. It is possible to obtain a significantly faster decrease.
In the sections that follow we derive bounds that decrease exponentially fast.
373
374
Measure Concentration
of the exponent function and Markovs inequality, we have that for every t > 0
P [Z > (1 + ) p] = P [et Z > et(1+) p ]
E [et Z ]
.
e(1+)t p
(B.5)
Next,
E [e ] = E [e
tZ
i Zi ] = E
e
t Zi
E [et Z i ]
pi et + (1 pi )e0
by independence
=
1 + pi (et 1)
i
=e
e pi (e 1)
t
i
i
using 1 + x e x
pi (et 1)
= e(e 1) p .
t
Combining the equation with Equation (B.5) and choosing t = log (1 + ) we obtain
Lemma B.3. Let Z 1 , . . . , Z m be independent Bernoulli variables where for every i ,
m
P [Z i = 1] = pi and P [Z i = 0] = 1 pi . Let p = m
i=1 pi and let Z =
i=1 Z i . Then,
for any > 0,
P[Z > (1 + ) p] eh() p ,
where
h() = (1 + ) log (1 + ) .
Using the inequality h(a) a 2 /(2 + 2a/3) we obtain
Lemma B.4. Using the notation of Lemma B.3 we also have
P [Z > (1 + ) p] e
p 2+2/3
E [et Z ]
,
e(1)t p
(B.6)
and
E [et Z ] = E [et
=
Zi
]=E
et Z i
E [e
t Z i
by independence
1 + pi (et 1)
i
e pi (e
t 1)
using 1 + x e x
= e(e
t 1) p
P [Z < (1 ) p]
= e ph() .
p 2+2/3
Proof. Denote X i = Z i E [Z i ] and X = m1 i X i . Using the monotonicity of the
exponent function and Markovs inequality, we have that for every > 0 and
> 0,
P [ X
] = P [e X e
] e
E [e X ].
Using the independence assumption we also have
X
X i /m
E [e ] = E
e
E [eX i /m ].
=
i
i
2 (ba)2
8m 2
2 (ba)2
8m 2
= e +
2 (ba)2
8m
375
376
Measure Concentration
2
2m
2
(ba)
2
2m
2
(ba)
. The theorem follows by applying the union bound on the two cases.
Lemma B.7 (Hoeffdings Lemma). Let X be a random variable that takes values in
the interval [a, b] and such that E [X] = 0. Then, for every > 0,
E [eX ] e
2 (ba)2
8
Proof. Since f (x) = ex is a convex function, we have that for every (0, 1), and
x [a, b],
f (x) f (a) + (1 ) f (b).
Setting =
bx
ba
[0, 1] yields
ex
b x a x a b
e +
e .
ba
ba
b E [X] a E [x] a b
b a
a b
e +
e =
e
e ,
ba
ba
ba
ba
a
, and
where we used the fact that E [X] = 0. Denote h = (b a), p = ba
h
L(h) = hp + log (1 p + pe ). Then, the expression on the right-hand side of
the equation can be rewritten as e L(h) . Therefore, to conclude our proof it suf2
fices to show that L(h) h8 . This follows from Taylors theorem using the facts
L(0) = L (0) = 0 and L (h) 1/4 for all h.
i=1
m
Zi >
e
m 2 h
m 2
i=1
where
h(a) = (1 + a) log (1 + a) a.
By using the inequality h(a) a 2 /(2 + 2a/3) it is possible to derive the following:
Lemma B.9 (Bernsteins Inequality). Let Z 1 , . . . , Z m be i.i.d. random variables with
a zero mean. If for all i, P (|Z i | < M) = 1, then for all t > 0:
m
t 2 /2
Z i > t exp
P
E Z 2j + Mt/3
i=1
.
B.5.1 Application
Bernsteins inequality can be used to interpolate between the rate 1/
we derived
for PAC learning in the realizable case (in Chapter 2) and the rate 1/
2 we derived
for the unrealizable case (in Chapter 4).
Lemma B.10. Let : H Z [0, 1] be a loss function. Let D be an arbitrary
distribution over Z . Fix some h. Then, for any (0, 1) we have
2L D (h) log (1/) 2 log (1/)
P
+
L S (h) L D (h) +
SD m
3m
m
2L S (h) log (1/) 4 log (1/)
+
P
L D (h) L S (h) +
SD m
m
m
1.
2.
t 2 /2
i > t exp 2
P
E j + t/3
i=1
t 2 /2
def
exp
= .
m L D (h) + t/3
377
378
Measure Concentration
t 2 /2
Since
1
m
i i
Let X be a (m, p) binomial variable. That is, X = m
i=1 Z i , where each Z i is 1 with
probability p and 0 with probability 1 p. Assume that p = (1
)/2. Sluds inequality (Slud 1977) tells us that P [X m/2] is lower bounded
by the probability that
a normal variable will be greater than or equal to m
2 /(1
2 ). The following
lemma follows by standard tail bounds for the normal distribution.
Lemma B.11. Let X be a (m, p) binomial variable and assume that p = (1
)/2.
Then,
1
2
2
P [X m/2]
1 1 exp( m
/(1
)) .
2
2 k/6
2 k/6
2 k/6
k
2
i=1 X i
use Chernoffs bounding method. For the first inequality, we first bound E [eX 1 ],
2
where > 0 will be specified later. Since ea 1 a + a2 for all a 0 we have that
E [eX 1 ] 1 E [X 12 ] +
2
2
E [X 14 ].
2
Using the well known equalities, E [X 12 ] = 1 and E [X 14 ] = 3, and the fact that 1 a
ea we obtain that
3 2
2
E [eX 1 ] 1 + 32 2 e+ 2 .
Now, applying Chernoffs bounding method we get that
P [ Z (1
)k] = P eZ e(1
)k
e(1
)k E eZ
k
2
= e(1
)k E eX 1
3 2
k
e(1
)k ek+ 2
3
= e
k+ 2 k .
Choose =
/3 we obtain the first inequality stated in the lemma.
For the second inequality, we use a known closed form expression for the
moment generating function of a k2 distributed random variable:
2
= (1 2)k/2 .
< 12 , E eZ
(B.7)
On the basis of the equation and using Chernoffs bounding method we have
P [Z (1 +
)k)] = P eZ e(1+
)k
e(1+
)k E eZ
= e(1+
)k (1 2)k/2
e(1+
)k ek = e
k ,
where the last inequality occurs because (1 a) ea . Setting =
/6 (which is in
(0, 1/2) by our assumption) we obtain the second inequality stated in the lemma.
Finally, the last inequality follows from the first two inequalities and the union
bound.
379
Appendix C
Linear Algebra
d
u i vi .
i=1
The Euclidean norm (a.k.a. the 2 norm) is u = u, u. We also use the 1 norm,
d
u1 = i=1 |u i | and the norm u = maxi |u i |.
A subspace of Rd is a subset of Rd which is closed under addition and scalar
multiplication. The span of a set of vectors u1 , . . . , uk is the subspace containing all
vectors of the form
k
i ui
i=1
380
i , where each
i is the eigenvalue corresponding to the eigenvector ui . This can be written equivalently as A = U DU
, where the columns of U are the vectors u1 , . . . , ud , and D is
a diagonal matrix with Di,i = i and for i = j , Di, j = 0. Finally, the number of i
which are nonzero is the rank of the matrix, the eigenvectors which correspond to the
nonzero eigenvalues span the range of A, and the eigenvectors which correspond to
zero eigenvalues span the null space of A.
381
382
Linear Algebra
values. Then,
A=
r
i ui v
i .
i=1
It follows that if U is a matrix whose columns are the ui s, V is a matrix whose columns
are the vi s, and D is a diagonal matrix with Di,i = i , then
A = U DV
.
Proof. Any right singular vector of A must be in the range of A
(otherwise, the
singular value will have to be zero). Therefore, v1 , . . . , vr is an orthonormal basis
of the range of A. Let us complete it to an orthonormal basis of Rn by adding
the vectors vr+1 , . . . , vn . Define B = ri=1 i ui v
= , u = 1 Av. Then,
Av
u = = Av,
and
A
u =
A Av = v = v.
Finally, we show that if A has rank r then it has r orthonormal singular vectors.
Lemma C.5. Let A Rm,n with rank r . Define the following vectors:
v1 = argmax Av
vRn :v=1
v2 = argmax Av
vRn :v=1
v,v1 =0
..
.
vr =
argmax
vRn :v=1
i<r, v,vi =0
Av
n
2
Di,i
xi 2 .
i=1
Therefore,
max Av2 = max
v:v=1
x:x=1
n
2
Di,i
xi 2 .
i=1
The solution of the right-hand side is to set x = (1, 0, . . . , 0), which implies that v1 is
the first eigenvector of A
A. Since Av1 > 0 it follows that D1,1 > 0 as required.
For the induction step, assume that the claim holds for some 1 t r 1. Then,
any v which is orthogonal to v1 , . . . , vt can be written as v = W x with all the first t
elements of x being zero. It follows that
max
v:v=1,it,v
vi =0
Av2 = max
x:x=1
n
2
Di,i
xi 2 .
i=t+1
The solution of the right-hand side is the all zeros vector except x t+1 = 1. This
implies that vt+1 is the (t + 1)th column of W . Finally, since Avt+1 > 0 it follows
that Dt+1,t+1 > 0 as required. This concludes our proof.
Corollary C.6 (The SVD Theorem). Let A Rm,n with rank r . Then A = U DV
383
References
Abernethy, J., Bartlett, P. L., Rakhlin, A. & Tewari, A. (2008), Optimal strategies
and minimax lower bounds for online convex games, in Proceedings of the nineteenth
annual conference on computational learning theory.
Ackerman, M. & Ben-David, S. (2008), Measures of clustering quality: A working set
of axioms for clustering, in Proceedings of Neural Information Processing Systems
(NIPS), pp. 121128.
Agarwal, S. & Roth, D. (2005), Learnability of bipartite ranking functions, in
Proceedings of the 18th annual conference on learning theory, pp. 1631.
Agmon, S. (1954), The relaxation method for linear inequalities, Canadian Journal of
Mathematics 6(3), 382392.
Aizerman, M. A., Braverman, E. M. & Rozonoer, L. I. (1964), Theoretical foundations
of the potential function method in pattern recognition learning, Automation and
Remote Control 25, 821837.
Allwein, E. L., Schapire, R. & Singer, Y. (2000), Reducing multiclass to binary: A
unifying approach for margin classifiers, Journal of Machine Learning Research
1, 113141.
Alon, N., Ben-David, S., Cesa-Bianchi, N. & Haussler, D. (1997), Scale-sensitive
dimensions, uniform convergence, and learnability, Journal of the ACM 44(4),
615631.
Anthony, M. & Bartlet, P. (1999), Neural Network Learning: Theoretical Foundations,
Cambridge University Press.
Baraniuk, R., Davenport, M., DeVore, R. & Wakin, M. (2008), A simple proof of
the restricted isometry property for random matrices, Constructive Approximation
28(3), 253263.
Barber, D. (2012), Bayesian reasoning and machine learning, Cambridge University
Press.
Bartlett, P., Bousquet, O. & Mendelson, S. (2005), Local rademacher complexities,
Annals of Statistics 33(4), 14971537.
Bartlett, P. L. & Ben-David, S. (2002), Hardness results for neural network approximation problems, Theor. Comput. Sci. 284(1), 5366.
Bartlett, P. L., Long, P. M. & Williamson, R. C. (1994), Fat-shattering and the learnability of real-valued functions, in Proceedings of the seventh annual conference on
computational learning theory, (ACM), pp. 299310.
385
386
References
References
Cands, E. (2008), The restricted isometry property and its implications for compressed
sensing, Comptes Rendus Mathematique 346(9), 589592.
Candes, E. J. (2006), Compressive sampling, in Proc. of the int. congress of math.,
Madrid, Spain.
Candes, E. & Tao, T. (2005), Decoding by linear programming, IEEE Trans. on
Information Theory 51, 42034215.
Cesa-Bianchi, N. & Lugosi, G. (2006), Prediction, learning, and games, Cambridge
University Press.
Chang, H. S., Weiss, Y. & Freeman, W. T. (2009), Informative sensing, arXiv preprint
arXiv:0901.4275.
Chapelle, O., Le, Q. & Smola, A. (2007), Large margin optimization of ranking
measures, in NIPS workshop: Machine learning for Web search (Machine Learning).
Collins, M. (2000), Discriminative reranking for natural language parsing, in Machine
Learning.
Collins, M. (2002), Discriminative training methods for hidden Markov models: Theory
and experiments with perceptron algorithms, in Conference on Empirical Methods in
Natural Language Processing.
Collobert, R. & Weston, J. (2008), A unified architecture for natural language processing: deep neural networks with multitask learning, in International Conference on
Machine Learning (ICML).
Cortes, C. & Vapnik, V. (1995), Support-vector networks, Machine Learning
20(3), 273297.
Cover, T. (1965), Behavior of sequential predictors of binary sequences, Trans.
4th Prague conf. information theory statistical decision functions, random processes,
pp. 263272.
Cover, T. & Hart, P. (1967), Nearest neighbor pattern classification, Information
Theory, IEEE Transactions on 13(1), 2127.
Crammer, K. & Singer, Y. (2001), On the algorithmic implementation of multiclass kernel-based vector machines, Journal of Machine Learning Research 2,
265292.
Cristianini, N. & Shawe-Taylor, J. (2000), An introduction to support vector machines,
Cambridge University Press.
Daniely, A., Sabato, S., Ben-David, S. & Shalev-Shwartz, S. (2011), Multiclass
learnability and the erm principle, in COLT.
Daniely, A., Sabato, S. & Shwartz, S. S. (2012), Multiclass learning approaches: A
theoretical comparison with implications, in NIPS.
Davis, G., Mallat, S. & Avellaneda, M. (1997), Greedy adaptive approximation,
Journal of Constructive Approximation 13, 5798.
Devroye, L. & Gyrfi, L. (1985), Nonparametric density estimation: The L B1 S view,
Wiley.
Devroye, L., Gyrfi, L. & Lugosi, G. (1996), A probabilistic theory of pattern recognition,
Springer.
Dietterich, T. G. & Bakiri, G. (1995), Solving multiclass learning problems via errorcorrecting output codes, Journal of Artificial Intelligence Research 2, 263286.
Donoho, D. L. (2006), Compressed sensing, Information Theory, IEEE Transactions
52(4), 12891306.
Dudley, R., Gine, E. & Zinn, J. (1991), Uniform and universal glivenko-cantelli
classes, Journal of Theoretical Probability 4(3), 485510.
Dudley, R. M. (1987), Universal Donsker classes and metric entropy, Annals of
Probability 15(4), 13061326.
Fisher, R. A. (1922), On the mathematical foundations of theoretical statistics, Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of
a Mathematical or Physical Character 222, 309368.
387
388
References
References
389
390
References
References
391
392
References
References
Zhang, T. (2004), Solving large scale linear prediction problems using stochastic gradient descent algorithms, in Proceedings of the twenty-first international conference on
machine learning.
Zhao, P. & Yu, B. (2006), On model selection consistency of Lasso, Journal of
Machine Learning Research 7, 25412567.
Zinkevich, M. (2003), Online convex programming and generalized infinitesimal
gradient ascent, in International conference on machine learning.
393
Index
3-term DNF, 79
F1 -score, 207
1 norm, 149, 286, 315, 335
accuracy, 18, 22
activation function, 229
AdaBoost, 101, 105, 314
all-pairs, 191, 353
approximation error, 37, 40
auto-encoders, 319
backpropagation, 237
backward elimination, 314
bag-of-words, 173
base hypothesis, 108
Bayes optimal, 24, 30, 221
Bayes rule, 306
Bayesian reasoning, 305
Bennets inequality, 376
Bernsteins inequality, 376
bias, 16, 37, 40
bias-complexity tradeoff, 41
Boolean conjunctions, 29, 54, 78
boosting, 101
boosting the confidence, 112
boundedness, 133
C4.5, 215
CART, 216
chaining, 338
Chebyshevs inequality, 373
Chernoff bounds, 373
class-sensitive feature mapping, 193
classifier, 14
clustering, 264
spectral, 271
compressed sensing, 285
compression bounds, 359
compression scheme, 360
computational complexity, 73
confidence, 18, 22
consistency, 66
Consistent, 247
contraction lemma, 331
convex, 124
function, 125
set, 124
strongly convex, 140, 160
convex-Lipschitz-bounded learning, 133
convex-smooth-bounded learning, 133
covering numbers, 337
curse of dimensionality, 224
decision stumps, 103, 104
decision trees, 212
dendrogram, 266, 267
dictionary learning, 319
differential set, 154
dimensionality reduction, 278
discretization trick, 34
discriminative, 295
distribution free, 295
domain, 13
domain of examples, 26
doubly stochastic matrix, 205
duality, 176
strong duality, 176
weak duality, 176
Dudley classes, 56
efficient computable, 73
EM, 301
Empirical Risk Minimization, see ERM
empirical error, 15
empirical risk, 15, 27
entropy, 298
relative entropy, 298
epigraph, 125
ERM, 15
error decomposition, 40, 135
395
396
Index
estimation error, 37, 40
Expectation-Maximization, see EM
face recognition, see Viola-Jones
feasible, 73
feature, 13
feature learning, 319
feature normalization, 316
feature selection, 309, 310
feature space, 179
feature transformations, 318
filters, 310
forward greedy selection, 312
frequentist, 305
gain, 215
GD, see gradient descent
generalization error, 14
generative models, 295
Gini index, 215
Glivenko-Cantelli, 35
gradient, 126
gradient descent, 151
Gram matrix, 183
growth function, 49
halfspace, 90
homogenous, 90, 170
nonseparable, 90
separable, 90
Halving, 247
hidden layers, 230
Hilbert space, 181
Hoeffdings inequality, 33, 375
holdout, 116
hypothesis, 14
hypothesis class, 16
i.i.d., 18
ID3, 214
improper, see representation independent
inductive bias, see bias
information bottleneck, 273
information gain, 215
instance, 13
instance space, 13
integral image, 113
Johnson-Lindenstrauss lemma, 284
k-means, 268, 270
soft k-means, 304
k-median, 269
k-medoids, 269
Kendall tau, 201
kernel PCA, 281
kernels, 179
Gaussian kernel, 184
kernel trick, 181
polynomial kernel, 183
RBF kernel, 184
label, 13
Lasso, 316, 335
generalization bounds, 335
latent variables, 301
LDA, 300
Ldim, 248, 249
learning curves, 122
least squares, 95
likelihood ratio, 301
linear discriminant analysis, see LDA
linear predictor, 89
homogenous, 90
linear programming, 91
linear regression, 94
linkage, 266
Lipschitzness, 128, 142, 157
subgradient, 155
Littlestone dimension, see Ldim
local minimum, 126
logistic regression, 97
loss, 15
loss function, 26
0-1 loss, 27, 134
absolute value loss, 95, 99, 133
convex loss, 131
generalized hinge loss, 195
hinge loss, 134
Lipschitz loss, 133
log-loss, 298
logistic loss, 98
ramp loss, 174
smooth loss, 133
square loss, 27
surrogate loss, 134, 259
margin, 168
Markovs inequality, 372
Massart lemma, 330
max linkage, 267
maximum a posteriori, 307
maximum likelihood, 295
McDiarmids inequality, 328
MDL, 63, 65, 213
measure concentration, 32, 372
Minimum Description Length, see MDL
mistake bound, 246
mixture of Gaussians, 301
model selection, 114, 117
multiclass, 25, 190, 351
cost-sensitive, 194
linear predictors, 193, 354
multivector, 193, 355
Perceptron, 211
reductions, 190, 354
SGD, 198
SVM, 197
multivariate performance measures, 206
Naive Bayes, 299
Natarajan dimension, 351
NDCG, 202
Index
Nearest Neighbor, 219
k-NN, 220
neural networks, 228
feedforward networks, 229
layered networks, 229
SGD, 236
No-Free-Lunch, 37
nonuniform learning, 59
Normalized Discounted Cumulative Gain, see
NDCG
Occams razor, 65
OMP, 312
one-vs.-all, 191, 353
one-vs.-rest, see one-vs.-all
online convex optimization, 257
online gradient descent, 257
online learning, 245
optimization error, 135
oracle inequality, 145
orthogonal matching pursuit, see OMP
overfitting, 15, 41, 121
PAC, 22
agnostic PAC, 23, 25
agnostic PAC for general loss, 27
PAC-Bayes, 364
parametric density estimation, 295
PCA, 279
Pearsons correlation coefficient, 311
Perceptron, 92
kernelized Perceptron, 188
multiclass, 211
online, 258
permutation matrix, 205
polynomial regression, 96
precision, 206
predictor, 14
prefix free language, 64
Principal Component Analysis, see PCA
prior knowledge, 39
Probably Approximately Correct, see PAC
projection, 159
projection lemma, 159
proper, 28
pruning, 216
Rademacher complexity, 325
random forests, 217
random projections, 283
ranking, 201
bipartite, 206
realizability, 17
recall, 206
regression, 26, 94, 138
regularization, 137
Tikhonov, 138, 140
regularized loss minimization, see RLM
representation independent, 28, 80
representative sample, 31, 325
representer theorem, 182
397