CS229 Machine Learning
Class Notes
Stanford University
Supervised learning
Let’s start by talking about a few examples of supervised learning problems.
Suppose we have a dataset giving the living areas and prices of 47 houses
from Portland, Oregon:
Living area (feet²)    Price (1000$s)
2104                   400
1600                   330
2400                   369
1416                   232
3000                   540
...                    ...
We can plot this data:
[Figure: scatter plot of the housing data, with price (in $1000s) on the vertical axis and living area (square feet) on the horizontal axis.]
Given data like this, how can we learn to predict the prices of other houses
in Portland, as a function of the size of their living areas?
To establish notation for future use, we’ll use x(i) to denote the “input”
variables (living area in this example), also called input features, and y (i)
to denote the “output” or target variable that we are trying to predict
(price). A pair (x(i) , y (i) ) is called a training example, and the dataset
that we’ll be using to learn—a list of n training examples {(x(i) , y (i) ); i =
1, . . . , n}—is called a training set. Note that the superscript “(i)” in the
notation is simply an index into the training set, and has nothing to do with
exponentiation. We will also use X to denote the space of input values, and Y
the space of output values. In this example, X = Y = R.
To describe the supervised learning problem slightly more formally, our
goal is, given a training set, to learn a function h : X → Y so that h(x) is a
“good” predictor for the corresponding value of y. For historical reasons, this
function h is called a hypothesis. Seen pictorially, the process is therefore
like this:
[Diagram: the training set is fed into a learning algorithm, which outputs a hypothesis h; a new x (living area of a house) is fed into h, which outputs the predicted y (predicted price of the house).]
When the target variable that we’re trying to predict is continuous, such
as in our housing example, we call the learning problem a regression prob-
lem. When y can take on only a small number of discrete values (such as
if, given the living area, we wanted to predict if a dwelling is a house or an
apartment, say), we call it a classification problem.
Part I
Linear Regression
To make our housing example more interesting, let’s consider a slightly richer
dataset in which we also know the number of bedrooms in each house:
Living area (feet²)    #bedrooms    Price (1000$s)
2104                   3            400
1600                   3            330
2400                   3            369
1416                   2            232
3000                   4            540
...                    ...          ...
Here, the x's are two-dimensional vectors in R². For instance, x_1^{(i)} is the living area of the i-th house in the training set, and x_2^{(i)} is its number of
bedrooms. (In general, when designing a learning problem, it will be up to
you to decide what features to choose, so if you are out in Portland gathering
housing data, you might also decide to include other features such as whether
each house has a fireplace, the number of bathrooms, and so on. We’ll say
more about feature selection later, but for now let’s take the features as
given.)
To perform supervised learning, we must decide how we’re going to rep-
resent functions/hypotheses h in a computer. As an initial choice, let’s say
we decide to approximate y as a linear function of x:
hθ (x) = θ0 + θ1 x1 + θ2 x2
Here, the θi ’s are the parameters (also called weights) parameterizing the
space of linear functions mapping from X to Y. When there is no risk of
confusion, we will drop the θ subscript in hθ (x), and write it more simply as
h(x). To simplify our notation, we also introduce the convention of letting
x0 = 1 (this is the intercept term), so that
h(x) = \sum_{i=0}^{d} θ_i x_i = θ^T x,
where on the right-hand side above we are viewing θ and x both as vectors,
and here d is the number of input variables (not counting x0 ).
1 LMS algorithm
We want to choose θ so as to minimize the least-squares cost function J(θ) = \frac{1}{2} \sum_{i=1}^{n} (h_θ(x^{(i)}) − y^{(i)})^2. To do so, let's use a search
algorithm that starts with some “initial guess” for θ, and that repeatedly
changes θ to make J(θ) smaller, until hopefully we converge to a value of
θ that minimizes J(θ). Specifically, let’s consider the gradient descent
algorithm, which starts with some initial θ, and repeatedly performs the
update:
θ_j := θ_j − α \frac{∂}{∂θ_j} J(θ).
(This update is simultaneously performed for all values of j = 0, . . . , d.)
Here, α is called the learning rate. This is a very natural algorithm that
repeatedly takes a step in the direction of steepest decrease of J.
In order to implement this algorithm, we have to work out what the partial derivative term on the right hand side is. Let's first work it out for the case where we have only one training example (x, y), so that we can neglect
the sum in the definition of J. We have:
\frac{∂}{∂θ_j} J(θ) = \frac{∂}{∂θ_j} \frac{1}{2} (h_θ(x) − y)^2
= 2 · \frac{1}{2} (h_θ(x) − y) · \frac{∂}{∂θ_j} (h_θ(x) − y)
= (h_θ(x) − y) · \frac{∂}{∂θ_j} \left( \sum_{i=0}^{d} θ_i x_i − y \right)
= (h_θ(x) − y)\, x_j

For a single training example, this gives the update rule:

θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)}))\, x_j^{(i)}.
The rule is called the LMS update rule (LMS stands for “least mean squares”),
and is also known as the Widrow-Hoff learning rule. This rule has several
properties that seem natural and intuitive. For instance, the magnitude of
the update is proportional to the error term (y (i) − hθ (x(i) )); thus, for in-
stance, if we are encountering a training example on which our prediction
nearly matches the actual value of y (i) , then we find that there is little need
to change the parameters; in contrast, a larger change to the parameters will
be made if our prediction hθ (x(i) ) has a large error (i.e., if it is very far from
y (i) ).
We derived the LMS rule for the case where there was only a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {
    θ_j := θ_j + α \sum_{i=1}^{n} (y^{(i)} − h_θ(x^{(i)}))\, x_j^{(i)}    (for every j)
}
The reader can easily verify that the quantity in the summation in the update
rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this is
simply gradient descent on the original cost function J. This method looks
at every example in the entire training set on every step, and is called batch
gradient descent. Note that, while gradient descent can be susceptible
to local minima in general, the optimization problem we have posed here
for linear regression has only one global, and no other local, optima; thus
gradient descent always converges (assuming the learning rate α is not too
large) to the global minimum. Indeed, J is a convex quadratic function.
Here is an example of gradient descent as it is run to minimize a quadratic
function.
1
We use the notation “a := b” to denote an operation (in a computer program) in
which we set the value of a variable a to be equal to the value of b. In other words, this
operation overwrites a with the value of b. In contrast, we will write “a = b” when we are
asserting a statement of fact, that the value of a is equal to the value of b.
[Figure: contours of a quadratic function, with the trajectory taken by gradient descent overlaid.]
The ellipses shown above are the contours of a quadratic function. Also
shown is the trajectory taken by gradient descent, which was initialized at
(48,30). The x’s in the figure (joined by straight lines) mark the successive
values of θ that gradient descent went through.
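To make this concrete, here is a minimal NumPy sketch of batch gradient descent for linear regression. The function and parameter names (and the default learning rate) are illustrative choices, not part of the notes; in practice one typically rescales the features, averages the gradient over the n examples, or shrinks α so that the updates stay stable.

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Fit linear regression by batch gradient descent.

    X: (n, d+1) design matrix whose first column is all ones (the intercept term x0 = 1).
    y: (n,) vector of targets.
    """
    n, d_plus_1 = X.shape
    theta = np.zeros(d_plus_1)            # initial guess: theta = 0
    for _ in range(num_iters):
        predictions = X @ theta           # h_theta(x^(i)) for every i
        errors = y - predictions          # y^(i) - h_theta(x^(i))
        # Simultaneous update of every theta_j; dividing by n (or rescaling alpha)
        # is a common practical choice to keep the step size small.
        theta += alpha * (X.T @ errors)
    return theta
```

With the intercept convention x_0 = 1, the first column of X is all ones, so theta[0] plays the role of θ_0.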
When we run batch gradient descent to fit θ on our previous dataset,
to learn to predict housing price as a function of living area, we obtain
θ0 = 71.27, θ1 = 0.1345. If we plot hθ (x) as a function of x (area), along
with the training data, we obtain the following figure:
[Figure: the housing data with the fitted line h_θ(x) overlaid; price (in $1000s) versus living area (square feet).]
If the number of bedrooms were included as one of the input features as well,
we get θ0 = 89.60, θ1 = 0.1392, θ2 = −8.738.
The above results were obtained with batch gradient descent. There is
an alternative to batch gradient descent that also works very well. Consider
the following algorithm:
Loop {
    for i = 1 to n, {
        θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)}))\, x_j^{(i)}    (for every j)
    }
}
In this algorithm, we repeatedly run through the training set, and each time
we encounter a training example, we update the parameters according to
the gradient of the error with respect to that single training example only.
This algorithm is called stochastic gradient descent (also incremental
gradient descent). Whereas batch gradient descent has to scan through
the entire training set before taking a single step—a costly operation if n is
large—stochastic gradient descent can start making progress right away, and
continues to make progress with each example it looks at. Often, stochastic
gradient descent gets θ “close” to the minimum much faster than batch gra-
dient descent. (Note however that it may never “converge” to the minimum,
and the parameters θ will keep oscillating around the minimum of J(θ); but
in practice most of the values near the minimum will be reasonably good
approximations to the true minimum.2 ) For these reasons, particularly when
the training set is large, stochastic gradient descent is often preferred over
batch gradient descent.
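A corresponding NumPy sketch of stochastic gradient descent follows; the names are again illustrative, and shuffling the training set on each pass is a common practical choice rather than something the notes require.

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, num_epochs=10, seed=0):
    """Fit linear regression by stochastic gradient descent (one example per update)."""
    rng = np.random.default_rng(seed)
    n, d_plus_1 = X.shape
    theta = np.zeros(d_plus_1)
    for _ in range(num_epochs):
        for i in rng.permutation(n):          # shuffle the training set each pass
            error = y[i] - X[i] @ theta       # y^(i) - h_theta(x^(i))
            theta += alpha * error * X[i]     # update using this single example
    return theta
```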
We now state without proof some facts of matrix derivatives (we won’t
need some of these until later this quarter). Equation (4) applies only to
non-singular square matrices A, where |A| denotes the determinant of A. We
have:
∇_A tr AB = B^T    (1)
∇_{A^T} f(A) = (∇_A f(A))^T    (2)
∇_A tr ABA^T C = CAB + C^T AB^T    (3)
∇_A |A| = |A| (A^{−1})^T.    (4)
To make our matrix notation more concrete, let us now explain in detail
the meaning of the first of these equations. Suppose we have some fixed
matrix B ∈ R^{d×n}. We can then define a function f : R^{n×d} → R according to
f (A) = trAB. Note that this definition makes sense, because if A ∈ Rn×d ,
then AB is a square matrix, and we can apply the trace operator to it; thus,
f does indeed map from Rn×d to R. We can then apply our definition of
matrix derivatives to find ∇_A f(A), which will itself be an n-by-d matrix.
Equation (1) above states that the (i, j) entry of this matrix will be given by
the (i, j)-entry of B T , or equivalently, by Bji .
The proofs of Equations (1-3) are reasonably simple, and are left as an
exercise to the reader. Equation (4) can be derived using the adjoint repre-
sentation of the inverse of a matrix.3
(Footnote 3, continued:) ...this implies that (∂/∂A_{ij})|A| = A'_{ij}. Putting all this together shows the result.
Also, let \vec{y} be the n-dimensional vector containing all the target values from the training set:

\vec{y} = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(n)} \end{bmatrix}.
Now, since hθ (x(i) ) = (x(i) )T θ, we can easily verify that
Xθ − \vec{y} = \begin{bmatrix} (x^{(1)})^T θ \\ \vdots \\ (x^{(n)})^T θ \end{bmatrix} − \begin{bmatrix} y^{(1)} \\ \vdots \\ y^{(n)} \end{bmatrix} = \begin{bmatrix} h_θ(x^{(1)}) − y^{(1)} \\ \vdots \\ h_θ(x^{(n)}) − y^{(n)} \end{bmatrix}.
Thus, using the fact that for a vector z we have z^T z = \sum_i z_i^2:

\frac{1}{2} (Xθ − \vec{y})^T (Xθ − \vec{y}) = \frac{1}{2} \sum_{i=1}^{n} (h_θ(x^{(i)}) − y^{(i)})^2 = J(θ).
Hence,

∇_θ J(θ) = \frac{1}{2} ∇_θ (Xθ − \vec{y})^T (Xθ − \vec{y})
= \frac{1}{2} ∇_θ \left( θ^T X^T Xθ − θ^T X^T \vec{y} − \vec{y}^T Xθ + \vec{y}^T \vec{y} \right)
= \frac{1}{2} ∇_θ tr\left( θ^T X^T Xθ − θ^T X^T \vec{y} − \vec{y}^T Xθ + \vec{y}^T \vec{y} \right)
= \frac{1}{2} ∇_θ \left( tr\, θ^T X^T Xθ − 2\, tr\, \vec{y}^T Xθ \right)
= \frac{1}{2} \left( X^T Xθ + X^T Xθ − 2 X^T \vec{y} \right)
= X^T Xθ − X^T \vec{y}
In the third step, we used the fact that the trace of a real number is just the real number; the fourth step used the fact that tr A = tr A^T; and the fifth step used Equation (1) together with the identity ∇_{A^T} tr ABA^T C = B^T A^T C^T + BA^T C (which follows from Equations (2) and (3)), applied with A^T = θ, B = B^T = X^T X, and C = I. To minimize J, we set its derivatives to zero, and obtain the normal equations:

X^T Xθ = X^T \vec{y}.

Thus, the value of θ that minimizes J(θ) is given in closed form by the equation

θ = (X^T X)^{−1} X^T \vec{y}.
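In code, the closed-form solution is a single linear solve. The NumPy sketch below (illustrative name) solves the normal equations directly rather than forming the explicit inverse, which is the numerically preferable route.

```python
import numpy as np

def normal_equation_fit(X, y):
    """Closed-form least squares: solve X^T X theta = X^T y.

    Solving the linear system is preferred to computing (X^T X)^{-1} explicitly,
    which is slower and less numerically stable.
    """
    return np.linalg.solve(X.T @ X, X.T @ y)
```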
3 Probabilistic interpretation
When faced with a regression problem, why might linear regression, and
specifically why might the least-squares cost function J, be a reasonable
choice? In this section, we will give a set of probabilistic assumptions, under
which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the
equation
y^{(i)} = θ^T x^{(i)} + ε^{(i)},

where ε^{(i)} is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we'd left out of the regression), or random noise. Let us further assume that the ε^{(i)} are distributed IID (independently and identically distributed)
according to a Gaussian distribution (also called a Normal distribution) with
mean zero and some variance σ². We can write this assumption as "ε^{(i)} ∼ N(0, σ²)." I.e., the density of ε^{(i)} is given by

p(ε^{(i)}) = \frac{1}{\sqrt{2π}\,σ} \exp\left( −\frac{(ε^{(i)})^2}{2σ^2} \right).

This implies that

p(y^{(i)} | x^{(i)}; θ) = \frac{1}{\sqrt{2π}\,σ} \exp\left( −\frac{(y^{(i)} − θ^T x^{(i)})^2}{2σ^2} \right).
The notation “p(y (i) |x(i) ; θ)” indicates that this is the distribution of y (i)
given x(i) and parameterized by θ. Note that we should not condition on θ
(“p(y (i) |x(i) , θ)”), since θ is not a random variable. We can also write the
distribution of y (i) as y (i) | x(i) ; θ ∼ N (θT x(i) , σ 2 ).
Given X (the design matrix, which contains all the x(i) ’s) and θ, what
is the distribution of the y (i) ’s? The probability of the data is given by
p(\vec{y} | X; θ). This quantity is typically viewed as a function of \vec{y} (and perhaps X),
for a fixed value of θ. When we wish to explicitly view this as a function of
θ, we will instead call it the likelihood function: L(θ) = L(θ; X, \vec{y}) = p(\vec{y} | X; θ).
Note that by the independence assumption on the ε^{(i)}'s (and hence also the
y (i) ’s given the x(i) ’s), this can also be written
L(θ) = \prod_{i=1}^{n} p(y^{(i)} | x^{(i)}; θ)
= \prod_{i=1}^{n} \frac{1}{\sqrt{2π}\,σ} \exp\left( −\frac{(y^{(i)} − θ^T x^{(i)})^2}{2σ^2} \right).
Now, given this probabilistic model relating the y (i) ’s and the x(i) ’s, what
is a reasonable way of choosing our best guess of the parameters θ? The
principle of maximum likelihood says that we should choose θ so as to
make the data as high probability as possible. I.e., we should choose θ to
maximize L(θ).
Instead of maximizing L(θ), we can also maximize any strictly increasing function of L(θ). In particular, the derivations will be a bit simpler if we instead maximize the log likelihood ℓ(θ) = log L(θ).
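Writing the log likelihood out under the Gaussian noise model above makes the connection to least squares explicit:

ℓ(θ) = \log \prod_{i=1}^{n} \frac{1}{\sqrt{2π}\,σ} \exp\left( −\frac{(y^{(i)} − θ^T x^{(i)})^2}{2σ^2} \right)
= n \log \frac{1}{\sqrt{2π}\,σ} − \frac{1}{σ^2} · \frac{1}{2} \sum_{i=1}^{n} (y^{(i)} − θ^T x^{(i)})^2.

Hence maximizing ℓ(θ) gives the same answer as minimizing \frac{1}{2} \sum_{i=1}^{n} (y^{(i)} − θ^T x^{(i)})^2, which is our least-squares cost function J(θ): under these probabilistic assumptions, least-squares regression corresponds to maximum likelihood estimation of θ.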
[Figure: three panels plotting the same dataset of (x, y) points with fitted curves.]
In the original linear regression algorithm, to make a prediction at a query point x, we would:

1. Fit θ to minimize \sum_i (y^{(i)} − θ^T x^{(i)})^2.

2. Output θ^T x.

In contrast, the locally weighted linear regression algorithm does the following:

1. Fit θ to minimize \sum_i w^{(i)} (y^{(i)} − θ^T x^{(i)})^2.

2. Output θ^T x.
Here, the w(i) ’s are non-negative valued weights. Intuitively, if w(i) is large
for a particular value of i, then in picking θ, we’ll try hard to make (y (i) −
θT x(i) )2 small. If w(i) is small, then the (y (i) − θT x(i) )2 error term will be
pretty much ignored in the fit.
A fairly standard choice for the weights is⁴

w^{(i)} = \exp\left( −\frac{(x^{(i)} − x)^2}{2τ^2} \right).
Note that the weights depend on the particular point x at which we're trying to make a prediction. Moreover, if |x^{(i)} − x| is small, then w^{(i)} is close to 1; and
if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much
higher “weight” to the (errors on) training examples close to the query point
x. (Note also that while the formula for the weights takes a form that is
cosmetically similar to the density of a Gaussian distribution, the w(i) ’s do
not directly have anything to do with Gaussians, and in particular the w(i)
are not random variables, normally distributed or otherwise.) The parameter
τ controls how quickly the weight of a training example falls off with distance
of its x(i) from the query point x; τ is called the bandwidth parameter, and
is also something that you’ll get to experiment with in your homework.
Locally weighted linear regression is the first example we’re seeing of a
non-parametric algorithm. The (unweighted) linear regression algorithm
that we saw earlier is known as a parametric learning algorithm, because
it has a fixed, finite number of parameters (the θi ’s), which are fit to the
data. Once we’ve fit the θi ’s and stored them away, we no longer need to
keep the training data around to make future predictions. In contrast, to
make predictions using locally weighted linear regression, we need to keep
the entire training set around. The term “non-parametric” (roughly) refers
to the fact that the amount of stuff we need to keep in order to represent the
hypothesis h grows linearly with the size of the training set.
⁴ If x is vector-valued, this is generalized to w^{(i)} = \exp(−(x^{(i)} − x)^T (x^{(i)} − x)/(2τ^2)), or w^{(i)} = \exp(−(x^{(i)} − x)^T Σ^{−1} (x^{(i)} − x)/2), for an appropriate choice of τ or Σ.
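The following NumPy sketch makes a single locally weighted prediction by solving the weighted normal equations; this closed form is one reasonable way to carry out step 1, and the names (x_query, tau) are illustrative.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=1.0):
    """Locally weighted linear regression prediction at a single query point.

    X: (n, d+1) design matrix with an intercept column; y: (n,) targets;
    x_query: (d+1,) query point with its intercept entry set to 1.
    Weights follow w^(i) = exp(-||x^(i) - x||^2 / (2 tau^2)).
    """
    diffs = X - x_query
    w = np.exp(-np.sum(diffs**2, axis=1) / (2.0 * tau**2))   # per-example weights
    W = np.diag(w)
    # Weighted normal equations: theta = (X^T W X)^{-1} X^T W y
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return x_query @ theta
```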
Part II
Classification and logistic
regression
Let’s now talk about the classification problem. This is just like the regression
problem, except that the values y we now want to predict take on only
a small number of discrete values. For now, we will focus on the binary
classification problem in which y can take on only two values, 0 and 1.
(Most of what we say here will also generalize to the multiple-class case.)
For instance, if we are trying to build a spam classifier for email, then x(i)
may be some features of a piece of email, and y may be 1 if it is a piece
of spam mail, and 0 otherwise. 0 is also called the negative class, and 1
the positive class, and they are sometimes also denoted by the symbols “-”
and “+.” Given x(i) , the corresponding y (i) is also called the label for the
training example.
5 Logistic regression
We could approach the classification problem ignoring the fact that y is
discrete-valued, and use our old linear regression algorithm to try to predict
y given x. However, it is easy to construct examples where this method
performs very poorly. Intuitively, it also doesn’t make sense for hθ (x) to take
values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}.
To fix this, let’s change the form for our hypotheses hθ (x). We will choose
h_θ(x) = g(θ^T x) = \frac{1}{1 + e^{−θ^T x}},
where
g(z) = \frac{1}{1 + e^{−z}}
is called the logistic function or the sigmoid function. Here is a plot
showing g(z):
[Figure: plot of the sigmoid function g(z), which increases from 0 toward 1 and crosses 0.5 at z = 0.]
So, given the logistic regression model, how do we fit θ for it? Following
how we saw least squares regression could be derived as the maximum like-
lihood estimator under a set of assumptions, let’s endow our classification
model with a set of probabilistic assumptions, and then fit the parameters
via maximum likelihood.
Let us assume that

P(y = 1 | x; θ) = h_θ(x)
P(y = 0 | x; θ) = 1 − h_θ(x).

Assuming that the n training examples were generated independently, the likelihood of the parameters is

L(θ) = p(\vec{y} | X; θ) = \prod_{i=1}^{n} p(y^{(i)} | x^{(i)}; θ) = \prod_{i=1}^{n} \left( h_θ(x^{(i)}) \right)^{y^{(i)}} \left( 1 − h_θ(x^{(i)}) \right)^{1 − y^{(i)}}.

As before, it is easier to maximize the log likelihood ℓ(θ) = log L(θ). Taking derivatives with respect to θ_j for a single training example (x, y) gives

\frac{∂}{∂θ_j} ℓ(θ) = \left( y \frac{1}{g(θ^T x)} − (1 − y) \frac{1}{1 − g(θ^T x)} \right) \frac{∂}{∂θ_j} g(θ^T x)
= \left( y \frac{1}{g(θ^T x)} − (1 − y) \frac{1}{1 − g(θ^T x)} \right) g(θ^T x)\,(1 − g(θ^T x)) \frac{∂}{∂θ_j} θ^T x
= \left( y (1 − g(θ^T x)) − (1 − y)\, g(θ^T x) \right) x_j
= (y − h_θ(x))\, x_j
Above, we used the fact that g ′ (z) = g(z)(1 − g(z)). This therefore gives us
the stochastic gradient ascent rule
θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)}))\, x_j^{(i)}
If we compare this to the LMS update rule, we see that it looks identical; but
this is not the same algorithm, because hθ (x(i) ) is now defined as a non-linear
function of θT x(i) . Nonetheless, it’s a little surprising that we end up with
the same update rule for a rather different algorithm and learning problem.
Is this coincidence, or is there a deeper reason behind this? We’ll answer this
when we get to GLM models. (See also the extra credit problem on Q3 of
problem set 1.)
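For concreteness, here is a small NumPy sketch of logistic regression fit by stochastic gradient ascent with exactly this update; the function names and default step size are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_sga(X, y, alpha=0.1, num_epochs=100, seed=0):
    """Fit logistic regression (labels y in {0, 1}) by stochastic gradient ascent
    on the log likelihood, using theta_j += alpha * (y - h(x)) * x_j."""
    rng = np.random.default_rng(seed)
    n, d_plus_1 = X.shape
    theta = np.zeros(d_plus_1)
    for _ in range(num_epochs):
        for i in rng.permutation(n):
            h = sigmoid(X[i] @ theta)        # h_theta(x^(i))
            theta += alpha * (y[i] - h) * X[i]
    return theta
```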
We digress briefly to mention the perceptron learning algorithm. Consider modifying the logistic regression method to "force" it to output values that are exactly 0 or 1. To do so, it seems natural to change the definition of g to be the threshold function: g(z) = 1 if z ≥ 0, and g(z) = 0 if z < 0. If we then let h_θ(x) = g(θ^T x) as before but using this modified definition of g, and if we use the update rule

θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)}))\, x_j^{(i)},

then we have the perceptron learning algorithm.
Returning to logistic regression, let us now talk about a different algorithm for maximizing ℓ(θ): Newton's method. To get started, consider Newton's method for finding a zero of a function. Suppose we have some function f : R → R, and we wish to find a value of θ so that f(θ) = 0. Newton's method performs the update

θ := θ − \frac{f(θ)}{f'(θ)}.

[Figure: three panels plotting f against θ, illustrating successive iterations of Newton's method.]
In the leftmost figure, we see the function f plotted along with the line
y = 0. We’re trying to find θ so that f (θ) = 0; the value of θ that achieves this
is about 1.3. Suppose we initialized the algorithm with θ = 4.5. Newton’s
method then fits a straight line tangent to f at θ = 4.5, and solves for where that line evaluates to 0. (Middle figure.) This gives us the next guess for θ, which is about 2.8. The rightmost figure shows the result of running one more iteration, which updates θ to about 1.8. After a few more
iterations, we rapidly approach θ = 1.3.
Newton’s method gives a way of getting to f (θ) = 0. What if we want to
use it to maximize some function ℓ? The maxima of ℓ correspond to points
where its first derivative ℓ′ (θ) is zero. So, by letting f (θ) = ℓ′ (θ), we can use
the same algorithm to maximize ℓ, and we obtain the update rule:

θ := θ − \frac{ℓ'(θ)}{ℓ''(θ)}.
(Something to think about: How would this change if we wanted to use
Newton’s method to minimize rather than maximize a function?)
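A minimal Python sketch of this update, assuming we are handed callables for ℓ' and ℓ'' (one-dimensional θ for simplicity; for vector-valued θ the second derivative becomes the Hessian). The names dl, d2l, theta0 are illustrative.

```python
def newtons_method(dl, d2l, theta0, num_iters=10):
    """Maximize a one-dimensional function l by Newton's method.

    dl, d2l: callables returning l'(theta) and l''(theta); theta0: initial guess.
    Each step sets theta := theta - l'(theta) / l''(theta).
    """
    theta = theta0
    for _ in range(num_iters):
        theta = theta - dl(theta) / d2l(theta)
    return theta

# Example: maximize l(theta) = -(theta - 1.3)**2, whose derivative is -2*(theta - 1.3).
theta_star = newtons_method(lambda t: -2 * (t - 1.3), lambda t: -2.0, theta0=4.5)
```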
Part III
Generalized Linear Models5
So far, we’ve seen a regression example, and a classification example. In the
regression example, we had y|x; θ ∼ N (µ, σ 2 ), and in the classification one,
y|x; θ ∼ Bernoulli(φ), for some appropriate definitions of µ and φ as functions
of x and θ. In this section, we will show that both of these methods are
special cases of a broader family of models, called Generalized Linear Models
(GLMs). We will also show how other models in the GLM family can be
derived and applied to other classification and regression problems.
We say that a class of distributions is in the exponential family if it can be written in the form

p(y; η) = b(y) \exp\left( η^T T(y) − a(η) \right).    (6)

Here, η is called the natural parameter (also called the canonical parameter) of the distribution; T(y) is the sufficient statistic (for the distributions we consider, it will often be the case that T(y) = y); and a(η) is the log partition function. The quantity e^{−a(η)} essentially plays the role of a normalization constant, that makes sure the distribution p(y; η) sums/integrates over y to 1.
A fixed choice of T , a and b defines a family (or set) of distributions that
is parameterized by η; as we vary η, we then get different distributions within
this family.
We now show that the Bernoulli and the Gaussian distributions are ex-
amples of exponential family distributions. The Bernoulli distribution with
mean φ, written Bernoulli(φ), specifies a distribution over y ∈ {0, 1}, so that
p(y = 1; φ) = φ; p(y = 0; φ) = 1 − φ. As we vary φ, we obtain Bernoulli
distributions with different means. We now show that this class of Bernoulli
distributions, ones obtained by varying φ, is in the exponential family; i.e.,
that there is a choice of T , a and b so that Equation (6) becomes exactly the
class of Bernoulli distributions.
5
The presentation of the material in this section takes inspiration from Michael I.
Jordan, Learning in graphical models (unpublished book draft), and also McCullagh and
Nelder, Generalized Linear Models (2nd ed.).
p(y; φ) = φ^y (1 − φ)^{1−y}
= \exp\left( y \log φ + (1 − y) \log(1 − φ) \right)
= \exp\left( \left( \log \frac{φ}{1 − φ} \right) y + \log(1 − φ) \right).

Thus, the natural parameter is given by η = \log(φ/(1 − φ)). Interestingly, if we invert this definition by solving for φ in terms of η, we obtain φ = 1/(1 + e^{−η}), the familiar sigmoid function. To complete the formulation of the Bernoulli distribution as an exponential family distribution, we also have

T(y) = y
a(η) = −\log(1 − φ) = \log(1 + e^{η})
b(y) = 1.
This shows that the Bernoulli distribution can be written in the form of
Equation (6), using an appropriate choice of T , a and b.
Let’s now move on to consider the Gaussian distribution. Recall that,
when deriving linear regression, the value of σ 2 had no effect on our final
choice of θ and hθ (x). Thus, we can choose an arbitrary value for σ 2 without
changing anything. To simplify the derivation below, let’s set σ 2 = 1.6 We
then have:
p(y; µ) = \frac{1}{\sqrt{2π}} \exp\left( −\frac{1}{2} (y − µ)^2 \right)
= \frac{1}{\sqrt{2π}} \exp\left( −\frac{1}{2} y^2 \right) · \exp\left( µy − \frac{1}{2} µ^2 \right).
6
If we leave σ 2 as a variable, the Gaussian distribution can also be shown to be in the
exponential family, where η ∈ R² is now a 2-dimensional vector that depends on both µ and
σ. For the purposes of GLMs, however, the σ 2 parameter can also be treated by considering
a more general definition of the exponential family: p(y; η, τ ) = b(a, τ ) exp((η T T (y) −
a(η))/c(τ )). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ ) = σ 2 ;
but given our simplification above, we won’t need the more general definition for the
examples we will consider here.
Thus, we see that the Gaussian is in the exponential family, with

η = µ
T(y) = y
a(η) = µ^2/2 = η^2/2
b(y) = (1/\sqrt{2π}) \exp(−y^2/2).
9 Constructing GLMs
Suppose you would like to build a model to estimate the number y of cus-
tomers arriving in your store (or number of page-views on your website) in
any given hour, based on certain features x such as store promotions, recent
advertising, weather, day-of-week, etc. We know that the Poisson distribu-
tion usually gives a good model for numbers of visitors. Knowing this, how
can we come up with a model for our problem? Fortunately, the Poisson is an
exponential family distribution, so we can apply a Generalized Linear Model
(GLM). In this section, we will we will describe a method for constructing
GLM models for problems such as these.
More generally, consider a classification or regression problem where we
would like to predict the value of some random variable y as a function of
x. To derive a GLM for this problem, we will make the following three
assumptions about the conditional distribution of y given x and about our
model:
1. y | x; θ ∼ ExponentialFamily(η). I.e., given x and θ, the distribution of
y follows some exponential family distribution, with parameter η.
2. Given x, our goal is to predict the expected value of T (y) given x.
In most of our examples, we will have T(y) = y, so this means we would like the prediction h(x) output by our learned hypothesis h to satisfy h(x) = E[y | x].

3. The natural parameter η and the inputs x are related linearly: η = θ^T x. (Or, if η is vector-valued, then η_i = θ_i^T x.)
The third of these assumptions might seem the least well justified of
the above, and it might be better thought of as a “design choice” in our
recipe for designing GLMs, rather than as an assumption per se. These
three assumptions/design choices will allow us to derive a very elegant class
of learning algorithms, namely GLMs, that have many desirable properties
such as ease of learning. Furthermore, the resulting models are often very
effective for modelling different types of distributions over y; for example, we
will shortly show that both logistic regression and ordinary least squares can
be derived as GLMs.
Consider first ordinary least squares, where we model the conditional distribution of y given x as Gaussian, y | x; θ ∼ N(µ, σ²). We have

h_θ(x) = E[y | x; θ] = µ = η = θ^T x.
The first equality follows from Assumption 2, above; the second equality
follows from the fact that y|x; θ ∼ N (µ, σ 2 ), and so its expected value is given
by µ; the third equality follows from Assumption 1 (and our earlier derivation
showing that µ = η in the formulation of the Gaussian as an exponential
family distribution); and the last equality follows from Assumption 3.
Next consider logistic regression, where y | x; θ ∼ Bernoulli(φ). Following the same recipe,

h_θ(x) = E[y | x; θ] = φ = \frac{1}{1 + e^{−η}} = \frac{1}{1 + e^{−θ^T x}}.
So, this gives us hypothesis functions of the form h_θ(x) = 1/(1 + e^{−θ^T x}). If you were previously wondering how we came up with the form of the logistic
function 1/(1 + e−z ), this gives one answer: Once we assume that y condi-
tioned on x is Bernoulli, it arises as a consequence of the definition of GLMs
and exponential family distributions.
To introduce a little more terminology, the function g giving the distri-
bution’s mean as a function of the natural parameter (g(η) = E[T (y); η])
is called the canonical response function. Its inverse, g −1 , is called the
canonical link function. Thus, the canonical response function for the
Gaussian family is just the identity function; and the canonical response
function for the Bernoulli is the logistic function.7
Unlike our previous examples, here we do not have T (y) = y; also, T (y) is
now a k − 1 dimensional vector, rather than a real number. We will write
(T (y))i to denote the i-th element of the vector T (y).
We introduce one more very useful piece of notation. An indicator func-
tion 1{·} takes on a value of 1 if its argument is true, and 0 otherwise
(1{True} = 1, 1{False} = 0). For example, 1{2 = 3} = 0, and 1{3 =
5 − 2} = 1. So, we can also write the relationship between T (y) and y as
(T (y))i = 1{y = i}. (Before you continue reading, please make sure you un-
derstand why this is true!) Further, we have that E[(T (y))i ] = P (y = i) = φi .
We are now ready to show that the multinomial is a member of the
This function mapping from the η’s to the φ’s is called the softmax function.
To complete our model, we use Assumption 3, given earlier, that the ηi ’s
are linearly related to the x's. So, we have η_i = θ_i^T x (for i = 1, . . . , k − 1),
where θ1 , . . . , θk−1 ∈ Rd+1 are the parameters of our model. For notational
convenience, we can also define θk = 0, so that ηk = θkT x = 0, as given
previously. Hence, our model assumes that the conditional distribution of y
given x is given by
p(y = i | x; θ) = φ_i = \frac{e^{η_i}}{\sum_{j=1}^{k} e^{η_j}} = \frac{e^{θ_i^T x}}{\sum_{j=1}^{k} e^{θ_j^T x}}    (8)
In other words, our hypothesis will output the estimated probability p(y = i | x; θ) for every value of i = 1, . . . , k. (Even though h_θ(x) as defined above is only k − 1 dimensional, clearly p(y = k | x; θ) can be obtained as 1 − \sum_{i=1}^{k−1} φ_i.)
To obtain the second line above, we used the definition for p(y|x; θ) given
in Equation (8). We can now obtain the maximum likelihood estimate of
the parameters by maximizing ℓ(θ) in terms of θ, using a method such as
gradient ascent or Newton’s method.
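As a small illustration of Equation (8), the following NumPy sketch computes the class probabilities from a parameter matrix whose rows are θ_1, . . . , θ_k. Subtracting the maximum before exponentiating is a standard numerical-stability trick, not something required by the math; the names are illustrative.

```python
import numpy as np

def softmax_probabilities(Theta, x):
    """Class probabilities p(y = i | x; theta) under softmax regression, Eq. (8).

    Theta: (k, d+1) matrix whose rows are theta_1, ..., theta_k (theta_k may be fixed to 0);
    x: (d+1,) input with its intercept entry set to 1.
    """
    eta = Theta @ x                       # natural parameters eta_i = theta_i^T x
    eta = eta - np.max(eta)               # shift for numerical stability
    exp_eta = np.exp(eta)
    return exp_eta / np.sum(exp_eta)      # softmax: e^{eta_i} / sum_j e^{eta_j}
```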
CS229 Supplemental Lecture notes
John Duchi
1 Binary classification
In binary classification problems, the target y can take on only two
values. In this set of notes, we show how to model this problem by letting
y ∈ {−1, +1}, where we say that y is a 1 if the example is a member of the
positive class and y = −1 if the example is a member of the negative class.
We assume, as usual, that we have input features x ∈ Rn .
As in our standard approach to supervised learning problems, we first
pick a representation for our hypothesis class (what we are trying to learn),
and after that we pick a loss function that we will minimize. In binary
classification problems, it is often convenient to use a hypothesis class of the
form hθ (x) = θT x, and, when presented with a new example x, we classify it
as positive or negative depending on the sign of θT x, that is, our predicted
label is
sign(h_θ(x)) = sign(θ^T x),   where   sign(t) = \begin{cases} 1 & \text{if } t > 0 \\ 0 & \text{if } t = 0 \\ −1 & \text{if } t < 0. \end{cases}
The magnitude of θ^T x can be interpreted as a measure of the confidence the vector θ assigns to labels for the point x: if x^T θ is very negative (or very
positive), then we more strongly believe the label y is negative (or positive).
Now that we have chosen a representation for our data, we must choose a
loss function. Intuitively, we would like to choose some loss function so that
for our training data \{(x^{(i)}, y^{(i)})\}_{i=1}^{m}, the θ chosen makes the margin y^{(i)} θ^T x^{(i)}
very large for each training example. Let us fix a hypothetical example (x, y),
let z = yxT θ denote the margin, and let ϕ : R → R be the loss function—that
is, the loss for the example (x, y) with margin z = yxT θ is ϕ(z) = ϕ(yxT θ).
For any particular loss function, the empirical risk that we minimize is then
J(θ) = \frac{1}{m} \sum_{i=1}^{m} ϕ(y^{(i)} θ^T x^{(i)}).    (2)
Consider our desired behavior: we wish to have y (i) θT x(i) positive for each
training example i = 1, . . . , m, and we should penalize those θ for which
y (i) θT x(i) < 0 frequently in the training data. Thus, an intuitive choice for
our loss would be one with ϕ(z) small if z > 0 (the margin is positive), while
ϕ(z) is large if z < 0 (the margin is negative). Perhaps the most natural
such loss is the zero-one loss, given by
ϕ_{zo}(z) = \begin{cases} 1 & \text{if } z ≤ 0 \\ 0 & \text{if } z > 0. \end{cases}
In this case, the risk J(θ) is simply the average number of mistakes—misclassifications—
the parameter θ makes on the training data. Unfortunately, the loss ϕzo is
discontinuous, non-convex (why this matters is a bit beyond the scope of
the course), and perhaps even more vexingly, NP-hard to minimize. So we
prefer to choose losses that have the shape given in Figure 1. That is, we will essentially always use losses that satisfy ϕ(z) → 0 as z → ∞, while ϕ(z) → ∞ as z → −∞.
As a few different examples, here are three loss functions that we will see
either now or later in the class, all of which are commonly used in machine
learning.
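For reference, the three margin-based losses named in Figure 2 below are conventionally defined as

ϕ_{logistic}(z) = \log\left(1 + e^{−z}\right), \qquad ϕ_{hinge}(z) = [1 − z]_+ = \max\{1 − z, 0\}, \qquad ϕ_{exp}(z) = e^{−z}.

Of these, the hinge loss reappears later in the support vector machine risk.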
Figure 1: The rough shape of loss we desire: the loss is convex and continuous, and tends to zero as the margin z = yx^T θ → ∞.
2 Logistic regression
With this general background in place, we now give a complementary
view of logistic regression to that in Andrew Ng’s lecture notes. When we
Figure 2: The three margin-based loss functions: logistic loss, hinge loss, and exponential loss, plotted against the margin z = yx^T θ.
use binary labels y ∈ {−1, 1}, it is possible to write logistic regression more
compactly. In particular, we use the logistic loss ϕ_{logistic}(yx^T θ) = \log\left( 1 + \exp(−yx^T θ) \right).
Roughly, we hope that choosing θ to minimize the average logistic loss will
yield a θ for which y (i) θT x(i) > 0 for most (or even all!) of the training
examples.
If we let h_θ(x) = 1/(1 + e^{−x^T θ}) and define the logistic probability model

p(Y = y | x; θ) = h_θ(yx) = \frac{1}{1 + e^{−y x^T θ}}   for y ∈ {−1, +1},    (4)
then we see that the likelihood of the training data is
L(θ) = \prod_{i=1}^{m} p(Y = y^{(i)} | x^{(i)}; θ) = \prod_{i=1}^{m} h_θ(y^{(i)} x^{(i)}),

and the log likelihood is ℓ(θ) = \sum_{i=1}^{m} \log h_θ(y^{(i)} x^{(i)}) = −\sum_{i=1}^{m} \log\left( 1 + e^{−y^{(i)} θ^T x^{(i)}} \right) = −m J(θ),
where J(θ) is exactly the logistic regression risk from Eq. (3). That is,
maximum likelihood in the logistic model (4) is the same as minimizing the
average logistic loss, and we arrive at logistic regression again.
This update is intuitive: if our current hypothesis hθ(t) assigns probability
close to 1 for the incorrect label −y (i) , then we try to reduce the loss by
moving θ in the direction of y (i) x(i) . Conversely, if our current hypothesis
hθ(t) assigns probability close to 0 for the incorrect label −y (i) , the update
essentially does nothing.
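The intuition in this paragraph can be read directly off a single stochastic gradient step on the logistic loss log(1 + exp(−y θ^T x)). This small sketch (illustrative names) computes exactly that step for one example with y ∈ {−1, +1}.

```python
import numpy as np

def logistic_loss_gradient_step(theta, x, y, eta):
    """One stochastic gradient step on the logistic loss log(1 + exp(-y * theta^T x)),
    for a single example (x, y) with y in {-1, +1}."""
    margin = y * (x @ theta)
    # Probability currently assigned to the *incorrect* label -y:
    p_wrong = 1.0 / (1.0 + np.exp(margin))
    # The gradient of the loss is -p_wrong * y * x, so the descent step moves theta
    # toward y * x in proportion to how wrong the current prediction is.
    return theta + eta * p_wrong * y * x
```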
CS229 Lecture Notes
Andrew Ng
Part IV
Generative Learning algorithms
So far, we’ve mainly been talking about learning algorithms that model
p(y|x; θ), the conditional distribution of y given x. For instance, logistic
regression modeled p(y|x; θ) as hθ (x) = g(θT x) where g is the sigmoid func-
tion. In these notes, we’ll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants (y = 1) and dogs (y = 0), based on some features of
an animal. Given a training set, an algorithm like logistic regression or
the perceptron algorithm (basically) tries to find a straight line—that is, a
decision boundary—that separates the elephants and dogs. Then, to classify
a new animal as either an elephant or a dog, it checks on which side of the
decision boundary it falls, and makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a
model of what elephants look like. Then, looking at dogs, we can build a
separate model of what dogs look like. Finally, to classify a new animal, we
can match the new animal against the elephant model, and match it against
the dog model, to see whether the new animal looks more like the elephants
or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X
to the labels {0, 1}, (such as the perceptron algorithm) are called discrim-
inative learning algorithms. Here, we’ll talk about algorithms that instead
try to model p(x|y) (and p(y)). These algorithms are called generative
learning algorithms. For instance, if y indicates whether an example is a
dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs’
features, and p(x|y = 1) models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p(x|y), our algorithm
can then use Bayes rule to derive the posterior distribution on y given x:
p(y | x) = \frac{p(x | y)\, p(y)}{p(x)}.
Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y =
0)p(y = 0) (you should be able to verify that this is true from the standard
properties of probabilities), and thus can also be expressed in terms of the
quantities p(x|y) and p(y) that we've learned. Actually, if we were calculating
p(y|x) in order to make a prediction, then we don’t actually need to calculate
the denominator, since
\arg\max_y p(y | x) = \arg\max_y \frac{p(x | y)\, p(y)}{p(x)} = \arg\max_y p(x | y)\, p(y).
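In code, this arg max is a one-liner once the class priors and class-conditional densities have been fit. The sketch below (illustrative names) takes them as generic inputs, so it applies equally to GDA or Naive Bayes.

```python
import numpy as np

def generative_predict(x, class_priors, class_conditional_log_densities):
    """Predict arg max_y p(x|y) p(y) for a generative classifier.

    class_priors: list of p(y = k); class_conditional_log_densities: list of
    callables returning log p(x | y = k), for whatever generative model was fit.
    """
    scores = [np.log(prior) + log_density(x)
              for prior, log_density in zip(class_priors, class_conditional_log_densities)]
    return int(np.argmax(scores))
```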
The multivariate normal distribution N(µ, Σ) in d dimensions is parameterized by a mean vector µ ∈ R^d and a covariance matrix Σ; if X ∼ N(µ, Σ), then E[X] = µ and Cov(X) = Σ.
[Figure: surface plots of three zero-mean bivariate Gaussian densities.]
The left-most figure shows a Gaussian with mean zero (that is, the 2x1
zero-vector) and covariance matrix Σ = I (the 2x2 identity matrix). A Gaus-
sian with zero mean and identity covariance is also called the standard nor-
mal distribution. The middle figure shows the density of a Gaussian with
zero mean and Σ = 0.6I; and the rightmost figure shows one with Σ = 2I.
We see that as Σ becomes larger, the Gaussian becomes more “spread-out,”
and as it becomes smaller, the distribution becomes more “compressed.”
Let’s look at some more examples.
[Figure: surface plots of three zero-mean bivariate Gaussian densities with increasing off-diagonal covariance.]
The figures above show Gaussians with mean 0, and with covariance matrices respectively

Σ = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}; \quad Σ = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}; \quad Σ = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix}.
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
4
“compressed” towards the 45◦ line (given by x1 = x2 ). We can see this more
clearly when we look at the contours of the same three densities:
[Figure: contour plots of the same three densities.]

[Figure: contour plots of three further zero-mean Gaussian densities, now with decreasing (negative) off-diagonal covariance entries.]
From the leftmost and middle figures, we see that by decreasing the off-
diagonal elements of the covariance matrix, the density now becomes “com-
pressed” again, but in the opposite direction. Lastly, as we vary the pa-
rameters, more generally the contours will form ellipses (the rightmost figure
showing an example).
As our last set of examples, fixing Σ = I, by varying µ, we can also move
the mean of the density around.
[Figure: surface plots of Gaussian densities with Σ = I and different means µ.]
In the Gaussian Discriminant Analysis (GDA) model, we model p(x|y) using a multivariate normal distribution. The model is:

y ∼ Bernoulli(φ)
x | y = 0 ∼ N(µ_0, Σ)
x | y = 1 ∼ N(µ_1, Σ)
p(y) = φ^y (1 − φ)^{1−y}
p(x | y = 0) = \frac{1}{(2π)^{d/2} |Σ|^{1/2}} \exp\left( −\frac{1}{2} (x − µ_0)^T Σ^{−1} (x − µ_0) \right)
p(x | y = 1) = \frac{1}{(2π)^{d/2} |Σ|^{1/2}} \exp\left( −\frac{1}{2} (x − µ_1)^T Σ^{−1} (x − µ_1) \right)
Here, the parameters of our model are φ, Σ, µ0 and µ1 . (Note that while
there’re two different mean vectors µ0 and µ1 , this model is usually applied
using only one covariance matrix Σ.) The log-likelihood of the data is given
by
ℓ(φ, µ_0, µ_1, Σ) = \log \prod_{i=1}^{n} p(x^{(i)}, y^{(i)}; φ, µ_0, µ_1, Σ)
= \log \prod_{i=1}^{n} p(x^{(i)} | y^{(i)}; µ_0, µ_1, Σ)\, p(y^{(i)}; φ).
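For reference, maximizing ℓ with respect to the parameters yields the standard maximum likelihood estimates (stated here without the intermediate algebra):

φ = \frac{1}{n} \sum_{i=1}^{n} 1\{y^{(i)} = 1\}

µ_0 = \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}\, x^{(i)}}{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}}, \qquad µ_1 = \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}\, x^{(i)}}{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}}

Σ = \frac{1}{n} \sum_{i=1}^{n} (x^{(i)} − µ_{y^{(i)}})(x^{(i)} − µ_{y^{(i)}})^T.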
[Figure: a training set for two classes, with the contours of the two fitted Gaussians and the decision boundary.]
Shown in the figure are the training set, as well as the contours of the
two Gaussian distributions that have been fit to the data in each of the
two classes. Note that the two Gaussians have contours that are the same
shape and orientation, since they share a covariance matrix Σ, but they have
different means µ0 and µ1 . Also shown in the figure is the straight line
giving the decision boundary at which p(y = 1|x) = 0.5. On one side of
the boundary, we’ll predict y = 1 to be the most likely outcome, and on the
other side, we’ll predict y = 0.
When would we prefer one model over another? GDA and logistic regres-
sion will, in general, give different decision boundaries when trained on the
same dataset. Which is better?
We just argued that if p(x|y) is multivariate gaussian (with shared Σ),
then p(y|x) necessarily follows a logistic function. The converse, however,
is not true; i.e., p(y|x) being a logistic function does not imply p(x|y) is
multivariate gaussian. This shows that GDA makes stronger modeling as-
sumptions about the data than does logistic regression. It turns out that
when these modeling assumptions are correct, then GDA will find better fits
to the data, and is a better model. Specifically, when p(x|y) is indeed gaus-
sian (with shared Σ), then GDA is asymptotically efficient. Informally,
this means that in the limit of very large training sets (large n), there is no
algorithm that is strictly better than GDA (in terms of, say, how accurately
they estimate p(y|x)). In particular, it can be shown that in this setting,
GDA will be a better algorithm than logistic regression; and more generally,
even for small training set sizes, we would generally expect GDA to do better.
In contrast, by making significantly weaker assumptions, logistic regres-
sion is also more robust and less sensitive to incorrect modeling assumptions.
There are many different sets of assumptions that would lead to p(y|x) taking
the form of a logistic function. For example, if x|y = 0 ∼ Poisson(λ0 ), and
x|y = 1 ∼ Poisson(λ1 ), then p(y|x) will be logistic. Logistic regression will
also work well on Poisson data like this. But if we were to use GDA on such
data—and fit Gaussian distributions to such non-Gaussian data—then the
results will be less predictable, and GDA may (or may not) do well.
To summarize: GDA makes stronger modeling assumptions, and is more
data efficient (i.e., requires less training data to learn “well”) when the mod-
eling assumptions are correct or at least approximately correct. Logistic
regression makes weaker assumptions, and is significantly more robust to
deviations from modeling assumptions. Specifically, when the data is in-
deed non-Gaussian, then in the limit of large datasets, logistic regression will
almost always do better than GDA. For this reason, in practice logistic re-
gression is used more often than GDA. (Some related considerations about
discriminative vs. generative models also apply for the Naive Bayes algo-
rithm that we discuss next, but the Naive Bayes algorithm is still considered
a very good, and is certainly also a very popular, classification algorithm.)
2 Naive Bayes
In GDA, the feature vectors x were continuous, real-valued vectors. Let’s
now talk about a different learning algorithm in which the xj ’s are discrete-
valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether
they are unsolicited commercial (spam) email, or non-spam email. After
learning to do this, we can then have our mail reader automatically filter
out the spam messages and perhaps place them in a separate mail folder.
Classifying emails is one example of a broader set of problems called text
classification.
Let’s say we have a training set (a set of emails labeled as spam or non-
spam). We will cover two algorithms for this problem; they differ mostly in how they represent an email as a feature vector.
Subsection 2.1 covers the so-called multi-variate Bernoulli event model, which
was not covered in class and can be considered optional reading. You could also start reading directly from Section 2.3. To make the subsections self-contained, Sections 2.3 and 2.1 contain some overlapping material.
More specifically, we represent an email via a feature vector x whose j-th entry is 1 if the j-th word of our vocabulary appears in the email and 0 otherwise. Such a vector, for example, might be used to represent an email that contains the words "a" and "buy," but not
“aardvark,” “aardwolf” or “zygmurgy.”2 The set of words encoded into the
feature vector is called the vocabulary, so the dimension of x is equal to
the size of the vocabulary.
Having chosen our feature vector, we now want to build a generative
model. So, we have to model p(x|y). But if we have, say, a vocabulary of
50000 words, then x ∈ {0, 1}50000 (x is a 50000-dimensional vector of 0’s and
1's), and if we were to model x explicitly with a multinomial distribution over the 2^{50000} possible outcomes, then we'd end up with a (2^{50000} − 1)-dimensional
parameter vector. This is clearly too many parameters.
To model p(x|y), we will therefore make a very strong assumption. We will
assume that the xi ’s are conditionally independent given y. This assumption
is called the Naive Bayes (NB) assumption, and the resulting algorithm is
called the Naive Bayes classifier. For instance, if y = 1 means spam email;
“buy” is word 2087 and “price” is word 39831; then we are assuming that if
I tell you y = 1 (that a particular piece of email is spam), then knowledge
of x2087 (knowledge of whether “buy” appears in the message) will have no
effect on your beliefs about the value of x39831 (whether “price” appears).
More formally, this can be written p(x2087 |y) = p(x2087 |y, x39831 ). (Note that
this is not the same as saying that x2087 and x39831 are independent, which
would have been written “p(x2087 ) = p(x2087 |x39831 )”; rather, we are only
assuming that x2087 and x39831 are conditionally independent given y.)
We now have:
p(x_1, . . . , x_{50000} | y)
= p(x_1 | y)\, p(x_2 | y, x_1)\, p(x_3 | y, x_1, x_2) \cdots p(x_{50000} | y, x_1, . . . , x_{49999})
= p(x_1 | y)\, p(x_2 | y)\, p(x_3 | y) \cdots p(x_{50000} | y)
= \prod_{j=1}^{d} p(x_j | y)
The first equality simply follows from the usual properties of probabilities,
2
Actually, rather than looking through an English dictionary for the list of all English
words, in practice it is more common to look through our training set and encode in our
feature vector only the words that occur at least once there. Apart from reducing the
number of words modeled and hence reducing our computational and space requirements,
this also has the advantage of allowing us to model/include as a feature many words
that may appear in your email (such as “cs229”) but that you won’t find in a dictionary.
Sometimes (as in the homework), we also exclude the very high frequency words (which
will be words like “the,” “of,” “and”; these high frequency, “content free” words are called
stop words) since they occur in so many documents and do little to indicate whether an
email is spam or non-spam.
and the second equality used the NB assumption. We note that even though
the Naive Bayes assumption is an extremely strong assumption, the resulting
algorithm works well on many problems.
Our model is parameterized by φj|y=1 = p(xj = 1|y = 1), φj|y=0 = p(xj =
1|y = 0), and φy = p(y = 1). As usual, given a training set {(x(i) , y (i) ); i =
1, . . . , n}, we can write down the joint likelihood of the data:
n
Y
L(φy , φj|y=0 , φj|y=1 ) = p(x(i) , y (i) ).
i=1
Maximizing this with respect to φy , φj|y=0 and φj|y=1 gives the maximum
likelihood estimates:
φ_{j|y=1} = \frac{\sum_{i=1}^{n} 1\{x_j^{(i)} = 1 ∧ y^{(i)} = 1\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}}

φ_{j|y=0} = \frac{\sum_{i=1}^{n} 1\{x_j^{(i)} = 1 ∧ y^{(i)} = 0\}}{\sum_{i=1}^{n} 1\{y^{(i)} = 0\}}

φ_y = \frac{\sum_{i=1}^{n} 1\{y^{(i)} = 1\}}{n}
In the equations above, the “∧” symbol means “and.” The parameters have
a very natural interpretation. For instance, φj|y=1 is just the fraction of the
spam (y = 1) emails in which word j does appear.
Having fit all these parameters, to make a prediction on a new example
with features x, we then simply calculate
p(y = 1 | x) = \frac{p(x | y = 1)\, p(y = 1)}{p(x)}
= \frac{\left( \prod_{j=1}^{d} p(x_j | y = 1) \right) p(y = 1)}{\left( \prod_{j=1}^{d} p(x_j | y = 1) \right) p(y = 1) + \left( \prod_{j=1}^{d} p(x_j | y = 0) \right) p(y = 0)},

and pick whichever class has the higher posterior probability.
Even when some of the original attributes are continuous-valued, we can discretize them and then apply Naive Bayes. For instance, if we use some feature x_j to represent living area, we might discretize the continuous values as follows:

Living area (sq. feet)    <400    400-800    800-1200    1200-1600    >1600
x_j                        1       2          3           4            5
Thus, for a house with living area 890 square feet, we would set the value
of the corresponding feature xj to 3. We can then apply the Naive Bayes
algorithm, and model p(xj |y) with a multinomial distribution, as described
previously. When the original, continuous-valued attributes are not well-
modeled by a multivariate normal distribution, discretizing the features and
using Naive Bayes (instead of GDA) will often result in a better classifier.
I.e., because it has never seen “neurips” before in either spam or non-spam
training examples, it thinks the probability of seeing it in either type of email
is zero. Hence, when trying to decide if one of these messages containing
“neurips” is spam, it calculates the class posterior probabilities, and obtains
p(y = 1 | x) = \frac{\prod_{j=1}^{d} p(x_j | y = 1)\, p(y = 1)}{\prod_{j=1}^{d} p(x_j | y = 1)\, p(y = 1) + \prod_{j=1}^{d} p(x_j | y = 0)\, p(y = 0)} = \frac{0}{0}.
This is because each of the terms \prod_{j=1}^{d} p(x_j | y) includes a term p(x_{35000} | y) = 0 that is multiplied into it. Hence, our algorithm obtains 0/0, and doesn't know how to make a prediction.
Stating the problem more broadly, it is statistically a bad idea to esti-
mate the probability of some event to be zero just because you haven’t seen
it before in your finite training set. Take the problem of estimating the mean
of a multinomial random variable z taking values in {1, . . . , k}. We can pa-
rameterize our multinomial with φj = p(z = j). Given a set of n independent
observations {z (1) , . . . , z (n) }, the maximum likelihood estimates are given by
φ_j = \frac{\sum_{i=1}^{n} 1\{z^{(i)} = j\}}{n}.
As we saw previously, if we were to use these maximum likelihood estimates,
then some of the φj ’s might end up as zero, which was a problem. To avoid
this, we can use Laplace smoothing, which replaces the above estimate
with
φ_j = \frac{1 + \sum_{i=1}^{n} 1\{z^{(i)} = j\}}{n + k}.
Here, we've added 1 to the numerator, and k to the denominator. Note that \sum_{j=1}^{k} φ_j = 1 still holds (check this yourself!), which is a desirable property since the φ_j's are estimates for probabilities that we know must sum to 1.
Also, φ_j ≠ 0 for all values of j, solving our problem of probabilities being
estimated as zero. Under certain (arguably quite strong) conditions, it can
be shown that the Laplace smoothing actually gives the optimal estimator
of the φj ’s.
Returning to our Naive Bayes classifier, with Laplace smoothing, we therefore obtain the following estimates of the parameters:

φ_{j|y=1} = \frac{1 + \sum_{i=1}^{n} 1\{x_j^{(i)} = 1 ∧ y^{(i)} = 1\}}{2 + \sum_{i=1}^{n} 1\{y^{(i)} = 1\}}

φ_{j|y=0} = \frac{1 + \sum_{i=1}^{n} 1\{x_j^{(i)} = 1 ∧ y^{(i)} = 0\}}{2 + \sum_{i=1}^{n} 1\{y^{(i)} = 0\}}
(In practice, it usually doesn’t matter much whether we apply Laplace smooth-
ing to φy or not, since we will typically have a fair fraction each of spam and
non-spam messages, so φy will be a reasonable estimate of p(y = 1) and will
be quite far from 0 anyway.)
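As a concrete illustration, here is a minimal NumPy sketch of this Bernoulli Naive Bayes model with Laplace smoothing. The function and variable names are illustrative, and the prediction works in log space to avoid the numerical underflow of multiplying many small probabilities.

```python
import numpy as np

def fit_naive_bayes(X, y):
    """Fit the binary (multi-variate Bernoulli) Naive Bayes model with Laplace smoothing.

    X: (n, d) matrix of 0/1 features; y: (n,) vector of 0/1 labels.
    Returns (phi_y, phi_j_y1, phi_j_y0) as defined in the notes.
    """
    phi_y = np.mean(y == 1)
    phi_j_y1 = (1 + X[y == 1].sum(axis=0)) / (2 + np.sum(y == 1))
    phi_j_y0 = (1 + X[y == 0].sum(axis=0)) / (2 + np.sum(y == 0))
    return phi_y, phi_j_y1, phi_j_y0

def predict_naive_bayes(x, phi_y, phi_j_y1, phi_j_y0):
    """Return p(y = 1 | x) for a single 0/1 feature vector x."""
    log_p1 = np.log(phi_y) + np.sum(x * np.log(phi_j_y1) + (1 - x) * np.log(1 - phi_j_y1))
    log_p0 = np.log(1 - phi_y) + np.sum(x * np.log(phi_j_y0) + (1 - x) * np.log(1 - phi_j_y0))
    return 1.0 / (1.0 + np.exp(log_p0 - log_p1))   # p1 / (p1 + p0), computed stably
```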
email; then we are assuming that if I tell you y = 1 (that a particular piece of
email is spam), then the knowledge of the first word doesn’t have any effect
on our belief about the second word (or the beliefs about any other words).
We now have:
p(x_1, . . . , x_d | y)
= p(x_1 | y)\, p(x_2 | y, x_1)\, p(x_3 | y, x_1, x_2) \cdots p(x_d | y, x_1, . . . , x_{d−1})
= p(x_1 | y)\, p(x_2 | y)\, p(x_3 | y) \cdots p(x_d | y)
= \prod_{j=1}^{d} p(x_j | y)
The first equality simply follows from the usual properties of probabilities,
and the second equality used the NB assumption. We note that even though
the Naive Bayes assumption is an extremely strong assumption, the resulting
algorithm works well on many problems.
The parameters for our model are φy = p(y = 1), φk|y=1 = p(xj = k|y =
1) (for any j) and φk|y=0 = p(xj = k|y = 0). Note that we have assumed that
p(xj |y) is the same for all values of j (i.e., that the distribution according
to which a word is generated does not depend on its position j within the
email).
If we are given a training set \{(x^{(i)}, y^{(i)}); i = 1, . . . , n\} where x^{(i)} = (x_1^{(i)}, x_2^{(i)}, . . . , x_{d_i}^{(i)}) (here, d_i is the number of words in the i-th training example),
the likelihood of the data is given by
L(φ_y, φ_{k|y=0}, φ_{k|y=1}) = \prod_{i=1}^{n} p(x^{(i)}, y^{(i)})
= \prod_{i=1}^{n} \left( \prod_{j=1}^{d_i} p(x_j^{(i)} | y^{(i)}; φ_{k|y=0}, φ_{k|y=1}) \right) p(y^{(i)}; φ_y).
p(y = 1 | x) = \frac{\prod_{j=1}^{d} p(x_j | y = 1)\, p(y = 1)}{\prod_{j=1}^{d} p(x_j | y = 1)\, p(y = 1) + \prod_{j=1}^{d} p(x_j | y = 0)\, p(y = 0)} = \frac{0}{0}.
This is because each of the terms \prod_{j=1}^{d} p(x_j | y) includes a term p(x_1 = 2 | y) = 0 that is multiplied into it. Hence, our algorithm obtains 0/0, and doesn't know how to make a prediction.
Stating the problem more broadly, it is statistically a bad idea to esti-
mate the probability of some event to be zero just because you haven’t seen
it before in your finite training set. Take the problem of estimating the mean
of a multinomial random variable z taking values in {1, . . . , k}. We can pa-
rameterize our multinomial with φj = p(z = j). Given a set of n independent
observations {z (1) , . . . , z (n) }, the maximum likelihood estimates are given by
φ_j = \frac{\sum_{i=1}^{n} 1\{z^{(i)} = j\}}{n}.
While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good “first thing
to try,” given its simplicity and ease of implementation.
CS229 Supplemental Lecture notes
John Duchi
2 The representer theorem
Let us consider a slight variant of choosing θ to minimize the risk (1). In
many situations—for reasons that we will study more later in the class—it
is useful to add regularization to the risk J. We add regularization for many
reasons: often, it makes problem (1) easier to solve numerically, and also it can allow the θ we get out of minimizing the risk (1) to generalize better to unseen data. Generally, regularization is taken to be of the form r(θ) = \|θ\| or r(θ) = \|θ\|^2 for some norm \|·\| on R^n. The most common choice is so-called ℓ_2-regularization, in which we choose
r(θ) = \frac{λ}{2} \|θ\|_2^2,
where we recall that \|θ\|_2 = \sqrt{θ^T θ} is the Euclidean norm, or length, of the
vector θ. This gives rise to the regularized risk, which is
J_λ(θ) = \frac{1}{m} \sum_{i=1}^{m} L(θ^T x^{(i)}, y^{(i)}) + \frac{λ}{2} \|θ\|_2^2.    (2)
Let us consider the structure of any θ that minimizes the risk (2). We
assume—as we often do—that for each fixed target value y ∈ Y, the function
L(z, y) is convex in z. (This is the case for linear regression and binary
and multiclass logistic regression, as well as a number of other losses we will
consider.) It turns out that under these assumptions, we may always write
the solutions to the problem (2) as a linear combination of the input variables
x(i) . More precisely, we have the following theorem, known as the representer
theorem.
Theorem 2.1. Suppose in the definition of the regularized risk (2) that λ ≥
0. Then there is a minimizer of the regularized risk (2) that can be written
θ = \sum_{i=1}^{m} α_i x^{(i)}

for some real-valued weights α_1, . . . , α_m.
Proof For intuition, we give a proof of the result in the case that L(z, y),
when viewed as a function of z, is differentiable and λ > 0. In Appendix A,
2
we present a more general statement of the theorem as well as a rigorous
proof.
Let L'(z, y) = \frac{∂}{∂z} L(z, y) denote the derivative of the loss with respect to z. Then by the chain rule, we have the gradient identities

∇_θ L(θ^T x, y) = L'(θ^T x, y)\, x   and   ∇_θ \frac{1}{2} \|θ\|_2^2 = θ,
where ∇θ denotes taking a gradient with respect to θ. As the risk must have
0 gradient at all stationary points (including the minimizer), we can write
∇J_λ(θ) = \frac{1}{m} \sum_{i=1}^{m} L'(θ^T x^{(i)}, y^{(i)})\, x^{(i)} + λθ = \vec{0}.
In particular, letting wi = L′ (θT x(i) , y (i) ), as L′ (θT x(i) , y (i) ) is a scalar (which
depends on θ, but no matter what θ is, wi is still a real number), we have
θ = −\frac{1}{λm} \sum_{i=1}^{m} w_i x^{(i)}.
That is, in any learning algorithm, we may replace all appearances of θ^T x with \sum_{i=1}^{m} α_i (x^{(i)})^T x, and then minimize directly over α ∈ R^m.
Let us consider this idea in somewhat more generality. In our discussion
of linear regression, we had a problem in which the input x was the living
area of a house, and we considered performing regression using the features x,
x2 and x3 (say) to obtain a cubic function. To distinguish between these two
sets of variables, we’ll call the “original” input value the input attributes
of a problem (in this case, x, the living area). When that is mapped to
some new set of quantities that are then passed to the learning algorithm,
we’ll call those new quantities the input features. (Unfortunately, different
authors use different terms to describe these two things, but we’ll try to use
this terminology consistently in these notes.) We will also let φ denote the
feature mapping, which maps from the attributes to the features. For
instance, in our example, we had
φ(x) = \begin{bmatrix} x \\ x^2 \\ x^3 \end{bmatrix}.
Rather than applying a learning algorithm using the original input at-
tributes x, we may instead want to learn using some features φ(x). To do so,
we simply need to go over our previous algorithm, and replace x everywhere
in it with φ(x).
Since the algorithm can be written entirely in terms of the inner products ⟨x, z⟩, this means that we would replace all those inner products with ⟨φ(x), φ(z)⟩. Specifically, given a feature mapping φ, we define the corre-
sponding kernel to be
K(x, z) = φ(x)T φ(z).
Then, everywhere we previously had ⟨x, z⟩ in our algorithm, we could simply
replace it with K(x, z), and our algorithm would now be learning using the
features φ. Let us write this out more carefully.
We saw by the representer theorem (Theorem 2.1) that we can write θ = \sum_{i=1}^{m} α_i φ(x^{(i)}) for some weights α_i. Then we can write the (regularized) risk

J_λ(θ) = J_λ(α)
= \frac{1}{m} \sum_{i=1}^{m} L\left( φ(x^{(i)})^T \sum_{j=1}^{m} α_j φ(x^{(j)}),\; y^{(i)} \right) + \frac{λ}{2} \left\| \sum_{i=1}^{m} α_i φ(x^{(i)}) \right\|_2^2
= \frac{1}{m} \sum_{i=1}^{m} L\left( \sum_{j=1}^{m} α_j φ(x^{(i)})^T φ(x^{(j)}),\; y^{(i)} \right) + \frac{λ}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} α_i α_j φ(x^{(i)})^T φ(x^{(j)})
= \frac{1}{m} \sum_{i=1}^{m} L\left( \sum_{j=1}^{m} α_j K(x^{(i)}, x^{(j)}),\; y^{(i)} \right) + \frac{λ}{2} \sum_{i,j} α_i α_j K(x^{(i)}, x^{(j)}).
That is, we can write the entire loss function to be minimized in terms of the
kernel matrix

K = \left[ K(x^{(i)}, x^{(j)}) \right]_{i,j=1}^{m} ∈ R^{m×m}.
Now, given φ, we could easily compute K(x, z) by finding φ(x) and φ(z)
and taking their inner product. But what’s more interesting is that often,
K(x, z) may be very inexpensive to calculate, even though φ(x) itself may be
very expensive to calculate (perhaps because it is an extremely high dimen-
sional vector). In such settings, by using in our algorithm an efficient way to
calculate K(x, z), we can learn in the high dimensional feature space
given by φ, but without ever having to explicitly find or represent vectors
φ(x). As a few examples, some kernels (corresponding to infinite-dimensional
vectors φ) include
K(x, z) = \exp\left( −\frac{1}{2τ^2} \|x − z\|_2^2 \right),

known as the Gaussian or Radial Basis Function (RBF) kernel and applicable to data in any dimension, or the min-kernel (applicable when x ∈ R), defined by

K(x, z) = \min\{x, z\}.
See also the lecture notes on Support Vector Machines (SVMs) for more on
these kernel machines.
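For instance, the RBF kernel matrix over a training set can be formed in a few vectorized lines; the helper below (illustrative name and default τ) does exactly this.

```python
import numpy as np

def rbf_kernel_matrix(X, tau=1.0):
    """Gram matrix K[i, j] = exp(-||x^(i) - x^(j)||^2 / (2 tau^2)) for the RBF kernel.

    X: (m, d) matrix whose rows are the training inputs.
    """
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = np.maximum(sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T), 0.0)
    return np.exp(-sq_dists / (2.0 * tau**2))
```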
Now, let us consider taking a stochastic gradient of the above risk Jλ . That is,
we wish to construct a (simple to compute) random vector whose expectation
is ∇Jλ (α), which does not have too much variance. To do so, we first compute
the gradient of Jλ (α) on its own. We calculate the gradient of individual loss
terms by noting that
∇_α L(K^{(i)ᵀ} α, y^{(i)}) = L′(K^{(i)ᵀ} α, y^{(i)}) K^{(i)},

while

∇_α (λ/2) αᵀKα = λKα = λ ∑_{i=1}^m K^{(i)} α_i.

Thus, we have

∇_α J_λ(α) = (1/m) ∑_{i=1}^m L′(K^{(i)ᵀ} α, y^{(i)}) K^{(i)} + λ ∑_{i=1}^m K^{(i)} α_i.
This suggests the following stochastic gradient procedure. Iterate for t = 1, 2, . . .:

(i) Choose an index i ∈ {1, . . . , m} uniformly at random.

(ii) Update

α := α − η_t [ L′(K^{(i)ᵀ} α, y^{(i)}) K^{(i)} + mλ K^{(i)} α_i ].
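As a concrete sketch of this update (assuming the squared loss L(z, y) = ½(z − y)², so that L′(z, y) = z − y, and an arbitrary decaying step size; both are illustrative assumptions, not part of the derivation above):

```python
import numpy as np

def kernel_sgd(K, y, lam, eta=0.1, num_iters=1000, seed=0):
    """Stochastic gradient descent on J_lambda(alpha) for the squared loss.

    K   : (m, m) kernel matrix, K[i, j] = K(x^(i), x^(j))
    y   : (m,) targets
    lam : regularization strength lambda
    """
    rng = np.random.default_rng(seed)
    m = K.shape[0]
    alpha = np.zeros(m)
    for t in range(1, num_iters + 1):
        i = rng.integers(m)                              # step (i): pick i uniformly at random
        Ki = K[i]                                        # i-th row/column of the kernel matrix
        residual = Ki @ alpha - y[i]                     # L'(K^(i)T alpha, y^(i)) for squared loss
        grad = residual * Ki + m * lam * Ki * alpha[i]   # step (ii): stochastic gradient
        alpha -= (eta / np.sqrt(t)) * grad               # decaying step size (an assumption)
    return alpha
```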
5 Support vector machines
Now we discuss (one approach) to Support Vector Machines (SVM)s, which
apply to binary classification problems with labels y ∈ {−1, 1}, and a partic-
ular choice of loss function L. In particular, for the support vector machine,
we use the margin-based (hinge) loss function L(z, y) = [1 − yz]₊ = max{0, 1 − yz}.
So, in a sense, SVMs are nothing but a special case of the general theoret-
ical results we have described above. In particular, we have the empirical
regularized risk
J_λ(α) = (1/m) ∑_{i=1}^m [ 1 − y^{(i)} K^{(i)ᵀ} α ]₊ + (λ/2) αᵀKα.
6 An example
In this section, we consider a particular example kernel, known as the Gaus-
sian or Radial Basis Function (RBF) kernel. This kernel is defined by
K(x, z) = exp( −‖x − z‖₂² / (2τ²) ).    (4)
[Figure: learned decision surfaces for τ = .1 (left) and τ = .2 (right)]
In particular, we see that K(x, z) is the inner product in a space of functions that are
integrable against p(w).
[Figure: learned decision surfaces for τ = .4 (left) and τ = .8 (right)]
The classifiers shown were trained with m = 200 and λ = 1/m, for different values of τ in the kernel (4). We
plot the training data (positive examples as blue x’s and negative examples
as red o’s) as well as the decision surface of the resulting classifier. That is,
we plot the lines defined by
{ x ∈ R² : ∑_{i=1}^m K(x, x^{(i)}) α_i = 0 },

giving the regions where the learned classifier makes a prediction ∑_{i=1}^m K(x, x^{(i)}) α_i > 0 and ∑_{i=1}^m K(x, x^{(i)}) α_i < 0. From the figure, we see that for large τ , we have
[Figure: learned decision surfaces for τ = 1.6 (left) and τ = 3.2 (right)]
a very simple classifier: it is nearly linear, while for τ = .1, the classifier has
substantial variability and is highly non-linear. For reference, in Figure 5, we
plot the optimal classifier along with the training data; the optimal classifier
minimizes the misclassification error given infinite training data.
[Figure 5: the optimal classifier, plotted together with the training data]
J_r(θ^{(α)}) ≤ J_r(θ).
basis of size n⊥ = n − n0 ≥ 0, which we write as {u1 , . . . , un⊥ } ⊂ Rn . By
construction they satisfy u_iᵀ v_j = 0 for all i, j.
Because θ ∈ Rⁿ, we know that we can write it uniquely as

θ = ∑_{i=1}^{n₀} ν_i v_i + ∑_{i=1}^{n⊥} μ_i u_i,   where ν_i ∈ R and μ_i ∈ R,

and the values μ, ν are unique. Now, we know that by definition of the space V as the span of {x^{(i)}}_{i=1}^m, there exists α ∈ R^m such that

∑_{i=1}^{n₀} ν_i v_i = ∑_{i=1}^m α_i x^{(i)},

so that we have

θ = ∑_{i=1}^m α_i x^{(i)} + ∑_{i=1}^{n⊥} μ_i u_i.

Define θ^{(α)} = ∑_{i=1}^m α_i x^{(i)}. Now, for any data point x^{(j)}, we have
CS229 Lecture Notes
Andrew Ng
updated by Tengyu Ma on April 21, 2019
Part V
Kernel Methods
1.1 Feature maps
Recall that in our discussion about linear regression, we considered the prob-
lem of predicting the price of a house (denoted by y) from the living area of
the house (denoted by x), and we fit a linear function of x to the training
data. What if the price y can be more accurately represented as a non-linear
function of x? In this case, we need a more expressive family of models than
linear models.
We start by considering fitting cubic functions y = θ3 x3 + θ2 x2 + θ1 x + θ0 .
It turns out that we can view the cubic function as a linear function over
a different set of feature variables (defined below). Concretely, let the
function φ : R → R⁴ be defined as

φ(x) = [1, x, x², x³]ᵀ ∈ R⁴.    (1)

Let θ ∈ R⁴ be the vector with entries θ₀, θ₁, θ₂, θ₃; then the cubic function can be viewed as a linear function over φ(x):

θ₃x³ + θ₂x² + θ₁x + θ₀ = θᵀφ(x).
in the context of kernel methods, we will call the “original” input value the
input attributes of a problem (in this case, x, the living area). When the
original input is mapped to some new set of quantities φ(x), we will call those
new quantities the feature variables. (Unfortunately, different authors use
different terms to describe these two things in different contexts.) We will
call φ a feature map, which maps the attributes to the features.
The batch gradient descent update for fitting θ with the features φ(x^{(i)}) is

θ := θ + α ∑_{i=1}^n ( y^{(i)} − θᵀφ(x^{(i)}) ) φ(x^{(i)}).    (3)

The key observation is that, at any time during this procedure, θ can be represented as a linear combination of the feature vectors:

θ = ∑_{i=1}^n β_i φ(x^{(i)}).    (6)
1
Here, for simplicity, we include all the monomials with repetitions (so that, e.g., x₁x₂x₃ and x₂x₃x₁ both appear in φ(x)). Therefore, there are in total 1 + d + d² + d³ entries in φ(x).
You may realize that our general strategy is to implicitly represent the p-
dimensional vector θ by a set of coefficients β1 , . . . , βn . Towards doing this,
we derive the update rule of the coefficients β1 , . . . , βn . Using the equation
above, we see that the new βi depends on the old one via
β_i := β_i + α ( y^{(i)} − θᵀφ(x^{(i)}) ).    (8)

Here we still have the old θ on the RHS of the equation. Replacing θ by θ = ∑_{j=1}^n β_j φ(x^{(j)}) gives

∀i ∈ {1, . . . , n},   β_i := β_i + α ( y^{(i)} − ∑_{j=1}^n β_j φ(x^{(j)})ᵀφ(x^{(i)}) ).
T
We often rewrite φ(x(j) ) φ(x(i) ) as hφ(x(j) ), φ(x(i) )i to emphasize that it’s the
inner product of the two feature vectors. Viewing βi ’s as the new representa-
tion of θ, we have successfully translated the batch gradient descent algorithm
into an algorithm that updates the value of β iteratively. It may appear that
at every iteration, we still need to compute the values of hφ(x(j) ), φ(x(i) )i for
all pairs of i, j, each of which may take roughly O(p) operations. However, two important properties come to the rescue:
1. We can pre-compute the pairwise inner products hφ(x(j) ), φ(x(i) )i for all
pairs of i, j before the loop starts.
2. For the feature map φ defined in (5) (or many other interesting feature maps), computing ⟨φ(x^{(j)}), φ(x^{(i)})⟩ can be efficient and does not necessarily require computing φ(x^{(i)}) and φ(x^{(j)}) explicitly.
As you will see, the inner products between the features hφ(x), φ(z)i are
essential here. We define the Kernel corresponding to the feature map φ as
a function that maps X × X → R satisfying K(x, z) ≜ ⟨φ(x), φ(z)⟩. With the kernel in hand, the training algorithm can be written as:
1. Compute all the values K(x^{(i)}, x^{(j)}) ≜ ⟨φ(x^{(i)}), φ(x^{(j)})⟩ using equation (9) for all i, j ∈ {1, . . . , n}. Set β := 0.
2. Loop:
∀i ∈ {1, . . . , n},   β_i := β_i + α ( y^{(i)} − ∑_{j=1}^n β_j K(x^{(i)}, x^{(j)}) )    (11)

or equivalently, in vector notation (with K the n × n kernel matrix),

β := β + α( y⃗ − Kβ ).
You may realize that fundamentally all we need to know about the feature
map φ(·) is encapsulated in the corresponding kernel function K(·, ·). We
will expand on this in the next section.
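As an illustrative sketch of this algorithm (the learning rate, iteration count, and helper names are assumptions for the example), using the kernel K(x, z) = 1 + ⟨x, z⟩ + ⟨x, z⟩² + ⟨x, z⟩³ that corresponds to the degree-≤3 monomial feature map above:

```python
import numpy as np

def cubic_kernel(x, z):
    """K(x, z) = 1 + <x,z> + <x,z>^2 + <x,z>^3, matching the monomial feature map phi."""
    s = np.dot(x, z)
    return 1.0 + s + s**2 + s**3

def fit_kernelized(X, y, kernel, alpha=0.01, num_iters=500):
    """Kernelized updates beta := beta + alpha * (y - K beta), as in (11)."""
    n = X.shape[0]
    # Step 1: pre-compute the n x n kernel matrix.
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    beta = np.zeros(n)
    # Step 2: loop over the vectorized update.
    for _ in range(num_iters):
        beta = beta + alpha * (y - K @ beta)
    return beta

def predict(x_new, X, beta, kernel):
    """theta^T phi(x) = sum_i beta_i K(x^(i), x), without ever forming phi explicitly."""
    return sum(b * kernel(xi, x_new) for b, xi in zip(beta, X))
```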
Thus, we see that K(x, z) = ⟨φ(x), φ(z)⟩ is the kernel function that corresponds to the feature mapping φ given (shown here for the case of d = 3) by
φ(x) = [ x₁x₁, x₁x₂, x₁x₃, x₂x₁, x₂x₂, x₂x₃, x₃x₁, x₃x₂, x₃x₃ ]ᵀ.
Revisiting the computational efficiency perspective of the kernel, note that whereas calculating the high-dimensional φ(x) requires O(d²) time, finding K(x, z) takes only O(d) time, linear in the dimension of the input attributes.
For another related example, also consider K(·, ·) defined by K(x, z) = (xᵀz + c)², where the parameter c controls the relative weighting between the x_i (first order) and the x_i x_j (second order) terms.
More broadly, the kernel K(x, z) = (xᵀz + c)^k corresponds to a feature mapping to a (d+k choose k)-dimensional feature space, consisting of all monomials of the form x_{i₁} x_{i₂} · · · x_{iₖ} that are up to order k. However, despite working in this
O(dk )-dimensional space, computing K(x, z) still takes only O(d) time, and
hence we never need to explicitly represent feature vectors in this very high
dimensional feature space.
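To make this efficiency point concrete, here is a small sketch (the particular numbers are arbitrary) comparing the explicit O(d²) feature computation with the O(d) kernel computation for the degree-2 kernel K(x, z) = (xᵀz + c)²:

```python
import numpy as np

def phi_degree2(x, c):
    """Explicit feature map of K(x, z) = (x^T z + c)^2: all x_i x_j terms,
    the sqrt(2c) x_i terms, and the constant c -- O(d^2) entries."""
    x = np.asarray(x, dtype=float)
    pairwise = np.outer(x, x).ravel()      # x_i x_j for all i, j
    linear = np.sqrt(2.0 * c) * x          # sqrt(2c) x_i
    return np.concatenate([pairwise, linear, [c]])

def k_degree2(x, z, c):
    """Same kernel computed directly in O(d) time."""
    return (np.dot(x, z) + c) ** 2

x = np.array([1.0, 2.0, 3.0])
z = np.array([0.5, -1.0, 2.0])
c = 1.0
# Both routes give the same value (up to floating point error).
print(phi_degree2(x, c) @ phi_degree2(z, c))   # explicit inner product
print(k_degree2(x, z, c))                      # kernel trick
```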
Is there a feature map φ such that the kernel K defined above satisfies K(x, z) = φ(x)ᵀφ(z)? In this particular example, the answer is yes. This kernel is called
the Gaussian kernel, and corresponds to an infinite dimensional feature
mapping φ. We will give a precise characterization about what properties
a function K needs to satisfy so that it can be a valid kernel function that
corresponds to some feature map φ.
Part VI
Support Vector Machines
This set of notes presents the Support Vector Machine (SVM) learning al-
gorithm. SVMs are among the best (and many believe are indeed the best)
“off-the-shelf” supervised learning algorithms. To tell the SVM story, we’ll
need to first talk about margins and the idea of separating data with a large
“gap.” Next, we’ll talk about the optimal margin classifier, which will lead
us into a digression on Lagrange duality. We’ll also see kernels, which give
a way to apply SVMs efficiently in very high dimensional (such as infinite-
dimensional) feature spaces, and finally, we’ll close off the story with the
SMO algorithm, which gives an efficient implementation of SVMs.
2 Margins: Intuition
We’ll start our story on SVMs by talking about margins. This section will
give the intuitions about margins and about the “confidence” of our predic-
tions; these ideas will be made formal in Section 4.
Consider logistic regression, where the probability p(y = 1|x; θ) is mod-
eled by hθ (x) = g(θT x). We then predict “1” on an input x if and only if
hθ (x) ≥ 0.5, or equivalently, if and only if θT x ≥ 0. Consider a positive
training example (y = 1). The larger θT x is, the larger also is hθ (x) = p(y =
1|x; θ), and thus also the higher our degree of “confidence” that the label is 1.
Thus, informally we can think of our prediction as being very confident that
[Figure: a linear decision boundary with three labeled points A, B, and C at decreasing distances from it]
Notice that the point A is very far from the decision boundary. If we are
asked to make a prediction for the value of y at A, it seems we should be
quite confident that y = 1 there. Conversely, the point C is very close to
the decision boundary, and while it’s on the side of the decision boundary
on which we would predict y = 1, it seems likely that just a small change to
the decision boundary could easily have caused our prediction to be y = 0.
Hence, we’re much more confident about our prediction at A than at C. The
point B lies in-between these two cases, and more broadly, we see that if
a point is far from the separating hyperplane, then we may be significantly
more confident in our predictions. Again, informally we think it would be
nice if, given a training set, we manage to find a decision boundary that
allows us to make all correct and confident (meaning far from the decision
boundary) predictions on the training examples. We’ll formalize this later
using the notion of geometric margins.
3 Notation
To make our discussion of SVMs easier, we’ll first need to introduce a new
notation for talking about classification. We will be considering a linear
classifier for a binary classification problem with labels y and features x.
From now, we’ll use y ∈ {−1, 1} (instead of {0, 1}) to denote the class labels.
Also, rather than parameterizing our linear classifier with the vector θ, we
will use parameters w, b, and write our classifier as

h_{w,b}(x) = g(wᵀx + b),

where g(z) = 1 if z ≥ 0, and g(z) = −1 otherwise. Given a training example (x^{(i)}, y^{(i)}), we define the functional margin of (w, b) with respect to the example as

γ̂^{(i)} = y^{(i)}( wᵀx^{(i)} + b ).
Note that if y (i) = 1, then for the functional margin to be large (i.e., for
our prediction to be confident and correct), we need wT x(i) + b to be a large
positive number. Conversely, if y (i) = −1, then for the functional margin
to be large, we need wT x(i) + b to be a large negative number. Moreover, if
y (i) (wT x(i) + b) > 0, then our prediction on this example is correct. (Check
this yourself.) Hence, a large functional margin represents a confident and a
correct prediction.
For a linear classifier with the choice of g given above (taking values in
{−1, 1}), there’s one property of the functional margin that makes it not a
very good measure of confidence, however. Given our choice of g, we note that
if we replace w with 2w and b with 2b, then since g(wT x + b) = g(2wT x + 2b),
this would not change hw,b (x) at all. I.e., g, and hence also hw,b (x), depends
only on the sign, but not on the magnitude, of wT x + b. However, replacing
(w, b) with (2w, 2b) also results in multiplying our functional margin by a
factor of 2. Thus, it seems that by exploiting our freedom to scale w and b,
we can make the functional margin arbitrarily large without really changing
anything meaningful. Intuitively, it might therefore make sense to impose
some sort of normalization condition such as that ||w||2 = 1; i.e., we might
replace (w, b) with (w/||w||2 , b/||w||2 ), and instead consider the functional
margin of (w/||w||2 , b/||w||2 ). We’ll come back to this later.
Given a training set S = {(x^{(i)}, y^{(i)}); i = 1, . . . , n}, we also define the functional margin of (w, b) with respect to S as the smallest of the functional margins of the individual training examples. Denoted by γ̂, this can therefore be written:

γ̂ = min_{i=1,...,n} γ̂^{(i)}.
Next, let’s talk about geometric margins. Consider the picture below:
[Figure: a positive training example at point A, at distance γ^{(i)} from the decision boundary; w is orthogonal to the boundary]

The point on the decision boundary closest to A is x^{(i)} − γ^{(i)} · w/‖w‖. This point lies on the decision boundary, and all points x on the decision boundary satisfy the equation wᵀx + b = 0. Hence,

wᵀ( x^{(i)} − γ^{(i)} · w/‖w‖ ) + b = 0.

Solving for γ^{(i)} yields γ^{(i)} = (w/‖w‖)ᵀ x^{(i)} + b/‖w‖.
This was worked out for the case of a positive training example at A in the
figure, where being on the “positive” side of the decision boundary is good.
More generally, we define the geometric margin of (w, b) with respect to a
training example (x(i) , y (i) ) to be
γ^{(i)} = y^{(i)} ( (w/‖w‖)ᵀ x^{(i)} + b/‖w‖ ).
Note that if ||w|| = 1, then the functional margin equals the geometric
margin—this thus gives us a way of relating these two different notions of
margin. Also, the geometric margin is invariant to rescaling of the parame-
ters; i.e., if we replace w with 2w and b with 2b, then the geometric margin
does not change. This will in fact come in handy later. Specifically, because
of this invariance to the scaling of the parameters, when trying to fit w and b
to training data, we can impose an arbitrary scaling constraint on w without
changing anything important; for instance, we can demand that ||w|| = 1, or
|w1 | = 5, or |w1 + b| + |w2 | = 2, and any of these can be satisfied simply by
rescaling w and b.
Finally, given a training set S = {(x(i) , y (i) ); i = 1, . . . , n}, we also define
the geometric margin of (w, b) with respect to S to be the smallest of the
geometric margins on the individual training examples:
γ = min_{i=1,...,n} γ^{(i)}.
on the training set and a good “fit” to the training data. Specifically, this
will result in a classifier that separates the positive and the negative training
examples with a “gap” (geometric margin).
For now, we will assume that we are given a training set that is linearly
separable; i.e., that it is possible to separate the positive and negative ex-
amples using some separating hyperplane. How will we find the one that
achieves the maximum geometric margin? We can pose the following opti-
mization problem:
max_{γ,w,b}  γ
s.t.  y^{(i)}(wᵀx^{(i)} + b) ≥ γ,  i = 1, . . . , n,
      ‖w‖ = 1.

The ‖w‖ = 1 constraint is non-convex, so we instead transform the problem into

max_{γ̂,w,b}  γ̂/‖w‖
s.t.  y^{(i)}(wᵀx^{(i)} + b) ≥ γ̂,  i = 1, . . . , n.

Here, we're going to maximize γ̂/‖w‖, subject to the functional margins all being at least γ̂. Since the geometric and functional margins are related by γ = γ̂/‖w‖, this will give us the answer we want. Moreover, we've gotten rid of the constraint ‖w‖ = 1 that we didn't like. The downside is that we now have a nasty (again, non-convex) objective function γ̂/‖w‖; and, we still don't have any off-the-shelf software that can solve this form of an optimization problem.
Let’s keep going. Recall our earlier discussion that we can add an arbi-
trary scaling constraint on w and b without changing anything. This is the
key idea we’ll use now. We will introduce the scaling constraint that the
functional margin of w, b with respect to the training set must be 1:
γ̂ = 1.
Here, the βi ’s are called the Lagrange multipliers. We would then find
and set L’s partial derivatives to zero:
∂L/∂w_i = 0;   ∂L/∂β_i = 0,
and solve for w and β.
In this section, we will generalize this to constrained optimization prob-
lems in which we may have inequality as well as equality constraints. Due to
time constraints, we won’t really be able to do the theory of Lagrange duality
justice in this class,5 but we will give the main ideas and results, which we
will then apply to our optimal margin classifier’s optimization problem.
Consider the following, which we’ll call the primal optimization problem:
min_w  f(w)
s.t.   g_i(w) ≤ 0,  i = 1, . . . , k
       h_i(w) = 0,  i = 1, . . . , l.
To solve it, we start by defining the generalized Lagrangian
L(w, α, β) = f(w) + ∑_{i=1}^k α_i g_i(w) + ∑_{i=1}^l β_i h_i(w).
Here, the αi ’s and βi ’s are the Lagrange multipliers. Consider the quantity
θ_P(w) = max_{α,β : α_i ≥ 0} L(w, α, β).
Here, the “P” subscript stands for “primal.” Let some w be given. If w
violates any of the primal constraints (i.e., if either g_i(w) > 0 or h_i(w) ≠ 0
for some i), then you should be able to verify that
θ_P(w) = max_{α,β : α_i ≥ 0} [ f(w) + ∑_{i=1}^k α_i g_i(w) + ∑_{i=1}^l β_i h_i(w) ]    (13)
       = ∞.    (14)
Conversely, if the constraints are indeed satisfied for a particular value of w,
then θP (w) = f (w). Hence,
θ_P(w) = { f(w)   if w satisfies the primal constraints
         { ∞      otherwise.
5
Readers interested in learning more about this topic are encouraged to read, e.g., R.
T. Rockafellar (1970), Convex Analysis, Princeton University Press.
Thus, θ_P takes the same value as the objective in our problem for all values of w that satisfy the primal constraints, and is positive infinity if the constraints are violated. Hence, if we consider the minimization problem

min_w θ_P(w) = min_w max_{α,β : α_i ≥ 0} L(w, α, β),

we see that it is the same problem as (i.e., it has the same solutions as) our
original, primal problem. For later use, we also define the optimal value of
the objective to be p∗ = minw θP (w); we call this the value of the primal
problem.
Now, let’s look at a slightly different problem. We define
θ_D(α, β) = min_w L(w, α, β).
Here, the “D” subscript stands for “dual.” Note also that whereas in the
definition of θP we were optimizing (maximizing) with respect to α, β, here
we are minimizing with respect to w.
We can now pose the dual optimization problem:
max_{α,β : α_i ≥ 0} θ_D(α, β) = max_{α,β : α_i ≥ 0} min_w L(w, α, β).
This is exactly the same as our primal problem shown above, except that the
order of the “max” and the “min” are now exchanged. We also define the
optimal value of the dual problem's objective to be d* = max_{α,β : α_i ≥ 0} θ_D(α, β).
How are the primal and the dual problems related? It can easily be shown
that
d* = max_{α,β : α_i ≥ 0} min_w L(w, α, β) ≤ min_w max_{α,β : α_i ≥ 0} L(w, α, β) = p*.
(You should convince yourself of this; this follows from the “max min” of a
function always being less than or equal to the “min max.”) However, under
certain conditions, we will have
d∗ = p∗ ,
so that we can solve the dual problem in lieu of the primal problem. Let’s
see what these conditions are.
Suppose f and the gi ’s are convex,6 and the hi ’s are affine.7 Suppose
further that the constraints gi are (strictly) feasible; this means that there
exists some w so that gi (w) < 0 for all i.
6
When f has a Hessian, then it is convex if and only if the Hessian is positive semi-
definite. For instance, f (w) = wT w is convex; similarly, all linear (and affine) functions
are also convex. (A function f can also be convex without being differentiable, but we
won’t need those more general definitions of convexity here.)
7
I.e., there exists ai , bi , so that hi (w) = aTi w + bi . “Affine” means the same thing as
linear, except that we also allow the extra intercept term bi .
∂/∂w_i L(w*, α*, β*) = 0,   i = 1, . . . , d    (15)
∂/∂β_i L(w*, α*, β*) = 0,   i = 1, . . . , l    (16)
α_i* g_i(w*) = 0,   i = 1, . . . , k    (17)
g_i(w*) ≤ 0,   i = 1, . . . , k    (18)
α_i* ≥ 0,   i = 1, . . . , k    (19)
We have one such constraint for each training example. Note that from the
KKT dual complementarity condition, we will have αi > 0 only for the train-
ing examples that have functional margin exactly equal to one (i.e., the ones
corresponding to constraints that hold with equality, gi (w) = 0). Consider
the figure below, in which a maximum margin separating hyperplane is shown
by the solid line.
The points with the smallest margins are exactly the ones closest to the
decision boundary; here, these are the three points (one negative and two pos-
itive examples) that lie on the dashed lines parallel to the decision boundary.
Thus, only three of the αi ’s—namely, the ones corresponding to these three
training examples—will be non-zero at the optimal solution to our optimiza-
tion problem. These three points are called the support vectors in this
problem. The fact that the number of support vectors can be much smaller than the size of the training set will be useful later.
Let’s move on. Looking ahead, as we develop the dual form of the prob-
lem, one key idea to watch out for is that we’ll try to write our algorithm
in terms of only the inner product hx(i) , x(j) i (think of this as (x(i) )T x(j) )
between points in the input feature space. The fact that we can express our
algorithm in terms of these inner products will be key when we apply the
kernel trick.
When we construct the Lagrangian for our optimization problem we have:
L(w, b, α) = (1/2)‖w‖² − ∑_{i=1}^n α_i [ y^{(i)}( wᵀx^{(i)} + b ) − 1 ].    (21)
Note that there’re only “αi ” but no “βi ” Lagrange multipliers, since the
problem has only inequality constraints.
Let’s find the dual form of the problem. To do so, we need to first
minimize L(w, b, α) with respect to w and b (for fixed α), to get θD , which
we’ll do by setting the derivatives of L with respect to w and b to zero. We
have:

∇_w L(w, b, α) = w − ∑_{i=1}^n α_i y^{(i)} x^{(i)} = 0,

which implies that

w = ∑_{i=1}^n α_i y^{(i)} x^{(i)}.    (22)

Taking the derivative with respect to b similarly gives

∂/∂b L(w, b, α) = ∑_{i=1}^n α_i y^{(i)} = 0.    (23)
If we take the definition of w in Equation (22) and plug that back into
the Lagrangian (Equation 21), and simplify, we get
L(w, b, α) = ∑_{i=1}^n α_i − (1/2) ∑_{i,j=1}^n y^{(i)} y^{(j)} α_i α_j (x^{(i)})ᵀx^{(j)} − b ∑_{i=1}^n α_i y^{(i)}.
But from Equation (23), the last term must be zero, so we obtain
L(w, b, α) = ∑_{i=1}^n α_i − (1/2) ∑_{i,j=1}^n y^{(i)} y^{(j)} α_i α_j (x^{(i)})ᵀx^{(j)}.
You should also be able to verify that the conditions required for p∗ = d∗
and the KKT conditions (Equations 15–19) to hold are indeed satisfied in
our optimization problem. Hence, we can solve the dual in lieu of solving
b* = − ( max_{i : y^{(i)}=−1} w*ᵀx^{(i)} + min_{i : y^{(i)}=1} w*ᵀx^{(i)} ) / 2.    (25)
(Check for yourself that this is correct.)
Before moving on, let’s also take a more careful look at Equation (22),
which gives the optimal value of w in terms of (the optimal value of) α.
Suppose we’ve fit our model’s parameters to a training set, and now wish to
make a prediction at a new input point x. We would then calculate wᵀx + b,
and predict y = 1 if and only if this quantity is bigger than zero. But
using (22), this quantity can also be written:
wᵀx + b = ( ∑_{i=1}^n α_i y^{(i)} x^{(i)} )ᵀ x + b    (26)
        = ∑_{i=1}^n α_i y^{(i)} ⟨x^{(i)}, x⟩ + b.    (27)
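As a small illustrative sketch of the prediction rule (27) (assuming we already have the optimal α, b, and the training data), note that only the support vectors, i.e. the examples with α_i > 0, contribute to the sum:

```python
import numpy as np

def svm_predict(x, X, y, alpha, b):
    """Predict the label of x via w^T x + b = sum_i alpha_i y^(i) <x^(i), x> + b.

    Only support vectors (alpha_i > 0) contribute to the sum."""
    sv = alpha > 1e-8                                   # indices of support vectors
    score = np.sum(alpha[sv] * y[sv] * (X[sv] @ x)) + b
    return 1 if score >= 0 else -1
```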
Thus, examples are now permitted to have (functional) margin less than 1,
and if an example has functional margin 1 − ξ_i (with ξ_i > 0), we would pay
a cost of the objective function being increased by Cξi . The parameter C
controls the relative weighting between the twin goals of making the ||w||2
small (which we saw earlier makes the margin large) and of ensuring that
most examples have functional margin at least 1.
As before, we can form the Lagrangian:
L(w, b, ξ, α, r) = (1/2)wᵀw + C ∑_{i=1}^n ξ_i − ∑_{i=1}^n α_i [ y^{(i)}( x^{(i)ᵀ}w + b ) − 1 + ξ_i ] − ∑_{i=1}^n r_i ξ_i.
Now, all that remains is to give an algorithm for actually solving the dual
problem, which we will do in the next section.
max_α  W(α₁, α₂, . . . , α_n).
Here, we think of W as just some function of the parameters αi ’s, and for now
ignore any relationship between this problem and SVMs. We’ve already seen
two optimization algorithms, gradient ascent and Newton’s method. The
new algorithm we’re going to consider here is called coordinate ascent:
Loop until convergence: {
    For i = 1, . . . , n, {
        α_i := arg max_{α̂_i} W(α₁, . . . , α_{i−1}, α̂_i, α_{i+1}, . . . , α_n).
    }
}
Thus, in the innermost loop of this algorithm, we will hold all the variables
except for some αi fixed, and reoptimize W with respect to just the parameter
αi . In the version of this method presented here, the inner-loop reoptimizes
the variables in order α1 , α2 , . . . , αn , α1 , α2 , . . .. (A more sophisticated version
might choose other orderings; for instance, we may choose the next variable
to update according to which one we expect to allow us to make the largest
increase in W (α).)
When the function W happens to be of such a form that the “arg max”
in the inner loop can be performed efficiently, then coordinate ascent can be
a fairly efficient algorithm. Here’s a picture of coordinate ascent in action:
[Figure: contours of a quadratic function, together with the path taken by coordinate ascent starting from its initialization at (2, −2)]
The ellipses in the figure are the contours of a quadratic function that
we want to optimize. Coordinate ascent was initialized at (2, −2), and also
plotted in the figure is the path that it took on its way to the global maximum.
Notice that on each step, coordinate ascent takes a step that’s parallel to one
of the axes, since only one variable is being optimized at a time.
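To make this concrete, here is a minimal sketch of coordinate ascent on a simple two-variable concave quadratic (the particular objective is an arbitrary illustration, not anything from the SVM dual); each inner step maximizes exactly over one coordinate while holding the other fixed:

```python
def W(a1, a2):
    """A concave quadratic to maximize: W(a1, a2) = -(a1^2 + 2 a2^2 - a1 a2 - a1 - a2)."""
    return -(a1**2 + 2*a2**2 - a1*a2 - a1 - a2)

# Coordinate ascent: repeatedly maximize W over one coordinate at a time.
a1, a2 = 2.0, -2.0            # initialization, as in the figure
for _ in range(20):
    a1 = (a2 + 1.0) / 2.0     # argmax over a1 with a2 fixed (set dW/da1 = 0)
    a2 = (a1 + 1.0) / 4.0     # argmax over a2 with a1 fixed (set dW/da2 = 0)

print(a1, a2, W(a1, a2))      # converges to the global maximum of W
```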
9.2 SMO
We close off the discussion of SVMs by sketching the derivation of the SMO
algorithm. Some details will be left to the homework, and for others you
may refer to the paper excerpt handed out in class.
Here’s the (dual) optimization problem that we want to solve:
max_α  W(α) = ∑_{i=1}^n α_i − (1/2) ∑_{i,j=1}^n y^{(i)} y^{(j)} α_i α_j ⟨x^{(i)}, x^{(j)}⟩    (31)
s.t.   0 ≤ α_i ≤ C,  i = 1, . . . , n    (32)
       ∑_{i=1}^n α_i y^{(i)} = 0.    (33)
Let’s say we have set of αi ’s that satisfy the constraints (32-33). Now,
suppose we want to hold α2 , . . . , αn fixed, and take a coordinate ascent step
and reoptimize the objective with respect to α1 . Can we make any progress?
The answer is no, because the constraint (33) ensures that
α₁ y^{(1)} = − ∑_{i=2}^n α_i y^{(i)},

or, multiplying both sides by y^{(1)},

α₁ = − y^{(1)} ∑_{i=2}^n α_i y^{(i)}.
(This step used the fact that y (1) ∈ {−1, 1}, and hence (y (1) )2 = 1.) Hence,
α1 is exactly determined by the other αi ’s, and if we were to hold α2 , . . . , αn
fixed, then we can’t make any change to α1 without violating the con-
straint (33) in the optimization problem.
Thus, if we want to update some subset of the α_i's, we must update at
least two of them simultaneously in order to keep satisfying the constraints.
This motivates the SMO algorithm, which simply does the following:
Repeat till convergence {
1. Select some pair αi and αj to update next (using a heuristic that
tries to pick the two that will allow us to make the biggest progress
towards the global maximum).
2. Reoptimize W (α) with respect to αi and αj , while holding all the
other αk ’s (k 6= i, j) fixed.
}
To test for convergence of this algorithm, we can check whether the KKT
conditions (Equations 28-30) are satisfied to within some tol. Here, tol is
the convergence tolerance parameter, and is typically set to around 0.01 to
0.001. (See the paper and pseudocode for details.)
The key reason that SMO is an efficient algorithm is that the update to
αi , αj can be computed very efficiently. Let’s now briefly sketch the main
ideas for deriving the efficient update.
Let’s say we currently have some setting of the αi ’s that satisfy the con-
straints (32-33), and suppose we’ve decided to hold α3 , . . . , αn fixed, and
want to reoptimize W (α1 , α2 , . . . , αn ) with respect to α1 and α2 (subject to
the constraints). From (33), we require that
α₁ y^{(1)} + α₂ y^{(2)} = − ∑_{i=3}^n α_i y^{(i)}.
Since the right hand side is fixed (as we’ve fixed α3 , . . . αn ), we can just let
it be denoted by some constant ζ:
α1 y (1) + α2 y (2) = ζ. (34)
We can thus picture the constraints on α1 and α2 as follows:
[Figure: the box [0, C] × [0, C] in the (α₁, α₂) plane and the line α₁y^{(1)} + α₂y^{(2)} = ζ, giving a lower bound L and an upper bound H on α₂]
From the constraints (32), we know that α1 and α2 must lie within the box
[0, C] × [0, C] shown. Also plotted is the line α1 y (1) + α2 y (2) = ζ, on which we
know α1 and α2 must lie. Note also that, from these constraints, we know
L ≤ α2 ≤ H; otherwise, (α1 , α2 ) can’t simultaneously satisfy both the box
and the straight line constraint. In this example, L = 0. But depending on
what the line α1 y (1) + α2 y (2) = ζ looks like, this won’t always necessarily be
the case; but more generally, there will be some lower-bound L and some
upper-bound H on the permissible values for α2 that will ensure that α1 , α2
lie within the box [0, C] × [0, C].
Using Equation (34), we can also write α1 as a function of α2 :
α1 = (ζ − α2 y (2) )y (1) .
(Check this derivation yourself; we again used the fact that y (1) ∈ {−1, 1} so
that (y^{(1)})² = 1.) Hence, the objective W(α) can be written

W(α₁, α₂, . . . , α_n) = W( (ζ − α₂y^{(2)})y^{(1)}, α₂, . . . , α_n ).

Treating α₃, . . . , α_n as constants, this is just a quadratic function of α₂, so its unconstrained maximizer, which we denote α₂^{new,unclipped}, is found by setting the derivative with respect to α₂ to zero. Clipping this value to the interval [L, H] gives

α₂^{new} = { H                    if α₂^{new,unclipped} > H
           { α₂^{new,unclipped}   if L ≤ α₂^{new,unclipped} ≤ H
           { L                    if α₂^{new,unclipped} < L
Finally, having found the α2new , we can use Equation (34) to go back and find
the optimal value of α1new .
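A minimal sketch of this clipping step (a generic helper, not Platt's full SMO implementation; the particular values of ζ, L, H, and C below are assumptions chosen to be consistent with the box and line constraints):

```python
def clip_alpha2(alpha2_unclipped, L, H):
    """Clip the unconstrained maximizer of the quadratic in alpha_2 to [L, H]."""
    if alpha2_unclipped > H:
        return H
    if alpha2_unclipped < L:
        return L
    return alpha2_unclipped

def recover_alpha1(zeta, alpha2_new, y1, y2):
    """Use equation (34), alpha_1 y^(1) + alpha_2 y^(2) = zeta, to get alpha_1."""
    return (zeta - alpha2_new * y2) * y1   # valid since (y^(1))^2 = 1

# Example with y1 = 1, y2 = -1, zeta = 0.3, C = 1, so L = 0.0 and H = 0.7.
a2 = clip_alpha2(1.4, L=0.0, H=0.7)          # -> 0.7 (clipped to H)
a1 = recover_alpha1(0.3, a2, y1=1, y2=-1)    # -> 1.0, still inside [0, C]
```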
There’re a couple more details that are quite easy but that we’ll leave you
to read about yourself in Platt’s paper: One is the choice of the heuristics
used to select the next αi , αj to update; the other is how to update b as the
SMO algorithm is run.
CS229 Lecture notes
Andrew Ng
Part VI
Learning Theory
1 Bias/variance tradeoff
When talking about linear regression, we discussed the problem of whether
to fit a “simple” model such as the linear “y = θ0 +θ1 x,” or a more “complex”
model such as the polynomial “y = θ0 + θ1 x + · · · θ5 x5 .” We saw the following
example:
[Figure: fits of a linear model (left), a quadratic model (middle), and a 5th-order polynomial (right) to the same data, plotted as y versus x]
Fitting a 5th order polynomial to the data (rightmost figure) did not
result in a good model. Specifically, even though the 5th order polynomial
did a very good job predicting y (say, prices of houses) from x (say, living
area) for the examples in the training set, we do not expect the model shown
to be a good one for predicting the prices of houses not in the training set. In
other words, what’s has been learned from the training set does not generalize
well to other houses. The generalization error (which will be made formal
shortly) of a hypothesis is its expected error on examples not necessarily in
the training set.
Both the models in the leftmost and the rightmost figures above have
large generalization error. However, the problems that the two models suffer
from are very different. If the relationship between y and x is not linear,
then even if we were fitting a linear model to a very large amount of training
data, the linear model would still fail to accurately capture the structure
in the data. Informally, we define the bias of a model to be the expected
generalization error even if we were to fit it to a very (say, infinitely) large
training set. Thus, for the problem above, the linear model suffers from large
bias, and may underfit (i.e., fail to capture structure exhibited by) the data.
Apart from bias, there’s a second component to the generalization error,
consisting of the variance of a model fitting procedure. Specifically, when
fitting a 5th order polynomial as in the rightmost figure, there is a large risk
that we’re fitting patterns in the data that happened to be present in our
small, finite training set, but that do not reflect the wider pattern of the
relationship between x and y. This could be, say, because in the training set
we just happened by chance to get a slightly more-expensive-than-average
house here, and a slightly less-expensive-than-average house there, and so
on. By fitting these “spurious” patterns in the training set, we might again
obtain a model with large generalization error. In this case, we say the model
has large variance.1
Often, there is a tradeoff between bias and variance. If our model is too
“simple” and has very few parameters, then it may have large bias (but small
variance); if it is too “complex” and has very many parameters, then it may
suffer from large variance (but have smaller bias). In the example above,
fitting a quadratic function does better than either of the extremes of a first
or a fifth order polynomial.
2 Preliminaries
In this set of notes, we begin our foray into learning theory. Apart from
being interesting and enlightening in its own right, this discussion will also
help us hone our intuitions and derive rules of thumb about how to best
apply learning algorithms in different settings. We will also seek to answer
a few questions: First, can we make formal the bias/variance tradeoff that was just discussed? This will also eventually lead us to talk about model selection methods, which can, for instance, automatically decide what order
polynomial to fit to a training set. Second, in machine learning it’s really
1
In these notes, we will not try to formalize the definitions of bias and variance beyond
this discussion. While bias and variance are straightforward to define formally for, e.g.,
linear regression, there have been several proposals for the definitions of bias and variance
for classification, and there is as yet no agreement on what is the “right” and/or the most
useful formalism.
generalization error that we care about, but most learning algorithms fit their
models to the training set. Why should doing well on the training set tell us
anything about generalization error? Specifically, can we relate error on the
training set to generalization error? Third and finally, are there conditions
under which we can actually prove that learning algorithms will work well?
We start with two simple but very useful lemmas.
Lemma. (The union bound). Let A1 , A2 , . . . , Ak be k different events (that
may not be independent). Then
P (A1 ∪ · · · ∪ Ak ) ≤ P (A1 ) + . . . + P (Ak ).
In probability theory, the union bound is usually stated as an axiom
(and thus we won’t try to prove it), but it also makes intuitive sense: The
probability of any one of k events happening is at most the sums of the
probabilities of the k different events.
Lemma. (Hoeffding inequality) Let Z₁, . . . , Z_m be m independent and identically distributed (iid) random variables drawn from a Bernoulli(φ) distribution. I.e., P(Z_i = 1) = φ, and P(Z_i = 0) = 1 − φ. Let φ̂ = (1/m) ∑_{i=1}^m Z_i be the mean of these random variables, and let any γ > 0 be fixed. Then

P(|φ − φ̂| > γ) ≤ 2 exp(−2γ²m).
This lemma (which in learning theory is also called the Chernoff bound)
says that if we take φ̂—the average of m Bernoulli(φ) random variables—to
be our estimate of φ, then the probability of our being far from the true value
is small, so long as m is large. Another way of saying this is that if you have
a biased coin whose chance of landing on heads is φ, then if you toss it m
times and calculate the fraction of times that it came up heads, that will be
a good estimate of φ with high probability (if m is large).
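As a quick sanity check (an illustrative simulation, not part of any proof, with arbitrary choices of φ, m, and γ), we can compare the empirical frequency of large deviations |φ̂ − φ| > γ against the Hoeffding bound 2 exp(−2γ²m):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, m, gamma, trials = 0.3, 500, 0.05, 5000

# Empirical probability that the sample mean deviates from phi by more than gamma.
deviations = 0
for _ in range(trials):
    phi_hat = rng.binomial(1, phi, size=m).mean()
    if abs(phi_hat - phi) > gamma:
        deviations += 1

print("empirical P(|phi_hat - phi| > gamma):", deviations / trials)
print("Hoeffding bound 2 exp(-2 gamma^2 m):  ", 2 * np.exp(-2 * gamma**2 * m))
```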
Using just these two lemmas, we will be able to prove some of the deepest
and most important results in learning theory.
To simplify our exposition, let's restrict our attention to binary classification in which the labels are y ∈ {0, 1}. Everything we'll say here generalizes to other problems, including regression and multi-class classification.
We assume we are given a training set S = {(x(i) , y (i) ); i = 1, . . . , m}
of size m, where the training examples (x(i) , y (i) ) are drawn iid from some
probability distribution D. For a hypothesis h, we define the training error
(also called the empirical risk or empirical error in learning theory) to
be

ε̂(h) = (1/m) ∑_{i=1}^m 1{ h(x^{(i)}) ≠ y^{(i)} }.
We also define the generalization error of h to be ε(h) = P_{(x,y)∼D}( h(x) ≠ y ).
I.e. this is the probability that, if we now draw a new example (x, y) from
the distribution D, h will misclassify it.
Note that we have assumed that the training data was drawn from the
same distribution D with which we’re going to evaluate our hypotheses (in
the definition of generalization error). This is sometimes also referred to as
one of the PAC assumptions.2
Consider the setting of linear classification, and let hθ (x) = 1{θ T x ≥ 0}.
What’s a reasonable way of fitting the parameters θ? One approach is to try
to minimize the training error, and pick

θ̂ = arg min_θ ε̂(h_θ).
We call this process empirical risk minimization (ERM), and the resulting
hypothesis output by the learning algorithm is ĥ = hθ̂ . We think of ERM
as the most “basic” learning algorithm, and it will be this algorithm that we
focus on in these notes. (Algorithms such as logistic regression can also be
viewed as approximations to empirical risk minimization.)
In our study of learning theory, it will be useful to abstract away from
the specific parameterization of hypotheses and from issues such as whether
we’re using a linear classifier. We define the hypothesis class H used by a
learning algorithm to be the set of all classifiers considered by it. For linear
classification, H = {hθ : hθ (x) = 1{θ T x ≥ 0}, θ ∈ Rn+1 } is thus the set of
all classifiers over X (the domain of the inputs) where the decision boundary
is linear. More broadly, if we were studying, say, neural networks, then we
could let H be the set of all classifiers representable by some neural network
architecture.
Empirical risk minimization can now be thought of as a minimization over
the class of functions H, in which the learning algorithm picks the hypothesis:

ĥ = arg min_{h∈H} ε̂(h).
2
PAC stands for “probably approximately correct,” which is a framework and set of
assumptions under which numerous results on learning theory were proved. Of these, the
assumption of training and testing on the same distribution, and the assumption of the
independently drawn training examples, were the most important.
Thus, ε̂(hi ) is exactly the mean of the m random variables Zj that are drawn
iid from a Bernoulli distribution with mean ε(hi ). Hence, we can apply the
Hoeffding inequality, and obtain

P( |ε(h_i) − ε̂(h_i)| > γ ) ≤ 2 exp(−2γ²m).
This shows that, for our particular hi , training error will be close to
generalization error with high probability, assuming m is large. But we
don’t just want to guarantee that ε(hi ) will be close to ε̂(hi ) (with high
probability) for just only one particular hi . We want to prove that this will
be true for simultaneously for all h ∈ H. To do so, let Ai denote the event
that |ε(hi ) − ε̂(hi )| > γ. We’ve already show that, for any particular Ai , it
holds true that P (Ai ) ≤ 2 exp(−2γ 2 m). Thus, using the union bound, we
have that

P(∃ h ∈ H. |ε(h) − ε̂(h)| > γ) = P(A₁ ∪ · · · ∪ A_k)
                               ≤ ∑_{i=1}^k P(A_i)
                               ≤ 2k exp(−2γ²m).

Subtracting both sides from 1, we find

P(¬∃ h ∈ H. |ε(h) − ε̂(h)| > γ) = P(∀ h ∈ H. |ε(h) − ε̂(h)| ≤ γ) ≥ 1 − 2k exp(−2γ²m).

(The "¬" symbol means "not.") So, with probability at least 1 − 2k exp(−2γ²m),
we have that ε(h) will be within γ of ε̂(h) for all h ∈ H. This is called a uni-
form convergence result, because this is a bound that holds simultaneously
for all (as opposed to just one) h ∈ H.
In the discussion above, what we did was, for particular values of m and γ, give a bound on the probability that, for some h ∈ H, |ε(h) − ε̂(h)| > γ.
There are three quantities of interest here: m, γ, and the probability of error;
we can bound either one in terms of the other two.
For instance, we can ask the following question: Given γ and some δ > 0,
how large must m be before we can guarantee that with probability at least
1 − δ, training error will be within γ of generalization error? By setting
δ = 2k exp(−2γ 2 m) and solving for m, [you should convince yourself this is
the right thing to do!], we find that if
m ≥ (1/(2γ²)) log(2k/δ),
then with probability at least 1 − δ, we have that |ε(h) − ε̂(h)| ≤ γ for all
h ∈ H. (Equivalently, this shows that the probability that |ε(h) − ε̂(h)| > γ for some h ∈ H is at most δ.) This bound tells us how many training examples we need in order to make a guarantee. The training set size m that
a certain method or algorithm requires in order to achieve a certain level of
performance is also called the algorithm’s sample complexity.
The key property of the bound above is that the number of training
examples needed to make this guarantee is only logarithmic in k, the number
of hypotheses in H. This will be important later.
Similarly, we can also hold m and δ fixed and solve for γ in the previous
equation, and show [again, convince yourself that this is right!] that with
probability 1 − δ, we have that for all h ∈ H,
|ε̂(h) − ε(h)| ≤ √( (1/(2m)) log(2k/δ) ).
Now, let's assume that uniform convergence holds, i.e., that |ε(h) − ε̂(h)| ≤
γ for all h ∈ H. What can we prove about the generalization of our learning
algorithm that picked ĥ = arg minh∈H ε̂(h)?
Define h∗ = arg minh∈H ε(h) to be the best possible hypothesis in H. Note
that h∗ is the best that we could possibly do given that we are using H, so
it makes sense to compare our performance to that of h∗ . We have:
ε(ĥ) ≤ ε̂(ĥ) + γ
≤ ε̂(h∗ ) + γ
≤ ε(h∗ ) + 2γ
The first line used the fact that |ε(ĥ)− ε̂(ĥ)| ≤ γ (by our uniform convergence
assumption). The second used the fact that ĥ was chosen to minimize ε̂(h),
and hence ε̂(ĥ) ≤ ε̂(h) for all h, and in particular ε̂(ĥ) ≤ ε̂(h∗ ). The third
line used the uniform convergence assumption again, to show that ε̂(h∗ ) ≤
ε(h∗ ) + γ. So, what we’ve shown is the following: If uniform convergence
occurs, then the generalization error of ĥ is at most 2γ worse than the best
possible hypothesis in H!
Let's put all this together into a theorem.
Theorem. Let |H| = k, and let any m, δ be fixed. Then with probability at
least 1 − δ, we have that
ε(ĥ) ≤ ( min_{h∈H} ε(h) ) + 2 √( (1/(2m)) log(2k/δ) ).

This is proved by letting γ equal the square-root term, using our previous argu-
ment that uniform convergence occurs with probability at least 1 − δ, and
then noting that uniform convergence implies ε(h) is at most 2γ higher than
ε(h∗ ) = minh∈H ε(h) (as we showed previously).
This also quantifies what we were saying previously about the
bias/variance tradeoff in model selection. Specifically, suppose we have some
hypothesis class H, and are considering switching to some much larger hy-
pothesis class H0 ⊇ H. If we switch to H0 , then the first term minh ε(h)
8
can only decrease (since we’d then be taking a min over a larger set of func-
tions). Hence, by learning using a larger hypothesis class,
our "bias" can only decrease. However, if k increases, then the second 2√· term would also
increase. This increase corresponds to our “variance” increasing when we use
a larger hypothesis class.
By holding γ and δ fixed and solving for m like we did before, we can
also obtain the following sample complexity bound:
Corollary. Let |H| = k, and let any δ, γ be fixed. Then for ε(ĥ) ≤
minh∈H ε(h) + 2γ to hold with probability at least 1 − δ, it suffices that
m ≥ (1/(2γ²)) log(2k/δ)
  = O( (1/γ²) log(k/δ) ).
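For a rough feel for these numbers, here is a tiny sketch computing the bound m ≥ (1/(2γ²)) log(2k/δ) for a few example hypothesis-class sizes (the particular values of k, γ, and δ are arbitrary illustrations):

```python
import math

def sample_complexity(k, gamma, delta):
    """Smallest m guaranteeing eps(h_hat) <= min_h eps(h) + 2*gamma w.p. >= 1 - delta."""
    return math.ceil((1.0 / (2.0 * gamma**2)) * math.log(2.0 * k / delta))

for k in (10, 10**4, 10**8):
    print(k, sample_complexity(k, gamma=0.05, delta=0.01))
# m grows only logarithmically in the number of hypotheses k.
```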
“well” using a hypothesis class that has d parameters, generally we’re going
to need on the order of a linear number of training examples in d.
(At this point, it’s worth noting that these results were proved for an al-
gorithm that uses empirical risk minimization. Thus, while the linear depen-
dence of sample complexity on d does generally hold for most discriminative
learning algorithms that try to minimize training error or some approximation to training error, these conclusions do not always apply as readily to non-ERM learning algorithms. Giving good theoretical guarantees on
many non-ERM learning algorithms is still an area of active research.)
The other part of our previous argument that’s slightly unsatisfying is
that it relies on the parameterization of H. Intuitively, this doesn’t seem like
it should matter: We had written the class of linear classifiers as hθ (x) =
1{θ0 + θ1 x1 + · · · θn xn ≥ 0}, with n + 1 parameters θ0 , . . . , θn . But it could
also be written hu,v (x) = 1{(u20 − v02 ) + (u21 − v12 )x1 + · · · (u2n − vn2 )xn ≥ 0}
with 2n + 2 parameters ui , vi . Yet, both of these are just defining the same
H: The set of linear classifiers in n dimensions.
To derive a more satisfying argument, let's define a few more things. Given a set S = {x^{(1)}, . . . , x^{(d)}} (no relation to the training set) of points x^{(i)} ∈ X, we say that H shatters S if H can realize any labeling on S.
I.e., if for any set of labels {y (1) , . . . , y (d) }, there exists some h ∈ H so that
h(x(i) ) = y (i) for all i = 1, . . . d.
Given a hypothesis class H, we then define its Vapnik-Chervonenkis
dimension, written VC(H), to be the size of the largest set that is shattered
by H. (If H can shatter arbitrarily large sets, then VC(H) = ∞.)
For instance, consider the following set of three points:
[Figure: a set of three points in the (x₁, x₂) plane]
Can the set H of linear classifiers in two dimensions (h(x) = 1{θ₀ + θ₁x₁ + θ₂x₂ ≥ 0}) shatter the set above? The answer is yes. Specifically, we
see that, for any of the eight possible labelings of these points, we can find a
linear classifier that obtains “zero training error” on them:
[Figure: the eight possible labelings of the three points, each with a linear classifier achieving zero training error]
In other words, under the definition of the VC dimension, in order to
prove that VC(H) is at least d, we need to show only that there’s at least
one set of size d that H can shatter.
The following theorem, due to Vapnik, can then be shown. (This is, many
would argue, the most important theorem in all of learning theory.)
• Overfitting: the model is too closely related to the examples in the training set and
doesn’t generalize well to other examples.
• Underfitting: the model didn’t gather enough information from the training set, and
doesn’t capture the link between the features x and the target y.
• The data is simply noisy, that is, the model is neither overfitting nor underfitting, and the high MSE is simply due to the amount of noise in the dataset.
Test MSE = E[ (y − f̂(x))² ]
         = E[ (ε + f(x) − f̂(x))² ]
         = E[ε²] + E[ (f(x) − f̂(x))² ]
         = σ² + ( E[f(x) − f̂(x)] )² + Var( f(x) − f̂(x) )
         = σ² + Bias( f̂(x) )² + Var( f̂(x) )
There is nothing we can do about the first term σ², as we cannot predict the noise by definition. The bias term is due to underfitting, meaning that on average, f̂ does not predict f. The last term is closely related to overfitting: the prediction f̂ is too close to the values y_train and varies a lot with the choice of our training set.
To sum up, we can understand our test MSE as the sum of irreducible noise (σ²), a bias term (underfitting), and a variance term (overfitting).
Hence, when analyzing the performance of a machine learning algorithm, we must always
ask ourselves how to reduce the bias without increasing the variance, and respectively how to
reduce the variance without increasing the bias. Most of the time, reducing one will increase
the other, and there is a tradeoff between bias and variance.
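As an illustrative simulation (the data-generating function, noise level, and polynomial degrees are all assumptions chosen for the example), we can estimate the bias² and variance of an estimator by refitting it on many freshly drawn training sets and inspecting its predictions at a fixed test point:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)     # true regression function
sigma, n_train, n_repeats = 0.3, 30, 500
x_test = 0.5

for degree in (1, 3, 9):
    preds = []
    for _ in range(n_repeats):
        # Draw a fresh training set y = f(x) + noise and fit a polynomial.
        x = rng.uniform(0, 1, n_train)
        y = f(x) + sigma * rng.normal(size=n_train)
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x_test))
    preds = np.array(preds)
    bias_sq = (preds.mean() - f(x_test)) ** 2
    variance = preds.var()
    print(f"degree {degree}: bias^2 = {bias_sq:.4f}, variance = {variance:.4f}")
```

Low-degree fits tend to show higher bias², while high-degree fits tend to show higher variance, matching the tradeoff described above.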
2 Error Analysis
Even though understanding whether our poor test error is due to high bias or high variance
is important, knowing which parts of the machine learning algorithm lead to this error or
score is crucial.
Consider the machine learning pipeline on figure 1.
The algorithm is divided into several steps:
2. Preprocessing to remove the background on the image. For instance, if the images are taken from a security camera, the background is always the same, and we could remove it easily by keeping only the pixels that changed on the image.
Figure 1: Face recognition pipeline
If you build a complicated system like this one, you might want to figure out how much
error is attributable to each of the components, how good is each of these green boxes.
Indeed, if one of these boxes is really problematic, you might want to spend more time
trying to improve the performance of that one green box. How do you decide what part to
focus on?
One thing we can do is plug in the ground-truth for each component, and see how
accuracy changes. Let’s say the overall accuracy of the system is 85% (pretty bad). You
can now take your development set and manually give it the perfect background removal,
that is, instead of using your background removal algorithm, manually specify the perfect
background removal yourself (using Photoshop for instance), and look at how much that affects the performance of the overall system.
Now let's say the accuracy only improves by 0.1%. This gives us an upper bound, that
is even if we worked for years on background removal, it wouldn’t help our system by more
than 0.1%.
Now let’s give the pipeline the perfect face detection by specifying the position of the
face manually, see how much we improve the performance, and so on.
The results are specified in the table 1.
Looking at the table, we know that working on the background removal won’t help much.
It also tells us where the biggest jumps are. We notice that having an accurate face detection
mechanism really improves the performance, and similarly, the eyes really help making the
prediction more accurate.
Error analysis is also useful when publishing a paper, since it’s a convenient way to
Component Accuracy
Overall system 85%
Preprocess (remove background) 85.1%
Face detection 91%
Eyes segmentation 95%
Nose segmentation 96%
Mouth segmentation 97%
Logistic regression 100%
Table 1: Accuracy when providing the system with the perfect component
analyze the error of an algorithm and explain which parts should be improved.
Ablative analysis
While error analysis tries to explain the difference between current performance and perfect
performance, ablative analysis tries to explain the difference between some baseline (much
poorer) performance and current performance.
For instance, suppose you have built a good anti-spam classifier by adding lots of clever
features to logistic regression
• Spelling correction
• Javascript parser
and your question is: How much did each of these components really help?
In this example, let’s say that simple logistic regression without any clever features gets
94% performance, but when adding these clever features, we get 99.9% performance. In ablative analysis, what we do is start from the current level of performance 99.9%, and slowly take away all of these features to see how it affects performance. The results are
provided in table 2.
When presenting the results in a paper, ablative analysis really helps analyze which features helped decrease the misclassification rate. Instead of simply giving the loss/error
rate of the algorithm, we can provide evidence that some specific features are actually more
important than others.
Component Accuracy
Overall system 99.9%
Spelling correction 99.0%
Sender host features 98.9%
Email header features 98.9%
Email text parser features 95%
Javascript parser 94.5%
Features from images 94.0%
Some Calculations from Bias Variance
Christopher Ré
May 7, 2019
By setting ∇_θ ℓ(θ, λ) = 0 we can solve for the θ̂ that minimizes the above prob-
lem. Explicitly, we have:
θ̂ = ( XᵀX + λI )⁻¹ Xᵀ y    (1)
Since σi2 ≥ 0 for all i ∈ [d], if we set λ > 0 then X T X + λI is full rank, and the
inverse of (X T X + λI) exists. In turn, this means there is a unique such θ̂.
out the full formal argument, but it suffices to make one observation that is
immediate from Eq. 1: the variance of θ̂ is proportional to the eigenvalues of
(X T X + λI)−1 . To see this, observe that the eigenvalues of an inverse are just
the inverse of the eigenvalues:
eig( (XᵀX + λI)⁻¹ ) = ( 1/(σ₁² + λ), . . . , 1/(σ_d² + λ) ).
Now, condition on the points we draw, namely X. Then, recall that ran-
domness is in the label noise (recall the linear regression model y ∼ Xθ∗ +
N (0, τ 2 I) = N (Xθ∗ , τ 2 I)).
Recall a fact about the multivariate normal distribution: if y ∼ N(μ, Σ), then Ay ∼ N(Aμ, AΣAᵀ) for any fixed matrix A.
The last line above suggests that the more regularization we add (larger the λ),
the more the estimated θ̂ will be shrunk towards 0. In other words, regulariza-
tion adds bias (towards zero in this case). Though we paid the cost of higher
bias, we gain by reducing the variance of θ̂. To see this bias-variance tradeoff
concretely, observe the covariance matrix of θ̂:
C := Cov[θ̂]
= (X T X + λI)−1 X T (τ 2 I) X(X T X + λI)−1
and
τ 2 σ12 τ 2 σd2
eig(C) = , . . . ,
(σ12 + λ)2 (σd2 + λ)2
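A small numerical sketch (the data-generating choices below are arbitrary assumptions) of how the estimator in Eq. 1 is computed and how increasing λ shrinks both θ̂ and the eigenvalues of its covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 50, 5, 0.5
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + tau * rng.normal(size=n)   # y ~ N(X theta*, tau^2 I)

for lam in (0.0, 1.0, 10.0):
    A = np.linalg.inv(X.T @ X + lam * np.eye(d))
    theta_hat = A @ X.T @ y                              # Eq. (1)
    cov = A @ X.T @ (tau**2 * np.eye(n)) @ X @ A         # Cov[theta_hat] given X
    print(f"lambda={lam}: ||theta_hat||={np.linalg.norm(theta_hat):.3f}, "
          f"max eig of Cov={np.linalg.eigvalsh(cov).max():.4f}")
# Larger lambda shrinks theta_hat toward 0 (more bias) and reduces its variance.
```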
Gradient Descent  We show that you can initialize gradient descent in a way that effectively regularizes underdetermined least squares, even with no regularization penalty (λ = 0). Our first observation is that any point x ∈ R^d can be decomposed into two orthogonal components x₀, x₁ such that x = x₀ + x₁.
Recall that Null(X T ) and Range(X) are orthogonal subspaces by the fundamen-
tal theory of linear algebra. We write P0 for the projection on the null and P1
for the projection on the range, then x0 = P0 (x) and x1 = P1 (x).
If one initializes at a point θ, then we observe that the gradient is orthogonal to the null space. That is, if g(θ) = Xᵀ(Xθ − y), then g(θ)ᵀP₀(v) = 0 for any v ∈ R^d.
But, then:
That is, no learning happens in the null. Whatever portion is in the null that
we initialize stays there throughout execution.
A key property of the Moore-Penrose pseudoinverse is that if θ̂ = (XᵀX)⁺Xᵀy, then P₀(θ̂) = 0. Hence, the gradient descent solution initialized at θ₀ can be
written θ̂ + P0 (θ0 ). Two immediate observations:
We’ve argued that there are many ways to find equivalent solutions, and that
this allows us to understand the effect on the model fitting procedure as regu-
larization. Thus, there are many ways to find that equivalent solution. Many
modern methods of machine learning including dropout and data augmentation
are not penalty, but their effect is understood as regularization. One contrast
with the above methods is that they often depend on some property of the data
or for how much they effectively regularization. In some sense, they adapt to
the data. A final comment is that in the same sense above, adding more data
regularizes the model as well!
3
CS229 Lecture notes
Andrew Ng
Part VI
Regularization and model
selection
Suppose we are trying to select among several different models for a learning
problem. For instance, we might be using a polynomial regression model
hθ (x) = g(θ0 + θ1 x + θ2 x2 + · · · + θk xk ), and wish to decide if k should be
0, 1, . . . , or 10. How can we automatically select a model that represents
a good tradeoff between the twin evils of bias and variance1 ? Alternatively,
suppose we want to automatically choose the bandwidth parameter τ for
locally weighted regression, or the parameter C for our ℓ₁-regularized SVM.
How can we do that?
For the sake of concreteness, in these notes we assume we have some
finite set of models M = {M1 , . . . , Md } that we’re trying to select among.
For instance, in our first example above, the model Mi would be an i-th
order polynomial regression model. (The generalization to infinite M is not
hard.2 ) Alternatively, if we are trying to decide between using an SVM, a
neural network or logistic regression, then M may contain these models.
1
Given that we said in the previous set of notes that bias and variance are two very
different beasts, some readers may be wondering if we should be calling them “twin” evils
here. Perhaps it’d be better to think of them as non-identical twins. The phrase “the
fraternal twin evils of bias and variance” doesn’t have the same ring to it, though.
2
If we are trying to choose from an infinite set of models, say corresponding to the
possible values of the bandwidth τ ∈ R+ , we may discretize τ and consider only a finite
number of possible values for it. More generally, most of the algorithms described here
can all be viewed as performing optimization search in the space of models, and we can
perform this search over infinite model classes as well.
1 Cross validation
Let's suppose we are, as usual, given a training set S. Given what we know about empirical risk minimization, here's what might initially seem like an algorithm, resulting from using empirical risk minimization for model selection:
1. Train each model M_i on S, to get some hypothesis h_i.
2. Pick the hypothesis with the smallest training error.
This algorithm does not work. Consider choosing the order of a poly-
nomial. The higher the order of the polynomial, the better it will fit the
training set S, and thus the lower the training error. Hence, this method will
always select a high-variance, high-degree polynomial model, which we saw previously is often a poor choice.
Here’s an algorithm that works better. In hold-out cross validation
(also called simple cross validation), we do the following:
1. Randomly split S into S_train (say, 70% of the data) and S_cv (the remaining 30%). Here, S_cv is called the hold-out cross validation set.
2. Train each model M_i on S_train only, to get some hypothesis h_i.
3. Select and output the hypothesis hi that had the smallest error ε̂Scv (hi )
on the hold out cross validation set. (Recall, ε̂Scv (h) denotes the empir-
ical error of h on the set of examples in Scv .)
By testing on a set of examples Scv that the models were not trained on,
we obtain a better estimate of each hypothesis hi ’s true generalization error,
and can then pick the one with the smallest estimated generalization error.
Usually, somewhere between 1/4 and 1/3 of the data is used in the hold-out cross validation set, and 30% is a typical choice.
Optionally, step 3 in the algorithm may also be replaced with selecting
the model Mi according to arg mini ε̂Scv (hi ), and then retraining Mi on the
entire training set S. (This is often a good idea, with one exception being learning algorithms that are very sensitive to perturbations of the initial conditions and/or data. For these methods, M_i doing well on S_train does not
necessarily mean it will also do well on Scv , and it might be better to forgo
this retraining step.)
The disadvantage of using hold out cross validation is that it “wastes”
about 30% of the data. Even if we were to take the optional step of retraining
the model on the entire training set, it’s still as if we’re trying to find a good
model for a learning problem in which we had 0.7m training examples, rather
than m training examples, since we’re testing models that were trained on
only 0.7m examples each time. While this is fine if data is abundant and/or
cheap, in learning problems in which data is scarce (consider a problem with
m = 20, say), we’d like to do something better.
Here is a method, called k-fold cross validation, that holds out less
data each time:
1. Randomly split S into k disjoint subsets of m/k training examples each. Let's call these subsets S1 , . . . , Sk .
2. For each model Mi , we evaluate it as follows:
For j = 1, . . . , k
Train the model Mi on S1 ∪ · · · ∪ Sj−1 ∪ Sj+1 ∪ · · · ∪ Sk (i.e., train on all the data except Sj ) to get some hypothesis hij .
Test the hypothesis hij on Sj , to get ε̂Sj (hij ).
The estimated generalization error of model Mi is then calculated as the average of the ε̂Sj (hij )'s (averaged over j).
3. Pick the model Mi with the lowest estimated generalization error, and retrain that model on the entire training set S. The resulting hypothesis is then output as our final answer.
A typical choice for the number of folds to use here would be k = 10.
While the fraction of data held out each time is now 1/k—much smaller
than before—this procedure may also be more computationally expensive
than hold-out cross validation, since we now need to train each model k
times.
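For concreteness, here is a small NumPy sketch of estimating one model's generalization error by k-fold cross validation. It is an illustrative sketch, not code from the course; the `train` and `error` callables are hypothetical stand-ins supplied by the caller.

import numpy as np

def k_fold_cv_error(train, error, S_x, S_y, k=10):
    """Estimate the generalization error of one model with k-fold CV.

    `train(X, y)` fits the model and returns a hypothesis h;
    `error(h, X, y)` computes the empirical error of h on a dataset.
    """
    m = len(S_y)
    idx = np.random.permutation(m)        # shuffle before splitting
    folds = np.array_split(idx, k)        # k disjoint subsets S_1, ..., S_k
    errs = []
    for j in range(k):
        test_idx = folds[j]
        train_idx = np.concatenate([folds[l] for l in range(k) if l != j])
        h = train(S_x[train_idx], S_y[train_idx])          # train on all data except S_j
        errs.append(error(h, S_x[test_idx], S_y[test_idx]))  # test on S_j
    return np.mean(errs)

# Model selection: pick the model with the lowest estimated error,
# then retrain it on the full training set S.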
While k = 10 is a commonly used choice, in problems in which data is
really scarce, sometimes we will use the extreme choice of k = m in order
to leave out as little data as possible each time. In this setting, we would
repeatedly train on all but one of the training examples in S, and test on that
held-out example. The resulting m = k errors are then averaged together to
obtain our estimate of the generalization error of a model. This method has
its own name; since we’re holding out one training example at a time, this
method is called leave-one-out cross validation.
Finally, even though we have described the different versions of cross vali-
dation as methods for selecting a model, they can also be used more simply to
evaluate a single model or algorithm. For example, if you have implemented
4
some learning algorithm and want to estimate how well it performs for your
application (or if you have invented a novel learning algorithm and want to
report in a technical paper how well it performs on various test sets), cross
validation would give a reasonable way of doing so.
2 Feature Selection
One special and important case of model selection is called feature selection.
To motivate this, imagine that you have a supervised learning problem where
the number of features n is very large (perhaps n ≫ m), but you suspect that
there is only a small number of features that are “relevant” to the learning
task. Even if you use a simple linear classifier (such as the perceptron)
over the n input features, the VC dimension of your hypothesis class would
still be O(n), and thus overfitting would be a potential problem unless the
training set is fairly large.
In such a setting, you can apply a feature selection algorithm to reduce the
number of features. Given n features, there are 2^n possible feature subsets
(since each of the n features can either be included or excluded from the
subset), and thus feature selection can be posed as a model selection problem
over 2^n possible models. For large values of n, it's usually too expensive to
explicitly enumerate over and compare all 2^n models, and so typically some
heuristic search procedure is used to find a good feature subset. The following
search procedure is called forward search:
1. Initialize F = ∅.
2. Repeat {
(a) For i = 1, . . . , n: if i ∉ F, let Fi = F ∪ {i}, and use some version of cross validation to evaluate the features Fi (i.e., train your learning algorithm using only the features in Fi , and estimate its generalization error).
(b) Set F to be the best feature subset found in step (a).
}
3. Select and output the best feature subset that was evaluated during the entire search procedure.
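A minimal Python sketch of this greedy search follows (illustrative only; the `evaluate` callable is a hypothetical stand-in for cross-validated error on a feature subset).

def forward_search(n_features, evaluate, max_size=None):
    """Greedy forward feature selection.

    `evaluate(F)` is assumed to return an estimated generalization error
    (e.g., via cross validation) for the feature subset F.
    """
    F = set()
    best_overall = (float("inf"), frozenset())
    max_size = max_size or n_features
    while len(F) < max_size:
        # Try adding each feature not already in F.
        candidates = [(evaluate(F | {i}), F | {i})
                      for i in range(n_features) if i not in F]
        best_err, best_F = min(candidates, key=lambda t: t[0])
        F = best_F
        if best_err < best_overall[0]:
            best_overall = (best_err, frozenset(F))
    # Output the best subset evaluated during the entire search.
    return best_overall[1]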
Forward search repeatedly calls the learning algorithm to evaluate candidate subsets, which can be expensive. A cheaper alternative is filter feature selection, which computes a score S(i) for each feature and keeps the highest-scoring ones; a common choice of score is the mutual information
MI(x_i, y) = \sum_{x_i} \sum_{y} p(x_i, y) \log \frac{p(x_i, y)}{p(x_i)\, p(y)}.
(The equation above assumes that xi and y are binary-valued; more generally
the summations would be over the domains of the variables.) The probabil-
ities above p(xi , y), p(xi ) and p(y) can all be estimated according to their
empirical distributions on the training set.
To gain intuition about what this score does, note that the mutual infor-
mation can also be expressed as a Kullback-Leibler (KL) divergence:
MI(xi , y) = KL (p(xi , y)||p(xi )p(y))
You’ll get to play more with KL-divergence in Problem set #3, but infor-
mally, this gives a measure of how different the probability distributions
p(xi , y) and p(xi )p(y) are. If xi and y are independent random variables,
then we would have p(xi , y) = p(xi )p(y), and the KL-divergence between the
two distributions will be zero. This is consistent with the idea that if xi and y
are independent, then xi is clearly very “non-informative” about y, and thus
the score S(i) should be small. Conversely, if xi is very “informative” about
y, then their mutual information MI(xi , y) would be large.
One final detail: Now that you’ve ranked the features according to their
scores S(i), how do you decide how many features k to choose? Well, one
standard way to do so is to use cross validation to select among the possible
values of k. For example, when applying naive Bayes to text classification—
a problem where n, the vocabulary size, is usually very large—using this
method to select a feature subset often results in increased classifier accuracy.
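As a rough illustration (not code from the course), the score S(i) can be computed from empirical counts as follows, assuming binary-valued features and labels as above.

import numpy as np

def mutual_information(xi, y):
    """Empirical mutual information MI(x_i, y) for binary x_i and y.

    Probabilities are estimated from counts on the training set; a term is
    skipped when its joint probability is zero (0 log 0 = 0).
    """
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_joint = np.mean((xi == a) & (y == b))
            p_a, p_b = np.mean(xi == a), np.mean(y == b)
            if p_joint > 0:
                mi += p_joint * np.log(p_joint / (p_a * p_b))
    return mi

# Rank features by S(i) = MI(x_i, y); choose how many features k to keep
# by cross validation, as described above.
# scores = [mutual_information(X[:, i], y) for i in range(X.shape[1])]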
p(\theta|S) = \frac{p(S|\theta)\, p(\theta)}{p(S)}
            = \frac{\left(\prod_{i=1}^{m} p(y^{(i)}|x^{(i)}, \theta)\right) p(\theta)}{\int_{\theta} \left(\prod_{i=1}^{m} p(y^{(i)}|x^{(i)}, \theta)\, p(\theta)\right) d\theta}   (1)
In the equation above, p(y (i) |x(i) , θ) comes from whatever model you’re using
for your learning problem. For example, if you are using Bayesian logistic re-
gression, then you might choose p(y^{(i)}|x^{(i)}, \theta) = h_\theta(x^{(i)})^{y^{(i)}} (1 - h_\theta(x^{(i)}))^{(1 - y^{(i)})},
where h_\theta(x^{(i)}) = 1/(1 + \exp(-\theta^T x^{(i)})).3
When we are given a new test example x and asked to make a prediction
on it, we can compute our posterior distribution on the class label using the
posterior distribution on θ:
p(y|x, S) = \int_{\theta} p(y|x, \theta)\, p(\theta|S)\, d\theta   (2)
In the equation above, p(θ|S) comes from Equation (1). Thus, for example,
if the goal is to predict the expected value of y given x, then we would output4
E[y|x, S] = \int_{y} y\, p(y|x, S)\, dy
The procedure that we’ve outlined here can be thought of as doing “fully
Bayesian” prediction, where our prediction is computed by taking an average
with respect to the posterior p(θ|S) over θ. Unfortunately, in general it is
computationally very difficult to compute this posterior distribution. This is
because it requires taking integrals over the (usually high-dimensional) θ as
in Equation (1), and this typically cannot be done in closed-form.
Thus, in practice we will instead approximate the posterior distribution
for θ. One common approximation is to replace our posterior distribution for
θ (as in Equation 2) with a single point estimate. The MAP (maximum
a posteriori) estimate for θ is given by
\theta_{\text{MAP}} = \arg\max_{\theta} \prod_{i=1}^{m} p(y^{(i)}|x^{(i)}, \theta)\, p(\theta).   (3)
3
Since we are now viewing θ as a random variable, it is okay to condition on its value,
and write “p(y|x, θ)” instead of “p(y|x; θ).”
4
The integral below would be replaced by a summation if y is discrete-valued.
Note that this is the same formula as for the ML (maximum likelihood)
estimate for θ, except for the prior p(θ) term at the end.
In practical applications, a common choice for the prior p(θ) is to assume
that θ ∼ N (0, τ 2 I). Using this choice of prior, the fitted parameters θMAP
will have smaller norm than that selected by maximum likelihood. (See
Problem Set #3.) In practice, this causes the Bayesian MAP estimate to be
less susceptible to overfitting than the ML estimate of the parameters. For
example, Bayesian logistic regression turns out to be an effective algorithm for
text classification, even though in text classification we usually have n ≫ m.
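For concreteness, here is a minimal NumPy sketch of computing θMAP for Bayesian logistic regression with the Gaussian prior θ ∼ N (0, τ²I) by gradient ascent on the log posterior. It is an illustrative sketch under these assumptions, not code from the course.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def map_logistic_regression(X, y, tau=1.0, alpha=0.1, iters=1000):
    """MAP estimate with prior theta ~ N(0, tau^2 I).

    Gradient ascent on the log posterior: the log-likelihood term plus the
    log-prior term -||theta||^2 / (2 tau^2), i.e. an L2 penalty on theta,
    which shrinks the fitted parameters relative to maximum likelihood.
    """
    m, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (y - h) - theta / tau**2   # gradient of the log posterior
        theta += (alpha / m) * grad
    return theta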
CS229 Lecture Notes
Andrew Ng and Kian Katanforoosh
Deep Learning
We now begin our study of deep learning. In this set of notes, we give an
overview of neural networks, discuss vectorization and discuss training neural
networks with backpropagation.
1 Neural Networks
We will start small and slowly build up a neural network, step by step. Recall
the housing price prediction problem from before: given the size of the house,
we want to predict the price.
Previously, we fitted a straight line to the graph. Now, instead of fitting a
straight line, we wish prevent negative housing prices by setting the absolute
minimum price as zero. This produces a “kink” in the graph as shown in
Figure 1.
Our goal is to input some input x into a function f (x) that outputs the
price of the house y. Formally, f : x → y. One of the simplest possible
neural networks is to define f (x) as a single “neuron” in the network where
f (x) = max(ax + b, 0), for some coefficients a, b. What f (x) does is return a
single value: (ax + b) or zero, whichever is greater. In the context of neural
networks, this function is called a ReLU (pronounced “ray-lu”), or rectified
linear unit. A more complex neural network may take the single neuron
described above and “stack” them together such that one neuron passes its
output as input into the next neuron, resulting in a more complex function.
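A tiny sketch (not from the original notes) of this single-neuron predictor:

def relu_neuron(x, a, b):
    """A single 'neuron' f(x) = max(ax + b, 0): the ReLU of an affine function."""
    return max(a * x + b, 0.0)

# e.g. a housing-price predictor that never returns a negative price:
# predicted_price = relu_neuron(living_area, a, b)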
Let us now deepen the housing prediction example. In addition to the size
of the house, suppose that you know the number of bedrooms, the zip code
Scribe: Albert Haque
[Figure 1: housing prices (price in $1000s) vs. living area (square feet), with the fitted curve clipped at zero so that predicted prices are never negative.]
We have described this neural network as if you (the reader) already have
the insight to determine these three factors ultimately affect the housing
price. Part of the magic of a neural network is that all you need are the
input features x and the output y while the neural network will figure out
everything in the middle by itself. The process of a neural network learning
the intermediate features is called end-to-end learning.
Following the housing example, formally, the input to a neural network is
a set of input features x1 , x2 , x3 , x4 . We connect these four features to three
neurons. These three "internal" neurons are called hidden units. The goal for
the neural network is to automatically determine three relevant features such
that the three features predict the price of a house. The only thing we must
provide to the neural network is a sufficient number of training examples
(x(i) , y (i) ). Often times, the neural network will discover complex features
which are very useful for predicting the output but may be difficult for a
human to understand since it does not have a “common” meaning. This is
why some people refer to neural networks as a black box, as it can be difficult
to understand the features it has invented.
Let us formalize this neural network representation. Suppose we have
three input features x1 , x2 , x3 which are collectively called the input layer,
four hidden units which are collectively called the hidden layer and one out-
put neuron called the output layer. The term hidden layer is called “hidden”
because we do not have the ground truth/training value for the hidden units.
This is in contrast to the input and output layers, both of which we know
the ground truth values from (x(i) , y (i) ).
The first hidden unit requires the input x1 , x2 , x3 and outputs a value
denoted by a1 . We use the letter a since it refers to the neuron’s “activation”
value. In this particular example, we have a single hidden layer but it is
possible to have multiple hidden layers. Let a_1^{[1]} denote the output value of
the first hidden unit in the first hidden layer. We use zero-indexing to refer
to the layer numbers. That is, the input layer is layer 0, the first hidden
layer is layer 1 and the output layer is layer 2. Again, more complex neural
networks may have more hidden layers. Given this mathematical notation,
the output of layer 2 is a_1^{[2]}. We can unify our notation:
x_1 = a_1^{[0]}   (1.1)
x_2 = a_2^{[0]}   (1.2)
x_3 = a_3^{[0]}   (1.3)
To clarify, foo^{[1]} with brackets denotes anything associated with layer 1, x^{(i)}
with parentheses refers to the i-th training example, and a_j^{[\ell]} refers to the
activation of the j-th unit in layer \ell.
[Diagram: inputs x1, x2, x3 feeding into the network, which outputs the estimated value of y.]
Returning to our neural network from before, the first hidden unit in the first
hidden layer will perform the following computation:
z_1^{[1]} = W_1^{[1]T} x + b_1^{[1]} \quad\text{and}\quad a_1^{[1]} = g(z_1^{[1]})   (1.7)
where W^{[1]} is a matrix of parameters and W_1^{[1]} refers to the first row of this
matrix. The parameters associated with the first hidden unit are the vector
W_1^{[1]} \in \mathbb{R}^3 and the scalar b_1^{[1]} \in \mathbb{R}. For the second and third hidden units in
the first hidden layer, the computation is defined as:
z_2^{[1]} = W_2^{[1]T} x + b_2^{[1]} \quad\text{and}\quad a_2^{[1]} = g(z_2^{[1]})
z_3^{[1]} = W_3^{[1]T} x + b_3^{[1]} \quad\text{and}\quad a_3^{[1]} = g(z_3^{[1]})
where each hidden unit has its corresponding parameters W and b. Moving
on, the output layer performs the computation:
z_1^{[2]} = W_1^{[2]T} a^{[1]} + b_1^{[2]} \quad\text{and}\quad a_1^{[2]} = g(z_1^{[2]})   (1.8)
2 Vectorization
In order to implement a neural network at a reasonable speed, one must be
careful when using for loops. In order to compute the hidden unit activations
in the first layer, we must compute z1 , ..., z4 and a1 , ..., a4 .
z_1^{[1]} = W_1^{[1]T} x + b_1^{[1]} \quad\text{and}\quad a_1^{[1]} = g(z_1^{[1]})   (2.1)
\vdots   (2.2)
z_4^{[1]} = W_4^{[1]T} x + b_4^{[1]} \quad\text{and}\quad a_4^{[1]} = g(z_4^{[1]})   (2.3)
The most natural way to implement this in code is to use a for loop. One of
the treasures that deep learning has given to the field of machine learning is
that deep learning algorithms have high computational requirements. As a
result, code will run very slowly if you use for loops.
Stacking the four hidden units into vectors, we have
z^{[1]} = \underbrace{W^{[1]}}_{4\times 3}\, \underbrace{x}_{3\times 1} + \underbrace{b^{[1]}}_{4\times 1}   (2.4)
where the \mathbb{R}^{d\times n} beneath each matrix indicates the dimensions. Expressing
this in matrix notation: z^{[1]} = W^{[1]} x + b^{[1]}. To compute a^{[1]} without a
for loop, we can leverage vectorized libraries in Matlab, Octave, or Python
which compute a^{[1]} = g(z^{[1]}) very fast by performing parallel element-wise
operations. Mathematically, we defined the sigmoid function g(z) as:
g(z) = \frac{1}{1 + e^{-z}} \quad\text{where } z \in \mathbb{R}   (2.5)
However, the sigmoid function can be defined not only for scalars but also
for vectors, where it is applied element-wise. In a Matlab/Octave-like pseudocode, we can define the sigmoid as g(z) = 1 ./ (1 + exp(-z)).
\underbrace{z^{[2]}}_{1\times 1} = \underbrace{W^{[2]}}_{1\times 4}\, \underbrace{a^{[1]}}_{4\times 1} + \underbrace{b^{[2]}}_{1\times 1} \quad\text{and}\quad \underbrace{a^{[2]}}_{1\times 1} = g(\underbrace{z^{[2]}}_{1\times 1})   (2.7)
Why do we not use the identity function for g(z)? That is, why not use
g(z) = z? Assume for sake of argument that b[1] and b[2] are zeros. Using
Equation (2.7), we have:
z [2] = W [2] a[1] (2.8)
= W [2] g(z [1] ) by definition (2.9)
= W [2] z [1] since g(z) = z (2.10)
= W [2] W [1] x from Equation (2.4) (2.11)
= \tilde{W} x \quad\text{where } \tilde{W} = W^{[2]} W^{[1]}   (2.12)
Notice how W [2] W [1] collapsed into W̃ . This is because applying a linear
function to another linear function will result in a linear function over the
original input (i.e., you can construct a W̃ such that W̃ x = W [2] W [1] x).
This loses much of the representational power of the neural network as often
times the output we are trying to predict has a non-linear relationship with
the inputs. Without non-linear activation functions, the neural network will
simply perform linear regression.
You may notice that we are attempting to add b[1] ∈ R4×1 to W [1] X ∈
R4×3 . Strictly following the rules of linear algebra, this is not allowed. In
practice however, this addition is performed using broadcasting. We create
an intermediate \tilde{b}^{[1]} \in \mathbb{R}^{4\times 3}:
\tilde{b}^{[1]} = \begin{bmatrix} | & | & | \\ b^{[1]} & b^{[1]} & b^{[1]} \\ | & | & | \end{bmatrix}   (2.15)
We can then perform the computation: Z [1] = W [1] X + b̃[1] . Often times, it
is not necessary to explicitly construct b̃[1] . By inspecting the dimensions in
(2.14), you can assume b[1] ∈ R4×1 is correctly broadcast to W [1] X ∈ R4×3 .
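As a rough NumPy sketch (not code from the course), the vectorized forward computation with broadcasting looks as follows; the shapes match the example in the text (W1 is 4 x 3, X stacks examples as columns, b1 is 4 x 1).

import numpy as np

def sigmoid(z):
    # Element-wise sigmoid, so it works on scalars, vectors, and matrices.
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(W1, X, b1):
    """Vectorized computation of Z[1] = W[1] X + b[1] and A[1] = g(Z[1]).

    NumPy broadcasts b1 (4 x 1) across the columns of W1 @ X, so we never
    need to construct the tiled matrix b~[1] explicitly.
    """
    Z1 = W1 @ X + b1
    A1 = sigmoid(Z1)
    return Z1, A1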
Putting it together: Suppose we have a training set (x(1) , y (1) ), ..., (x(n) , y (n) )
where x(i) is a picture and y (i) is a binary label for whether the picture con-
tains a cat or not (i.e., 1=contains a cat). First, we initialize the parameters
W [1] , b[1] , W [2] , b[2] to small random numbers. For each example, we compute
the output “probability” from the sigmoid function a[2](i) . Second, using the
logistic regression log likelihood:
\sum_{i=1}^{n} y^{(i)} \log a^{[2](i)} + (1 - y^{(i)}) \log(1 - a^{[2](i)})   (2.16)
3 Backpropagation
Instead of the housing example, we now have a new problem. Suppose we
wish to detect whether there is a soccer ball in an image or not. Given an
input image x(i) , we wish to output a binary prediction 1 if there is a ball in
the image and 0 otherwise.
Aside: Images can be represented as a matrix with number of elements
equal to the number of pixels. However, color images are digitally represented
as a volume (i.e., three-channels; or three matrices stacked on each other).
The number three is used because colors are represented as red-green-blue
(RGB) values. In the diagram below, we have a 64 × 64 × 3 image containing
a soccer ball. It is flattened into a single vector containing 12,288 elements.
A neural network model consists of two components: (i) the network
architecture, which defines how many layers, how many neurons, and how
the neurons are connected and (ii) the parameters (values; also known as
9
weights). In this section, we will talk about how to learn the parameters.
First we will talk about parameter initialization, optimization and analyzing
these parameters.
The next step is to compute how many parameters are in this network. One
way of doing this is to compute the forward propagation by hand.
z^{[1]} = W^{[1]} x^{(i)} + b^{[1]}   (3.1)
a^{[1]} = g(z^{[1]})   (3.2)
z^{[2]} = W^{[2]} a^{[1]} + b^{[2]}   (3.3)
a^{[2]} = g(z^{[2]})   (3.4)
z^{[3]} = W^{[3]} a^{[2]} + b^{[3]}   (3.5)
\hat{y}^{(i)} = a^{[3]} = g(z^{[3]})   (3.6)
We know that z [1] , a[1] ∈ R3×1 and z [2] , a[2] ∈ R2×1 and z [3] , a[3] ∈ R1×1 . As
of now, we do not know the size of W [1] . However, we can compute its size.
z^{[1]} = W^{[1]} x^{(i)} \in \mathbb{R}^{3\times 1}. \quad\text{Written as sizes: } \mathbb{R}^{3\times 1} = \mathbb{R}^{?\times ?} \times \mathbb{R}^{d\times 1}   (3.7)
Hence W^{[1]} \in \mathbb{R}^{3\times d} and b^{[1]} \in \mathbb{R}^{3\times 1}. Similarly,
W^{[2]} \in \mathbb{R}^{2\times 3},\ b^{[2]} \in \mathbb{R}^{2\times 1} \quad\text{and}\quad W^{[3]} \in \mathbb{R}^{1\times 2},\ b^{[3]} \in \mathbb{R}^{1\times 1}   (3.8)
The loss function L(ŷ, y) produces a single scalar value. For short, we will
refer to the loss value as L. Given this value, we now must update all
parameters in layers of the neural network. For any given layer index `, we
update them:
W^{[\ell]} = W^{[\ell]} - \alpha \frac{\partial \mathcal{L}}{\partial W^{[\ell]}}   (3.10)
b^{[\ell]} = b^{[\ell]} - \alpha \frac{\partial \mathcal{L}}{\partial b^{[\ell]}}   (3.11)
where α is the learning rate. To proceed, we must compute the gradient with
respect to the parameters: ∂L/∂W [`] and ∂L/∂b[`] .
Remember, we made a decision to not set all parameters to zero. What if
we had initialized all parameters to be zero? We know that z [3] = W [3] a[2] +b[3]
will evaluate to zero, because W [3] and b[3] are all zero. However, the output
of the neural network is defined as a[3] = g(z [3] ). Recall that g(·) is defined as
the sigmoid function. This means a[3] = g(0) = 0.5. Thus, no matter what
value of x(i) we provide, the network will output ŷ = 0.5.
What if we had initialized all parameters to be the same non-zero value?
In this case, consider the activations of the first layer:
Each element of the activation vector a[1] will be the same (because W [1]
contains all the same values). This behavior will occur at all layers of the
neural network. As a result, when we compute the gradient, all neurons in
a layer will be equally responsible for anything contributed to the final loss.
We call this property symmetry. This means each neuron (within a layer)
will receive the exact same gradient update value (i.e., all neurons will learn
the same thing).
In practice, it turns out there is something better than random initializa-
tion. It is called Xavier/He initialization and initializes the weights:
w^{[\ell]} \sim \mathcal{N}\left(0, \sqrt{\frac{2}{n^{[\ell]} + n^{[\ell-1]}}}\right)   (3.13)
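A small NumPy sketch of this initialization (illustrative only): random draws scaled as in Equation (3.13) break the symmetry so that neurons in a layer receive different gradients.

import numpy as np

def xavier_init(n_out, n_in):
    """Initialize W[l] of shape (n_out, n_in) with entries drawn from
    N(0, sqrt(2 / (n[l] + n[l-1]))), as in Equation (3.13)."""
    std = np.sqrt(2.0 / (n_out + n_in))
    return np.random.randn(n_out, n_in) * std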
3.2 Optimization
Recall our neural network parameters: W [1] , b[1] , W [2] , b[2] , W [3] , b[3] . To up-
date them, we use stochastic gradient descent (SGD) using the update rules
in Equations (3.10) and (3.11). We will first compute the gradient with re-
spect to W [3] . The reason for this is that the influence of W [1] on the loss
is more complex than that of W^{[3]}. This is because W^{[3]} is "closer" to the
output \hat{y} in the network.
Again, a^{[2]} depends on z^{[2]}, which in turn depends directly on W^{[2]}, which allows
us to complete the chain:
\frac{\partial \mathcal{L}}{\partial W^{[2]}} = \frac{\partial \mathcal{L}}{\partial a^{[3]}} \frac{\partial a^{[3]}}{\partial z^{[3]}} \frac{\partial z^{[3]}}{\partial a^{[2]}} \frac{\partial a^{[2]}}{\partial z^{[2]}} \frac{\partial z^{[2]}}{\partial W^{[2]}}   (3.26)
Recall \partial \mathcal{L}/\partial W^{[3]}:
\frac{\partial \mathcal{L}}{\partial W^{[3]}} = (a^{[3]} - y)\, a^{[2]T}   (3.27)
Since we computed ∂L/∂W [3] first, we know that a[2] = ∂z [3] /∂W [3] . Similarly
we have (a[3] − y) = ∂L/∂z [3] . These can help us compute ∂L/∂W [2] . We
substitute these values into Equation (3.26). This gives us:
\frac{\partial \mathcal{L}}{\partial W^{[2]}} = \underbrace{\frac{\partial \mathcal{L}}{\partial a^{[3]}} \frac{\partial a^{[3]}}{\partial z^{[3]}}}_{(a^{[3]} - y)}\ \underbrace{\frac{\partial z^{[3]}}{\partial a^{[2]}}}_{W^{[3]}}\ \underbrace{\frac{\partial a^{[2]}}{\partial z^{[2]}}}_{g'(z^{[2]})}\ \underbrace{\frac{\partial z^{[2]}}{\partial W^{[2]}}}_{a^{[1]}} = (a^{[3]} - y)\, W^{[3]}\, g'(z^{[2]})\, a^{[1]}   (3.28)
While we have greatly simplified the process, we are not done yet. Because
we are computing derivatives in higher dimensions, the exact order of matrix
multiplication required to compute Equation (3.28) is not clear. We must
reorder the terms in Equation (3.28) such that the dimensions align. First,
we note the dimensions of all the terms:
\underbrace{\frac{\partial \mathcal{L}}{\partial W^{[2]}}}_{2\times 3} = \underbrace{(a^{[3]} - y)}_{1\times 1}\ \underbrace{W^{[3]}}_{1\times 2}\ \underbrace{g'(z^{[2]})}_{2\times 1}\ \underbrace{a^{[1]}}_{3\times 1}   (3.29)
Notice how the terms do not align their shapes properly. We must rearrange
the terms by using properties of matrix algebra such that the matrix opera-
tions produce a result with the correct output shape. The correct ordering
is below:
\underbrace{\frac{\partial \mathcal{L}}{\partial W^{[2]}}}_{2\times 3} = \underbrace{W^{[3]T}}_{2\times 1} \circ \underbrace{g'(z^{[2]})}_{2\times 1}\ \underbrace{(a^{[3]} - y)}_{1\times 1}\ \underbrace{a^{[1]T}}_{1\times 3}   (3.30)
where J is the cost function J = \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}^{(i)} and \mathcal{L}^{(i)} is the loss for a single
example. The difference between the gradient descent update versus the stochastic
gradient descent version is that the cost function J gives more accurate gra-
dients whereas L(i) may be noisy. Stochastic gradient descent attempts to
approximate the gradient from (full) gradient descent. The disadvantage of
gradient descent is that it can be difficult to compute all activations for all
examples in a single forward or backwards propagation phase.
In practice, research and applications use mini-batch gradient descent.
This is a compromise between gradient descent and stochastic gradient descent. In the case of mini-batch gradient descent, the cost function J_{mb} is
defined as follows:
J_{mb} = \frac{1}{B} \sum_{i=1}^{B} \mathcal{L}^{(i)}   (3.32)
where B is the number of examples in the mini-batch.
There is another optimization method called momentum. Consider mini-
batch stochastic gradient. For any single layer `, the update rule is as follows:
v_{dW^{[\ell]}} = \beta\, v_{dW^{[\ell]}} + (1 - \beta) \frac{\partial J}{\partial W^{[\ell]}}
W^{[\ell]} = W^{[\ell]} - \alpha\, v_{dW^{[\ell]}}   (3.33)
Notice how there are now two stages instead of a single stage. The weight
update now depends on the cost J at this update step and the velocity vdW [`] .
The relative importance is controlled by β. Consider the analogy to a human
driving a car. While in motion, the car has momentum. If the driver were to
brake (or stop pressing the accelerator), the car would still continue moving
due to its momentum. Returning to optimization, the velocity vdW [`] will
keep track of the gradient over time. This technique has significantly helped
neural networks during the training phase.
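As a rough sketch (not from the original notes), one momentum update for a single layer's weights, following Equation (3.33):

import numpy as np

def momentum_step(W, v, grad, alpha=0.01, beta=0.9):
    """One momentum update; `grad` is dJ/dW on the current mini-batch."""
    v = beta * v + (1.0 - beta) * grad   # exponentially weighted average of gradients
    W = W - alpha * v                    # step in the direction of the velocity
    return W, v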
3.3.1 L2 Regularization
Let W below denote all the parameters in a model. In the case of neural
networks, you may think of applying the 2nd term to all layer weights W [`] .
For convenience, we simply write W . The L2 regularization adds another
term to the cost function:
J_{L2} = J + \frac{\lambda}{2} \|W\|^2   (3.34)
       = J + \frac{\lambda}{2} \sum_{ij} |W_{ij}|^2   (3.35)
       = J + \frac{\lambda}{2} W^T W   (3.36)
where J is the standard cost function from before, λ is an arbitrary value with
a larger value indicating more regularization and W contains all the weight
matrices, and where Equations (3.34), (3.35) and (3.36) are equivalent. The
update rule with L2 regularization becomes:
W = W - \alpha \frac{\partial J}{\partial W} - \alpha \frac{\lambda}{2} \frac{\partial\, W^T W}{\partial W}   (3.37)
  = (1 - \alpha\lambda) W - \alpha \frac{\partial J}{\partial W}   (3.38)
When we were updating our parameters using gradient descent, we did not
have the (1 − αλ)W term. This means with L2 regularization, every update
will include some penalization, depending on W . This penalization increases
the cost J, which encourages individual parameters to be small in magnitude,
which is a way to reduce overfitting.
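A one-line sketch of the resulting update, Equation (3.38) (illustrative only):

def l2_regularized_step(W, grad_J, alpha=0.01, lam=0.1):
    """Gradient step with L2 regularization: the (1 - alpha*lam) factor
    shrinks every weight toward zero on each update ("weight decay")."""
    return (1.0 - alpha * lam) * W - alpha * grad_J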
We then move this window slightly to the right in the image and repeat this
process. Once we have reached the end of the row, we start at the beginning
of the second row.
Once we have reached the end of the image, the parameters θ have “seen”
all pixels of the image: θ1 is no longer related to only the top left pixel. As a
result, whether the soccer ball appears in the bottom right or top left of the
image, the neural network will successfully detect the soccer ball.
1 Forward propagation
Recall that given input x, we define a[0] = x. Then for layer ` = 1, 2, . . . , N ,
where N is the number of layers of the network, we have
1. z [`] = W [`] a[`−1] + b[`]
2. a[`] = g [`] (z [`] )
In these notes we assume the nonlinearities g [`] are the same for all layers be-
sides layer N . This is because in the output layer we may be doing regression
[hence we might use g(x) = x] or binary classification [g(x) = sigmoid(x)] or
multiclass classification [g(x) = softmax(x)]. Hence we distinguish g [N ] from
g, and assume g is used for all layers besides layer N .
Finally, given the output of the network a[N ] , which we will more simply
denote as ŷ, we measure the loss J(W, b) = L(a[N ] , y) = L(ŷ, y). For example,
for real-valued regression we might use the squared loss
L(\hat{y}, y) = \frac{1}{2}(\hat{y} - y)^2
and for binary classification using logistic regression we use
L(ŷ, y) = −(y log ŷ + (1 − y) log(1 − ŷ))
or negative log-likelihood. Finally, for softmax regression over k classes, we
use the cross entropy loss
L(\hat{y}, y) = -\sum_{j=1}^{k} 1\{y = j\} \log \hat{y}_j
2 Backpropagation
Let’s define one more piece of notation that’ll be useful for backpropagation.1
We will define
δ [`] = ∇z[`] L(ŷ, y)
We can then define a three-step “recipe” for computing the gradients with
respect to every W [`] , b[`] as follows:
1. For output layer N , we have
\delta^{[N]} = \nabla_{z^{[N]}} L(\hat{y}, y)
2. For \ell = N - 1, N - 2, \ldots, 1, we have
\delta^{[\ell]} = (W^{[\ell+1]\top} \delta^{[\ell+1]}) \circ g'(z^{[\ell]})
3. Finally, the gradients for layer \ell are
\nabla_{W^{[\ell]}} J(W, b) = \delta^{[\ell]} a^{[\ell-1]\top}, \qquad \nabla_{b^{[\ell]}} J(W, b) = \delta^{[\ell]}
where we use \circ to indicate the elementwise product. Note the above procedure is for a single training example.
You can try applying the above algorithm to logistic regression (N = 1,
g [1] is the sigmoid function σ) to sanity check steps (1) and (3). Recall that
σ 0 (z) = σ(z) ◦ (1 − σ(z)) and σ(z [1] ) is simply a[1] . Note that for logistic
regression, if x is a column vector in Rd×1 , then W [1] ∈ R1×d , and hence
∇W [1] J(W, b) ∈ R1×d . Example code for two layers is also given at:
http://cs229.stanford.edu/notes/backprop.py
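For concreteness, here is a minimal NumPy sketch of the three-step recipe for a two-layer network with sigmoid activations and logistic loss, for a single training example. It is an illustrative sketch, not the code at the URL above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_layer_backprop(x, y, W1, b1, W2, b2):
    """Gradients for a 2-layer sigmoid network on one example x (column vector)."""
    # Forward pass.
    a0 = x
    z1 = W1 @ a0 + b1; a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2; a2 = sigmoid(z2)          # a2 is y_hat
    # Step 1: delta for the output layer. For logistic loss with a sigmoid
    # output, grad_{z2} L simplifies to (y_hat - y).
    delta2 = a2 - y
    # Step 2: propagate delta backwards through layer 1.
    delta1 = (W2.T @ delta2) * (a1 * (1 - a1))   # g'(z1) = a1 * (1 - a1)
    # Step 3: gradients with respect to the parameters.
    dW2, db2 = delta2 @ a1.T, delta2
    dW1, db1 = delta1 @ a0.T, delta1
    return dW1, db1, dW2, db2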
1
These notes are closely adapted from:
http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/
Scribe: Ziang Xie
CS229 Lecture notes
Andrew Ng
Thus, J measures the sum of squared distances between each training exam-
ple x(i) and the cluster centroid µc(i) to which it has been assigned. It can
be shown that k-means is exactly coordinate descent on J. Specifically, the
inner-loop of k-means repeatedly minimizes J with respect to c while holding
µ fixed, and then minimizes J with respect to µ while holding c fixed. Thus,
J must monotonically decrease, and the value of J must converge. (Usu-
ally, this implies that c and µ will converge too. In theory, it is possible for
k-means to oscillate between a few different clusterings that have exactly the
same value of J, but this almost never happens in practice.)
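A rough NumPy sketch of this coordinate descent (illustrative only, with a simple random initialization of the centroids):

import numpy as np

def kmeans(X, k, iters=100):
    """Coordinate descent on the k-means distortion J; X has one example per row."""
    mu = X[np.random.choice(len(X), k, replace=False)]   # initialize centroids
    for _ in range(iters):
        # Minimize J over c with mu fixed: assign each point to its closest centroid.
        c = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # Minimize J over mu with c fixed: move each centroid to the mean of its points.
        mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                       for j in range(k)])
    return c, mu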
and the parameter φj gives p(z (i) = j), and x(i) |z (i) = j ∼ N (µj , Σj ). We
let k denote the number of values that the z (i) ’s can take on. Thus, our
model posits that each x(i) was generated by randomly choosing z (i) from
{1, . . . , k}, and then x(i) was drawn from one of k Gaussians depending on
z (i) . This is called the mixture of Gaussians model. Also, note that the
z (i) ’s are latent random variables, meaning that they’re hidden/unobserved.
This is what will make our estimation problem difficult.
The parameters of our model are thus φ, µ and Σ. To estimate them, we
can write down the likelihood of our data:
\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{m} \log p(x^{(i)}; \phi, \mu, \Sigma)
                        = \sum_{i=1}^{m} \log \sum_{z^{(i)}=1}^{k} p(x^{(i)}|z^{(i)}; \mu, \Sigma)\, p(z^{(i)}; \phi).
likelihood problem would have been easy. Specifically, we could then write
down the likelihood as
\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{m} \log p(x^{(i)}|z^{(i)}; \mu, \Sigma) + \log p(z^{(i)}; \phi).
Maximizing this with respect to the parameters gives
\phi_j = \frac{1}{m} \sum_{i=1}^{m} 1\{z^{(i)} = j\},
\mu_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{z^{(i)} = j\}},
\Sigma_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)} = j\}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^{m} 1\{z^{(i)} = j\}}.
Indeed, we see that if the z (i) ’s were known, then maximum likelihood
estimation becomes nearly identical to what we had when estimating the
parameters of the Gaussian discriminant analysis model, except that here
the z (i) 's play the role of the class labels.1
However, in our density estimation problem, the z (i) ’s are not known.
What can we do?
The EM algorithm is an iterative algorithm that has two main steps.
Applied to our problem, in the E-step, it tries to “guess” the values of the
z (i) ’s. In the M-step, it updates the parameters of our model based on our
guesses. Since in the M-step we are pretending that the guesses in the first
part were correct, the maximization becomes easy. Here's the algorithm:
Repeat until convergence: {
(E-step) For each i, j, set
w_j^{(i)} := p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma)
(M-step) Update the parameters:
\phi_j := \frac{1}{m} \sum_{i=1}^{m} w_j^{(i)},
\mu_j := \frac{\sum_{i=1}^{m} w_j^{(i)} x^{(i)}}{\sum_{i=1}^{m} w_j^{(i)}},
\Sigma_j := \frac{\sum_{i=1}^{m} w_j^{(i)} (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^{m} w_j^{(i)}}
}
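A rough Python sketch of these E- and M-step updates follows. It is illustrative only, assumes SciPy is available for the Gaussian density, and adds a small diagonal term to the covariances for numerical stability.

import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, iters=100):
    """EM for a mixture of k Gaussians; X has one example per row."""
    m, n = X.shape
    phi = np.full(k, 1.0 / k)
    mu = X[np.random.choice(m, k, replace=False)]
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(k)])
    for _ in range(iters):
        # E-step: w[i, j] = p(z = j | x_i; phi, mu, Sigma).
        w = np.array([phi[j] * multivariate_normal.pdf(X, mu[j], Sigma[j])
                      for j in range(k)]).T          # shape (m, k)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate the parameters using the soft assignments.
        Nj = w.sum(axis=0)                           # effective count per Gaussian
        phi = Nj / m
        mu = (w.T @ X) / Nj[:, None]
        for j in range(k):
            d = X - mu[j]
            Sigma[j] = (w[:, j, None] * d).T @ d / Nj[j] + 1e-6 * np.eye(n)
    return phi, mu, Sigma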
Part IX
The EM algorithm
In the previous set of notes, we talked about the EM algorithm as applied to
fitting a mixture of Gaussians. In this set of notes, we give a broader view
of the EM algorithm, and show how it can be applied to a large family of
estimation problems with latent variables. We begin our discussion with a
very useful result called Jensen's inequality.
1 Jensen’s inequality
Let f be a function whose domain is the set of real numbers. Recall that
f is a convex function if f ′′ (x) ≥ 0 (for all x ∈ R). In the case of f taking
vector-valued inputs, this is generalized to the condition that its hessian H
is positive semi-definite (H ≥ 0). If f ′′ (x) > 0 for all x, then we say f is
strictly convex (in the vector-valued case, the corresponding statement is
that H must be positive definite, written H > 0). Jensen’s inequality can
then be stated as follows:
Theorem. Let f be a convex function, and let X be a random variable.
Then:
E[f (X)] ≥ f (EX).
Moreover, if f is strictly convex, then E[f (X)] = f (EX) holds true if and
only if X = E[X] with probability 1 (i.e., if X is a constant).
Recall our convention of occasionally dropping the parentheses when writ-
ing expectations, so in the theorem above, f (EX) = f (E[X]).
For an interpretation of the theorem, consider the figure below.
[Figure: a convex function f; the chord from (a, f(a)) to (b, f(b)) lies above the graph, illustrating E[f(X)] ≥ f(E[X]).]
2 The EM algorithm
Suppose we have an estimation problem in which we have a training set
{x(1) , . . . , x(n) } consisting of n independent examples. We have a latent vari-
able model p(x, z; θ) with z being the latent variable (which for simplicity is
assumed to take a finite number of values). The density for x can be obtained
by marginalizing over the latent variable z:
p(x; \theta) = \sum_{z} p(x, z; \theta)   (1)
1
It's mostly an empirical observation that the resulting optimization problem is difficult.
2
Empirically, the E-step and M-step can often be computed more efficiently than op-
timizing the function ℓ(·) directly. However, it doesn’t necessarily mean that alternating
the two steps can always converge to the global optimum of ℓ(·). Even for mixture of
Gaussians, the EM algorithm can either converge to a global optimum or get stuck, de-
pending on the properties of the training data. Empirically, for real-world data, often EM
can converge to a solution with relatively high likelihood (if not the optimum), and the
theory behind it is still largely not understood.
Let Q be a distribution over the possible values of z. That is, \sum_z Q(z) = 1, Q(z) \ge 0.
Consider the following:3
\log p(x; \theta) = \log \sum_{z} p(x, z; \theta)
                  = \log \sum_{z} Q(z) \frac{p(x, z; \theta)}{Q(z)}   (6)
                  \ge \sum_{z} Q(z) \log \frac{p(x, z; \theta)}{Q(z)}   (7)
The last step used Jensen's inequality applied to the concave function \log, viewing
\sum_z Q(z) \frac{p(x,z;\theta)}{Q(z)} as the expectation \mathbb{E}_{z\sim Q}\left[\frac{p(x,z;\theta)}{Q(z)}\right]; the "z ∼ Q" subscripts indicate that the expectations are with
respect to z drawn from Q. This allowed us to go from Equation (6) to
Equation (7).
Now, for any distribution Q, the formula (7) gives a lower-bound on
log p(x; θ). There are many possible choices for the Q’s. Which should we
choose? Well, if we have some current guess θ of the parameters, it seems
natural to try to make the lower-bound tight at that value of θ. I.e., we will
make the inequality above hold with equality at our particular value of θ.
To make the bound tight for a particular value of θ, we need for the step
involving Jensen’s inequality in our derivation above to hold with equality.
3
If z were continuous, then Q would be a density, and the summations over z in our
discussion are replaced with integrals over z.
4
We note that the ratio p(x, z; \theta)/Q(z) only makes sense if Q(z) \neq 0 whenever p(x, z; \theta) \neq 0.
Here we implicitly assume that we only consider those Q with such a property.
This is the case if the term inside the expectation is a constant-valued random variable, i.e., if
\frac{p(x, z; \theta)}{Q(z)} = c
for some constant c that does not depend on z. This is easily accomplished
by choosing
Q(z) ∝ p(x, z; θ).
Actually, since we know \sum_z Q(z) = 1 (because it is a distribution), this
further tells us that
Q(z) = \frac{p(x, z; \theta)}{\sum_z p(x, z; \theta)} = \frac{p(x, z; \theta)}{p(x; \theta)} = p(z|x; \theta)   (8)
Thus, we simply set the Q’s to be the posterior distribution of the z’s given
x and the setting of the parameters θ.
Indeed, we can directly verify that when Q(z) = p(z|x; θ), then equa-
tion (7) is an equality because
\sum_{z} Q(z) \log \frac{p(x, z; \theta)}{Q(z)} = \sum_{z} p(z|x; \theta) \log \frac{p(x, z; \theta)}{p(z|x; \theta)}
 = \sum_{z} p(z|x; \theta) \log \frac{p(z|x; \theta)\, p(x; \theta)}{p(z|x; \theta)}
 = \sum_{z} p(z|x; \theta) \log p(x; \theta)
 = \log p(x; \theta) \sum_{z} p(z|x; \theta)
 = \log p(x; \theta) \qquad \text{(because } \textstyle\sum_z p(z|x;\theta) = 1\text{)}
Taking sum over all the examples, we obtain a lower bound for the log-
likelihood
\ell(\theta) \ge \sum_{i} \mathrm{ELBO}(x^{(i)}; Q_i, \theta)   (11)
             = \sum_{i} \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}
Repeat until convergence: {
(E-step) For each i, set
Q_i(z^{(i)}) := p(z^{(i)} \mid x^{(i)}; \theta)
(M-step) Set
\theta := \arg\max_{\theta} \sum_{i=1}^{n} \mathrm{ELBO}(x^{(i)}; Q_i, \theta) = \arg\max_{\theta} \sum_{i} \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}.   (12)
}
How do we know if this algorithm will converge? Well, suppose θ(t) and
θ(t+1) are the parameters from two successive iterations of EM. We will now
prove that ℓ(θ(t) ) ≤ ℓ(θ(t+1) ), which shows EM always monotonically im-
proves the log-likelihood. The key to showing this result lies in our choice of
the Qi 's. Specifically, on the iteration of EM in which the parameters had
started out as \theta^{(t)}, we would have chosen Q_i^{(t)}(z^{(i)}) := p(z^{(i)}|x^{(i)}; \theta^{(t)}). We
saw earlier that this choice ensures that Jensen’s inequality, as applied to get
Equation (11), holds with equality, and hence
\ell(\theta^{(t)}) = \sum_{i=1}^{n} \mathrm{ELBO}(x^{(i)}; Q_i^{(t)}, \theta^{(t)})   (13)
The parameters θ(t+1) are then obtained by maximizing the right hand side
of the equation above. Thus,
\ell(\theta^{(t+1)}) \ge \sum_{i=1}^{n} \mathrm{ELBO}(x^{(i)}; Q_i^{(t)}, \theta^{(t+1)}) \qquad \text{(because inequality (11) holds for all } Q \text{ and } \theta\text{)}
                     \ge \sum_{i=1}^{n} \mathrm{ELBO}(x^{(i)}; Q_i^{(t)}, \theta^{(t)}) \qquad \text{(see reason below)}
                     = \ell(\theta^{(t)}) \qquad \text{(by equation (13))}
where the last inequality follows from the fact that \theta^{(t+1)} is chosen explicitly to be
\arg\max_{\theta} \sum_{i=1}^{n} \mathrm{ELBO}(x^{(i)}; Q_i^{(t)}, \theta)
Setting this to zero and solving for \mu_l therefore yields the update rule
\mu_l := \frac{\sum_{i=1}^{n} w_l^{(i)} x^{(i)}}{\sum_{i=1}^{n} w_l^{(i)}},
\mathcal{L}(\phi) = \sum_{i=1}^{n} \sum_{j=1}^{k} w_j^{(i)} \log \phi_j + \beta\left(\sum_{j=1}^{k} \phi_j - 1\right),
The derivation for the M-step updates to \Sigma_j is also entirely straightforward.
5
We don’t need to worry about the constraint that φj ≥ 0, because as we’ll shortly see,
the solution we’ll find from this derivation will automatically satisfy that anyway.
Now the next question is what form of Q (or what structural assumptions
to make about Q) allows us to efficiently maximize the objective above. When
the latent variables z are high-dimensional discrete variables, one popular assumption is the mean field assumption, which assumes that Qi (z) gives a
distribution with independent coordinates; in other words, Qi can be decomposed into Q_i(z) = Q_i^1(z_1) \cdots Q_i^k(z_k). There are tremendous applications
of mean field assumptions to learning generative models with discrete latent
variables, and we refer to [1] for a survey of these models and their impact
on a wide range of applications, including computational biology, computational neuroscience, and the social sciences. We will not get into the details of
the discrete latent variable case; our main focus is to deal with continuous latent variables, which require not only mean field assumptions, but
additional techniques.
When z ∈ Rk is a continuous latent variable, there are several decisions to
make towards successfully optimizing (20). First we need to give a succinct
representation of the distribution Qi because it is over an infinite number of
points. A natural choice is to assume Qi is a Gaussian distribution with some
mean and variance. We would also like to have a more succinct representation
of the means of the Qi 's across all the examples. Note that Qi (z (i) ) is supposed to
approximate p(z (i) |x(i) ; θ). It would make sense to let all the means of the Qi 's
be some function of x(i) . Concretely, let q(·; φ), v(·; ψ) be two functions that
map from dimension d to k, parameterized by φ and ψ. We assume
that
Qi = N (q(x(i) ; φ), diag(v(x(i) ; ψ))2 ) (21)
Here diag(w) means the k × k matrix with the entries of w ∈ Rk on the
diagonal. In other words, the distribution Qi is assumed to be a Gaussian
distribution with independent coordinates, and the mean and standard de-
viations are governed by q and v. Often in variational auto-encoder, q and v
are chosen to be neural networks.6 In recent deep learning literature, often
q, v are called encoder (in the sense of encoding the data into latent code),
whereas g(z; θ) is often referred to as the decoder.
We remark that a Qi of this form is, in many cases, very far from a good
approximation of the true posterior distribution. However, some approximation is necessary for feasible optimization. In fact, the form of Qi needs to
satisfy other requirements (which happen to be satisfied by the form (21)).
Before optimizing the ELBO, let’s first verify whether we can efficiently
evaluate the value of the ELBO for fixed Q of the form (21) and θ. We
6
q and v can also share parameters. We sweep this level of detail under the rug in this
note.
θ := θ + η∇θ ELBO(φ, ψ, θ)
φ := φ + η∇φ ELBO(φ, ψ, θ)
ψ := ψ + η∇ψ ELBO(φ, ψ, θ)
But computing the gradient over φ and ψ is tricky because the sampling
distribution Qi depends on φ and ψ. (Abstractly speaking, the issue we
face can be simplified as the problem of computing the gradient of \mathbb{E}_{z\sim Q_\phi}[f(\phi)]
with respect to the variable \phi. We know that in general, \nabla \mathbb{E}_{z\sim Q_\phi}[f(\phi)] \neq
\mathbb{E}_{z\sim Q_\phi}[\nabla f(\phi)], because the dependency of Q_\phi on \phi has to be taken into
account as well.)
The idea that comes to rescue is the so-called re-parameterization
trick: we rewrite z^{(i)} \sim Q_i = \mathcal{N}(q(x^{(i)}; \phi), \mathrm{diag}(v(x^{(i)}; \psi))^2) in an equivalent
way:
z^{(i)} = q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)} \quad\text{where } \xi^{(i)} \sim \mathcal{N}(0, I_{k\times k})   (24)
Here x ⊙ y denotes the entry-wise product of two vectors of the same
dimension. Here we used the fact that x ∼ N (µ, σ 2 ) is equivalent to that
x = µ+ξσ with ξ ∼ N (0, 1). We mostly just used this fact in every dimension
simultaneously for the random variable z (i) ∼ Qi .
With this re-parameterization, we have that
\mathbb{E}_{z^{(i)} \sim Q_i}\left[\log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right]   (25)
= \mathbb{E}_{\xi^{(i)} \sim \mathcal{N}(0,1)}\left[\log \frac{p(x^{(i)},\, q(x^{(i)};\phi) + v(x^{(i)};\psi)\odot\xi^{(i)};\, \theta)}{Q_i(q(x^{(i)};\phi) + v(x^{(i)};\psi)\odot\xi^{(i)})}\right]
It follows that
\nabla_\phi\, \mathbb{E}_{z^{(i)} \sim Q_i}\left[\log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right]
= \nabla_\phi\, \mathbb{E}_{\xi^{(i)} \sim \mathcal{N}(0,1)}\left[\log \frac{p(x^{(i)},\, q(x^{(i)};\phi) + v(x^{(i)};\psi)\odot\xi^{(i)};\, \theta)}{Q_i(q(x^{(i)};\phi) + v(x^{(i)};\psi)\odot\xi^{(i)})}\right]
= \mathbb{E}_{\xi^{(i)} \sim \mathcal{N}(0,1)}\left[\nabla_\phi \log \frac{p(x^{(i)},\, q(x^{(i)};\phi) + v(x^{(i)};\psi)\odot\xi^{(i)};\, \theta)}{Q_i(q(x^{(i)};\phi) + v(x^{(i)};\psi)\odot\xi^{(i)})}\right]
We can now sample multiple copies of ξ (i) 's to estimate the expecta-
tion in the RHS of the equation above.7 We can estimate the gradient with
respect to ψ similarly, and with these, we can implement the gradient ascent
algorithm to optimize the ELBO over φ, ψ, θ.
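A small NumPy sketch of the sampling step in the re-parameterization trick (illustrative only; `q` and `v` are hypothetical encoder functions returning the mean and per-coordinate standard deviations):

import numpy as np

def sample_z(x_i, q, v, n_samples=10):
    """Draw samples from Q_i = N(q(x_i; phi), diag(v(x_i; psi))^2) via
    Equation (24): z = q(x_i) + v(x_i) ⊙ xi, with xi ~ N(0, I).

    Because the randomness enters only through xi, gradients with respect
    to phi and psi can pass through q and v.
    """
    mean, std = q(x_i), v(x_i)
    xi = np.random.randn(n_samples, *mean.shape)   # one row of xi per sample
    return mean + std * xi                         # element-wise product ⊙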
Not many high-dimensional distributions with analytically computable density functions are known to be re-parameterizable. We refer to [2]
for a few other choices that can replace the Gaussian distribution.
References
[1] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational in-
ference: A review for statisticians. Journal of the American Statistical
Association, 112(518):859–877, 2017.
[2] Diederik P Kingma and Max Welling. Auto-encoding variational bayes.
arXiv preprint arXiv:1312.6114, 2013.
7
Empirically people sometimes just use one sample to estimate it for maximum com-
putational efficiency.
CS229 Lecture notes
Andrew Ng
Part X
Factor analysis
When we have data x(i) ∈ Rn that comes from a mixture of several Gaussians,
the EM algorithm can be applied to fit a mixture model. In this setting,
we usually imagine problems where we have sufficient data to be able
to discern the multiple-Gaussian structure in the data. For instance, this
would be the case if our training set size m was significantly larger than the
dimension n of the data.
Now, consider a setting in which n ≫ m. In such a problem, it might be
difficult to model the data even with a single Gaussian, much less a mixture of
Gaussian. Specifically, since the m data points span only a low-dimensional
subspace of Rn , if we model the data as Gaussian, and estimate the mean
and covariance using the usual maximum likelihood estimators,
\mu = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}
\Sigma = \frac{1}{m} \sum_{i=1}^{m} (x^{(i)} - \mu)(x^{(i)} - \mu)^T,
we would find that the matrix Σ is singular. This means that Σ−1 does not
exist, and 1/|Σ|1/2 = 1/0. But both of these terms are needed in computing
the usual density of a multivariate Gaussian distribution. Another way of
stating this difficulty is that maximum likelihood estimates of the parameters
result in a Gaussian that places all of its probability in the affine space
spanned by the data,1 and this corresponds to a singular covariance matrix.
1
This is the set of points x satisfying x = \sum_{i=1}^{m} \alpha_i x^{(i)}, for some \alpha_i's so that \sum_{i=1}^{m} \alpha_i = 1.
1 Restrictions of Σ
If we do not have sufficient data to fit a full covariance matrix, we may
place some restrictions on the space of matrices Σ that we will consider. For
instance, we may choose to fit a covariance matrix Σ that is diagonal. In this
setting, the reader may easily verify that the maximum likelihood estimate
of the covariance matrix is given by the diagonal matrix Σ satisfying
\Sigma_{jj} = \frac{1}{m} \sum_{i=1}^{m} (x_j^{(i)} - \mu_j)^2.
Thus, Σjj is just the empirical estimate of the variance of the j-th coordinate
of the data.
Recall that the contours of a Gaussian density are ellipses. A diagonal
Σ corresponds to a Gaussian where the major axes of these ellipses are axis-
aligned.
Sometimes, we may place a further restriction on the covariance matrix
that not only must it be diagonal, but its diagonal entries must all be equal.
In this setting, we have Σ = σ 2 I, where σ 2 is the parameter under our control.
The maximum likelihood estimate of σ 2 can be found to be:
\sigma^2 = \frac{1}{mn} \sum_{j=1}^{n} \sum_{i=1}^{m} (x_j^{(i)} - \mu_j)^2.
Here, µ1 ∈ Rr , µ2 ∈ Rs , Σ11 ∈ Rr×r , Σ12 ∈ Rr×s , and so on. Note that since
covariance matrices are symmetric, Σ12 = ΣT21 .
Under our assumptions, x1 and x2 are jointly multivariate Gaussian.
What is the marginal distribution of x1 ? It is not hard to see that E[x1 ] = µ1 ,
and that Cov(x1 ) = E[(x1 − µ1 )(x1 − µ1 )^T ] = Σ11 . To see that the latter is
true, note that by definition of the joint covariance of x1 and x2 , we have
that
\mathrm{Cov}(x) = \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}
= \mathbb{E}\left[(x - \mu)(x - \mu)^T\right]
= \mathbb{E}\left[ \begin{pmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{pmatrix} \begin{pmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{pmatrix}^{T} \right]
= \mathbb{E}\begin{bmatrix} (x_1 - \mu_1)(x_1 - \mu_1)^T & (x_1 - \mu_1)(x_2 - \mu_2)^T \\ (x_2 - \mu_2)(x_1 - \mu_1)^T & (x_2 - \mu_2)(x_2 - \mu_2)^T \end{bmatrix}.
Matching the upper-left subblocks in the matrices in the second and the last
lines above gives the result.
Since marginal distributions of Gaussians are themselves Gaussian, we
therefore have that the marginal distribution of x1 is given by x1 ∼ N (µ1 , Σ11 ).
Also, we can ask, what is the conditional distribution of x1 given x2 ? By
referring to the definition of the multivariate Gaussian distribution, it can
be shown that x1 |x2 ∼ N (µ1|2 , Σ1|2 ), where
\mu_{1|2} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_2)
\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}
When working with the factor analysis model in the next section, these
formulas for finding conditional and marginal distributions of Gaussians will
be very useful.
z ∼ N (0, I)
x|z ∼ N (µ + Λz, Ψ).
Here, the parameters of our model are the vector µ ∈ Rn , the matrix
Λ ∈ Rn×k , and the diagonal matrix Ψ ∈ Rn×n . The value of k is usually
chosen to be smaller than n.
5
z \sim \mathcal{N}(0, I)
\epsilon \sim \mathcal{N}(0, \Psi)
x = \mu + \Lambda z + \epsilon.
E[x] = E[\mu + \Lambda z + \epsilon]
     = \mu + \Lambda E[z] + E[\epsilon]
     = \mu.
In the last step, we used the fact that E[zz^T] = Cov(z) (since z has zero
mean), and E[z\epsilon^T] = E[z]E[\epsilon^T] = 0 (since z and \epsilon are independent, and
So, using these definitions for \mu_{z^{(i)}|x^{(i)}} and \Sigma_{z^{(i)}|x^{(i)}}, we have
Q_i(z^{(i)}) = \frac{1}{(2\pi)^{k/2} |\Sigma_{z^{(i)}|x^{(i)}}|^{1/2}} \exp\left(-\frac{1}{2}(z^{(i)} - \mu_{z^{(i)}|x^{(i)}})^T \Sigma_{z^{(i)}|x^{(i)}}^{-1} (z^{(i)} - \mu_{z^{(i)}|x^{(i)}})\right).
Here, the “z (i) ∼ Qi ” subscript indicates that the expectation is with respect
to z (i) drawn from Qi . In the subsequent development, we will omit this
subscript when there is no risk of ambiguity. Dropping terms that do not
depend on the parameters, we find that we need to maximize:
\sum_{i=1}^{m} \mathbb{E}\left[\log p(x^{(i)}|z^{(i)}; \mu, \Lambda, \Psi)\right]
= \sum_{i=1}^{m} \mathbb{E}\left[\log \frac{1}{(2\pi)^{n/2}|\Psi|^{1/2}} \exp\left(-\frac{1}{2}(x^{(i)} - \mu - \Lambda z^{(i)})^T \Psi^{-1} (x^{(i)} - \mu - \Lambda z^{(i)})\right)\right]
= \sum_{i=1}^{m} \mathbb{E}\left[-\frac{1}{2}\log|\Psi| - \frac{n}{2}\log(2\pi) - \frac{1}{2}(x^{(i)} - \mu - \Lambda z^{(i)})^T \Psi^{-1} (x^{(i)} - \mu - \Lambda z^{(i)})\right]
Let's maximize this with respect to Λ. Only the last term above depends
on Λ. Taking derivatives, and using the facts that \mathrm{tr}\, a = a (for a \in \mathbb{R}),
\mathrm{tr}\, AB = \mathrm{tr}\, BA, and \nabla_A \mathrm{tr}\, ABA^T C = CAB + C^T AB, we get:
\nabla_\Lambda \sum_{i=1}^{m} -\mathbb{E}\left[\frac{1}{2}(x^{(i)} - \mu - \Lambda z^{(i)})^T \Psi^{-1} (x^{(i)} - \mu - \Lambda z^{(i)})\right]
= \sum_{i=1}^{m} \nabla_\Lambda \mathbb{E}\left[-\mathrm{tr}\,\frac{1}{2} z^{(i)T} \Lambda^T \Psi^{-1} \Lambda z^{(i)} + \mathrm{tr}\, z^{(i)T} \Lambda^T \Psi^{-1} (x^{(i)} - \mu)\right]
= \sum_{i=1}^{m} \nabla_\Lambda \mathbb{E}\left[-\mathrm{tr}\,\frac{1}{2} \Lambda^T \Psi^{-1} \Lambda z^{(i)} z^{(i)T} + \mathrm{tr}\, \Lambda^T \Psi^{-1} (x^{(i)} - \mu) z^{(i)T}\right]
= \sum_{i=1}^{m} \mathbb{E}\left[-\Psi^{-1} \Lambda z^{(i)} z^{(i)T} + \Psi^{-1} (x^{(i)} - \mu) z^{(i)T}\right]
It is interesting to note the close relationship between this equation and the
normal equation that we'd derived for least squares regression, \theta^T = (y^T X)(X^T X)^{-1}.
The analogy is that here, the x’s are a linear function of the z’s (plus noise).
Given the “guesses” for z that the E-step has found, we will now try to
estimate the unknown linearity Λ relating the x’s and z’s. It is therefore
no surprise that we obtain something similar to the normal equation. There
is, however, one important difference between this and an algorithm that
performs least squares using just the “best guesses” of the z’s; we will see
this difference shortly.
To complete our M-step update, let's work out the values of the expectations in Equation (7). From our definition of Qi being Gaussian with mean
\mu_{z^{(i)}|x^{(i)}} and covariance \Sigma_{z^{(i)}|x^{(i)}}, we easily find
\mathbb{E}_{z^{(i)} \sim Q_i}\left[z^{(i)T}\right] = \mu_{z^{(i)}|x^{(i)}}^T,
\mathbb{E}_{z^{(i)} \sim Q_i}\left[z^{(i)} z^{(i)T}\right] = \mu_{z^{(i)}|x^{(i)}} \mu_{z^{(i)}|x^{(i)}}^T + \Sigma_{z^{(i)}|x^{(i)}}.
The latter comes from the fact that, for a random variable Y , Cov(Y ) =
E[Y Y T ] − E[Y ]E[Y ]T , and hence E[Y Y T ] = E[Y ]E[Y ]T + Cov(Y ). Substitut-
ing this back into Equation (7), we get the M-step update for Λ:
\Lambda = \left(\sum_{i=1}^{m} (x^{(i)} - \mu)\, \mu_{z^{(i)}|x^{(i)}}^T\right) \left(\sum_{i=1}^{m} \mu_{z^{(i)}|x^{(i)}} \mu_{z^{(i)}|x^{(i)}}^T + \Sigma_{z^{(i)}|x^{(i)}}\right)^{-1}.   (8)
It is important to note the presence of the Σz(i) |x(i) on the right hand side of
this equation. This is the covariance in the posterior distribution p(z (i) |x(i) )
of z (i) given x(i) , and the M-step must take into account this uncertainty
Since this doesn’t change as the parameters are varied (i.e., unlike the update
for Λ, the right hand side does not depend on Qi (z (i) ) = p(z (i) |x(i) ; µ, Λ, Ψ),
which in turn depends on the parameters), this can be calculated just once
and needs not be further updated as the algorithm is run. Similarly, the
diagonal Ψ can be found by calculating
\Phi = \frac{1}{m} \sum_{i=1}^{m} x^{(i)} x^{(i)T} - x^{(i)} \mu_{z^{(i)}|x^{(i)}}^T \Lambda^T - \Lambda \mu_{z^{(i)}|x^{(i)}} x^{(i)T} + \Lambda\left(\mu_{z^{(i)}|x^{(i)}} \mu_{z^{(i)}|x^{(i)}}^T + \Sigma_{z^{(i)}|x^{(i)}}\right)\Lambda^T,
and setting Ψii = Φii (i.e., letting Ψ be the diagonal matrix containing only
the diagonal entries of Φ).
CS229 Lecture notes
Andrew Ng
Part XI
Principal components analysis
In our discussion of factor analysis, we gave a way to model data x ∈ R^n as
"approximately" lying in some k-dimensional subspace, where k ≪ n. Specifically,
ically, we imagined that each point x(i) was created by first generating some
z (i) lying in the k-dimension affine space {Λz + µ; z ∈ Rk }, and then adding
Ψ-covariance noise. Factor analysis is based on a probabilistic model, and
parameter estimation used the iterative EM algorithm.
In this set of notes, we will develop a method, Principal Components
Analysis (PCA), that also tries to identify the subspace in which the data
approximately lies. However, PCA will do so more directly, and will require
only an eigenvector calculation (easily done with the eig function in Matlab),
and does not need to resort to EM.
Suppose we are given a dataset {x^{(i)}; i = 1, . . . , m} of attributes of m different types of automobiles, such as their maximum speed, turn radius, and
so on. Let x^{(i)} ∈ R^n for each i (n ≪ m). But unknown to us, two different
attributes—some xi and xj —respectively give a car’s maximum speed mea-
sured in miles per hour, and the maximum speed measured in kilometers per
hour. These two attributes are therefore almost linearly dependent, up to
only small differences introduced by rounding off to the nearest mph or kph.
Thus, the data really lies approximately on an n − 1 dimensional subspace.
How can we automatically detect, and perhaps remove, this redundancy?
For a less contrived example, consider a dataset resulting from a survey of
(i)
pilots for radio-controlled helicopters, where x1 is a measure of the piloting
(i)
skill of pilot i, and x2 captures how much he/she enjoys flying. Because
RC helicopters are very difficult to fly, only the most committed students,
ones that truly enjoy flying, become good pilots. So, the two attributes
x1 and x2 are strongly correlated. Indeed, we might posit that the
data actually lies along some diagonal axis (the u1 direction) capturing the
intrinsic piloting “karma” of a person, with only a small amount of noise
lying off this axis. (See figure.) How can we automatically compute this u1
direction?
[Figure: data points in the (x1 = skill, x2 = enjoyment) plane, with the major axis of variation u1 along the diagonal and the orthogonal direction u2.]
We will shortly develop the PCA algorithm. But prior to running PCA
per se, typically we first pre-process the data to normalize its mean and
variance, as follows:
1. Let \mu = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}.
2. Replace each x^{(i)} with x^{(i)} - \mu.
3. Let \sigma_j^2 = \frac{1}{m} \sum_{i} (x_j^{(i)})^2.
4. Replace each x_j^{(i)} with x_j^{(i)} / \sigma_j.
Steps (1-2) zero out the mean of the data, and may be omitted for data
known to have zero mean (for instance, time series corresponding to speech
or other acoustic signals). Steps (3-4) rescale each coordinate to have unit
variance, which ensures that different attributes are all treated on the same
“scale.” For instance, if x1 was cars’ maximum speed in mph (taking values
in the high tens or low hundreds) and x2 were the number of seats (taking
values around 2-4), then this renormalization rescales the different attributes
to make them more comparable. Steps (3-4) may be omitted if we had
a priori knowledge that the different attributes are all on the same scale. One
example of this is if each data point represented a grayscale image, and each
x_j^{(i)} took a value in {0, 1, . . . , 255} corresponding to the intensity value of
pixel j in image i.
Now, having carried out the normalization, how do we compute the “ma-
jor axis of variation” u—that is, the direction on which the data approxi-
mately lies? One way to pose this problem is as finding the unit vector u so
that when the data is projected onto the direction corresponding to u, the
variance of the projected data is maximized. Intuitively, the data starts off
with some amount of variance/information in it. We would like to choose a
direction u so that if we were to approximate the data as lying in the direc-
tion/subspace corresponding to u, as much as possible of this variance is still
retained.
Consider the following dataset, on which we have already carried out the
normalization steps:
We see that the projected data still has a fairly large variance, and the
points tend to be far from zero. In contrast, suppose we had instead picked the
following direction:
[Figure: the same dataset projected onto an alternative direction u; the projected points lie close to the origin.]
Here, the projections have a significantly smaller variance, and are much
closer to the origin.
We would like to automatically select the direction u corresponding to
the first of the two figures shown above. To formalize this, note that given a
unit vector u and a point x, the length of the projection of x onto u is given
by xT u. I.e., if x(i) is a point in our dataset (one of the crosses in the plot),
then its projection onto u (the corresponding circle in the figure) is distance
xT u from the origin. Hence, to maximize the variance of the projections, we
would like to choose a unit-length u so as to maximize:
\frac{1}{m} \sum_{i=1}^{m} (x^{(i)T} u)^2 = \frac{1}{m} \sum_{i=1}^{m} u^T x^{(i)} x^{(i)T} u
 = u^T \left(\frac{1}{m} \sum_{i=1}^{m} x^{(i)} x^{(i)T}\right) u.
We easily recognize that maximizing this subject to \|u\|_2 = 1 gives the
principal eigenvector of \Sigma = \frac{1}{m} \sum_{i=1}^{m} x^{(i)} x^{(i)T}, which is just the empirical
covariance matrix of the data (assuming it has zero mean).1
To summarize, we have found that if we wish to find a 1-dimensional
subspace with which to approximate the data, we should choose u to be the
principal eigenvector of Σ. More generally, if we wish to project our data
into a k-dimensional subspace (k < n), we should choose u1 , . . . , uk to be the
top k eigenvectors of Σ. The ui ’s now form a new, orthogonal basis for the
data.2
Then, to represent x(i) in this basis, we need only compute the corre-
sponding vector
y^{(i)} = \begin{pmatrix} u_1^T x^{(i)} \\ u_2^T x^{(i)} \\ \vdots \\ u_k^T x^{(i)} \end{pmatrix} \in \mathbb{R}^k.
Thus, whereas x(i) ∈ Rn , the vector y (i) now gives a lower, k-dimensional,
approximation/representation for x(i) . PCA is therefore also referred to as
a dimensionality reduction algorithm. The vectors u1 , . . . , uk are called
the first k principal components of the data.
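A small NumPy sketch of this procedure (illustrative only; the data is assumed to have already been normalized to zero mean as described above):

import numpy as np

def pca(X, k):
    """Project zero-mean data X (one example per row) onto its first k
    principal components; returns the eigenvectors U and Y = X U."""
    m = X.shape[0]
    Sigma = (X.T @ X) / m                      # empirical covariance matrix
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigh: Sigma is symmetric
    order = np.argsort(eigvals)[::-1][:k]      # indices of the k largest eigenvalues
    U = eigvecs[:, order]
    Y = X @ U                                  # y_i = (u_1^T x_i, ..., u_k^T x_i)
    return U, Y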
Remark. Although we have shown it formally only for the case of k = 1,
using well-known properties of eigenvectors it is straightforward to show that
1
If you haven’t seen this before, try using the method of Lagrange multipliers to max-
imize uT Σu subject to that uT u = 1. You should be able to show that Σu = λu, for some
λ, which implies u is an eigenvector of Σ, with eigenvalue λ.
2
Because Σ is symmetric, the ui ’s will (or always can be chosen to be) orthogonal to
each other.
of all possible orthogonal bases u_1, \ldots, u_k, the one that we have chosen maximizes \sum_i \|y^{(i)}\|_2^2. Thus, our choice of a basis preserves as much variability
as possible in the original data.
In problem set 4, you will see that PCA can also be derived by picking
the basis that minimizes the approximation error arising from projecting the
data onto the k-dimensional subspace spanned by them.
PCA has many applications; we will close our discussion with a small number of exam-
ples. First, compression—representing x(i) ’s with lower dimension y (i) ’s—is
an obvious application. If we reduce high dimensional data to k = 2 or 3 di-
mensions, then we can also plot the y (i) ’s to visualize the data. For instance,
if we were to reduce our automobiles data to 2 dimensions, then we can plot
it (one point in our plot would correspond to one car type, say) to see what
cars are similar to each other and what groups of cars may cluster together.
Another standard application is to preprocess a dataset to reduce its
dimension before running a supervised learning algorithm with the
x(i) ’s as inputs. Apart from computational benefits, reducing the data’s
dimension can also reduce the complexity of the hypothesis class considered
and help avoid overfitting (e.g., linear classifiers over lower dimensional input
spaces will have smaller VC dimension).
Lastly, as in our RC pilot example, we can also view PCA as a noise
reduction algorithm. In that example, it estimates the intrinsic “piloting
karma” from the noisy measures of piloting skill and enjoyment. In class, we
also saw the application of this idea to face images, resulting in the eigenfaces
method. Here, each point x(i) ∈ R100×100 was a 10000 dimensional vector,
with each coordinate corresponding to a pixel intensity value in a 100x100
image of a face. Using PCA, we represent each image x(i) with a much lower-
dimensional y (i) . In doing so, we hope that the principal components we
found retain the interesting, systematic variations between faces that capture
what a person really looks like, but not the “noise” in the images introduced
by minor lighting variations, slightly different imaging conditions, and so on.
We then measure distances between faces i and j by working in the reduced
dimension, and computing ||y (i) − y (j) ||2 . This resulted in a surprisingly good
face-matching and retrieval algorithm.
CS229 Lecture notes
Andrew Ng
Part XII
Independent Components
Analysis
Our next topic is Independent Components Analysis (ICA). Similar to PCA,
this will find a new basis in which to represent our data. However, the goal
is very different.
As a motivating example, consider the “cocktail party problem.” Here, n
speakers are speaking simultaneously at a party, and any microphone placed
in the room records only an overlapping combination of the n speakers’ voices.
But let's say we have n different microphones placed in the room, and because
each microphone is a different distance from each of the speakers, it records a
different combination of the speakers’ voices. Using these microphone record-
ings, can we separate out the original n speakers’ speech signals?
To formalize this problem, we imagine that there is some data s ∈ Rn
that is generated via n independent sources. What we observe is
x = As,

where A is an unknown square matrix called the mixing matrix.
1 ICA ambiguities
To what degree can W = A−1 be recovered? If we have no prior knowledge
about the sources and the mixing matrix, it is not hard to see that there are
some inherent ambiguities in A that are impossible to recover, given only the
x(i) ’s.
Specifically, let P be any n-by-n permutation matrix. This means that
each row and each column of P has exactly one “1.” Here’re some examples
of permutation matrices:
P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}; \quad P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}; \quad P = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
3 ICA algorithm
We are now ready to derive an ICA algorithm. The algorithm we describe
is due to Bell and Sejnowski, and the interpretation we give will be of their
algorithm as a method for maximum likelihood estimation. (This is different
from their original interpretation, which involved a complicated idea called
the infomax principal, that is no longer necessary in the derivation given the
modern understanding of ICA.)
We suppose that the distribution of each source si is given by a density
ps , and that the joint distribution of the sources s is given by
p(s) = \prod_{i=1}^n p_s(s_i).
Using the formulas from the previous section, this implies the following density on x = As = W^{-1}s:

p(x) = \prod_{i=1}^n p_s(w_i^T x) \cdot |W|.
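As a concrete illustration, the sketch below performs maximum-likelihood ICA by stochastic gradient ascent on log p(x), assuming (as one common choice, not spelled out above) that each source's CDF is the sigmoid g(s) = 1/(1 + e^{-s}); the step size and names are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ica(X, alpha=0.005, n_epochs=50):
    """Maximum-likelihood ICA by stochastic gradient ascent.

    X : (m, n) array of observations x^(i) = A s^(i), one per row.
    Returns W, an estimate of the unmixing matrix, so that W @ x approximates
    the sources (up to permutation and scaling of the rows).
    Assumes each source has the sigmoid g as its CDF, so its density is
    p_s(s) = g'(s) = g(s)(1 - g(s)).
    """
    m, n = X.shape
    W = np.eye(n)
    for _ in range(n_epochs):
        for i in np.random.permutation(m):
            x = X[i]                                  # one training example
            # Gradient of log p(x) = sum_j log g'(w_j^T x) + log |W|:
            #   (1 - 2 g(Wx)) x^T  +  (W^T)^{-1}
            grad = np.outer(1.0 - 2.0 * sigmoid(W @ x), x) + np.linalg.inv(W.T)
            W += alpha * grad
    return W

# Example usage (illustrative): S = sources, A = random mixing matrix,
# X = S @ A.T; W = ica(X); recovered = X @ W.T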
Part XIII
Reinforcement Learning and
Control
We now begin our study of reinforcement learning and adaptive control.
In supervised learning, we saw algorithms that tried to make their outputs
mimic the labels y given in the training set. In that setting, the labels gave
an unambiguous “right answer” for each of the inputs x. In contrast, for
many sequential decision making and control problems, it is very difficult to
provide this type of explicit supervision to a learning algorithm. For example,
if we have just built a four-legged robot and are trying to program it to walk,
then initially we have no idea what the “correct” actions to take are to make
it walk, and so do not know how to provide explicit supervision for a learning
algorithm to try to mimic.
In the reinforcement learning framework, we will instead provide our al-
gorithms only a reward function, which indicates to the learning agent when
it is doing well, and when it is doing poorly. In the four-legged walking ex-
ample, the reward function might give the robot positive rewards for moving
forwards, and negative rewards for either moving backwards or falling over.
It will then be the learning algorithm’s job to figure out how to choose actions
over time so as to obtain large rewards.
Reinforcement learning has been successful in applications as diverse as
autonomous helicopter flight, robot legged locomotion, cell-phone network
routing, marketing strategy selection, factory control, and efficient web-page
indexing. Our study of reinforcement learning will begin with a definition of Markov decision processes (MDPs), which provide the formalism in which RL problems are usually posed.
• Psa are the state transition probabilities. For each state s ∈ S and
action a ∈ A, Psa is a distribution over the state space. We’ll say more
about this later, but briefly, Psa gives the distribution over what states
we will transition to if we take action a in state s.
Or, when we are writing rewards as a function of the states only, this becomes

R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots
For most of our development, we will use the simpler state-rewards R(s),
though the generalization to state-action rewards R(s, a) offers no special
difficulties.
This says that the expected sum of discounted rewards V π (s) for starting
in s consists of two terms: First, the immediate reward R(s) that we get
right away simply for starting in state s, and second, the expected sum of
future discounted rewards. Examining the second term in more detail, we
see that the summation term above can be rewritten as E_{s' \sim P_{s\pi(s)}}[V^\pi(s')]. This is the expected sum of discounted rewards for starting in state s', where s' is distributed according to P_{s\pi(s)}, which is the distribution over where we will
end up after taking the first action π(s) in the MDP from state s. Thus, the
second term above gives the expected sum of discounted rewards obtained
after the first step in the MDP.
Bellman’s equations can be used to efficiently solve for V π . Specifically,
in a finite-state MDP (|S| < ∞), we can write down one such equation for
V π (s) for every state s. This gives us a set of |S| linear equations in |S|
variables (the unknown V π (s)’s, one for each state), which can be efficiently
solved for the V π (s)’s.
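Since Bellman's equations for a fixed policy are linear in the unknowns V^π(s), they can be solved with a single linear solve. The sketch below does this with NumPy; the array layout (P, R, gamma) is an illustrative convention.

import numpy as np

def evaluate_policy(P, R, pi, gamma):
    """Solve Bellman's equations V = R + gamma * P_pi V for a fixed policy.

    P     : (|A|, |S|, |S|) array, P[a, s, s'] = P_sa(s').
    R     : (|S|,) reward vector, R[s] = R(s).
    pi    : (|S|,) array of actions, pi[s] in {0, ..., |A|-1}.
    gamma : discount factor in [0, 1).
    Returns V, the (|S|,) vector of values V^pi(s).
    """
    n_states = R.shape[0]
    # Build the |S| x |S| transition matrix of the Markov chain induced by pi.
    P_pi = P[pi, np.arange(n_states), :]
    # (I - gamma * P_pi) V = R  is a system of |S| linear equations in |S| variables.
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)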
^1 This notation in which we condition on π isn't technically correct because π isn't a random variable, but this is quite standard in the literature.
In other words, this is the best possible expected sum of discounted rewards
that can be attained using any policy. There is also a version of Bellman’s
equations for the optimal value function:
V^*(s) = R(s) + \max_{a \in A} \gamma \sum_{s' \in S} P_{sa}(s') V^*(s').   (2)
The first term above is the immediate reward, as before. The second term is the maximum over all actions a of the expected future sum of discounted rewards we'll get after taking action a. You should make sure you understand
this equation and see why it makes sense.
We also define a policy π ∗ : S 7→ A as follows:
\pi^*(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s') V^*(s').   (3)
Note that π ∗ (s) gives the action a that attains the maximum in the “max”
in Equation (2).
It is a fact that for every state s and every policy π, we have
V^*(s) = V^{\pi^*}(s) \geq V^\pi(s).

The first equality says that V^{\pi^*}, the value function for \pi^*, is equal to the optimal value function V^* for every state s. Further, the inequality above says that \pi^*'s value is at least as large as the value of any other policy.
In other words, π ∗ as defined in Equation (3) is the optimal policy.
Note that π ∗ has the interesting property that it is the optimal policy for
all states s. Specifically, it is not the case that if we were starting in some
state s then there'd be some optimal policy for that state, and if we were starting in some other state s' then there'd be some other policy that's optimal for s'. The same policy π∗ attains the maximum in Equation (1)
for all states s. This means that we can use the same policy π ∗ no matter
what the initial state of our MDP is.
We now consider finite-state, finite-action MDPs (|S| < ∞, |A| < ∞). In this section, we will also assume that we know the state transition probabilities {P_{sa}} and the reward function R.

The policy iteration algorithm is as follows:

1. Initialize π randomly.

2. Repeat until convergence {

   (a) Let V := V^π.

   (b) For each state s, let π(s) := \arg\max_{a \in A} \sum_{s'} P_{sa}(s') V(s').

}
Thus, the inner-loop repeatedly computes the value function for the current
policy, and then updates the policy using the current value function. (The
policy π found in step (b) is also called the policy that is greedy with re-
spect to V .) Note that step (a) can be done via solving Bellman's equations as described earlier, which in the case of a fixed policy is just a set of |S| linear equations in |S| variables.
After at most a finite number of iterations of this algorithm, V will con-
verge to V ∗ , and π will converge to π ∗ .
Both value iteration and policy iteration are standard algorithms for solv-
ing MDPs, and there isn’t currently universal agreement over which algo-
rithm is better. For small MDPs, policy iteration is often very fast and
converges with very few iterations. However, for MDPs with large state
spaces, solving for V π explicitly would involve solving a large system of lin-
ear equations, and could be difficult. In these problems, value iteration may
be preferred. For this reason, in practice value iteration seems to be used
more often than policy iteration.
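For concreteness, here is a small NumPy sketch of both algorithms for a finite MDP, reusing the linear-solve policy evaluation from above; the array layout (P, R, gamma) is the same illustrative convention as before.

import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """Synchronous value iteration: V(s) := R(s) + gamma * max_a sum_s' P_sa(s') V(s')."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] = value of taking action a in state s
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def policy_iteration(P, R, gamma):
    """Alternate exact policy evaluation (linear solve) with greedy improvement."""
    n_actions, n_states, _ = P.shape
    pi = np.zeros(n_states, dtype=int)
    while True:
        V = evaluate_policy(P, R, pi, gamma)     # step (a): V := V^pi
        Q = R + gamma * (P @ V)                  # step (b): policy greedy with respect to V
        pi_new = Q.argmax(axis=0)
        if np.array_equal(pi_new, pi):
            return pi, V
        pi = pi_new

# Note that the greedy policy extracted from V* is exactly
# pi*(s) = argmax_a sum_s' P_sa(s') V*(s') from Equation (3).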
We now consider MDPs with an infinite number of states. For example, for a car, we might represent the state as (x, y, θ, ẋ, ẏ, θ̇),
comprising its position (x, y); orientation θ; velocity in the x and y directions
ẋ and ẏ; and angular velocity θ̇. Hence, S = R6 is an infinite set of states,
because there is an infinite number of possible positions and orientations
for the car.2 Similarly, the inverted pendulum you saw in PS4 has states
(x, θ, ẋ, θ̇), where θ is the angle of the pole. And, a helicopter flying in 3d
space has states of the form (x, y, z, φ, θ, ψ, ẋ, ẏ, ż, φ̇, θ̇, ψ̇), where here the roll
φ, pitch θ, and yaw ψ angles specify the 3d orientation of the helicopter.
In this section, we will consider settings where the state space is S = Rd ,
and describe ways for solving such MDPs.
4.1 Discretization
Perhaps the simplest way to solve a continuous-state MDP is to discretize
the state space, and then to use an algorithm like value iteration or policy
iteration, as described previously.
For example, if we have 2d states (s1 , s2 ), we can use a grid to discretize
the state space:
Here, each grid cell represents a separate discrete state s̄. We can then ap-
proximate the continuous-state MDP via a discrete-state one (S̄, A, {Ps̄a }, γ, R),
where S̄ is the set of discrete states, {Ps̄a } are our state transition probabil-
ities over the discrete states, and so on. We can then use value iteration or
policy iteration to solve for the V ∗ (s̄) and π ∗ (s̄) in the discrete state MDP
(S̄, A, {P_{s̄a}}, γ, R). When our actual system is in some continuous-valued state s, we can compute the corresponding discrete state s̄, and execute the action π^*(s̄) there.

^2 Technically, θ is an orientation and so the range of θ is better written θ ∈ [−π, π) rather than θ ∈ R; but for our purposes, this distinction is not important.
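As a small illustration of the discretization step, the snippet below maps a continuous 2d state to the index of its grid cell s̄; the grid bounds and resolution are arbitrary choices for the example.

import numpy as np

def discretize(s, lo=np.array([0.0, 0.0]), hi=np.array([8.0, 5.0]), bins=10):
    """Map a continuous state s = (s1, s2) to the index of its grid cell s-bar."""
    # Clip to the grid, then compute per-dimension bin indices in {0, ..., bins-1}.
    idx = np.floor((np.clip(s, lo, hi - 1e-9) - lo) / (hi - lo) * bins).astype(int)
    return idx[0] * bins + idx[1]    # flatten the 2d cell index to a single discrete state

# The discretized MDP then has bins**2 states; value or policy iteration (as above)
# can be run on transition probabilities and rewards estimated per cell.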
There are several ways that one can get such a model. One is to use
physics simulation. For example, the simulator for the inverted pendulum
in PS4 was obtained by using the laws of physics to calculate what position
and orientation the cart/pole will be in at time t + 1, given the current state
at time t and the action a taken, assuming that we know all the parameters
of the system such as the length of the pole, the mass of the pole, and so
on. Alternatively, one can also use an off-the-shelf physics simulation software
package which takes as input a complete physical description of a mechanical
system, the current state st and action at , and computes the state st+1 of the
system a small fraction of a second into the future.3
An alternative way to get a model is to learn one from data collected in
the MDP. For example, suppose we execute n trials in which we repeatedly
take actions in an MDP, each trial for T timesteps. This can be done by picking actions at random, executing some specific policy, or via some other way of
choosing actions. We would then observe n state sequences like the following:
s_0^{(1)} \xrightarrow{a_0^{(1)}} s_1^{(1)} \xrightarrow{a_1^{(1)}} s_2^{(1)} \xrightarrow{a_2^{(1)}} \cdots \xrightarrow{a_{T-1}^{(1)}} s_T^{(1)}

s_0^{(2)} \xrightarrow{a_0^{(2)}} s_1^{(2)} \xrightarrow{a_1^{(2)}} s_2^{(2)} \xrightarrow{a_2^{(2)}} \cdots \xrightarrow{a_{T-1}^{(2)}} s_T^{(2)}

\vdots

s_0^{(n)} \xrightarrow{a_0^{(n)}} s_1^{(n)} \xrightarrow{a_1^{(n)}} s_2^{(n)} \xrightarrow{a_2^{(n)}} \cdots \xrightarrow{a_{T-1}^{(n)}} s_T^{(n)}
A linear model of the dynamics then takes the form

s_{t+1} = A s_t + B a_t + \epsilon_t,

where \epsilon_t is a noise term, usually modeled as \epsilon_t \sim \mathcal{N}(0, \Sigma). (The covariance matrix \Sigma can also be estimated from data in a straightforward way.)
Here, we’ve written the next-state st+1 as a linear function of the current
state and action; but of course, non-linear functions are also possible. Specif-
ically, one can learn a model st+1 = Aφs (st ) + Bφa (at ), where φs and φa are
some non-linear feature mappings of the states and actions. Alternatively,
one can also use non-linear learning algorithms, such as locally weighted lin-
ear regression, to learn to estimate st+1 as a function of st and at . These
approaches can also be used to build either deterministic or stochastic sim-
ulators of an MDP.
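Below is a minimal sketch of fitting such a linear model from logged trajectories by least squares; stacking [s_t; a_t] and regressing s_{t+1} on it is one standard way to do this, and the names (states, actions) are illustrative.

import numpy as np

def fit_linear_model(states, actions):
    """Fit s_{t+1} ~ A s_t + B a_t by least squares over all logged transitions.

    states  : list of n arrays, each of shape (T+1, d)   -- s_0^(i), ..., s_T^(i)
    actions : list of n arrays, each of shape (T, d_a)   -- a_0^(i), ..., a_{T-1}^(i)
    Returns (A, B, Sigma) where A, B minimize sum_i sum_t ||s_{t+1} - A s_t - B a_t||^2
    and Sigma is the empirical covariance of the residuals.
    """
    X, Y = [], []
    for S, U in zip(states, actions):
        for t in range(U.shape[0]):
            X.append(np.concatenate([S[t], U[t]]))    # regressor [s_t; a_t]
            Y.append(S[t + 1])                        # target s_{t+1}
    X, Y = np.array(X), np.array(Y)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)         # W^T = [A B]
    d = states[0].shape[1]
    A, B = W[:d].T, W[d:].T
    Sigma = np.cov((Y - X @ W).T)                     # estimate of the noise covariance
    return A, B, Sigma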
2. Initialize θ := 0.

3. Repeat {

       For i = 1, . . . , n {

           For each action a ∈ A {

               Sample s'_1, . . . , s'_k ∼ P_{s^{(i)}a} (using a model of the MDP).

               Set q(a) = \frac{1}{k} \sum_{j=1}^k \left( R(s^{(i)}) + \gamma V(s'_j) \right).

           }

           Set y^{(i)} = \max_{a \in A} q(a).

       }

       // In the original value iteration algorithm (over discrete states)
       // we updated the value function according to V(s^{(i)}) := y^{(i)}.
       // In this algorithm, we want V(s^{(i)}) ≈ y^{(i)}, which we'll achieve
       // using supervised learning (linear regression).

       Set θ := \arg\min_\theta \frac{1}{2} \sum_{i=1}^n \left( \theta^T \phi(s^{(i)}) - y^{(i)} \right)^2.

   }
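The sketch below implements this loop for a linear value function V(s) = θ^T φ(s); the feature map phi, the simulator sample_next_state, and the reward function R are placeholders the caller would supply.

import numpy as np

def fitted_value_iteration(S, A, phi, R, sample_next_state, gamma, k=10, n_iters=50):
    """Fitted value iteration with V(s) = theta^T phi(s).

    S                 : list of n sampled states s^(1), ..., s^(n).
    A                 : list of actions.
    phi               : feature map, phi(s) -> array of shape (p,).
    R                 : reward function, R(s) -> float.
    sample_next_state : simulator, sample_next_state(s, a) -> s' ~ P_sa.
    Returns theta, the fitted parameter vector.
    """
    Phi = np.array([phi(s) for s in S])              # (n, p) design matrix
    theta = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        V = lambda s: phi(s) @ theta                 # current value estimate
        y = []
        for s in S:
            # q(a) approximates R(s) + gamma * E_{s' ~ P_sa}[V(s')] with k samples.
            q = [np.mean([R(s) + gamma * V(sample_next_state(s, a))
                          for _ in range(k)]) for a in A]
            y.append(max(q))                         # y^(i) = max_a q(a)
        # Supervised learning step: least squares so that theta^T phi(s^(i)) ~ y^(i).
        theta, *_ = np.linalg.lstsq(Phi, np.array(y), rcond=None)
    return theta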
Above, we had written out fitted value iteration using linear regression
as the algorithm to try to make V (s(i) ) close to y (i) . That step of the algo-
rithm is completely analogous to a standard supervised learning (regression)
problem in which we have a training set (x(1) , y (1) ), (x(2) , y (2) ), . . . , (x(n) , y (n) ),
and want to learn a function mapping from x to y; the only difference is that
here s plays the role of x. Even though our description above used linear re-
gression, clearly other regression algorithms (such as locally weighted linear
regression) can also be used.
Unlike value iteration over a discrete set of states, fitted value iteration cannot be proved to always converge. However, in practice, it often does
converge (or approximately converge), and works well for many problems.
Note also that if we are using a deterministic simulator/model of the MDP,
then fitted value iteration can be simplified by setting k = 1 in the algorithm.
This is because the expectation in Equation (7) becomes an expectation over
a deterministic distribution, and so a single example is sufficient to exactly
compute that expectation. Otherwise, in the algorithm above, we had to
draw k samples, and average to try to approximate that expectation (see the
definition of q(a), in the algorithm pseudo-code).
Finally, fitted value iteration outputs V, which is an approximation to V^*. This implicitly defines our policy. Specifically, when our system is in some state s, and we need to choose an action, we would like to choose the action

\arg\max_a E_{s' \sim P_{sa}}[V(s')]   (8)
In other words, here we are just setting ε_t = 0 (i.e., ignoring the noise in the simulator), and setting k = 1. Equivalently, this can be derived from Equation (8) using the approximation

E_{s'}[V(s')] \approx V(E_{s'}[s']),

where here the expectation is over the random s' ∼ P_{sa}. So long as the noise terms ε_t are small, this will usually be a reasonable approximation.
However, for problems that don’t lend themselves to such approximations,
having to sample k|A| states using the model, in order to approximate the
expectation above, can be computationally expensive.
CS229 Lecture notes
Tengyu Ma
Part XV
Policy Gradient
(REINFORCE)
We will present a model-free algorithm called REINFORCE that does not
require the notion of value functions and Q functions. It turns out to be more
convenient to introduce REINFORCE in the finite horizon case, which will
be assumed throughout this note: we use τ = (s0 , a0 , . . . , sT −1 , aT −1 , sT ) to
denote a trajectory, where T < ∞ is the length of the trajectory. Moreover,
REINFORCE only applies to learning a randomized policy. We use πθ (a|s)
to denote the probability of the policy πθ outputting the action a at state s.
The other notations will be the same as in previous lecture notes.
The advantage of applying REINFORCE is that we only need to assume
that we can sample from the transition probabilities {Psa } and can query the
reward function R(s, a) at state s and action a,1 but we do not need to know
the analytical form of the transition probabilities or the reward function.
We do not explicitly learn the transition probabilities or the reward function
either.
Let s_0 be sampled from some distribution µ. We consider optimizing the expected total payoff of the policy π_θ over the parameter θ, defined as

\eta(\theta) \triangleq E\left[\sum_{t=0}^{T-1} \gamma^t R(s_t, a_t)\right].   (1)
Recall that s_t ∼ P_{s_{t-1} a_{t-1}} and a_t ∼ π_θ(·|s_t). Also note that η(θ) = E_{s_0 \sim P}[V^{π_θ}(s_0)] if we ignore the difference between finite and infinite horizon.
^1 In these notes we will work with the general setting where the reward depends on both the state and the action.
Now we have a sample-based estimator for ∇_θ E_{τ∼P_θ}[f(τ)]. Let τ^{(1)}, . . . , τ^{(n)} be n empirical samples from P_θ (which are obtained by running the policy π_θ for n times, with T steps for each run). We can estimate the gradient of η(θ) by

\nabla_\theta \eta(\theta) \approx \frac{1}{n} \sum_{i=1}^n \nabla_\theta \log P_\theta(\tau^{(i)}) \cdot f(\tau^{(i)}).

To compute ∇_θ log P_θ(τ), note that the density of a trajectory factors as
Pθ (τ ) = µ(s0 )πθ (a0 |s0 )Ps0 a0 (s1 )πθ (a1 |s1 )Ps1 a1 (s2 ) · · · PsT −1 aT −1 (sT ) (6)
log Pθ (τ ) = log µ(s0 ) + log πθ (a0 |s0 ) + log Ps0 a0 (s1 ) + log πθ (a1 |s1 )
+ log Ps1 a1 (s2 ) + · · · + log PsT −1 aT −1 (sT ) (7)
∇θ log Pθ (τ ) = ∇θ log πθ (a0 |s0 ) + ∇θ log πθ (a1 |s1 ) + · · · + ∇θ log πθ (aT −1 |sT −1 )
Note that many of the terms disappear because they don’t depend on θ and
thus have zero gradients. (This is somewhat important — we don’t know how
to evaluate those terms such as log Ps0 a0 (s1 ) because we don’t have access to
the transition probabilities, but luckily those terms have zero gradients!)
Plugging the equation above into equation (4), we conclude that

\nabla_\theta \eta(\theta) = \nabla_\theta E_{\tau \sim P_\theta}[f(\tau)] = E_{\tau \sim P_\theta}\left[\left(\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t)\right) \cdot f(\tau)\right]

= E_{\tau \sim P_\theta}\left[\left(\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t)\right) \cdot \left(\sum_{t=0}^{T-1} \gamma^t R(s_t, a_t)\right)\right]   (8)
Interpretation of the policy gradient formula (8). The quantity \nabla_\theta \log P_\theta(\tau) = \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t) is intuitively the direction of the change of θ that will make the trajectory τ more likely to occur (or increase the probability of choosing the actions a_0, . . . , a_{t-1}), and f(τ) is the total payoff of this trajectory. Thus, by taking a gradient step, intuitively we are trying to
this trajectory. Thus, by taking a gradient step, intuitively we are trying to
improve the likelihood of all the trajectories, but with a different emphasis
or weight for each τ (or for each set of actions a0 , a1 , . . . , at−1 ). If τ is very
rewarding (that is, f (τ ) is large), we try very hard to move in the direction
4
that can increase the probability of the trajectory τ (or the direction that
increases the probability of choosing a0 , . . . , at−1 ), and if τ has low payoff,
we try less hard with a smaller weight.
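To make the estimator concrete, here is a small sketch of the sample-based policy gradient for a linear softmax policy over discrete actions; the environment interface (env.reset/env.step returning states and rewards) and the featurization are illustrative assumptions.

import numpy as np

def softmax_policy(theta, s):
    """pi_theta(a|s) for a linear softmax policy; theta has shape (|A|, d), s shape (d,)."""
    logits = theta @ s
    p = np.exp(logits - logits.max())
    return p / p.sum()

def grad_log_pi(theta, s, a):
    """grad_theta log pi_theta(a|s) for the softmax policy above."""
    p = softmax_policy(theta, s)
    g = -np.outer(p, s)        # -pi(a'|s) * s on every row a'
    g[a] += s                  # plus s on the row of the chosen action
    return g

def reinforce_gradient(theta, env, n_trajectories=100, T=50, gamma=0.99):
    """Monte Carlo estimate of grad eta(theta), as in equation (8)."""
    grad = np.zeros_like(theta)
    for _ in range(n_trajectories):
        s = env.reset()
        score, payoff = np.zeros_like(theta), 0.0
        for t in range(T):
            p = softmax_policy(theta, s)
            a = np.random.choice(len(p), p=p)
            score += grad_log_pi(theta, s, a)      # sum_t grad log pi(a_t|s_t)
            s, r = env.step(a)                     # assumed to return (next state, reward)
            payoff += gamma**t * r                 # f(tau) = sum_t gamma^t R(s_t, a_t)
        grad += score * payoff
    return grad / n_trajectories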
An interesting fact that follows from formula (3) is that

E_{\tau \sim P_\theta}\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t|s_t)\right] = 0.   (9)

To see this, we take f(τ) = 1 (that is, the reward is always a constant); then the LHS of (8) is zero because the payoff is always a fixed constant \sum_{t=0}^{T-1} \gamma^t. Thus the RHS of (8) is also zero, which implies (9).
In fact, one can verify that E_{a_t \sim \pi_\theta(\cdot|s_t)}[\nabla_\theta \log \pi_\theta(a_t|s_t)] = 0 for any fixed t and s_t.^2 This fact has two consequences. First, we can simplify formula (8) to

\nabla_\theta \eta(\theta) = \sum_{t=0}^{T-1} E_{\tau \sim P_\theta}\left[\nabla_\theta \log \pi_\theta(a_t|s_t) \cdot \left(\sum_{j=0}^{T-1} \gamma^j R(s_j, a_j)\right)\right]

= \sum_{t=0}^{T-1} E_{\tau \sim P_\theta}\left[\nabla_\theta \log \pi_\theta(a_t|s_t) \cdot \left(\sum_{j \ge t}^{T-1} \gamma^j R(s_j, a_j)\right)\right]   (10)
Again here we used the law of total expectation. The outer expecta-
tion in the second line above is over the randomness of s0 , a0 , . . . , at−1 , st ,
whereas the inner expectation is over the randomness of at (conditioned on
s0 , a0 , . . . , at−1 , st .) It follows from equation (10) and the equation above that
\nabla_\theta \eta(\theta) = \sum_{t=0}^{T-1} E_{\tau \sim P_\theta}\left[\nabla_\theta \log \pi_\theta(a_t|s_t) \cdot \left(\sum_{j \ge t}^{T-1} \gamma^j R(s_j, a_j) - \gamma^t B(s_t)\right)\right]

= \sum_{t=0}^{T-1} E_{\tau \sim P_\theta}\left[\nabla_\theta \log \pi_\theta(a_t|s_t) \cdot \gamma^t \left(\sum_{j \ge t}^{T-1} \gamma^{j-t} R(s_j, a_j) - B(s_t)\right)\right]   (11)
Therefore, we will get a different estimator for ∇_θ η(θ) with a different choice of B(·). The benefit of introducing a proper B(·) — which is often referred to as a baseline — is that it helps reduce the variance of the estimator.^3 It turns out that a near-optimal estimator would be the expected future payoff E\left[\sum_{j \ge t}^{T-1} \gamma^{j-t} R(s_j, a_j) \mid s_t\right], which is pretty much the same as the value function V^{\pi_\theta}(s_t) (if we ignore the difference between finite and infinite horizon). Here one could estimate the value function V^{\pi_\theta}(\cdot) in a crude way, because its precise value doesn't influence the mean of the estimator but only the variance. This leads to a policy gradient algorithm with baselines stated in Algorithm 1.^4
^3 As a heuristic but illustrative example, suppose that for a fixed t, the future reward \sum_{j \ge t}^{T-1} \gamma^{j-t} R(s_j, a_j) randomly takes the two values 1000 + 1 and 1000 − 2 with equal probability, and the corresponding values of \nabla_\theta \log \pi_\theta(a_t|s_t) are the vectors z and −z. (Note that because E[\nabla_\theta \log \pi_\theta(a_t|s_t)] = 0, if \nabla_\theta \log \pi_\theta(a_t|s_t) can only take two values uniformly, then the two values have to be vectors pointing in opposite directions.) In this case, without subtracting the baseline, the estimator takes the two values (1000 + 1)z and −(1000 − 2)z, whereas after subtracting a baseline of 1000, the estimator has the two values z and 2z. The latter estimator has much lower variance compared to the original estimator.
^4 We note that the estimator of the gradient in the algorithm does not exactly match equation (11). If we multiply by γ^t in the summand of equation (13), then they will match exactly. Removing such discount factors empirically works well because it gives larger updates.
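Since Algorithm 1 itself is not reproduced here, the following sketch shows one way the baseline idea of equation (11) is typically used: replace the total payoff in the earlier reinforce_gradient with the reward-to-go minus a crude value baseline. The baseline callable and other details are illustrative, not the algorithm from the notes.

import numpy as np

def reinforce_with_baseline_gradient(theta, env, baseline, T=50, gamma=0.99,
                                     n_trajectories=100):
    """Policy gradient estimate using reward-to-go and a baseline B(s_t).

    baseline : callable B(t, s) -> float, e.g. a running average of observed
               reward-to-go at step t (a crude stand-in for V^{pi_theta}(s_t)).
    """
    grad = np.zeros_like(theta)
    for _ in range(n_trajectories):
        s = env.reset()
        states, actions, rewards = [], [], []
        for t in range(T):                            # roll out one trajectory
            p = softmax_policy(theta, s)
            a = np.random.choice(len(p), p=p)
            states.append(s); actions.append(a)
            s, r = env.step(a)
            rewards.append(r)
        # Reward-to-go: G_t = sum_{j >= t} gamma^{j-t} R(s_j, a_j).
        G, running = np.zeros(T), 0.0
        for t in reversed(range(T)):
            running = rewards[t] + gamma * running
            G[t] = running
        for t in range(T):
            # (Per footnote 4, the gamma^t factor from equation (11) is dropped here.)
            advantage = G[t] - baseline(t, states[t])
            grad += grad_log_pi(theta, states[t], actions[t]) * advantage
    return grad / n_trajectories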
1 Boosting
We have seen so far how to solve classification (and other) problems when we
have a data representation already chosen. We now talk about a procedure,
known as boosting, which was originally discovered by Rob Schapire, and
further developed by Schapire and Yoav Freund, that automatically chooses
feature representations. We take an optimization-based perspective, which
is somewhat different from the original interpretation and justification of
Freund and Schapire, but which lends itself to our approach of (1) choose a
representation, (2) choose a loss, and (3) minimize the loss.
Before formulating the problem, we give a little intuition for what we
are going to do. Roughly, the idea of boosting is to take a weak learning
algorithm—any learning algorithm that gives a classifier that is slightly bet-
ter than random—and transform it into a strong classifier, which does much,
much better than random. To build a bit of intuition for what this means,
consider a hypothetical digit recognition experiment, where we wish to dis-
tinguish 0s from 1s, and we receive images we must classify. Then a natural
weak learner might be to take the middle pixel of the image, and if it is
colored, call the image a 1, and if it is blank, call the image a 0. This clas-
sifier may be far from perfect, but it is likely better than random. Boosting
procedures proceed by taking a collection of such weak classifiers, and then
reweighting their contributions to form a classifier with much better accuracy
than any individual classifier.
With that in mind, let us formulate the problem. Our interpretation of
boosting is as a coordinate descent method in an infinite dimensional space,
which—while it sounds complex—is not so bad as it seems. First, we assume
we have raw input examples x ∈ Rn with labels y ∈ {−1, 1}, as is usual in
binary classification. We also assume we have an infinite collection of feature
functions φ_j : \mathbb{R}^n \to \{-1, 1\} and an infinite vector θ = [θ_1\ θ_2\ \cdots]^T, but which we assume always has only a finite number of non-zero entries. For our classifier we use

h_\theta(x) = \operatorname{sign}\left(\sum_{j=1}^\infty \theta_j \phi_j(x)\right).

We will abuse notation and define \theta^T \phi(x) = \sum_{j=1}^\infty \theta_j \phi_j(x).
In boosting, one usually calls the features φj weak hypotheses. Given a
training set (x(1) , y (1) ), . . . , (x(m) , y (m) ), we call a vector p = (p(1) , . . . , p(m) ) a
distribution on the examples if p(i) ≥ 0 for all i and
\sum_{i=1}^m p^{(i)} = 1.
Then we say that there is a weak learner with margin γ > 0 if for any
distribution p on the m training examples there exists one weak hypothesis
φj such that
\sum_{i=1}^m p^{(i)} 1\{y^{(i)} \neq \phi_j(x^{(i)})\} \leq \frac{1}{2} - \gamma.   (1)
That is, we assume that there is some classifier that does slightly better than
random guessing on the dataset. The existence of a weak learning algorithm
is an assumption, but the surprising thing is that we can transform any weak
learning algorithm into one with perfect accuracy.
In more generality, we assume we have access to a weak learner, which is an algorithm that takes as input a distribution (weights) p on the training examples and returns a classifier doing slightly better than random. Concretely, the weak learner takes as input a distribution p^{(1)}, . . . , p^{(m)} with \sum_{i=1}^m p^{(i)} = 1 and p^{(i)} \geq 0, together with the training set \{(x^{(i)}, y^{(i)})\}_{i=1}^m, and returns a weak hypothesis \phi_j satisfying

\sum_{i=1}^m p^{(i)} 1\{y^{(i)} \neq \phi_j(x^{(i)})\} \leq \frac{1}{2} - \gamma.

We will
show how, given access to a weak learning algorithm, boosting can return a
classifier with perfect accuracy on the training data. (Admittedly, we would
like the classifier to generalize well to unseen data, but for now, we ignore
this issue.)
We first show how to compute the exact form of the coordinate descent
update for the risk J(θ). Coordinate descent iterates as follows:
(i) Choose a coordinate j ∈ \mathbb{N}.

(ii) Update θ_j to

\theta_j := \arg\min_{\theta_j} J(\theta),

leaving all other coordinates of θ fixed.
Define w^{(i)} := \frac{1}{m} \exp\left(-y^{(i)} \sum_{j \neq k} \theta_j \phi_j(x^{(i)})\right) to be a weight, and note that optimizing coordinate k corresponds to minimizing

\sum_{i=1}^m w^{(i)} \exp(-y^{(i)} \phi_k(x^{(i)}) \alpha)

in α = θ_k. Now, define

W^+ := \sum_{i : y^{(i)} \phi_k(x^{(i)}) = 1} w^{(i)} \quad \text{and} \quad W^- := \sum_{i : y^{(i)} \phi_k(x^{(i)}) = -1} w^{(i)}.
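Carrying out the minimization over α is a short calculation: setting the derivative of W^+ e^{-\alpha} + W^- e^{\alpha} to zero gives -W^+ e^{-\alpha} + W^- e^{\alpha} = 0, so the optimal coordinate step is \alpha = \frac{1}{2}\log(W^+/W^-), and the attained minimum value is W^+ e^{-\alpha} + W^- e^{\alpha} = 2\sqrt{W^+ W^-}.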
Then (this is the progress guarantee of Lemma 2.1, proved in Appendix A.1)

J(\theta^{(t)}) \leq \sqrt{1 - 4\gamma^2}\, J(\theta^{(t-1)}).
As the proof of the lemma is somewhat involved and not the central focus of
these notes—though it is important to know one’s algorithm will converge!—
we defer the proof to Appendix A.1. Let us describe how it guarantees
convergence of the boosting procedure to a classifier with zero training error.
We initialize the procedure at θ^{(0)} = \vec{0}, so that the initial empirical risk J(θ^{(0)}) = 1. Now, we note that for any θ, the misclassification error satisfies

1\{\operatorname{sign}(\theta^T \phi(x)) \neq y\} = 1\{y\,\theta^T \phi(x) \leq 0\} \leq \exp(-y\,\theta^T \phi(x)),

so that

\frac{1}{m} \sum_{i=1}^m 1\{\operatorname{sign}(\theta^T \phi(x^{(i)})) \neq y^{(i)}\} \leq J(\theta),

and so if J(θ) < \frac{1}{m} then the vector θ makes no mistakes on the training data.
After t iterations of boosting, we find that the empirical risk satisfies

J(\theta^{(t)}) \leq (1 - 4\gamma^2)^{t/2} J(\theta^{(0)}) = (1 - 4\gamma^2)^{t/2}.

To find how many iterations are required to guarantee J(θ^{(t)}) < \frac{1}{m}, we take logarithms to find that J(θ^{(t)}) < 1/m if

\frac{t}{2} \log(1 - 4\gamma^2) < \log\frac{1}{m}, \quad \text{or} \quad t > \frac{2 \log m}{-\log(1 - 4\gamma^2)}.

Using a first-order Taylor expansion, that is, that \log(1 - 4\gamma^2) \leq -4\gamma^2, we see that if the number of rounds of boosting—the number of weak classifiers we use—satisfies

t > \frac{\log m}{2\gamma^2} \geq \frac{2 \log m}{-\log(1 - 4\gamma^2)},

then J(θ^{(t)}) < \frac{1}{m}.
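For instance, with m = 1000 training examples and a weak-learner margin of γ = 0.1, this bound asks for roughly t > \log(1000)/(2 \cdot 0.01) \approx 346 rounds of boosting before the training error is guaranteed to be zero.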
3 Implementing weak-learners
One of the major advantages of boosting algorithms is that they automat-
ically generate features from raw data for us. Moreover, because the weak
hypotheses always return values in {−1, 1}, there is no need to normalize fea-
tures to have similar scales when using learning algorithms, which in practice
can make a large difference. Additionally, and while this is not theoret-
ically well-understood, many types of weak-learning procedures introduce
non-linearities intelligently into our classifiers, which can yield much more
expressive models than the simpler linear models of the form θT x that we
have seen so far.
One standard weak learner is the decision stump: a classifier of the form

\phi_{j,s}(x) = \operatorname{sign}(x_j - s),   (2)

which thresholds the single coordinate j of x at the value s. These classifiers are simple enough that we can fit them efficiently even to a weighted dataset, as we now describe.
Indeed, a decision stump weak learner proceeds as follows. We begin with
a distribution—set of weights p(1) , . . . , p(m) summing to 1—on the training
set, and we wish to choose a decision stump of the form (2) to minimize the
error on the training set. That is, we wish to find a threshold s ∈ R and
index j such that
\widehat{\mathrm{Err}}(\phi_{j,s}, p) = \sum_{i=1}^m p^{(i)} 1\{\phi_{j,s}(x^{(i)}) \neq y^{(i)}\} = \sum_{i=1}^m p^{(i)} 1\{y^{(i)}(x_j^{(i)} - s) \leq 0\}   (3)

is minimized.
As the only values s for which the error of the decision stump can change are the values x_j^{(i)}, a bit of clever book-keeping allows us to compute

\sum_{i=1}^m p^{(i)} 1\{y^{(i)}(x_j^{(i)} - s) \leq 0\} = \sum_{k=1}^m p^{(i_k)} 1\{y^{(i_k)}(x_j^{(i_k)} - s) \leq 0\}

efficiently for every candidate threshold, where i_1, . . . , i_m index the examples with the values x_j^{(i_k)} sorted, so that a single pass over the sorted values evaluates all m thresholds.
(You should convince yourself that this is true.) Thus, it is important to also track the smallest value of 1 - \widehat{\mathrm{Err}}(\phi_{j,s}, p) over all thresholds, because this may be smaller than \widehat{\mathrm{Err}}(\phi_{j,s}, p), in which case negating the stump gives a better weak learner. Using this procedure for our weak learner (Fig. 1) gives the basic, but extremely useful, boosting classifier.
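To tie the pieces together, here is a compact sketch of the boosting loop with decision-stump weak learners; the step size α = ½ log(W⁺/W⁻) follows the coordinate-descent calculation above, and the helper names are illustrative.

import numpy as np

def fit_stump(X, y, p):
    """Return (j, s, sign) for the decision stump minimizing the weighted error (3)."""
    m, n = X.shape
    best = (0, 0.0, 1, np.inf)                        # (j, s, sign, error)
    for j in range(n):
        for s in np.unique(X[:, j]):                  # only observed values matter as thresholds
            pred = np.sign(X[:, j] - s); pred[pred == 0] = 1
            err = p @ (pred != y)
            for sign, e in ((1, err), (-1, 1.0 - err)):   # also consider the negated stump
                if e < best[3]:
                    best = (j, s, sign, e)
    return best[:3]

def boost(X, y, T):
    """Boosted decision stumps; y in {-1, +1}. Returns a list of (stump, alpha) pairs."""
    m = X.shape[0]
    w = np.ones(m) / m                                # weights w^(i), so sum_i w^(i) = J(theta)
    ensemble = []
    for _ in range(T):
        p = w / w.sum()                               # distribution handed to the weak learner
        j, s, sign = fit_stump(X, y, p)
        margin = sign * np.sign(X[:, j] - s) * y      # y^(i) * phi(x^(i))
        W_plus, W_minus = w[margin == 1].sum(), w[margin == -1].sum()
        alpha = 0.5 * np.log((W_plus + 1e-12) / (W_minus + 1e-12))   # epsilon guards a perfect stump
        ensemble.append(((j, s, sign), alpha))
        w *= np.exp(-alpha * margin)                  # update the weights w^(i)
    return ensemble

def predict(ensemble, X):
    score = sum(alpha * sign * np.sign(X[:, j] - s) for (j, s, sign), alpha in ensemble)
    return np.sign(score)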
3.2 Example
We now give an example showing the behavior of boosting on a simple
dataset. In particular, we consider a problem with data points x ∈ R2 ,
where the optimal classifier is
y = \begin{cases} 1 & \text{if } x_1 < .6 \text{ and } x_2 < .6 \\ -1 & \text{otherwise.} \end{cases}   (4)
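As a usage example of the boosting sketch above, one can generate data from this distribution and run a few rounds (the sample size is arbitrary):

import numpy as np

# Draw points uniformly from [0, 1]^2 and label them with the optimal classifier (4).
rng = np.random.default_rng(0)
X = rng.uniform(size=(150, 2))
y = np.where((X[:, 0] < 0.6) & (X[:, 1] < 0.6), 1, -1)

ensemble = boost(X, y, T=10)                 # 10 rounds of boosted decision stumps
train_error = np.mean(predict(ensemble, X) != y)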
8
1
2 Iterations 1
4 Iterations
0.9 0.9
0.8 0.8
0.7 0.7
0.6 0.6
0.5 0.5
0.4 0.4
0.3 0.3
0.2 0.2
0.1 0.1
0 0
0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1
1
5 Iterations 1
10 Iterations
0.9 0.9
0.8 0.8
0.7 0.7
0.6 0.6
0.5 0.5
0.4 0.4
0.3 0.3
0.2 0.2
0.1 0.1
0 0
0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1
9
3.3 Other strategies
There are a huge number of variations on the basic boosted decision stumps
idea. First, we do not require that the input features xj be real-valued. Some
of them may be categorical, meaning that xj ∈ {1, 2, . . . , k} for some k, in
which case natural decision stumps are of the form
\phi_j(x) = \begin{cases} 1 & \text{if } x_j = l \\ -1 & \text{otherwise,} \end{cases}

for each value l ∈ \{1, 2, . . . , k\}.
A Appendices
A.1 Proof of Lemma 2.1
We now return to prove the progress lemma. We prove this result by directly
showing the relationship of the weights at time t to those at time t − 1. In
particular, we note by inspection that
J(\theta^{(t)}) = \min_\alpha \left\{ W_t^+ e^{-\alpha} + W_t^- e^{\alpha} \right\} = 2\sqrt{W_t^+ W_t^-}

while

J(\theta^{(t-1)}) = \frac{1}{m} \sum_{i=1}^m \exp\left(-y^{(i)} \sum_{\tau=1}^{t-1} \theta_\tau \phi_\tau(x^{(i)})\right) = W_t^+ + W_t^-.
Rewriting this expression by noting that the sum on the right is nothing but
Wt− , we have
W_t^- \leq \left(\frac{1}{2} - \gamma\right)(W_t^+ + W_t^-), \quad \text{or} \quad W_t^+ \geq \frac{1 + 2\gamma}{1 - 2\gamma} W_t^-.