Machine Learning Programming Exercises
Andrew Ng
Stanford University
Contents
Programming Exercise 1: Linear Regression
Programming Exercise 2: Logistic Regression
Programming Exercise 3: Multi-class Classification and Neural Networks
Programming Exercise 4: Neural Networks Learning
Programming Exercise 5: Regularized Linear Regression and Bias v.s. Variance
Programming Exercise 6: Support Vector Machines
Programming Exercise 7: K-means Clustering and Principal Component Analysis
Introduction
In this exercise, you will implement linear regression and get to see it work
on data. Before starting on this programming exercise, we strongly recom-
mend watching the video lectures and completing the review questions for
the associated topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
You can also find instructions for installing Octave on the “Octave In-
stallation” page on the course website.
Throughout the exercise, you will be using the scripts ex1.m and ex1_multi.m.
These scripts set up the dataset for the problems and make calls to functions
that you will write. You do not need to modify either of them. You are only
required to modify functions in other files, by following the instructions in
this assignment.
For this programming exercise, you are only required to complete the first
part of the exercise to implement linear regression with one variable. The
second part of the exercise, which you may complete for extra credit, covers
linear regression with multiple variables.
The first part of ex1.m is a warm-up exercise in which you will fill in a
function so that it returns a 5 × 5 identity matrix, using the following line:
A = eye(5);
When you are finished, run ex1.m (assuming you are in the correct direc-
tory, type “ex1” at the Octave prompt) and you should see output similar
to the following:
[1] Octave is a free alternative to MATLAB. For the programming exercises, you are free
to use either Octave or MATLAB.
ans =
Diagonal Matrix
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
Now ex1.m will pause until you press any key, and then will run the code
for the next part of the assignment. If you wish to quit, typing ctrl-c will
stop the program in the middle of its run.
You are allowed to submit your solutions multiple times, and we will take
only the highest score into consideration. To prevent rapid-fire guessing, the
system enforces a minimum of 5 minutes between submissions.
The file ex1data1.txt contains the dataset for our linear regression prob-
lem. The first column is the population of a city and the second column is
the profit of a food truck in that city. A negative value for profit indicates a
loss.
The ex1.m script has already been set up to load this data for you.
Next, the script calls the plotData function to create a scatter plot of
the data. Your job is to complete plotData.m to draw the plot; modify the
file and fill in the following code:
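A minimal sketch of that code (an assumption about what you might write, taking plotData to receive the population data as x and the profit data as y; the 'rx' marker option and the axis labels match Figure 1):
plot(x, y, 'rx', 'MarkerSize', 10);          % plot the data as red "x" markers
ylabel('Profit in $10,000s');                % label the y-axis
xlabel('Population of City in 10,000s');     % label the x-axis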
Now, when you continue to run ex1.m, your end result should look like
Figure 1, with the same red “x” markers and axis labels.
To learn more about the plot command, you can type help plot at the
Octave command prompt or search online for plotting documentation. (To
change the markers to red “x”, we used the option ‘rx’ together with the plot
command, i.e., plot(..,[your options here],.., ‘rx’); )
[Figure 1: Scatter plot of the training data. x-axis: Population of City in 10,000s; y-axis: Profit in $10,000s.]
Recall that the linear regression hypothesis is given by the linear model
$$h_\theta(x) = \theta^T x = \theta_0 + \theta_1 x_1$$
Recall that the parameters of your model are the θj values. These are
the values you will adjust to minimize cost J(θ). One way to do this is to
use the batch gradient descent algorithm. In batch gradient descent, each
iteration performs the update
$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} \qquad \text{(simultaneously update } \theta_j \text{ for all } j\text{).}$$
With each step of gradient descent, your parameters θj come closer to the
optimal values that will achieve the lowest cost J(θ).
2.2.2 Implementation
In ex1.m, we have already set up the data for linear regression. In the
following lines, we add another dimension to our data to accommodate the
θ0 intercept term. We also initialize the parameters to 0 and the
learning rate alpha to 0.01.
iterations = 1500;
alpha = 0.01;
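Before the gradient descent section below, you will complete computeCost.m, which computes J(θ) for linear regression. A minimal vectorized sketch (the exact variable names are an assumption; X is taken to already contain the column of ones):
m = length(y);                           % number of training examples
J = sum((X * theta - y).^2) / (2 * m);   % squared-error cost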
You should now submit “compute cost” for linear regression with one
variable.
2.2.4 Gradient descent
Next, you will implement gradient descent in the file gradientDescent.m.
The loop structure has been written for you, and you only need to supply
the updates to θ within each iteration.
As you program, make sure you understand what you are trying to opti-
mize and what is being updated. Keep in mind that the cost J(θ) is parame-
terized by the vector θ, not X and y. That is, we minimize the value of J(θ)
by changing the values of the vector θ, not by changing X or y. Refer to the
equations in this handout and to the video lectures if you are uncertain.
A good way to verify that gradient descent is working correctly is to look
at the value of J(θ) and check that it is decreasing with each step. The
starter code for gradientDescent.m calls computeCost on every iteration
and prints the cost. Assuming you have implemented gradient descent and
computeCost correctly, your value of J(θ) should never increase, and should
converge to a steady value by the end of the algorithm.
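As a reference point, the update inside the provided loop can be written in a single vectorized line (a sketch; it assumes X already includes the column of ones):
theta = theta - (alpha / m) * X' * (X * theta - y);   % simultaneous update of all theta_j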
After you are finished, ex1.m will use your final parameters to plot the
linear fit. The result should look something like Figure 2:
Your final values for θ will also be used to make predictions on profits in
areas of 35,000 and 70,000 people. Note the way that the following lines in
ex1.m use matrix multiplication, rather than explicit summation or loop-
ing, to calculate the predictions. This is an example of code vectorization in
Octave.
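For instance, the predictions have roughly the following shape (a sketch; populations are in units of 10,000s and the leading 1 accounts for the intercept term):
predict1 = [1, 3.5] * theta;   % predicted profit for a city of 35,000 people
predict2 = [1, 7.0] * theta;   % predicted profit for a city of 70,000 people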
You should now submit gradient descent for linear regression with one
variable.
2.3 Debugging
Here are some things to keep in mind as you implement gradient descent:
Octave array indices start from one, not zero. If you’re storing θ0 and
θ1 in a vector called theta, the values will be theta(1) and theta(2).
If you are seeing many errors at runtime, inspect your matrix operations
to make sure that you’re adding and multiplying matrices of compat-
ible dimensions. Printing the dimensions of variables with the size
command will help you debug.
[Figure 2: Training data with linear regression fit. x-axis: Population of City in 10,000s; y-axis: Profit in $10,000s. Legend: Training data, Linear regression.]
% initialize J_vals to a matrix of 0's
J_vals = zeros(length(theta0_vals), length(theta1_vals));
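A sketch of the loop that fills in J_vals (assuming theta0_vals and theta1_vals hold the grid of values and computeCost is the function you wrote earlier):
for i = 1:length(theta0_vals)
    for j = 1:length(theta1_vals)
        t = [theta0_vals(i); theta1_vals(j)];
        J_vals(i, j) = computeCost(X, y, t);   % cost at this (theta0, theta1) pair
    end
end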
After these lines are executed, you will have a 2-D array of J(θ) values.
The script ex1.m will then use these values to produce surface and contour
plots of J(θ) using the surf and contour commands. The plots should look
something like Figure 3:
[Figure 3: Surface and contour plots of the cost function J(θ) over θ0 and θ1.]
The purpose of these graphs is to show you how J(θ) varies with
changes in θ0 and θ1 . The cost function J(θ) is bowl-shaped and has a global
minimum. (This is easier to see in the contour plot than in the 3D surface
plot). This minimum is the optimal point for θ0 and θ1 , and each step of
gradient descent moves closer to this point.
Extra Credit Exercises (optional)
If you have successfully completed the material above, congratulations! You
now understand linear regression and should be able to start using it on your
own datasets.
For the rest of this programming exercise, we have included the following
optional extra credit exercises. These exercises will help you gain a deeper
understanding of the material, and if you are able to do so, we encourage
you to complete them as well.
Your task is to complete the code in featureNormalize.m to first subtract
the mean value of each feature from the dataset and then, after subtracting
the mean, additionally scale (divide) the feature values by their respective
“standard deviations.”
The standard deviation is a way of measuring how much variation there
is in the range of values of a particular feature (most data points will lie
within ±2 standard deviations of the mean); this is an alternative to taking
the range of values (max-min). In Octave, you can use the “std” function to
compute the standard deviation. For example, inside featureNormalize.m,
the quantity X(:,1) contains all the values of x1 (house sizes) in the training
set, so std(X(:,1)) computes the standard deviation of the house sizes.
At the time that featureNormalize.m is called, the extra column of 1’s
corresponding to x0 = 1 has not yet been added to X (see ex1_multi.m for
details).
You will do this for all the features and your code should work with
datasets of all sizes (any number of features / examples). Note that each
column of the matrix X corresponds to one feature.
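A minimal sketch of featureNormalize.m (an assumption about the details; mu and sigma are also kept so the same transformation can be applied to new examples later, and repmat is used so the code does not rely on automatic broadcasting):
mu = mean(X);                              % 1 x n row vector of feature means
sigma = std(X);                            % 1 x n row vector of standard deviations
X_norm = (X - repmat(mu, size(X, 1), 1)) ./ repmat(sigma, size(X, 1), 1);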
You should now submit feature normalization.
You should now submit compute cost and gradient descent for linear re-
gression with multiple variables.
Implementation Note: In the multivariate case, the cost function can
also be written in the following vectorized form:
$$J(\theta) = \frac{1}{2m} (X\theta - \vec{y})^T (X\theta - \vec{y})$$
where
$$X = \begin{bmatrix} \text{---} \ (x^{(1)})^T \ \text{---} \\ \text{---} \ (x^{(2)})^T \ \text{---} \\ \vdots \\ \text{---} \ (x^{(m)})^T \ \text{---} \end{bmatrix} \qquad \vec{y} = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{bmatrix}.$$
Figure 4: Convergence of gradient descent with an appropriate learning rate
Implementation Note: If your learning rate is too large, J(θ) can di-
verge and ‘blow up’, resulting in values which are too large for computer
calculations. In these situations, Octave will tend to return NaNs. NaN
stands for ‘not a number’ and is often caused by undefined operations
that involve −∞ and +∞.
Octave Tip: To compare how different learning rates affect
convergence, it’s helpful to plot J for several learning rates on the same
figure. In Octave, this can be done by performing gradient descent multi-
ple times with a ‘hold on’ command between plots. Concretely, if you’ve
tried three different values of alpha (you should probably try more values
than this) and stored the costs in J1, J2 and J3, you can use the following
commands to plot them on the same figure:
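For example (a sketch, assuming you plot the first 50 iterations of each run):
plot(1:50, J1(1:50), 'b');
hold on;
plot(1:50, J2(1:50), 'r');
plot(1:50, J3(1:50), 'k');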
The final arguments ‘b’, ‘r’, and ‘k’ specify different colors for the
plots.
Notice the changes in the convergence curves as the learning rate changes.
With a small learning rate, you should find that gradient descent takes a very
long time to converge to the optimal value. Conversely, with a large learning
rate, gradient descent might not converge or might even diverge!
Using the best learning rate that you found, run the ex1_multi.m script
to run gradient descent until convergence to find the final values of θ. Next,
use this value of θ to predict the price of a house with 1650 square feet and
3 bedrooms. You will use this value later to check your implementation of the
normal equations. Don’t forget to normalize your features when you make
this prediction!
You do not need to submit any solutions for these optional (ungraded)
exercises.
Optional (ungraded) exercise: Now, once you have found θ using this
method, use it to make a price prediction for a 1650-square-foot house with
3 bedrooms. You should find that it gives the same predicted price as the value
you obtained using the model fit with gradient descent (in Section 3.2.1).
Submission and Grading
After completing various parts of the assignment, be sure to use the submit
function system to submit your solutions to our servers. The following is a
breakdown of how each part of this exercise is scored.
Programming Exercise 2: Logistic Regression
Machine Learning
Introduction
In this exercise, you will implement logistic regression and apply it to two
different datasets. Before starting on the programming exercise, we strongly
recommend watching the video lectures and completing the review questions
for the associated topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
You can also find instructions for installing Octave on the “Octave In-
stallation” page on the course website.
Throughout the exercise, you will be using the scripts ex2.m and ex2_reg.m.
These scripts set up the dataset for the problems and make calls to functions
that you will write. You do not need to modify either of them. You are only
required to modify functions in other files, by following the instructions in
this assignment.
1 Logistic Regression
In this part of the exercise, you will build a logistic regression model to
predict whether a student gets admitted into a university.
Suppose that you are the administrator of a university department and
you want to determine each applicant’s chance of admission based on their
results on two exams. You have historical data from previous applicants
that you can use as a training set for logistic regression. For each training
example, you have the applicant’s scores on two exams and the admissions
decision.
Your task is to build a classification model that estimates an applicant’s
probability of admission based on the scores from those two exams. This outline
and the framework code in ex2.m will guide you through the exercise.
[Figure 1: Scatter plot of training data. x-axis: Exam 1 score; y-axis: Exam 2 score.]
To help you get more familiar with plotting, we have left plotData.m
empty so you can try to implement it yourself. However, this is an optional
(ungraded) exercise. We also provide our implementation below so you can
copy it or refer to it. If you choose to copy our example, make sure you learn
what each of its commands is doing by consulting the Octave documentation.
% Find Indices of Positive and Negative Examples
pos = find(y==1); neg = find(y == 0);
% Plot Examples
plot(X(pos, 1), X(pos, 2), 'k+','LineWidth', 2, ...
'MarkerSize', 7);
plot(X(neg, 1), X(neg, 2), 'ko', 'MarkerFaceColor', 'y', ...
'MarkerSize', 7);
1.2 Implementation
1.2.1 Warmup exercise: sigmoid function
Before you start with the actual cost function, recall that the logistic regres-
sion hypothesis is defined as:
hθ (x) = g(θT x),
where function g is the sigmoid function. The sigmoid function is defined as:
$$g(z) = \frac{1}{1 + e^{-z}}.$$
Your first step is to implement this function in sigmoid.m so it can be
called by the rest of your program. When you are finished, try testing a few
values by calling sigmoid(x) at the Octave command line. For large positive
values of x, the sigmoid should be close to 1, while for large negative values,
the sigmoid should be close to 0. Evaluating sigmoid(0) should give you
exactly 0.5. Your code should also work with vectors and matrices. For
a matrix, your function should perform the sigmoid function on
every element.
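A one-line sketch that works element-wise on scalars, vectors, and matrices alike:
g = 1 ./ (1 + exp(-z));    % element-wise sigmoid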
You can submit your solution for grading by typing submit at the Octave
command line. The submission script will prompt you for your username and
password and ask you which files you want to submit. You can obtain a sub-
mission password from the website.
Recall that the cost function in logistic regression is
$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y^{(i)} \log(h_\theta(x^{(i)})) - (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right],$$
and the gradient of the cost is a vector of the same length as θ where the j th
element (for j = 0, 1, . . . , n) is defined as follows:
$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$$
Note that while this gradient looks identical to the linear regression gra-
dient, the formula is actually different because linear and logistic regression
have different definitions of hθ (x).
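A vectorized sketch of costFunction.m for the equations above (assuming sigmoid from the previous step and that X already includes the intercept column):
h = sigmoid(X * theta);                          % hypothesis for all examples
J = (-y' * log(h) - (1 - y)' * log(1 - h)) / m;  % logistic regression cost
grad = (X' * (h - y)) / m;                       % gradient, same length as theta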
Once you are done, ex2.m will call your costFunction using the initial
parameters of θ. You should see that the cost is about 0.693.
You should now submit the cost function and gradient for logistic re-
gression. Make two submissions: one for the cost function and one for the
gradient.
You will pass to fminunc the following inputs: the initial values of the
parameters we are trying to optimize, and a function that, when given the
training set and a particular θ, computes the logistic regression cost and
gradient with respect to θ for the dataset (X, y).
In ex2.m, we already have code written to call fminunc with the correct
arguments.
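The call has roughly the following shape (a sketch; the exact comments and the name initial_theta are assumptions):
% Set options: tell fminunc that our function also returns the gradient,
% and limit it to at most 400 iterations
options = optimset('GradObj', 'on', 'MaxIter', 400);

% Run fminunc to obtain the optimal theta and the cost at the optimum
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);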
In this code snippet, we first defined the options to be used with fminunc.
Specifically, we set the GradObj option to on, which tells fminunc that our
function returns both the cost and the gradient. This allows fminunc to
use the gradient when minimizing the function. Furthermore, we set the
MaxIter option to 400, so that fminunc will run for at most 400 steps before
it terminates.
To specify the actual function we are minimizing, we use a “short-hand”
for specifying functions with the @(t) ( costFunction(t, X, y) ) . This
creates a function, with argument t, which calls your costFunction. This
allows us to wrap the costFunction for use with fminunc.
If you have completed the costFunction correctly, fminunc will converge
on the right optimization parameters and return the final values of the cost
and θ. Notice that by using fminunc, you did not have to write any loops
yourself, or set a learning rate like you did for gradient descent. This is all
done by fminunc: you only needed to provide a function calculating the cost
and the gradient.
Once fminunc completes, ex2.m will call your costFunction function
using the optimal parameters of θ. You should see that the cost is about
0.203.
This final θ value will then be used to plot the decision boundary on the
training data, resulting in a figure similar to Figure 2. We also encourage
you to look at the code in plotDecisionBoundary.m to see how to plot such
a boundary using the θ values.
[Figure 2: Training data with decision boundary. x-axis: Exam 1 score; y-axis: Exam 2 score. Legend: Admitted, Not admitted.]
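The prediction function referred to below simply thresholds the hypothesis; a minimal sketch (assuming the starter file is predict.m and a threshold of 0.5):
p = sigmoid(X * theta) >= 0.5;    % 1 where h_theta(x) >= 0.5, 0 otherwise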
You should now submit the prediction function for logistic regression.
2 Regularized logistic regression
In this part of the exercise, you will implement regularized logistic regression
to predict whether microchips from a fabrication plant pass quality assur-
ance (QA). During QA, each microchip goes through various tests to ensure
it is functioning correctly.
Suppose you are the product manager of the factory and you have the
test results for some microchips on two different tests. From these two tests,
you would like to determine whether the microchips should be accepted or
rejected. To help you make the decision, you have a dataset of test results
on past microchips, from which you can build a logistic regression model.
You will use another script, ex2_reg.m, to complete this portion of the
exercise.
[Figure 3: Plot of training data. x-axis: Microchip Test 1; y-axis: Microchip Test 2.]
Figure 3 shows that our dataset cannot be separated into positive and
negative examples by a straight line through the plot. Therefore, a straight-
forward application of logistic regression will not perform well on this dataset
since logistic regression will only be able to find a linear decision boundary.
Recall that the regularized cost function in logistic regression is
$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y^{(i)} \log(h_\theta(x^{(i)})) - (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2.$$
Note that you should not regularize the parameter θ0 . In Octave, re-
call that indexing starts from 1, hence, you should not be regularizing the
theta(1) parameter (which corresponds to θ0 ) in the code. The gradient of
the cost function is a vector where the j th element is defined as follows:
$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} \qquad \text{for } j = 0$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j \qquad \text{for } j \ge 1$$
Once you are done, ex2_reg.m will call your costFunctionReg function
using the initial value of θ (initialized to all zeros). You should see that the
cost is about 0.693.
You should now submit the cost function and gradient for regularized lo-
gistic regression. Make two submissions, one for the cost function and one
for the gradient.
2.5 Optional (ungraded) exercises
In this part of the exercise, you will get to try out different regularization
parameters for the dataset to understand how regularization prevents over-
fitting.
Notice the changes in the decision boundary as you vary λ. With a small
λ, you should find that the classifier gets almost every training example
correct, but draws a very complicated boundary, thus overfitting the data
(Figure 5). This is not a good decision boundary: for example, it predicts
that a point at x = (−0.25, 1.5) is accepted (y = 1), which seems to be an
incorrect decision given the training set.
With a larger λ, you should see a plot that shows a simpler decision
boundary which still separates the positives and negatives fairly well. How-
ever, if λ is set to too high a value, you will not get a good fit and the decision
boundary will not follow the data so well, thus underfitting the data (Figure
6).
You do not need to submit any solutions for these optional (ungraded)
exercises.
[Figure 4: Training data with decision boundary (lambda = 1). Legend: y = 1, y = 0, Decision boundary. Axes: Microchip Test 1, Microchip Test 2.]
[Figure 5: Decision boundary with no regularization (lambda = 0), overfitting the data. Legend: y = 1, y = 0, Decision boundary. Axes: Microchip Test 1, Microchip Test 2.]
[Figure 6: Decision boundary with too much regularization (lambda = 100), underfitting the data. Legend: y = 1, y = 0, Decision boundary. Axes: Microchip Test 1, Microchip Test 2.]
Submission and Grading
After completing various parts of the assignment, be sure to use the submit
function system to submit your solutions to our servers. The following is a
breakdown of how each part of this exercise is scored.
Programming Exercise 3:
Multi-class Classification and Neural Networks
Machine Learning
Introduction
In this exercise, you will implement one-vs-all logistic regression and neural
networks to recognize hand-written digits. Before starting the programming
exercise, we strongly recommend watching the video lectures and completing
the review questions for the associated topics.
To get started with the exercise, download the starter code and unzip its
contents to the directory where you wish to complete the exercise. If needed,
use the cd command in Octave to change to this directory before starting
this exercise.
Throughout the exercise, you will be using the scripts ex3.m and ex3_nn.m.
These scripts set up the dataset for the problems and make calls to functions
that you will write. You do not need to modify these scripts. You are only
required to modify functions in other files, by following the instructions in
this assignment.
1 Multi-class Classification
For this exercise, you will use logistic regression and neural networks to
recognize handwritten digits (from 0 to 9). Automated handwritten digit
recognition is widely used today - from recognizing zip codes (postal codes)
on mail envelopes to recognizing amounts written on bank checks. This
exercise will show you how the methods you’ve learned can be used for this
classification task.
In the first part of the exercise, you will extend your previous implemen-
tion of logistic regression and apply it to one-vs-all classification.
1.1 Dataset
You are given a data set in ex3data1.mat that contains 5000 training ex-
amples of handwritten digits.[1] The .mat format means that the data
has been saved in a native Octave/MATLAB matrix format, instead of a text
(ASCII) format like a csv-file. These matrices can be read directly into your
program by using the load command. After loading, matrices of the correct
dimensions and values will appear in your program's memory. The matrices
will already be named, so you do not need to assign names to them.

[1] This is a subset of the MNIST handwritten digit dataset (http://yann.lecun.com/exdb/mnist/).
Figure 1: Examples from the dataset
$$X = \begin{bmatrix} \text{---} \ (x^{(1)})^T \ \text{---} \\ \text{---} \ (x^{(2)})^T \ \text{---} \\ \vdots \\ \text{---} \ (x^{(m)})^T \ \text{---} \end{bmatrix} \qquad \text{and} \qquad \theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_n \end{bmatrix}.$$
In the last equality, we used the fact that aT b = bT a if a and b are vectors.
This allows us to compute the products θT x(i) for all our examples i in one
line of code.
Your job is to write the unregularized cost function in the file lrCostFunction.m.
Your implementation should use the strategy we presented above to calcu-
late θT x(i) . You should also use a vectorized approach for the rest of the
cost function. A fully vectorized version of lrCostFunction.m should not
contain any loops.
(Hint: You might want to use the element-wise multiplication operation
(.*) and the sum operation sum when writing this function)
To vectorize this operation over the dataset, we start by writing out all
the partial derivatives explicitly for all θj ,
$$\begin{bmatrix} \frac{\partial J}{\partial \theta_0} \\ \frac{\partial J}{\partial \theta_1} \\ \frac{\partial J}{\partial \theta_2} \\ \vdots \\ \frac{\partial J}{\partial \theta_n} \end{bmatrix} = \frac{1}{m} \begin{bmatrix} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x_0^{(i)} \\ \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x_1^{(i)} \\ \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x_2^{(i)} \\ \vdots \\ \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x_n^{(i)} \end{bmatrix} = \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) x^{(i)} = \frac{1}{m} X^T (h_\theta(x) - y). \tag{1}$$
where
$$h_\theta(x) - y = \begin{bmatrix} h_\theta(x^{(1)}) - y^{(1)} \\ h_\theta(x^{(2)}) - y^{(2)} \\ \vdots \\ h_\theta(x^{(m)}) - y^{(m)} \end{bmatrix}.$$
Note that x(i) is a vector, while (hθ (x(i) )−y (i) ) is a scalar (single number).
To understand the last step of the derivation, let βi = (hθ (x(i) ) − y (i) ) and
observe that:
$$\sum_i \beta_i x^{(i)} = \begin{bmatrix} | & | & & | \\ x^{(1)} & x^{(2)} & \cdots & x^{(m)} \\ | & | & & | \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{bmatrix} = X^T \beta,$$
where the values βi = (hθ (x(i) ) − y (i) ).
Debugging Tip: Vectorizing code can sometimes be tricky. One com-
mon strategy for debugging is to print out the sizes of the matrices you
are working with using the size function. For example, given a data ma-
trix X of size 100 × 20 (100 examples, 20 features) and θ, a vector with
dimensions 20 × 1, you can observe that Xθ is a valid multiplication oper-
ation, while θX is not. Furthermore, if you have a non-vectorized version
of your code, you can compare the output of your vectorized code and
non-vectorized code to make sure that they produce the same outputs.
Note that you should not be regularizing θ0 which is used for the bias
term.
Correspondingly, the partial derivative of regularized logistic regression
cost for θj is defined as
$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} \qquad \text{for } j = 0$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} \right) + \frac{\lambda}{m} \theta_j \qquad \text{for } j \ge 1$$
Now modify your code in lrCostFunction to account for regularization.
Once again, you should not put any loops into your code.
Octave Tip: When implementing the vectorization for regularized lo-
gistic regression, you might often want to only sum and update certain
elements of θ. In Octave, you can index into the matrices to access and
update only certain elements. For example, A(:, 3:5) = B(:, 1:3) will
replace the columns 3 to 5 of A with the columns 1 to 3 from B. One
special keyword you can use in indexing is the end keyword.
This allows us to select columns (or rows) until the end of the matrix.
For example, A(:, 2:end) will only return elements from the 2nd to last
column of A. Thus, you could use this together with the sum and .^ op-
erations to compute the sum of only the elements you are interested in
(e.g., sum(z(2:end).^2)). In the starter code, lrCostFunction.m, we
have also provided hints on yet another possible method of computing the
regularized gradient.
You should now submit your vectorized logistic regression cost function.
Octave Tip: Logical arrays in Octave are arrays which contain binary (0
or 1) elements. In Octave, evaluating the expression a == b for a vector a
(of size m × 1) and scalar b will return a vector of the same size as a with
ones at positions where the elements of a are equal to b and zeroes where
they are different. To see how this works for yourself, try the following
code in Octave:
a = 1:10; % Create a and b
b = 3;
a == b % You should try different values of b here
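In particular, a logical array like this gives you the binary labels needed when training the classifier for a single class c (the value c = 3 below is just an illustration; the variable names are hypothetical):
c = 3;                 % class of interest
y_c = (y == c);        % 1 for examples of class c, 0 for all other classes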
Furthermore, you will be using fmincg for this exercise (instead of fminunc).
fmincg works similarly to fminunc, but is more efficient for dealing with
a large number of parameters.
After you have correctly completed the code for oneVsAll.m, the script
ex3.m will continue to use your oneVsAll function to train a multi-class clas-
sifier.
You should now submit the training function for one-vs-all classification.
You should now submit the prediction function for one-vs-all classifica-
tion.
2 Neural Networks
In the previous part of this exercise, you implemented multi-class logistic re-
gression to recognize handwritten digits. However, logistic regression cannot
form more complex hypotheses as it is only a linear classifier.[2]
In this part of the exercise, you will implement a neural network to rec-
ognize handwritten digits using the same training set as before. The neural
network will be able to represent complex models that form non-linear hy-
potheses. For this week, you will be using parameters from a neural network
that we have already trained. Your goal is to implement the feedforward
propagation algorithm to use our weights for prediction. In next week’s ex-
ercise, you will write the backpropagation algorithm for learning the neural
network parameters.
The provided script, ex3_nn.m, will help you step through this exercise.
[2] You could add more features (such as polynomial features) to logistic regression, but
that can be very expensive to train.
Figure 2: Neural network model.
You should see that the reported training set accuracy is about 97.5%. After that, an interactive sequence will launch dis-
playing images from the training set one at a time, while the console prints
out the predicted label for the displayed image. To stop the image sequence,
press Ctrl-C.
Programming Exercise 4:
Neural Networks Learning
Machine Learning
Introduction
In this exercise, you will implement the backpropagation algorithm for neural
networks and apply it to the task of hand-written digit recognition. Before
starting on the programming exercise, we strongly recommend watching the
video lectures and completing the review questions for the associated topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
[?] indicates files you will need to complete
Throughout the exercise, you will be using the script ex4.m. This script
sets up the dataset for the problems and makes calls to functions that you will
write. You do not need to modify the script. You are only required to modify
functions in other files, by following the instructions in this assignment.
1 Neural Networks
In the previous exercise, you implemented feedforward propagation for neu-
ral networks and used it to predict handwritten digits with the weights we
provided. In this exercise, you will implement the backpropagation algorithm
to learn the parameters for the neural network.
The provided script, ex4.m, will help you step through this exercise.
Figure 1: Examples from the dataset
You have been provided with a set of network parameters (Θ(1) , Θ(2) )
already trained by us. These are stored in ex4weights.mat and will be
loaded by ex4.m into Theta1 and Theta2. The parameters have dimensions
that are sized for a neural network with 25 units in the second layer and 10
output units (corresponding to the 10 digit classes).
The cost function for the neural network (without regularization) is
$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ -y_k^{(i)} \log((h_\theta(x^{(i)}))_k) - (1 - y_k^{(i)}) \log(1 - (h_\theta(x^{(i)}))_k) \right],$$
Once you are done, ex4.m will call your nnCostFunction using the loaded
set of parameters for Theta1 and Theta2. You should see that the cost is
about 0.287629.
You should now submit the neural network cost function (feedforward).
The cost function for the neural network with regularization is given by
$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ -y_k^{(i)} \log((h_\theta(x^{(i)}))_k) - (1 - y_k^{(i)}) \log(1 - (h_\theta(x^{(i)}))_k) \right] + \frac{\lambda}{2m} \left[ \sum_{j=1}^{25} \sum_{k=1}^{400} (\Theta_{j,k}^{(1)})^2 + \sum_{j=1}^{10} \sum_{k=1}^{25} (\Theta_{j,k}^{(2)})^2 \right].$$
You can assume that the neural network will only have 3 layers – an input
layer, a hidden layer and an output layer. However, your code should work
for any number of input units, hidden units and output units. While we
have explicitly listed the indices above for Θ(1) and Θ(2) for clarity, do note
that your code should in general work with Θ(1) and Θ(2) of any size.
Note that you should not be regularizing the terms that correspond to
the bias. For the matrices Theta1 and Theta2, this corresponds to the first
column of each matrix. You should now add regularization to your cost
function. Notice that you can first compute the unregularized cost function
J using your existing nnCostFunction.m and then later add the cost for the
regularization terms.
Once you are done, ex4.m will call your nnCostFunction using the loaded
set of parameters for Theta1 and Theta2, and λ = 1. You should see that
the cost is about 0.383770.
You should now submit the regularized neural network cost function (feed-
forward).
2 Backpropagation
In this part of the exercise, you will implement the backpropagation algo-
rithm to compute the gradient for the neural network cost function. You
will need to complete the nnCostFunction.m so that it returns an appropri-
ate value for grad. Once you have computed the gradient, you will be able
to train the neural network by minimizing the cost function J(Θ) using an
advanced optimizer such as fmincg.
You will first implement the backpropagation algorithm to compute the
gradients for the parameters for the (unregularized) neural network. After
you have verified that your gradient computation for the unregularized case
is correct, you will implement the gradient for the regularized neural network.
% Randomly initialize the weights to small values
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
You do not need to submit any code for this part of the exercise.
2.3 Backpropagation
In detail, here is the backpropagation algorithm (also depicted in Figure
3). You should implement steps 1 to 4 in a loop that processes one example
at a time. Concretely, you should implement a for-loop for t = 1:m and
place steps 1-4 below inside the for-loop, with the tth iteration performing
the calculation on the tth training example (x(t) , y (t) ). Step 5 will divide the
accumulated gradients by m to obtain the gradients for the neural network
cost function.
1. Set the input layer’s values (a(1) ) to the t-th training example x(t) .
Perform a feedforward pass (Figure 2), computing the activations (z (2) , a(2) , z (3) , a(3) )
for layers 2 and 3. Note that you need to add a +1 term to ensure that
the vectors of activations for layers a(1) and a(2) also include the bias
unit. In Octave, if a_1 is a column vector, adding one corresponds to
a_1 = [1 ; a_1].
4. Accumulate the gradient from this example using the following for-
mula. Note that you should skip or remove $\delta_0^{(2)}$. In Octave, removing
$\delta_0^{(2)}$ corresponds to delta_2 = delta_2(2:end).
5. Obtain the (unregularized) gradient for the neural network cost func-
tion by dividing the accumulated gradients by m:
$$\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)}$$
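Putting the five steps together, here is a hedged sketch of what the loop might look like for the 3-layer network in this exercise. Steps 2 and 3 (the output-layer and hidden-layer error terms) appear as comments; y_matrix (the labels recoded as one-hot row vectors) and sigmoidGradient are assumptions about names used elsewhere in the exercise.
Delta1 = zeros(size(Theta1));            % gradient accumulators
Delta2 = zeros(size(Theta2));
for t = 1:m
    % Step 1: feedforward pass for the t-th example
    a1 = [1; X(t, :)'];                  % input activations with bias (401 x 1)
    z2 = Theta1 * a1;
    a2 = [1; sigmoid(z2)];               % hidden activations with bias (26 x 1)
    z3 = Theta2 * a2;
    a3 = sigmoid(z3);                    % output activations (K x 1)
    % Step 2: error term for the output layer
    delta3 = a3 - y_matrix(t, :)';
    % Step 3: error term for the hidden layer (drop the bias row)
    d2 = Theta2' * delta3;
    delta2 = d2(2:end) .* sigmoidGradient(z2);
    % Step 4: accumulate the gradients
    Delta1 = Delta1 + delta2 * a1';
    Delta2 = Delta2 + delta3 * a2';
end
% Step 5: unregularized gradients
Theta1_grad = Delta1 / m;
Theta2_grad = Delta2 / m;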
Octave Tip: You should implement the backpropagation algorithm only
after you have successfully completed the feedforward and cost functions.
While implementing the backpropagation algorithm, it is often useful to
use the size function to print out the sizes of the variables you are work-
ing with if you run into dimension mismatch errors (“nonconformant
arguments” errors in Octave).
We have implemented the function to compute the numerical gradient for
you in computeNumericalGradient.m. While you are not required to modify
the file, we highly encourage you to take a look at the code to understand
how it works.
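For reference, the check is based on the central difference approximation (a sketch of the idea; consult the provided file for the exact implementation):
$$\frac{\partial}{\partial \theta_i} J(\theta) \approx \frac{J(\theta^{(i+)}) - J(\theta^{(i-)})}{2\varepsilon}, \qquad \theta^{(i\pm)} = \theta \pm \varepsilon\, e_i,$$
where $e_i$ is the $i$-th unit vector and $\varepsilon$ is a small constant (e.g., on the order of $10^{-4}$).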
The next step of ex4.m will run the provided function checkNNGradients.m,
which will create a small neural network and dataset that will be used for
checking your gradients. If your backpropagation implementation is correct,
you should see a relative difference that is less than 1e-9.
Once your cost function passes the gradient check for the (unregularized)
neural network cost function, you should submit the neural network gradient
function (backpropagation).
To account for regularization, after computing the gradients with backpropagation
you add a regularization term to the entries with j ≥ 1:
$$\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)} = \frac{1}{m} \Delta_{ij}^{(l)} + \frac{\lambda}{m} \Theta_{ij}^{(l)} \qquad \text{for } j \ge 1$$
Note that you should not be regularizing the first column of Θ(l), which
is used for the bias term. Furthermore, in the parameters Θij(l), i is indexed
starting from 1, and j is indexed starting from 0. Thus,
$$\Theta^{(l)} = \begin{bmatrix} \Theta_{1,0}^{(l)} & \Theta_{1,1}^{(l)} & \cdots \\ \Theta_{2,0}^{(l)} & \Theta_{2,1}^{(l)} & \cdots \\ \vdots & & \ddots \end{bmatrix}.$$
Given a particular hidden unit, one way to visualize what it computes is to find an
input x that will cause it to activate (that is, to have an activation value
a_i^(l) close to 1). For the neural network you trained, notice that the ith row
of Θ(1) is a 401-dimensional vector that represents the parameter for the ith
hidden unit. If we discard the bias term, we get a 400 dimensional vector
that represents the weights from each input pixel to the hidden unit.
Thus, one way to visualize the “representation” captured by the hidden
unit is to reshape this 400 dimensional vector into a 20 × 20 image and
display it. The next step of ex4.m does this by using the displayData
function and it will show you an image (similar to Figure 4) with 25 units,
each corresponding to one hidden unit in the network.
In your trained network, you should find that the hidden units corre-
spond roughly to detectors that look for strokes and other patterns in the
input.
It is possible for the neural network to fit the training set very well but
not perform as well on new examples that it has not seen
before. You can set the regularization λ to a smaller value and the MaxIter
parameter to a higher number of iterations to see this for yourself.
You will also be able to see for yourself the changes in the visualizations
of the hidden units when you change the learning parameters λ and MaxIter.
You do not need to submit any solutions for this optional (ungraded)
exercise.
Submission and Grading
After completing various parts of the assignment, be sure to use the submit
function system to submit your solutions to our servers. The following is a
breakdown of how each part of this exercise is scored.
Programming Exercise 5:
Regularized Linear Regression and Bias v.s.
Variance
Machine Learning
Introduction
In this exercise, you will implement regularized linear regression and use it to
study models with different bias-variance properties. Before starting on the
programming exercise, we strongly recommend watching the video lectures
and completing the review questions for the associated topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
Throughout the exercise, you will be using the script ex5.m. This script
sets up the dataset for the problems and makes calls to functions that you will
write. You are only required to modify functions in other files, by following
the instructions in this assignment.
A cross validation set for determining the regularization parameter:
Xval, yval
A test set for evaluating performance. These are “unseen” examples
which your model did not see during training: Xtest, ytest
The next step of ex5.m will plot the training data (Figure 1). In the
following parts, you will implement linear regression and use that to fit a
straight line to the data and plot learning curves. Following that, you will
implement polynomial regression to find a better fit to the data.
[Figure 1: Data. x-axis: Change in water level (x); y-axis: Water flowing out of the dam (y).]
You should now complete the code in the file linearRegCostFunction.m.
Your task is to write a function to calculate the regularized linear regression
cost function. If possible, try to vectorize your code and avoid writing loops.
When you are finished, the next part of ex5.m will run your cost function
using theta initialized at [1; 1]. You should expect to see an output of
303.993.
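A vectorized sketch of linearRegCostFunction.m (the signature linearRegCostFunction(X, y, theta, lambda) is an assumption; note that theta(1) is excluded from the regularization term):
h = X * theta;
theta_reg = [0; theta(2:end)];                 % zero out theta(1) so it is not regularized
J = sum((h - y).^2) / (2 * m) + (lambda / (2 * m)) * (theta_reg' * theta_reg);
grad = (X' * (h - y)) / m + (lambda / m) * theta_reg;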
You should now submit your regularized linear regression cost function.
You should now submit your regularized linear regression gradient func-
tion.
The resulting best fit line is not a good fit to the data because the data has a non-linear pattern. While
visualizing the best fit as shown is one possible way to debug your learning
algorithm, it is not always easy to visualize the data and model. In the next
section, you will implement a function to generate learning curves that can
help you debug your learning algorithm even if it is not easy to visualize the
data.
[Figure 2: Linear fit. x-axis: Change in water level (x); y-axis: Water flowing out of the dam (y).]
2 Bias-variance
An important concept in machine learning is the bias-variance tradeoff. Mod-
els with high bias are not complex enough for the data and tend to underfit,
while models with high variance overfit to the training data.
In this part of the exercise, you will plot training and test errors on a
learning curve to diagnose bias-variance problems.
A learning curve plots training and cross validation error as a function of training set size. Your
job is to fill in learningCurve.m so that it returns a vector of errors for the
training set and cross validation set.
To plot the learning curve, we need a training and cross validation set
error for different training set sizes. To obtain different training set sizes,
you should use different subsets of the original training set X. Specifically, for
a training set size of i, you should use the first i examples (i.e., X(1:i,:)
and y(1:i)).
You can use the trainLinearReg function to find the θ parameters. Note
that the lambda is passed as a parameter to the learningCurve function.
After learning the θ parameters, you should compute the error on the train-
ing and cross validation sets. Recall that the training error for a dataset is
defined as
" m #
1 X
Jtrain (θ) = (hθ (x(i) ) − y (i) )2 .
2m i=1
In particular, note that the training error does not include the regular-
ization term. One way to compute the training error is to use your existing
cost function and set λ to 0 only when using it to compute the training error
and cross validation error. When you are computing the training set error,
make sure you compute it on the training subset (i.e., X(1:n,:) and y(1:n))
(instead of the entire training set). However, for the cross validation error,
you should compute it over the entire cross validation set. You should store
the computed errors in the vectors error_train and error_val.
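A sketch of the main loop in learningCurve.m (assuming trainLinearReg(X, y, lambda) returns the learned θ and that linearRegCostFunction follows the signature sketched earlier, with 0 passed for λ to obtain the unregularized error):
for i = 1:m
    theta = trainLinearReg(X(1:i, :), y(1:i), lambda);            % fit on the first i examples
    error_train(i) = linearRegCostFunction(X(1:i, :), y(1:i), theta, 0);
    error_val(i)   = linearRegCostFunction(Xval, yval, theta, 0); % error on the full CV set
end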
When you are finished, ex5.m will print the learning curves and produce
a plot similar to Figure 3.
In Figure 3, you can observe that both the train error and cross validation
error are high when the number of training examples is increased. This
reflects a high bias problem in the model – the linear regression model is
too simple and is unable to fit our dataset well. In the next section, you will
implement polynomial regression to fit a better model for this dataset.
3 Polynomial regression
The problem with our linear model was that it was too simple for the data
and resulted in underfitting (high bias). In this part of the exercise, you will
[Figure 3: Learning curve for linear regression. x-axis: Number of training examples; y-axis: Error. Legend: Train, Cross Validation.]
3.1 Learning Polynomial Regression
After you have completed polyFeatures.m, the ex5.m script will proceed to
train polynomial regression using your linear regression cost function.
Keep in mind that even though we have polynomial terms in our feature
vector, we are still solving a linear regression optimization problem. The
polynomial terms have simply turned into features that we can use for linear
regression. We are using the same cost function and gradient that you wrote
for the earlier part of this exercise.
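For reference, a sketch of polyFeatures.m, which maps each example onto its first p powers (here X is assumed to be a column vector of the original feature, and p = 8 in this exercise):
X_poly = zeros(length(X), p);
for j = 1:p
    X_poly(:, j) = X(:).^j;       % the j-th column holds x^j
end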
For this part of the exercise, you will be using a polynomial of degree 8.
It turns out that if we run the training directly on the projected data, this will
not work well as the features would be badly scaled (e.g., an example with
x = 40 will now have a feature x_8 = 40^8 = 6.5 × 10^12). Therefore, you will
need to use feature normalization.
Before learning the parameters θ for the polynomial regression, ex5.m will
first call featureNormalize and normalize the features of the training set,
storing the mu, sigma parameters separately. We have already implemented
this function for you and it is the same function from the first exercise.
After learning the parameters θ, you should see two plots (Figure 4,5)
generated for polynomial regression with λ = 0.
[Figure 4: Polynomial Regression Fit (lambda = 0.000000). x-axis: Change in water level (x); y-axis: Water flowing out of the dam (y).]
From Figure 4, you should see that the polynomial fit is able to follow
the datapoints very well - thus, obtaining a low training error. However, the
[Figure 5: Polynomial Regression Learning Curve (lambda = 0.000000). x-axis: Number of training examples; y-axis: Error. Legend: Train, Cross Validation.]
polynomial fit is very complex and even drops off at the extremes. This is
an indicator that the polynomial regression model is overfitting the training
data and will not generalize well.
To better understand the problems with the unregularized (λ = 0) model,
you can see that the learning curve (Figure 5) shows the same effect: the
training error is low, but the cross validation error is high. There
is a gap between the training and cross validation errors, indicating a high
variance problem.
One way to combat the overfitting (high-variance) problem is to add
regularization to the model. In the next section, you will get to try different
λ parameters to see how regularization can lead to a better model.
For λ = 1, you should see a polynomial fit that follows the data well (Figure 6)
and a learning curve (Figure 7) where both the cross
validation and training error converge to a relatively low value. This shows
the λ = 1 regularized polynomial regression model does not have the high-
bias or high-variance problems. In effect, it achieves a good trade-off between
bias and variance.
For λ = 100, you should see a polynomial fit (Figure 8) that does not
follow the data well. In this case, there is too much regularization and the
model is unable to fit the training data.
You do not need to submit any solutions for this optional (ungraded)
exercise.
[Figure 6: Polynomial Regression Fit (lambda = 1.000000). x-axis: Change in water level (x); y-axis: Water flowing out of the dam (y).]
[Figure 7: Polynomial Regression Learning Curve (lambda = 1.000000). x-axis: Number of training examples; y-axis: Error. Legend: Train, Cross Validation.]
[Figure 8: Polynomial fit for lambda = 100. x-axis: Change in water level (x); y-axis: Water flowing out of the dam (y).]
In this section, you will use a cross validation set to evaluate
how good each λ value is. After selecting the best λ value using the cross
validation set, we can then evaluate the model on the test set to estimate
how well the model will perform on actual unseen data.
Your task is to complete the code in validationCurve.m. Specifically,
you should use the trainLinearReg function to train the model using
different values of λ and compute the training error and cross validation error.
You should try λ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.
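A sketch of the loop in validationCurve.m (assuming the same trainLinearReg and linearRegCostFunction signatures as above; note that both errors are computed with the regularization term switched off by passing 0):
lambda_vec = [0 0.001 0.003 0.01 0.03 0.1 0.3 1 3 10]';
for i = 1:length(lambda_vec)
    lambda = lambda_vec(i);
    theta = trainLinearReg(X, y, lambda);                    % train with this lambda
    error_train(i) = linearRegCostFunction(X, y, theta, 0);  % unregularized training error
    error_val(i)   = linearRegCostFunction(Xval, yval, theta, 0);
end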
[Figure 9: Selecting lambda using a cross validation set. x-axis: lambda; y-axis: Error. Legend: Train, Cross Validation.]
After you have completed the code, the next part of ex5.m will run your
function and plot a cross validation curve of error v.s. λ that allows you to select
which λ parameter to use. You should see a plot similar to Figure 9. In this
figure, we can see that the best value of λ is around 3. Due to randomness
in the training and validation splits of the dataset, the cross validation error
can sometimes be lower than the training error.
You should obtain a test error of 3.8599 for λ = 3.
You do not need to submit any solutions for this optional (ungraded)
exercise.
[Figure: Polynomial Regression Learning Curve (lambda = 0.010000). x-axis: Number of training examples; y-axis: Error. Legend: Train, Cross Validation.]
Programming Exercise 6:
Support Vector Machines
Machine Learning
Introduction
In this exercise, you will be using support vector machines (SVMs) to build
a spam classifier. Before starting on the programming exercise, we strongly
recommend watching the video lectures and completing the review questions
for the associated topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
ex6_spam.m - Octave script for the second half of the exercise
spamTrain.mat - Spam training set
spamTest.mat - Spam test set
emailSample1.txt - Sample email 1
emailSample2.txt - Sample email 2
spamSample1.txt - Sample spam 1
spamSample2.txt - Sample spam 2
vocab.txt - Vocabulary list
getVocabList.m - Load vocabulary list
porterStemmer.m - Stemming function
readFile.m - Reads a file into a character string
submit.m - Submission script that sends your solutions to our servers
submitWeb.m - Alternative submission script
[?] processEmail.m - Email preprocessing
[?] emailFeatures.m - Feature extraction from emails
Throughout the exercise, you will be using the script ex6.m. This script
sets up the dataset for the problems and makes calls to functions that you will
write. You are only required to modify functions in other files, by following
the instructions in this assignment.
1 Support Vector Machines
In the first half of this exercise, you will be using support vector machines
(SVMs) with various example 2D datasets. Experimenting with these datasets
will help you gain an intuition of how SVMs work and how to use a Gaussian
kernel with SVMs. In the next half of the exercise, you will be using support
vector machines to build a spam classifier.
The provided script, ex6.m, will help you step through the first half of
the exercise.
[Figure 1: Example Dataset 1.]
In this part of the exercise, you will try using different values of the C
parameter with SVMs. Informally, the C parameter is a positive value that
controls the penalty for misclassified training examples. A large C parameter
tells the SVM to try to classify all the examples correctly. C plays a role
similar to 1/λ, where λ is the regularization parameter that we were using
previously for logistic regression.
[Figures 2 and 3: SVM decision boundaries on Example Dataset 1 for different values of C.]
The next part in ex6.m will run the SVM training (with C = 1) using
SVM software that we have included with the starter code, svmTrain.m.
When C = 1, you should find that the SVM puts the decision boundary in
the gap between the two datasets and misclassifies the data point on the far
left (Figure 2).
The Gaussian kernel function is defined as:
$$K_{\text{gaussian}}(x^{(i)}, x^{(j)}) = \exp\left(-\frac{\|x^{(i)} - x^{(j)}\|^2}{2\sigma^2}\right) = \exp\left(-\frac{\sum_{k=1}^{n} (x_k^{(i)} - x_k^{(j)})^2}{2\sigma^2}\right).$$
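A one-line sketch of gaussianKernel.m for two column-vector examples x1 and x2 (treat the exact signature as an assumption):
sim = exp(-sum((x1 - x2).^2) / (2 * sigma^2));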
You should now submit your function that computes the Gaussian kernel.
[Figure 4: Example Dataset 2.]
The next part in ex6.m will load and plot dataset 2 (Figure 4). From
the figure, you can observe that there is no linear decision boundary that
separates the positive and negative examples for this dataset. However, by
using the Gaussian kernel with the SVM, you will be able to learn a non-linear
decision boundary that can perform reasonably well for the dataset.
If you have correctly implemented the Gaussian kernel function, ex6.m
will proceed to train the SVM with the Gaussian kernel on this dataset.
[Figure 5: SVM (Gaussian kernel) decision boundary on Example Dataset 2.]
Figure 5 shows the decision boundary found by the SVM with a Gaussian
kernel. The decision boundary is able to separate most of the positive and
negative examples correctly and follows the contours of the dataset well.
[Figures 6 and 7: Example Dataset 3 and the SVM (Gaussian kernel) decision boundary found with the best parameters.]
In the last part of this exercise, ex6.m will train the SVM using the best C and σ parameters
you found. For our best parameters, the SVM returned a decision boundary
shown in Figure 7.
Implementation Tip: When implementing cross validation to select the
best C and σ parameter to use, you need to evaluate the error on the cross
validation set. Recall that for classification, the error is defined as the
fraction of the cross validation examples that were classified incorrectly.
In Octave, you can compute this error using mean(double(predictions
~= yval)), where predictions is a vector containing all the predictions
from the SVM, and yval are the true labels from the cross validation set.
You can use the svmPredict function to generate the predictions for the
cross validation set.
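A sketch of how the search over C and σ might look (the candidate values and the way svmTrain takes a kernel-function handle are assumptions modeled on the provided starter code):
values = [0.01 0.03 0.1 0.3 1 3 10 30];
best_error = Inf;
for C = values
    for sigma = values
        model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
        predictions = svmPredict(model, Xval);
        err = mean(double(predictions ~= yval));   % cross validation error
        if err < best_error
            best_error = err;
            best_C = C;
            best_sigma = sigma;
        end
    end
end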
2 Spam Classification
Many email services today provide spam filters that are able to classify emails
into spam and non-spam email with high accuracy. In this part of the exer-
cise, you will use SVMs to build your own spam filter.
You will be training a classifier to classify whether a given email, x, is
spam (y = 1) or non-spam (y = 0). In particular, you need to convert each
email into a feature vector x ∈ Rn . The following parts of the exercise will
walk you through how such a feature vector can be constructed from an
email.
Throughout the rest of this exercise, you will be using the script
ex6_spam.m. The dataset included for this exercise is based on a subset of
the SpamAssassin Public Corpus. For the purpose of this exercise, you will
only be using the body of the email (excluding the email headers).
This has the effect of letting the spam classifier make a classification decision
based on whether any URL was present, rather than whether a specific URL
was present. This typically improves the performance of a spam classifier,
since spammers often randomize the URLs, and thus the odds of seeing any
particular URL again in a new piece of spam is very small.
In processEmail.m, we have implemented the following email prepro-
cessing and normalization steps:
Stripping HTML: All HTML tags are removed from the emails.
Many emails often come with HTML formatting; we remove all the
HTML tags, so that only the content remains.
Normalizing URLs: All URLs are replaced with the text “httpaddr”.
Normalizing Dollars: All dollar signs ($) are replaced with the text
“dollar”.
Word Stemming: Words are reduced to their stemmed form. For ex-
ample, “discount”, “discounts”, “discounted” and “discounting” are all
replaced with “discount”. Sometimes, the Stemmer actually strips off
additional characters from the end, so “include”, “includes”, “included”,
and “including” are all replaced with “includ”.
anyon know how much it cost to host a web portal well it depend on how
mani visitor your expect thi can be anywher from less than number buck
a month to a coupl of dollarnumb you should checkout httpaddr or perhap
amazon ecnumb if your run someth big to unsubscrib yourself from thi
mail list send an email to emailaddr

[Figure 9: Preprocessed sample email]
[Figure 10: Vocabulary List]
[Figure 11: Word Indices for Sample Email]
Your task now is to complete the code in processEmail.m to perform
this mapping. In the code, you are given a string str which is a single word
from the processed email. You should look up the word in the vocabulary
list vocabList and find if the word exists in the vocabulary list. If the word
exists, you should add the index of the word into the word indices variable.
If the word does not exist, and is therefore not in the vocabulary, you can
skip the word.
Once you have implemented processEmail.m, the script ex6_spam.m will
run your code on the email sample and you should see an output similar to
Figures 9 & 11.
Octave Tip: In Octave, you can compare two strings with the strcmp
function. For example, strcmp(str1, str2) will return 1 only when
both strings are equal. In the provided starter code, vocabList is a “cell-
array” containing the words in the vocabulary. In Octave, a cell-array is
just like a normal array (i.e., a vector), except that its elements can also
be strings (which they can’t in a normal Octave matrix/vector), and you
index into them using curly braces instead of square brackets. Specifically,
to get the word at index i, you can use vocabList{i}. You can also use
length(vocabList) to get the number of words in the vocabulary.
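A sketch of the lookup described above (assuming word_indices is the list being built up and str is the current word):
for i = 1:length(vocabList)
    if strcmp(str, vocabList{i})
        word_indices = [word_indices; i];   % record the vocabulary index of this word
        break;                              % stop after the first match
    end
end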
$$x = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \in \mathbb{R}^n.$$
You should now complete the code in emailFeatures.m to generate a
feature vector for an email, given the word indices.
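A two-line sketch of what emailFeatures.m needs to do (n is the vocabulary size, 1899 in this exercise):
x = zeros(n, 1);          % start from an all-zeros feature vector
x(word_indices) = 1;      % set to 1 the entries for words that appear in the email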
Once you have implemented emailFeatures.m, the next part of ex6_spam.m
will run your code on the email sample. You should see that the feature vec-
tor had length 1899 and 45 non-zero entries.
[Figure 12: Top predictors of spam: our, click, remov, guarante, visit, basenumb, dollar, will, price, pleas, nbsp, most, lo, ga, dollarnumb.]
To better understand how the spam classifier works, we can inspect the
parameters to see which words the classifier thinks are the most predictive
of spam. The next step of ex6_spam.m finds the parameters with the largest
positive values in the classifier and displays the corresponding words (Figure
12). Thus, if an email contains words such as “guarantee”, “remove”, “dol-
lar”, and “price” (the top predictors shown in Figure 12), it is likely to be
classified as spam.
You do not need to submit any solutions for this optional (ungraded)
exercise.
You can also try building your own vocabulary list (by selecting the high-
frequency words that occur in the dataset) and adding any additional features
that you think might be useful.
Finally, we also suggest trying to use highly optimized SVM toolboxes
such as LIBSVM.
You do not need to submit any solutions for this optional (ungraded)
exercise.
Programming Exercise 7:
K-means Clustering and Principal Component
Analysis
Machine Learning
Introduction
In this exercise, you will implement the K-means clustering algorithm and
apply it to compress an image. In the second part, you will use principal
component analysis to find a low-dimensional representation of face images.
Before starting on the programming exercise, we strongly recommend watch-
ing the video lectures and completing the review questions for the associated
topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
[⋆] projectData.m - Projects a data set into a lower dimensional space
[⋆] recoverData.m - Recovers the original data from the projection
[⋆] findClosestCentroids.m - Finds the closest centroids (used in K-means)
[⋆] computeCentroids.m - Computes centroid means (used in K-means)
[⋆] kMeansInitCentroids.m - Initialization for K-means centroids
Throughout the first part of the exercise, you will be using the script
ex7.m; for the second part, you will use ex7 pca.m. These scripts set up the
dataset for the problems and make calls to functions that you will write.
You are only required to modify functions in other files, by following the
instructions in this assignment.
1 K-means Clustering
In this exercise, you will implement the K-means algorithm and use it
for image compression. You will first start on an example 2D dataset that
will help you gain an intuition of how the K-means algorithm works. After
that, you will use the K-means algorithm for image compression by reducing
the number of colors that occur in an image to only those that are most
common in that image. You will be using ex7.m for this part of the exercise.
1.1 Implementing K-means
The K-means algorithm is a method to automatically cluster similar data
examples together. Concretely, you are given a training set {x(1) , ..., x(m) }
(where x(i) ∈ Rn ), and want to group the data into a few cohesive “clusters”.
Intuitively, K-means is an iterative procedure that starts by guessing
the initial centroids, and then refines this guess by repeatedly assigning
examples to their closest centroids and recomputing the centroids based
on those assignments.
The K-means algorithm is as follows:
% Initialize centroids
centroids = kMeansInitCentroids(X, K);
for iter = 1:iterations
    % Cluster assignment step: Assign each data point to the
    % closest centroid. idx(i) corresponds to c^(i), the index
    % of the centroid assigned to example i
    idx = findClosestCentroids(X, centroids);

    % Move centroid step: Compute the mean of the points
    % assigned to each centroid
    centroids = computeCentroids(X, idx, K);
end
The inner-loop of the algorithm repeatedly carries out two steps: (i) As-
signing each training example x(i) to its closest centroid, and (ii) Recomput-
ing the mean of each centroid using the points assigned to it. The K-means
algorithm will always converge to some final set of means for the centroids.
Note that the converged solution may not always be ideal and depends on the
initial setting of the centroids. Therefore, in practice the K-means algorithm
is usually run a few times with different random initializations. One way to
choose between these different solutions from different random initializations
is to choose the one with the lowest cost function value (distortion).
You will implement the two phases of the K-means algorithm separately
in the next sections.
In the cluster assignment phase, the algorithm assigns every training example
x(i) to its closest centroid, c(i) := arg min_j ||x(i) − µj||², where c(i) is the
index of the centroid that is closest to x(i), and µj is the position (value) of
the j-th centroid. Note that c(i) corresponds to idx(i) in the starter code.
Your task is to complete the code in findClosestCentroids.m. This
function takes the data matrix X and the locations of all centroids inside
centroids, and should output a one-dimensional array idx that holds the
index (a value in {1, ..., K}, where K is the total number of centroids) of the
centroid closest to every training example.
You can implement this using a loop over every training example and
every centroid.
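For reference, a simple loop-based sketch (using the X, centroids, and idx
names from the description above) might look like this:

K = size(centroids, 1);
idx = zeros(size(X, 1), 1);
for i = 1:size(X, 1)
    best_dist = Inf;
    for j = 1:K
        % Squared Euclidean distance between example i and centroid j
        dist = sum((X(i, :) - centroids(j, :)) .^ 2);
        if dist < best_dist
            best_dist = dist;
            idx(i) = j;
        end
    end
end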
Figure: K-means on the example 2D dataset, shown at iteration number 10.
The centroids can be initialized by picking random examples from the training
set, as in kMeansInitCentroids.m:

% Initialize the centroids to be random examples
% Randomly reorder the indices of the examples
randidx = randperm(size(X, 1));
% Take the first K examples as centroids
centroids = X(randidx(1:K), :);
The code above first randomly permutes the indices of the examples (us-
ing randperm). Then, it selects the first K examples based on the random
permutation of the indices. This allows the examples to be selected at ran-
dom without the risk of selecting the same example twice.
You do not need to make any submissions for this part of the exercise.
the 16 selected colors, and for each pixel in the image you now need to only
store the index of the color at that location (where only 4 bits are necessary
to represent 16 possibilities).
In this exercise, you will use the K-means algorithm to select the 16 colors
that will be used to represent the compressed image. Concretely, you will
treat every pixel in the original image as a data example and use the K-means
algorithm to find the 16 colors that best group (cluster) the pixels in the 3-
dimensional RGB space. Once you have computed the cluster centroids on
the image, you will then use the 16 colors to replace the pixels in the original
image.
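As a rough sketch of this step (the image file name and the iteration count
below are illustrative; the two helper functions are the ones you implement
in this exercise):

% Load an RGB image and scale its values to [0, 1]
A = double(imread('bird_small.png')) / 255;    % illustrative file name
img_size = size(A);
% Reshape into an (h*w) x 3 matrix: one row per pixel (R, G, B)
X = reshape(A, img_size(1) * img_size(2), 3);
K = 16;                                        % number of colors to keep
centroids = kMeansInitCentroids(X, K);
for iter = 1:10                                % illustrative number of iterations
    idx = findClosestCentroids(X, centroids);
    centroids = computeCentroids(X, idx, K);
end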
Finally, you can view the effects of the compression by reconstructing the
image based only on the centroid assignments. Specifically, you can replace
each pixel location with the mean of the centroid assigned to it. Figure 3
shows the reconstruction we obtained. Even though the resulting image re-
tains most of the characteristics of the original, we also see some compression
artifacts.
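Continuing the sketch above (reusing X, centroids, and img_size), the
reconstruction amounts to:

% Assign every pixel to its closest final centroid, replace it by that
% centroid's mean color, and reshape back to the original image dimensions
idx = findClosestCentroids(X, centroids);
X_recovered = centroids(idx, :);
X_recovered = reshape(X_recovered, img_size(1), img_size(2), 3);
imagesc(X_recovered);   % display the reconstructed image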
You do not need to make any submissions for this part of the exercise.
2 Principal Component Analysis
In this exercise, you will use principal component analysis (PCA) to perform
dimensionality reduction. You will first experiment with an example 2D
dataset to get intuition on how PCA works, and then use it on a bigger
dataset of 5000 face images.
The provided script, ex7 pca.m, will help you step through the first half
of the exercise.
PCA consists of two computational steps: first, you compute the covariance
matrix of the data; then, you use Octave's SVD function to compute the
eigenvectors U1, U2, ..., Un. These will correspond to the principal
components of variation in the data.
Before using PCA, it is important to first normalize the data by subtract-
ing the mean value of each feature from the dataset, and scaling each dimen-
sion so that they are in the same range. In the provided script ex7 pca.m,
this normalization has been performed for you using the featureNormalize
function.
After normalizing the data, you can run PCA to compute the principal
components. Your task is to complete the code in pca.m to compute the prin-
cipal components of the dataset. First, you should compute the covariance
matrix of the data, which is given by:
$$\Sigma = \frac{1}{m} X^T X$$
where X is the data matrix with examples in rows, and m is the number of
examples. Note that Σ is an n × n matrix and not the summation operator.
After computing the covariance matrix, you can run SVD on it to compute
the principal components. In Octave, you can run SVD with the following
command: [U, S, V] = svd(Sigma), where U will contain the principal
components and S will contain a diagonal matrix.
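Under those definitions, a minimal sketch of pca.m (assuming X has already
been normalized) is:

function [U, S] = pca(X)
    m = size(X, 1);
    Sigma = (1 / m) * (X' * X);   % covariance matrix of the normalized data
    [U, S, V] = svd(Sigma);       % columns of U are the principal components
end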
Once you have completed pca.m, the ex7 pca.m script will run PCA on
the example dataset and plot the corresponding principal components found
(Figure 5). The script will also output the top principal component (eigen-
vector) found, and you should expect to see an output of about [-0.707
-0.707]. (It is possible that Octave may instead output the negative of this,
since U1 and −U1 are equally valid choices for the first principal component.)
Once you have completed the code in projectData.m and recoverData.m,
ex7 pca.m will recover an approximation of the first example and you should
see a value of about [-1.047 -1.047].
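The two mappings involved can be sketched as follows (assuming U from pca,
the normalized data X_norm, and a chosen number of components K; the
variable names here are illustrative):

U_reduce = U(:, 1:K);    % top K principal components
Z = X_norm * U_reduce;   % projectData: the low-dimensional representation
X_rec = Z * U_reduce';   % recoverData: approximate reconstruction in the original space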
The next step in ex7 pca.m will load and visualize the first 100 images from
the face image dataset (Figure 7).
Figure 8: Principal components on the face dataset
Figure 9: Original images of faces and ones reconstructed from only the top
100 principal components.
The next part in ex7 pca.m will project the face dataset onto only the
first 100 principal components. Concretely, each face image is now described
by a vector z (i) ∈ R100 .
To understand what is lost in the dimension reduction, you can recover
the data using only the projected dataset. In ex7 pca.m, an approximate
recovery of the data is performed and the original and projected face images
are displayed side by side (Figure 9). From the reconstruction, you can ob-
serve that the general structure and appearance of the face are kept while
the fine details are lost. This is a remarkable reduction (more than 10×) in
the dataset size, which can help speed up your learning algorithm significantly.
For example, if you were training a neural network to perform person recog-
nition (given a face image, predict the identity of the person), you could use
the dimension-reduced input of only 100 dimensions instead of the original
pixels.
In the earlier K-means image compression exercise, you used the K-means
algorithm in the 3-dimensional RGB space. In the last part of the ex7 pca.m
script, we have provided code to visualize the final pixel assignments in this
3D space using the scatter3 function. Each data point is colored according
to the cluster it has been assigned to. You can drag your mouse on the figure
to rotate and inspect this data in 3 dimensions.
It turns out that visualizing datasets in 3 dimensions or greater can be
cumbersome. Therefore, it is often desirable to only display the data in 2D
even at the cost of losing some information. In practice, PCA is often used to
reduce the dimensionality of data for visualization purposes. In the next part
of ex7 pca.m, the script will apply your implementation of PCA to the 3-
dimensional data to reduce it to 2 dimensions and visualize the result in a 2D
scatter plot. The PCA projection can be thought of as a rotation that selects
the view that maximizes the spread of the data, which often corresponds to
the “best” view.
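A rough sketch of this visualization step (assuming the pixel matrix X and
cluster indices idx from the K-means part, the pca function above, and a
simplified inline normalization in place of featureNormalize):

% Reduce the 3-dimensional RGB data to 2 dimensions for plotting
X_norm = (X - mean(X)) ./ std(X);     % simplified feature normalization
[U, S] = pca(X_norm);
Z = X_norm * U(:, 1:2);               % project onto the top 2 principal components
scatter(Z(:, 1), Z(:, 2), 10, idx);   % color each point by its cluster assignment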
Figure 11: 2D visualization produced using PCA
Programming Exercise 8:
Anomaly Detection and Recommender
Systems
Machine Learning
Introduction
In this exercise, you will implement the anomaly detection algorithm and
apply it to detect failing servers on a network. In the second part, you will
use collaborative filtering to build a recommender system for movies. Before
starting on the programming exercise, we strongly recommend watching the
video lectures and completing the review questions for the associated topics.
To get started with the exercise, you will need to download the starter
code and unzip its contents to the directory where you wish to complete the
exercise. If needed, use the cd command in Octave to change to this directory
before starting this exercise.
movie ids.txt - List of movies
normalizeRatings.m - Mean normalization for collaborative filtering
[⋆] estimateGaussian.m - Estimate the parameters of a Gaussian dis-
tribution with a diagonal covariance matrix
[⋆] selectThreshold.m - Find a threshold for anomaly detection
[⋆] cofiCostFunc.m - Implement the cost function for collaborative fil-
tering
Throughout the first part of the exercise (anomaly detection), you will be
using the script ex8.m. For the second part (collaborative filtering), you
will use ex8 cofi.m. These scripts set up the dataset for the problems and
make calls to functions that you will write. You are only required to modify
functions in other files, by following the instructions in this assignment.
1 Anomaly detection
In this exercise, you will implement an anomaly detection algorithm to detect
anomalous behavior in server computers. The features measure the through-
put (mb/s) and latency (ms) of response of each server. While your servers
were operating, you collected m = 307 examples of how they were behaving,
and thus have an unlabeled dataset {x(1) , . . . , x(m) }. You suspect that the
vast majority of these examples are “normal” (non-anomalous) examples of
the servers operating normally, but there might also be some examples of
servers acting anomalously within this dataset.
You will use a Gaussian model to detect anomalous examples in your
dataset. You will first start on a 2D dataset that will allow you to visualize
what the algorithm is doing. On that dataset you will fit a Gaussian dis-
tribution and then find values that have very low probability and hence can
be considered anomalies. After that, you will apply the anomaly detection
algorithm to a larger dataset with many dimensions. You will be using ex8.m
for this part of the exercise.
The first part of ex8.m will visualize the dataset as shown in Figure 1.
Figure 1: The first dataset (x-axis: Latency (ms), y-axis: Throughput (mb/s)).
The Gaussian distribution is given by
$$p(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$
where µ is the mean and σ 2 controls the variance.
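To fit this model, you estimate a mean and a variance for each feature of the
data. A minimal sketch of what estimateGaussian.m (listed above) could
compute, assuming a 1/m normalization for the variance:

function [mu, sigma2] = estimateGaussian(X)
    % X is an m x n matrix; fit an independent Gaussian to each column
    m = size(X, 1);
    mu = mean(X);                             % 1 x n vector of feature means
    sigma2 = (1 / m) * sum((X - mu) .^ 2);    % 1 x n vector of feature variances
end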
Figure: the dataset with the fitted Gaussian distribution (x-axis: Latency (ms), y-axis: Throughput (mb/s)).
Examples that have very low probability under the estimated distribution are
more likely to be the anomalies in our dataset. One way to determine which
examples are anomalies is to select a threshold based on a cross validation
set. In this part of the exercise, you will implement an algorithm to select
the threshold ε using the F1 score on a cross validation set.
You should now complete the code in selectThreshold.m. For this, we
will use a cross validation set $\{(x_{cv}^{(1)}, y_{cv}^{(1)}), \ldots, (x_{cv}^{(m_{cv})}, y_{cv}^{(m_{cv})})\}$, where the la-
bel $y = 1$ corresponds to an anomalous example, and $y = 0$ corresponds
to a normal example. For each cross validation example, we will com-
pute $p(x_{cv}^{(i)})$. The vector of all of these probabilities $p(x_{cv}^{(1)}), \ldots, p(x_{cv}^{(m_{cv})})$ is
passed to selectThreshold.m in the vector pval. The corresponding labels
$y_{cv}^{(1)}, \ldots, y_{cv}^{(m_{cv})}$ are passed to the same function in the vector yval.
The function selectThreshold.m should return two values; the first is
the selected threshold ε. If an example x has a low probability p(x) < ε,
then it is considered to be an anomaly. The function should also return the
F1 score, which tells you how well you’re doing on finding the ground truth
anomalies given a certain threshold. For many different values of ε, you will
compute the resulting F1 score by computing how many examples the current
threshold classifies correctly and incorrectly.
The F1 score is computed using precision (prec) and recall (rec):

$$F_1 = \frac{2 \cdot prec \cdot rec}{prec + rec}, \qquad (3)$$
You compute precision and recall by:

$$prec = \frac{tp}{tp + fp}, \qquad (4)$$

$$rec = \frac{tp}{tp + fn}, \qquad (5)$$
where
tp is the number of true positives: the ground truth label says it’s an
anomaly and our algorithm correctly classified it as an anomaly.
f p is the number of false positives: the ground truth label says it’s not
an anomaly, but our algorithm incorrectly classified it as an anomaly.
f n is the number of false negatives: the ground truth label says it’s an
anomaly, but our algorithm incorrectly classified it as not being anoma-
lous.
In the provided code selectThreshold.m, there is already a loop that
will try many different values of ε and select the best ε based on the F1 score.
You should now complete the code in selectThreshold.m. You can im-
plement the computation of the F1 score using a for-loop over all the cross
validation examples (to compute the values tp, f p, f n). You should see a
value for epsilon of about 8.99e-05.
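For reference, the body of that loop could be sketched as follows, for a
candidate threshold epsilon (a vectorized version; the for-loop suggested
above is equally valid):

predictions = (pval < epsilon);               % 1 = flagged as an anomaly
tp = sum((predictions == 1) & (yval == 1));   % true positives
fp = sum((predictions == 1) & (yval == 0));   % false positives
fn = sum((predictions == 0) & (yval == 1));   % false negatives
prec = tp / (tp + fp);
rec  = tp / (tp + fn);
F1 = (2 * prec * rec) / (prec + rec);         % equation (3)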
Figure: the dataset with the detected anomalies highlighted (x-axis: Latency (ms), y-axis: Throughput (mb/s)).
2 Recommender Systems
In this part of the exercise, you will implement the collaborative filtering
learning algorithm and apply it to a dataset of movie ratings.1 This dataset
consists of ratings on a scale of 1 to 5. The dataset has nu = 943 users, and
nm = 1682 movies. For this part of the exercise, you will be working with
the script ex8 cofi.m.
1 MovieLens 100k Dataset from GroupLens Research.
In the next parts of this exercise, you will implement the function cofiCostFunc.m
that computes the collaborative filtering objective function and gradient. After
implementing the cost function and gradient, you will use fmincg.m to
learn the parameters for collaborative filtering.
The i-th row of X corresponds to the feature vector x(i) for the i-th movie,
and the j-th row of Theta corresponds to one parameter vector θ(j) , for the
j-th user. Both x(i) and θ(j) are n-dimensional vectors. For the purposes of
this exercise, you will use n = 100, and therefore, x(i) ∈ R100 and θ(j) ∈ R100 .
Correspondingly, X is a nm × 100 matrix and Theta is a nu × 100 matrix.
The collaborative filtering algorithm considers a set of n-dimensional parameter
vectors x(1), ..., x(nm) and θ(1), ..., θ(nu), where the model predicts the rating
for movie i by user j as y(i,j) = (θ(j))T x(i). Given a dataset that consists of a
set of ratings produced by some users on some movies, you wish to learn the
parameter vectors x(1), ..., x(nm), θ(1), ..., θ(nu) that produce the best fit
(minimize the squared error).
You will complete the code in cofiCostFunc.m to compute the cost func-
tion and gradient for collaborative filtering. Note that the parameters to the
function (i.e., the values that you are trying to learn) are X and Theta. In
order to use an off-the-shelf minimizer such as fmincg, the cost function has
been set up to unroll the parameters into a single vector params. You had
previously used the same vector unrolling method in the neural networks
programming exercise.
The collaborative filtering cost function (without regularization) is given by

$$J(x^{(1)}, \ldots, x^{(n_m)}, \theta^{(1)}, \ldots, \theta^{(n_u)}) = \frac{1}{2} \sum_{(i,j):r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right)^2.$$
You should now modify cofiCostFunc.m to return this cost in the vari-
able J. Note that you should be accumulating the cost for user j and movie
i only if R(i, j) = 1.
After you have completed the function, the script ex8 cofi.m will run
your cost function. You should expect to see an output of 22.22.
Implementation Note: We strongly encourage you to use a vectorized
implementation to compute J, since it will later be called many times
by the optimization package fmincg. As usual, it might be easiest to
first write a non-vectorized implementation (to make sure you have the
right answer), and then modify it to become a vectorized implementation
(checking that the vectorization steps don't change your algorithm's out-
put). To come up with a vectorized implementation, the following tip
might be helpful: You can use the R matrix to set selected entries to 0.
For example, R .* M will do an element-wise multiplication between M
and R; since R only has elements with values either 0 or 1, this has the
effect of setting the elements of M to 0 only when the corresponding value
in R is 0. Hence, sum(sum(R.*M)) is the sum of all the elements of M for
which the corresponding element in R equals 1.
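As a minimal sketch of the unregularized cost using this trick (with X, Theta,
Y, and R as defined in this exercise):

% Prediction errors for every (movie, user) pair; R zeroes out unrated entries
errors = X * Theta' - Y;
J = sum(sum(R .* (errors .^ 2))) / 2;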
The gradient of the cost function with respect to the parameters $\theta_k^{(j)}$ is given by

$$\frac{\partial J}{\partial \theta_k^{(j)}} = \sum_{i:r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right) x_k^{(i)}.$$
Note that the function returns the gradient for both sets of variables
by unrolling them into a single vector. After you have completed the code
to compute the gradients, the script ex8 cofi.m will run a gradient check
(checkCostFunction) to numerically check the implementation of your gra-
dients.2 If your implementation is correct, you should find that the analytical
and numerical gradients match up closely.
2 This is similar to the numerical check that you used in the neural networks exercise.
Implementation Note: You can get full credit for this assignment
without using a vectorized implementation, but your code will run much
more slowly (a small number of hours), and so we recommend that you
try to vectorize your implementation.
To get started, you can implement the gradient with a for-loop over movies
(for computing $\partial J / \partial x_k^{(i)}$) and a for-loop over users (for computing $\partial J / \partial \theta_k^{(j)}$). When
you first implement the gradient, you might start with an unvectorized
version, by implementing another inner for-loop that computes each ele-
ment in the summation. After you have completed the gradient computa-
tion this way, you should try to vectorize your implementation (vectorize
the inner for-loops), so that you're left with only two for-loops (one for
looping over movies to compute $\partial J / \partial x_k^{(i)}$ for each movie, and one for looping
over users to compute $\partial J / \partial \theta_k^{(j)}$ for each user).
Implementation Tip: To perform the vectorization, you might find this
helpful: You should come up with a way to compute all the derivatives
associated with $x_1^{(i)}, x_2^{(i)}, \ldots, x_n^{(i)}$ (i.e., the derivative terms associated with
the feature vector $x^{(i)}$) at the same time. Let us define the derivatives for
the feature vector of the i-th movie as:

$$(X_{\mathrm{grad}}(i,:))^T = \begin{bmatrix} \frac{\partial J}{\partial x_1^{(i)}} \\ \frac{\partial J}{\partial x_2^{(i)}} \\ \vdots \\ \frac{\partial J}{\partial x_n^{(i)}} \end{bmatrix} = \sum_{j:r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right) \theta^{(j)}$$

Concretely, you can set idx = find(R(i, :)==1) to be a list of all the
users that have rated movie i. This will allow you to create the temporary
matrices Theta_temp = Theta(idx, :) and Y_temp = Y(i, idx) that index into
Theta and Y to give you only the set of users which have rated the i-th
movie. You can then write the derivatives as:

X_grad(i, :) = (X(i, :) * Theta_temp' - Y_temp) * Theta_temp
After you have vectorized the computations of the derivatives with respect
to x(i) , you should use a similar method to vectorize the derivatives with
respect to θ(j) as well.
With regularization, the collaborative filtering cost function is given by

$$J(x^{(1)}, \ldots, x^{(n_m)}, \theta^{(1)}, \ldots, \theta^{(n_u)}) = \frac{1}{2} \sum_{(i,j):r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right)^2 + \left( \frac{\lambda}{2} \sum_{j=1}^{n_u} \sum_{k=1}^{n} (\theta_k^{(j)})^2 \right) + \left( \frac{\lambda}{2} \sum_{i=1}^{n_m} \sum_{k=1}^{n} (x_k^{(i)})^2 \right).$$
Correspondingly, the regularized gradient with respect to $\theta_k^{(j)}$ is given by

$$\frac{\partial J}{\partial \theta_k^{(j)}} = \sum_{i:r(i,j)=1} \left( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \right) x_k^{(i)} + \lambda \theta_k^{(j)}.$$
This means that you just need to add λx(i) to the X grad(i,:) variable
and add λθ(j) to the Theta grad(j,:) variable described earlier.
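In code, that could be sketched as follows (assuming J, X_grad, and
Theta_grad already hold the unregularized cost and gradients, and lambda is
the regularization parameter):

% Add the regularization terms from the equations above
J = J + (lambda / 2) * (sum(sum(Theta .^ 2)) + sum(sum(X .^ 2)));
X_grad = X_grad + lambda * X;              % adds lambda * x^(i) to each row
Theta_grad = Theta_grad + lambda * Theta;  % adds lambda * theta^(j) to each row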
After you have completed the code to compute the gradients, the script
ex8 cofi.m will run another gradient check (checkCostFunction) to numer-
ically check the implementation of your gradients.
After you have finished implementing the collaborative filtering cost function
and gradient, you can start training the algorithm to make movie
recommendations for yourself. In the next part of the ex8 cofi.m script,
you can enter your own movie preferences, so that later when the algorithm
runs, you can get your own movie recommendations! We have filled out
some values according to our own preferences, but you should change these
according to your own tastes. The list of all movies and their number in the
dataset is listed in the file movie idx.txt.
2.3.1 Recommendations
After the additional ratings have been added to the dataset, the script
will proceed to train the collaborative filtering model. This will learn the
parameters X and Theta. To predict the rating of movie i for user j, you need
to compute (θ(j) )T x(i) . The next part of the script computes the ratings for
all the movies and users and displays the movies that it recommends (Figure
4), according to the ratings that were entered earlier in the script. Note that
you might obtain a different set of predictions due to different random
initializations.