Perceptron
Andrew Ng
Lecture on 11/4/04
sees x^{(1)} and is asked to predict what it thinks y^{(1)} is. After making its
prediction, the true value of y^{(1)} is revealed to the algorithm (and the algorithm
may use this information to perform some learning). The algorithm is then
shown x^{(2)} and again asked to make a prediction, after which y^{(2)} is revealed,
and it may again perform some more learning. This proceeds until we reach
(x^{(m)}, y^{(m)}). In the online learning setting, we are interested in the total
number of errors made by the algorithm during this process. Thus, it models
applications in which the algorithm has to make predictions even while it is
still learning.
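The predict-then-learn protocol described above can be sketched as a simple loop. The learner interface below (`predict` and `learn` method names) is illustrative, not from the notes:

```python
def online_errors(learner, examples):
    """Count the mistakes a learner makes in the online setting:
    first predict on x^(i), then see the true y^(i) and learn from it."""
    errors = 0
    for x, y in examples:
        if learner.predict(x) != y:
            errors += 1
        learner.learn(x, y)  # true label is revealed only after the prediction
    return errors
```

Note that every prediction counts toward the error total, including those made early on when the learner has seen little or no data.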
We will give a bound on the online learning error of the perceptron algo-
rithm. To make our subsequent derivations easier, we will use the notational
convention of denoting the class labels by y ∈ {−1, 1}.
Recall that the perceptron algorithm has parameters θ ∈ R^{n+1}, and makes
its predictions according to

h_θ(x) = g(θ^T x),

where

g(z) = 1 if z ≥ 0, and g(z) = −1 if z < 0.
Also, given a training example (x, y), the perceptron learning rule updates
the parameters as follows. If h_θ(x) = y, then it makes no change to the
parameters. Otherwise, it performs the update
θ := θ + yx.
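Putting the prediction rule and the mistake-driven update together, the online perceptron can be sketched as below. This is a minimal NumPy version; the function names and the assumption that each x already includes the intercept term x_0 = 1 are illustrative:

```python
import numpy as np

def g(z):
    """Threshold function from the notes: +1 if z >= 0, else -1."""
    return 1 if z >= 0 else -1

def perceptron_online(examples):
    """Online perceptron: predict with g(theta^T x); on a mistake,
    update theta := theta + y * x. Returns theta and the mistake count.

    `examples` is a sequence of (x, y) pairs with y in {-1, +1};
    each x is assumed to already include the intercept term x_0 = 1.
    """
    n = len(examples[0][0])
    theta = np.zeros(n)          # parameters start at zero
    mistakes = 0
    for x, y in examples:
        if g(theta @ x) != y:    # update only when the prediction is wrong
            theta = theta + y * x
            mistakes += 1
    return theta, mistakes
```

Note that correctly classified examples leave θ untouched, so the final parameters are determined entirely by the examples the algorithm got wrong.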
The following theorem gives a bound on the online learning error of the
perceptron algorithm, when it is run as an online algorithm that performs
an update each time it gets an example wrong. Note that the bound below
on the number of errors does not have an explicit dependence on the number
of examples m in the sequence, or on the dimension n of the inputs (!).
The third step above used Equation (2). Moreover, again by applying a
straightforward inductive argument, we see that (4) implies
The second inequality above follows from the fact that u is a unit-length
vector (and z^T u = ||z|| · ||u|| cos φ ≤ ||z|| · ||u||, where φ is the angle between
z and u). Our result implies that k ≤ (D/γ)^2 . Hence, if the perceptron made
a k-th mistake, then k ≤ (D/γ)^2 ; i.e., the total number of mistakes it makes
on the sequence is at most (D/γ)^2 .
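As a quick sanity check on the bound, one can run the online perceptron on a synthetic linearly separable dataset and compare the mistake count against (D/γ)^2. The data-generating choices below (the particular unit vector u, the enforced margin of 0.1) are illustrative assumptions, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic separable data: labels are given by a known unit-length
# separator u, and points too close to the boundary are discarded
# so that every remaining example clears a margin of at least 0.1.
u = np.array([0.6, 0.8])                 # ||u|| = 1
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X @ u >= 0, 1, -1)
keep = np.abs(X @ u) >= 0.1
X, y = X[keep], y[keep]

gamma = np.min(y * (X @ u))              # margin of the data w.r.t. u
D = np.max(np.linalg.norm(X, axis=1))    # largest input norm

# Online perceptron, counting mistakes over the sequence.
theta = np.zeros(2)
mistakes = 0
for x_i, y_i in zip(X, y):
    if (1 if theta @ x_i >= 0 else -1) != y_i:
        theta = theta + y_i * x_i
        mistakes += 1

# The theorem guarantees: mistakes <= (D / gamma)**2.
```

Here the separator passes through the origin, so no intercept term is needed; the observed mistake count should always respect the (D/γ)^2 bound, regardless of the order in which the examples arrive.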