ADALINE For Pattern Classification: Polytechnic University Department of Computer and Information Science
output value.
Also like Hebb’s rule and the Perceptron rule, one cycles through
the training set, presenting the training vectors one at a time to the
NN. For the Delta rule, the weights and bias are updated so as to
minimize the square of the difference between the net output and the
target value for the particular training vector presented at that step.
Notice that this procedure is not exactly the same as minimizing
the overall error between the NN outputs and their corresponding
target values for all the training vectors. Doing so would require the
solution to a large-scale optimization problem involving N weight
components and one bias.
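To make the distinction concrete (writing Q for the number of training vectors and y(q) for the NN output when s(q) is presented), the Delta rule at each step minimizes only the single-vector error E_q = (y(q) − t(q))^2, whereas minimizing the overall error would mean minimizing the sum E_total = Σ_{q=1}^{Q} (y(q) − t(q))^2 over all Q training vectors at once.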
The question is how should the changes in the weight vector be chosen
in order that we end up with a lower value for E:
E(w(k + 1)) < E(w(k)).
For sufficiently small ∆w(k), we obtain from Taylor’s theorem
E(w(k + 1)) = E(w(k) + ∆w(k)) ≈ E(w(k)) + g(k) · ∆w(k),
where g(k) = ∇E(w)|w=w(k) is the gradient of E(w) at w(k). It
is clear that E(w(k + 1)) < E(w(k)) if g(k) · ∆w(k) < 0. The
largest decrease in the value of E(w) occurs in the direction ∆w(k) =
−αg(k), if α is sufficiently small and positive. This direction is called
the steepest descent direction, and α controls the size of the step
and is called the learning rate. Thus starting from w(0), the idea is to
find a minimum of the function E(w) iteratively by making successive
steps along the local gradient direction, according to
w(k + 1) = w(k) − αg(k), k = 0, 1, . . . .
This method of finding the minimum is known as the steepest descent
method.
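As an illustration, here is a minimal Python sketch of the steepest descent iteration; the function name steepest_descent, the callable grad_E, and the default values for α, the tolerance, and the iteration cap are hypothetical choices, not part of the original notes.

```python
import numpy as np

def steepest_descent(grad_E, w0, alpha=0.05, tol=1e-6, max_iter=10000):
    """Iterate w(k+1) = w(k) - alpha * g(k) until the step becomes tiny."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        g = grad_E(w)                         # g(k) = gradient of E at w(k)
        w_next = w - alpha * g                # step along the steepest descent direction
        if np.linalg.norm(w_next - w) < tol:  # stop when the change is below the tolerance
            return w_next
        w = w_next
    return w
```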
2. Delta Rule
Suppose at the k-th step in the training process, the current weight
vector and bias are given by w(k) and b(k), respectively, and the q-th
training vector, s(k) = s(q), is presented to the NN. The total input
received by the output neuron is
y_in = b(k) + Σ_{i=1}^{N} s_i(k) w_i(k).
Since the transfer function is given by the identity function during
training, the output of the NN is
y(k) = y_in = b(k) + Σ_{i=1}^{N} s_i(k) w_i(k).
However the target output is t(k) = t(q), and so if y(k) ≠ t(k) then
there is an error given by y(k) − t(k). This error can be positive or
negative. The Delta rule aims at finding the weights and bias so as
to minimize the square of this error
E(w(k)) = (y(k) − t(k))^2 = ( b(k) + Σ_{i=1}^{N} s_i(k) w_i(k) − t(k) )^2.
We can absorb the bias term by introducing an extra input neuron,
X0 , so that its activation (signal) is always fixed at 1 and its weight
is the bias. Then the square of the error in the k-th step is
E(w(k)) = ( Σ_{i=0}^{N} s_i(k) w_i(k) − t(k) )^2.
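Applying the steepest descent step to this single-vector error gives the Delta-rule update. The following Python sketch shows one such update for a single output neuron with the bias absorbed as w[0]; the function name, the default α, and the returned step error are illustrative choices, and the 2α factor follows from differentiating the squared error, as is done later for the multi-output case.

```python
import numpy as np

def delta_rule_step(w, s, t, alpha=0.05):
    """One Delta-rule update for a single output neuron.

    w : weights with the bias absorbed as w[0] (the extra input s[0] is fixed at 1).
    s : augmented training vector [1, s_1, ..., s_N].
    t : target value for this training vector.
    """
    y = np.dot(s, w)                    # identity transfer function during training
    error = y - t                       # y(k) - t(k)
    w_new = w - 2 * alpha * error * s   # steepest descent on E(w(k)) = error**2
    return w_new, error ** 2
```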
Notice that for the Delta rule, unlike the Perceptron rule, training
does not stop even after all the training vectors have been correctly
classified. The algorithm continuously attempts to produce more robust
sets of weights and bias. Iteration is stopped only when changes
in the weights and bias are smaller than a preset tolerance level.
In general, there is no proof that the Delta rule will always lead
to convergence, or to a set of weights and bias that enable the NN to
correctly classify all the training vectors. One also needs to experi-
ment with the size of the learning rate. Too small a value may require
too many iterations. Too large a value may lead to non-convergence.
Also because the identity function is used as the transfer function
during training, the error at each step of the training process may
never become small, even though an acceptable set of weights and
bias may have already been found. In that case the weights will
continually change from one iteration to the next. The amount of
change is of course proportional to α.
Therefore in some cases, one may want to gradually decrease α
towards zero during iteration, especially when one is close to obtaining
the best set of weights and bias. Of course there are many ways in
which α can be made to approach zero.
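One possible way to organize the whole procedure, including the tolerance-based stopping test and a decaying learning rate, is sketched below in Python. The schedule α(k) = α0/(1 + k/τ), the parameter names, and the default values are hypothetical, chosen only to illustrate one of the many ways α can be made to approach zero.

```python
import numpy as np

def train_delta_rule(S, T, alpha0=0.05, tau=100.0, tol=1e-6, max_epochs=1000):
    """Cycle through the training set, updating weights by the Delta rule.

    S : (Q, N+1) array of training vectors with a leading 1 absorbing the bias.
    T : (Q,) array of target values.
    """
    w = np.zeros(S.shape[1])
    k = 0
    for _ in range(max_epochs):
        max_change = 0.0
        for s, t in zip(S, T):
            alpha = alpha0 / (1.0 + k / tau)   # gradually decrease alpha toward zero
            y = np.dot(s, w)                   # identity transfer function
            dw = -2 * alpha * (y - t) * s      # Delta-rule update
            w += dw
            max_change = max(max_change, np.abs(dw).max())
            k += 1
        if max_change < tol:                   # changes smaller than the preset tolerance
            break
    return w
```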
Notice that the correlation matrix C and the vector v can be easily
computed from the given training set.
Assuming that the correlation matrix is nonsingular, the solution
is therefore given by
w = v C^{-1},
where C^{-1} is the inverse matrix of C. Notice that the correlation
matrix is symmetric and has dimension (N + 1) × (N + 1).
Although the exact solution is formally available, computing it this
way requires the computation of the inverse of matrix C or solving
a system of linear equations. The computational complexity involved
is O((N + 1)^3). For most practical problems, N is so large that
computing the solution this way is really not feasible.
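For small problems, though, the exact solution is easy to compute. The following Python sketch assumes the definitions C = Σ_q s(q)^T s(q) and v = Σ_q t(q) s(q) (any common normalization such as 1/Q cancels out), and the function name is illustrative.

```python
import numpy as np

def exact_adaline_weights(S, T):
    """Exact weights satisfying w C = v, i.e. w = v C^{-1}.

    S : (Q, N+1) array of training vectors with a leading 1 absorbing the bias.
    T : (Q,) array of target values.
    """
    C = S.T @ S                   # (N+1) x (N+1) correlation matrix (symmetric)
    v = T @ S                     # (N+1,) vector
    return np.linalg.solve(C, v)  # solve the linear system rather than inverting C
```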
q s(q) t(q)
1 [1 1] 1
2 [1 -1] -1
3 [-1 1] -1
4 [-1 -1] -1
We assume that the weights and bias are initially zero, and apply
the Delta rule to train the NN. We find that for a learning rate α larger
than about 0.3, there is no convergence as the weight components
increase without bound. For α less than 0.3 but larger than 0.16,
the weights converge but to values that fail to correctly classify all
the training vectors. The weights converge to values that correctly
classify all the training vectors if α is less than about 0.16. They
become closer and closer to the most robust set of weights and bias
when α is below 0.05.
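A small self-contained experiment of this kind might be set up as follows in Python; the particular α values tried and the number of epochs are arbitrary illustrative choices, and since the thresholds quoted above depend on the exact update convention, the sketch simply uses the 2α factor derived for the Delta rule.

```python
import numpy as np

# Training set from the table above, with the bias absorbed by a leading 1.
S = np.array([[1,  1,  1],
              [1,  1, -1],
              [1, -1,  1],
              [1, -1, -1]], dtype=float)
T = np.array([1, -1, -1, -1], dtype=float)

def run_delta_rule(alpha, epochs=50):
    """Cycle through the training set with fixed alpha, starting from zero weights."""
    w = np.zeros(3)
    for _ in range(epochs):
        for s, t in zip(S, T):
            y = np.dot(s, w)                  # identity transfer function
            w = w - 2 * alpha * (y - t) * s   # Delta-rule update
    return w

for alpha in (0.025, 0.1, 0.2, 0.4):
    w = run_delta_rule(alpha)
    print(alpha, w, np.sign(S @ w))   # compare the resulting classifications
```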
We also consider here the exact formal solution given in the last
section. We will absorb the bias by appending a 1 in the leading
position of each of the training vectors so that the training set is
q s(q) t(q)
1 [1 1 1] 1
2 [1 1 -1] -1
3 [1 -1 1] -1
4 [1 -1 -1] -1
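Under the same assumption as before (C = Σ_q s(q)^T s(q) and v = Σ_q t(q) s(q), with any common normalization cancelling), the exact solution for this small training set can be checked directly in Python; for these four vectors the computation gives a bias of −1/2 and weights of 1/2 and 1/2.

```python
import numpy as np

# Training set with the bias absorbed (leading 1 in each training vector).
S = np.array([[1,  1,  1],
              [1,  1, -1],
              [1, -1,  1],
              [1, -1, -1]], dtype=float)
T = np.array([1, -1, -1, -1], dtype=float)

C = S.T @ S                # correlation matrix; here it equals 4 times the identity
v = T @ S                  # here [-2, 2, 2]
w = np.linalg.solve(C, v)  # w = v C^{-1}; gives [-0.5, 0.5, 0.5]
print(w, np.sign(S @ w))   # the signs reproduce the targets [1, -1, -1, -1]
```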
[Figure: ADALINE network with multiple output neurons. Each input unit Xi (i = 1, ..., n) is connected to every output unit Yj (j = 1, ..., m) through a weight wij; the xi are the input signals and the yj the outputs.]
and so we have
∂E(W(k))/∂w_mn = 2 Σ_{j=1}^{M} (y_j(k) − t_j(k)) δ_jn s_m(k)
               = 2 s_m(k) (y_n(k) − t_n(k)).
Using the steepest descent method, we have
w_ij(k + 1) = w_ij(k) − 2α s_i(k) (y_j(k) − t_j(k)).
The i = 1, 2, . . . , N components of this equation give the updating
rule for the weights. The i = 0 component gives the updating rule
for the bias:
b_j(k + 1) = b_j(k) − 2α (y_j(k) − t_j(k)).
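In Python, one such update for all M output neurons at once might look like the following sketch; the function name, the array shapes, and the default α are assumptions made for illustration.

```python
import numpy as np

def delta_rule_step_multi(W, b, s, t, alpha=0.05):
    """One Delta-rule update for a layer of M output neurons.

    W : (N, M) weight matrix with entries w_ij.
    b : (M,) bias vector.
    s : (N,) training vector; t : (M,) target vector.
    """
    y = s @ W + b                              # identity transfer function during training
    err = y - t                                # y_j(k) - t_j(k) for each output neuron
    W_new = W - 2 * alpha * np.outer(s, err)   # weight updates (i = 1, ..., N components)
    b_new = b - 2 * alpha * err                # bias updates (i = 0 component)
    return W_new, b_new
```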
6. An Example
We will now treat the same example that we have considered before for
the Perceptron with multiple output neurons. We use bipolar output
neurons and the training set:
(class 1)
s(1) = [1 1], s(2) = [1 2] with t(1) = t(2) = [−1 −1]
(class 2)
s(3) = [2 −1], s(4) = [2 0] with t(3) = t(4) = [−1 1]
(class 3)
s(5) = [−1 2], s(6) = [−2 1] with t(5) = t(6) = [1 −1]
(class 4)
s(7) = [−1 −1], s(8) = [−2 −2] with t(7) = t(8) = [1 1]
It is clear that N = 2, Q = 8, and the number of classes is 4. The
number of output neurons is chosen to be M = 2 so that 2^M = 4
classes can be represented.
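As an illustration, the training set and class coding above can be written down and trained with the sequential Delta rule as in the following Python sketch; the learning rate and epoch count are arbitrary illustrative choices.

```python
import numpy as np

# Eight training vectors (rows) and their bipolar 2-component targets.
S = np.array([[ 1,  1], [ 1,  2],    # class 1
              [ 2, -1], [ 2,  0],    # class 2
              [-1,  2], [-2,  1],    # class 3
              [-1, -1], [-2, -2]],   # class 4
             dtype=float)
T = np.array([[-1, -1], [-1, -1],
              [-1,  1], [-1,  1],
              [ 1, -1], [ 1, -1],
              [ 1,  1], [ 1,  1]], dtype=float)

alpha, epochs = 0.02, 500
W = np.zeros((2, 2))    # W[i, j] = w_ij
b = np.zeros(2)
for _ in range(epochs):
    for s, t in zip(S, T):
        err = (s @ W + b) - t              # identity transfer function during training
        W -= 2 * alpha * np.outer(s, err)  # weight updates
        b -= 2 * alpha * err               # bias updates
print(W, b)
```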
Our exact calculation of the weights and bias for the case of a single
output neuron can be extended to the case of multiple output neurons.
One can then obtain the following exact results for the weights and
biases:
" −91 1
#
153 6
W= −8 −2
153 3
2 1
b= 153 6
Using these exact results, we can easily see how good or bad our
iterative solutions are.
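These exact values can be reproduced with a short Python computation, again assuming the unnormalized correlation matrix built from the training vectors with the biases absorbed by a leading 1; the variable names are arbitrary.

```python
import numpy as np

S = np.array([[ 1,  1], [ 1,  2],    # class 1
              [ 2, -1], [ 2,  0],    # class 2
              [-1,  2], [-2,  1],    # class 3
              [-1, -1], [-2, -2]],   # class 4
             dtype=float)
T = np.array([[-1, -1], [-1, -1],
              [-1,  1], [-1,  1],
              [ 1, -1], [ 1, -1],
              [ 1,  1], [ 1,  1]], dtype=float)

X = np.hstack([np.ones((8, 1)), S])  # absorb the biases with a leading 1
C = X.T @ X                          # 3 x 3 correlation matrix
V = X.T @ T                          # one right-hand-side column per output neuron
Wb = np.linalg.solve(C, V)           # row 0 holds the biases, rows 1-2 the weights
b, W = Wb[0], Wb[1:]
print(b)   # approximately [ 2/153, 1/6 ]
print(W)   # approximately [[-91/153, 1/6], [-8/153, -2/3]]
```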
It should be remarked that the most robust set of weights and
biases is determined only by a few training vectors that lie very close
to the decision boundaries. However in the Delta rule, all training
vectors contribute in some way. Therefore the set of weights and
biases obtained by the Delta rule is not necessarily always the most
robust.
The Delta rule usually gives convergent results if the learning rate
is not too large. The resulting set of weights and biases typically leads
to a NN that correctly classifies most, if not all, of the training vectors.