

Institute for Advanced Management Systems Research

Department of Information Technologies

Åbo Akademi University


The Perceptron Learning Rule - Tutorial
Robert Fullér
© 2010 rfuller@abo.
November 4, 2010
Table of Contents
1. Artificial neural networks
2. The perceptron learning rule
3. The perceptron learning algorithm
4. Illustration of the perceptron learning algorithm
1. Artificial neural networks
Artificial neural systems can be considered as simplified mathematical models of brain-like systems and they function as parallel distributed computing networks.

However, in contrast to conventional computers, which are programmed to perform specific tasks, most neural networks must be taught, or trained.

They can learn new associations, new functional dependencies and new patterns. Although computers outperform both biological and artificial neural systems for tasks based on precise and fast arithmetic operations, artificial neural systems represent the promising new generation of information processing networks. The study of brain-style computation has its roots
over 50 years ago in the work of McCulloch and Pitts (1943) and slightly later in Hebb's famous Organization of Behavior (1949).
The early work in artificial intelligence was torn between those who believed that intelligent systems could best be built on computers modeled after brains, and those like Minsky and Papert (1969) who believed that intelligence was fundamentally symbol processing of the kind readily modeled on the von Neumann computer.
For a variety of reasons, the symbol-processing approach became the dominant theme in Artificial Intelligence in the 1970s. However, the 1980s saw a rebirth of interest in neural computing:
1982 Hopfield provided the mathematical foundation for understanding the dynamics of an important class of networks.

1984 Kohonen developed unsupervised learning networks for feature mapping into regular arrays of neurons.

1986 Rumelhart and McClelland introduced the backpropagation learning algorithm for complex, multilayer networks.
Beginning in 1986-87, many neural network research programs were initiated. The list of applications that can be solved by neural networks has expanded from small test-size examples to large practical tasks. Very-large-scale integrated neural network chips have been fabricated.

In the long term, we could expect that artificial neural systems will be used in applications involving vision, speech, decision making, and reasoning, but also as signal processors such as filters, detectors, and quality control systems.
Definition 1.1. Artificial neural systems, or neural networks, are physical cellular systems which can acquire, store, and utilize experiential knowledge.

The knowledge is in the form of stable states or mappings embedded in networks that can be recalled in response to the presentation of cues.

The basic processing elements of neural networks are called artificial neurons, or simply neurons or nodes.
Each processing unit is characterized by an activity level (representing the state of polarization of a neuron), an output value (representing the firing rate of the neuron), a set of input connections (representing synapses on the cell and its dendrite), a bias value (representing an internal resting level of the neuron), and a set of output connections (representing a neuron's axonal projections).

Figure 1: A multi-layer feedforward neural network.
Each of these aspects of the unit is represented mathematically by real numbers. Thus, each connection has an associated weight (synaptic strength) which determines the effect of the incoming input on the activation level of the unit. The weights may be positive (excitatory) or negative (inhibitory).
Figure 2: A processing element with single output connection.
The signal flow from the neuron inputs, $x_j$, is considered to be unidirectional, as indicated by the arrows, as is a neuron's output signal flow. The neuron output signal is given by the relationship
$$o = f(\langle w, x\rangle) = f(w^T x) = f\Big(\sum_{j=1}^{n} w_j x_j\Big)$$
where $w = (w_1, \dots, w_n)^T \in \mathbb{R}^n$ is the weight vector. The function $f(w^T x)$ is often referred to as an activation (or transfer) function. Its domain is the set of activation values, $net$, of the neuron model, so we often write this function as $f(net)$. The variable $net$ is defined as the scalar product of the weight and input vectors
$$net = \langle w, x\rangle = w^T x = w_1 x_1 + \cdots + w_n x_n$$
and in the simplest case the output value o is computed as
$$o = f(net) = \begin{cases} 1 & \text{if } w^T x \geq \theta \\ 0 & \text{otherwise,} \end{cases}$$
where $\theta$ is called the threshold level, and this type of node is called a linear threshold unit.
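To make the definition concrete, here is a minimal Python sketch of a linear threshold unit; the function name and the use of NumPy are illustrative choices, not part of the tutorial.

import numpy as np

def linear_threshold_unit(w, x, theta):
    """Return 1 if <w, x> >= theta, otherwise 0."""
    net = np.dot(w, x)               # net = <w, x> = w^T x
    return 1 if net >= theta else 0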
Example 1.1. Suppose we have two Boolean inputs $x_1, x_2 \in \{0, 1\}$, one Boolean output $o \in \{0, 1\}$ and the training set is given by the following input/output pairs

     x1   x2   o(x1, x2) = x1 ∧ x2
1.   1    1    1
2.   1    0    0
3.   0    1    0
4.   0    0    0
Then the learning problem is to find weights $w_1$ and $w_2$ and a threshold (or bias) value $\theta$ such that the computed output of our network (which is given by the linear threshold function) is equal to the desired output for all examples. A straightforward solution is $w_1 = w_2 = 1/2$, $\theta = 0.6$. Indeed, from the equation
$$o(x_1, x_2) = \begin{cases} 1 & \text{if } x_1/2 + x_2/2 \geq 0.6 \\ 0 & \text{otherwise} \end{cases}$$
it follows that the output neuron fires if and only if both inputs are on.
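As a quick check of this solution, the following sketch (with an illustrative helper name) evaluates the linear threshold unit with $w_1 = w_2 = 1/2$ and $\theta = 0.6$ on all four training pairs.

def and_unit(x1, x2, w1=0.5, w2=0.5, theta=0.6):
    # Fires (returns 1) only when w1*x1 + w2*x2 >= theta.
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

for x1, x2, target in [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 0)]:
    assert and_unit(x1, x2) == target   # reproduces Boolean AND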
Figure 3: A solution to the learning problem of Boolean and.

Example 1.2. Suppose we have two Boolean inputs $x_1, x_2 \in \{0, 1\}$, one Boolean output $o \in \{0, 1\}$ and the training set is given by the following input/output pairs
     x1   x2   o(x1, x2) = x1 ∨ x2
1.   1    1    1
2.   1    0    1
3.   0    1    1
4.   0    0    0
Then the learning problem is to find weights $w_1$ and $w_2$ and a threshold value $\theta$ such that the computed output of our network is equal to the desired output for all examples. A straightforward solution is $w_1 = w_2 = 1$, $\theta = 0.8$. Indeed, from the equation
$$o(x_1, x_2) = \begin{cases} 1 & \text{if } x_1 + x_2 \geq 0.8 \\ 0 & \text{otherwise,} \end{cases}$$
it follows that the output neuron fires if and only if at least one of the inputs is on.
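The same kind of check works for this solution, with $w_1 = w_2 = 1$ and $\theta = 0.8$ (again, the helper name is only illustrative).

def or_unit(x1, x2, w1=1.0, w2=1.0, theta=0.8):
    # Fires (returns 1) only when x1 + x2 >= theta.
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

for x1, x2, target in [(1, 1, 1), (1, 0, 1), (0, 1, 1), (0, 0, 0)]:
    assert or_unit(x1, x2) == target   # reproduces Boolean OR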
Figure 4: Removing the threshold.

The removal of the threshold from our network is very easy by increasing the dimension of the input patterns. Indeed, the identity

$$w_1 x_1 + \cdots + w_n x_n > \theta \iff w_1 x_1 + \cdots + w_n x_n - 1 \cdot \theta > 0$$
means that by adding an extra neuron to the input layer with fixed input value $-1$ and weight $\theta$, the value of the threshold becomes zero. This is why in the following we suppose that the thresholds are always equal to zero.
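A small sketch of this construction (the helper names and the NumPy usage are illustrative): the augmented unit with zero threshold produces the same outputs as the original thresholded one.

import numpy as np

def with_threshold(w, x, theta):
    return 1 if np.dot(w, x) >= theta else 0

def without_threshold(w, x, theta):
    # Augment the input with an extra component fixed at -1 carrying
    # the weight theta, so the comparison is made against zero.
    w_aug = np.append(w, theta)
    x_aug = np.append(x, -1.0)
    return 1 if np.dot(w_aug, x_aug) >= 0 else 0

w, theta = np.array([0.5, 0.5]), 0.6
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(x, dtype=float)
    assert with_threshold(w, x, theta) == without_threshold(w, x, theta)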
We now define the scalar product of n-dimensional vectors, which plays a very important role in the theory of neural networks.
Definition 1.2. Let $w = (w_1, \dots, w_n)^T$ and $x = (x_1, \dots, x_n)^T$ be two vectors from $\mathbb{R}^n$. The scalar (or inner) product of $w$ and $x$, denoted by $\langle w, x\rangle$ or $w^T x$, is defined by

$$\langle w, x\rangle = w_1 x_1 + \cdots + w_n x_n = \sum_{j=1}^{n} w_j x_j.$$
Another definition of the scalar product in the two-dimensional case is

$$\langle w, x\rangle = |w||x| \cos(w, x)$$

where $|\cdot|$ denotes the Euclidean norm in the real plane, i.e.

$$|w| = \sqrt{w_1^2 + w_2^2}, \qquad |x| = \sqrt{x_1^2 + x_2^2}$$
Figure 5: $w = (w_1, w_2)^T$ and $x = (x_1, x_2)^T$.
Lemma 1. The following property holds:

$$\langle w, x\rangle = w_1 x_1 + w_2 x_2 = \sqrt{w_1^2 + w_2^2}\,\sqrt{x_1^2 + x_2^2}\,\cos(w, x) = |w||x|\cos(w, x).$$
Proof.

$$\cos(w, x) = \cos\big((w, \text{1st axis}) - (x, \text{1st axis})\big) = \cos(w, \text{1st axis})\cos(x, \text{1st axis}) + \sin(w, \text{1st axis})\sin(x, \text{1st axis})$$

$$= \frac{w_1 x_1}{\sqrt{w_1^2 + w_2^2}\sqrt{x_1^2 + x_2^2}} + \frac{w_2 x_2}{\sqrt{w_1^2 + w_2^2}\sqrt{x_1^2 + x_2^2}}$$
That is,

$$|w||x|\cos(w, x) = \sqrt{w_1^2 + w_2^2}\,\sqrt{x_1^2 + x_2^2}\,\cos(w, x) = w_1 x_1 + w_2 x_2.$$
From $\cos(\pi/2) = 0$ it follows that $\langle w, x\rangle = 0$ whenever $w$ and $x$ are perpendicular.
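Lemma 1 can also be checked numerically; the following sketch uses an arbitrary pair of vectors and measures each angle with the first axis, as in the proof.

import numpy as np

w = np.array([3.0, 4.0])
x = np.array([2.0, 1.0])

# Angle of each vector with the first axis, as in the proof above.
angle_w = np.arctan2(w[1], w[0])
angle_x = np.arctan2(x[1], x[0])

lhs = w[0] * x[0] + w[1] * x[1]                                    # <w, x>
rhs = np.linalg.norm(w) * np.linalg.norm(x) * np.cos(angle_w - angle_x)
assert abs(lhs - rhs) < 1e-9

# Perpendicular vectors have zero scalar product, since cos(pi/2) = 0.
assert abs(np.dot([1.0, 0.0], [0.0, 1.0])) < 1e-12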
If $|w| = 1$ (we say that $w$ is normalized) then $|\langle w, x\rangle|$ is nothing else but the projection of $x$ onto the direction of $w$. Indeed, if $|w| = 1$ then we get

$$\langle w, x\rangle = |w||x| \cos(w, x) = |x| \cos(w, x)$$
Figure 6: Projection of x onto the direction of w.
The problem of learning in neural networks is simply the problem of finding a set of connection strengths (weights) which allow the network to carry out the desired computation. The network is provided with a set of example input/output pairs (a training set) and is to modify its connections in order to approximate the function from which the input/output pairs have been drawn. The networks are then tested for their ability to generalize.
2. The perceptron learning rule
The error correction learning procedure is simple enough in conception. The procedure is as follows: during training an input is put into the network and flows through the network, generating a set of values on the output units. Then, the actual output is compared with the desired target, and a match is computed. If the output and target match, no change is made to the net. However, if the output differs from the target, a change must be made to some of the connections.
The perceptron learning rule, introduced by Rosenblatt, is a typical error correction learning algorithm for single-layer feedforward networks with a linear threshold activation function.
Figure 7: Single-layer feedforward network.
Usually, $w_{ij}$ denotes the weight from the j-th input unit to the i-th output unit and $w_i$ denotes the weight vector of the i-th output node. We are given a training set of input/output pairs,
No.   input values                              desired output values
1.    $x^1 = (x^1_1, \dots, x^1_n)$             $y^1 = (y^1_1, \dots, y^1_m)$
...
K.    $x^K = (x^K_1, \dots, x^K_n)$             $y^K = (y^K_1, \dots, y^K_m)$
Our problem is to find weight vectors $w_i$ such that

$$o_i(x^k) = \mathrm{sign}(\langle w_i, x^k\rangle) = y^k_i, \qquad i = 1, \dots, m$$

for all training patterns k.
The activation function of the output nodes is a linear threshold function of the form

$$o_i(x) = \begin{cases} 1 & \text{if } \langle w_i, x\rangle \geq 0 \\ 0 & \text{if } \langle w_i, x\rangle < 0 \end{cases}$$
and the weight adjustments in the perceptron learning method are performed by

$$w_i := w_i + \eta\,(y_i - o_i)\,x, \qquad i = 1, \dots, m.$$
That is,

$$w_{ij} := w_{ij} + \eta\,(y_i - o_i)\,x_j, \qquad j = 1, \dots, n,$$

where $\eta > 0$ is the learning rate.
From this equation it follows that if the desired output is equal to the computed output, $y_i = o_i$, then the weight vector of the i-th output node remains unchanged, i.e. $w_i$ is adjusted if and only if the computed output, $o_i$, is incorrect. The learning stops when all the weight vectors remain unchanged during a complete training cycle.
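A minimal sketch of this update rule in Python (NumPy-based, with illustrative names; eta stands for the learning rate $\eta$, and the thresholds are assumed to be zero as discussed earlier):

import numpy as np

def perceptron_step(W, x, y, eta=0.1):
    """One presentation of pattern x with desired output vector y.

    W is an m-by-n weight matrix whose i-th row is the weight vector
    w_i of the i-th output node.
    """
    o = (W @ x >= 0).astype(float)        # linear threshold outputs o_i
    W = W + eta * np.outer(y - o, x)      # w_ij := w_ij + eta*(y_i - o_i)*x_j
    return W, o

Patterns on which the output is already correct leave W unchanged, since y - o is then the zero vector.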
Consider now a single-layer network with one output node. We are given a training set of input/output pairs

No.   input values                              desired output values
1.    $x^1 = (x^1_1, \dots, x^1_n)$             $y^1$
...
K.    $x^K = (x^K_1, \dots, x^K_n)$             $y^K$
Then the input components of the training patterns can be classified into two disjoint classes

$$C_1 = \{x^k \mid y^k = 1\}, \qquad C_2 = \{x^k \mid y^k = 0\}$$

i.e. $x$ belongs to class $C_1$ if there exists an input/output pair $(x, 1)$ and $x$ belongs to class $C_2$ if there exists an input/output pair $(x, 0)$.
Figure 8: A simple processing element.
Taking into consideration the definition of the activation function, it is easy to see that we are searching for a weight vector $w$ such that

$$\langle w, x\rangle > 0 \text{ for each } x \in C_1 \quad \text{and} \quad \langle w, x\rangle < 0 \text{ for each } x \in C_2.$$

If such a vector exists then the problem is called linearly separable.
Figure 9: A two-class linearly separable classification problem.

If the problem is linearly separable then we can always suppose that the line separating the classes crosses the origin. And the problem can be further transformed into the problem of finding a line for which all the elements of $C_1 \cup (-C_2)$ can be found on the positive half-plane.

So, we search for a weight vector $w$ such that $\langle w, x\rangle > 0$ for each $x \in C_1$, and $\langle w, x\rangle < 0$ for each $x \in C_2$. That is, $\langle w, x\rangle > 0$ for each $x \in C_1$ and $\langle w, x\rangle > 0$ for each $x \in -C_2$, which can be written in the form $\langle w, x\rangle > 0$ for each $x \in C_1 \cup (-C_2)$.
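This transformation can be sketched in a few lines (assuming patterns stored as NumPy arrays and labels in {0, 1}; the names are illustrative): negate the patterns of $C_2$ and then search for a single $w$ with $\langle w, x\rangle > 0$ on the merged set.

import numpy as np

def merge_classes(patterns, labels):
    # Keep C1 as is and negate the elements of C2, so that a separating
    # w must satisfy <w, x> > 0 for every row of the returned array.
    return np.array([x if y == 1 else -x for x, y in zip(patterns, labels)])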
Figure 10: Shifting the origin.
Figure 11: $C_1 \cup (-C_2)$ is on the positive side of the line.

And the weight adjustment in the perceptron learning rule is performed by

$$w_j := w_j + \eta\,(y - o)\,x_j,$$

where $\eta > 0$ is the learning rate, $y = 1$ is the desired output, $o \in \{0, 1\}$ is the computed output, and $x$ is the actual input to the neuron.
Figure 12: Illustration of the perceptron learning algorithm.
3. The perceptron learning algorithm
Given are K training pairs arranged in the training set

$$(x^1, y^1), \dots, (x^K, y^K)$$

where $x^k = (x^k_1, \dots, x^k_n)$, $y^k = (y^k_1, \dots, y^k_m)$, $k = 1, \dots, K$.
Step 1 $\eta > 0$ is chosen.

Step 2 Weights $w_i$ are initialized at small random values, the running error $E$ is set to 0, $k := 1$.

Step 3 Training starts here. $x^k$ is presented, $x := x^k$, $y := y^k$ and the output $o = o(x)$ is computed:

$$o_i = \begin{cases} 1 & \text{if } \langle w_i, x\rangle > 0 \\ 0 & \text{if } \langle w_i, x\rangle < 0 \end{cases}$$
Step 4 Weights are updated:

$$w_i := w_i + \eta\,(y_i - o_i)\,x, \qquad i = 1, \dots, m$$

Step 5 Cumulative cycle error is computed by adding the present error to $E$:

$$E := E + \frac{1}{2}\,|y - o|^2$$

Step 6 If $k < K$ then $k := k + 1$ and we continue the training by going back to Step 3, otherwise we go to Step 7.

Step 7 The training cycle is completed. For $E = 0$ terminate the training session. If $E > 0$ then $E$ is set to 0, $k := 1$ and we initiate a new training cycle by going back to Step 3.
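A compact Python sketch of Steps 1-7 (NumPy-based, with illustrative names; a cycle cap is added so the sketch always terminates, since convergence is only guaranteed for linearly separable data). Targets and outputs are in {0, 1} and the thresholds are taken to be zero, as above.

import numpy as np

def perceptron_learning(X, Y, eta=0.1, max_cycles=100):
    """X: K-by-n array of input patterns, Y: K-by-m array of desired outputs.
    Returns the learned m-by-n weight matrix W."""
    K, n = X.shape
    m = Y.shape[1]
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((m, n))      # Step 2: small random weights
    for _ in range(max_cycles):                 # one pass over the data = one training cycle
        E = 0.0                                 # running error, reset each cycle
        for k in range(K):                      # Steps 3-6: present each pattern in turn
            x, y = X[k], Y[k]
            o = (W @ x > 0).astype(float)       # Step 3: linear threshold outputs
            W = W + eta * np.outer(y - o, x)    # Step 4: weight update
            E += 0.5 * np.sum((y - o) ** 2)     # Step 5: cumulative cycle error
        if E == 0.0:                            # Step 7: stop after an error-free cycle
            break
    return W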
The following theorem shows that if the problem has solutions then the perceptron learning algorithm will find one of them.

Theorem 3.1. (Convergence theorem) If the problem is linearly separable then the program will go to Step 3 only finitely many times.
4. Illustration of the perceptron learning algorithm
Consider the following training set

No.   input values                  desired output value
1.    $x^1 = (1, 0, 1)^T$           $-1$
2.    $x^2 = (0, -1, -1)^T$         $1$
3.    $x^3 = (-1, -0.5, -1)^T$      $1$
The learning constant is assumed to be $\eta = 0.1$. The initial weight vector is $w^0 = (1, -1, 0)^T$.

Then the learning according to the perceptron learning rule progresses as follows.
[Step 1] Input $x^1$, desired output is $-1$:

$$\langle w^0, x^1\rangle = (1, -1, 0)\begin{pmatrix}1\\0\\1\end{pmatrix} = 1$$

Correction in this step is needed since $y^1 = -1 \neq \mathrm{sign}(1)$. We thus obtain the updated vector
$$w^1 = w^0 + 0.1\,(-1 - 1)\,x^1$$
Plugging in numerical values we obtain
$$w^1 = \begin{pmatrix}1\\-1\\0\end{pmatrix} - 0.2\begin{pmatrix}1\\0\\1\end{pmatrix} = \begin{pmatrix}0.8\\-1\\-0.2\end{pmatrix}$$
[Step 2] Input is $x^2$, desired output is 1. For the present $w^1$ we compute the activation value

$$\langle w^1, x^2\rangle = (0.8, -1, -0.2)\begin{pmatrix}0\\-1\\-1\end{pmatrix} = 1.2$$

Correction is not performed in this step since $1 = \mathrm{sign}(1.2)$, so we let $w^2 := w^1$.
[Step 3] Input is $x^3$, desired output is 1.

$$\langle w^2, x^3\rangle = (0.8, -1, -0.2)\begin{pmatrix}-1\\-0.5\\-1\end{pmatrix} = -0.1$$

Correction in this step is needed since $y^3 = 1 \neq \mathrm{sign}(-0.1)$. We thus obtain the updated vector

$$w^3 = w^2 + 0.1\,(1 + 1)\,x^3$$

Plugging in numerical values we obtain
$$w^3 = \begin{pmatrix}0.8\\-1\\-0.2\end{pmatrix} + 0.2\begin{pmatrix}-1\\-0.5\\-1\end{pmatrix} = \begin{pmatrix}0.6\\-1.1\\-0.4\end{pmatrix}$$
[Step 4] Input $x^1$, desired output is $-1$:

$$\langle w^3, x^1\rangle = (0.6, -1.1, -0.4)\begin{pmatrix}1\\0\\1\end{pmatrix} = 0.2$$

Correction in this step is needed since $y^1 = -1 \neq \mathrm{sign}(0.2)$. We thus obtain the updated vector

$$w^4 = w^3 + 0.1\,(-1 - 1)\,x^1$$

Plugging in numerical values we obtain
$$w^4 = \begin{pmatrix}0.6\\-1.1\\-0.4\end{pmatrix} - 0.2\begin{pmatrix}1\\0\\1\end{pmatrix} = \begin{pmatrix}0.4\\-1.1\\-0.6\end{pmatrix}$$
[Step 5] Input is $x^2$, desired output is 1. For the present $w^4$ we compute the activation value

$$\langle w^4, x^2\rangle = (0.4, -1.1, -0.6)\begin{pmatrix}0\\-1\\-1\end{pmatrix} = 1.7$$

Correction is not performed in this step since $1 = \mathrm{sign}(1.7)$, so we let $w^5 := w^4$.
[Step 6] Input is $x^3$, desired output is 1.

$$\langle w^5, x^3\rangle = (0.4, -1.1, -0.6)\begin{pmatrix}-1\\-0.5\\-1\end{pmatrix} = 0.75$$

Correction is not performed in this step since $1 = \mathrm{sign}(0.75)$, so we let $w^6 := w^5$.
This terminates the learning process, because

$$\langle w^6, x^1\rangle = -0.2 < 0, \qquad \langle w^6, x^2\rangle = 1.70 > 0, \qquad \langle w^6, x^3\rangle = 0.75 > 0.$$
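The hand computation above can be replayed with a few lines of NumPy; this is only a sketch, and the minus signs in the data follow the reconstruction used in the table at the start of this section.

import numpy as np

eta = 0.1
X = np.array([[ 1.0,  0.0,  1.0],     # x1
              [ 0.0, -1.0, -1.0],     # x2
              [-1.0, -0.5, -1.0]])    # x3
y = np.array([-1.0, 1.0, 1.0])        # desired outputs
w = np.array([1.0, -1.0, 0.0])        # initial weight vector w0

for step in range(6):                  # the six presentations worked out above
    k = step % 3
    net = w @ X[k]
    if np.sign(net) != y[k]:           # correct only on a misclassification
        w = w + eta * (y[k] - np.sign(net)) * X[k]
    print(step + 1, round(net, 2), np.round(w, 2))

Its printed trace matches the six steps above: corrections at Steps 1, 3 and 4, and no change at Steps 2, 5 and 6.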
Minsky and Papert (1969) provided a very careful analysis of conditions under which the perceptron learning rule is capable of carrying out the required mappings.

They showed that the perceptron cannot successfully solve the problem
     x1   x2   o(x1, x2)
1.   1    1    0
2.   1    0    1
3.   0    1    1
4.   0    0    0
This Boolean function is known in the literature as the exclusive or (XOR).

Sometimes we call the above function the two-dimensional parity function.

The n-dimensional parity function is a binary Boolean function, which takes the value 1 if there is an odd number of 1s in the input vector, and zero otherwise.
Figure 13: Illustration of exclusive or.
For example, the following function is the 3-dimensional parity
function
     x1   x2   x3   o(x1, x2, x3)
1.   1    1    1    1
2.   1    1    0    0
3.   1    0    1    0
4.   1    0    0    1
5.   0    0    1    1
6.   0    1    1    0
7.   0    1    0    1
8.   0    0    0    0
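For reference, a one-line version of the n-dimensional parity function described above (a sketch; two-dimensional parity is exactly XOR, which, as noted, no single-layer perceptron can realize):

def parity(bits):
    # 1 if the input vector contains an odd number of 1s, 0 otherwise.
    return sum(bits) % 2

assert [parity((a, b)) for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]] == [0, 1, 1, 0]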