Supervised Hebbian Learning
7 Hebb's Postulate
"When an axon of cell A is near enough to excite a cell B and
repeatedly or persistently takes part in firing it, some growth
process or metabolic change takes place in one or both cells such
that A's efficiency, as one of the cells firing B, is increased."
D. O. Hebb, 1949
[Figure: two neurons joined at a synapse — dendrites, cell body, axon, synapse]
7 Linear Associator
[Figure: linear layer — input p (R x 1), weight matrix W (S x R), net input n (S x 1), output a (S x 1)]

a = purelin(Wp)

a = Wp,   a_i = Σ_{j=1}^{R} w_ij p_j
Training Set:

{p_1, t_1}, {p_2, t_2}, ..., {p_Q, t_Q}
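As a minimal NumPy sketch of the linear associator above (the sizes and weight values are illustrative, not from the slides):

```python
import numpy as np

# Linear associator: a = purelin(Wp), where purelin is the identity.
# Dimensions follow the slide: p is R x 1, W is S x R, a is S x 1.
R, S = 3, 2                        # example sizes, assumed for illustration
W = np.array([[1.0, 0.0, -1.0],
              [0.5, 0.5,  0.0]])   # S x R weight matrix
p = np.array([1.0, 2.0, 3.0])      # input pattern

def purelin(n):
    """Linear transfer function: output equals net input."""
    return n

a = purelin(W @ p)                 # a_i = sum_j w_ij * p_j
```

Because purelin is linear, the layer's entire behavior is the matrix product Wp.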
7 Hebb Rule
w_ij^new = w_ij^old + f_i(a_iq) g_j(p_jq)

  g_j(p_jq): presynaptic signal
  f_i(a_iq): postsynaptic signal

Simplified Form:

w_ij^new = w_ij^old + a_iq p_jq

Supervised Form:

w_ij^new = w_ij^old + t_iq p_jq

Matrix Form:

W^new = W^old + t_q p_q^T
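One supervised Hebb update in matrix form can be sketched as follows (the pattern and target values are made up for illustration):

```python
import numpy as np

# Supervised Hebb rule, one update: W_new = W_old + t_q p_q^T.
# np.outer(t_q, p_q) forms the rank-one increment t_q p_q^T.
W = np.zeros((1, 3))                   # start from zero weights
p_q = np.array([-1.0, 1.0, -1.0])      # input pattern
t_q = np.array([-1.0])                 # target (replaces the output a_iq)
W = W + np.outer(t_q, p_q)             # Hebbian weight increment
```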
7 Batch Operation
W = t_1 p_1^T + t_2 p_2^T + ... + t_Q p_Q^T = Σ_{q=1}^{Q} t_q p_q^T   (zero initial weights)

Matrix Form:

W = [t_1 t_2 ... t_Q] [p_1^T; p_2^T; ...; p_Q^T] = T P^T

where

T = [t_1 t_2 ... t_Q],   P = [p_1 p_2 ... p_Q]
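A quick check, with made-up patterns, that the sum of outer products equals the single product T P^T:

```python
import numpy as np

# Batch Hebb rule with zero initial weights:
# W = sum_q t_q p_q^T = T P^T, with T = [t_1 ... t_Q], P = [p_1 ... p_Q].
P = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0]])           # columns are p_1, p_2 (R x Q)
T = np.array([[-1.0, 1.0]])            # columns are t_1, t_2 (S x Q)

# Sum of outer products, one term per training pair
W_sum = sum(np.outer(T[:, q], P[:, q]) for q in range(P.shape[1]))

# Equivalent single matrix product
W = T @ P.T
```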
7 Performance Analysis
a = W p_k = ( Σ_{q=1}^{Q} t_q p_q^T ) p_k = Σ_{q=1}^{Q} t_q (p_q^T p_k)

If the prototype patterns are orthonormal, p_q^T p_k = 1 when q = k and 0 otherwise, so

a = W p_k = t_k

If the patterns are normalized but not orthogonal:

a = W p_k = t_k + Σ_{q ≠ k} t_q (p_q^T p_k)

where the second term is the error.
7 Example
Banana and Apple — Normalized Prototype Patterns:

p_1 = [-1; 1; -1] (banana),   p_2 = [1; 1; -1] (apple)

Normalized:

p_1 = [-0.5774; 0.5774; -0.5774],  t_1 = -1
p_2 = [ 0.5774; 0.5774; -0.5774],  t_2 = 1

W = T P^T = [1.1548  0  0]

Tests:

Banana:  W p_1 = [1.1548  0  0] [-0.5774; 0.5774; -0.5774] = -0.6668
Apple:   W p_2 = [1.1548  0  0] [ 0.5774; 0.5774; -0.5774] =  0.6668
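The example can be reproduced in a few lines of NumPy (0.5774 is 1/√3, so the exact outputs are ±2/3):

```python
import numpy as np

# Banana/apple example: normalized prototypes and Hebb weights W = T P^T.
p1 = np.array([-1.0, 1.0, -1.0]) / np.sqrt(3)    # banana, normalized
p2 = np.array([ 1.0, 1.0, -1.0]) / np.sqrt(3)    # apple, normalized
P = np.column_stack([p1, p2])
T = np.array([[-1.0, 1.0]])                      # t1 = -1 (banana), t2 = +1 (apple)

W = T @ P.T                                      # approximately [1.1547, 0, 0]
a1 = W @ p1                                      # approximately -0.6667
a2 = W @ p2                                      # approximately +0.6667
```

The outputs have the correct signs but are not exactly ±1, because p_1 and p_2 are normalized yet not orthogonal — this is the error term from the performance analysis.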
7 Pseudoinverse Rule - (1)
Performance Index:

W p_q = t_q,   q = 1, 2, ..., Q

F(W) = Σ_{q=1}^{Q} || t_q - W p_q ||^2

Matrix Form:   WP = T

T = [t_1 t_2 ... t_Q],   P = [p_1 p_2 ... p_Q]

F(W) = || T - WP ||^2 = || E ||^2,   || E ||^2 = Σ_i Σ_j e_ij^2
7 Pseudoinverse Rule - (2)
WP = T

Minimize:

F(W) = || T - WP ||^2 = || E ||^2

When the columns of P are linearly independent, the minimizing weights are

W = T P^+,   P^+ = (P^T P)^{-1} P^T
7 Relationship to the Hebb Rule
Hebb Rule:

W = T P^T

Pseudoinverse Rule:

W = T P^+,   P^+ = (P^T P)^{-1} P^T

If the prototype patterns are orthonormal, then P^T P = I and

P^+ = (P^T P)^{-1} P^T = P^T

so the two rules produce the same weights.
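A small numerical check of this relationship, using a made-up matrix P with orthonormal columns:

```python
import numpy as np

# When the columns of P are orthonormal, P^T P = I, so the pseudoinverse
# P^+ = (P^T P)^{-1} P^T reduces to P^T and the two rules agree.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # orthonormal columns
T = np.array([[1.0, -1.0]])                # example targets

P_pinv = np.linalg.pinv(P)                 # Moore-Penrose pseudoinverse

W_hebb = T @ P.T                           # Hebb rule
W_pinv = T @ P_pinv                        # pseudoinverse rule (identical here)
```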
7 Example
p_1 = [1; -1; 1],  t_1 = -1        p_2 = [1; 1; -1],  t_2 = 1

W = T P^+ = [-1  1] P^+

P^+ = (P^T P)^{-1} P^T = (1/8) [3  1; 1  3] [1  -1  1; 1  1  -1] = [0.5  -0.25  0.25; 0.5  0.25  -0.25]

W = [-1  1] [0.5  -0.25  0.25; 0.5  0.25  -0.25] = [0  0.5  -0.5]

Tests:

W p_1 = [0  0.5  -0.5] [1; -1; 1] = -1        W p_2 = [0  0.5  -0.5] [1; 1; -1] = 1
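The same computation with NumPy's pseudoinverse shows exact recall even though the patterns are not orthogonal:

```python
import numpy as np

# Pseudoinverse rule on the example patterns: W = T P^+ reproduces the
# targets exactly even though p_1 and p_2 are not orthogonal.
P = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0]])           # columns are p_1, p_2
T = np.array([[-1.0, 1.0]])            # targets t_1 = -1, t_2 = +1

P_pinv = np.linalg.pinv(P)             # equals (P^T P)^{-1} P^T here
W = T @ P_pinv                         # approximately [0, 0.5, -0.5]

a = W @ P                              # recall both patterns at once
```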
7 Autoassociative Memory
W = p_1 p_1^T + p_2 p_2^T + p_3 p_3^T

[Figure: autoassociative network — input p (30 x 1), weight matrix W (30 x 30), output a (30 x 1)]

a = hardlims(Wp)
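A scaled-down sketch of the autoassociative memory (four-element patterns instead of the 30-element patterns on the slide, chosen mutually orthogonal so recall is exact):

```python
import numpy as np

# Autoassociative memory: store +/-1 patterns with W = sum_q p_q p_q^T,
# recall with a = hardlims(Wp).
p1 = np.array([1.0,  1.0,  1.0,  1.0])
p2 = np.array([1.0, -1.0,  1.0, -1.0])
p3 = np.array([1.0,  1.0, -1.0, -1.0])   # mutually orthogonal patterns

W = np.outer(p1, p1) + np.outer(p2, p2) + np.outer(p3, p3)

def hardlims(n):
    """Symmetric hard limit: +1 for n >= 0, else -1."""
    return np.where(n >= 0, 1.0, -1.0)

occluded = np.array([1.0, -1.0, 0.0, 0.0])  # p2 with half its entries zeroed
recalled = hardlims(W @ occluded)           # recovers p2
```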
7 Tests
[Figure: recall of the stored patterns from test inputs that are 50% occluded and 67% occluded]
7 Variations of Hebbian Learning
Basic Rule:       W^new = W^old + t_q p_q^T

Learning Rate:    W^new = W^old + α t_q p_q^T

Delta Rule:       W^new = W^old + α (t_q - a_q) p_q^T

Unsupervised:     W^new = W^old + α a_q p_q^T
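The delta rule can be sketched as an iterative training loop (the patterns, learning rate α = 0.1, and sweep count are illustrative choices, not from the slides):

```python
import numpy as np

# Delta rule iteration: W_new = W_old + alpha * (t_q - a_q) p_q^T.
# Repeated sweeps drive the outputs toward the targets even when the
# plain Hebb rule would leave an error term.
P = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0]])            # columns p_1, p_2 (not orthogonal)
T = np.array([[-1.0, 1.0]])             # targets t_1 = -1, t_2 = +1

W = np.zeros((1, 3))
alpha = 0.1                             # learning rate (assumed value)
for _ in range(200):                    # repeated passes over the pairs
    for q in range(P.shape[1]):
        p_q, t_q = P[:, q], T[:, q]
        a_q = W @ p_q                   # linear output for this pattern
        W = W + alpha * np.outer(t_q - a_q, p_q)
```

After enough sweeps, W P is close to T, matching the least-squares solution that the pseudoinverse rule computes in one step.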