Module 2
• Hopfield network
ASSOCIATIVE MEMORY NETWORK
• The training input and target output vectors are the same
• The determination of the weights of the associative net is called storing of the vectors
• A stored vector can be retrieved from a distorted (noisy) input if the input is sufficiently similar to it
• The net's performance is based on its ability to reproduce a stored pattern from a noisy input
• NOTE: The weights on the diagonal can be set to '0'. Such nets are called auto-associative nets with no self-connection
ARCHITECTURE OF AUTOASSOCIATIVE MEMORY NETWORK
[Figure: n input units X1, ..., Xi, ..., Xn fully connected to n output units Y1, ..., Yi, ..., Yn through weights w11, ..., wij, ..., wnn]
TRAINING ALGORITHM
• This is the same as that for the Hebb rule, except that the number of output units equals the number of input units (a short sketch in code follows the steps below)
• STEP 0: Initialize all the weights to '0':
$w_{ij} = 0 \; (i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n)$
• STEP 1: For each vector that has to be stored, perform steps 2 to 4
• STEP 2: Activate each of the input units: $x_i = s_i \; (i = 1, 2, \ldots, n)$
• STEP 3: Activate each of the output units: $y_j = s_j \; (j = 1, 2, \ldots, n)$
• STEP 4: Adjust the weights, for $i, j = 1, 2, \ldots, n$:
$w_{ij}(\text{new}) = w_{ij}(\text{old}) + x_i y_j = w_{ij}(\text{old}) + s_i s_j$
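A minimal NumPy sketch of these steps (illustrative only, assuming bipolar stored vectors; the function name train_autoassociative is not from the slides):

```python
import numpy as np

def train_autoassociative(patterns, zero_diagonal=False):
    """Hebb-rule storing: W = sum over stored vectors s of outer(s, s)."""
    n = len(patterns[0])
    W = np.zeros((n, n), dtype=int)        # STEP 0: initialize weights to 0
    for s in patterns:
        W += np.outer(s, s)                # STEPS 2-4: w_ij += s_i * s_j
    if zero_diagonal:
        np.fill_diagonal(W, 0)             # auto-associative net with no self-connection
    return W

# Store the single bipolar vector used in the testing example that follows
W = train_autoassociative([[-1, 1, 1, 1]])
print(W)
# [[ 1 -1 -1 -1]
#  [-1  1  1  1]
#  [-1  1  1  1]
#  [-1  1  1  1]]
```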
TESTING ALGORITHM
• The weight matrix obtained by storing the vector $s = [-1\ 1\ 1\ 1]$ with the Hebb rule is
$W = s^T s = \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix}$
• Case-1: testing the network with the same input vector
• Test input: [-1 1 1 1]
• The weight matrix obtained above is used as the initial weight matrix
COMPUTATIONS
• The activation function used is
$y_j = f((y_{in})_j) = \begin{cases} 1, & \text{if } (y_{in})_j > 0 \\ -1, & \text{if } (y_{in})_j \le 0 \end{cases}$
• Over the net input $(y_{in}) = x \cdot W = [-1\ 1\ 1\ 1] \cdot W = [-4\ 4\ 4\ 4]$, we get
$y = [-1\ 1\ 1\ 1]$
• Hence, the correct response is obtained
COMPUTATIONS
• Case-2: testing the network with one missing entry, test input $x = [0\ 1\ 1\ 1]$
$(y_{in})_j = x \cdot W = [0\ 1\ 1\ 1] \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = [-3\ 3\ 3\ 3]$
• Applying the activation function taken above, we get
• $y = [-1\ 1\ 1\ 1]$. So, the response is correct
COMPUTATIONS
• Case-3: testing the network with one mistaken entry, test input $x = [1\ 1\ 1\ 1]$
$(y_{in})_j = x \cdot W = [1\ 1\ 1\ 1] \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = [-2\ 2\ 2\ 2]$
• Applying the activation function taken above, we get
• $y = [-1\ 1\ 1\ 1]$
• So, the response is correct
COMPUTATIONS
• Case-4: testing the network with two missing entries, test input $x = [-1\ 0\ 0\ 1]$
$(y_{in})_j = x \cdot W = [-1\ 0\ 0\ 1] \begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{bmatrix} = [-2\ 2\ 2\ 2]$
• Applying the activation function, we get $y = [-1\ 1\ 1\ 1]$, so the response is again correct
• NOTE: We have to check all possible inputs under each case to have a positive conclusion
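The four cases can be verified with a short NumPy sketch; the recall helper below is illustrative, and its activation maps strictly positive net inputs to 1 and the rest to -1, as in the computations above:

```python
import numpy as np

W = np.outer([-1, 1, 1, 1], [-1, 1, 1, 1])   # weight matrix storing [-1 1 1 1]

def recall(x):
    y_in = np.asarray(x) @ W                 # net input (y_in)_j = x . W
    return np.where(y_in > 0, 1, -1)         # bipolar activation

for x in ([-1, 1, 1, 1],    # Case-1: same vector
          [0, 1, 1, 1],     # Case-2: one missing entry
          [1, 1, 1, 1],     # Case-3: one mistaken entry
          [-1, 0, 0, 1]):   # Case-4: two missing entries
    print(x, "->", recall(x).tolist())
# every case recalls the stored vector [-1, 1, 1, 1]
```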
HETEROASSOCIATIVE MEMORY NETWORK
[Figure: n input units X1, ..., Xi, ..., Xn fully connected to m output units Y1, ..., Yi, ..., Ym through weights w11, ..., wij, ..., wnm]
The activation function for the output units is
$y_j = \begin{cases} 1, & \text{if } (y_{in})_j \ge 0 \\ 0, & \text{if } (y_{in})_j < 0 \end{cases}$
EXAMPLE: HETEROASSOCIATIVE MEMORY NETWORK
• Train a heteroassociative memory network using the Hebb rule to store the input row vectors s = (s1, s2, s3, s4) to the output vectors t = (t1, t2) given in the table below:
Input and targets s1 s2 s3 s4 t1 t2
1st 1 0 1 0 1 0
2nd 1 0 0 1 1 0
3rd 1 1 0 0 0 1
4th 0 0 1 1 0 1
THE NEURAL NET
[Figure: four input units X1, X2, X3, X4 fully connected to two output units Y1, Y2 through weights w11, w12, w21, w22, w31, w32, w41, w42]
COMPUTATIONS
• The final weights, after all the input/output vector pairs have been used, are
$W = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ w_{31} & w_{32} \\ w_{41} & w_{42} \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 0 & 1 \\ 1 & 1 \\ 1 & 1 \end{bmatrix}$
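The same weight matrix can be obtained in a few lines of NumPy by summing the outer products s^T(p) t(p) over the four training pairs (an illustrative sketch, not part of the original slides):

```python
import numpy as np

pairs = [([1, 0, 1, 0], [1, 0]),
         ([1, 0, 0, 1], [1, 0]),
         ([1, 1, 0, 0], [0, 1]),
         ([0, 0, 1, 1], [0, 1])]

W = sum(np.outer(s, t) for s, t in pairs)    # W = sum_p s^T(p) t(p)
print(W)
# [[2 1]
#  [0 1]
#  [1 1]
#  [1 1]]
```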
Train the heteroassociative net using the outer product rule – another method
• For the 1st pair, the input and output vectors are s = (1 0 1 0), t = (1 0), so its contribution to the weight matrix is
$W(1) = s^T(1)\, t(1) = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{bmatrix}$
EXAMPLE
• Train a heteroassociative memory network to store the input vectors s = (s1, s2, s3, s4) to the output vectors t = (t1, t2). The vector pairs are given in the table below. Also test the performance of the network using its training inputs as testing inputs.
Input and targets s1 s2 s3 s4 t1 t2
1st 1 0 0 0 0 1
2nd 1 1 0 0 0 1
3rd 0 0 0 1 1 0
4th 0 0 1 1 1 0
• The outer product rule is used to determine the weights; a short sketch of the training and testing steps follows
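An illustrative NumPy sketch of this exercise, training with the outer product rule and then testing with the training inputs; the strictly positive threshold in the activation is an assumption made here so that the binary outputs reproduce the stored targets:

```python
import numpy as np

pairs = [([1, 0, 0, 0], [0, 1]),
         ([1, 1, 0, 0], [0, 1]),
         ([0, 0, 0, 1], [1, 0]),
         ([0, 0, 1, 1], [1, 0])]

W = sum(np.outer(s, t) for s, t in pairs)    # outer product rule
print(W)                                     # -> [[0 2], [0 1], [1 0], [2 0]]

for s, t in pairs:                           # test with the training inputs
    y_in = np.asarray(s) @ W
    y = np.where(y_in > 0, 1, 0)             # binary activation, threshold > 0
    print(s, "->", y.tolist(), "target", t)
# every training input recalls its target vector
```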
BAM ARCHITECTURE
[Figure: Layer X units X1, ..., Xi, ..., Xn connected to Layer Y units Y1, ..., Yi, ..., Ym through the weight matrix W in the forward direction and W^T in the backward direction]
For bipolar patterns, the activation function for the units in the X layer is
$x_i = \begin{cases} 1, & \text{if } (x_{in})_i > 0 \\ x_i, & \text{if } (x_{in})_i = 0 \\ -1, & \text{if } (x_{in})_i < 0 \end{cases}$
• The two X-layer patterns to be stored (15-component bipolar vectors) and their Y-layer targets are
E = [1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 1 1], target (-1, 1), giving W1
F = [1 1 1 1 1 1 1 -1 -1 1 -1 -1 1 -1 -1], target (1, 1), giving W2
COMPUTATIONS
$W = \sum_p s^T(p)\, t(p)$
$W_1 = [1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ 1\ 1]^T\, [-1\ 1]$
$W_2 = [1\ 1\ 1\ 1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ {-1}\ {-1}\ 1\ {-1}\ {-1}]^T\, [1\ 1]$
COMPUTATIONS
• The total weight matrix is
$W = W_1 + W_2 = \begin{bmatrix} 0 & 2 \\ 0 & 2 \\ 0 & 2 \\ 0 & 2 \\ 2 & 0 \\ 2 & 0 \\ 0 & 2 \\ -2 & 0 \\ -2 & 0 \\ 0 & 2 \\ 0 & -2 \\ 0 & -2 \\ 0 & 2 \\ -2 & 0 \\ -2 & 0 \end{bmatrix}$
TESTING THE NETWORK
$W^T = \begin{bmatrix} 0 & 0 & 0 & 0 & 2 & 2 & 0 & -2 & -2 & 0 & 0 & 0 & 0 & -2 & -2 \\ 2 & 2 & 2 & 2 & 0 & 0 & 2 & 0 & 0 & 2 & -2 & -2 & 2 & 0 & 0 \end{bmatrix}$
TESTING THE NETWORK
• Testing the net with the Y-layer input y = [1 1] (the target of pattern F):
$x_{in} = y \cdot W^T = [1\ 1] \begin{bmatrix} 0 & 0 & 0 & 0 & 2 & 2 & 0 & -2 & -2 & 0 & 0 & 0 & 0 & -2 & -2 \\ 2 & 2 & 2 & 2 & 0 & 0 & 2 & 0 & 0 & 2 & -2 & -2 & 2 & 0 & 0 \end{bmatrix}$
$x_{in} = [2\ 2\ 2\ 2\ 2\ 2\ 2\ {-2}\ {-2}\ 2\ {-2}\ {-2}\ 2\ {-2}\ {-2}]$
• Applying the activation function, we get
$x = [1\ 1\ 1\ 1\ 1\ 1\ 1\ {-1}\ {-1}\ 1\ {-1}\ {-1}\ 1\ {-1}\ {-1}]$, which is the stored pattern F
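An illustrative NumPy sketch of the whole BAM example: the weight matrix is built from the two stored pairs, and presenting y = (1, 1) at the Y layer recalls the X-layer pattern F:

```python
import numpy as np

E = np.array([1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, 1, 1, 1])
F = np.array([1, 1, 1, 1, 1, 1, 1, -1, -1, 1, -1, -1, 1, -1, -1])

# W = sum_p s^T(p) t(p): W1 stores (E, (-1, 1)), W2 stores (F, (1, 1))
W = np.outer(E, [-1, 1]) + np.outer(F, [1, 1])

# Y-to-X direction: x_in = y . W^T, then the bipolar activation
y = np.array([1, 1])
x_in = y @ W.T
x = np.where(x_in > 0, 1, np.where(x_in < 0, -1, 0))
print(np.array_equal(x, F))   # True: the stored pattern F is recalled
```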
[Figure: Discrete Hopfield network with fully interconnected units Y1, Y2, ..., Yi, ..., Yn]
TRAINING ALGORITHM FOR DISCRETE
HOPFIELD NETWORK
• STEP 0: Initialize the weights to store the patterns, i.e. the weights obtained from the training algorithm using the Hebb rule
• STEP 1: While the activations of the net have not converged, perform steps 2 to 8
• STEP 2: Perform steps 3 to 7 for each input vector X
• STEP 3: Make the initial activations of the net equal to the external input vector X, i.e. $y_i = x_i\ (i = 1, 2, \ldots, n)$
• STEP 4: Perform steps 5 to 7 for each unit $Y_i$, updating the activations of the units in random order
• STEP 5: Calculate the net input of the network:
$y_{in,i} = x_i + \sum_j y_j w_{ji}$
TRAINING ALGORITHM FOR DISCRETE
HOPFIELD NETWORK
• STEP 6: Apply the activation over the net input to calculate the output:
$y_i = \begin{cases} 1, & \text{if } (y_{in})_i > \theta_i \\ y_i, & \text{if } (y_{in})_i = \theta_i \\ 0, & \text{if } (y_{in})_i < \theta_i \end{cases}$
where $\theta_i$ is the threshold (usually taken as 0)
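A minimal sketch of the asynchronous update in steps 3 to 6, assuming a zero threshold and a weight matrix W already stored with the Hebb rule; the fixed number of sweeps below stands in for the convergence test, which the steps shown here do not spell out:

```python
import numpy as np

def hopfield_recall(W, x, theta=0, sweeps=10, seed=0):
    """Asynchronous recall for a discrete Hopfield net (steps 3 to 6)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    y = x.copy()                              # STEP 3: initial activations y_i = x_i
    for _ in range(sweeps):                   # stand-in for the convergence loop
        for i in rng.permutation(len(y)):     # STEP 4: visit the units in random order
            y_in = x[i] + y @ W[:, i]         # STEP 5: y_in_i = x_i + sum_j y_j * w_ji
            if y_in > theta:                  # STEP 6: threshold activation
                y[i] = 1
            elif y_in < theta:
                y[i] = 0
            # y_i is left unchanged when y_in_i equals theta_i
    return y
```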