Manual for Neural and MATLAB Applications
Lab Manual
Neural Network Lab
INDEX
1. Syllabus
2. Hardware/Software Requirements
3. Practicals to be conducted in the lab
4. Programs
Neural Networks
CSE-413 F
L T P
- - 2
Class Work: 25
Exam: 50
Total: 75
Duration of Exam: 3 Hrs.
To study some basic neuron models and learning algorithms using MATLAB's Neural Network Toolbox.
Study the following demonstrations:
Simple neuron and transfer functions
Neuron with vector input
Decision boundaries
Perceptron learning rule
Classification with a 2-input perceptron (note: the demo text says there are 5 input vectors, but there are actually only 4)
Linearly non-separable vectors
Try to understand the following things (a short illustrative sketch follows this list):
1. How the weights and bias values affect the output of a neuron.
2. How the choice of activation function (or transfer function) affects the output of a neuron.
Experiment with
the following functions: identity (purelin), binary threshold (hardlim, hardlims) and sigmoid
(logsig, tansig).
3. How the weights and bias values are able to represent a decision boundary in the feature
space.
4. How this decision boundary changes during training with the perceptron learning rule.
5. How the perceptron learning rule works for linearly separable problems.
6. How the perceptron learning rule works for non-linearly separable problems.
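To get started on points 1 and 2, here is a short illustrative sketch (an addition to this manual, assuming the standard toolbox transfer functions hardlim, purelin, logsig and tansig); change w and b and re-run to see how the weight, bias and transfer function shape a single neuron's output.
%Sketch: effect of weight, bias and transfer function on one neuron
w = 1.5; b = -0.5; %example values - change these and re-run
x = -5:0.1:5; %range of scalar inputs
n = w*x + b; %net input to the neuron
plot(x,hardlim(n),x,purelin(n),x,logsig(n),x,tansig(n));
legend('hardlim','purelin','logsig','tansig');
xlabel('input x'); ylabel('neuron output');
title('One neuron under different transfer functions');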
HARDWARE REQUIRED
P-IV/III PROCESSOR
HDD 40GB
RAM 128MB or above
SOFTWARE REQUIRED
Windows 98/2000/ME/XP
Turbo C, C++
Matlab
Programs
Example 2.1 Write a MATLAB program to generate a few activation functions that are
being used in neural networks.
Solution The activation function plays a major role in determining the output of a neuron. One program for generating several common activation functions is given below.
Program
% Illustration of various activation functions used in NN's
x = -10:0.1:10;
tmp = exp(-x);
y1 = 1./(1+tmp); %logistic (binary sigmoid)
y2 = (1-tmp)./(1+tmp); %bipolar sigmoid, equivalent to tanh(x/2)
y3 = x; %identity
subplot(231); plot(x, y1); grid on;
axis([min(x) max(x) -2 2]);
title('Logistic Function');
xlabel('(a)');
axis('square');
subplot(232); plot(x, y2); grid on;
axis([min(x) max(x) -2 2]);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(233); plot(x, y3); grid on;
axis([min(x) max(x) min(x) max(x)]);
title('Identity Function');
xlabel('(c)');
axis('square');
Example 3.8 Generate XOR function using McCulloch-Pitts neuron by writing an M-file.
Solution The truth table for the XOR function is,
X1 X2 Y
0 0 0
0 1 1
1 0 1
1 1 0
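XOR is not linearly separable, so a single McCulloch-Pitts neuron cannot realize it; the program below therefore uses two hidden neurons, z1 = x1 AND NOT x2 and z2 = x2 AND NOT x1, and an output neuron y = z1 OR z2. As an illustrative sketch (not part of the original program), the weights used in the sample run below (w11=1, w21=-1, w12=-1, w22=1, v1=v2=1, theta=1) can be checked in vectorized form:
%Sketch: verify the XOR decomposition for the sample weights
x1=[0 0 1 1]; x2=[0 1 0 1]; theta=1;
z1=(x1*1+x2*(-1))>=theta; %z1 = x1 AND NOT x2
z2=(x1*(-1)+x2*1)>=theta; %z2 = x2 AND NOT x1
y=(z1+z2)>=theta %y = z1 OR z2, gives [0 1 1 0]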
The MATLAB program is given by,
Program
%XOR function using McCulloch-Pitts neuron
clear;
clc;
%Getting weights and threshold value
disp('Enter weights');
w11=input('Weight w11=');
w12=input('weight w12=');
w21=input('Weight w21=');
w22=input('weight w22=');
v1=input('weight v1=');
v2=input('weight v2=');
disp('Enter Threshold Value');
theta=input('theta=');
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 0];
con=1;
while con
zin1=x1*w11+x2*w21;
zin2=x1*w12+x2*w22;
for i=1:4
if zin1(i)>=theta
y1(i)=1;
else
y1(i)=0;
end
if zin2(i)>=theta
y2(i)=1;
else
y2(i)=0;
end
end
yin=y1*v1+y2*v2;
for i=1:4
if yin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and Threshold value');
w11=input('Weight w11=');
w12=input('weight w12=');
w21=input('Weight w21=');
w22=input('weight w22=');
v1=input('weight v1=');
v2=input('weight v2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for XOR function');
disp('Weights of Neuron Z1');
disp(w11);
disp(w21);
disp('weights of Neuron Z2');
disp(w12);
disp(w22);
disp('weights of Neuron Y');
disp(v1);
disp(v2);
disp('Threshold value');
disp(theta);
Output
Enter weights
Weight w11=1
weight w12=-1
Weight w21=-1
weight w22=1
weight v1=1
weight v2=1
Enter Threshold Value
theta=1
Output of Net
0 1 1 0
McCulloch-Pitts Net for XOR function
Weights of Neuron Z1
1
-1
weights of Neuron Z2
-1
1
weights of Neuron Y
1
1
Threshold value
1
Program
%Hebb Net to classify two dimensional input patterns
clear;
clc;
%Input Patterns
E=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 1 1 1];
F=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 -1 -1 -1];
x(1,1:20)=E;
x(2,1:20)=F;
w(1:20)=0;
t=[1 -1];
b=0;
for i=1:2
w=w+x(i,1:20)*t(i);
b=b+t(i);
end
disp('Weight matrix');
disp(w);
disp('Bias');
disp(b);
Output
Weight matrix
Columns 1 through 18
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
Columns 19 through 20
2 2
Bias
0
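Only the last three weights come out as 2 because E and F agree everywhere except in their last three elements: the Hebb updates w = w + x*t (with targets +1 for E and -1 for F) cancel wherever the two patterns match, and the two bias updates (+1 and -1) cancel exactly, giving b = 0.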
Example 4.5 Write a MATLAB program for perceptron net for an AND function with
bipolar inputs and targets.
Solution The truth table for the AND function is given as
X1 X2 Y
1 1 1
1 -1 -1
-1 1 -1
-1 -1 -1
The MATLAB program for the above table is given as follows.
Program
%Perceptron for AND function
clear;
clc;
x=[1 1 -1 -1;1 -1 1 -1];
t=[1 -1 -1 -1];
w=[0 0];
b=0;
alpha=input('Enter Learning rate=');
theta=input('Enter Threshold value=');
con=1;
epoch=0;
while con
con=0;
for i=1:4
yin=b+x(1,i)*w(1)+x(2,i)*w(2);
if yin>theta
y=1;
end
if yin <=theta & yin>=-theta
y=0;
end
if yin<-theta
y=-1;
end
if y~=t(i)
con=1;
for j=1:2
w(j)=w(j)+alpha*t(i)*x(j,i);
end
b=b+alpha*t(i);
end
end
epoch=epoch+1;
end
disp('Perceptron for AND function');
disp('Final Weight matrix');
disp(w);
disp('Final Bias');
disp(b);
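A small check that could be appended after training (an illustrative sketch, not part of the original program) prints the net input for each training pair so the learned boundary can be inspected:
%Sketch: inspect the trained perceptron on the four training pairs
for i=1:4
yin=b+x(1,i)*w(1)+x(2,i)*w(2);
fprintf('input (%g,%g): net input %g, target %g\n',x(1,i),x(2,i),yin,t(i));
end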
Example 4.6 Write a MATLAB program to recognize the numbers 0 to 9 with a perceptron network. A 5 x 3 grid of points forms the numbers: any valid point is taken as 1 and any invalid point as 0. The net has to be trained to recognize all the numbers and, when the test data is given, the network has to recognize the particular numbers.
Solution The input data files and the test data files are given below. The data are stored in a file called reg.mat. When the test data is given, if the pattern is recognized the output is +1, and if the pattern is not recognized, it is -1.
Data - reg.mat
input_data=[1 0 1 1 1 1 1 1 1 1;
1 1 1 1 0 1 1 1 1 1;
1 0 1 1 1 1 1 1 1 1;
1 1 0 0 1 1 1 0 1 1;
0 1 0 0 0 0 0 0 0 0;
1 0 1 1 1 0 0 1 1 1;
1 0 1 1 1 1 1 0 1 1;
0 1 1 1 1 1 1 0 1 1;
1 0 1 1 1 1 1 1 1 1;
1 0 1 0 0 0 1 0 1 0;
0 1 0 0 0 0 0 0 0 0;
1 0 0 1 1 1 1 1 1 1;
1 1 1 1 0 1 1 0 1 1;
1 1 1 1 0 1 1 0 1 1;
1 1 1 1 1 1 1 1 1 1;]
output_data=[1 0 0 0 0 0 0 0 0 0;
0 1 0 0 0 0 0 0 0 0;
0 0 1 0 0 0 0 0 0 0;
0 0 0 1 0 0 0 0 0 0;
0 0 0 0 1 0 0 0 0 0;
0 0 0 0 0 1 0 0 0 0;
0 0 0 0 0 0 1 0 0 0;
0 0 0 0 0 0 0 1 0 0;
0 0 0 0 0 0 0 0 1 0;
0 0 0 0 0 0 0 0 0 1;]
test_data=[1 0 1 1 1;
1 1 1 1 0;
1 1 1 1 1;
1 1 0 0 1;
0 1 0 0 1;
1 1 1 1 1;
1 0 1 1 1;
0 1 1 1 1;
1 0 1 1 1;
1 1 1 0 0;
0 1 0 1 0;
1 0 0 1 1;
1 1 1 1 1;
1 1 1 1 0;
1 1 1 1 1;]
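The program below reads the patterns from reg.mat as individual variables A-J (the ten training digits) and K-O (the five test patterns), i.e. one column of the matrices listed above per variable. A minimal sketch of how reg.mat could be built from the listed matrices follows; this mapping is an assumption inferred from the field names the program uses.
%Sketch: build reg.mat from the listed matrices (assumed mapping)
A=input_data(:,1); B=input_data(:,2); C=input_data(:,3); D=input_data(:,4); E=input_data(:,5);
F=input_data(:,6); G=input_data(:,7); H=input_data(:,8); I=input_data(:,9); J=input_data(:,10);
K=test_data(:,1); L=test_data(:,2); M=test_data(:,3); N=test_data(:,4); O=test_data(:,5);
save('reg.mat','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O');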
Program
clear;
clc;
cd=open('reg.mat'); %load the stored digit patterns into struct cd
input=[cd.A';cd.B';cd.C';cd.D';cd.E';cd.F';cd.G';cd.H';cd.I';cd.J']'; %15x10 training matrix, one digit per column (note: this name shadows the built-in input function)
for i=1:10
for j=1:10
if i==j
output(i,j)=1;
else
output(i,j)=0;
end
end
end
%Build the 15 x 2 input-range matrix aw: each input lies in [0 1]
for i=1:15
for j=1:2
if j==1
aw(i,j)=0;
else
aw(i,j)=1;
end
end
end
test=[cd.K';cd.L';cd.M';cd.N';cd.O']';
net=newp(aw,10,'hardlim');
net.trainParam.epochs=1000;
net.trainParam.goal=0;
net=train(net,input,output);
y=sim(net,test);
x=y';
for i=1:5
k=0;
l=0;
for j=1:10
if x(i,j)==1
k=k+1;
l=j;
end
end
if k==1
s=sprintf('Test Pattern %d is Recognised as %d',i,l-1);
disp(s);
else
s=sprintf('Test Pattern %d is Not Recognised',i);
disp(s);
end
end
Output
TRAINC, Epoch 0/1000
TRAINC, Epoch 25/1000
TRAINC, Epoch 50/1000
TRAINC, Epoch 54/1000
TRAINC, Performance goal met.
Test Pattern 1 is Recognised as 0
Test Pattern 2 is Not Recognised
Test Pattern 3 is Recognised as 2
Test Pattern 4 is Recognised as 3
Test Pattern 5 is Recognised as 4
Example 4.7 With a suitable example demonstrate the perceptron learning law with its
decision regions using MATLAB. Give the output in graphical form.
Solution The following example demonstrates the perceptron learning law.
Program
clear
p = 5; % dimensionality of the augmented input space
N = 50; % number of training patterns - size of the training epoch
% PART 1: Generation of the training and validation sets.
X = 2*rand(p-1, 2*N)-1;
nn = round((2*N-1)*rand(N,1))+1;
X(:,nn) = sin(X(:,nn));
X = [X; ones(1,2*N)];
wht = 3*rand(1,p)-1; wht = wht/norm(wht);
wht
D = (wht*X >= 0);
Xv = X(:, N+1:2*N) ;
Dv = D(:, N+1:2*N) ;
X = X(:, 1:N) ;
D = D(:, 1:N) ;
% [X; D]
pr = [1, 3];
Xp = X(pr, :);
wp = wht([pr p]); % projection of the weight vector
c0 = find(D==0); c1 = find(D==1);
% c0 and c1 are vectors of pointers to input patterns X
% belonging to the class 0 or 1, respectively.
figure(1), clf reset
plot(Xp(1,c0),Xp(2,c0),'o', Xp(1, c1), Xp(2, c1),'x')
% The input patterns are plotted on the selected projection
% plane. Patterns belonging to the class 0, or 1 are marked
% with 'o' , or 'x' , respectively
axis(axis), hold on
% The axes and the contents of the current plot are frozen
% Superimposition of the projection of the separation plane on the
% plot. The projection is a straight line. Four points lying on this
% line are found from the line equation wp . x = 0
L = [-1 1] ;
S = -diag([1 1]./wp(1:2))*(wp([2,1])'*L +wp(3)) ;
plot([S(1,:) L], [L S(2,:)]), grid, drawnow
% PART 2: Learning
eta = 0.5; % The training gain.
wh = 2*rand(1,p)-1;
% Random initialisation of the weight vector with values
% from the range [-1, +1]. An example of an initial
% weight vector follows
% Projection of the initial decision plane which is orthogonal
% to wh is plotted as previously:
wp = wh([pr p]); % projection of the weight vector
% (sample numeric values of the weight vectors printed during one run are omitted here)
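The listing breaks off here; the training loop itself is not reproduced. A minimal sketch of the perceptron update this example demonstrates, written in terms of the variables already defined (eta, wh, X, D), might look like:
%Sketch (assumed continuation): one pass of the perceptron learning rule
for n=1:N
y=(wh*X(:,n)>=0); %current output for pattern n
wh=wh+eta*(D(n)-y)*X(:,n)'; %weights move only when y is wrong
end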
Example 4.8 With a suitable example simulate the perceptron learning network and separate
the boundaries. Plot the points assumed in the respective quadrants using different symbols
for identification.
Solution Plot the elements as square in the first quadrant, as star in the second quadrant, as
diamond in the third quadrant, as circle in the fourth quadrant. Based on the learning rule
draw the decision boundaries.
Program
clear;
p1=[1 1]'; p2=[1 2]'; %- class 1, first quadrant when we plot the elements, square
p3=[2 -1]'; p4=[2 -2]'; %- class 2, 4th quadrant when we plot the elements, circle
p5=[-1 2]'; p6=[-2 1]'; %- class 3, 2nd quadrant when we plot the elements,star
p7=[-1 -1]'; p8=[-2 -2]';% - class 4, 3rd quadrant when we plot the elements,diamond
%Now, let's plot the vectors
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
grid
hold off
axis([-3 3 -3 3])%set nice axis on the figure
t1=[0 0]'; t2=[0 0]'; %- class 1, first quadrant when we plot the elements, square
t3=[0 1]'; t4=[0 1]'; %- class 2, 4th quadrant when we plot the elements, circle
t5=[1 0]'; t6=[1 0]'; %- class 3, 2nd quadrant when we plot the elements,star
t7=[1 1]'; t8=[1 1]';% - class 4, 3rd quadrant when we plot the elements,diamond
%let's simulate perceptron learning
R=[-2 2;-2 2];
netp=newp(R,2); %netp is a perceptron network with 2 neurons and 2 inputs, hardlimit
%transfer function, perceptron rule learning
%Define the input matrix and target matrix
P=[p1 p2 p3 p4 p5 p6 p7 p8];
T=[t1 t2 t3 t4 t5 t6 t7 t8];
Y=sim(netp,P) %Well, that is obviously not good; Y is not equal to the target T
%Now, let's train
netp.trainParam.epochs = 20; % let's train for 20 epochs
netp = train(netp,P,T); %train,
%it seems that the training is finished after 3 epochs and the goal is met. Let's check by
%simulation
Y1=sim(netp,P)
%this is the same as target vector, so our network is trained
%the weights and biases after training
W=netp.IW{1,1} %weights
B=netp.b{1} %bias
%decision boundaries are lines perpendicular to the weight vectors
%We assume here that input vector p=[x y]'
x=[-3:0.01:3];
y=-W(1,1)/W(1,2)*x-B(1)/W(1,2); %boundary generated by neuron 1
y1=-W(2,1)/W(2,2)*x-B(2)/W(2,2); %boundary generated by neuron 2
%let's plot input patterns with decision boundaries
figure
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
grid
axis([-3 3 -3 3])%set nice axis on the figure
plot(x,y,'r',x,y1,'b')%here we plot boundaries
hold off
% SEPARATE BOUNDARIES
%additional data to set decision boundaries to separate quadrants
p9=[1 0.05]'; p10=[0.05 1]';
t9=t1;t10=t2;
p11=[1 -0.05]'; p12=[0.05 -1]';
t11=t3;t12=t4;
p13=[-1 0.05]';p14=[-0.05 1]';
t13=t5;t14=t6;
p15=[-1 -0.05]';p16=[-0.05 -1]';
t15=t7;t16=t8;
R=[-2 2;-2 2];
netp=newp(R,2,'hardlim','learnp');
%Define the input matrix and target matrix
P=[p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 p16];
T=[t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14 t15 t16];
Y=sim(netp,P);
netp.trainParam.epochs = 5000;
netp = train(netp,P,T);
Y1=sim(netp,P);
C=norm(Y1-T)
W=netp.IW{1,1} %weights
B=netp.b{1} %bias
x=[-3:0.01:3];
y=-W(1,1)/W(1,2)*x-B(1)/W(1,2); %boundary generated by neuron 1
y1=-W(2,1)/W(2,2)*x-B(2)/W(2,2); %boundary generated by neuron 2
figure
hold on
plot(p1(1),p1(2),'ks',p2(1),p2(2),'ks',p3(1),p3(2),'ko',p4(1),p4(2),'ko')
plot(p5(1),p5(2),'k*',p6(1),p6(2),'k*',p7(1),p7(2),'kd',p8(1),p8(2),'kd')
plot(p9(1),p9(2),'ks',p10(1),p10(2),'ks',p11(1),p11(2),'ko',p12(1),p12(2),'ko')
plot(p13(1),p13(2),'k*',p14(1),p14(2),'k*',p15(1),p15(2),'kd',p16(1),p16(2),'kd')
grid
axis([-3 3 -3 3])%set nice axis on the figure
plot(x,y,'r',x,y1,'b')%here we plot boundaries
hold off
Output
Current plot released
Y=
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
TRAINC, Epoch 0/20
TRAINC, Epoch 3/20
TRAINC, Performance goal met.
Y1 =
0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1
W=
-3 -1
1 -2
B=
-1
0
TRAINC, Epoch 0/5000
TRAINC, Epoch 25/5000
TRAINC, Epoch 50/5000
TRAINC, Epoch 75/5000
TRAINC, Epoch 92/5000
TRAINC, Performance goal met.
C=
0
W=
-20.0000 -1.0000
-1.0000 -20.0000
B=
0
0
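The final weights place the two decision boundaries along the coordinate axes: neuron 1 gives the line y = -20x (effectively the vertical axis) and neuron 2 gives y = -x/20 (effectively the horizontal axis), so the four quadrants are separated as required.
The listing that follows is the concluding portion of a perceptron program for character recognition (the tail of its training loop, followed by the test phase); the opening part of the program is not reproduced in this manual.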
if y(1,:)==t(I,:)
w=w;b=b;
else
con=1;
for j=1:m
b(j,1)=b(j,1)+alpha*t(I,j);
for i=1:n
w(i,j)=w(i,j)+alpha*t(I,j)*x(I,i);
end
end
end
end
epoch=epoch+1;
end
disp('Number of Epochs:');
disp(epoch);
%Testing the network with test pattern
%Plot for test pattern
figure(2);
k=1;
for i=1:2
for j=1:4
charplot(ts(k,:),10+(j-1)*10,20-(i-1)*10,5,3);
k=k+1;
end
end
axis([0 55 0 25]);
title('Noisy Input Pattern for Testing');
for I=1:8
for j=1:m
yin(j)=b(j,1);
for i=1:n
yin(j)=yin(j)+w(i,j)*ts(I,i);
end
if yin(j)>theta
y(j)=1;
end
if yin(j) <=theta & yin(j)>=-theta
y(j)=0;
end
if yin(j)<-theta
y(j)=-1;
end
end
for i=1:8
if t(i,:)==y(1,:)
or(I)=i; %record which stored pattern matches (note: this name shadows the built-in or)
end
end
end
%Plot for test output pattern
figure(3);
k=1;
for i=1:2
for j=1:4
charplot(x(or(k),:),10+(j-1)*10,20-(i-1)*10,5,3);
k=k+1;
end
end
axis([0 55 0 25]);
title('Classified Output Pattern');
Subprogram used:
function charplot(x,xs,ys,row,col)
k=1;
for i=1:row
for j=1:col
xl(i,j)=x(k);
k=k+1;
end
end
for i=1:row
for j=1:col
if xl(i,j)==-1
plot(j+xs-1,ys-i+1,'r');
hold on
else
plot(j+xs-1,ys-i+1,'k*');
hold on
end
end
end
Output
Number of Epochs:
12
Chapter-5
Weight matrix
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
The vector is a known vector.
criteriontest =
1.0131
output =
0.0727 0.0838 0.0370 0.0547 0.0695 0.0795
0.0309 0.0568 0.0703 0.0445 0.0621 0.0957
The response of the errors is shown graphically.
[Figure: plot of the error response]
The MATLAB program for calculating the weight matrix using a BAM network is as follows.
Program
%Bidirectional Associative Memory neural net
clc;
clear;
s=[1 1 0;1 0 1];
t=[1 0;0 1];
x=2*s-1
y=2*t-1
w=zeros(3,2);
for i=1:2
w=w+x(i,:)'*y(i,:);
end
disp('The calculated weight matrix');
disp(w);
Output
The calculated weight matrix
0 0
2 -2
-2 2
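For recall, a BAM applies the weight matrix in the forward direction and its transpose in the backward direction. A minimal recall sketch using the weight matrix computed above (an addition, not part of the original program; net input 0 is mapped to +1 here):
%Sketch: bipolar hard-limit recall with the computed BAM weights
yr=2*(([1 1 -1]*w)>=0)-1 %forward pass for the first stored pair, gives [1 -1]
xr=(2*((w*[1 -1]')>=0)-1)' %backward pass, gives [1 1 -1]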
for k=1:m
delw(j,k)=alpha*delk(k)*z(j)+mf*(w(j,k)-w1(j,k));
delinj(j)=delk(k)*w(j,k);
end
end
delb2=alpha*delk;
for j=1:h
delj(j)=delinj(j)*bipsig1(zin(j));
end
for j=1:h
for i=1:n
delv(i,j)=alpha*delj(j)*x(I,i)+mf*(v(i,j)-v1(i,j));
end
end
delb1=alpha*delj;
w1=w;
v1=v;
%Weight updation
w=w+delw;
b2=b2+delb2;
v=v+delv;
b1=b1+delb1;
for k=1:m
e=e+(t(I,k)-y(k))^2;
end
end
if e<0.005
con=0;
end
epoch=epoch+1;
if epoch==30
con=0;
end
xl(epoch)=epoch;
yl(epoch)=e;
end
disp('Total Epoch Performed');
disp(epoch);
disp('Error');
disp(e);
figure(1);
k=1;
for i=1:2
for j=1:5
charplot(x(k,:),10+(j-1)*15,30-(i-1)*15,9,7);
k=k+1;
end
end
title('Input Pattern for Compression');
axis([0 90 0 40]);
figure(2);
plot(xl,yl);
xlabel('Epoch Number');
ylabel('Error');
title('Convergence of Net');
%Output of Net after training
for I=1:10
for j=1:h
zin(j)=b1(j);
for i=1:n
zin(j)=zin(j)+x(I,i)*v(i,j);
end
z(j)=bipsig(zin(j));
end
for k=1:m
yin(k)=b2(k);
for j=1:h
yin(k)=yin(k)+z(j)*w(j,k);
end
y(k)=bipsig(yin(k));
ty(I,k)=y(k);
end
end
for i=1:10
for j=1:63
if ty(i,j)>=0.8
tx(i,j)=1;
else if ty(i,j)<=-0.8
tx(i,j)=-1;
else
tx(i,j)=0;
end
end
end
end
figure(3);
k=1;
for i=1:2
for j=1:5
charplot(tx(k,:),10+(j-1)*15,30-(i-1)*15,9,7);
k=k+1;
end
end
axis([0 90 0 40]);
title('Decompressed Pattern');
Subprogram used:
%Plot character
function charplot(x,xs,ys,row,col)
k=1;
for i=1:row
for j=1:col
xl(i,j)=x(k);
k=k+1;
end
end
for i=1:row
for j=1:col
if xl(i,j)==1
plot(j+xs-1,ys-i+1,'k*');
hold on
else
plot(j+xs-1,ys-i+1,'r');
hold on
end
end
end
function y=bipsig(x)
y=2/(1+exp(-x))-1;
function y=bipsig1(x)
y=1/2*(1-bipsig(x))*(1+bipsig(x));
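Here bipsig is the bipolar sigmoid f(x) = 2/(1+exp(-x)) - 1 (equivalent to tanh(x/2)) and bipsig1 computes its derivative, f'(x) = (1/2)(1 + f(x))(1 - f(x)), which is the factor used in the back-propagation weight updates above.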
Output
(i) Learning Rate: 0.5
Momentum Factor: 0.5
Total Epoch Performed
30
Error
68.8133
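Since the final error (68.8133) is still far above the stopping criterion of 0.005, this run stopped on the 30-epoch limit coded into the loop rather than on convergence; a larger epoch limit or different learning-rate and momentum values would be needed for the error to fall further.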