Algorithm & Solved Example - ADALINE

An artificial neural network, inspired by the human nervous system, processes data through three types of layers: the input layer, hidden layers, and the output layer. Adaline (Adaptive Linear Neuron) is a basic neural network with only an input layer and an output layer, connected by weighted paths and no hidden layer; Madaline adds a single hidden layer. This document discusses the architecture and workflow of Adaline and works through an example of designing an OR gate: initializing the weights, calculating the net input and error, and updating the weights by the delta learning rule until the total error is less than a threshold.


An artificial neural network, inspired by the human nervous system, processes data through three types of layers: the input layer, the hidden layer, and the output layer. The most basic neural network contains only two layers, the input and output layers, connected by weighted paths that are used to compute the net input. In this section, we discuss two basic types of neural networks: Adaline, which has no hidden layer, and Madaline, which has one hidden layer.

1. Adaline (Adaptive Linear Neuron):

• A network with a single linear unit is called Adaline (Adaptive Linear Neuron); a unit with a linear activation function is called a linear unit.
• In Adaline there is only one output unit, and the output values are bipolar (+1, -1).
• The weights between the input units and the output unit are adjustable.
• The learning rule minimizes the mean squared error between the activation and the target value. Adaline has trainable weights: it compares the target output with the computed output and, based on the error, applies the training algorithm.

Workflow:

(Figure: Adaline training workflow)

First, calculate the net input to the Adaline network and apply the activation function to obtain the output. Compare this with the target output: if the two are equal, emit the output; otherwise, send the error back through the network and update the weights according to the delta learning rule.
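As an illustrative sketch of one such training step in Python (the function and variable names are our own, not from the document; the error is computed on the net input itself, as in the worked example below):

def adaline_step(x, t, w, b, alpha):
    # Net input: bias plus weighted sum of the inputs.
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))
    error = t - y_in                 # the delta rule uses the raw net input
    w = [wi + alpha * error * xi for xi, wi in zip(x, w)]
    b = b + alpha * error
    return w, b, error ** 2          # squared error for this pattern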

Architecture:

(Figure: Adaline architecture)

In Adaline, every input neuron is directly connected to the output neuron through a weighted path. A bias b, whose input activation is fixed at 1, is also present.

Implementation

Problem: Design an OR gate using an Adaline network.

Solution:
• Initially, all weights are assumed to be small random values, say 0.1, and the learning rate is set to 0.1.
• The least squared error (the stopping threshold) is set to 2.
• The weights are updated as long as the total error per epoch is greater than the least squared error.
x1   x2    t
 1    1    1
 1   -1    1
-1    1    1
-1   -1   -1

• Calculate the net input (when x1 = x2 = 1):
  yin = b + x1·w1 + x2·w2 = 0.1 + 1(0.1) + 1(0.1) = 0.3
• Now compute the error: (t - yin) = (1 - 0.3) = 0.7
• Now update the weights and bias (delta rule, α = 0.1):
  w1(new) = w1 + α(t - yin)x1 = 0.1 + 0.1(0.7)(1) = 0.17
  w2(new) = w2 + α(t - yin)x2 = 0.1 + 0.1(0.7)(1) = 0.17
  b(new)  = b + α(t - yin)    = 0.1 + 0.1(0.7)    = 0.17
• Calculate the squared error: (t - yin)^2 = (0.7)^2 = 0.49

Repeating the same steps for the other input vectors gives the following table:

x1   x2    t    yin      (t-yin)    ∆w1       ∆w2       ∆b        w1       w2       b        (t-yin)^2
 1    1    1    0.3       0.7       0.07      0.07      0.07      0.17     0.17     0.17     0.49
 1   -1    1    0.17      0.83      0.083    -0.083     0.083     0.253    0.087    0.253    0.69
-1    1    1    0.087     0.913    -0.0913    0.0913    0.0913    0.1617   0.1783   0.3443   0.83
-1   -1   -1    0.0043   -1.0043    0.1004    0.1004   -0.1004    0.2621   0.2787   0.2439   1.01

(w1, w2, and b start from the initial value 0.1; each row shows their values after that pattern's update.)

This completes epoch 1, where the total error is 0.49 + 0.69 + 0.83 + 1.01 = 3.02. Since this is still greater than 2, more epochs are run until the total error becomes less than or equal to the least squared error, i.e., 2.
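For reference, a self-contained Python sketch of the whole procedure (an illustrative implementation, not code from the original document); run as written, its first printed line reproduces epoch 1's total error of 3.02:

# Illustrative Adaline training for the bipolar OR gate.
patterns = [((1, 1), 1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]
w = [0.1, 0.1]          # initial weights
b = 0.1                 # initial bias
alpha = 0.1             # learning rate
threshold = 2.0         # least squared error (stopping criterion)

epoch = 0
while True:
    epoch += 1
    total_error = 0.0
    for x, t in patterns:
        y_in = b + x[0] * w[0] + x[1] * w[1]   # net input
        error = t - y_in
        w[0] += alpha * error * x[0]           # delta rule updates
        w[1] += alpha * error * x[1]
        b += alpha * error
        total_error += error ** 2
    print(f"epoch {epoch}: total error = {total_error:.2f}")
    if total_error <= threshold:               # stop once error <= 2
        break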
