
Soft Computing PPT Module1


Module 1

Introduction to Soft Computing. Difference between Hard Computing & Soft Computing.
Applications of Soft Computing. Artificial Neurons Vs Biological Neurons. Basic models
of artificial neural networks – Connections, Learning, Activation Functions. McCulloch
and Pitts Neuron. Hebb network.
Text Books
1. S.N. Sivanandam and S.N. Deepa, Principles of Soft Computing, 2nd Edition, John Wiley & Sons.
2. Kalyanmoy Deb, Multi-objective Optimization using Evolutionary Algorithms, 1st Edition, John Wiley & Sons.

Reference Books
1. Timothy J. Ross, Fuzzy Logic with Engineering Applications, John Wiley & Sons, 2016.
2. S. Rajasekaran and G.A. Vijayalakshmi Pai, "Neural Networks, Fuzzy Logic & Genetic Algorithms: Synthesis and Applications", Prentice-Hall India.
3. Simon Haykin, "Neural Networks: A Comprehensive Foundation", 2/e, Pearson Education.
4. Zimmermann H. J, "Fuzzy Set Theory & Its Applications", Allied Publishers Ltd.
INTRODUCTION TO SOFT COMPUTING

• The idea of soft computing was initiated in 1981 by Lotfi A. Zadeh. The role model for soft computing is the human mind.
• Soft computing is a term used in computer science for problems whose solutions are imprecise or uncertain, with degrees of truth that lie between 0 and 1.
• It is designed to model solutions to real-world problems that either cannot be modelled mathematically or are too difficult to model mathematically.
PROBLEM SOLVING TECHNIQUES

Two Techniques:
• Hard Computing: deals with precise models where accurate solutions are
achieved quickly.
• Soft Computing: deals with approximate models and gives solutions to complex problems.
Fig 1.1: Problem Solving Technologies
Soft Computing
• Soft computing deals with approximate models and gives solutions to complex problems.
• Introduced by Professor Lotfi Zadeh.
• The ultimate goal is to be able to emulate the human mind as closely as
possible.
• Soft computing involves a combination of
 Genetic Algorithms
 neural networks
 fuzzy logic.
APPLICATION OF SOFT COMPUTING

• Consumer appliances like air conditioners, refrigerators, heaters and washing machines
• Robotics, e.g. emotional pet robots
• Food-preparation appliances like rice cookers and microwave ovens
• Game playing, e.g. poker, checkers, etc.
Introduction to Neural Networks
Artificial Neural Networks

• History
• 1943-McCulloch & Pitts are generally recognised as the designers of the first neural
network
• 1949-First learning rule (Hebb)
• 1969-Minsky & Papert - perceptron limitation - Death of ANN
• 1980’s - Re-emergence of ANN - multi-layer networks
Brain and Machine
• The Brain
– Pattern Recognition
– Association
– Complexity
– Noise Tolerance

• The Machine
– Calculation
– Precision
– Logic
Brain vs. Computer –
Comparison Between Biological Neuron and Artificial
Neuron
1. Speed:
• The cycle time of execution in an ANN is a few nanoseconds, whereas in a biological neuron it is a few milliseconds.
• Hence, the artificial neuron modelled using a computer is faster.
2. Processing:
• Both the biological neuron and the artificial neuron can perform massive
parallel operations simultaneously.
• But, in general, the ANN process is faster than that of the brain.
3. Size and complexity:
• The total number of neurons in the brain is about 10^11 and the total number of interconnections is about 10^15.
• Hence, the complexity of the brain is comparatively higher, i.e. the computational work takes place not only in the brain cell body, but also in the axon, synapse, etc.
• The size and complexity of an ANN is based on the chosen application and the
network designer.
• The size and complexity of a biological neuron is more than that of an
artificial neuron.
4. Storage capacity (memory):

• The biological neuron stores information in its interconnections or in synapse strength, whereas in an artificial neuron it is stored in contiguous memory locations.
• A disadvantage of the brain is that sometimes its memory may fail to recollect the stored information,
• whereas in an artificial neuron, once the information is stored in its memory locations, it can be retrieved.
• Owing to these facts, the adaptability is more toward an artificial neuron.


5. Tolerance:

• The biological neuron possesses fault-tolerant capability, whereas the artificial neuron has no fault tolerance.
• The distributed nature of the biological neurons enables them to store and retrieve information even when some of their interconnections get disconnected.
• Even when some cells die, the human nervous system appears to keep performing with the same efficiency.
6.Control mechanism:
• In an artificial neuron modelled using a computer, there is a control unit in the Central Processing Unit which can transfer and control precise scalar values from unit to unit,
• but there is no such control unit for monitoring in the brain.
• The strength of a neuron in the brain depends on the active chemicals present and on whether the neuron connections are strong or weak.
• An ANN possesses simpler interconnections and is free from chemical actions.
• Thus, the control mechanism of an artificial neuron is very simple compared to that of a biological neuron.
ANN possesses the following characteristics:

1. It is a neurally implemented mathematical model.


2. There are a large number of highly interconnected processing elements called neurons.
3. The interconnections with their weighted linkages hold the informative knowledge.
4. The input signals arrive at the processing elements through connections and connecting weights.
5. The processing elements of the ANN have the ability to learn, recall and generalize from the given data by suitable assignment or adjustment of weights.
6. The computational power can be demonstrated only by the collective behavior
of neurons, and it should be noted that no single neuron carries specific
information.
ANN characteristics: cont..

• The above-mentioned characteristics make ANNs:


 connectionist models,
 parallel distributed processing models,
 self-organizing systems,
 neuro-computing systems,
 neuro-morphic systems.
Artificial Neural Networks
Features of the Brain

• Ten billion (10^10) neurons


• On average, several thousand connections
• Hundreds of operations per second
• Compensates for problems by massive parallelism
The Structure of Neurons
Biological neuron

• The biological neuron consists of three main parts:


1. Soma or cell body – where the cell nucleus is located.
2. Dendrites – a branching input structure through which the nerve is connected to the cell body.
3. Axon – a branching output structure which carries impulses away from the cell body.
• The end of the axon splits into fine strands;
each strand terminates in a small bulb-like organ called a synapse.
The Structure of Neurons
• Axons connect to dendrites via synapses.
• Electro-chemical signals are propagated from the dendritic input, through the
cell body, and down the axon to other neurons
• A neuron only fires if its input signal exceeds a certain amount (the threshold) in
a short time period.
• Synapses vary in strength
– Good connections allowing a large signal
– Slight connections allow only a weak signal.

• Synapses are said to be
 inhibitory (impulses passed across them hinder the firing of the receiving cell), or
 excitatory (impulses passed across them cause the firing of the receiving cell).
Biological vs Artificial neuron
[Figure: biological neuron shown alongside an artificial neuron, with net input labelled y_in and output labelled y]
Artificial neuron

• The net input to the artificial neuron is the weighted sum of its inputs, y_in = Σ x_i w_i, where i represents the i-th processing element.
• The activation function is applied over this net input to calculate the output.
• The weight represents the strength of the synapse connecting the input and output neurons.
• A positive weight corresponds to an excitatory synapse, and a negative weight corresponds to an inhibitory synapse.
Artificial neural network (ANN)
• An ANN is an efficient information-processing system which resembles a biological neural network in its characteristics.
• ANNs possess a large number of highly interconnected processing elements called nodes, units or neurons, which usually operate in parallel.
• Each neuron is connected to the others by connection links.
• Each connection link is associated with a weight which contains information about the input signal.
• This information is used by the neuron net to solve a particular problem.
Artificial neural network (ANN)
• The ANN processing elements are called neurons or artificial neurons.
Artificial neural network (ANN)

• For the simple neuron net architecture shown above, the net i/p is calculated as
 y_in = x1 w1 + x2 w2
 where x1 and x2 are the activations of the i/p neurons X1 and X2, i.e. the o/p of the i/p signals.
• The o/p y of the o/p neuron Y can be obtained by applying the activation function over the net i/p:
 y = f(y_in)
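As an illustration (a minimal sketch, not taken from the slides), the net-input and activation computation for this simple neuron can be written in Python; the input values, weights and threshold below are hypothetical:

```python
# Minimal sketch (hypothetical values): net input and output of a simple
# two-input neuron Y with a binary step activation.

def net_input(x, w):
    """Weighted sum y_in = x1*w1 + x2*w2 + ... + xn*wn."""
    return sum(xi * wi for xi, wi in zip(x, w))

def binary_step(y_in, theta=0.0):
    """Fire (output 1) when the net input reaches the threshold theta."""
    return 1 if y_in >= theta else 0

x = [0.5, 0.8]            # activations of input neurons X1, X2 (assumed values)
w = [0.4, -0.3]           # weights on the connection links (assumed values)
y_in = net_input(x, w)    # -> about -0.04
y = binary_step(y_in)     # -> 0 (net input below the threshold 0)
print(y_in, y)
```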


Types of Layers

• The input layer.


– Introduces input values into the network.
– No activation function or other processing.
• The hidden layer(s).
– Perform classification of features
– Two hidden layers are sufficient to solve any problem
– Features imply more layers may be better
• The output layer.
– Functionally just like the hidden layers
– Outputs are passed on to the world outside the neural network.
Basic Models of ANN

• Three basic entities

1. Synaptic interconnections (architecture)

2. Learning/training rules (algorithm)

3. Activation functions
Synaptic interconnections
• An ANN consists of a set of highly interconnected processing elements
(neurons) such that
 each processing element output is found to be connected through weights
to the other processing elements or to itself.
• The arrangement of neuron to form layers and the connection pattern formed
within and between layers is called the network architecture.
• There are five basic types of neuron connection architectures.
Synaptic interconnections (architecture)

1. Single layer feed-forward networks

2. Multilayer feed-forward networks

3. Single node with its own feedback

4. Single layer recurrent networks

5. Multilayer recurrent networks


Single layer feed-forward networks
• When a layer of the processing nodes is formed, the inputs can be connected
to these nodes with various weights, resulting in a series of outputs.
• Thus, a single-layer feed-forward network is formed.
Multilayer feed-forward networks

• A multilayer feed-forward network is formed by the interconnection of several layers.
• The input layer receives the input; it has no function except buffering the input signal.
• The output layer generates the output of the network.
• Any layer formed between the input and output layers is called a hidden layer. A hidden layer is internal to the network and has no direct contact with the external environment.
• The more hidden layers there are, the higher the complexity of the network; this may, however, provide a more efficient output response.
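As a hedged sketch (assumed weights and a binary sigmoid activation, not an example from the text), a forward pass through a small multilayer feed-forward network looks like this:

```python
# Sketch of a multilayer feed-forward pass: input layer -> hidden layer -> output layer.
# All weights and biases below are assumed, purely for illustration.
import numpy as np

def layer(x, W, b):
    """One fully connected layer with a binary sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

x = np.array([0.2, 0.6])                      # input layer only buffers the signal
W_hidden = np.array([[0.3, 0.7],
                     [-0.5, 0.1]])            # hidden-layer weights (assumed)
b_hidden = np.array([0.1, -0.2])
W_out = np.array([[0.4, -0.6]])               # output-layer weights (assumed)
b_out = np.array([0.05])

h = layer(x, W_hidden, b_hidden)              # hidden-layer activations
y = layer(h, W_out, b_out)                    # network output
print(h, y)
```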
Single node with its own feedback
• When outputs can be directed back as inputs to nodes in the same layer or a preceding layer, a feedback network is formed.
• If the output of a processing element is directed back as input to processing elements in the same layer, it is called lateral feedback.
• Competitive interconnections of this kind have fixed weights of -ε. Such a net is called a Maxnet.
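A minimal sketch of the Maxnet competition just mentioned, assuming a ramp-type activation f(x) = max(0, x) and an illustrative ε = 0.2 (the specific values are assumptions):

```python
# Maxnet sketch: each node excites itself (weight 1) and inhibits every other
# node with a fixed weight -eps; iterating leaves only the largest activation non-zero.

def maxnet(activations, eps=0.2, max_iters=100):
    a = list(activations)
    for _ in range(max_iters):
        total = sum(a)
        # f(x) = max(0, x): own activation minus eps times the sum of the others
        new_a = [max(0.0, ai - eps * (total - ai)) for ai in a]
        if new_a == a:          # competition has settled
            break
        a = new_a
    return a

print(maxnet([0.2, 0.4, 0.6, 0.8]))   # only the node that started at 0.8 survives
```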
Single layer recurrent networks
• Recurrent networks are feedback networks with closed loop.
• A single layer network with a feedback connection in which a processing
element's output can be directed back to the processing element itself or to
the other processing element or to both.
Multilayer recurrent networks
• A processing element output can be directed back to the nodes in a preceding
layer, forming a multilayer recurrent network.
• In these networks, a processing element output can be directed back to the
processing element itself and to other processing elements in the same layer.
Lateral inhibition structure
• There also exists another type of architecture with lateral feedback, which is
called the on-center-off-surround or lateral inhibition structure,
• Each processing neuron receives two different classes of inputs:
 "excitatory" input from nearby processing elements, and
 "inhibitory" input from more distantly located processing elements.
• The connections drawn with open circles are excitatory connections, and the links drawn with solid circles are inhibitory connections.

Fig : Lateral inhibition structure


Learning/training rules (algorithm)
• The main property of an ANN is its capability to learn.
• Learning or training is a process by means of which a neural network adapts
itself to a stimulus by making proper adjustments, resulting in the production
of desired response.
• Broadly, there are two kinds of learning in ANNs:
1.Parameter learning: It updates the connecting weights in a neural net.
2. Structure learning: It focuses on changes in the network structure –
 the number of processing elements as well as their connection types.
Learning/training rules (algorithm)

• The learning in an ANN can be generally classified into three


categories as:

1.Supervised learning

2.Unsupervised learning

3.Reinforcement learning
Learning
• Two kinds: parameter learning and structure learning.
• Supervised learning – with the help of a teacher
• Unsupervised learning – without a teacher
• Reinforcement learning – similar to supervised learning
Supervised Learning
• The learning is performed with the help of a teacher.
• example:
 The learning process of a small child.
 The child doesn't know how to read/write.
 He/she is being taught by the parents at home and by the teacher in
school.
The children are trained and moulded to recognize alphabets, numerals, etc.
Each and every action of the child is supervised by a teacher.
In effect, the child works on the basis of the output that he/she has to produce.
Supervised Learning
• ANNs following the supervised learning,
each input vector requires a corresponding target vector,
 which represents the desired output.
• The input vector along with the target vector is called a training pair:
(i/p, t) – training pair
Supervised Learning
• During training, the input vector is presented to the network, which results in an output vector.
• This output vector is the actual output vector.
• The actual output vector is then compared with the desired (target) output vector.
• If there exists a difference between the two output vectors, an error signal is generated by the network.
• This error signal is used for adjustment of weights until the actual o/p matches the desired (target) o/p.
• In this type of training, a supervisor or a teacher is required for error minimization.
• In supervised learning, it is assumed that the correct "target" o/p values are known for each i/p pattern.
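The loop below is a generic sketch of this supervised cycle; the perceptron-style error-driven update and the OR training pairs are illustrative assumptions, not a rule prescribed by the slides:

```python
# Supervised learning sketch: present the input, compare the actual output with
# the target, form an error signal, and adjust the weights until the two match.

def step(y_in):
    return 1 if y_in >= 0 else 0

def train_supervised(pairs, n_inputs, lr=1.0, epochs=20):
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in pairs:                 # (input vector, target) training pair
            y = step(sum(xi * wi for xi, wi in zip(x, w)) + b)
            error = target - y                  # error signal from the comparison
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Hypothetical training pairs for the logical OR function (binary data).
pairs = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_supervised(pairs, n_inputs=2))      # e.g. ([1.0, 1.0], -1.0)
```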
Unsupervised Learning
• The learning here is performed without the help of a teacher.

• Consider the learning process of a tadpole:
 it learns by itself; that is, the young one learns to swim by itself and is not taught by its mother.
• Thus, its learning process is independent and is not supervised by a teacher.


Unsupervised learning
• In ANNs following unsupervised learning,
 the input vectors of similar type are grouped without the use of training data that specify
 how a member of each group looks, or
 to which group a member belongs.
• In the training process, the network receives the input patterns and organizes these patterns to form clusters.
Unsupervised Learning
• There is no feedback from the environment to inform what the o/p should be or whether the o/ps are correct.
• In this case, the network must itself discover
 patterns, regularities, features or categories from the i/p data, and
 relations of the i/p data over the o/p.
• While discovering all these features, the network undergoes changes in its parameters.
• This process is called self-organizing, in which exact clusters are formed by discovering similarities and dissimilarities among the objects.
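A hedged illustration of such clustering (a simple winner-take-all prototype update chosen for illustration; this particular procedure is an assumption, not an algorithm given in the slides):

```python
# Unsupervised clustering sketch: each pattern is assigned to its nearest
# prototype, and only that winning prototype is nudged toward the pattern.
import math

def competitive_clustering(patterns, prototypes, lr=0.5, epochs=5):
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x in patterns:
            dists = [math.dist(x, p) for p in protos]
            win = dists.index(min(dists))                       # winning prototype
            protos[win] = [pi + lr * (xi - pi) for pi, xi in zip(protos[win], x)]
    return protos

patterns = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]     # two natural groups
print(competitive_clustering(patterns, prototypes=[(0.0, 0.0), (1.0, 1.0)]))
```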
Reinforcement Learning
• This learning process is similar to supervised learning.
• In the case of supervised learning,
 the correct target output values are known for each input pattern.
• But, in some cases, less information might be available.
• For example, the network might be told that its actual output is only "50%
correct" or so.
• Thus, here only critic information is available, not the exact information.
• The learning based on this critic information is called reinforcement learning
and the feedback sent is called reinforcement signal.
Reinforcement Learning
• The reinforcement learning is a form of
supervised learning because the network
receives some feedback from its
environment.
• However, the feedback obtained here is only
evaluative and not instructive.
• The external reinforcement signals are
processed
 in the critic signal generator, and
 the obtained critic signals are sent to the
ANN for adjustment of weights properly so
as to get better critic feedback in future.
• The reinforcement learning is also called
learning with a critic as opposed to learning
with a teacher, which indicates supervised
learning.
Activation functions
• Let us assume a person is performing some work.
• To make the work more efficient and to obtain exact output, some force or
activation may be given.
• This activation helps in achieving the extra output.
• In a similar way, the activation function is applied over the net input to calculate the output of an ANN.
Activation functions
• The information processing of a processing element can be viewed as consisting of
two major parts:
 input and
 output.
• An integration function (say f) is associated with the input of a processing element.
• This function serves to combine activation, information or evidence from an external source or other processing elements into a net input to the processing element.
• The net i/p is calculated as
 y_in = Σ x_i w_i (i = 1 to n)
• The o/p is the function of the net i/p:
 y = f(y_in)
Activation Functions
• The function to be applied over the net i/p is called activation function.
• Transforms neuron’s input into output.
1. Identity function
2. Binary step function
3. Bipolar step function
4. Sigmoid functions
5. Ramp functions
1. Identity function
• It is a linear function, defined as
 f(x) = x for all x

• The output here remains the same as input. The input layer uses the identity
activation function.
2. Binary step function
• This function can be defined as
 f(x) = 1 if x ≥ θ; f(x) = 0 if x < θ
• where θ represents the threshold value.


• This function is most widely used in single-layer nets to convert the net input
to an output that is a binary (1 or 0).
3. Bipolar step function

• This function can be defined as
 f(x) = 1 if x ≥ θ; f(x) = -1 if x < θ
• where θ represents the threshold value.


• This function is also used in single-layer nets to convert the net input to an
output that is bipolar(+ 1 or -1).
4. Sigmoidal functions
• The sigmoidal functions are widely used in back-propagation nets
 because of the relationship between the value of the functions at a point
 and the value of the derivative at that point which reduces the
computational burden during training.
• Sigmoidal functions are of two types:
 Binary sigmoid function
 Bipolar sigmoid function
4.1 Binary sigmoid function:

• It is also termed as logistic sigmoid function or unipolar sigmoid function.


• It can be defined as
 f(x) = 1 / (1 + e^(-λx))
• where λ is the steepness parameter. The derivative of this function is
 f'(x) = λ f(x) [1 - f(x)]
• Here the range of the sigmoid function is from 0 to 1.


4.2 Bipolar sigmoid function
• This function is defined as
 f(x) = 2 / (1 + e^(-λx)) - 1 = (1 - e^(-λx)) / (1 + e^(-λx))
• where λ is the steepness parameter, and the sigmoid function ranges between -1 and +1.
• The derivative of this function is
 f'(x) = (λ/2) [1 + f(x)] [1 - f(x)]
4.2 Bipolar sigmoid function
• The bipolar sigmoidal function is closely related to the hyperbolic tangent function, which is written as
 h(x) = (e^x - e^(-x)) / (e^x + e^(-x))
• The derivative of the hyperbolic tangent function is
 h'(x) = [1 + h(x)] [1 - h(x)]
• If the network uses binary data,
 it is better to convert it to bipolar form and
 use the bipolar sigmoidal activation function or the hyperbolic tangent function.
5. Ramp function
• The ramp function is defined as
 f(x) = 1 if x > 1; f(x) = x if 0 ≤ x ≤ 1; f(x) = 0 if x < 0
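A sketch of the activation functions listed above in Python (lambda_ stands for the steepness parameter λ and theta for the threshold; the definitions follow the slide descriptions):

```python
# Activation functions from this section; parameter values are up to the designer.
import math

def identity(x):
    return x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1

def binary_sigmoid(x, lambda_=1.0):
    return 1.0 / (1.0 + math.exp(-lambda_ * x))        # range (0, 1)

def bipolar_sigmoid(x, lambda_=1.0):
    return 2.0 / (1.0 + math.exp(-lambda_ * x)) - 1.0  # range (-1, 1)

def ramp(x):
    return 1.0 if x > 1 else (x if x >= 0 else 0.0)

print(binary_sigmoid(0.53), bipolar_sigmoid(0.53))     # about 0.63 and 0.26
```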
Bias
• The bias included in the network has its impact in calculating the net input.
• The bias is included by adding a component x0 = 1 to the input vector X, thus
the input vector becomes
X= (1,X1, ... ,Xi, ... ,Xn)
• The bias is considered like another weight, that is w0j = bj.
• Consider a simple network shown in Figure 2-16 with bias.
Bias conti..
• From Figure 2-16, the net input to the output neuron Yj is calculated as
 y_inj = b_j + Σ x_i w_ij (i = 1 to n)
• The net input to the output neuron Y without bias is:
 y_in = Σ x_i w_i (i = 1 to n)
Q1. For the network shown in Figure 1, calculate the net input to the output neuron.
• Solution: The given neural net consists of three input neurons and one output
neuron.
• The inputs and weights are:
[x1, x2, x3 ] = [0.3, 0.5, 0.6]
[w1,w2,w3] = [0.2,0.1,-0.3]
• The net input can be calculated as
 y_in = x1 w1 + x2 w2 + x3 w3 = 0.3 × 0.2 + 0.5 × 0.1 + 0.6 × (-0.3) = 0.06 + 0.05 - 0.18 = -0.07
Q2. Calculate the net input for the network shown in
Figure 2 with bias included in the network.
• Solution: The given net consists of two input neurons, a bias and an output neuron.
• The inputs are [x1, x2] = [0.2, 0.6]
• and the weights are [w1, w2] = [0.3, 0.7].
• Since the bias is included, with b = 0.45 and bias input x0 = 1, the net input is calculated as
 y_in = b + x1 w1 + x2 w2 = 0.45 + 0.2 × 0.3 + 0.6 × 0.7 = 0.45 + 0.06 + 0.42 = 0.93
Q3. Obtain the output of the neuron Y for the network shown in
Figure 3 using activation functions as: (i) binary sigmoidal and (ii)
bipolar sigmoidal.
• Solution: The given network has three input neurons with
bias and one output neuron.
• These form a single-layer network.
• The inputs are given as[x1,x2,x3] = [0.8,0.6,0.4]
• the weights are [w1, w2, w3] = [0.1, 0.3, -0.2] with bias
b = 0.35 (its input is always 1).
• The net input to the output neuron is
 y_in = b + x1 w1 + x2 w2 + x3 w3 = 0.35 + 0.8 × 0.1 + 0.6 × 0.3 + 0.4 × (-0.2) = 0.35 + 0.08 + 0.18 - 0.08 = 0.53
• Applying the binary sigmoid to y_in = 0.53 gives y ≈ 0.63, and applying the bipolar sigmoid gives y ≈ 0.26 (for λ = 1).
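The three worked examples above can be checked with a short sketch (using the values stated in Q1-Q3):

```python
# Net-input checks for Q1-Q3.
import math

def net(x, w, b=0.0):
    return b + sum(xi * wi for xi, wi in zip(x, w))

print(net([0.3, 0.5, 0.6], [0.2, 0.1, -0.3]))            # Q1: about -0.07
print(net([0.2, 0.6], [0.3, 0.7], b=0.45))               # Q2: 0.93
y_in = net([0.8, 0.6, 0.4], [0.1, 0.3, -0.2], b=0.35)    # Q3: 0.53
print(1 / (1 + math.exp(-y_in)),                         # binary sigmoid, about 0.63
      2 / (1 + math.exp(-y_in)) - 1)                     # bipolar sigmoid, about 0.26
```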
Threshold

• Threshold is a set value based upon which the final output of the network
may be calculated.

• The threshold value is used in the activation function.

• A comparison is made between the calculated net input and the threshold to
obtain the network output.

• For each and every application there is a threshold limit.


Threshold
• Consider a direct current (DC) motor.
If its maximum speed is 1500 rpm then
the threshold based on the speed is 1500 rpm.
If the motor is run on a speed higher than its set threshold,
 it may damage motor coils.
• Similarly, in neural networks,
 based on the threshold value,
 the activation functions are defined and the output is calculated.
• The activation function using the threshold can be defined as
 y = f(y_in) = 1 if y_in ≥ θ; y = 0 if y_in < θ
 where θ is the fixed threshold value.
McCulloch-Pitts Neuron
• The McCulloch-Pitts neuron, proposed in 1943, was the earliest neural network model.
• It is usually called the M-P neuron.
• The M-P neurons are connected by directed weighted paths.
• The activation of a M-P neuron is binary, that is,
 at any time step the neuron may fire or may not fire
• The weights associated with the communication links may be excitatory
(weight is positive) or inhibitory (weight is negative).
• All the excitatory connections entering a particular neuron have the same weight.
McCulloch-Pitts Neuron
• The threshold plays a major role in M-P neuron:
 There is a fixed threshold for each neuron,
 and if the net input to the neuron is greater than the threshold
 then the neuron fires
 Any nonzero inhibitory input would
 prevent the neuron from firing.
• The M-P neurons are most widely used in the case of logic function.
McCulloch-Pitts Neuron- Architecture
• The M-P neuron has both excitatory and inhibitory connections.
• A connection is excitatory with weight w (w > 0) or inhibitory with weight -p (p > 0).
 Inputs x1 to xn possess excitatory weighted connections (weight w), and
 inputs xn+1 to xn+m possess inhibitory weighted interconnections (weight -p).
• Since the firing of the output neuron is based upon the threshold, the activation function here is defined as
 f(y_in) = 1 if y_in ≥ θ; f(y_in) = 0 if y_in < θ
McCulloch-Pitts Neuron
• For inhibition to be absolute, the threshold with the activation function should satisfy the following condition:
 θ > n·w - p
• The output will fire if it receives, say, "k" or more excitatory inputs but no inhibitory inputs, where
 k·w ≥ θ > (k - 1)·w
• The M-P neuron has no particular training algorithm.


McCulloch-Pitts Neuron
• An analysis has to be performed to determine the values of the weights and
the threshold.

• Here the weights of the neuron are set along with the threshold to make the
neuron perform a simple logic function

• The M-P neurons are used as building blocks with which we can model any function or phenomenon
 that can be represented as a logic function.
Q) Implement AND function using McCulloch-Pitts
neuron (take binary data).
• Solution: Consider the truth table for AND function (Table 1).
• In McCulloch-Pitts neuron, only analysis is being performed.
• Hence, assume the weights be w1 = 1 and w2 = 1.
• The network architecture is shown in Figure.
• With these assumed weights (w1 = w2 = 1), the net input is calculated for the four inputs (x1, x2):
 (1, 1): y_in = 2; (1, 0): y_in = 1; (0, 1): y_in = 1; (0, 0): y_in = 0
• For an AND function, the output is high only if both the inputs are high.
• For this condition, the net input is calculated as 2.
• Hence, based on this net input, the threshold is set:
o i.e. if the threshold value is greater than or equal to 2 then the neuron fires, else it does not fire.
o So the threshold value is set equal to 2 (θ = 2).
• This can also be obtained from θ ≥ n·w - p. Here n = 2, w = 1 (excitatory weights) and p = 0 (no inhibitory weights). Substituting these values in the above-mentioned equation, we get θ ≥ 2 × 1 - 0 = 2.
• Thus, the output of neuron Y can be written as
 y = f(y_in) = 1 if y_in ≥ 2; y = 0 if y_in < 2
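A small sketch of this M-P AND neuron (w1 = w2 = 1, θ = 2, binary inputs):

```python
# McCulloch-Pitts neuron: fixed weights, fixed threshold, binary output.

def mp_neuron(x, weights, theta):
    """Fire (output 1) when the net input reaches the fixed threshold."""
    y_in = sum(xi * wi for xi, wi in zip(x, weights))
    return 1 if y_in >= theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), weights=(1, 1), theta=2))
# Output is 1 only for (1, 1), matching the AND truth table.
```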
Q)Implement ANDNOT function using McCulloch-Pitts
neuron (use binary data representation).
Solution:
• In the case of ANDNOT function, the response is
true if the first input is true and the second input is
false.
• For all other input variations, the response is false.
• The given function gives an output only when
x1 = 1 and x2= 0.
• The weights have to be decided only after the
analysis. The net can be represented as shown in
Figure.
• AND NOT A AND (NOT B) 1 AND (NOT 0) 1 AND 11
• The truth table for AND NOT function is given in
Table.
Case 1:
• Assume that both weights w1 and w2 are excitatory, i.e., w1=w2=1
• Then, for the four inputs, calculate the net input using y_in = x1 w1 + x2 w2:
 (1, 1): y_in = 2; (1, 0): y_in = 1; (0, 1): y_in = 1; (0, 0): y_in = 0
• From the calculated net inputs, it is not possible to fire the neuron for the input (1, 0) only.
• Hence, these weights are not suitable.
Case 2 :
• Assume one weight as excitatory and the other as inhibitory, i.e. w1 = 1, w2 = -1.
• Calculating the net inputs again:
 (1, 1): y_in = 0; (1, 0): y_in = 1; (0, 1): y_in = -1; (0, 0): y_in = 0
• From the calculated net inputs, it is now possible to fire the neuron for input (1, 0) only, by fixing a threshold of 1, i.e. θ ≥ 1 for the Y unit. Thus,
 w1 = 1; w2 = -1; θ ≥ 1
• Note: The value of θ is calculated using θ ≥ n·w - p, i.e. θ ≥ 2 × 1 - 1 = 1.
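Reusing the mp_neuron sketch from the AND example with w1 = 1, w2 = -1 and θ = 1 reproduces Case 2:

```python
# ANDNOT with one excitatory and one inhibitory weight (see Case 2 above);
# mp_neuron is the function defined in the AND sketch.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), weights=(1, -1), theta=1))
# Fires (outputs 1) only for the input (1, 0).
```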
Q)Implement XOR function using McCulloch-Pitts neuron (consider binary data).

• Solution: The truth table for XOR function is given in Table 3.


• In this case, the output is "ON" only for an odd number of 1's; for the rest it is "OFF." The XOR function cannot be represented by a simple, single logic function; it is represented as
 y = z1 OR z2, where z1 = x1 AND (NOT x2) and z2 = x2 AND (NOT x1)
• A single-layer net is not sufficient to represent the function.


• An intermediate layer is necessary.
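A hedged sketch of that intermediate-layer construction, composing the M-P units worked out above (z1 = x1 ANDNOT x2, z2 = x2 ANDNOT x1, y = z1 OR z2; each unit uses θ = 1):

```python
# XOR from McCulloch-Pitts units: two ANDNOT units in an intermediate layer
# feeding an OR unit at the output.

def mp(x, weights, theta):
    return 1 if sum(xi * wi for xi, wi in zip(x, weights)) >= theta else 0

def xor(x1, x2):
    z1 = mp((x1, x2), (1, -1), theta=1)    # z1 = x1 ANDNOT x2
    z2 = mp((x1, x2), (-1, 1), theta=1)    # z2 = x2 ANDNOT x1
    return mp((z1, z2), (1, 1), theta=1)   # y  = z1 OR z2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor(x1, x2))         # 0, 1, 1, 0 - the XOR truth table
```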
Hebb Network
• For a neural net, the Hebb learning rule is a simple one.
• Donald Hebb stated in 1949 that in the brain, the learning is performed by the
change in the synaptic gap.
• Hebb explained it as follows:
 "When an axon of cell A is near enough to excite cell B,
 and repeatedly or persistently takes part in firing it,
 some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
Hebb Network
• According to the Hebb rule,
 the weight vector is found to increase proportionately to the product of
the input and the learning signal (neuron’s output).
• In Hebb learning,
 if two interconnected neurons are 'on' simultaneously
 then the weights associated with these neurons can be increased by
the modification made in their synaptic gap (strength).
• The weight update in Hebb rule is given by
wi(new) = wi(old) + xiy
Hebb Network
• The Hebb rule is more suited for bipolar data than binary data.
• If binary data is used, the above weight updation formula cannot distinguish
two conditions namely;
1. A training pair (s: t) in which an input unit is "on" and target value is "off."
2. A training pair (s: t) in which both the input unit and the target value are
"off."
• Thus, there are limitations in Hebb rule application over binary data.
• Hence, the representation using bipolar data is advantageous.
Hebb Network- Flowchart of Training
Algorithm
• The training algorithm is used for the calculation and adjustment of weights
• s: t refers to each training input and target output pair.
• Till there exists a pair of training input and target output,
 the training process takes place; else, it is stopped.
Training Algorithm

• Step 0: First initialize the weights. Basically, in this network they may be set to zero, i.e. wi = 0 for i = 1 to n, where "n" is the total number of input neurons.
• Step 1: Steps 2-4 have to be performed for each input training vector and target output pair, s : t.
• Step 2: Input unit activations are set. Generally, the activation function of the input layer is the identity function: xi = si for i = 1 to n.
• Step 3: Output unit activations are set: y = t.
• Step 4: Weight adjustments and bias adjustments are performed:
 wi(new) = wi(old) + xi·y
 b(new) = b(old) + y
Hebb Network
• The above five steps complete the algorithmic process.
• In Step 4, the weight updation formula can also be given in vector form as
w(new)= w(old) +xy
• Here the change in weight can be expressed as:
Δw = xy
• As a result,
w(new) = w(old) + Δw
• The Hebb rule can be used for
 pattern association,
 pattern categorization,
 pattern classification and over a range of other areas.
Q)Design a Hebb net to implement logical AND function (use
bipolar inputs and targets).

• Solution: The training data for the AND function are given in Table 9.
• The weights obtained from this training are the final weights and are given as
 w1 = 2; w2 = 2; b = -2
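A sketch of the Hebb training algorithm (Steps 0-4 above) applied to the bipolar AND data; it reproduces the final weights quoted:

```python
# Hebb training: start from zero weights and add x_i * y (and y for the bias)
# for every (input, target) training pair.

def hebb_train(training_pairs, n_inputs):
    w = [0.0] * n_inputs      # Step 0: weights (and bias) initialised to zero
    b = 0.0
    for s, t in training_pairs:                     # Steps 2-3: x_i = s_i, y = t
        w = [wi + xi * t for wi, xi in zip(w, s)]   # Step 4: w_i(new) = w_i(old) + x_i*y
        b += t                                      #         b(new)  = b(old)  + y
    return w, b

and_pairs = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
print(hebb_train(and_pairs, n_inputs=2))    # ([2.0, 2.0], -2.0)
```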
Q) Design a Hebb net to implement OR function (consider
bipolar inputs and targets).
Q)Use the Hebb rule method to implement XOR function (take bipolar
inputs and targets)

The final weights obtained after presenting all the input patterns do not give the correct output for all patterns.
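Running the same hebb_train sketch on the bipolar XOR data shows why: the updates cancel out.

```python
# Hebb rule applied to XOR (bipolar data): the weight updates cancel,
# so the resulting net cannot reproduce the XOR responses.
xor_pairs = [((1, 1), -1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]
print(hebb_train(xor_pairs, n_inputs=2))    # ([0.0, 0.0], 0.0)
```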
