
Presentation on Artificial Neural Networks

Acknowledgement

 This seminar could not have been completed without the assistance of
many individuals. First, I would like to acknowledge Mr. Subhojit Sarker,
Sr. Lecturer, ECE Dept. His constant support and encouragement were
the major driving force behind this seminar.

 I appreciate the constructive and encouraging comments of Mr. Gautam
Das, Asst. Prof. and HOD, ECE Dept., which helped me greatly in preparing
this seminar.

 I would also like to thank all the teachers and faculty members of the
ECE Dept., whose insightful suggestions also helped me prepare this
seminar.
Contents
 Introduction
 History
 Biological Neurons
 Artificial Neural network
 Comparisons between conventional computers
and artificial neural networks
 Feed forward network
 Back propagation- learning in Feed forward network
 Applications of neural network
 Neural Network and consciousness
 Neural Network and Financial Prediction
 Conclusion
 Reference
Introduction


The human brain is a source of natural intelligence and a truly remarkable parallel computer.
History
Biological Neurons
The brain is principally composed of about 10 billion
neurons, each connected to about 10,000 other neurons.
Each of the yellow blobs in the figure is a neuronal cell
body (soma), and the lines are the input and output
channels (dendrites and axons) that connect them.

Figure: Biological Neuron


Artificial Neural Network
Perceptron:-

The perceptron is a mathematical model of a biological neuron. While in actual neurons the dendrites receive electrical signals from the axons of other neurons, in the perceptron these signals are represented as numerical values. At the synapses between dendrites and axons, electrical signals are modulated by various amounts; this is modeled in the perceptron by multiplying each input value by a value called the weight. An actual neuron fires an output signal only when the total strength of its input signals exceeds a certain threshold. We model this phenomenon in a perceptron by calculating the weighted sum of the inputs to represent the total strength of the input signals, and applying a step function to the sum to determine the output. As in biological neural networks, this output is fed to other perceptrons.

Figure: An Artificial Neuron


Artificial Neural Network
Mathematical Model of Artificial Neuron :-

1. A set of synapses, or connecting links, each of which is characterized by a weight or
strength of its own. Specifically, a signal xj at the input of synapse j connected to
neuron k is multiplied by the synaptic weight wkj.

2. An adder for summing the input signals, weighted by the respective synapses of the
neuron; the operations described here constitute a linear combiner.

3. An activation function for limiting the amplitude of the output of a neuron. The
activation function is also referred to as a squashing function, in that it squashes
(limits) the permissible amplitude range of the output signal to some finite value.

4. Typically, the normalized amplitude range of the output of a neuron is written as the
closed interval [-1, 1].

Figure: Nonlinear model of a neuron
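
A minimal Python sketch of these elements (the function name, weights, bias and input values are made-up, illustrative choices):

import math

def neuron_output(inputs, weights, bias=0.0):
    """Nonlinear model of a neuron: linear combiner followed by a
    squashing (sigmoidal) activation function."""
    # Synaptic weighting and summing junction (the linear combiner)
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Activation function limits the output amplitude to a finite range
    return 1.0 / (1.0 + math.exp(-v))

# Example with made-up values: three synapses feeding one neuron
print(neuron_output(inputs=[0.5, -1.0, 2.0], weights=[0.4, 0.7, -0.2]))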
Artificial Neural Network

What can a perceptron do?

A perceptron calculates the weighted sum of the input values. For simplicity, let us assume that there
are two input values, x and y, for a certain perceptron P. Let the weights for x and y be A and B
respectively; the weighted sum can then be represented as A x + B y.

Since the perceptron outputs a non-zero value only when the weighted sum exceeds
a certain threshold C, one can write down the output of this perceptron as follows:

Output of P = 1 if A x + B y > C
            = 0 if A x + B y <= C

Recall that A x + B y > C and A x + B y < C are the two regions of the xy plane
separated by the line A x + B y = C. If we consider the input (x, y) as a point on a
plane, then the perceptron actually tells us to which region of the plane this point
belongs. Such regions, since they are separated by a single line, are called
linearly separable regions.
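
To make the geometric picture concrete, the short Python sketch below (the line 2x + 3y = 6 is an arbitrary, assumed example) checks which of the two linearly separable regions a few points fall into:

def perceptron(x, y, A=2, B=3, C=6):
    """Output 1 if the point (x, y) lies on the A*x + B*y > C side
    of the line A*x + B*y = C, and 0 otherwise."""
    return 1 if A * x + B * y > C else 0

for point in [(0, 0), (1, 1), (3, 1), (0, 3)]:
    print(point, "->", perceptron(*point))
# (0, 0) and (1, 1) lie below the line (output 0); (3, 1) and (0, 3) lie above it (output 1).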
Artificial Neural Network

This result is useful because it turns out that some logic functions such as the boolean
AND, OR and NOT operators are linearly separable, i.e. they can be performed using a
single perceptron. We can illustrate (for the 2D case) why they are linearly separable by
plotting each of them on a graph:

Figure: Graphs showing linearly separable logic functions

In the above graphs, the two axes are the inputs which can take the value of either 0 or 1, and
the numbers on the graph are the expected output for a particular input. Using an appropriate
weight vector for each case, a single perceptron can perform all of these functions.
However, not all logic operators are linearly separable. For instance, the XOR operator is not
linearly separable and cannot be achieved by a single perceptron.
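
A minimal Python sketch of this point (the particular weights and thresholds are just one workable choice, not the only one):

def perceptron(x, y, A, B, C):
    """Single perceptron: output 1 if A*x + B*y > C, else 0."""
    return 1 if A * x + B * y > C else 0

for x in (0, 1):
    for y in (0, 1):
        p_and = perceptron(x, y, A=1, B=1, C=1.5)   # fires only for (1, 1)
        p_or  = perceptron(x, y, A=1, B=1, C=0.5)   # fires unless (0, 0)
        print(f"x={x} y={y}  AND={p_and}  OR={p_or}")
# XOR would need output 1 for (0,1) and (1,0) but 0 for (0,0) and (1,1);
# no single line A*x + B*y = C separates those two sets of points.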
Comparisons Between Conventional Computers
and Artificial Networks

Parallel processing :
• A major advantage of the neural network is its ability to do many things at once. With traditional
computers, processing is sequential: one task, then the next, then the next, and so on.

• The artificial neural network is an inherently multiprocessor-friendly architecture. Without much
modification, it goes beyond one or even two processors of the von Neumann architecture.

• The artificial neural network is designed from the onset to be parallel. Humans can listen to music
at the same time they do their homework--at least, that's what we try to convince our parents in
high school.

The ways in which they function :


Another fundamental difference between traditional computers and artificial neural networks is the
way in which they function. While computers function logically with a set of rules and calculations,
artificial neural networks can function via images, pictures, and concepts.

Based upon the way they function, traditional computers have to learn by rules, while artificial neural
networks learn by example, by doing something and then learning from it. Because of these
fundamental differences, the applications to which we can tailor them are extremely different. We will
explore some of the applications later in the presentation.
Comparisons Between Conventional Computers
and Artificial Networks

Self-programming :

The "connections" or concepts learned by each type of architecture are different as well. Von
Neumann computers are programmed in higher-level languages such as C or Java, which are then
translated down to the machine's assembly language. Because of their style of learning, artificial
neural networks can, in essence, "program themselves." While conventional computers can only follow
the explicit sequence of steps laid out in an algorithm, neural networks are continuously adaptable,
in effect altering their own programming. It could be said that conventional computers are limited by
their parts, while neural networks can work to become more than the sum of their parts.

Speed :

The speed of each computer depends upon different aspects of the processor. Von Neumann
machines require either large processors or the tedious, error-prone approach of parallel processors,
while neural networks require multiple chips custom-built for the application.
Feed Forward Networks

Characteristics :

1. Perceptrons are arranged in layers, with the
   first layer taking in inputs and the last
   layer producing outputs. The middle
   layers have no connection with the
   external world, and hence are called
   hidden layers.

2. Each perceptron in one layer is connected to
   every perceptron in the next layer.
   Hence information is constantly "fed
   forward" from one layer to the next, and
   this explains why these networks are
   called feed-forward networks.

3. There is no connection among perceptrons in
   the same layer.

Figure: A Feed-forward network
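
A minimal Python sketch of this layered structure (layer sizes and weights are arbitrary made-up values, and smooth sigmoid units stand in for the step-function perceptrons): each layer's outputs are fed forward as the next layer's inputs, and there are no connections within a layer.

import math

def layer_forward(inputs, weights):
    """One layer of units: every input feeds every unit in the layer."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# 2 inputs -> 3 hidden units -> 1 output unit (made-up weights)
hidden_w = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]
output_w = [[1.0, -1.0, 0.5]]

hidden = layer_forward([0.7, 0.1], hidden_w)   # hidden layer
output = layer_forward(hidden, output_w)       # output layer
print(output)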
Feed Forward Networks
Different layers of Feed Forward Network :

Figure: A feed-forward network with one hidden layer; four lines, each dividing the plane into two
linearly separable regions; the intersection of the four linearly separable regions forms the center region.
BACK PROPAGATION – LEARNING IN FEED
FORWARD NETWORK

 Learning in feed-forward networks belongs to the realm of supervised learning, in which pairs of input and output
values are fed into the network for many cycles, so that the network 'learns' the relationship between the input and
output.

 We provide the network with a number of training samples, each of which consists of an input vector i and its desired
output o. For instance, in the classification problem, suppose we have points (1, 2) and (1, 3) belonging to group 0,
points (2, 3) and (3, 4) belonging to group 1, and points (5, 6) and (6, 7) belonging to group 2; then for a feed-forward
network with 2 input nodes and 2 output nodes, the training set would be:

{ i = (1, 2) , o = (0, 0)

i = (1, 3) , o = (0, 0)

i = (2, 3) , o = (1, 0)

i = (3, 4) , o = (1, 0)

i = (5, 6) , o = (0, 1)

i = (6, 7) , o = (0, 1) }
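
Written as code, the same training set might look like the following sketch (the group-to-output encoding follows the slide):

# Each training sample pairs an input vector i with its desired output o
# (group 0 -> (0, 0), group 1 -> (1, 0), group 2 -> (0, 1), as above).
training_set = [
    ((1, 2), (0, 0)),
    ((1, 3), (0, 0)),
    ((2, 3), (1, 0)),
    ((3, 4), (1, 0)),
    ((5, 6), (0, 1)),
    ((6, 7), (0, 1)),
]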
BACK PROPAGATION – LEARNING IN FEED
FORWARD NETWORK

 The basic rule for choosing the number of output nodes depends on the number of different regions.
It is advisable to use a unary notation to represent the different regions, i.e. for any given region at
most one output node has the value 1. Hence the number of output nodes = number of different regions - 1.

 In back propagation learning, every time an input vector of a training sample is presented, the output
vector o is compared to the desired value d.

 The comparison is done by calculating the squared difference of the two, which the learning
procedure tries to minimize:

Minimize E = ½ Σj (oj − dj)²

The weights are then adjusted in small steps in the direction that reduces this error,

∆w = −η ∂E/∂w

where η is the learning rate (a small number, ~0.1).
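
A minimal Python sketch of this error measure and the gradient-style weight correction it drives (the function names and example values are illustrative assumptions):

def squared_error(output, desired):
    """E = 1/2 * sum over output nodes of (o_j - d_j)^2."""
    return 0.5 * sum((o - d) ** 2 for o, d in zip(output, desired))

def update_weight(w, grad, eta=0.1):
    """Adjust one weight a small step against the error gradient."""
    return w - eta * grad

print(squared_error([0.8, 0.2], [1.0, 0.0]))   # 0.5 * (0.2**2 + 0.2**2) = 0.04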


BACK PROPAGATION – LEARNING IN FEED
FORWARD NETWORK

Back propagation Algorithm :


Step 1: Normalize the inputs and outputs with respect to their maximum values; neural networks
are known to work better if the inputs and outputs lie between 0 and 1. For each training pair,
assume there are l inputs given by {I}I (l × 1) and n outputs given by {O}O (n × 1) in
normalized form.

Step 2: Assume the number of neurons in the hidden layer to lie in the range l < m < 2l.

Step 3: [V] represents the weights of the synapses connecting the input neurons and hidden
neurons, and [W] represents the weights of the synapses connecting the hidden neurons and
output neurons. Initialize the weights to small random values, usually between -1 and 1.
For general problems, λ can be assumed to be 1 and the threshold values can be assumed
to be zero.

[V] = [random weights]
[W] = [random weights]
[∆V] = [∆W] = [0]

Step 4: For the training data, present one set of inputs and outputs. Present the pattern {I}I to
the input layer. Since the input layer uses a linear activation function, its output equals
its input:

{O}I = {I}I        (l × 1)
BACK PROPAGATION – LEARNING IN FEED
FORWARD NETWORK

Step 5: Compute the inputs to the hidden layer by multiplying the input-layer outputs by the
corresponding synaptic weights:

{I}H = [V]^T {O}I        (m × 1) = (m × l)(l × 1)

Step 6: Let the hidden-layer units evaluate their outputs using the sigmoidal function:

{O}Hi = 1 / (1 + e^(−IHi))        (m × 1)

Step 7: Compute the inputs to the output layer by multiplying the hidden-layer outputs by the
corresponding synaptic weights:

{I}O = [W]^T {O}H        (n × 1) = (n × m)(m × 1)

Step 8: Let the output-layer units evaluate their outputs using the sigmoidal function:

{O}Oj = 1 / (1 + e^(−IOj))        (n × 1)

The above is the network output.
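
Steps 4 to 8 together form the forward pass. A small NumPy sketch that mirrors these steps (the layer sizes are arbitrary and the weights are random, as in Step 3):

import numpy as np

l, m, n = 2, 3, 2                       # input, hidden and output layer sizes
V = np.random.uniform(-1, 1, (l, m))    # Step 3: input-to-hidden weights [V]
W = np.random.uniform(-1, 1, (m, n))    # Step 3: hidden-to-output weights [W]

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

O_I = np.array([0.2, 0.9])      # Step 4: {O}I = {I}I (linear input layer)
I_H = V.T @ O_I                 # Step 5: {I}H = [V]^T {O}I   (m x 1)
O_H = sigmoid(I_H)              # Step 6: hidden-layer outputs (m x 1)
I_O = W.T @ O_H                 # Step 7: {I}O = [W]^T {O}H   (n x 1)
O_O = sigmoid(I_O)              # Step 8: network output      (n x 1)
print(O_O)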


BACK PROPAGATION – LEARNING IN FEED
FORWARD NETWORK

Step 9: Calculate the error, i.e. the difference between the network output and the desired output,
for the i-th training set:

E^p = { Σj (Tj − OOj)² }^½ / n

Step 10: Find {d} as

dk = (Tk − OOk) OOk (1 − OOk)        (n × 1)

Step 11: Find the [Y] matrix as

[Y] = {O}H <d>        (m × n) = (m × 1)(1 × n)

Step 12: Find

[∆W]^(t+1) = α [∆W]^t + η [Y]

Step 13: Find

{e} = [W] {d}        (m × 1) = (m × n)(n × 1)
{d*}i = ei OHi (1 − OHi)        (m × 1)

and find the [X] matrix as

[X] = {O}I <d*> = {I}I <d*>        (l × m) = (l × 1)(1 × m)
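
The slides end part-way through Step 13. A compact NumPy sketch of Steps 4 to 13 is given below, completed with the standard closing weight updates; the [∆V] update and the final [V], [W] updates are not shown on the slides and follow the usual pattern, and the α and η values are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(O_I, T, V, W, dV, dW, eta=0.1, alpha=0.5):
    """One back-propagation step in the slides' notation.
    O_I: input pattern {I}I (l,);  T: target vector (n,)
    V: (l, m) input-to-hidden weights;  W: (m, n) hidden-to-output weights
    dV, dW: previous weight changes, used by the momentum term alpha."""
    # Forward pass (Steps 4-8)
    O_H = sigmoid(V.T @ O_I)              # hidden-layer output {O}H  (m,)
    O_O = sigmoid(W.T @ O_H)              # network output {O}O       (n,)

    # Step 9: error for this training pair
    E = np.sqrt(np.sum((T - O_O) ** 2)) / len(T)

    # Steps 10-12: output-layer delta, [Y] matrix and [dW] change
    d = (T - O_O) * O_O * (1.0 - O_O)     # {d}   (n,)
    Y = np.outer(O_H, d)                  # [Y] = {O}H <d>   (m, n)
    dW = alpha * dW + eta * Y             # [dW]^(t+1)

    # Step 13: hidden-layer delta {d*} and [X] matrix
    e = W @ d                             # {e} = [W]{d}   (m,)
    d_star = e * O_H * (1.0 - O_H)        # {d*}  (m,)
    X = np.outer(O_I, d_star)             # [X] = {I}I <d*>  (l, m)
    dV = alpha * dV + eta * X             # standard continuation (not on the slides)

    # Closing weight updates, then return the new state and the error
    return V + dV, W + dW, dV, dW, E

Iterating train_step over every training pair, with [∆V] and [∆W] initialised to zero as in Step 3, until E^p falls below a chosen tolerance completes the supervised learning loop described at the start of this section.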
