Bee4333 Intelligent Control: Artificial Neural Network (ANN)
Chapter 4 :
Artificial Neural Network (ANN)
[Figure: a biological neuron. Dendrites receive information at the soma; the axon sends information onward.]
Plasticity: connections to neurons leading to the right answer are strengthened, while those leading to the wrong answer are weakened. Learning from experience!
[Figure: an ANN receiving input signals and producing output signals.]
ANN Architecture
Learning
Each synapse has its own weight, expressing the importance of its input.
The output of a neuron might be the final solution or the input to other neurons.
An ANN learns through iterated adjustment of the synaptic weights.
Weights are adjusted so that the network's input/output behaviour copes with its environment.
Each neuron computes its activation level from the numerically weighted inputs.
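As a minimal sketch of the activation computation described above (the function name, inputs, weights and threshold are illustrative, not from the slides):

```python
# Sketch: one neuron computing its activation level from weighted inputs,
# with a hard-limit (step) threshold. All values here are illustrative.
def neuron_output(inputs, weights, threshold):
    # Weighted sum of the inputs, as described in the slides.
    activation = sum(x * w for x, w in zip(inputs, weights))
    # Fire (output 1) only if the weighted sum reaches the threshold.
    return 1 if activation >= threshold else 0

print(neuron_output([1, 0, 1], [0.4, 0.9, 0.3], 0.5))  # weighted sum 0.7 -> 1
```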
How to design ANN?
Decide how many neurons to use.
How are the connections between neurons constructed? How many layers are needed?
Which learning algorithm should be applied?
Train the ANN by initializing the weights and updating them from training sets.
ANN characteristics
Advantages:
A neural network can perform tasks that a linear program cannot.
Disadvantages:
The neural network needs training to operate.
[Figure: a feed-forward network with inputs x1, x2, x3, weights w1, w2, w3 and outputs y1, y2, y3.]
http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Feed-Forward_Networks
Radial Basis Function
Networks
Each hidden layer neuron represents
a basis function of the output space,
with respect to a particular center in
the input space.
The activation function chosen is commonly a Gaussian kernel:
phi_j(x) = exp( -||x - c_j||^2 / (2*sigma_j^2) )
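A sketch of evaluating one hidden neuron's Gaussian basis function, assuming the standard form phi(x) = exp(-||x - c||^2 / (2*sigma^2)); the centre and width values are illustrative:

```python
import math

# Sketch: Gaussian radial basis function for one hidden neuron with
# centre `centre` and width `sigma` (illustrative values below).
def gaussian_rbf(x, centre, sigma):
    # Squared Euclidean distance from the input to the neuron's centre.
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, centre))
    return math.exp(-dist_sq / (2 * sigma ** 2))

print(gaussian_rbf([1.0, 2.0], [1.0, 2.0], 0.5))  # at the centre -> 1.0
```

The response peaks at 1 when the input equals the centre and decays with distance, which is why each hidden neuron responds to a local region of the input space.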
http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Recurrent_Networks
Echo State Networks
Recurrent networks in which the hidden-layer neurons are not completely connected to all input neurons.
Known as sparsely connected networks.
Only the weights from the hidden layer to the output layer may be altered during training.
Echo state networks are useful for matching and reproducing specific input patterns.
Because the only tap weights modified during training are the output-layer tap weights, training is typically quick and computationally efficient in comparison to other multi-layer networks that are not sparsely connected.
http://www.scholarpedia.org/article/Echo_state_network
Hopfield Networks
Competitive Networks
[Figure: common activation functions, including the step and linear functions, with outputs ranging between -1 and +1.]
4.5
SIMPLE ANN
Simple ANN: A Perceptron
A perceptron is used to classify input into two classes, e.g. class A1 or A2.
A linearly separable function is used to divide the n-dimensional space as follows:
x1·w1 + x2·w2 − θ = 0
Say there are 2 inputs; then we have the characteristic shown in the figure: a line splits the x1–x2 plane into regions 1 and 2. θ is used to shift the boundary.
Three-dimensional space can also be viewed this way.
[Figure: the x1–x2 plane divided by the decision line into class regions 1 and 2.]
Simple Perceptron
[Figure: a simple perceptron. Inputs x1, x2 (must be boolean!) with weights w1, w2 feed a linear combiner; a threshold/bias and a hard limiter produce the output.]
Learning: Classification
Learning is done by adjusting the actual output Y to meet the desired output Yd.
Usually, the initial weights are set between −0.5 and 0.5. At iteration k of the training example, we have the error e as
e(k) = Yd(k) − Y(k)
The epochs continue until the weights converge to steady-state values.
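The learning rule above can be sketched on the logical AND function, assuming the standard perceptron delta rule w_i ← w_i + α·x_i·e (the learning rate, seed and initial ranges are illustrative):

```python
import random

random.seed(1)

alpha = 0.1                                        # learning rate (assumed)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # initial weights in [-0.5, 0.5]
theta = random.uniform(-0.5, 0.5)                  # threshold

def predict(x):
    # Hard limiter on the linear combination, shifted by the threshold.
    return 1 if x[0] * w[0] + x[1] * w[1] - theta >= 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND gate

for epoch in range(100):          # epochs continue until weights converge
    mistakes = 0
    for x, yd in data:
        e = yd - predict(x)       # e(k) = Yd(k) - Y(k)
        if e != 0:
            mistakes += 1
            w = [wi + alpha * xi * e for wi, xi in zip(w, x)]
            theta -= alpha * e    # the threshold adjusts like a bias
    if mistakes == 0:
        break                     # converged: every example classified correctly

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees the loop terminates with all four examples classified correctly.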
4.6
Multilayer Neural Networks &
Backpropagation Algorithm
Multilayer neural networks
A multilayer NN is a feedforward neural network with one or more hidden layers.
The model consists of an input layer, one or more middle (hidden) layers, and an output layer.
Why is the hidden layer important?
The input layer only receives the input signals.
The output layer only displays the output patterns.
The hidden layer processes the input signals; its weights represent features of the inputs.
Multilayer NN model
[Figure: a multilayer NN with inputs x1, x2, x3, a 1st and 2nd hidden layer, and an output.]
Multilayer Neural Network
Learning
A multilayer NN learns through a learning algorithm; the most popular one is BACK-PROPAGATION.
The computations are similar to those of a simple perceptron.
Back-propagation has two phases:
The input layer presents the training input pattern, which is then propagated layer by layer to the output.
The calculated error then tells the system how to modify the weights.
Back Propagation NN
Back-propagation is the learning or training algorithm.
Each neuron in one layer is connected to every neuron in the next layer.
The sigmoid function Y is used for the network.
[Figure: the sigmoid activation curve, saturating at its lower and upper bounds.]
[Figure: a three-layer network with n inputs (index i), m hidden neurons (index j) and l outputs (index k); input signals propagate forward while error signals propagate backwards.]
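The two phases can be sketched on a tiny 2-2-1 sigmoid network; all weight values, the input pattern, target and learning rate below are illustrative assumptions, not the slides' numbers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = [1.0, 0.0]                    # input pattern (illustrative)
W1 = [[0.3, -0.2], [0.1, 0.4]]    # input -> hidden weights, W1[j][i]
W2 = [0.2, -0.5]                  # hidden -> output weights
target, eta = 1.0, 0.5            # desired output and learning rate

# Phase 1: forward pass, layer by layer to the output.
h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
y = sigmoid(sum(W2[j] * h[j] for j in range(2)))

# Phase 2: error signals propagate backwards through the layers.
delta_out = (target - y) * y * (1 - y)                    # output gradient
delta_h = [h[j] * (1 - h[j]) * W2[j] * delta_out for j in range(2)]

# Weight updates (delta_h was computed with the OLD W2, as required).
W2 = [W2[j] + eta * delta_out * h[j] for j in range(2)]
W1 = [[W1[j][i] + eta * delta_h[j] * x[i] for i in range(2)] for j in range(2)]
```

Running the forward pass again with the updated weights gives an output closer to the target, which is the whole point of the backward phase.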
Case Study
Black pixel : 1
White Pixel : 0
Network
Training Process
Set up the weights, all in the range of −1 < weight < 1.
Apply an input pattern and calculate the output (FORWARD PASS).
The calculated output will differ from the TARGET.
The difference between the CALCULATED OUTPUT and the TARGET is the error.
The error will be used for updating the weights.
Steps of the Back-Propagation Method
Steps of the Back-Propagation Method (ctd)
Case Study
i : input neurons 1, 2, 3, 4
j : hidden-layer neurons A, B, C
k : output neuron
Initial weights are all 0.1 (can be chosen in −1 < weight < 1).
[Figure: the case-study network with hidden neurons A, B, C.]
Training Process
1) Find the neuron outputs in the hidden layer.
4) …
5) …
6) Wji(t+1) = 0.1 × (−0.0035) × 0 + (0.9 × 0)
   = 0
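The arithmetic above matches the generalized delta rule with momentum, ΔW_ji = η·δ_j·O_i + α·ΔW_ji(t−1); a hedged reconstruction with the slide's numbers (the roles assigned to each number are an assumption):

```python
# Assumed reading of the slide's weight-update arithmetic:
#   dW_ji = eta * delta_j * O_i + momentum * prev_dW_ji
eta, momentum = 0.1, 0.9   # learning rate and momentum term (from the numbers shown)
delta_j = -0.0035          # error gradient at hidden neuron j
O_i = 0.0                  # output of input neuron i (a white pixel = 0)
prev_dW = 0.0              # previous weight change (first iteration)

dW = eta * delta_j * O_i + momentum * prev_dW
print(dW)  # -> 0.0, matching the slide: 0.1 x (-0.0035) x 0 + (0.9 x 0) = 0
```

Because this input neuron's output is 0 and there is no previous weight change, the weight W_ji is left unchanged on this iteration; a black pixel (O_i = 1) would have produced a nonzero update.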