Artificial Neural Networks
Introduction
Outline
- Artificial Neural Networks: Properties, applications
- Biological Inspirations
- Artificial Neuron
- Perceptron
- Perceptron Learning Rule
- Limitation
- History of ANNs
- Artificial Neural Networks
- Different Network Topologies
- Multi-layer Perceptrons
- Backpropagation Learning Rule
Artificial Neural Networks
Computational models inspired by the human brain:
Adaptivity
– changing the connection strengths to learn things
Non-linearity
– the non-linear activation functions are essential
Fault tolerance
– if one of the neurons or connections is damaged, the
whole network still works quite well
Properties of ANN Applications
They may be better alternatives to classical solutions for
problems characterised by:
– Nonlinearities
– High dimensionality
Seminal Paper:
D. E. Rumelhart, J. L. McClelland, et al., “Parallel Distributed Processing”, MIT Press, 1986.
Other:
J. Principe, N. Euliano, C. Lefebvre, “Neural and Adaptive Systems”.
Neural Networks Literature
Review Articles:
R. P. Lippmann, “An Introduction to Computing with Neural Nets”,
IEEE ASSP Magazine, pp. 4-22, April 1987.
T. Kohonen, “An Introduction to Neural Computing”, Neural
Networks, 1, pp. 3-16, 1988.
A. K. Jain, J. Mao, K. M. Mohiuddin, “Artificial Neural Networks: A
Tutorial”, IEEE Computer, pp. 31-44, March 1996.
Neural Networks Literature
Journals:
IEEE Transactions on NN
Neural Networks
Neural Computation
Biological Cybernetics
...
Biological Inspirations
Humans perform complex tasks like vision, motor
control, or language understanding very well
Artificial Neuron
[Figure: an artificial neuron: inputs x1 ... xm with synaptic weights wi1 ... wim, a bias bi supplied through the fixed input x0 = +1, a summing junction Σ producing the net input ni, and an activation function f giving the output ai]
An artificial neuron:
- computes the weighted sum of its inputs, and
- if that value exceeds its “bias” (threshold),
- it “fires” (i.e. becomes active); see the sketch below
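A minimal Python sketch of this thresholded neuron; the function and variable names here are illustrative, not from the slides:

```python
# Illustrative sketch of a single artificial neuron with a hard threshold.
# Names (artificial_neuron, weights, bias) are hypothetical.

def artificial_neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of the inputs exceeds the bias."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > bias else 0

# Example: two inputs, both needed to make the neuron fire (an AND-like unit).
print(artificial_neuron([1, 1], weights=[0.6, 0.6], bias=1.0))  # 1
print(artificial_neuron([1, 0], weights=[0.6, 0.6], bias=1.0))  # 0
```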
Bias
Bias can be incorporated as another weight clamped
to a fixed input of +1.0
This extra free variable (bias) makes the neuron more
powerful.
ai = f(ni) = f( Σj=0..m wij xj ),   where x0 = +1 and wi0 represents the bias bi
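A minimal sketch of the same computation with the bias folded into the weight vector as wi0 acting on the clamped input x0 = +1; the names and numbers below are illustrative:

```python
# Sketch: bias treated as weight w[0] on a fixed input x0 = +1.
# The neuron then computes a = f( sum_j w[j] * x[j] ) over j = 0..m.

def neuron_output(x, w, f):
    x = [1.0] + list(x)                       # prepend the clamped input x0 = +1
    n = sum(wj * xj for wj, xj in zip(w, x))  # net input n_i
    return f(n)                               # activation a_i = f(n_i)

step = lambda n: 1 if n > 0 else 0            # hard-limit activation
# w[0] plays the role of the (negative) threshold.
print(neuron_output([1, 1], w=[-1.0, 0.6, 0.6], f=step))  # 1
```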
Activation functions
– sigmoid: a = 1 / (1 + e^(-n))
Activation functions: hardlim & linear
Activation functions: sigmoid
Other Activation Functions
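A plain-Python sketch of the activation functions named on these slides, assuming the usual textbook definitions:

```python
import math

def hardlim(n):
    """Hard limit: output 1 if the net input is non-negative, else 0."""
    return 1.0 if n >= 0 else 0.0

def linear(n):
    """Linear (identity) activation: output equals the net input."""
    return n

def sigmoid(n):
    """Logistic sigmoid: a = 1 / (1 + e^(-n)), squashes n into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))

print(hardlim(-0.3), linear(-0.3), round(sigmoid(-0.3), 3))  # 0.0 -0.3 0.426
```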
Artificial Neural Networks
A neural network is a massively parallel, distributed
processor made up of simple processing units
(artificial neurons).
[Figure: single-layer network: an input layer connected directly to an output layer]
Different Network Topologies
Multi-layer feed-forward networks
– One or more hidden layers; each layer receives input
only from the preceding layer.
[Figure: a 2-layer (1-hidden-layer) fully connected network: input layer, hidden layer, output layer]
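A minimal forward-pass sketch of such a 1-hidden-layer, fully connected feed-forward network; the layer sizes and weights below are made up for illustration:

```python
import math

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def layer(inputs, weights, biases, f):
    """One fully connected layer: each unit sees every output of the previous layer."""
    return [f(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, W_hidden, b_hidden, W_out, b_out):
    h = layer(x, W_hidden, b_hidden, sigmoid)   # hidden layer
    return layer(h, W_out, b_out, sigmoid)      # output layer

# 2 inputs -> 2 hidden units -> 1 output (illustrative weights).
W_h = [[0.5, -0.4], [0.3, 0.8]]
b_h = [0.1, -0.2]
W_o = [[1.0, -1.0]]
b_o = [0.05]
print(forward([1.0, 0.0], W_h, b_h, W_o, b_o))
```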
Different Network Topologies
Recurrent networks
– A network with feedback, where some of its
outputs are fed back to some of its inputs
(discrete time).
[Figure: recurrent network: input layer and output layer, with feedback connections from the outputs back to the inputs]
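A sketch of one discrete-time step of such a recurrent unit, where the previous output is fed back as an extra input; all weights and names below are illustrative:

```python
import math

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def recurrent_step(x, prev_output, w_in, w_fb, bias):
    """One time step: combine the current input with the fed-back previous output."""
    n = sum(w * xi for w, xi in zip(w_in, x)) + w_fb * prev_output + bias
    return sigmoid(n)

# Run a short input sequence through a single recurrent unit.
output = 0.0
for x_t in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    output = recurrent_step(x_t, output, w_in=[0.7, -0.3], w_fb=0.5, bias=0.0)
    print(round(output, 3))
```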
How to Decide on a Network Topology?
– # of input nodes?
• Number of features
– # of output nodes?
• Suitable to encode the output representation
– transfer function?
• Suitable to the problem
– # of hidden nodes?
• No exact rule is known; usually chosen empirically