Introduction To Neural Networks: Freek Stulp
Neural Networks
Freek Stulp
Overview
Biological Background
Artificial Neuron
Classes of Neural Networks
1. Perceptrons
2. Multi-Layered Feed-Forward Networks
3. Recurrent Networks
Conclusion
Biological Background
Neuron consists of:
Cell body
Dendrites
Axon
Synapses
Neural activation travels
through dendrites and the axon
Synapses have different
strengths
Artificial Neuron
[Figure: input links (dendrites) feed into a unit (cell body) with output links (axon); input activations a_j arrive over weights W_ji]
in_i = Σ_j a_j W_ji
a_i = g(in_i)
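The unit's computation can be sketched in Python (the function name and the choice of a sigmoid for g are illustrative assumptions, not from the slides):

```python
import math

def neuron_output(inputs, weights, g=lambda x: 1.0 / (1.0 + math.exp(-x))):
    """Artificial neuron: weighted sum in_i = sum_j a_j * W_ji,
    followed by an activation function g (sigmoid assumed here)."""
    in_i = sum(a_j * w_ji for a_j, w_ji in zip(inputs, weights))
    return g(in_i)
```

For example, `neuron_output([1.0, 0.0], [0.5, 0.5])` computes g(0.5) ≈ 0.62.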
Class I: Perceptron
[Figure: inputs I_j with weights W_j feed a single output unit O; a fixed input of -1 with weight W_0 acts as a threshold]
in = Σ_j a_j W_j
a = g(in)
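A minimal sketch of this perceptron, assuming g is a step function and treating `weights[0]` as W_0, the weight of the fixed -1 input (names are illustrative):

```python
def perceptron(inputs, weights):
    """Threshold perceptron: weights[0] is W0, paired with a constant
    input of -1; the remaining weights match the real inputs."""
    in_ = -1.0 * weights[0] + sum(a * w for a, w in zip(inputs, weights[1:]))
    return 1 if in_ >= 0 else 0
```

With weights [0.5, 1.0, 1.0], for instance, this perceptron computes boolean OR.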
Learning in Perceptrons
Perceptrons can learn mappings
from inputs I to outputs O by
changing the weights W

Training set D:
Inputs: I0, I1 ... In
Targets: T0, T1 ... Tn

The output O of the network is
not necessarily equal to T!

Example: boolean OR
d | I  | T
0 | 00 | 0
1 | 01 | 1
2 | 10 | 1
3 | 11 | 1
Learning in Perceptrons
The error is often defined as:
E(W) = 1/2 Σ_{d∈D} (t_d - o_d)²
Update rules:
w_i ← w_i + Δw_i
Δw_i = -η ∂E/∂w_i
∂E/∂w_i = ∂/∂w_i 1/2 Σ_{d∈D} (t_d - o_d)²
        = -Σ_{d∈D} (t_d - o_d) x_id
where x_id is the i-th input of training example d, so
Δw_i = η Σ_{d∈D} (t_d - o_d) x_id.
This is called gradient descent
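The weight update can be sketched as an incremental (per-example) version of this rule, applied to the boolean OR training set above. The thresholded output and the learning rate η = 0.1 are illustrative assumptions:

```python
def train_perceptron(data, n_inputs, eta=0.1, epochs=20):
    """Incremental weight updates Dw_i = eta * (t - o) * x_i.
    weights[0] is the bias weight W0 for a fixed input of -1."""
    w = [0.0] * (n_inputs + 1)
    for _ in range(epochs):
        for x, t in data:
            in_ = -1.0 * w[0] + sum(a * wi for a, wi in zip(x, w[1:]))
            o = 1 if in_ >= 0 else 0
            w[0] += eta * (t - o) * -1.0  # fixed bias input of -1
            for i in range(n_inputs):
                w[i + 1] += eta * (t - o) * x[i]
    return w

# Training set D for boolean OR (from the slide)
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

Because OR is linearly separable, the weights converge within a few epochs and classify all four examples correctly.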
Class II: Multi-layer
Feed-forward Networks
Multiple layers:
hidden layer(s) between input and output
[Figure: input, hidden, and output layers]
Feed-forward:
output links connect only to
input links in the next
layer
Complex non-linear
functions can be
represented
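A feed-forward pass through such a layered network can be sketched as follows (the sigmoid activations and the weight-matrix layout are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, layers):
    """layers is a list of weight matrices; each row of a matrix holds
    the incoming weights of one unit in that layer. Activations flow
    strictly forward, layer by layer."""
    a = list(inputs)
    for W in layers:
        a = [sigmoid(sum(w * x for w, x in zip(row, a))) for row in W]
    return a
```

For a 2-2-1 network, call `feed_forward([1.0, 0.0], [hidden_W, output_W])` with a 2x2 hidden matrix and a 1x2 output matrix.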
Learning in MLFF Networks
For the output layer, weight updating is similar to
perceptrons.
Problem: what are the errors in the hidden layer?
Backpropagation Algorithm
For each hidden layer (from output to input):
For each unit in the layer, determine how much it contributed to
the errors in the previous layer.
Adapt the weights according to this contribution.
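The steps above can be sketched for a single hidden layer with sigmoid units; the delta formulas (e.g. δ_k = o_k(1-o_k)(t_k-o_k) at the output) follow the standard backpropagation derivation and are not given on the slides, and the variable names are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W_hid, W_out):
    """Forward pass: hidden activations, then output activations."""
    hid = [sigmoid(sum(w * a for w, a in zip(row, x))) for row in W_hid]
    out = [sigmoid(sum(w * h for w, h in zip(row, hid))) for row in W_out]
    return hid, out

def backprop_step(x, t, W_hid, W_out, eta=0.5):
    """One backpropagation update for a single training example."""
    hid, out = forward(x, W_hid, W_out)
    # errors in the output layer: delta_k = o_k * (1 - o_k) * (t_k - o_k)
    d_out = [o * (1 - o) * (tk - o) for o, tk in zip(out, t)]
    # hidden-layer errors: each unit's contribution to the output errors
    d_hid = [h * (1 - h) * sum(W_out[k][j] * d_out[k] for k in range(len(d_out)))
             for j, h in enumerate(hid)]
    # adapt the weights according to these contributions
    for k, row in enumerate(W_out):
        for j in range(len(row)):
            row[j] += eta * d_out[k] * hid[j]
    for j, row in enumerate(W_hid):
        for i in range(len(row)):
            row[i] += eta * d_hid[j] * x[i]
```

Repeated steps on an example reduce its squared error, since each update moves the weights along the negative gradient.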
Class III: Recurrent Networks
No restrictions on
connections
[Figure: input, hidden, and output units with unrestricted, possibly cyclic connections]
Behaviour is more
difficult to predict and
understand
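One way to see why recurrent behaviour is harder to predict: the hidden state feeds back into itself, so the same input can produce different activations depending on history. A minimal Elman-style update sketch (the weight layout is an assumption, not from the slides):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def recurrent_step(x, h_prev, W_in, W_rec):
    """New hidden state depends on the current input AND the previous
    hidden state, via the recurrent connections W_rec."""
    return [sigmoid(sum(w * xi for w, xi in zip(W_in[j], x)) +
                    sum(w * hi for w, hi in zip(W_rec[j], h_prev)))
            for j in range(len(W_in))]
```

Feeding the same input with two different previous states yields two different new states.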
Conclusion
Neural networks draw inspiration from biology, though
artificial brains are still very far away.
Literature
Tom M. Mitchell, Machine Learning, McGraw-Hill, 1997