
Introduction to Neural Networks
Freek Stulp
Overview
Biological Background
Artificial Neuron
Classes of Neural Networks
1. Perceptrons
2. Multi-Layered Feed-Forward Networks
3. Recurrent Networks
Conclusion

Biological Background

A neuron consists of:
- Cell body
- Dendrites
- Axon
- Synapses

Neural activation:
- Signals travel through the dendrites and axon
- Synapses have different strengths
Artificial Neuron

[Diagram: input links (dendrites) feed a unit (cell body), which sends its activation over output links (axon)]

Each unit i computes a weighted sum of its inputs and applies an activation function g:

$in_i = \sum_j W_{ji} a_j$
$a_i = g(in_i)$

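As a concrete illustration of these two equations, here is a minimal Python sketch (the function name, the weight values, and the choice of tanh for g are assumptions for the example):

```python
import math

def neuron_output(inputs, weights, g=math.tanh):
    """One artificial neuron: weighted sum of the inputs, then activation g."""
    in_i = sum(w_ji * a_j for w_ji, a_j in zip(weights, inputs))  # in_i = sum_j W_ji * a_j
    return g(in_i)                                                # a_i = g(in_i)

# Example: two input links with activations 0.5 and -1.0
print(neuron_output([0.5, -1.0], [0.8, 0.2]))  # tanh(0.5*0.8 + (-1.0)*0.2)
```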
Class I: Perceptron

[Diagram: inputs I_j with weights W_j feed a single output unit O; a fixed input of -1 carries the bias weight W_0]

$in = \sum_j W_j a_j$
$a = g(in) = g(-W_0 + W_1 a_1 + W_2 a_2)$

$g(in) = \begin{cases} 0, & in < 0 \\ 1, & in \geq 0 \end{cases}$
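A minimal sketch of this unit in Python (the weight values are assumptions chosen so that the perceptron computes boolean OR; the slide leaves g at in = 0 unspecified, so the convention in >= 0 → 1 is assumed):

```python
def g(in_):
    # Step activation from the slide: 0 below the threshold, 1 at or above it
    return 1 if in_ >= 0 else 0

def perceptron(a1, a2, W0=0.5, W1=1.0, W2=1.0):
    in_ = -W0 + W1 * a1 + W2 * a2  # bias weight W0 acts on a fixed input of -1
    return g(in_)

for a1 in (0, 1):
    for a2 in (0, 1):
        print((a1, a2), perceptron(a1, a2))  # prints the OR truth table
```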
Learning in Perceptrons

Perceptrons can learn mappings from inputs I to outputs O by changing the weights W.

Training set D:
- Inputs: I_0, I_1, ..., I_n
- Targets: T_0, T_1, ..., T_n

Example: boolean OR

d | I     | T
0 | (0,0) | 0
1 | (0,1) | 1
2 | (1,0) | 1
3 | (1,1) | 1

The output O of the network is not necessarily equal to T!
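The OR training set can be written as data, and an arbitrary initial weight vector illustrates the last point (a small sketch; the untrained weight values are assumed):

```python
D = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # boolean OR: (I_d, T_d) pairs

W0, W1, W2 = 1.0, 0.2, 0.2  # assumed untrained weights
for (a1, a2), t in D:
    o = 1 if -W0 + W1 * a1 + W2 * a2 >= 0 else 0
    print((a1, a2), "target:", t, "output:", o)  # O differs from T before learning
```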
Learning in Perceptrons

The error is often defined as:

$E(W) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2$

Go towards the minimum error!

Update rules:
- $w_i \leftarrow w_i + \Delta w_i$
- $\Delta w_i = -\eta \, \partial E / \partial w_i$
- $\partial E / \partial w_i = \frac{\partial}{\partial w_i} \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2 = -\sum_{d \in D} (t_d - o_d) \, i_{id}$

where $i_{id}$ is the i-th input of example d. This is called gradient descent.
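A minimal gradient-descent sketch for these update rules, training on the OR set from the previous slide (the learning rate eta = 0.1, zero initial weights, and 100 epochs are assumed values; the unit is treated as linear during the update, as in the classic delta rule):

```python
eta = 0.1                     # assumed learning rate (eta)
w = [0.0, 0.0, 0.0]           # w[0] is the bias weight W0, on a fixed input of -1
D = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # boolean OR

for _ in range(100):          # batch gradient descent
    dw = [0.0, 0.0, 0.0]
    for (a1, a2), t in D:
        i = (-1, a1, a2)      # inputs of example d, including the bias input
        o = sum(w_k * i_k for w_k, i_k in zip(w, i))      # linear output o_d
        for k in range(3):
            dw[k] += eta * (t - o) * i[k]  # accumulate Dw_k = -eta * dE/dw_k
    w = [w_k + d_k for w_k, d_k in zip(w, dw)]            # w_k <- w_k + Dw_k

for (a1, a2), t in D:
    o = -w[0] + w[1] * a1 + w[2] * a2
    print((a1, a2), "target:", t, "output:", round(o, 2))  # rounding gives OR
```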
Class II: Multi-layer Feed-forward Networks

[Diagram: input layer → hidden layer → output layer]

Multiple layers:
- hidden layer(s)

Feed-forward:
- Output links only connect to input links in the next layer

Complex non-linear functions can be represented.
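A minimal forward pass through such a network, with one hidden layer (the 2-3-1 layer sizes, the weight values, and the sigmoid activation are assumptions for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One unit per weight row: a_i = g(sum_j W_ij * a_j)
    return [sigmoid(sum(w * a for w, a in zip(row, inputs))) for row in weights]

W_hidden = [[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]]  # 3 hidden units, 2 inputs
W_output = [[1.0, -1.0, 0.5]]                      # 1 output unit, 3 hidden inputs

hidden = layer([0.0, 1.0], W_hidden)   # activations flow strictly forward
output = layer(hidden, W_output)
print(output)
```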
Learning in MLFF Networks

For the output layer, weight updating is similar to perceptrons.

Problem: what are the errors in the hidden layers?

Backpropagation algorithm:
- For each hidden layer (from output to input):
  - For each unit in the layer, determine how much it contributed to the errors in the previous layer (the layer just processed, nearer the output).
  - Adapt the unit's weights according to this contribution.

This is also gradient descent.
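A compact backpropagation sketch for one hidden layer, trained on XOR, a mapping a single perceptron cannot represent (the sigmoid units, bias inputs, learning rate, and epoch count are assumptions, and results vary with the random initialization):

```python
import math, random

random.seed(0)
def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))

n_in, n_hid, eta = 2, 3, 0.5
# One extra weight per unit for a constant bias input of 1 (an assumed
# detail the slides do not show explicitly)
Wh = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
Wo = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

D = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    xb = list(x) + [1]                                   # append bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in Wh]
    hb = h + [1]
    o = sigmoid(sum(w * v for w, v in zip(Wo, hb)))
    return xb, h, hb, o

for _ in range(10000):
    for x, t in D:
        xb, h, hb, o = forward(x)
        delta_o = (t - o) * o * (1 - o)          # output-layer error term
        # Hidden deltas: how much each hidden unit contributed to the
        # output error, weighted by its outgoing connection
        delta_h = [h[i] * (1 - h[i]) * Wo[i] * delta_o for i in range(n_hid)]
        Wo = [w + eta * delta_o * v for w, v in zip(Wo, hb)]
        for i in range(n_hid):
            Wh[i] = [w + eta * delta_h[i] * v for w, v in zip(Wh[i], xb)]

for x, t in D:
    print(x, t, round(forward(x)[3], 2))  # outputs should approach 0, 1, 1, 0
```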
Class III: Recurrent Networks

[Diagram: input, hidden, and output units with unrestricted connections]

No restrictions on connections.

Behaviour is more difficult to predict/understand.
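A minimal recurrent step (an Elman-style sketch; all weight values are assumptions): the hidden state feeds back into itself, so the output depends on the whole input history rather than just the current input, which is what makes the behaviour harder to predict:

```python
import math

def step(x, h_prev, W_in=0.8, W_rec=0.5, W_out=1.0):
    h = math.tanh(W_in * x + W_rec * h_prev)  # recurrent link: h depends on h_prev
    return h, W_out * h

h = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:
    h, y = step(x, h)
    print(round(y, 3))  # the first input keeps echoing through the state
```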
Conclusion

Inspiration comes from biology, though artificial brains are still very far away.

Perceptrons are too simple for most problems.

MLFF networks are good function approximators.
- Many of your articles use these networks!

Recurrent networks are complex but useful too.
Literature

Artificial Intelligence: A Modern Approach
- Stuart Russell and Peter Norvig

Machine Learning
- Tom M. Mitchell

