
Introduction To Artificial Neural Networks


Introduction to Artificial Neural Networks

Ahmed Guessoum
Natural Language Processing and Machine Learning
Research Group
Laboratory for Research in Artificial Intelligence
Université des Sciences et de la Technologie
Houari Boumediene

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 1


Lecture Outline
• The Perceptron
• Multi-Layer Networks
– Nonlinear transfer functions
– Multi-layer networks of nonlinear units (sigmoid, hyperbolic tangent)
• Backpropagation of Error
– The backpropagation algorithm
– Training issues
– Convergence
– Overfitting
• Hidden-Layer Representations
• Examples: Face Recognition and Text-to-Speech
• Backpropagation and Faster Training
• Some Open problems
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 2
In the beginning was … the Neuron!
• A neuron (nervous system cell): a many-inputs / one-output unit
• The output can be excited or not excited
• Incoming signals from other neurons determine whether the neuron shall excite ("fire")
• The output depends on the attenuations occurring in the synapses: the parts where a neuron communicates with another
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks
The Synapse Concept

• The synapse resistance to the incoming signal can be changed during a "learning" process [Hebb, 1949]

Hebb’s Rule:
If an input of a neuron is repeatedly and persistently
causing the neuron to fire, then a metabolic change
happens in the synapse of that particular input to
reduce its resistance

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks


Connectionist
(Neural Network) Models

• Human Brain
– Number of neurons: ~100 billion (10^11)
– Connections per neuron: ~10-100 thousand (10^4 - 10^5)
– Neuron switching time: ~0.001 (10^-3) second
– Scene recognition time: ~0.1 second
– 100 inference steps don't seem sufficient!
⇒ Massively parallel computation

• (List of animals by number of neurons:


https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons )

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 5


Mathematical Modelling

The neuron calculates a weighted sum x (or net) of its inputs
and compares it to a threshold T. If the sum is higher than
the threshold, the output S is set to 1, otherwise to -1.

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks


The Perceptron

[Diagram: perceptron unit with inputs x0 = 1, x1, x2, …, xn, weights w0, w1, …, wn, and a summation node computing Σ_{i=0}^{n} wi xi]

o(x1, x2, …, xn) = 1 if Σ_{i=0}^{n} wi xi ≥ 0, -1 otherwise

Vector notation: o(x) = sgn(w · x) = 1 if w · x ≥ 0, -1 otherwise

• Perceptron: Single Neuron Model


– Linear Threshold Unit (LTU) or Linear Threshold Gate (LTG)
– Net input to unit: defined as a linear combination net(x) = Σ_{i=0}^{n} wi xi
– Output of unit: threshold (activation) function on net input (threshold θ = -w0)


• Perceptron Networks
– Neuron is modeled using a unit connected by weighted links wi to other units
– Multi-Layer Perceptron (MLP)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 7
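To make the threshold behaviour concrete, here is a minimal Python sketch (not from the slides) of a single perceptron unit; x0 = 1 plays the role of the bias input, and the example weights are the AND weights quoted on a later slide.

# Minimal perceptron unit: o(x) = 1 if w . x >= 0, else -1
def perceptron_output(weights, inputs):
    # Prepend the constant bias input x0 = 1 so that w0 acts as -threshold
    x = [1.0] + list(inputs)
    net = sum(w * xi for w, xi in zip(weights, x))
    return 1 if net >= 0.0 else -1

# Example: weights that implement AND(x1, x2)
w_and = [-0.8, 0.5, 0.5]                  # w0, w1, w2
print(perceptron_output(w_and, [1, 1]))   # 1
print(perceptron_output(w_and, [1, 0]))   # -1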


Connectionist (Neural Network) Models

• Definitions of Artificial Neural Networks (ANNs)


– “… a system composed of many simple processing
elements operating in parallel whose function is
determined by network structure, connection strengths,
and the processing performed at computing elements or
nodes.” - DARPA (1988)
• Properties of ANNs
– Many neuron-like threshold switching units
– Many weighted interconnections among units
– Highly parallel, distributed processing
– Emphasis on tuning weights automatically
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 8
Decision Surface of a Perceptron

[Figure: two 2-D scatter plots of + and - points. Example A is linearly separable by a line; Example B is not.]

Example A                     Example B
• Perceptron: Can Represent Some Useful Functions (And, Or, Nand, Nor)
– LTU emulation of logic gates (McCulloch and Pitts, 1943)
– e.g., What weights represent g(x1, x2) = AND(x1, x2)? OR(x1, x2)? NOT(x)?
(o(x) = sgn(w0 + w1·x1 + w2·x2); AND: w0 = -0.8, w1 = w2 = 0.5; OR: w0 = -0.3, w1 = w2 = 0.5)
• Some Functions are Not Representable
– e.g., not linearly separable
– Solution: use networks of perceptrons (LTUs)
Ahmed Guessoum – Intro. to Neural Networks 9
24/06/2018 AMLSS
Learning Rules for Perceptrons
• Learning Rule  Training Rule
– Not specific to supervised learning
– Idea: Gradual building/update of a model
• Hebbian Learning Rule (Hebb, 1949)
– Idea: if two units are both active (“firing”),
weights between them should increase
– wij = wij + r oi oj
where r is a learning rate constant
– Supported by neuropsychological evidence
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 10
Learning Rules for Perceptrons
• Perceptron Learning Rule (Rosenblatt, 1959)
– Idea: when a target output value is provided for a single
neuron with fixed input, it can incrementally update weights to
learn to produce the output
– Assume binary (boolean-valued) input/output units; single LTU
– wi ← wi + Δwi
  Δwi = r (t - o) xi
where t = c(x) is target output value, o is perceptron output,
r is small learning rate constant (e.g., 0.1)
– Convergence proven for D linearly separable and r small
enough

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 11


Perceptron Learning Algorithm

• Simple Gradient Descent Algorithm


– Applicable to concept learning, symbolic learning (with
proper representation)
• Algorithm Train-Perceptron (D = {<x, t(x) = c(x)>})
– Initialize all weights wi to random values
– WHILE not all examples correctly predicted DO
FOR each training example x  D
Compute current output o(x)
FOR i = 1 to n
wi  wi + r(t - o)xi // perceptron learning rule

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 12
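A minimal Python sketch of the Train-Perceptron procedure above; the OR-style data set (with ±1 targets), the learning rate, and the epoch cap are illustrative assumptions.

import random

def train_perceptron(D, r=0.1, max_epochs=100):
    """Perceptron learning rule: wi <- wi + r (t - o) xi, with bias input x0 = 1."""
    n = len(D[0][0])                      # number of input attributes
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]
    for _ in range(max_epochs):
        all_correct = True
        for x, t in D:
            xs = [1.0] + list(x)          # bias input x0 = 1
            o = 1 if sum(wi * xi for wi, xi in zip(w, xs)) >= 0 else -1
            if o != t:
                all_correct = False
                for i in range(n + 1):
                    w[i] += r * (t - o) * xs[i]
        if all_correct:                   # stop once every example is predicted correctly
            break
    return w

# Hypothetical training set for OR(x1, x2) with targets in {-1, +1}
D = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_perceptron(D))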


Perceptron Learning Algorithm

• Perceptron Learnability
– Recall: can only learn h ∈ H, i.e., linearly separable (LS)
functions
– Minsky and Papert, 1969: demonstrated representational
limitations
• e.g., parity (n-attribute XOR: x1 ⊕ x2 ⊕ … ⊕ xn)
• e.g., symmetry, connectedness in visual pattern
recognition
• Influential book Perceptrons discouraged ANN research
for ~10 years
– NB: “Can we transform learning problems into LS ones?”
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 13
Linear Separators

• Functional Definition
– f(x) = 1 if w1x1 + w2x2 + … + wnxn ≥ θ, 0 otherwise
– θ: threshold value

[Figure: a 2-D scatter of + and - points separated by a line: a linearly separable (LS) data set]

• Non-Linearly Separable Functions
– Disjunctions: c(x) = x1' ∨ x2' ∨ … ∨ xm'
– m of n: c(x) = at least 3 of (x1', x2', …, xm')
– Exclusive OR (XOR): c(x) = x1 ⊕ x2
– General DNF: c(x) = T1 ∨ T2 ∨ … ∨ Tm; Ti = l1 ∧ l2 ∧ … ∧ lk

• Change of Representation Problem


– Can we transform non-LS problems into LS ones?
– Is this meaningful? Practical?
– Does it represent a significant fraction of real-world problems? 14
Perceptron Convergence
• Perceptron Convergence Theorem
– Claim: If there exists a set of weights that are consistent with the data
(i.e., the data is linearly separable), the perceptron learning algorithm
will converge
– Caveat 1: How long will this take?
– Caveat 2: What happens if the data is not LS?
• Perceptron Cycling Theorem
– Claim: If the training data is not LS the perceptron learning algorithm
will eventually repeat the same set of weights and thereby enter an
infinite loop
• How to Provide More Robustness, Expressivity?
– Objective 1: develop algorithm that will find closest approximation
– Objective 2: develop architecture to overcome representational
limitation 15
Gradient Descent:
Principle
• Understanding Gradient Descent for Linear Units
– Consider simpler, unthresholded linear unit: o(x) = net(x) = Σ_{i=0}^{n} wi xi
– Objective: find "best fit" to D
• Approximation Algorithm
– Quantitative objective: minimize error over training data set D
– Error function: sum squared error (SSE)

  E(w) = Error_D(w) = (1/2) Σ_{x∈D} (t(x) - o(x))²
• How to Minimize?
– Simple optimization
– Move in direction of steepest gradient in weight-error space
• Computed by finding tangent
• i.e. partial derivatives (of E) with respect to weights (wi)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 16


Gradient Descent:
Derivation of Delta/LMS (Widrow-Hoff) Rule
• Definition: Gradient
  ∇E(w) = [∂E/∂w0, ∂E/∂w1, …, ∂E/∂wn]
• Modified Gradient Descent Training Rule
  Δw = -r ∇E(w)
  Δwi = -r ∂E/∂wi

  ∂E/∂wi = ∂/∂wi [(1/2) Σ_{x∈D} (t(x) - o(x))²] = (1/2) Σ_{x∈D} ∂/∂wi (t(x) - o(x))²

  ∂E/∂wi = (1/2) Σ_{x∈D} 2 (t(x) - o(x)) ∂/∂wi (t(x) - o(x)) = Σ_{x∈D} (t(x) - o(x)) ∂/∂wi (t(x) - w · x)

  ∂E/∂wi = Σ_{x∈D} (t(x) - o(x)) (-xi)
17
Gradient Descent:
Algorithm using Delta/LMS Rule
• Algorithm Gradient-Descent (D, r)
– Each training example is a pair of the form <x, t(x)>, where x:
input vector; t(x): target vector; r :learning rate
– Initialize all weights wi to (small) random values
– UNTIL the termination condition is met, DO
      Initialize each Δwi to zero
      FOR each instance <x, t(x)> in D, DO
         Input x into the unit and compute output o
         FOR each linear unit weight wi, DO
            Δwi ← Δwi + r(t - o)xi
      FOR each linear unit weight wi, DO
         wi ← wi + Δwi
– RETURN final w
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 18
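A small Python sketch of the batch Gradient-Descent procedure above for a single unthresholded linear unit; the data set (generated by a hypothetical linear target), learning rate, and fixed number of epochs are illustrative choices.

def gradient_descent(D, r=0.05, epochs=200):
    """Batch Delta/LMS rule for a linear unit o(x) = w . x (with x0 = 1)."""
    n = len(D[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        delta_w = [0.0] * (n + 1)                       # accumulate gradient over all of D
        for x, t in D:
            xs = [1.0] + list(x)
            o = sum(wi * xi for wi, xi in zip(w, xs))
            for i in range(n + 1):
                delta_w[i] += r * (t - o) * xs[i]
        w = [wi + dwi for wi, dwi in zip(w, delta_w)]   # one batch update per epoch
    return w

# Hypothetical data generated by t = 2*x1 - 1 (fitted weights should approach [-1, 2])
D = [([0.0], -1.0), ([1.0], 1.0), ([2.0], 3.0), ([3.0], 5.0)]
print(gradient_descent(D))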
Gradient Descent:
Algorithm using Delta/LMS Rule

• Mechanics of Delta Rule


– Gradient is based on a derivative
– Significance: later, we will use nonlinear activation
functions (aka transfer functions, squashing functions)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 19


Gradient Descent:
Perceptron Rule versus Delta/LMS Rule
[Figure: three 2-D data sets of + and - points. Example A is linearly separable; Examples B and C are not.]

Example A                 Example B                 Example C

• LS Concepts: Can Achieve Perfect Classification


– Example A: perceptron training rule converges
• Non-LS Concepts: Can Only Approximate
– Example B: not LS; delta rule converges, but can’t do better than 3
correct
– Example C: not LS; better results from delta rule

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 20


Incremental (Stochastic)
Gradient Descent
• Batch Mode Gradient Descent
– UNTIL the termination condition is met, DO
1. Compute the gradient 𝜵𝑬𝑫 𝒘
2. Update the weights 𝒘 ← 𝒘 − 𝒓 𝜵𝑬𝑫 𝒘
– RETURN final w
• Incremental (Online) Mode Gradient Descent
– UNTIL the termination condition is met, DO
FOR each <x, t(x)> in D, DO
1. Compute the gradient 𝜵𝑬𝒅 𝒘
2. Update the weights 𝒘 ← 𝒘 − 𝒓 𝜵𝑬𝒅 𝒘
– RETURN final w
• Emulating Batch Mode

  E_D(w) = (1/2) Σ_{x∈D} (t(x) - o(x))²,    E_d(w) = (1/2) (t(x) - o(x))²

– Incremental gradient descent can approximate batch gradient descent arbitrarily
  closely if r is made small enough

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 21
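For contrast with the batch sketch given earlier, a minimal incremental (online) version in Python: the weights are updated after each example, i.e. using the gradient of E_d rather than E_D. The data set and learning rate are again hypothetical.

def incremental_gradient_descent(D, r=0.05, epochs=200):
    """Online Delta/LMS rule: update w after every single example <x, t(x)>."""
    n = len(D[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        for x, t in D:
            xs = [1.0] + list(x)
            o = sum(wi * xi for wi, xi in zip(w, xs))
            for i in range(n + 1):
                w[i] += r * (t - o) * xs[i]       # per-example update (gradient of E_d)
    return w

D = [([0.0], -1.0), ([1.0], 1.0), ([2.0], 3.0), ([3.0], 5.0)]
print(incremental_gradient_descent(D))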


GD vs Stochastic GD

• Gradient Descent
– Converges to a weight vector with minimal error, regardless of whether D is
  linearly separable, provided a sufficiently small learning rate is used
– Difficulties:
  • Convergence to a local minimum can be slow
  • No guarantee of finding the global minimum
• Stochastic Gradient Descent is intended to alleviate these difficulties
• Differences
– In standard GD, the error is summed over D before updating the weights
– Standard GD requires more computation per weight update step
  (but uses a larger step size per weight update)
– Stochastic GD can sometimes avoid falling into local minima
• Both are commonly used in practice

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 22


Artificial Neural Networks

Adaptive interaction between individual neurons

Power: the collective behavior of interconnected neurons

The hidden layer learns to recode (or to provide a representation of) the inputs
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks


Multi-Layer Networks
of Nonlinear Units
[Diagram: feedforward network with input layer (x1, x2, x3), hidden layer (h1, h2, h3, h4), and output layer (o1, o2); u11 is an input-to-hidden weight and v42 a hidden-to-output weight]

• Nonlinear Units
– Recall: activation function sgn(w · x)
– Nonlinear activation function: generalization of sgn

• Multi-Layer Networks
– A specific type: Multi-Layer Perceptrons (MLPs)
– Definition: a multi-layer feedforward network is composed of an input
layer, one or more hidden layers, and an output layer
– Only hidden and output layers contain perceptrons (threshold or
nonlinear units)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 24


Multi-Layer Networks
of Nonlinear Units
[Diagram: the same feedforward network with input (x1, x2, x3), hidden (h1, …, h4), and output (o1, o2) layers]

• MLPs in Theory
– A network of 2 or more layers can represent any function (to arbitrarily small error)
– Training even 3-unit multi-layer ANNs is NP-hard (Blum and Rivest, 1992)
• MLPs in Practice
– Finding or designing effective networks for arbitrary
functions is difficult
– Training is very computation-intensive even when
structure is “known”
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 25
Nonlinear Activation Functions

[Diagram: a sigmoid unit with inputs x0 = 1, x1, x2, …, xn and weights w0, w1, …, wn]

o(x) = σ(net(x)),    net(x) = Σ_{i=0}^{n} wi xi = w · x

• Sigmoid Activation Function
– Linear threshold gate activation function: sgn(w · x)
– Nonlinear activation (aka transfer, squashing) function: generalization of sgn
– σ is the sigmoid function: σ(net) = 1 / (1 + e^(-net))
– Can derive gradient rules to train
  • One sigmoid unit
  • Multi-layer, feedforward networks of sigmoid units (using backpropagation)
• Hyperbolic Tangent Activation Function: σ(net) = tanh(net) = sinh(net) / cosh(net) = (e^net - e^(-net)) / (e^net + e^(-net))
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 26
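A small Python sketch of the two activation functions above, together with a single sigmoid unit; the weights and inputs are hypothetical values.

import math

def sigmoid(net):
    # Logistic (sigmoid) squashing function: output in (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

def tanh_act(net):
    # Hyperbolic tangent: sinh(net) / cosh(net), output in (-1, 1)
    return math.tanh(net)

def sigmoid_unit(weights, inputs):
    xs = [1.0] + list(inputs)                 # bias input x0 = 1
    net = sum(w * x for w, x in zip(weights, xs))
    return sigmoid(net)

# Hypothetical weights and inputs
print(sigmoid_unit([-0.5, 1.0, 1.0], [0.3, 0.4]))
print(tanh_act(0.5))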
Error Gradient for a Sigmoid Unit
• Recall: Gradient of Error Function
  ∇E(w) = [∂E/∂w0, ∂E/∂w1, …, ∂E/∂wn]
• Gradient of Sigmoid Activation Function
  ∂E/∂wi = ∂/∂wi [(1/2) Σ_{<x,t(x)>∈D} (t(x) - o(x))²] = (1/2) Σ_{<x,t(x)>∈D} ∂/∂wi (t(x) - o(x))²
         = (1/2) Σ_{<x,t(x)>∈D} 2 (t(x) - o(x)) ∂/∂wi (t(x) - o(x))
         = Σ_{<x,t(x)>∈D} (t(x) - o(x)) (-∂o(x)/∂wi)
         = -Σ_{<x,t(x)>∈D} (t(x) - o(x)) (∂o(x)/∂net(x)) (∂net(x)/∂wi)

• But we know:
  ∂o(x)/∂net(x) = ∂σ(net)/∂net = o(x)(1 - o(x)),    ∂net(x)/∂wi = ∂(w · x)/∂wi = xi

So:
  ∂E/∂wi = -Σ_{<x,t(x)>∈D} (t(x) - o(x)) o(x)(1 - o(x)) xi
27
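A short Python sketch applying the final formula above: the SSE gradient with respect to each weight of a single sigmoid unit. The data set, weights, and targets (in the open interval (0, 1)) are hypothetical.

import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def sse_gradient(D, w):
    """dE/dwi = -sum over x of (t - o) * o * (1 - o) * xi, for one sigmoid unit."""
    grad = [0.0] * len(w)
    for x, t in D:
        xs = [1.0] + list(x)
        o = sigmoid(sum(wi * xi for wi, xi in zip(w, xs)))
        for i in range(len(w)):
            grad[i] += -(t - o) * o * (1.0 - o) * xs[i]
    return grad

# Hypothetical example
D = [([0.0, 1.0], 0.9), ([1.0, 0.0], 0.1)]
print(sse_gradient(D, [0.1, -0.2, 0.05]))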
The Backpropagation Algorithm

• Intuitive Idea: Distribute the Blame for Error to the Previous Layers
• Algorithm Train-by-Backprop (D, r)
– Each training example is a pair of the form <x, t(x)>, where x: input vector; t(x): target
  vector; r: learning rate
– Initialize all weights wi to (small) random values
– UNTIL the termination condition is met, DO
    FOR each <x, t(x)> in D, DO
      Input the instance x to the unit and compute the output o(x) = σ(net(x))
      FOR each output unit k, DO (calculate its error δk)
         δk ← ok(x) (1 - ok(x)) (tk(x) - ok(x))
      FOR each hidden unit j, DO
         δj ← hj(x) (1 - hj(x)) Σ_{k∈outputs} vj,k δk
      Update each w = ui,j (a = hj) or w = vj,k (a = ok)
         w_start-layer,end-layer ← w_start-layer,end-layer + Δw_start-layer,end-layer
         Δw_start-layer,end-layer ← r δ_end-layer a_end-layer
– RETURN final u, v

[Diagram: feedforward network with input layer (x1, x2, x3), hidden layer (h1, h2, h3, h4), and output layer (o1, o2); u = input-to-hidden weights, v = hidden-to-output weights]

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 28


The Backpropagation Algorithm (1)

• Algorithm Train-by-Backprop (D, r)
– D is a set of training examples of the form <x, t(x)>, where
    x: input vector;
    t(x): target vector;
    r: learning rate
– Initialize all weights wi to (small) random values
– UNTIL the termination condition is met, DO

[Diagram: feedforward network with input layer (x1, x2, x3), hidden layer (h1, h2, h3, h4), and output layer (o1, o2)]

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 29


The Backpropagation Algorithm (cont.)

FOR each <x, t(x)> in D, DO
   Input the instance x to the unit and compute the output o(x) = σ(net(x))
   FOR each output unit k, DO (calculate its error δk)
      δk ← ok(x) (1 - ok(x)) (tk(x) - ok(x))
   FOR each hidden unit j, DO
      δj ← hj(x) (1 - hj(x)) Σ_{k∈outputs} vj,k δk
   Update each w = ui,j (a = hj) or w = vj,k (a = ok)
      w_start-layer,end-layer ← w_start-layer,end-layer + Δw_start-layer,end-layer
      Δw_start-layer,end-layer ← r δ_end-layer a_end-layer
– RETURN final u, v

[Diagram: feedforward network with input layer (x1, x2, x3), hidden layer (h1, h2, h3, h4), and output layer (o1, o2)]
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 30
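To make the two preceding slides concrete, here is a minimal Python sketch of stochastic backpropagation for one sigmoid hidden layer and sigmoid output units, following the δ and weight-update formulas above. The layer sizes, learning rate, number of epochs, and the XOR-style data set are illustrative assumptions, not values from the slides.

import math, random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_backprop(D, n_in, n_hidden, n_out, r=0.5, epochs=5000):
    # u[j][i]: input-to-hidden weights, v[k][j]: hidden-to-output weights (index 0 = bias)
    rnd = lambda: random.uniform(-0.5, 0.5)
    u = [[rnd() for _ in range(n_in + 1)] for _ in range(n_hidden)]
    v = [[rnd() for _ in range(n_hidden + 1)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, t in D:
            xs = [1.0] + list(x)
            # forward pass
            h = [sigmoid(sum(w * xi for w, xi in zip(u[j], xs))) for j in range(n_hidden)]
            hs = [1.0] + h
            o = [sigmoid(sum(w * hi for w, hi in zip(v[k], hs))) for k in range(n_out)]
            # delta_k = o_k (1 - o_k)(t_k - o_k) for each output unit
            d_out = [o[k] * (1 - o[k]) * (t[k] - o[k]) for k in range(n_out)]
            # delta_j = h_j (1 - h_j) * sum_k v_{j,k} delta_k for each hidden unit
            d_hid = [h[j] * (1 - h[j]) * sum(v[k][j + 1] * d_out[k] for k in range(n_out))
                     for j in range(n_hidden)]
            # weight updates: w <- w + r * delta * (input feeding that weight)
            for k in range(n_out):
                for j in range(n_hidden + 1):
                    v[k][j] += r * d_out[k] * hs[j]
            for j in range(n_hidden):
                for i in range(n_in + 1):
                    u[j][i] += r * d_hid[j] * xs[i]
    return u, v

# Hypothetical XOR-like task with targets 0.1 / 0.9 (sigmoid units cannot output 0 or 1)
D = [([0, 0], [0.1]), ([0, 1], [0.9]), ([1, 0], [0.9]), ([1, 1], [0.1])]
u, v = train_backprop(D, n_in=2, n_hidden=2, n_out=1)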
Backpropagation and Local Optima

• Gradient Descent in Backprop


– Performed over entire network weight vector
– Easily generalized to arbitrary directed graphs
– Property: Backpropagation on feedforward ANNs will find
a local (not necessarily global) error minimum

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 31


Backpropagation and Local Optima

• Backprop in Practice
– Local optimization often works well (can run multiple times)
– A weight momentum α is often included:
  Δw_start-layer,end-layer(n) = r δ_end-layer a_end-layer + α Δw_start-layer,end-layer(n - 1)
– Minimizes error over training examples
Generalization to subsequent instances?
– Training often very slow: thousands of iterations over D (epochs)
– Inference (applying network after training) typically very fast
• Classification
• Control
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 32
When to Consider Neural Networks
• Input: High-Dimensional and Discrete or Real-Valued
– e.g., raw sensor input
– Conversion of symbolic data to quantitative (numerical) representations possible
• Output: Discrete or Real Vector-Valued
– e.g., low-level control policy for a robot actuator
– Similar qualitative/quantitative (symbolic/numerical) conversions may apply
• Data: Possibly Noisy
• Target Function: Unknown Form
• Result: Human Readability Less Important Than Performance
– Performance measured purely in terms of accuracy and efficiency
– Readability: ability to explain inferences made using model; similar criteria
• Examples
– Speech phoneme recognition
– Image classification
– Financial prediction

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 33


Autonomous Learning Vehicle
in a Neural Net (ALVINN)
• Pomerleau et al
– http://www.cs.cmu.edu/afs/cs/project/alv/member/www/projects/ALVINN.html
– Drives 70mph on highways

Hidden-to-Output Unit
Weight Map
(for one hidden unit)

Input-to-Hidden Unit
Weight Map
(for one hidden unit)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 34


Feedforward ANNs:
Representational Power

• Representational (i.e., Expressive) Power


– 2-layer feedforward ANN
• Any Boolean function
• Any bounded continuous function (approximate with
arbitrarily small error) : 1 output (unthresholded linear
units) + 1 hidden (sigmoid)
– 3-layer feedforward ANN: any function (approximate with
arbitrarily small error): output (linear units), 2 hidden
(sigmoid)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 35


Learning Hidden Layer Representations

• Hidden Units and Feature Extraction


– Training procedure: hidden unit representations that minimize error E
– Sometimes backprop will define new hidden features that are not explicit in the input
representation x, but which capture properties of the input instances that are most
relevant to learning the target function t(x)
– Hidden units express newly constructed features
– Change of representation to linearly separable D’
• A Target Function (Sparse, aka 1-of-C, Coding)
Input                Hidden Values          Output
1 0 0 0 0 0 0 0  →  0.89 0.04 0.08  →  1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0  →  0.01 0.11 0.88  →  0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0  →  0.01 0.97 0.27  →  0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0  →  0.99 0.97 0.71  →  0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0  →  0.03 0.05 0.02  →  0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0  →  0.22 0.99 0.99  →  0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 0  →  0.80 0.01 0.98  →  0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 1  →  0.60 0.94 0.01  →  0 0 0 0 0 0 0 1
– ANNs can discover useful representations at the hidden layers

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 36


Convergence of Backpropagation

• No Guarantee of Convergence to Global Optimum


– Compare: perceptron convergence (to best h ∈ H, provided the target concept is in H; i.e., LS)
– Gradient descent to some local error minimum (perhaps not global)
– Possible improvements on Backprop (BP) (see later)
• Momentum term (BP variant; slightly different weight update rule)
• Stochastic gradient descent (BP variant)
• Train multiple nets with different initial weights
– Improvements on feedforward networks
• Bayesian learning for ANNs
• Other global optimization methods that integrate over multiple networks

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 37


Overtraining in ANNs
• Recall: Definition of Overfitting
– Performance of the model improves (converges) on Dtrain
  but worsens (or is much worse) on Dtest
• Overtraining: A Type of Overfitting
– Due to excessive training iterations

[Figure: error versus epochs (Example 1) and error versus epochs (Example 2)]


38
Overfitting in ANNs

• Other Possible Causes of Overfitting


– Number of hidden units sometimes set in advance
– Too many hidden units (“overfitting”)
• The network can memorise to a large extent the specifics of
the training data
• Analogy: fitting a quadratic polynomial with an approximator
of degree >> 2
– Too few hidden units (“underfitting”)
• ANNs with no growth
• Analogy: underdetermined linear system of equations (more
unknowns than equations)
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 39
Overfitting in ANNs

• Solution Approaches
– Prevention: attribute subset selection
– Avoidance
• Hold out cross-validation (CV) set or split k ways (when to
stop?)
• Weight decay: decrease each weight by some factor on each
epoch
– Detection/recovery: random restarts, addition and deletion
of units

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 40
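A minimal Python sketch of the weight-decay idea listed above: shrink every weight by a small factor once per epoch to penalise large weights and limit overfitting. The decay factor is an illustrative assumption.

def apply_weight_decay(weights, decay=0.001):
    # Shrink each weight toward zero by a small factor, once per training epoch
    return [(1.0 - decay) * w for w in weights]

w = [0.8, -1.5, 0.2]
w = apply_weight_decay(w)       # called at the end of each epoch
print(w)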


Example: Neural Nets for Face Recognition
The Task (http://www.cs.cmu.edu/~tom/faces.html)

• Learning Task: Classifying Camera Images of faces of


various people in various poses:
– 20 people; 32 images per person (624 greyscale images, each
with resolution 120 x 128, greyscale intensity 0 (black) to 255
(white))
– varying expressions (happy, sad, angry, neutral)
– Varying directions (looking left, right, straight ahead, up)
– Wearing glasses or not
– Variation in background behind the person
– Clothing worn by the person
– Position of face within image
• Variety of target functions can be learned
– Id of person; direction; gender; wearing glasses; etc.
• In this case: learn direction in which person is facing
Ahmed Guessoum – Intro. to Neural Networks 41
24/06/2018 AMLSS
Neural Nets for Face Recognition

[Figure: face-recognition network visualization. Four outputs (Left, Straight, Right, Up); 30 x 32 inputs; output-layer weights (including w0 = θ) after 1 epoch; hidden-layer weights after 1 epoch and after 25 epochs]
• 90% Accurate Learning Head Pose, Recognizing 1-of-20 Faces

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 42


Neural Nets for Face Recognition:
Design Choices

• Input Encoding:
– How to encode an image?
• Extract edges, regions of uniform intensity, other local
features?
• Problem: a variable number of features ⇒ a variable number of
input units
– Choice: encode the image as 30 x 32 pixel intensity values
(summary/means of the original 120 x 128) ⇒ computational
demands manageable
• This is crucial in case of ALVINN (autonomous driving)

• Output Encoding:
– ANN to output 1 of 4 values
• Option 1: a single unit (with values e.g. 0.2, 0.4, 0.6, 0.8)
• Option 2: 1-of-n output encoding (the better option)
– Note: Instead of 0 and 1 values, 0.1 and 0.9 are used (sigmoid
units cannot output 0 and 1 given finite weights)

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 43


Neural Nets for Face Recognition:
Design Choices

• Network graph structure: How many units to


include and how to interconnect them
– Commonly: 1 or 2 layers of sigmoid units
(occasionally 3). If more, training becomes long!
– How many hidden units? Fairly small preferable.
• E.g. with 3 hidden units: 5 min. training; 90%
with 30 hidden units: 1 hour training; barely better

• Other Learning Algorithm Parameters:


– Learning rate: r = 0.3; momentum α = 0.3 (if too big,
training fails to converge with acceptable error)
– Full gradient descent used.

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 44


Neural Nets for Face Recognition:
Design Choices
• Network weights
– Initialised to small random values
• Input unit weights
– Initialised to zero
• Number of training iterations
– Data partitioned into training set and validation set
• Gradient descent used
– Every 50 steps, network performance evaluated over the
validation set
– Final network: the one with highest accuracy over
validation set
– Final result (90%) measured over 3rd set of examples

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 45


Example: NetTalk

• Sejnowski and Rosenberg, 1987


• Early Large-Scale Application of Backprop
– Learning to convert text to speech
• Acquired model: a mapping from letters to phonemes and stress marks
• Output passed to a speech synthesizer
– Good performance after training on a vocabulary of ~1000 words
• Very Sophisticated Input-Output Encoding
– Input: 7-letter window; determines the phoneme for the center letter and context
on each side; distributed (i.e., sparse) representation: 200 bits
– Output: units for articulatory modifiers (e.g., “voiced”), stress, closest phoneme;
distributed representation
– 40 hidden units; 10000 weights total
• Experimental Results
– Vocabulary: trained on 1024 of 1463 (informal) and 1000 of 20000 (dictionary)
– 78% on informal, ~60% on dictionary
• http://www.boltz.cs.cmu.edu/benchmarks/nettalk.html
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 46
Training (Learning) Functions

• During training: network weights and biases are iteratively
adjusted to minimize the network's error (loss / performance) function
• Mean Squared Error: default error function for feedforward
networks
• Different training algorithms for feedforward networks.
• All are based on the use of the gradient of the error function
(to determine how to adjust the weights)
• Gradient determined using backpropagation
• Basic backpropagation training algorithm: weights are
updated so as to move in the direction of the negative of the
gradient

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 47


More on Backpropagation

• One iteration of backpropagation updates the


weights as wk+1 = wk - αk gk
where at iteration k
– wk : vector of weights and biases,
– gk : current gradient, and
– αk : learning rate.
• Two ways of implementing gradient descent :
– incremental mode: gradient computed and weights
updated after each input to the network.
– batch mode: all the inputs are fed into the network before
the weights are updated.

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 48


Variations of the Backpropagation Algorithm

Batch (Steepest) Gradient Descent training:


• Weights and biases updated in the direction of the
negative gradient of the performance function
• The larger the learning rate, the bigger the step.
• If learning rate is set too large, then the algorithm
becomes unstable.
• If the learning rate is set too small, the algorithm
takes a long time to converge.
• Stopping conditions: max number of iterations,
performance gets below the Goal, gradient
magnitude smaller than a minimum, maximum
training time reached, etc.
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 49
Faster Training

• GD often too slow for practical problems

• Several high-performance algorithms can


converge from 10 to 100 times faster

• The faster algorithms fall into two categories:


1. Those using heuristic techniques
2. Those using standard numerical optimization
techniques

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 50


Faster Algorithms using Heuristic Functions

• GD with momentum

• Adaptive learning rate backpropagation

• Resilient backpropagation

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 51


Faster Algorithms using Heuristic Functions

Gradient Descent with Momentum


• Makes the network respond not only to the local gradient but also to
recent trends in the error surface
  Δw_start-layer,end-layer(n) = r δ_end-layer a_end-layer + α Δw_start-layer,end-layer(n - 1)
• Two training parameters:
– Learning rate
– Momentum:
  • a value between 0 (no momentum) and close to 1 (lots of momentum)
  • If momentum = 1, the network is insensitive to the
    local gradient ⇒ it does not learn properly.

• Remains quite slow


24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 52
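A minimal Python sketch of the momentum update above, written for a generic weight vector; the quadratic error surface, gradient function, learning rate, and momentum value are hypothetical.

def momentum_step(w, grad, velocity, r=0.1, alpha=0.9):
    """One gradient-descent-with-momentum update.
    velocity holds the previous weight change, Delta w(n-1)."""
    new_velocity = [-r * g + alpha * v for g, v in zip(grad, velocity)]
    new_w = [wi + dv for wi, dv in zip(w, new_velocity)]
    return new_w, new_velocity

# Hypothetical quadratic error E(w) = w0^2 + 10*w1^2 and its gradient
grad_E = lambda w: [2 * w[0], 20 * w[1]]
w, vel = [5.0, 2.0], [0.0, 0.0]
for _ in range(50):
    w, vel = momentum_step(w, grad_E(w), vel, r=0.05, alpha=0.8)
print(w)   # should move toward the minimum at (0, 0)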
Faster Algorithms using Heuristic Functions

Adaptive learning rate backpropagation


• With SGD (steepest descent), the learning rate is kept constant
throughout training ⇒ the algorithm's performance is very sensitive to
the proper setting of the learning rate.
• If the learning rate is too large, the algorithm can oscillate and become
unstable.
• If it is too small, the algorithm takes too long to converge.
• Possible to improve the performance of SGD by allowing
the learning rate to change during training.
• Adaptive learning rate attempts to keep the learning step
size as large as possible while keeping learning stable.
• Learning rate is made responsive to the complexity of the
local error surface.

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 53


Adaptive learning rate backpropagation
(cont.)

• If new error exceeds the old error by more than a


predefined ratio (typically 1.04),
– the new weights and biases are discarded.
– The learning rate is decreased (typically multiplying by a
factor of 0.7).
Otherwise, the new weights and biases are kept.
• If new error less than old error, the learning rate is
increased (typically multiplying by a factor 1.05).
• A near-optimal learning rate can be obtained for
the given problem
• One can combine Adaptive learning rate with
Momentum

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 54
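A Python sketch of the adaptive-learning-rate heuristic described above. The ratio 1.04 and the factors 0.7 and 1.05 are the typical values quoted in the slides; the error surface, gradient function, and starting values are hypothetical.

def adaptive_lr_step(w, lr, prev_error, error_fn, grad_fn,
                     max_ratio=1.04, lr_dec=0.7, lr_inc=1.05):
    """One weight update with the adaptive learning rate heuristic."""
    grad = grad_fn(w)
    candidate = [wi - lr * g for wi, g in zip(w, grad)]
    new_error = error_fn(candidate)
    if new_error > prev_error * max_ratio:
        return w, lr * lr_dec, prev_error        # discard the step, shrink the rate
    if new_error < prev_error:
        lr = lr * lr_inc                         # successful step: grow the rate
    return candidate, lr, new_error

# Hypothetical 1-D error surface E(w) = (w - 3)^2
E = lambda w: (w[0] - 3.0) ** 2
dE = lambda w: [2.0 * (w[0] - 3.0)]
w, lr, err = [0.0], 0.2, E([0.0])
for _ in range(30):
    w, lr, err = adaptive_lr_step(w, lr, err, E, dE)
print(w, lr)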


Faster Algorithms using Heuristic Functions

Resilient Backpropagation:
Problem with sigmoid functions: their slopes
approach zero as the input gets large
⇒ the gradient can have a very small magnitude
⇒ small changes in weights and biases, even
though the weights and biases are far from their
optimal values.
• Resilient backpropagation attempts to eliminate
harmful effects of the magnitudes of the gradient
• Magnitude of the gradient has no effect on the
weight update
• Only the sign of the gradient is used to
determine the direction of the weight update.
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 55
Resilient Backpropagation (cont.)

• The size of weight change is determined by a separate


update value.
• The update value for each weight and bias is increased
by a factor delt_inc whenever the derivative of the
performance function with respect to that weight has the
same sign for two successive iterations.
• The update value is decreased by a factor delt_dec
whenever the derivative with respect to that weight
changes sign from the previous iteration.
• If the derivative is zero, then the update value remains
the same.
• So: Whenever the weights are oscillating, the weight
change is reduced.
• If the weight continues to change in the same direction
for several iterations, then the magnitude of the weight
change increases.
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 56
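A Python sketch of the sign-based update logic described above, for a single weight. The increase/decrease factors (1.2 and 0.5) and step limits are commonly used values assumed here, the 1-D error surface is hypothetical, and this is a simplified variant without weight backtracking.

def rprop_update(weight, grad, prev_grad, step,
                 delt_inc=1.2, delt_dec=0.5, step_max=50.0, step_min=1e-6):
    """One Rprop update for a single weight: only the sign of the gradient is used."""
    if grad * prev_grad > 0:          # same sign twice in a row: grow the update value
        step = min(step * delt_inc, step_max)
    elif grad * prev_grad < 0:        # sign change (oscillation): shrink the update value
        step = max(step * delt_dec, step_min)
    # if the derivative is zero, the update value stays the same
    if grad > 0:
        weight -= step
    elif grad < 0:
        weight += step
    return weight, step

# Hypothetical use on E(w) = (w - 2)^2
w, step, prev_g = 10.0, 0.1, 0.0
for _ in range(40):
    g = 2.0 * (w - 2.0)
    w, step = rprop_update(w, g, prev_g, step)
    prev_g = g
print(w)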
Resilient Backpropagation (cont.)

• Generally much faster than the standard steepest


descent algorithm.

• Requires only a modest increase in memory


requirements

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 57


Faster Algorithms using Standard
Numerical Optimization Techniques
• Conjugate Gradient Algorithms
• Basic Backpropagation algorithm adjusts the weights in the
steepest descent direction (negative of the gradient), the
direction in which the performance function is decreasing
most rapidly.
• This does not necessarily produce the fastest convergence.
• In conjugate gradient algorithms a search is performed
along conjugate directions, which produces generally faster
convergence than steepest descent directions.
• In most conjugate gradient algorithms, the step size is
adjusted at each iteration.
• A search is made along the conjugate gradient direction to
determine the step size that minimizes the performance
function along that line
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 58
Conjugate Gradient Algorithms

• Fletcher-Reeves Update
• Polak-Ribiére Update
• Powell-Beale Restarts
• Scaled Conjugate Gradient

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 59


Conjugate Gradient Algorithms

• All the CG algorithms start by searching in the steepest descent
direction on the first iteration:
    p0 = -g0
• A line search is then performed to determine the optimal
distance to move along the current search direction:
    xk+1 = xk + αk pk
• The next search direction is determined so that it is
conjugate to previous search directions.
• The general procedure for determining the new search
direction is to combine the new steepest descent
direction with the previous search direction:
    pk = -gk + βk pk-1

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 60


Conjugate Gradient Algorithms

• The various versions of the conjugate gradient algorithm
are distinguished by the way in which the constant βk is
computed.
• For the Fletcher-Reeves update the procedure is

    βk = (gk^T gk) / (gk-1^T gk-1)

• For the Polak-Ribiére update, the constant βk is
computed by

    βk = (Δgk-1^T gk) / (gk-1^T gk-1)

• Etc.
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 61
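A compact Python sketch of the Fletcher-Reeves update above on a hypothetical quadratic error surface; since the Hessian of a quadratic is known, an exact line minimiser stands in for the numerical line search used in practice.

def conjugate_gradient_fr(grad_fn, hessian, x, iters=10):
    """Fletcher-Reeves conjugate gradient on a quadratic error surface."""
    g = grad_fn(x)
    p = [-gi for gi in g]                       # p0 = -g0: start along steepest descent
    for _ in range(iters):
        # exact line minimiser along p for a quadratic: alpha = -(g.p) / (p.A.p)
        Ap = [sum(hessian[i][j] * p[j] for j in range(len(p))) for i in range(len(p))]
        denom = sum(pi * api for pi, api in zip(p, Ap))
        if denom == 0:
            break
        alpha = -sum(gi * pi for gi, pi in zip(g, p)) / denom
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        g_new = grad_fn(x)
        # Fletcher-Reeves: beta_k = (g_k . g_k) / (g_{k-1} . g_{k-1})
        beta = sum(gn * gn for gn in g_new) / sum(gi * gi for gi in g)
        p = [-gn + beta * pi for gn, pi in zip(g_new, p)]   # p_k = -g_k + beta_k p_{k-1}
        g = g_new
    return x

# Hypothetical quadratic E(x) = x0^2 + 5*x1^2 with minimum at (0, 0)
H = [[2.0, 0.0], [0.0, 10.0]]
grad = lambda x: [2.0 * x[0], 10.0 * x[1]]
print(conjugate_gradient_fr(grad, H, [4.0, -3.0]))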
Faster Algorithms using Standard
Numerical Optimization Techniques
• Quasi-Newton Algorithms
• The basic step of Newton's method is
    xk+1 = xk - Ak^-1 gk
where Ak is the Hessian matrix (second derivatives) of the
performance function at the current values of the weights and biases.
• Newton's method often converges faster than conjugate
gradient methods.
• Computing the Hessian matrix for feedforward neural
networks is complex and expensive.
• There is a class of algorithms based on Newton's method
that does not require the calculation of second derivatives.
• These are called quasi-Newton (or secant) methods. They
update an approximate Hessian matrix at each iteration of
the algorithm.
Ahmed Guessoum – Intro. to Neural Networks 62
Some ANN Applications
• Diagnosis
– Closest to pure concept learning and classification
– Some ANNs can be post-processed to produce probabilistic diagnoses
• Prediction and Monitoring
– aka prognosis (sometimes forecasting)
– Predict a continuation of (typically numerical) data
• Decision Support Systems
– aka recommender systems
– Provide assistance to human “subject matter” experts in making decisions
• Design (manufacturing, engineering)
• Therapy (medicine)
• Crisis management (medical, economic, military, computer security)
• Control Automation
– Mobile robots
– Autonomic sensors and actuators
• Many, Many More
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 63
Strengths of a Neural Network

• Power: Model complex functions, nonlinearity


built into the network
• Ease of use:
– Learn by example
– Very little user domain-specific expertise
needed
• Intuitively appealing: based on model of
biology, will it lead to genuinely intelligent
computers/robots?

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks


General Advantages / Disadvantages

• Advantages
– Adapt to unknown situations
– Robustness: fault tolerance due to network
redundancy
– Autonomous learning and generalization

• Disadvantages
– Complexity of finding the “right” network
structure
– “Black box”

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks


Issues and Open Problems in (classical)
ANN Research

• Hybrid Approaches
– Incorporating knowledge and analytical learning into ANNs
• Knowledge-based neural networks
• Explanation-based neural networks
• Combining uncertain reasoning and ANN learning and
inference
• Probabilistic ANNs
• Bayesian networks

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 66


Issues and Open Problems in (classical)
ANN Research

• Global Optimization with ANNs


– Hybrid models
– Relationship to genetic algorithms
• Understanding ANN Output
– Knowledge extraction from ANNs
• Rule extraction
• Other decision surfaces
– Decision support and KDD applications
• Many More Issues (Robust Reasoning,
Representations, etc.)
24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 67
Going Deep?
Andrew Ng, Machine Learning Yearning: Technical Strategy for AI Engineers in the
Era of Deep Learning, Draft Version, 2018, deeplearning.ai

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 68


This presentation was based on the following:

• Tom Mitchell, Machine Learning, McGraw Hill, 1997


• Lecture notes by Prof. Hsu (Kansas State University)
based on Tom Mitchell’s Machine Learning book.
• Training Functions and Fast Training, Matlab
documentation, Mathworks.
• Andrew Ng, Machine Learning and Deep Learning online
course, Coursera.
• Dave Touretzky, "Artificial Neural Networks" Lecture
notes, Carnegie Mellon University,
http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15782-
f06/syllabus.html
• Nicolas Galoppo von Borries, Introduction to Neural
Networks, COMP290-058 Motion Planning.
(http://slideplayer.com/slide/9882811/ )

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 69


Thank you!

24/06/2018 AMLSS Ahmed Guessoum – Intro. to Neural Networks 70
