
Lecture 4: Multi-Layer Perceptrons

Kevin Swingler
kms@cs.stir.ac.uk

Dept. of Computing Science & Math

Review of Gradient Descent Learning


1. The purpose of neural network training is to minimise the output errors on a particular set of training data by adjusting the network weights w.
2. We define a cost function E(w) that measures how far the current network's output is from the desired one.
3. The partial derivatives ∂E(w)/∂w of the cost function tell us which direction we need to move in weight space to reduce the error.
4. The learning rate η specifies the step sizes we take in weight space for each iteration of the weight update equation.
5. We keep stepping through weight space until the errors are small enough.
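As a concrete illustration of these five steps, here is a minimal numpy sketch (not from the original slides; the quadratic cost, η, and the stopping tolerance are arbitrary choices):

```python
import numpy as np

# Toy cost E(w) = (w - 2)^2, whose gradient dE/dw = 2(w - 2) is known exactly.
E = lambda w: (w - 2.0) ** 2
dE_dw = lambda w: 2.0 * (w - 2.0)

w = np.random.randn()    # start somewhere in weight space
eta = 0.1                # learning rate: step size in weight space

while E(w) > 1e-6:       # keep stepping until the error is small enough
    w -= eta * dE_dw(w)  # move against the gradient to reduce E

print(f"converged to w = {w:.4f}, E(w) = {E(w):.2e}")
```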


Review of Perceptron Training


1. Take the set of training pairs or patterns x that you wish your network to learn.
2. Set up your network with N input units fully connected to M output units.
3. Initialise the weights w at random.
4. Select an appropriate error function E(w) and learning rate η.
5. Apply the weight change Δw = -η ∂E(w)/∂w to each weight w for each training pattern p. One set of updates of all the weights for all the training patterns is called one epoch of training.
6. Repeat step 5 until the network error function is small enough.
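A minimal numpy sketch of this procedure (illustrative, not from the slides), training a single-layer Perceptron with the delta rule on the linearly separable AND function; the learning rate and initial weight range are arbitrary choices:

```python
import numpy as np

# Logical AND: a linearly separable problem; the third input column is a bias.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

step = lambda a: (a > 0).astype(float)   # threshold activation

rng = np.random.default_rng(0)
w = rng.uniform(-0.5, 0.5, size=3)       # step 3: small random initial weights
eta = 0.2                                # step 4: learning rate

for epoch in range(100):                 # step 6: repeat until error small enough
    if np.all(step(X @ w) == t):
        break
    for x_p, t_p in zip(X, t):           # step 5: one pass over all patterns = one epoch
        delta = t_p - step(x_p @ w)      # delta = y_target - y
        w += eta * delta * x_p           # weight change = eta * delta * x

print(f"weights {np.round(w, 2)} learned in {epoch} epochs")
```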


Review of XOR and Linear Separability


Recall that it is not possible to find weights that enable a Single Layer Perceptron to deal with non-linearly separable problems like XOR:

in1   in2   out
 0     0     0
 0     1     1
 1     0     1
 1     1     0

[Figure: the four XOR patterns plotted in (I1, I2) input space; no single straight line separates the out = 0 points from the out = 1 points.]

The proposed solution was to use a more complex network that is able to generate more complex decision boundaries. That network is the Multi-Layer Perceptron.
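To make the point concrete, here is a hand-picked set of weights (an illustration, not from the slides) for a small Multi-Layer Perceptron with threshold units that computes XOR: one hidden unit computes OR, the other computes AND, and the output fires when OR is true but AND is not:

```python
import numpy as np

step = lambda a: (a > 0).astype(float)

# Hand-picked weights: hidden unit 1 computes OR, hidden unit 2 computes AND,
# and the output unit fires when OR is true but AND is not -- exactly XOR.
W_hid = np.array([[1.0, 1.0],    # weights into hidden unit 1 (OR)
                  [1.0, 1.0]])   # weights into hidden unit 2 (AND)
b_hid = np.array([-0.5, -1.5])   # thresholds for OR and AND
w_out = np.array([1.0, -1.0])    # output: OR minus AND
b_out = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W_hid @ np.array(x, dtype=float) + b_hid)
    y = step(w_out @ h + b_out)
    print(x, "->", int(y))       # reproduces the XOR truth table above
```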

Multi-Layer Perceptrons (MLPs)


Each unit computes a weighted sum of the activations in the layer below and passes it through the activation function f.

Hidden layer, j:   $O_j = f\left( \sum_i w_{ij} X_i \right)$

Output layer, k:   $Y_k = f\left( \sum_j w_{jk} O_j \right)$

[Figure: a three-layer network with input units X_1, X_2, X_3, ..., X_i, hidden units O_1, ..., O_j, and output units Y_1, Y_2, ..., Y_k; each layer is fully connected to the next.]
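A minimal numpy sketch of this forward pass (the layer sizes, weight ranges, and input pattern are arbitrary choices for illustration):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
N, M, P = 3, 4, 2                   # input, hidden, output unit counts (arbitrary)
W_ij = rng.uniform(-1, 1, (N, M))   # input-to-hidden weights w_ij
W_jk = rng.uniform(-1, 1, (M, P))   # hidden-to-output weights w_jk

x = rng.uniform(0, 1, N)            # one input pattern X_i
o = sigmoid(x @ W_ij)               # O_j = f(sum_i w_ij X_i)
y = sigmoid(o @ W_jk)               # Y_k = f(sum_j w_jk O_j)
print("hidden activations:", np.round(o, 3))
print("outputs:", np.round(y, 3))
```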


Can We Use a Generalized Form of the PLR/Delta Rule to Train the MLP?
Recall the PLR/Delta rule: adjust the neuron's weights to reduce the error at its output:

$w_{new} = w_{old} + \eta\,\delta\,x$   where   $\delta = y_{target} - y$

Main problem: How do we adjust the weights in the hidden layer so that they reduce the error in the output layer, when there is no specified target response in the hidden layer?

Solution: Alter the non-linear Perceptron (discrete threshold) activation function to make it differentiable, and hence derive a Generalized Delta Rule for MLP training.
[Figure: the discrete threshold function and the smooth sigmoid function, both running between -1 and +1.]

Sigmoid (S-shaped) Function Properties


- Approximates the threshold function
- Smoothly differentiable everywhere, and hence the Delta Rule is applicable
- Positive slope
- A popular choice is the logistic function:

$y = f(a) = \dfrac{1}{1 + e^{-a}}$

The derivative of the sigmoid function is:

$f'(a) = f(a)\,(1 - f(a))$
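This identity is easy to verify numerically; the sketch below (illustrative, not from the slides) compares the analytic derivative with a central-difference approximation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

a = np.linspace(-5.0, 5.0, 11)
analytic = sigmoid(a) * (1.0 - sigmoid(a))             # f(a) (1 - f(a))
h = 1e-6
numeric = (sigmoid(a + h) - sigmoid(a - h)) / (2 * h)  # finite-difference slope
print(np.max(np.abs(analytic - numeric)))              # tiny: the identity holds
```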


Weight Update Rule


In general, the weight change from any unit j to unit k by gradient descent (i.e. a weight change by a small increment in the direction opposite to the gradient) is now called the Generalized Delta Rule (GDR, or Backpropagation):

$\Delta w = w_{new} - w_{old} = -\eta\,\dfrac{\partial E}{\partial w} = \eta\,\delta\,x$

So, the weight change from the input layer unit i to hidden layer unit j is:

$\Delta w_{ij} = \eta\,\delta_j\,x_i$   where   $\delta_j = o_j\,(1 - o_j) \sum_k w_{jk}\,\delta_k$

The weight change from the hidden layer unit j to the output layer unit k is:

$\Delta w_{jk} = \eta\,\delta_k\,o_j$   where   $\delta_k = (y_{target,k} - y_k)\,y_k\,(1 - y_k)$
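The following sketch (illustrative; the layer sizes, learning rate, and pattern values are arbitrary) applies these four equations to a single training pattern:

```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(2)
N, M, P, eta = 3, 4, 2, 0.5
W_ij = rng.uniform(-1, 1, (N, M))     # input-to-hidden weights
W_jk = rng.uniform(-1, 1, (M, P))     # hidden-to-output weights

x = rng.uniform(0, 1, N)              # one input pattern x_i
t = np.array([0.0, 1.0])              # targets y_target,k

# Forward pass, as defined on the MLP slide.
o = sigmoid(x @ W_ij)                 # hidden activations o_j
y = sigmoid(o @ W_jk)                 # outputs y_k

# delta_k = (y_target,k - y_k) y_k (1 - y_k)
delta_k = (t - y) * y * (1.0 - y)
# delta_j = o_j (1 - o_j) sum_k w_jk delta_k
delta_j = o * (1.0 - o) * (W_jk @ delta_k)

W_jk += eta * np.outer(o, delta_k)    # dw_jk = eta * delta_k * o_j
W_ij += eta * np.outer(x, delta_j)    # dw_ij = eta * delta_j * x_i
```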


Training of a 2-Layer Feed Forward Network


1. Take the set of training patterns you wish the network to learn.
2. Set up the network with N input units fully connected to M nonlinear hidden units via connections with weights wij, which in turn are fully connected to P output units via connections with weights wjk.
3. Generate random initial weights, e.g. from the range [-wt, +wt].
4. Select an appropriate error function E(wjk) and learning rate η.
5. Apply the weight update equation Δwjk = -η ∂E(wjk)/∂wjk to each weight wjk for each training pattern p.
6. Do the same for all hidden layers.
7. Repeat steps 5-6 until the network error function is small enough.
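Putting steps 1-7 together, here is a minimal sketch (an illustration, not the lecture's code) that trains such a network on XOR; bias units are added to the input and hidden layers, a practical detail the slide equations leave implicit:

```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# XOR training set (step 1), with a constant bias input appended.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(3)
wt, eta = 1.0, 0.5                      # steps 3-4: weight range and learning rate
W_ij = rng.uniform(-wt, wt, (3, 4))     # step 2: N=2 inputs (+bias), M=4 hidden units
W_jk = rng.uniform(-wt, wt, (5, 1))     # hidden layer (+bias unit) to P=1 output

for epoch in range(20000):              # step 7: repeat until error small enough
    sse = 0.0
    for x, t in zip(X, T):
        o = np.append(sigmoid(x @ W_ij), 1.0)     # forward pass, bias unit added
        y = sigmoid(o @ W_jk)
        delta_k = (t - y) * y * (1 - y)           # output-layer deltas
        delta_j = o * (1 - o) * (W_jk @ delta_k)  # hidden-layer deltas
        W_jk += eta * np.outer(o, delta_k)        # step 5: output weights
        W_ij += eta * np.outer(x, delta_j[:4])    # step 6: hidden weights
        sse += float(np.sum((t - y) ** 2))
    if sse < 0.01:
        break

print(f"stopped after epoch {epoch} with sum-squared error {sse:.4f}")
```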


Graphical Representation of GDR


[Figure: the total error plotted against a single weight w_i; the error surface has a local minimum and a deeper global minimum at the ideal weight value.]

Practical Considerations for Learning Rules


There are a number of important issues about training multi-layer neural networks that need further resolving:

1. Do we need to pre-process the training data? If so, how?
2. How do we choose the initial weights from which we start the training?
3. How do we choose an appropriate learning rate η?
4. Should we change the weights after each training pattern, or after the whole set?
5. Are some activation/transfer functions better than others?
6. How do we avoid local minima in the error function?
7. How do we know when we should stop the training?
8. How many hidden units do we need?
9. Should we have different learning rates for the different layers?

We shall now consider each of these issues one by one.

Pre-processing of the Training Data


In principle, we can just use any raw input-output data to train our networks. In practice, however, it often helps the network to learn appropriately if we carry out some pre-processing of the training data before feeding it to the network.

We should make sure that the training data is representative: it should not contain too many examples of one type at the expense of another. On the other hand, if one class of pattern is easy to learn, having large numbers of patterns from that class in the training set will only slow down the overall learning process.


Choosing the Initial Weight Values


The gradient descent learning algorithm treats all the weights in the same way, so if we start them all off with the same values, all the hidden units will end up doing the same thing and the network will never learn properly. For that reason, we generally start off all the weights with small random values. Usually we take them from a flat distribution around zero, [-wt, +wt], or from a Gaussian distribution around zero with standard deviation wt.

Choosing a good value of wt can be difficult. Generally, it is a good idea to make it as large as you can without saturating any of the sigmoids.

We usually hope that the final network performance will be independent of the choice of initial weights, but we need to check this by training the network from a number of different random initial weight sets.
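In numpy, the two common choices look like this (a sketch; wt = 0.5 and the layer sizes are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng()
N, M, wt = 3, 4, 0.5                         # wt = 0.5 is an arbitrary example

W_flat = rng.uniform(-wt, wt, (N, M))        # flat distribution on [-wt, +wt]
W_gauss = wt * rng.standard_normal((N, M))   # Gaussian with standard deviation wt
```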


Choosing the Learning Rate


Choosing a good value for the learning rate η is constrained by two opposing facts:

1. If η is too small, it will take too long to get anywhere near the minimum of the error function.
2. If η is too large, the weight updates will over-shoot the error minimum and the weights will oscillate, or even diverge.

Unfortunately, the optimal value is very problem- and network-dependent, so one cannot formulate reliable general prescriptions. Generally, one should try a range of different values (e.g. η = 1.0, 0.1, 0.01, 0.0001) and use the results as a guide.
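Both failure modes are easy to demonstrate on the toy quadratic cost from earlier (an illustrative sketch, not from the slides):

```python
# Gradient descent on the toy cost E(w) = (w - 2)^2 with a range of step sizes.
dE_dw = lambda w: 2.0 * (w - 2.0)

for eta in [0.0001, 0.01, 0.1, 1.0, 1.1]:
    w = 0.0
    for _ in range(100):
        w -= eta * dE_dw(w)
    # 0.0001 barely moves, 0.1 converges, 1.0 oscillates, 1.1 diverges
    print(f"eta = {eta}: w after 100 steps = {w:.4g}")
```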


Batch Training vs. On-line Training


Batch Training: update the weights after all training patterns have been presented.

On-line Training (or Sequential Training): a natural alternative is to update all the weights immediately after processing each training pattern.

On-line learning does not perform true gradient descent, and the individual weight changes can be rather erratic. Normally a much lower learning rate η will be necessary than for batch learning. However, because each weight now has N updates per epoch (where N is the number of patterns), rather than just one, overall the learning is often much quicker. This is particularly true if there is a lot of redundancy in the training data, i.e. many training patterns containing similar information.
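The difference is easiest to see on a toy linear unit (an illustrative sketch; the data, learning rates, and epoch count are arbitrary choices):

```python
import numpy as np

# Toy linear unit y = x . w with squared error on a small noiseless data set.
rng = np.random.default_rng(4)
X = rng.normal(size=(8, 3))
t = X @ np.array([1.0, -2.0, 0.5])       # targets generated from known weights

def batch_epoch(w, eta=0.05):
    return w - eta * X.T @ (X @ w - t)   # one update from the summed gradient

def online_epoch(w, eta=0.01):           # note the lower learning rate
    for x_p, t_p in zip(X, t):           # N separate updates per epoch
        w = w - eta * (x_p @ w - t_p) * x_p
    return w

w_batch = w_online = np.zeros(3)
for _ in range(200):
    w_batch, w_online = batch_epoch(w_batch), online_epoch(w_online)

print("batch :", np.round(w_batch, 3))   # both approach [1, -2, 0.5]
print("online:", np.round(w_online, 3))
```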


Choosing the Transfer Function


We have already seen that having a differentiable transfer/activation function is important for the gradient descent algorithm to work. We have also seen that, in terms of computational efficiency, the standard sigmoid (i.e. the logistic function) is a particularly convenient replacement for the step function of the Simple Perceptron.

The logistic function ranges from 0 to 1. There is some evidence that an anti-symmetric transfer function, i.e. one that satisfies f(-x) = -f(x), enables the gradient descent algorithm to learn faster.

When the outputs are required to be non-binary, i.e. continuous real values, having sigmoidal transfer functions no longer makes sense. In these cases, a simple linear transfer function f(x) = x is appropriate.

Local Minima
Cost functions can quite easily have more than one minimum:

If we start off in the vicinity of the local minimum, we may end up at the local minimum rather than the global minimum. Starting with a range of different initial weight sets increases our chances of finding the global minimum. Any variation from true gradient descent will also increase our chances of stepping into the deeper valley.
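A sketch of the multiple-restarts idea on a one-dimensional cost with two minima (the cost function and all constants are my own illustrative choices, not from the slides):

```python
import numpy as np

# A cost with a local minimum (near w = +1.35) and a deeper global minimum
# (near w = -1.47); restarting from several random points finds the deeper one.
E = lambda w: w**4 - 4 * w**2 + w
dE = lambda w: 4 * w**3 - 8 * w + 1

rng = np.random.default_rng(5)
results = []
for _ in range(10):                  # 10 different random starting weights
    w = rng.uniform(-2, 2)
    for _ in range(500):
        w -= 0.01 * dE(w)            # plain gradient descent
    results.append((E(w), w))

best_E, best_w = min(results)
print(f"best of 10 restarts: w = {best_w:.3f}, E = {best_E:.3f}")
```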

When to Stop Training


The sigmoid function only takes on its extreme values of 0 and 1 at x = ±∞. In effect, this means that the network can only achieve its binary targets when at least some of its weights reach ±∞. So, given finite gradient descent step sizes, our networks will never reach their binary targets. Even if we off-set the targets (to 0.1 and 0.9, say), we will generally require an infinite number of increasingly small gradient descent steps to achieve those targets.

Clearly, if the training algorithm can never actually reach the minimum, we have to stop the training process when it is near enough. What constitutes "near enough" depends on the problem. If we have binary targets, it might be enough that all outputs are within 0.1 (say) of their targets. Or it might be easier to stop the training when the sum-squared error function becomes less than a particular small value (0.2, say).
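Both stopping tests are simple to express in code (an illustrative sketch; the tolerances 0.1 and 0.2 are just the example values above):

```python
import numpy as np

def near_enough(y, t, tol=0.1, sse_limit=0.2):
    """Two possible stopping tests: every output within tol of its binary
    target, or the total sum-squared error below sse_limit."""
    all_close = np.all(np.abs(y - t) < tol)
    small_sse = np.sum((y - t) ** 2) < sse_limit
    return all_close or small_sse

# e.g. outputs 0.08 and 0.93 against binary targets 0 and 1:
print(near_enough(np.array([0.08, 0.93]), np.array([0.0, 1.0])))  # True
```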


How Many Hidden Units?


The best number of hidden units depends in a complex way on many factors, including:
1. The number of training patterns
2. The numbers of input and output units
3. The amount of noise in the training data
4. The complexity of the function or classification to be learned
5. The type of hidden unit activation function
6. The training algorithm

Too few hidden units will generally leave high training and generalisation errors due to under-fitting. Too many hidden units will result in low training errors, but will make the training unnecessarily slow, and will result in poor generalisation unless some other technique (such as regularisation) is used to prevent over-fitting.

Virtually all the rules of thumb you hear about are actually nonsense. A sensible strategy is to try a range of numbers of hidden units and see which works best.
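A sketch of that strategy on the XOR task from earlier (illustrative; the training routine, epoch budget, and range of sizes are arbitrary choices). With M = 1 hidden unit the network cannot represent XOR, so the error stays high; larger M usually succeeds, depending on the random seed:

```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def train_xor(M, epochs=5000, eta=0.5, seed=0):
    """Train a 2-M-1 network (with bias units) on XOR; return the final SSE."""
    rng = np.random.default_rng(seed)
    W_ij = rng.uniform(-1, 1, (3, M))
    W_jk = rng.uniform(-1, 1, (M + 1, 1))
    for _ in range(epochs):
        sse = 0.0
        for x, t in zip(X, T):
            o = np.append(sigmoid(x @ W_ij), 1.0)
            y = sigmoid(o @ W_jk)
            d_k = (t - y) * y * (1 - y)
            d_j = o * (1 - o) * (W_jk @ d_k)
            W_jk += eta * np.outer(o, d_k)
            W_ij += eta * np.outer(x, d_j[:M])
            sse += float(np.sum((t - y) ** 2))
    return sse

for M in [1, 2, 4, 8]:   # try a range of hidden layer sizes
    print(f"M = {M}: final SSE {train_xor(M):.4f}")
```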

Different Learning Rates for Different Layers?


A network as a whole will usually learn most efficiently if all its neurons are learning at roughly the same speed. So maybe different parts of the network should have different learning rates η. There are a number of factors that may affect the choices:
1. The later network layers (nearer the outputs) will tend to have larger local gradients (deltas) than the earlier layers (nearer the inputs).
2. The activations of units with many connections feeding into or out of them tend to change faster than units with fewer connections.
3. The activations required for linear units will be different from those for sigmoidal units.
4. There is empirical evidence that it helps to have different learning rates for the thresholds/biases compared with the real connection weights.

In practice, it is often quicker to just use the same rates for all the weights and thresholds, rather than spending time trying to work out appropriate differences. A very powerful approach is to use evolutionary strategies to determine good learning rates.

