01 Intro Slides
Alexander Ilin
Teaching assistants
Pre-requisites
Course schedule
Communication channels
Course grading
Assignments
Exercise sessions
• The exercise sessions are organized to help you solve the assignments; you do not have to attend them.
• There will be four online exercise sessions for assignments 1–8, typically:
1. Friday 10:15-11:45
2. Friday 12:15-13:45
3. Monday 10:15-11:45
4. Monday 12:15-13:45
There may be exceptions because of public holidays.
• No exercise sessions for assignments 9–10.
• Please read the protocol for the exercise sessions in MyCourses carefully.
Course material
• We do not have special sessions on PyTorch; you should learn it by following the PyTorch tutorials.
• If you know numpy (pre-requisite), PyTorch should be easy to learn.
• Deep learning frameworks develop very quickly, so you need to learn new frameworks and features all the time.
• If you need help with PyTorch, please ask for assistance in the exercise sessions.
• The lectures will be recorded and will be available in MyCourses. Lecture notes are available in
MyCourses.
• Lecture slides will be posted to MyCourses before each lecture. Credit to people whose material I used in the slides: Tapani Raiko, Kyunghyun Cho, Jyri Kivinen, Jorma Laaksonen, Antti Keurulainen, Sebastian Björkqvist.
• Deep Learning book by Goodfellow, Bengio and Courville (2016).
What is deep learning
Feature engineering
• Suppose that you need to solve a custom machine learning problem, for example:
• Spam detection.
• Reading information from scanned invoices.
• We can solve such a problem by designing a set of features from the data and using those features as input to a machine learning model.
• Spam detection: useful features are counts of certain words.
• Line item extraction from invoices: useful features for classifying a number as a line item or not are its position on the invoice and the words that appear in its proximity.
Data → Feature engineering → Machine learning model (e.g. random forest classifier)
• Benefit of feature engineering: One can use domain knowledge to design features that are robust
(for example, invariant to certain distortions).
• What are the problems with feature engineering?
Feature engineering: Problem 1
Feature engineering: Problem 2
• Handcrafted features are not perfect. There are always examples that are not processed correctly,
which motivates engineering of new features.
[Diagram: features feed a classifier; misclassified examples motivate engineering new features.]
Representation learning
• These problems can be overcome with representation learning: We use machine learning to
discover not only the mapping from representation to output but also the representation itself.
• A representation learning algorithm can discover a good set of features much faster (in days instead of the decades of effort of an entire research community).
• Learned representations often result in much better performance compared to hand-designed
representations.
• With learned representations, AI systems can rapidly adapt to new tasks, with minimal human
intervention.
• This is what deep learning does: it learns representations from data.
Deep learning = artificial neural networks
• Deep learning today refers to learning frameworks that are not necessarily neurally inspired.
[Figure: frequency of the phrases "cybernetics", "neural networks" and "deep learning" according to Google Books, 1940–2020.]
Linear classifiers
Logistic regression
• Consider a binary classification problem: our training data consist of examples $(\mathbf{x}^{(1)}, y^{(1)}), \ldots, (\mathbf{x}^{(n)}, y^{(n)})$ with $\mathbf{x}^{(i)} \in \mathbb{R}^m$.
• Logistic regression model:
$$f(\mathbf{x}) = \sigma\Big( \sum_{j=1}^{m} w_j x_j + b \Big) = \sigma(\mathbf{w}^\top \mathbf{x} + b)$$
where $m$ is the number of features in $\mathbf{x}$ and $\sigma(x) = \frac{1}{1 + e^{-x}}$ is the logistic function.
• Using the logistic function guarantees that the output is between 0 and 1, so it can be seen as the probability that $\mathbf{x}$ belongs to one of the classes: $p(y = 1 \mid \mathbf{x}) = f(\mathbf{x})$.
[Figures: training examples (scatter plot); the logistic function.]
Training of a binary classifier: Optimization problem
• Training of our classifier: find parameters $\mathbf{w}$ and $b$ that classify our training examples as accurately as possible.
• We train the classifier by solving the following optimization problem.
• Assume a Bernoulli distribution for the labels $y$:
$$p(y \mid \mathbf{x}, \mathbf{w}, b) = f(\mathbf{x})^{y} \, (1 - f(\mathbf{x}))^{1-y}$$
where $f(\mathbf{x})$ is the output of the classifier.
• Write the likelihood function for $n$ training examples:
$$p(\text{data} \mid \mathbf{w}, b) = \prod_{i=1}^{n} p(y^{(i)} \mid \mathbf{x}^{(i)}, \mathbf{w}, b)$$
• Maximize the log-likelihood function $F(\mathbf{w}, b) = \log p(\text{data} \mid \mathbf{w}, b)$ or minimize its negative:
$$\mathcal{L}(\mathbf{w}, b) = -\sum_{i=1}^{n} \Big[ y^{(i)} \log f(\mathbf{x}^{(i)}) + (1 - y^{(i)}) \log\big(1 - f(\mathbf{x}^{(i)})\big) \Big]$$
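To make the loss concrete, here is a minimal numpy sketch of this negative log-likelihood (the function names and the epsilon guard are illustrative additions, not course code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll_loss(w, b, X, y):
    # X: (n, m) inputs, y: (n,) 0/1 labels, w: (m,) weights, b: scalar bias
    f = sigmoid(X @ w + b)              # p(y = 1 | x) for every example
    eps = 1e-12                         # guard against log(0)
    return -np.sum(y * np.log(f + eps) + (1 - y) * np.log(1 - f + eps))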
Toy binary classification problem
• Consider a toy binary classification problem with two parameters $w_1$ and $w_2$ (no bias term):
$$f(\mathbf{x}) = \sigma(w_1 x_1 + w_2 x_2), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}$$
• The loss function in this toy example,
$$\mathcal{L}(w_1, w_2) = -\sum_{i=1}^{n} \Big[ y^{(i)} \log f(\mathbf{x}^{(i)}) + (1 - y^{(i)}) \log\big(1 - f(\mathbf{x}^{(i)})\big) \Big],$$
can be visualized using a contour plot.
[Figures: training examples (left); contour plot of the loss over $(w_1, w_2)$ (right).]
Gradient
$$\mathbf{g}(\mathbf{w}) = \begin{pmatrix} \partial \mathcal{L} / \partial w_1 \\ \partial \mathcal{L} / \partial w_2 \end{pmatrix}$$
• The gradient points in the direction of the greatest rate of increase of $\mathcal{L}$; its magnitude is the slope of the graph of $\mathcal{L}$ in that direction.
[Figure: gradient vectors on the contours of the loss.]
Gradient descent
• Update the parameters in the direction of the negative gradient:
$$\mathbf{w} \leftarrow \mathbf{w} - \eta \, \mathbf{g}(\mathbf{w})$$
• One update does not usually take us to the minimum, so we need to iterate:
$$\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t \, \mathbf{g}(\mathbf{w}_t)$$
• This optimization algorithm is called gradient descent.
• The step size $\eta$ is commonly called the learning rate.
[Figure: gradient descent trajectory on the contours of the loss.]
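A minimal sketch of gradient descent on a logistic-regression problem like the one above (the synthetic data and the fixed learning rate are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic toy data: 100 examples with two features (illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)              # parameters w1, w2 (no bias term)
eta = 0.1                    # learning rate
for t in range(1000):
    f = sigmoid(X @ w)
    g = X.T @ (f - y)        # gradient of the negative log-likelihood
    w = w - eta * g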
Multilayer perceptrons
A historical note: First models of neurons
• The first algorithm for training a linear binary classifier was proposed by Rosenblatt in 1958.
• The model was called Perceptron and its training procedure was inspired by neuroscience (Donald
Hebb’s rule) rather than by mathematical optimization.
• The problem with perceptrons: as linear classifiers, they can solve only a very limited set of classification problems.
• This problem was already well understood in the 1960s: Minsky and Papert (1969) argued in their book "Perceptrons" that more complex (nonlinear) problems have to be solved with multiple layers of perceptrons.
Multilayer perceptrons
• A multilayer perceptron (MLP) is a neural network that consists of multiple layers of perceptrons (neurons).
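As an illustration, a minimal numpy sketch of the forward pass of an MLP with two hidden layers (the ReLU activation and identity output are illustrative choices):

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, W1, b1, W2, b2, W3, b3):
    h1 = relu(W1 @ x + b1)     # hidden layer 1
    h2 = relu(W2 @ h1 + b2)    # hidden layer 2
    y = W3 @ h2 + b3           # output layer (identity output activation)
    return y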
Activation functions
Multilayer perceptrons
Training of multilayer perceptrons
• Again, we can tune the parameters $\theta_k$ (which include $\mathbf{W}_k$, $\mathbf{b}_k$) of the classifier by maximizing the log-likelihood, for example, using gradient descent:
$$\theta_k \leftarrow \theta_k - \eta \frac{\partial \mathcal{L}}{\partial \theta_k}$$
• The gradient $\frac{\partial \mathcal{L}}{\partial \theta_k}$ can be computed efficiently using the backpropagation algorithm.
The backpropagation algorithm
(Rumelhart et al., 1986)
Backpropagation: An example with scalars
• We can compute the derivatives w.r.t. the model parameters $\theta$ and $w$ using the chain rule:
$$\frac{\partial \mathcal{L}}{\partial \theta} = \frac{\partial \mathcal{L}}{\partial y} \frac{\partial y}{\partial \theta}$$
$$\frac{\partial \mathcal{L}}{\partial w} = \underbrace{\frac{\partial \mathcal{L}}{\partial y} \frac{\partial y}{\partial h}}_{\partial \mathcal{L} / \partial h} \frac{\partial h}{\partial w}$$
[Diagram: a chain $x \to f_1 \to h \to f_2 \to y \to \mathcal{L}$, where $f_1$ has parameter $w$ and $f_2$ has parameter $\theta$; the derivatives $\partial\mathcal{L}/\partial y$, $\partial\mathcal{L}/\partial h$, $\partial\mathcal{L}/\partial\theta$, $\partial\mathcal{L}/\partial w$ flow backward through the chain.]
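A minimal numerical sketch of this scalar chain (choosing $f_1 = \tanh$ and a squared loss purely for illustration):

import numpy as np

x, w, theta = 2.0, 0.5, -1.0

# Forward pass through the chain x -> f1 -> h -> f2 -> y -> L
h = np.tanh(w * x)                 # h = f1(x; w)
y = theta * h                      # y = f2(h; theta)
L = 0.5 * y**2                     # scalar loss

# Backward pass: compute dL/dy once and reuse it
dL_dy = y
dL_dtheta = dL_dy * h              # dL/dtheta = dL/dy * dy/dtheta
dL_dh = dL_dy * theta              # dL/dh = dL/dy * dy/dh (reused below)
dL_dw = dL_dh * (1 - h**2) * x     # dL/dw = dL/dh * dh/dw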
Chain rule for multi-variable functions
• For multi-variable functions, the chain rule can be written in terms of Jacobian matrices:
$$\mathbf{y} = f(\mathbf{u}), \quad \mathbf{u} = g(\mathbf{x}), \qquad \mathbf{y} \in \mathbb{R}^M, \; \mathbf{u} \in \mathbb{R}^K, \; \mathbf{x} \in \mathbb{R}^N$$
Jacobian matrix:
$$\mathbf{J}_{f \circ g} = \begin{pmatrix} \frac{\partial y_1}{\partial x_1} & \cdots & \frac{\partial y_1}{\partial x_N} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_M}{\partial x_1} & \cdots & \frac{\partial y_M}{\partial x_N} \end{pmatrix}$$
Backpropagation for multi-variable functions
• We apply the chain rule to compute the derivatives w.r.t. the model parameters (and re-use intermediate derivatives):
$$\frac{\partial \mathcal{L}}{\partial \theta_j} = \sum_{k=1}^{K} \frac{\partial \mathcal{L}}{\partial y_k} \frac{\partial y_k}{\partial \theta_j}$$
$$\frac{\partial \mathcal{L}}{\partial h_l} = \sum_{k=1}^{K} \frac{\partial \mathcal{L}}{\partial y_k} \frac{\partial y_k}{\partial h_l}$$
$$\frac{\partial \mathcal{L}}{\partial w_i} = \sum_{l=1}^{L} \frac{\partial \mathcal{L}}{\partial h_l} \frac{\partial h_l}{\partial w_i}$$
[Diagram: the chain $\mathbf{x} \to f_1 \to \mathbf{h} \to f_2 \to \mathbf{y} \to \mathcal{L}$ with parameters $\mathbf{w}$ in $f_1$ and $\theta$ in $f_2$; derivatives flow backward.]
• We can compute the derivatives sequentially, going from the outputs of the network towards the inputs (thus the name of the algorithm: backpropagation).
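For example, a sketch of these backward sums for a single linear layer $\mathbf{y} = \mathbf{W}\mathbf{x} + \mathbf{b}$ (the column-vector convention is an assumption):

import numpy as np

def linear_backward(dL_dy, x, W):
    # Given dL/dy, propagate gradients through y = W x + b
    dL_dW = np.outer(dL_dy, x)   # dL/dW[i, j] = dL/dy[i] * x[j]
    dL_db = dL_dy                # dL/db = dL/dy
    dL_dx = W.T @ dL_dy          # sum over outputs: dL/dx[j] = sum_i dL/dy[i] * W[i, j]
    return dL_dW, dL_db, dL_dx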
PyTorch
• PyTorch is a programming framework which allows you to create complex multilayer models
without the need to implement the optimization procedure. Backpropagation is already
implemented in the framework.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(3, 5),
    nn.ReLU(),
    nn.Linear(5, 1),
)
optimizer = torch.optim.SGD(mlp.parameters(), lr=0.01)
for i in range(100):           # assumes x, targets and loss_fn are defined
    optimizer.zero_grad()
    # Compute loss
    y = mlp(x)
    loss = loss_fn(y, targets)
    loss.backward()            # backpropagate the gradients
    optimizer.step()           # update the parameters

[Diagram: an MLP with inputs x1, x2, x3 and a single output y.]
Classification problems: Cross-entropy loss
• Let us extend this loss to the case of $K$ classes. We can represent the target as a one-hot vector $\mathbf{y}$, where $y_j = 1$ if the example belongs to class $j$ and $y_i = 0$ for $i \neq j$:
$$\text{class 1: } \mathbf{y} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad \text{class 2: } \mathbf{y} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad \sum_{j=1}^{K} y_j = 1$$
• Similarly, we can represent the output of the network as a vector $\mathbf{f}$ whose $j$-th element $f_j$ is the probability that input $\mathbf{x}$ belongs to class $j$:
$$\mathbf{f} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \qquad 0 \le f_j \le 1 \qquad \sum_{j=1}^{K} f_j = 1$$
Classification problems: Cross-entropy loss
• To produce valid class probabilities, we apply the softmax nonlinearity to the outputs $h_j$ of the last layer of the neural network:
$$f_j = \frac{\exp h_j}{\sum_{k=1}^{K} \exp h_k}$$
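A minimal numpy sketch of the softmax and the corresponding cross-entropy loss (the max-shift for numerical stability is a standard trick, not shown on the slides):

import numpy as np

def softmax(h):
    e = np.exp(h - h.max(axis=-1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(f, y_onehot):
    # Mean over examples of -sum_j y_j log f_j
    return -np.mean(np.sum(y_onehot * np.log(f + 1e-12), axis=-1))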
Regression problems: Mean-squared error loss
• For regression problems, we typically minimize the mean-squared error (MSE) loss, for example:
$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{n_y} \sum_{j=1}^{n_y} \big( y_j^{(n)} - f_j(\mathbf{x}^{(n)}, \theta) \big)^2$$
where
• $y_j^{(n)}$ is the $j$-th element of $\mathbf{y}^{(n)}$
• $f_j$ is the $j$-th element of the network output $\mathbf{f}(\mathbf{x}, \theta)$
• $n_y$ is the number of elements in $\mathbf{y}^{(n)}$
• $N$ is the number of training examples
Convergence of gradient descent
Toy example: minimizing a quadratic function
• Consider minimizing a quadratic function $\mathcal{L}(\mathbf{w}) = \frac{1}{2}\mathbf{w}^\top \mathbf{A} \mathbf{w} - \mathbf{b}^\top \mathbf{w}$, whose level sets are ellipses.
• The axes of the ellipses are determined by the eigenvectors of matrix $\mathbf{A}$.
[Figure: contours of the quadratic loss.]
Effect of learning rate
• Suppose that we use gradient descent to find the minimum of the loss:
$$\theta_{t+1} = \theta_t - \eta \, \mathbf{g}(\theta_t)$$
• The learning rate η has a major effect on the convergence of the gradient descent.
[Figures: gradient descent trajectories on the loss contours. Left, small $\eta$: too slow convergence. Right, large $\eta$: oscillates and can even diverge.]
Convergence of gradient descent (see, e.g., Goh, 2017)
For quadratic loss: $\mathcal{L}(\mathbf{w}) = \frac{1}{2}\mathbf{w}^\top \mathbf{A} \mathbf{w} - \mathbf{b}^\top \mathbf{w}$
• The optimal learning rate depends on the curvature of the loss.
• If we select the learning rate optimally, the rate of convergence of the gradient descent,
$$\text{rate}(\eta) = \frac{\lVert \mathbf{w}_{t+1} - \mathbf{w}_* \rVert}{\lVert \mathbf{w}_t - \mathbf{w}_* \rVert},$$
is determined by the condition number of matrix $\mathbf{A}$:
$$\text{rate}(\eta_*) = \frac{\kappa(\mathbf{A}) - 1}{\kappa(\mathbf{A}) + 1}$$
• $\text{rate}(\eta_*) = 1$: no convergence.
[Figure: large $\kappa(\mathbf{A})$: slow convergence because of zigzagging.]
• For non-quadratic functions, the error surface is locally well approximated by a quadratic function:
$$\mathcal{L}(\mathbf{w}) \approx \mathcal{L}(\mathbf{w}_t) + \mathbf{g}^\top (\mathbf{w} - \mathbf{w}_t) + \frac{1}{2} (\mathbf{w} - \mathbf{w}_t)^\top \mathbf{H} (\mathbf{w} - \mathbf{w}_t)$$
• $\mathbf{H}$ is the matrix of second-order derivatives (called the Hessian):
$$\mathbf{H} = \begin{pmatrix} \frac{\partial^2 \mathcal{L}}{\partial w_1 \partial w_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial w_1 \partial w_M} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 \mathcal{L}}{\partial w_M \partial w_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial w_M \partial w_M} \end{pmatrix}$$
• For the quadratic loss $\mathcal{L}(\mathbf{w}) = \frac{1}{2}\mathbf{w}^\top \mathbf{A} \mathbf{w} - \mathbf{b}^\top \mathbf{w}$, the Hessian matrix is $\mathbf{H} = \mathbf{A}$. Thus, the convergence of gradient descent is affected by the properties of the Hessian.
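A small numpy sketch of gradient descent on such a quadratic; the matrix $\mathbf{A}$ is illustrative, and the step size uses the choice $\eta_* = 2/(\lambda_{\max} + \lambda_{\min})$ that is optimal for quadratics:

import numpy as np

# L(w) = 0.5 w^T A w - b^T w with an ill-conditioned A (kappa = 10)
A = np.diag([3.0, 0.3])
b = np.zeros(2)
w = np.array([2.0, 2.0])

lmax, lmin = 3.0, 0.3
eta = 2.0 / (lmax + lmin)        # optimal constant step size for a quadratic
for t in range(100):
    g = A @ w - b                # gradient of the quadratic loss
    w = w - eta * g              # error shrinks by (kappa-1)/(kappa+1) per step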
Optimization tricks
Why did deep learning start only in 2010–2012?
• Many components of deep learning were invented a long time ago, but the deep learning revolution started only in 2010–2012.
• Training of deep neural networks is a non-trivial optimization problem which requires multiple tricks: input normalization, weight initialization, mini-batch training (stochastic gradient descent), improved optimizers, batch normalization.
Input normalization
• For a linear model trained with the squared-error loss, the Hessian matrix is equal to the second-order moment of the data:
$$\mathbf{H} = \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}_n \mathbf{x}_n^\top = \mathbf{C}_x$$
• For fastest convergence, $\mathbf{H} = \mathbf{C}_x$ should be equal to the identity matrix $\mathbf{I}$. We can achieve this by decorrelating the input components using principal component analysis:
$$\tilde{\mathbf{x}} = \mathbf{D}^{-1/2} \mathbf{E}^\top (\mathbf{x} - \boldsymbol{\mu})$$
where $\boldsymbol{\mu}$ is the data mean and $\mathbf{E}\mathbf{D}\mathbf{E}^\top$ is the eigenvalue decomposition of the data covariance matrix.
• Neural networks are nonlinear models, but normalizing their inputs usually improves convergence.
• A simple way of input normalization: center to zero mean and scale to unit variance.
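A minimal numpy sketch of both normalization options (the epsilon guards are illustrative additions):

import numpy as np

def standardize(X):
    # Center each feature to zero mean and scale to unit variance
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def pca_whiten(X):
    # Decorrelate and scale the inputs using PCA
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(X)          # data covariance matrix
    D, E = np.linalg.eigh(C)        # C = E diag(D) E^T
    return Xc @ E / np.sqrt(D + 1e-8)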
Weight initialization in a linear layer
• The weights $w_{ij}$ of a linear layer are usually initialized with random numbers drawn from some distribution $p(w_{ij})$.
• Glorot and Bengio (2010): If we select $p(w_{ij})$ carelessly, the magnitudes of the signals in the forward/backward pass can grow/decay as the signals propagate through the network, which may have a negative impact on the optimization landscape.
• Popular initialization schemes balance the magnitudes of the signals in the forward and backward passes:
$$\text{Xavier's initialization:} \quad w_{ij} \sim \text{uniform}\left[ -\frac{\sqrt{6}}{\sqrt{N_x + N_y}}, \; \frac{\sqrt{6}}{\sqrt{N_x + N_y}} \right]$$
where $N_x$, $N_y$ are the numbers of inputs/outputs of a linear layer.
• Important: initialization schemes assume normalized inputs!
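A sketch of this scheme in numpy (the weight-matrix shape convention is an assumption):

import numpy as np

def xavier_uniform(n_in, n_out, rng=np.random.default_rng(0)):
    # Xavier/Glorot uniform initialization for a layer with n_in inputs, n_out outputs
    bound = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-bound, bound, size=(n_out, n_in))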
Mini-batch training
(stochastic gradient descent)
Mini-batch training
• The loss function contains $N$ terms corresponding to the training samples, for example:
$$\mathcal{L}(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \sum_{j=1}^{K} y_j^{(n)} \log f_j(\mathbf{x}^{(n)}, \theta)$$
• Large data sets are redundant: gradients computed on two different parts of the data are likely to be similar. Why waste computation?
• We can compute the gradient using only part of the training data (a mini-batch $\mathcal{B}_m$):
$$\frac{\partial \mathcal{L}}{\partial \theta} \approx -\frac{1}{|\mathcal{B}_m|} \sum_{n \in \mathcal{B}_m} \sum_{j=1}^{K} y_j^{(n)} \frac{\partial}{\partial \theta} \log f_j(\mathbf{x}^{(n)}, \theta)$$
• By using mini-batches, we introduce "noise" into the gradient computations; thus the method is called stochastic gradient descent.
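A sketch of one epoch of mini-batch SGD (the function name and signature are illustrative, not from the course code):

import numpy as np

def sgd_epoch(theta, X, Y, grad_fn, batch_size=32, eta=0.01, rng=np.random.default_rng(0)):
    idx = rng.permutation(len(X))               # shuffle between epochs
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]   # indices of one mini-batch
        g = grad_fn(theta, X[batch], Y[batch])  # gradient estimated on the mini-batch
        theta = theta - eta * g
    return theta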
Practical considerations for mini-batch training
• Epoch: going through all of the training examples once (usually using mini-batch training).
• It is good to shuffle the data between epochs when producing mini-batches (otherwise gradient
estimates are biased towards a particular mini-batch split).
• Mini-batches should be balanced across classes.
• The recent trend is to use as large batches as possible (depends on the GPU memory size).
• Using larger batch sizes reduces the amount of noise in the gradient estimates.
• Computing the gradient for multiple samples at the same time is computationally efficient (requires
matrix-matrix multiplications which are efficient, especially on GPUs).
Model fine-tuning during mini-batch training
• The simplest schedule is to decrease the learning rate after every $n$ updates.
• Another popular trick is to use an exponential moving average of the model parameters as the final model:
$$\theta'_t = \gamma \theta'_{t-1} + (1 - \gamma) \theta_t$$
[Figure: optimization trajectory on the loss contours.]
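A one-function sketch of this parameter averaging (the default $\gamma$ is an illustrative choice):

import numpy as np

def ema_update(theta_avg, theta, gamma=0.999):
    # theta'_t = gamma * theta'_{t-1} + (1 - gamma) * theta_t
    return gamma * theta_avg + (1 - gamma) * theta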
Improved optimization algorithms
Problems with gradient descent
• When the curvature of the objective function varies substantially in different directions, the optimization trajectory of the gradient descent can zigzag.
[Figure: zigzagging gradient descent trajectory on elongated loss contours.]
Momentum method (Polyak, 1964)
• Idea:
• We would like to move faster in directions with small but consistent gradients.
• We would like to move slower in directions with big but inconsistent gradients.
• Implementation: Aggregate negative gradients in a momentum $\mathbf{m}_t$:
$$\mathbf{m}_{t+1} = \alpha \mathbf{m}_t - \eta_t \mathbf{g}_t$$
$$\theta_{t+1} = \theta_t + \mathbf{m}_{t+1}$$
[Figure: momentum trajectory on the loss contours.]
Adam (Kingma and Ba, 2014)
• The most popular algorithm today is Adam, which uses a unit-less update rule:
$$\theta_t \leftarrow \theta_{t-1} - \eta_t \frac{\hat{\mathbf{m}}_t}{\sqrt{\hat{\mathbf{v}}_t} + \epsilon}$$
• First and second order statistics of the gradient are computed using exponential moving averages:
$$\mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1 - \beta_1) \mathbf{g}_t$$
$$\mathbf{v}_t = \beta_2 \mathbf{v}_{t-1} + (1 - \beta_2) \mathbf{g}_t^2$$
• A correction is used to improve the estimates at the beginning of training ($\beta^t$ is $\beta$ to the power of $t$):
$$\hat{\mathbf{m}}_t = \mathbf{m}_t / (1 - \beta_1^t) \qquad \hat{\mathbf{v}}_t = \mathbf{v}_t / (1 - \beta_2^t)$$
• Since the update rule is unit-less, the optimization procedure is not affected by the scale of the objective function.
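A minimal sketch of one Adam update (the defaults follow the values commonly used with Adam):

import numpy as np

def adam_step(theta, m, v, grad, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # t is the 1-based update index, used for bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v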
Why Adam works well
$$\theta_t \leftarrow \theta_{t-1} - \eta \frac{\hat{\mathbf{m}}_t}{\sqrt{\hat{\mathbf{v}}_t} + \epsilon} \qquad \mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1 - \beta_1) \mathbf{g}_t \qquad \mathbf{v}_t = \beta_2 \mathbf{v}_{t-1} + (1 - \beta_2) \mathbf{g}_t^2$$
• In Adam, the effective step size $|\Delta_t|$ is bounded. In the most common case:
$$|\Delta_t| = \eta \left| \frac{\hat{m}_t}{\sqrt{\hat{v}_t}} \right| \approx \eta \frac{|E[g]|}{\sqrt{E[g^2]}} \le \eta \qquad \text{because} \quad E[g^2] = E[g]^2 + E[(g - E[g])^2]$$
Thus, we never take too large steps (which can happen with standard gradient descent).
• We move at the maximum speed (step size $\eta$) only if $g$ is the same between updates (mini-batches), that is, when the gradients are consistent.
• At convergence, when we start fluctuating around the optimum, $E[g] \approx 0$ and $E[g^2] > 0$, so the effective step size gets smaller. Thus, Adam has a mechanism for automatic annealing of the learning rate.
Batch normalization
Batch normalization (Ioffe and Szegedy, 2015)
• Idea: Since input normalization has a positive effect on training, can we also normalize the intermediate signals? The problem is that these signals change during training, so we cannot perform the normalization before training.
• The solution is to normalize intermediate signals to zero mean and unit variance in each training
mini-batch:
1. Compute the means and variances of the intermediate signals $\mathbf{x}$ from the current mini-batch $\{\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(N)}\}$:
$$\boldsymbol{\mu} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}^{(i)} \qquad \boldsymbol{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} (\mathbf{x}^{(i)} - \boldsymbol{\mu})^2$$
2. Normalize the signals to zero mean and unit variance:
$$\tilde{\mathbf{x}} = \frac{\mathbf{x} - \boldsymbol{\mu}}{\sqrt{\boldsymbol{\sigma}^2 + \epsilon}}$$
3. Scale and shift the signals with trainable parameters $\gamma$ and $\beta$:
$$\mathbf{y} = \gamma \tilde{\mathbf{x}} + \beta$$
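A minimal numpy sketch of these three steps for one mini-batch (training mode only; running statistics are covered on the next slide):

import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x has shape (N, features); statistics are computed over the mini-batch
    mu = x.mean(axis=0)                       # step 1: mini-batch mean
    var = x.var(axis=0)                       #         and variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # step 2: normalize
    return gamma * x_hat + beta               # step 3: scale and shift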
Batch normalization: Training and evaluation modes
• The mean and standard deviation are computed for each mini-batch. What should we do at test time, when we use the trained network for a test example?
• Batch normalization layer keeps track of the batch statistics (mean and standard deviation) during
training:
$$\boldsymbol{\mu} \leftarrow (1 - \alpha) \boldsymbol{\mu} + \alpha \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}^{(i)}$$
$$\boldsymbol{\sigma}^2 \leftarrow (1 - \alpha) \boldsymbol{\sigma}^2 + \alpha \frac{1}{N} \sum_{i=1}^{N} (\mathbf{x}^{(i)} - \boldsymbol{\mu})^2$$
Batch normalization: Training and evaluation modes
• PyTorch: If you have a batch normalization layer, the behavior of the network in the training and evaluation modes will be different:
• Training: use statistics from the mini-batch; update the running statistics $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}^2$.
• Evaluation: use the running statistics $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}^2$, keeping them fixed.
• Important to remember: BN introduces dependencies between samples in a mini-batch in the computational graph.

model = nn.Sequential(
    nn.Linear(1, 100),
    nn.BatchNorm1d(100),
    nn.ReLU(),
    nn.Linear(100, 1),
)

# Switch to training mode
model.train()
# train the model
...

# Switch to evaluation mode
model.eval()
# test the model
Home assignments
Assignment 01 mlp
1. Implement the backpropagation algorithm and train a multilayer perceptron (MLP) in numpy.
output layer:   $\mathbf{y} = \psi(\mathbf{W}_3 \mathbf{h}_2 + \mathbf{b}_3)$
hidden layer 2: $\mathbf{h}_2 = \phi(\mathbf{W}_2 \mathbf{h}_1 + \mathbf{b}_2)$
hidden layer 1: $\mathbf{h}_1 = \phi(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1)$
input layer:    $\mathbf{x} = (x_1, x_2, x_3)$
Assignment 01 mlp
2. Implement backpropagation for a multilayer perceptron network in numpy. For each block of a
neural network, you need to implement the following computations.
[Diagram: a block $f$ with input $\mathbf{x}$, parameters $\theta$ and output $\mathbf{y} = f(\mathbf{x}, \theta)$; given $\partial \mathcal{L}/\partial \mathbf{y}$, the block must compute $\partial \mathcal{L}/\partial \mathbf{x}$ and $\partial \mathcal{L}/\partial \theta$.]