Convolutional Networks
http://cs231n.github.io/convolutional-networks/
Table of Contents:
Architecture Overview
ConvNet Layers
Convolutional Layer
Pooling Layer
Normalization Layer
Fully-Connected Layer
Converting Fully-Connected Layers to Convolutional Layers
ConvNet Architectures
Layer Patterns
Layer Sizing Patterns
Case Studies (LeNet / AlexNet / ZFNet / GoogLeNet / VGGNet)
Computational Considerations
Additional References
Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: they are made up of neurons with learnable weights and biases. What changes is that ConvNet architectures make the explicit assumption that the
inputs are images, which allows us to encode certain properties into the architecture.
These then make the forward function more efficient to implement and vastly reduce
the amount of parameters in the network.
Architecture Overview
Recall: Regular Neural Nets. As we saw in the previous chapter, Neural Networks receive
an input (a single vector), and transform it through a series of hidden layers. Each
hidden layer is made up of a set of neurons, where each neuron is fully connected to all
neurons in the previous layer, and where neurons in a single layer function completely
independently and do not share any connections. The last fully-connected layer is called
the "output layer" and in classiTcation settings it represents the class scores.
Regular Neural Nets don't scale well to full images. In CIFAR-10, images are only of size
32x32x3 (32 wide, 32 high, 3 color channels), so a single fully-connected neuron in a
first hidden layer of a regular Neural Network would have 32*32*3 = 3072 weights. This
amount still seems manageable, but clearly this fully-connected structure does not
scale to larger images. For example, an image of more respectable size, e.g. 200x200x3,
would lead to neurons that have 200*200*3 = 120,000 weights. Moreover, we would
almost certainly want to have several such neurons, so the parameters would add up
quickly! Clearly, this full connectivity is wasteful and the huge number of parameters
would quickly lead to overfitting.
3D volumes of neurons. Convolutional Neural Networks take advantage of the fact that
the input consists of images and they constrain the architecture in a more sensible way.
In particular, unlike a regular Neural Network, the layers of a ConvNet have neurons
arranged in 3 dimensions: width, height, depth. (Note that the word depth here refers to
the third dimension of an activation volume, not to the depth of a full Neural Network,
which can refer to the total number of layers in a network.) For example, the input
images in CIFAR-10 are an input volume of activations, and the volume has dimensions
32x32x3 (width, height, depth respectively). As we will soon see, the neurons in a layer
will only be connected to a small region of the layer before it, instead of all of the
neurons in a fully-connected manner. Moreover, the final output layer would for
CIFAR-10 have dimensions 1x1x10, because by the end of the ConvNet architecture we
will reduce the full image into a single vector of class scores, arranged along the depth
dimension. Here is a visualization:
Left: A regular 3-layer Neural Network. Right: A ConvNet arranges its neurons in three dimensions
(width, height, depth), as visualized in one of the layers. Every layer of a ConvNet transforms the
3D input volume to a 3D output volume of neuron activations. In this example, the red input layer
holds the image, so its width and height would be the dimensions of the image, and the depth
would be 3 (Red, Green, Blue channels).
Example Architecture: Overview. We will go into more details below, but a simple
ConvNet for CIFAR-10 classification could have the architecture [INPUT - CONV - RELU - POOL - FC]. In more detail:
INPUT [32x32x3] will hold the raw pixel values of the image, in this case an image
of width 32, height 32, and with three color channels R,G,B.
CONV layer will compute the output of neurons that are connected to local
regions in the input, each computing a dot product between their weights and the
region they are connected to in the input volume. This may result in volume such
as [32x32x12].
RELU layer will apply an elementwise activation function, such as the max(0, x)
thresholding at zero. This leaves the size of the volume unchanged ([32x32x12]).
POOL layer will perform a downsampling operation along the spatial dimensions
(width, height), resulting in volume such as [16x16x12].
FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of
size [1x1x10], where each of the 10 numbers corresponds to a class score, such as
among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as
the name implies, each neuron in this layer will be connected to all the numbers in
the previous volume.
In this way, ConvNets transform the original image layer by layer from the original pixel
values to the final class scores. Note that some layers contain parameters and others
don't. In particular, the CONV/FC layers perform transformations that are a function of
not only the activations in the input volume, but also of the parameters (the weights and
biases of the neurons). On the other hand, the RELU/POOL layers will implement a fixed
function. The parameters in the CONV/FC layers will be trained with gradient descent so
that the class scores that the ConvNet computes are consistent with the labels in the
training set for each image.
In summary:
A ConvNet architecture is a list of Layers that transform the image volume into an
output volume (e.g. holding the class scores)
There are a few distinct types of Layers (e.g. CONV/FC/RELU/POOL are by far the
most popular)
Each Layer accepts an input 3D volume and transforms it to an output 3D volume
through a differentiable function
Each Layer may or may not have parameters (e.g. CONV/FC do, RELU/POOL
don't)
Each Layer may or may not have additional hyperparameters (e.g.
CONV/FC/POOL do, RELU doesn't)
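To make the volume bookkeeping concrete, here is a minimal sketch (ours, not from the notes) that traces the shapes through the example [INPUT - CONV - RELU - POOL - FC] architecture, using the output-size formulas derived later in this section; the CONV settings (F = 3, S = 1, P = 1) are an assumption chosen to preserve width and height:

# Shape bookkeeping for the example ConvNet above (a sketch).
def conv_shape(w, h, d, K, F, S, P):
    # (W - F + 2P)/S + 1 along each spatial dimension; depth becomes K.
    return ((w - F + 2 * P) // S + 1, (h - F + 2 * P) // S + 1, K)

def pool_shape(w, h, d, F, S):
    # Pooling affects only the spatial dimensions; depth is preserved.
    return ((w - F) // S + 1, (h - F) // S + 1, d)

s = (32, 32, 3)                          # INPUT
s = conv_shape(*s, K=12, F=3, S=1, P=1)  # CONV -> (32, 32, 12)
                                         # RELU leaves the size unchanged
s = pool_shape(*s, F=2, S=2)             # POOL -> (16, 16, 12)
s = (1, 1, 10)                           # FC   -> the 10 class scores
print(s)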
The activations of an example ConvNet architecture. The initial volume stores the raw image
pixels and the last volume stores the class scores. Each volume of activations along the
processing path is shown as a column. Since it's difficult to visualize 3D volumes, we lay out each
volume's slices in rows. The last layer volume holds the scores for each class, but here we only
visualize the sorted top 5 scores, and print the labels of each one. The full web-based demo is
shown in the header of our website. The architecture shown here is a tiny VGG Net, which we will
discuss later.
We now describe the individual layers and the details of their hyperparameters and their
connectivities.
Convolutional Layer
The Conv layer is the core building block of a Convolutional Network, and its output
volume can be interpreted as holding neurons arranged in a 3D volume. We now
discuss the details of the neuron connectivities, their arrangement in space, and their
parameter sharing scheme.
Overview and Intuition. The CONV layer's parameters consist of a set of learnable
filters. Every filter is small spatially (along width and height), but extends through the full
depth of the input volume. During the forward pass, we slide (more precisely, convolve)
each filter across the width and height of the input volume, producing a 2-dimensional
activation map of that filter. As we slide the filter across the input, we are computing the
dot product between the entries of the filter and the input. Intuitively, the network will
learn filters that activate when they see some specific type of feature at some spatial
position in the input. Stacking these activation maps for all filters along the depth
dimension forms the full output volume. Every entry in the output volume can thus also
be interpreted as an output of a neuron that looks at only a small region in the input and
shares parameters with neurons in the same activation map (since these numbers all
result from applying the same Tlter). We now dive into the details of this process.
Local Connectivity. When dealing with high-dimensional inputs such as images, as we
saw above it is impractical to connect neurons to all neurons in the previous volume.
Instead, we will connect each neuron to only a local region of the input volume. The
spatial extent of this connectivity is a hyperparameter called the receptive field of the
neuron. The extent of the connectivity along the depth axis is always equal to the depth
of the input volume. It is important to note this asymmetry in how we treat the spatial
dimensions (width and height) and the depth dimension: The connections are local in
space (along width and height), but always full along the entire depth of the input
volume.
Example 1. For example, suppose that the input volume has size [32x32x3] (e.g. an RGB
CIFAR-10 image). If the receptive field is of size 5x5, then each neuron in the Conv Layer
will have weights to a [5x5x3] region in the input volume, for a total of 5*5*3 = 75
weights. Notice that the extent of the connectivity along the depth axis must be 3,
since this is the depth of the input volume.
Example 2. Suppose an input volume had size [16x16x20]. Then using an example
receptive field size of 3x3, every neuron in the Conv Layer would now have a total of
3*3*20 = 180 connections to the input volume. Notice that, again, the connectivity is
local in space (e.g. 3x3), but full along the input depth (20).
Left: An example input volume in red (e.g. a 32x32x3 CIFAR-10 image), and an example volume
of neurons in the first Convolutional layer. Each neuron in the convolutional layer is connected
only to a local region in the input volume spatially, but to the full depth (i.e. all color channels).
Note, there are multiple neurons (5 in this example) along the depth, all looking at the same
region in the input - see discussion of depth columns in text below. Right: The neurons from the
Neural Network chapter remain unchanged: They still compute a dot product of their weights with
the input followed by a non-linearity, but their connectivity is now restricted to be local spatially.
Spatial arrangement. We have explained the connectivity of each neuron in the Conv
Layer to the input volume, but we haven't yet discussed how many neurons there are in
the output volume or how they are arranged. Three hyperparameters control the size of
the output volume: the depth, stride and zero-padding. We discuss these next:
1. First, the depth of the output volume is a hyperparameter that we can pick; It
controls the number of neurons in the Conv layer that connect to the same region
of the input volume. This is analogous to a regular Neural Network, where we had
multiple neurons in a hidden layer all looking at the exact same input. As we will
see, all of these neurons will learn to activate for different features in the input. For
example, if the first Convolutional Layer takes as input the raw image, then
different neurons along the depth dimension may activate in the presence of various
oriented edges, or blobs of color. We will refer to a set of neurons that are all
looking at the same region of the input as a depth column.
2. Second, we must specify the stride with which we allocate depth columns around
the spatial dimensions (width and height). When the stride is 1, then we will
allocate a new depth column of neurons to spatial positions only 1 spatial unit
apart. This will lead to heavily overlapping receptive fields between the columns,
and also to large output volumes. Conversely, if we use higher strides then the
receptive Telds will overlap less and the resulting output volume will have smaller
dimensions spatially.
3. As we will soon see, sometimes it will be convenient to pad the input with zeros
spatially on the border of the input volume. The size of this zero-padding is a
hyperparameter. The nice feature of zero padding is that it will allow us to control
the spatial size of the output volumes. In particular, we will sometimes want to
exactly preserve the spatial size of the input volume.
We can compute the spatial size of the output volume as a function of the input volume
size (W), the receptive field size of the Conv Layer neurons (F), the stride with which
they are applied (S), and the amount of zero padding used (P) on the border. You can
convince yourself that the correct formula for calculating how many neurons "fit" is
given by (W - F + 2P)/S + 1. If this number is not an integer, then the strides are set
incorrectly and the neurons cannot be tiled so that they "fit" across the input volume
neatly, in a symmetric way. An example might help to get intuitions for this formula:
Illustration of spatial arrangement. In this example there is only one spatial dimension (x-axis),
one neuron with a receptive field size of F = 3, the input size is W = 5, and there is zero padding of
P = 1. Left: The neuron strided across the input in stride of S = 1, giving output of size (5 - 3 +
2)/1 + 1 = 5. Right: The neuron uses stride of S = 2, giving output of size (5 - 3 + 2)/2 + 1 = 3. Notice
that stride S = 3 could not be used since it wouldn't fit neatly across the volume. In terms of the
equation, this can be determined since (5 - 3 + 2) = 4 is not divisible by 3.
The neuron weights are in this example [1,0,-1] (shown on the very right), and its bias is zero. These
weights are shared across all yellow neurons (see parameter sharing below).
Use of zero-padding. In the example above on the left, note that the input dimension was 5
and the output dimension was equal: also 5. This worked out so because our receptive
fields were 3 and we used zero padding of 1. If there was no zero-padding used, then
the output volume would have had spatial dimension of only 3, because that is how
many neurons would have "fit" across the original input. In general, setting zero padding
to be P = (F - 1)/2 when the stride is S = 1 ensures that the input volume and
output volume will have the same size spatially. It is very common to use zero-padding
in this way and we will discuss the full reasons when we talk more about ConvNet
architectures.
Constraints on strides. Note that the spatial arrangement hyperparameters have mutual
constraints. For example, when the input has size W = 10, no zero-padding is used
P = 0, and the filter size is F = 3, then it would be impossible to use stride S = 2,
since (W - F + 2P)/S + 1 = (10 - 3 + 0)/2 + 1 = 4.5, i.e. not an integer,
indicating that the neurons don't "fit" neatly and symmetrically across the input.
Therefore, this setting of the hyperparameters is considered to be invalid, and a
ConvNet library would likely throw an exception. As we will see in the ConvNet
architectures section, sizing the ConvNets appropriately so that all the dimensions
"work out" can be a real headache, which the use of zero-padding and some design
guidelines will significantly alleviate.
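As a sanity check, here is the kind of validation such a library might perform (a hypothetical helper, not any particular library's API):

def conv_output_size(W, F, S, P):
    # Neurons "fit" only if (W - F + 2P) is divisible by the stride S.
    num = W - F + 2 * P
    if num % S != 0:
        raise ValueError("filters do not tile the input neatly")
    return num // S + 1

print(conv_output_size(10, 3, 1, 0))  # 8: a valid setting
print(conv_output_size(5, 3, 2, 1))   # 3: the illustration above, right side
conv_output_size(10, 3, 2, 0)         # raises: (10 - 3 + 0)/2 + 1 = 4.5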
Real-world example. The Krizhevsky et al. architecture that won the ImageNet challenge
in 2012 accepted images of size [227x227x3]. On the first Convolutional Layer, it used
neurons with receptive field size F = 11, stride S = 4 and no zero padding P = 0.
Since (227 - 11)/4 + 1 = 55, and since the Conv layer had a depth of K = 96, the Conv
layer output volume had size [55x55x96]. Each of the 55*55*96 neurons in this volume
was connected to a region of size [11x11x3] in the input volume. Moreover, all 96
neurons in each depth column are connected to the same [11x11x3] region of the input,
but of course with different weights.
Parameter Sharing. A parameter sharing scheme is used in Convolutional Layers to
control the number of parameters. Using the real-world example above, we see that
there are 55*55*96 = 290,400 neurons in the first Conv Layer, and each has 11*11*3 =
363 weights and 1 bias. Together, this adds up to 290400 * 364 = 105,705,600
parameters on the first layer of the ConvNet alone. Clearly, this number is very high.
It turns out that we can dramatically reduce the number of parameters by making one
reasonable assumption: that if one feature is useful to compute at some spatial
position (x,y), then it should also be useful to compute at a different position (x2,y2). In
other words, denoting a single 2-dimensional slice of depth as a depth slice (e.g. a
volume of size [55x55x96] has 96 depth slices, each of size [55x55]), we are going to
constrain the neurons in each depth slice to use the same weights and bias. With this
parameter sharing scheme, the first Conv Layer in our example would now have only 96
unique sets of weights (one for each depth slice), for a total of 96*11*11*3 = 34,848
unique weights, or 34,944 parameters (+96 biases). Put differently, all 55*55 neurons in
each depth slice will now be using the same parameters. In practice during
backpropagation, every neuron in the volume will compute the gradient for its weights,
but these gradients will be added up across each depth slice and only update a single
set of weights per slice.
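The arithmetic is easy to reproduce (a small sketch using the numbers from the first Conv Layer above):

neurons = 55 * 55 * 96            # 290,400 neurons in the output volume
per_neuron = 11 * 11 * 3 + 1      # 363 weights plus 1 bias each
print(neurons * per_neuron)       # 105,705,600 parameters without sharing

shared = 96 * (11 * 11 * 3) + 96  # one filter and one bias per depth slice
print(shared)                     # 34,944 parameters with parameter sharing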
Notice that if all neurons in a single depth slice are using the same weight vector, then
the forward pass of the CONV layer can in each depth slice be computed as a
convolution of the neuron's weights with the input volume (hence the name:
Convolutional Layer). This is why it is common to refer to each set of weights as a filter
(or a kernel) that is convolved with the input.
Example filters learned by Krizhevsky et al. Each of the 96 filters shown here is of size [11x11x3],
and each one is shared by the 55*55 neurons in one depth slice. Notice that the parameter
sharing assumption is relatively reasonable: If detecting a horizontal edge is important at some
location in the image, it should intuitively be useful at some other location as well due to the
translationally-invariant structure of images. There is therefore no need to relearn to detect a
horizontal edge at every one of the 55*55 distinct locations in the Conv layer output volume.
Note that sometimes the parameter sharing assumption may not make sense. This is
especially the case when the input images to a ConvNet have some specific centered
structure, where we should expect, for example, that completely different features
should be learned on one side of the image than another. One practical example is when
the input are faces that have been centered in the image. You might expect that
different eye-specific or hair-specific features could (and should) be learned in different
spatial locations. In that case it is common to relax the parameter sharing scheme, and
instead simply call the layer a Locally-Connected Layer.
Numpy examples. To make the discussion above more concrete, let's express the same
ideas in code and with a specific example. Suppose that the input volume is a
numpy array X. Then:
A depth column at position (x,y) would be the activations X[x,y,:].
A depth slice, or equivalently an activation map at depth d, would be the activations X[:,:,d].
Conv Layer Example. Suppose that the input volume X has shape X.shape:
(11,11,4). Suppose further that we use no zero padding (P = 0), that the filter size
is F = 5, and that the stride is S = 2. The output volume would therefore have spatial
size (11 - 5)/2 + 1 = 4, giving a volume with width and height of 4. The activation map in
the output volume (call it V), would then look as follows (only some of the elements
are computed in this example):
V[0,0,0] = np.sum(X[:5,:5,:] * W0) + b0
V[1,0,0] = np.sum(X[2:7,:5,:] * W0) + b0
V[2,0,0] = np.sum(X[4:9,:5,:] * W0) + b0
V[3,0,0] = np.sum(X[6:11,:5,:] * W0) + b0
Remember that in numpy, the operation * above denotes elementwise multiplication
between the arrays. Notice also that W0 is the weight vector of that neuron and b0 is
the bias; here, W0 is assumed to be of shape W0.shape: (5,5,4), since the filter size is 5
and the depth of the input volume is 4. Notice that at each point, we are computing the
dot product as seen before in ordinary neural networks. Also, we see that we are using
the same weight and bias (due to parameter sharing), and that the indices along the
width increase in steps of 2 (i.e. the stride). To construct a second activation map in the
output volume, we would have:
V[0,0,1] = np.sum(X[:5,:5,:] * W1) + b1
V[1,0,1] = np.sum(X[2:7,:5,:] * W1) + b1
V[2,0,1] = np.sum(X[4:9,:5,:] * W1) + b1
V[3,0,1] = np.sum(X[6:11,:5,:] * W1) + b1
V[0,1,1] = np.sum(X[:5,2:7,:] * W1) + b1 (example of going along y)
V[2,3,1] = np.sum(X[4:9,6:11,:] * W1) + b1 (or along both)
where we see that we are indexing into the second depth dimension in V (at index 1)
because we are computing the second activation map, and that a different set of
parameters ( W1 ) is now used. In the example above, we are for brevity leaving out
some of the other operations the Conv Layer would perform to fill the other parts of
the output array V . Additionally, recall that these activation maps are often passed
elementwise through an activation function such as ReLU, but this is not shown here.
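Filling in the rest of V is just two more loops over the spatial positions and one over the filters. Here is a naive sketch of the full forward pass (our illustration, assuming the shapes above; real implementations are vectorized):

import numpy as np

def conv_forward_naive(X, filters, biases, S):
    # X: input volume (H, W, D); filters: list of (F, F, D) arrays.
    F = filters[0].shape[0]
    H_out = (X.shape[0] - F) // S + 1
    W_out = (X.shape[1] - F) // S + 1
    V = np.zeros((H_out, W_out, len(filters)))
    for d, (Wd, bd) in enumerate(zip(filters, biases)):
        for i in range(H_out):
            for j in range(W_out):
                # Dot product of one filter with one local region.
                V[i, j, d] = np.sum(X[i*S:i*S+F, j*S:j*S+F, :] * Wd) + bd
    return V

X = np.random.randn(11, 11, 4)
W0, W1 = np.random.randn(5, 5, 4), np.random.randn(5, 5, 4)
V = conv_forward_naive(X, [W0, W1], [1.0, 0.0], S=2)
print(V.shape)  # (4, 4, 2), matching (11 - 5)/2 + 1 = 4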
Summary. To summarize, the Conv Layer:
Accepts a volume of size W1 × H1 × D1
Requires four hyperparameters:
Number of filters K,
their spatial extent F,
the stride S,
the amount of zero padding P.
Produces a volume of size W2 × H2 × D2 where:
W2 = (W1 - F + 2P)/S + 1
H2 = (H1 - F + 2P)/S + 1 (i.e. width and height are computed equally
by symmetry)
D2 = K
With parameter sharing, it introduces F*F*D1 weights per filter, for a total of
(F*F*D1)*K weights and K biases.
Convolution Demo. [The web version embeds an interactive convolution demo here: an
input volume of size [5x5x3] (zero-padded with P = 1) is convolved with two filters W0
and W1 of size [3x3x3] at stride S = 2, producing an output volume of size [3x3x2]. The
biases are b0 = 1 and b1 = 0.]
Implementation as Matrix Multiplication. Note that the convolution operation
essentially performs dot products between the filters and local regions of the input. A
common implementation pattern of the CONV layer is to take advantage of this fact and
formulate the forward pass of a convolutional layer as one big matrix multiply as
follows:
1. The local regions in the input image are stretched out into columns in an
operation commonly called im2col. For example, if the input is [227x227x3] and it
is to be convolved with 11x11x3 filters at stride 4, then we would take [11x11x3]
blocks of pixels in the input and stretch each block into a column vector of size
11*11*3 = 363. Iterating this process in the input at stride of 4 gives (227-11)/4+1 =
55 locations along both width and height, leading to an output matrix X_col of
im2col of size [363 x 3025], where every column is a stretched out receptive field
and there are 55*55 = 3025 of them in total. Note that since the receptive fields
overlap, every number in the input volume may be duplicated in multiple distinct
columns.
2. The weights of the CONV layer are similarly stretched out into rows. For example,
if there are 96 filters of size [11x11x3] this would give a matrix W_row of size [96
x 363].
3. The result of a convolution is now equivalent to performing one large matrix
multiply np.dot(W_row, X_col) , which evaluates the dot product between
every filter and every receptive field location. In our example, the output of this
operation would be [96 x 3025], giving the output of the dot product of each filter
at each location.
4. The result must finally be reshaped back to its proper output dimension
[55x55x96].
This approach has the downside that it can use a lot of memory, since some values in
the input volume are replicated multiple times in X_col . However, the benefit is that
there are many very efficient implementations of Matrix Multiplication that we can take
advantage of (for example, in the commonly used BLAS API). Moreover, the same im2col
idea can be reused to perform the pooling operation, which we discuss next.
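Before moving on, here is a sketch of steps 1-4 above in numpy (a naive im2col built with loops, for clarity rather than speed; our illustration, not library code):

import numpy as np

def im2col(X, F, S):
    # Stretch each (F, F, D) receptive field of X into one column.
    H_out = (X.shape[0] - F) // S + 1
    W_out = (X.shape[1] - F) // S + 1
    cols = [X[i*S:i*S+F, j*S:j*S+F, :].reshape(-1)
            for i in range(H_out) for j in range(W_out)]
    return np.stack(cols, axis=1)            # (F*F*D, H_out*W_out)

X = np.random.randn(227, 227, 3)
X_col = im2col(X, F=11, S=4)                 # (363, 3025)
W_row = np.random.randn(96, 11 * 11 * 3)     # 96 stretched-out filters
out = np.dot(W_row, X_col)                   # (96, 3025)
out = out.reshape(96, 55, 55).transpose(1, 2, 0)  # back to (55, 55, 96)
print(out.shape)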
Backpropagation. The backward pass for a convolution operation (for both the data
and the weights) is also a convolution (but with spatially-flipped filters). This is easy to
derive in the 1-dimensional case with a toy example, sketched below.
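In this sketch (our illustration, not code from the notes), the forward pass is implemented as a valid cross-correlation, so the input gradient comes out as a full convolution with the same filter (np.convolve flips its second argument), which we verify numerically:

import numpy as np

x = np.random.randn(7)
w = np.array([1.0, 0.0, -1.0])

y = np.correlate(x, w, mode='valid')    # forward: y[i] = sum_k x[i+k] w[k]
dy = np.random.randn(y.size)            # upstream gradient dL/dy

dx = np.convolve(dy, w, mode='full')    # dL/dx[j] = sum_i dy[i] w[j-i]
dw = np.correlate(x, dy, mode='valid')  # dL/dw[k] = sum_i x[i+k] dy[i]

# Numerical check of dx against finite differences of L = dot(y, dy).
eps = 1e-6
dx_num = np.zeros_like(x)
for j in range(x.size):
    xp = x.copy()
    xp[j] += eps
    dx_num[j] = np.dot(np.correlate(xp, w, 'valid') - y, dy) / eps
print(np.allclose(dx, dx_num, atol=1e-4))  # True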
Pooling Layer
It is common to periodically insert a Pooling layer in-between successive Conv layers in
a ConvNet architecture. Its function is to progressively reduce the spatial size of the
representation to reduce the amount of parameters and computation in the network,
and hence to also control overfitting. The Pooling Layer operates independently on
every depth slice of the input and resizes it spatially, using the MAX operation. The most
common form is a pooling layer with filters of size 2x2 applied with a stride of 2, which
downsamples every depth slice in the input by 2 along both width and height, discarding
75% of the activations. Every MAX operation would in this case be taking a max over 4
numbers (a little 2x2 region in some depth slice). The depth dimension remains
unchanged. More generally, the pooling layer:
Accepts a volume of size W1 × H1 × D1
Requires two hyperparameters:
their spatial extent F ,
the stride S .
Produces a volume of size W2 × H2 × D2 where:
W2 = (W1 - F)/S + 1
H2 = (H1 - F)/S + 1
D2 = D1
Introduces zero parameters, since it computes a fixed function of the input.
Note that it is not common to use zero-padding for Pooling layers.
Pooling layer downsamples the volume spatially, independently in each depth slice of the input
volume. Left: In this example, the input volume of size [224x224x64] is pooled with filter size 2,
stride 2 into an output volume of size [112x112x64]. Notice that the volume depth is preserved.
Right: The most common downsampling operation is max, giving rise to max pooling, here
shown with a stride of 2. That is, each max is taken over 4 numbers (a little 2x2 square).
Backpropagation. Recall from the backpropagation chapter that the backward pass for
a max(x, y) operation has a simple interpretation as only routing the gradient to the
input that had the highest value in the forward pass. Hence, during the forward pass of
a pooling layer it is common to keep track of the index of the max activation
(sometimes also called the switches) so that gradient routing is efficient during
backpropagation.
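A sketch of this bookkeeping for a single depth slice (2x2 regions, stride 2; a hypothetical implementation we add for illustration):

import numpy as np

def maxpool_forward(X):
    # X: one depth slice of shape (H, W), with H and W even; F = S = 2.
    H, W = X.shape
    out = np.zeros((H // 2, W // 2))
    switches = {}                 # output position -> argmax input position
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            patch = X[i:i+2, j:j+2]
            k = np.unravel_index(np.argmax(patch), patch.shape)
            out[i // 2, j // 2] = patch[k]
            switches[(i // 2, j // 2)] = (i + k[0], j + k[1])
    return out, switches

def maxpool_backward(dout, switches, shape):
    # Route each gradient entry to the input that won the max.
    dX = np.zeros(shape)
    for pos, src in switches.items():
        dX[src] += dout[pos]
    return dX

X = np.random.randn(4, 4)
out, sw = maxpool_forward(X)                         # out.shape == (2, 2)
dX = maxpool_backward(np.ones((2, 2)), sw, X.shape)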
Recent developments.
Fractional Max-Pooling suggests a method for performing the pooling operation
with filters smaller than 2x2. This is done by randomly generating pooling regions
with a combination of 1x1, 1x2, 2x1 or 2x2 filters to tile the input activation map.
The grids are generated randomly on each forward pass, and at test time the
predictions can be averaged across several grids.
Striving for Simplicity: The All Convolutional Net proposes to discard the pooling
layer in favor of an architecture that consists only of repeated CONV layers. To
reduce the size of the representation, they suggest using a larger stride in the
CONV layer once in a while.
Due to the aggressive reduction in the size of the representation (which is helpful only
for smaller datasets to control overfitting), the trend in the literature is towards
discarding the pooling layer in modern ConvNets.
Normalization Layer
Many types of normalization layers have been proposed for use in ConvNet
architectures, sometimes with the intentions of implementing inhibition schemes
observed in the biological brain. However, these layers have recently fallen out of favor
because in practice their contribution has been shown to be minimal, if any. For various
types of normalizations, see the discussion in Alex Krizhevsky's cuda-convnet library
API.
Fully-Connected Layer
Neurons in a fully connected layer have full connections to all activations in the previous
layer, as seen in regular Neural Networks. Their activations can hence be computed with
a matrix multiplication followed by a bias offset. See the Neural Network section of the
notes for more information.
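In code, the forward pass is a single line (a sketch, with the input volume stretched into a vector):

import numpy as np

x = np.random.randn(7 * 7 * 512)    # input volume, flattened into a vector
W = np.random.randn(4096, x.size)   # one row of weights per output neuron
b = np.random.randn(4096)
out = W.dot(x) + b                  # matrix multiplication plus bias offset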
Converting Fully-Connected Layers to Convolutional Layers

It is worth noting that the only difference between FC and CONV layers is that the
neurons in the CONV layer are connected only to a local region in the input, and that
many of the neurons in a CONV volume share parameters. However, the neurons in both
layers still compute dot products, so their functional form is identical, and any FC layer
can be converted to a CONV layer whose filter size equals the size of its input volume.
For example, in a ConvNet that reduces a 224x224 image down to a [7x7x512] volume
before the first FC layer, converting that FC layer to a CONV layer with F = 7 lets the
whole network "slide" across a larger 384x384 image in a single forward pass,
evaluating the class scores on a 6x6 grid of 36 spatial locations.

Naturally, forwarding the converted ConvNet a single time is much more efficient than
iterating the original ConvNet over all those 36 locations, since the 36 evaluations share
computation. This trick is often used in practice to get better performance: for example,
it is common to resize an image to make it bigger, use a converted ConvNet to evaluate
the class scores at many spatial positions, and then average the class scores.
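The conversion itself is just a reinterpretation of the weight matrix (a sketch with the hypothetical shapes used above):

import numpy as np

# FC layer: 4096 neurons, each fully connected to a [7x7x512] volume.
W_fc = np.random.randn(4096, 7 * 7 * 512)

# Equivalent CONV layer: 4096 filters of size 7x7x512, stride 1, pad 0.
W_conv = W_fc.reshape(4096, 7, 7, 512)

# On a 224x224 input this layer produces a [1x1x4096] output; on a larger
# 384x384 input (a [12x12x512] volume at this point) it slides to a
# [6x6x4096] output, i.e. the 36 spatial evaluations mentioned above.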
ConvNet Architectures
We have seen that Convolutional Networks are commonly made up of only three layer
types: CONV, POOL (we assume Max pool unless stated otherwise) and FC (short for
fully-connected). We will also explicitly write the RELU activation function as a layer,
which applies elementwise non-linearity. In this section we discuss how these are
commonly stacked together to form entire ConvNets.
Layer Patterns
The most common form of a ConvNet architecture stacks a few CONV-RELU layers,
follows them with POOL layers, and repeats this pattern until the image has been
merged spatially to a small size. At some point, it is common to transition to
fully-connected layers. The last fully-connected layer holds the output, such as the class
scores. In other words, the most common ConvNet architecture follows the pattern:
INPUT -> [[CONV -> RELU]*N -> POOL?]*M -> [FC -> RELU]*K -> FC
where the * indicates repetition, and the POOL? indicates an optional pooling layer.
Moreover, N >= 0 (and usually N <= 3 ), M >= 0 , K >= 0 (and usually K < 3 ).
For example, here are some common ConvNet architectures you may see that follow
this pattern:
INPUT -> FC, implements a linear classifier. Here N = M = K = 0.
INPUT -> CONV -> RELU -> FC
INPUT -> [CONV -> RELU -> POOL]*2 -> FC -> RELU -> FC. Here we
see that there is a single CONV layer between every POOL layer.
INPUT -> [CONV -> RELU -> CONV -> RELU -> POOL]*3 -> [FC ->
RELU]*2 -> FC Here we see two CONV layers stacked before every POOL layer.
This is generally a good idea for larger and deeper networks, because multiple
stacked CONV layers can develop more complex features of the input volume
before the destructive pooling operation.
Prefer a stack of small filter CONV to one large receptive field CONV layer. Suppose that
you stack three 3x3 CONV layers on top of each other (with non-linearities in between,
of course). In this arrangement, each neuron on the first CONV layer has a 3x3 view of
the input volume. A neuron on the second CONV layer has a 3x3 view of the first CONV
layer, and hence by extension a 5x5 view of the input volume. Similarly, a neuron on the
third CONV layer has a 3x3 view of the 2nd CONV layer, and hence a 7x7 view of the
input volume. Suppose that instead of these three layers of 3x3 CONV, we only wanted
to use a single CONV layer with 7x7 receptive fields. These neurons would have a
receptive field size of the input volume that is identical in spatial extent (7x7), but with
several disadvantages. First, the neurons would be computing a linear function over the
input, while the three stacks of CONV layers contain non-linearities that make their
features more expressive. Second, if we suppose that all the volumes have C channels,
then it can be seen that the single 7x7 CONV layer would contain
C × (7 × 7 × C) = 49C^2 parameters, while the three 3x3 CONV layers would only
contain 3 × (C × (3 × 3 × C)) = 27C^2 parameters. Intuitively, stacking CONV layers
with tiny filters as opposed to having one CONV layer with big filters allows us to
express more powerful features of the input, and with fewer parameters. As a practical
disadvantage, we might need more memory to hold all the intermediate CONV layer
results if we plan to do backpropagation.
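The parameter comparison as a quick check (C is the channel count, assumed constant across the volumes):

C = 64                              # example channel count
single_7x7 = C * (7 * 7 * C)        # 49*C^2 = 200,704 parameters
three_3x3 = 3 * (C * (3 * 3 * C))   # 27*C^2 = 110,592 parameters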
Layer Sizing Patterns

Until now we've omitted mention of the common hyperparameters used in each of the
layers. We will first state the common rules of thumb for sizing the architectures:

The input layer (that contains the image) should be divisible by 2 many times. Common
numbers include 32 (e.g. CIFAR-10), 64, 96 (e.g. STL-10), 224 (e.g. common
ImageNet ConvNets), 384, and 512.
The conv layers should be using small filters (e.g. 3x3 or at most 5x5), using a stride of
S = 1, and crucially, padding the input volume with zeros in such a way that the conv
layer does not alter the spatial dimensions of the input. That is, when F = 3, then using
P = 1 will retain the original size of the input. When F = 5, P = 2. For a general F , it
can be seen that P = (F - 1)/2 preserves the input size. If you must use bigger filter
sizes (such as 7x7 or so), it is only common to see this on the very first conv layer that
is looking at the input image.
The pool layers are in charge of downsampling the spatial dimensions of the input. The
most common setting is to use max-pooling with 2x2 receptive fields (i.e. F = 2), and
with a stride of 2 (i.e. S = 2). Note that this discards exactly 75% of the activations in
an input volume (due to downsampling by 2 in both width and height). Another slightly
less common setting is to use 3x3 receptive fields with a stride of 2. It is very
uncommon to see receptive field sizes for max pooling that are larger than 3, because
the pooling is then too lossy and aggressive, which usually leads to worse performance.
Reducing sizing headaches. The scheme presented above is pleasing because all the
CONV layers preserve the spatial size of their input, while the POOL layers alone are in
charge of down-sampling the volumes spatially. In an alternative scheme where we use
strides greater than 1 or don't zero-pad the input in CONV layers, we would have to very
carefully keep track of the input volumes throughout the CNN architecture and make
sure that all strides and filters "work out", and that the ConvNet architecture is nicely
and symmetrically wired.
Why use stride of 1 in CONV? Smaller strides work better in practice. Additionally, as
already mentioned stride 1 allows us to leave all spatial down-sampling to the POOL
layers, with the CONV layers only transforming the input volume depth-wise.
Why use padding? In addition to the aforementioned benefit of keeping the spatial sizes
constant after CONV, doing this actually improves performance. If the CONV layers were
to not zero-pad the inputs and only perform valid convolutions, then the size of the
volumes would reduce by a small amount after each CONV, and the information at the
borders would be "washed away" too quickly.
Case studies
There are several architectures in the Teld of Convolutional Networks that have a name.
The most common are:
LeNet. The first successful applications of Convolutional Networks were
developed by Yann LeCun in the 1990's. Of these, the best known is the LeNet
architecture that was used to read zip codes, digits, etc.
AlexNet. The first work that popularized Convolutional Networks in Computer
Vision was the AlexNet, developed by Alex Krizhevsky, Ilya Sutskever and Geoff
Hinton. The AlexNet was submitted to the ImageNet ILSVRC challenge in 2012
and significantly outperformed the second runner-up (top 5 error of 16%,
compared to the runner-up with 26% error). The Network had a very similar
architecture to LeNet, but was deeper, bigger, and featured Convolutional Layers
stacked on top of each other (previously it was common to only have a single
CONV layer immediately followed by a POOL layer).
ZF Net. The ILSVRC 2013 winner was a Convolutional Network from Matthew
Zeiler and Rob Fergus. It became known as the ZF Net (short for Zeiler & Fergus
Net). It was an improvement on AlexNet by tweaking the architecture
hyperparameters, in particular by expanding the size of the middle convolutional
layers.
GoogLeNet. The ILSVRC 2014 winner was a Convolutional Network from Szegedy
et al. from Google. Its main contribution was the development of an Inception
Module that dramatically reduced the number of parameters in the network (4M,
compared to AlexNet with 60M).
VGGNet. The runner-up in ILSVRC 2014 was the network from Karen Simonyan
and Andrew Zisserman that became known as the VGGNet. Its main contribution
was in showing that the depth of the network is a critical component for good
performance. Their final best network contains 16 CONV/FC layers and,
appealingly, features an extremely homogeneous architecture that only performs
3x3 convolutions and 2x2 pooling from the beginning to the end. It was later
found that despite its slightly weaker classification performance, the VGG
ConvNet features outperform those of GoogLeNet in multiple transfer learning
tasks. Hence, the VGG network is currently the most preferred choice in the
community when extracting CNN features from images. In particular, their
pretrained model is available for plug and play use in Caffe. A downside of the
VGGNet is that it is more expensive to evaluate and uses a lot more memory and
parameters (140M).
VGGNet in detail. Let's break down the VGGNet in more detail. The whole VGGNet is
composed of CONV layers that perform 3x3 convolutions with stride 1 and pad 1, and
of POOL layers that perform 2x2 max pooling with stride 2 (and no padding). We can
write out the size of the representation at each step of the processing and keep track of
both the representation size and the total number of weights:
INPUT: [224x224x3] memory: 224*224*3=150K weights: 0
CONV3-64: [224x224x64] memory: 224*224*64=3.2M weights: (3*3*3)*64 = 1,728
CONV3-64: [224x224x64] memory: 224*224*64=3.2M weights: (3*3*64)*64 = 36,864
POOL2: [112x112x64] memory: 112*112*64=800K weights: 0
CONV3-128: [112x112x128] memory: 112*112*128=1.6M weights: (3*3*64)*128 = 73,728
CONV3-128: [112x112x128] memory: 112*112*128=1.6M weights: (3*3*128)*128 = 147,456
POOL2: [56x56x128] memory: 56*56*128=400K weights: 0
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*128)*256 = 294,912
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*256)*256 = 589,824
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*256)*256 = 589,824
POOL2: [28x28x256] memory: 28*28*256=200K weights: 0
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*256)*512 = 1,179,648
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*512)*512 = 2,359,296
POOL2: [14x14x512] memory: 14*14*512=100K weights: 0
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
POOL2: [7x7x512] memory: 7*7*512=25K weights: 0
FC: [1x1x4096] memory: 4096 weights: 7*7*512*4096 = 102,760,448
FC: [1x1x4096] memory: 4096 weights: 4096*4096 = 16,777,216
FC: [1x1x1000] memory: 1000 weights: 4096*1000 = 4,096,000

TOTAL memory: 24M * 4 bytes ~= 93MB / image (only forward! ~*2 for bwd)
TOTAL params: 138M parameters
As is common with Convolutional Networks, notice that most of the memory is used in
the early CONV layers, and that most of the parameters are in the last FC layers. In this
particular case, the first FC layer contains 100M weights, out of a total of 140M.
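These tallies are easy to recompute programmatically (a sketch; biases are ignored, as in the per-layer weight counts above):

conv_blocks = [(224, 3, [64, 64]), (112, 64, [128, 128]),
               (56, 128, [256, 256, 256]), (28, 256, [512, 512, 512]),
               (14, 512, [512, 512, 512])]
fcs = [(7 * 7 * 512, 4096), (4096, 4096), (4096, 1000)]

memory = 224 * 224 * 3                    # INPUT activations
params = 0
for size, d_in, depths in conv_blocks:
    for d_out in depths:
        memory += size * size * d_out     # CONV output (padding preserves size)
        params += 3 * 3 * d_in * d_out    # 3x3xD_in filters
        d_in = d_out
    memory += (size // 2) ** 2 * d_in     # the 2x2/stride-2 POOL that follows
for n_in, n_out in fcs:
    memory += n_out
    params += n_in * n_out

print(params)   # 138,344,128 ~= 138M parameters
print(memory)   # total activation values for one forward pass (x4 for bytes)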
Computational Considerations
The largest bottleneck to be aware of when constructing ConvNet architectures is the
memory bottleneck. Many modern GPUs have a limit of 3/4/6GB memory, with the best
GPUs having about 12GB of memory. There are three major sources of memory to keep
track of:
From the intermediate volume sizes: These are the raw number of activations at
every layer of the ConvNet, and also their gradients (of equal size). Usually, most
of the activations are on the earlier layers of a ConvNet (i.e. first Conv Layers).
These are kept around because they are needed for backpropagation, but a clever
implementation that runs a ConvNet only at test time could in principle reduce
this by a huge amount, by only storing the current activations at any layer and
discarding the previous activations on layers below.
From the parameter sizes: These are the numbers that hold the network
parameters, their gradients during backpropagation, and commonly also a step
cache if the optimization is using momentum, Adagrad, or RMSProp. Therefore,
the memory to store the parameter vector alone must usually be multiplied by a
factor of at least 3 or so.
Every ConvNet implementation has to maintain miscellaneous memory, such as
the image data batches, perhaps their augmented versions, etc.
Once you have a rough estimate of the total number of values (for activations,
gradients, and misc), the number should be converted to size in GB. Take the number of
values, multiply by 4 to get the raw number of bytes (since every floating point number
is 4 bytes, or maybe by 8 for double precision), and then divide by 1024 multiple times
to get the amount of memory in KB, MB, and finally GB. If your network doesn't fit, a
common heuristic to "make it fit" is to decrease the batch size, since most of the
memory is usually consumed by the activations.
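For example (a sketch of the arithmetic described above, using the VGGNet parameter count):

def to_gb(num_values, bytes_per_value=4):
    # float32 is 4 bytes per value (8 for double precision).
    return num_values * bytes_per_value / 1024.0 ** 3

print(to_gb(138e6 * 3))  # params + gradients + step cache: ~1.5 GB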
Additional Resources
Additional resources related to implementation:
DeepLearning.net tutorial walks through an implementation of a ConvNet in
Theano
cuda-convnet2 by Alex Krizhevsky is a ConvNet implementation that supports
multiple GPUs
ConvNetJS CIFAR-10 demo allows you to play with ConvNet architectures and
see the results and computations in real time, in the browser.
Caffe, one of the most popular ConvNet libraries.
Example Torch 7 ConvNet that achieves 7% error on CIFAR-10 with a single model
Ben Graham's Sparse ConvNet package, which he used to great success to
achieve less than 4% error on CIFAR-10.
cs231n
karpathy@cs.stanford.edu