

Quo Vadis, Action Recognition?

A New Model and the Kinetics Dataset

João Carreira†                         Andrew Zisserman†,∗
joaoluis@google.com                    zisserman@google.com

† DeepMind    ∗ Department of Engineering Science, University of Oxford

arXiv:1705.07750v3 [cs.CV] 12 Feb 2018

Abstract

The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis of how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics.

We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.

Figure 1. A still from ‘Quo Vadis’ (1951). Where is this going? Are these actors about to kiss each other, or have they just done so? More importantly, where is action recognition going? Actions can be ambiguous in individual frames, but the limitations of existing action recognition datasets have meant that the best-performing video architectures do not depart significantly from single-image analysis, where they rely on powerful image classifiers trained on ImageNet. In this paper we demonstrate that video models are best pre-trained on videos and report significant improvements by using spatio-temporal classifiers pre-trained on Kinetics, a freshly collected, large, challenging human action video dataset.

1. Introduction

One of the unexpected benefits of the ImageNet challenge has been the discovery that deep architectures trained on the 1000 images of 1000 categories can be used for other tasks and in other domains. One of the early examples of this was using the fc7 features from a network trained on ImageNet for the PASCAL VOC classification and detection challenge [10, 23]. Furthermore, improvements in the deep architecture, changing from AlexNet to VGG-16, immediately fed through to commensurate improvements in the PASCAL VOC performance [25]. Since then, there have been numerous examples of ImageNet-trained architectures warm starting or sufficing entirely for other tasks, e.g. segmentation, depth prediction, pose estimation, action classification.

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, having on the order of 10k videos.

In this paper we aim to provide an answer to this question using the new Kinetics Human Action Video Dataset [16], which is two orders of magnitude larger than previous datasets, HMDB-51 [18] and UCF-101 [29].

Kinetics has 400 human action classes with more than 400 examples for each class, each from a unique YouTube video.

Our experimental strategy is to reimplement a number of representative neural network architectures from the literature, and then analyze their transfer behavior by first pre-training each one on Kinetics and then fine-tuning each on HMDB-51 and UCF-101. The results suggest that there is always a boost in performance by pre-training, but the extent of the boost varies significantly with the type of architecture. Based on these findings, we introduce a new model that has the capacity to take advantage of pre-training on Kinetics, and can achieve high performance. The model, termed a “Two-Stream Inflated 3D ConvNet” (I3D), builds upon state-of-the-art image classification architectures, but inflates their filters and pooling kernels (and optionally their parameters) into 3D, leading to very deep, naturally spatio-temporal classifiers. An I3D model based on Inception-v1 [13] obtains performance far exceeding the state-of-the-art, after pre-training on Kinetics.

In our model comparisons, we did not consider more classic approaches such as bag-of-visual-words representations [6, 19, 22, 33]. However, the Kinetics dataset is publicly available, so others can use it for such comparisons.

The next section outlines the set of implemented action classification models. Section 3 gives an overview of the Kinetics dataset. Section 4 reports the performance of models on previous benchmarks and on Kinetics, and section 5 studies how well the features learned on Kinetics transfer to different datasets. The paper concludes with a discussion of the results.

2. Action Classification Architectures

While the development of image representation architectures has matured quickly in recent years, there is still no clear front-running architecture for video. Some of the major differences in current video architectures are whether the convolutional and pooling operators use 2D (image-based) or 3D (video-based) kernels; whether the input to the network is just an RGB video or whether it also includes pre-computed optical flow; and, in the case of 2D ConvNets, how information is propagated across frames, which can be done either using temporally-recurrent layers such as LSTMs, or feature aggregation over time.

In this paper we compare and study a subset of models that span most of this space. Among 2D ConvNet methods, we consider ConvNets with LSTMs on top [5, 37], and two-stream networks with two different types of stream fusion [8, 27]. We also consider a 3D ConvNet [14, 30]: C3D [31].

As the main technical contribution, we introduce Two-Stream Inflated 3D ConvNets (I3D). Due to the high dimensionality of their parameterization and the lack of labeled video data, previous 3D ConvNets have been relatively shallow (up to 8 layers). Here we make the observation that very deep image classification networks, such as Inception [13], VGG-16 [28] and ResNet [12], can be trivially inflated into spatio-temporal feature extractors, and that their pre-trained weights provide a valuable initialization. We also find that a two-stream configuration is still useful.

A graphical overview of the five types of architectures we evaluate is shown in figure 2 and the specification of their temporal interfaces is given in table 1.

Many of these models (all but C3D) have an ImageNet pre-trained model as a subcomponent. Our experimental strategy assumes a common ImageNet pre-trained image classification architecture as backbone, and for this we chose Inception-v1 with batch normalization [13], and morph it in different ways. The expectation is that with this backbone in common, we will be able to tease apart those changes that benefit action classification the most.

2.1. The Old I: ConvNet+LSTM

The high performance of image classification networks makes it appealing to try to reuse them with as minimal change as possible for video. This can be achieved by using them to extract features independently from each frame and then pooling their predictions across the whole video [15]. This is in the spirit of bag-of-words image modeling approaches [19, 22, 33]; but while convenient in practice, it has the issue of entirely ignoring temporal structure (e.g. models cannot, even in principle, distinguish opening from closing a door).

In theory, a more satisfying approach is to add a recurrent layer to the model [5, 37], such as an LSTM, which can encode state, and capture temporal ordering and long-range dependencies. We position an LSTM layer with batch normalization (as proposed by Cooijmans et al. [4]) after the last average pooling layer of Inception-V1, with 512 hidden units. A fully connected layer is added on top for the classifier.

The model is trained using cross-entropy losses on the outputs at all time steps. During testing we consider only the output on the last frame. Input video frames are subsampled by keeping one out of every 5, from an original 25 frames-per-second stream. The full temporal footprint of all models is given in table 1.
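As a concrete reference, the following is a minimal sketch of a ConvNet+LSTM baseline of this kind, written with tf.keras. Inception-V1 with batch normalization (the paper's backbone) is not bundled with tf.keras, so an ImageNet-pretrained InceptionV3 stands in for it, and a plain LSTM is used in place of the batch-normalized LSTM of [4]; num_classes and the frame counts are placeholders.

```python
import tensorflow as tf

def convnet_lstm(num_classes, num_frames=25, frame_size=224):
    """Sketch: per-frame 2D ConvNet features -> LSTM -> per-step classifier."""
    frames = tf.keras.Input(shape=(num_frames, frame_size, frame_size, 3))

    # 2D backbone applied independently to every frame (stand-in for Inception-V1).
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", pooling="avg")
    features = tf.keras.layers.TimeDistributed(backbone)(frames)  # (batch, T, C)

    # Recurrent layer with 512 hidden units on top of the frame features.
    states = tf.keras.layers.LSTM(512, return_sequences=True)(features)

    # Classifier at every time step: the loss is applied at all steps during
    # training, while testing keeps only the prediction at the last frame.
    logits = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(num_classes))(states)
    return tf.keras.Model(frames, logits)

model = convnet_lstm(num_classes=400)  # e.g. the 400 Kinetics classes
```

At test time one would take the logits at the final time step, logits[:, -1], as the clip-level prediction, following the description above.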
Figure 2. Video architectures considered in this paper. K stands for the total number of frames in a video, whereas N stands for a subset of
neighboring frames of the video.

2.2. The Old II: 3D ConvNets

3D ConvNets seem like a natural approach to video modeling, and are just like standard convolutional networks, but with spatio-temporal filters. They have been explored several times previously [14, 30, 31, 32]. They have a very important characteristic: they directly create hierarchical representations of spatio-temporal data. One issue with these models is that they have many more parameters than 2D ConvNets because of the additional kernel dimension, and this makes them harder to train. Also, they seem to preclude the benefits of ImageNet pre-training, and consequently previous work has defined relatively shallow custom architectures and trained them from scratch [14, 15, 30, 31]. Results on benchmarks have shown promise but have not been competitive with the state-of-the-art, making this type of model a good candidate for evaluation on our larger dataset.

For this paper we implemented a small variation of C3D [31], which has 8 convolutional layers, 5 pooling layers and 2 fully connected layers at the top. The inputs to the model are short 16-frame clips with 112 × 112-pixel crops, as in the original implementation. Unlike [31], we used batch normalization after all convolutional and fully connected layers. Another difference from the original model is that in the first pooling layer we use a temporal stride of 2 instead of 1, which reduces the memory footprint and allows for bigger batches – this was important for batch normalization (especially after the fully connected layers, where there is no weight tying). Using this stride we were able to train with 15 videos per batch per GPU using standard K40 GPUs.

2.3. The Old III: Two-Stream Networks

LSTMs on features from the last layers of ConvNets can model high-level variation, but may not be able to capture fine low-level motion which is critical in many cases. It is also expensive to train as it requires unrolling the network through multiple frames for backpropagation-through-time.

A different, very practical approach, introduced by Simonyan and Zisserman [27], models short temporal snapshots of videos by averaging the predictions from a single RGB frame and a stack of 10 externally computed optical flow frames, after passing them through two replicas of an ImageNet pre-trained ConvNet. The flow stream has an adapted input convolutional layer with twice as many input channels as flow frames (because flow has two channels, horizontal and vertical), and at test time multiple snapshots are sampled from the video and the action prediction is averaged. This was shown to get very high performance on existing benchmarks, while being very efficient to train and test.

A recent extension [8] fuses the spatial and flow streams after the last network convolutional layer, showing some improvement on HMDB-51 while requiring less test-time augmentation (snapshot sampling). Our implementation follows this paper approximately, using Inception-V1. The inputs to the network are 5 consecutive RGB frames sampled 10 frames apart, as well as the corresponding optical flow snippets. The spatial and motion features before the last average pooling layer of Inception-V1 (5 × 7 × 7 feature grids, corresponding to time, x and y dimensions) are passed through a 3 × 3 × 3 3D convolutional layer with 512 output channels, followed by a 3 × 3 × 3 3D max-pooling layer and through a final fully connected layer. The weights of these new layers are initialized with Gaussian noise.

Both models, the original two-stream and the 3D fused version, are trained end-to-end (including the two-stream averaging process in the original model).
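A sketch of the fusion head of the 3D-Fused model described above may help make the shapes concrete: the 5 × 7 × 7 feature grids from the RGB and flow streams are concatenated and passed through a 3 × 3 × 3 convolution with 512 channels, a 3 × 3 × 3 max-pool and a final fully connected layer. The incoming channel count (1024, matching Inception-V1's last feature map), the padding and pooling stride, the ReLU after the fusion convolution, and the global pooling before the classifier are our assumptions, not details given in the text.

```python
import tensorflow as tf

def fused_head(num_classes, in_channels=1024):
    """Sketch of the 3D fusion head: concatenated RGB/flow feature grids of shape
    (5, 7, 7, C) -> 3x3x3 conv (512 ch) -> 3x3x3 max-pool -> linear classifier."""
    rgb_feats = tf.keras.Input(shape=(5, 7, 7, in_channels))   # (time, y, x, channels)
    flow_feats = tf.keras.Input(shape=(5, 7, 7, in_channels))

    x = tf.keras.layers.Concatenate(axis=-1)([rgb_feats, flow_feats])
    gaussian = tf.keras.initializers.RandomNormal(stddev=0.01)  # "Gaussian noise" init
    x = tf.keras.layers.Conv3D(512, kernel_size=3, padding="same",
                               activation="relu", kernel_initializer=gaussian)(x)
    x = tf.keras.layers.MaxPool3D(pool_size=3, strides=2, padding="same")(x)
    x = tf.keras.layers.GlobalAveragePooling3D()(x)             # assumed reduction
    logits = tf.keras.layers.Dense(num_classes, kernel_initializer=gaussian)(x)
    return tf.keras.Model([rgb_feats, flow_feats], logits)

head = fused_head(num_classes=400)
```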
2.4. The New: Two-Stream Inflated 3D ConvNets

With this architecture, we show how 3D ConvNets can benefit from ImageNet 2D ConvNet designs and, optionally, from their learned parameters. We also adopt a two-stream configuration here – it will be shown in section 4 that while 3D ConvNets can directly learn about temporal patterns from an RGB stream, their performance can still be greatly improved by including an optical-flow stream.

Inflating 2D ConvNets into 3D. A number of very successful image classification architectures have been developed over the years, in part through painstaking trial and error. Instead of repeating the process for spatio-temporal models, we propose to simply convert successful image (2D) classification models into 3D ConvNets. This can be done by starting with a 2D architecture, and inflating all the filters and pooling kernels – endowing them with an additional temporal dimension. Filters are typically square and we just make them cubic – N × N filters become N × N × N.

Bootstrapping 3D filters from 2D filters. Besides the architecture, one may also want to bootstrap parameters from the pre-trained ImageNet models. To do this, we observe that an image can be converted into a (boring) video by copying it repeatedly into a video sequence. The 3D models can then be implicitly pre-trained on ImageNet, by satisfying what we call the boring-video fixed point: the pooled activations on a boring video should be the same as on the original single-image input. This can be achieved, thanks to linearity, by repeating the weights of the 2D filters N times along the time dimension, and rescaling them by dividing by N. This ensures that the convolutional filter response is the same. Since the outputs of convolutional layers for boring videos are constant in time, the outputs of pointwise non-linearity layers and average and max-pooling layers are the same as for the 2D case, and hence the overall network response respects the boring-video fixed point. [21] studies other bootstrapping strategies.

Pacing receptive field growth in space, time and network depth. The boring-video fixed point leaves ample freedom on how to inflate pooling operators along the time dimension and on how to set convolutional/pooling temporal stride – these are the primary factors that shape the size of feature receptive fields. Virtually all image models treat the two spatial dimensions (horizontal and vertical) equally – pooling kernels and strides are the same. This is quite natural and means that features deeper in the networks are equally affected by image locations increasingly far away in both dimensions. A symmetric receptive field is however not necessarily optimal when also considering time – this should depend on frame rate and image dimensions. If it grows too quickly in time relative to space, it may conflate edges from different objects, breaking early feature detection, while if it grows too slowly, it may not capture scene dynamics well.

In Inception-v1, the first convolutional layer has stride 2, then there are four max-pooling layers with stride 2 and a 7 × 7 average-pooling layer preceding the last linear classification layer, besides the max-pooling layers in parallel Inception branches. In our experiments the input videos were processed at 25 frames per second; we found it helpful to not perform temporal pooling in the first two max-pooling layers (by using 1 × 3 × 3 kernels and stride 1 in time), while having symmetric kernels and strides in all other max-pooling layers. The final average pooling layer uses a 2 × 7 × 7 kernel. The overall architecture is shown in fig. 3. We train the model using 64-frame snippets and test using the whole videos, averaging predictions temporally.

Two 3D Streams. While a 3D ConvNet should be able to learn motion features from RGB inputs directly, it still performs pure feedforward computation, whereas optical flow algorithms are in some sense recurrent (e.g. they perform iterative optimization for the flow fields). Perhaps because of this lack of recurrence, experimentally we still found it valuable to have a two-stream configuration – shown in fig. 2, e) – with one I3D network trained on RGB inputs, and another on flow inputs which carry optimized, smooth flow information. We trained the two networks separately and averaged their predictions at test time.

2.5. Implementation Details

All models but the C3D-like 3D ConvNet use ImageNet-pretrained Inception-V1 [13] as the base network. For all architectures we follow each convolutional layer by a batch normalization [13] layer and a ReLU activation function, except for the last convolutional layers which produce the class scores for each network.

Training on videos used standard SGD with momentum set to 0.9 in all cases, with synchronous parallelization across 32 GPUs for all models except the 3D ConvNets, which receive a large number of input frames and hence require more GPUs to form large batches – we used 64 GPUs for these. We trained models on Kinetics for 110k steps, with a 10x reduction of learning rate when validation loss saturated. We tuned the learning rate hyperparameter on the validation set of Kinetics. Models were trained for up to 5k steps on UCF-101 and HMDB-51 using a similar learning rate adaptation procedure as for Kinetics, but using just 16 GPUs. All the models were implemented in TensorFlow [1].

Data augmentation is known to be of crucial importance for the performance of deep architectures. During training we used random cropping both spatially – resizing the smaller video side to 256 pixels, then randomly cropping a 224 × 224 patch – and temporally, when picking the starting frame among those early enough to guarantee a desired number of frames. For shorter videos, we looped the video as many times as necessary to satisfy each model's input interface. We also applied random left-right flipping consistently for each video during training.
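The augmentation pipeline just described can be sketched as follows with TensorFlow ops; the exact resizing, looping and flipping details below are our assumptions rather than the released implementation.

```python
import tensorflow as tf

def augment_clip(video, num_frames=64, crop=224, short_side=256):
    """Training-time augmentation sketch for a video tensor of shape (T, H, W, 3)."""
    # Temporal crop: loop short videos, then pick a random starting frame.
    t = tf.shape(video)[0]
    reps = tf.cast(tf.math.ceil(num_frames / tf.cast(t, tf.float32)), tf.int32)
    video = tf.tile(video, [reps, 1, 1, 1])
    start = tf.random.uniform([], 0, tf.shape(video)[0] - num_frames + 1, tf.int32)
    clip = video[start:start + num_frames]

    # Spatial crop: resize the shorter side to 256 pixels, then take a random 224x224 patch.
    h = tf.cast(tf.shape(clip)[1], tf.float32)
    w = tf.cast(tf.shape(clip)[2], tf.float32)
    scale = short_side / tf.minimum(h, w)
    new_hw = tf.cast(tf.round(tf.stack([h, w]) * scale), tf.int32)
    clip = tf.image.resize(clip, new_hw)
    clip = tf.image.random_crop(clip, [num_frames, crop, crop, 3])

    # Left-right flip, applied consistently to all frames of the clip.
    if tf.random.uniform([]) < 0.5:
        clip = tf.reverse(clip, axis=[2])
    return clip
```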
Figure 3. The Inflated Inception-V1 architecture (left) and its detailed inception submodule (right). The strides of convolution and pooling operators are 1 where not specified, and batch normalization layers, ReLUs and the softmax at the end are not shown. The theoretical receptive field sizes for a few layers in the network are provided in the format “time,x,y” – the units are frames and pixels. The predictions are obtained convolutionally in time and averaged.
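The bootstrapping rule from the “Bootstrapping 3D filters from 2D filters” paragraph above is easy to state in code: repeat each 2D kernel N times along a new time axis and divide by N, so that the response to a frame repeated in time equals the original 2D response. The NumPy sketch below checks this boring-video fixed point at a single spatial position; the kernel layout (kh, kw, c_in, c_out) is an assumed convention.

```python
import numpy as np

def inflate_kernel(kernel_2d, time_dim):
    """Inflate a 2D conv kernel of shape (kh, kw, c_in, c_out) into a 3D kernel of
    shape (time_dim, kh, kw, c_in, c_out) by repeating it along time and dividing
    by time_dim, as described for bootstrapping I3D from ImageNet weights."""
    kernel_3d = np.repeat(kernel_2d[np.newaxis], time_dim, axis=0)
    return kernel_3d / time_dim

# Check the boring-video fixed point at one spatial position: a 3D convolution
# over a frame repeated in time equals the original 2D response.
rng = np.random.default_rng(0)
kh = kw = 7; c_in = 3; c_out = 1; t = 7
k2d = rng.normal(size=(kh, kw, c_in, c_out))
k3d = inflate_kernel(k2d, t)

patch_2d = rng.normal(size=(kh, kw, c_in))             # one image patch
boring_patch = np.repeat(patch_2d[np.newaxis], t, 0)   # same patch repeated t times

resp_2d = np.sum(patch_2d[..., np.newaxis] * k2d)
resp_3d = np.sum(boring_patch[..., np.newaxis] * k3d)
assert np.allclose(resp_2d, resp_3d)
```

For pooling kernels the same fixed point leaves the temporal extent free, which is why the inflated Inception-V1 of figure 3 keeps a temporal extent of 1 in the first two max-pooling layers and uses a 2 × 7 × 7 final average pool.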

Method           #Params   Training # Input Frames   Training Footprint   Testing # Input Frames   Testing Footprint
ConvNet+LSTM     9M        25 rgb                    5s                   50 rgb                   10s
3D-ConvNet       79M       16 rgb                    0.64s                240 rgb                  9.6s
Two-Stream       12M       1 rgb, 10 flow            0.4s                 25 rgb, 250 flow         10s
3D-Fused         39M       5 rgb, 50 flow            2s                   25 rgb, 250 flow         10s
Two-Stream I3D   25M       64 rgb, 64 flow           2.56s                250 rgb, 250 flow        10s

Table 1. Number of parameters and temporal input sizes of the models.
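The temporal footprints in table 1 follow directly from the frame counts and the 25 fps sampling rate (with the ConvNet+LSTM keeping one frame in five), e.g.:

```python
# footprint = number of frames / effective frame rate (source videos are 25 fps)
print(16 / 25)        # 3D-ConvNet training clip: 0.64 s
print(64 / 25)        # Two-Stream I3D training snippet: 2.56 s
print(25 / (25 / 5))  # ConvNet+LSTM keeps 1 frame in 5, so 25 frames span 5.0 s
```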

During test time the models are applied convolutionally over the whole video, taking 224 × 224 center crops, and the predictions are averaged. We briefly tried spatially-convolutional testing on the 256 × 256 videos, but did not observe improvement. Better performance could be obtained by also considering left-right flipped videos at test time and by adding additional augmentation, such as photometric, during training. We leave this to future work.

We computed optical flow with a TV-L1 algorithm [38].

3. The Kinetics Human Action Video Dataset

The Kinetics dataset is focused on human actions (rather than activities or events). The list of action classes covers: Person Actions (singular), e.g. drawing, drinking, laughing, punching; Person-Person Actions, e.g. hugging, kissing, shaking hands; and Person-Object Actions, e.g. opening presents, mowing lawn, washing dishes. Some actions are fine-grained and require temporal reasoning to distinguish, for example different types of swimming. Other actions require more emphasis on the object to distinguish, for example playing different types of wind instruments.

The dataset has 400 human action classes, with 400 or more clips for each class, each from a unique video, for a total of 240k training videos. The clips last around 10s, and there are no untrimmed videos. The test set consists of 100 clips for each class. A full description of the dataset and how it was built is given in [16].

4. Experimental Comparison of Architectures

In this section we compare the performance of the five architectures described in section 2 whilst varying the dataset used for training and testing.

Table 2 shows the classification accuracy when training and testing on either UCF-101, HMDB-51 or Kinetics. We test on the split 1 test sets of UCF-101 and HMDB-51 and on the held-out test set of Kinetics. There are several noteworthy observations.
UCF-101 HMDB-51 Kinetics
Architecture RGB Flow RGB + Flow RGB Flow RGB + Flow RGB Flow RGB + Flow
(a) LSTM 81.0 – – 36.0 – – 63.3 – –
(b) 3D-ConvNet 51.6 – – 24.3 – – 56.1 – –
(c) Two-Stream 83.6 85.6 91.2 43.2 56.3 58.3 62.2 52.4 65.6
(d) 3D-Fused 83.2 85.8 89.3 49.2 55.5 56.8 – – 67.2
(e) Two-Stream I3D 84.5 90.6 93.4 49.8 61.9 66.4 71.1 63.4 74.2

Table 2. Architecture comparison: (left) training and testing on split 1 of UCF-101; (middle) training and testing on split 1 of HMDB-51;
(right) training and testing on Kinetics. All models are based on ImageNet pre-trained Inception-v1, except 3D-ConvNet, a C3D-like [31]
model which has a custom architecture and was trained here from scratch. Note that the Two-Stream architecture numbers on individual
RGB and Flow streams can be interpreted as a simple baseline which applies a ConvNet independently on 25 uniformly sampled frames
then averages the predictions.

                         Kinetics                        ImageNet then Kinetics
Architecture             RGB    Flow   RGB + Flow        RGB    Flow   RGB + Flow
(a) LSTM 53.9 – – 63.3 – –
(b) 3D-ConvNet 56.1 – – – – –
(c) Two-Stream 57.9 49.6 62.8 62.2 52.4 65.6
(d) 3D-Fused – – 62.7 – – 67.2
(e) Two-Stream I3D 68.4 (88.0) 61.5 (83.4) 71.6 (90.0) 71.1 (89.3) 63.4 (84.9) 74.2 (91.3)

Table 3. Performance training and testing on Kinetics with and without ImageNet pretraining. Numbers in brackets () are the Top-5
accuracy, all others are Top-1.

First, our new I3D models do best in all datasets, with either RGB, flow, or RGB+flow modalities. This is interesting, given their very large number of parameters and that UCF-101 and HMDB-51 are so small, and shows that the benefits of ImageNet pre-training can extend to 3D ConvNets.

Second, the performance of all models is far lower on Kinetics than on UCF-101, an indication of the different levels of difficulty of the two datasets. It is however higher than on HMDB-51; this may be in part due to lack of training data in HMDB-51 but also because this dataset was purposefully built to be hard: many clips have different actions in the exact same scene (e.g. “drawing sword” examples are taken from the same videos as “sword” and “sword exercise”). Third, the ranking of the different architectures is mostly consistent.

Additionally, two-stream architectures exhibit superior performance on all datasets, but the relative value of RGB and flow differs significantly between Kinetics and the other datasets. The contribution from flow alone is slightly higher than that of RGB on UCF-101, much higher on HMDB-51, and substantially lower on Kinetics. Visual inspection of the datasets suggests that Kinetics has much more camera motion, which may make the job of the motion stream harder. The I3D model seems able to get more out of the flow stream than the other models, however, which can probably be explained by its much longer temporal receptive field (64 frames vs 10 during training) and more integrated temporal feature extraction machinery. While it seems plausible that the RGB stream has more discriminative information – we often struggled with our own eyes to discern actions from flow alone in Kinetics, and this was rarely the case from RGB – there may be opportunities for future research on integrating some form of motion stabilization into these architectures.

We also evaluated the value of training models on Kinetics starting from ImageNet-pretrained weights versus from scratch – the results are shown in table 3. It can be seen that ImageNet pre-training still helps in all cases, and this is slightly more noticeable for the RGB streams, as would be expected.

5. Experimental Evaluation of Features

In this section we investigate the generalizability of the networks trained on Kinetics. We consider two measures of this: first, we freeze the network weights and use the network to produce features for the (unseen) videos of the UCF-101/HMDB-51 datasets. We then train multi-way soft-max classifiers for the classes of UCF-101/HMDB-51 (using their training data), and evaluate on their test sets. Second, we fine-tune each network for the UCF-101/HMDB-51 classes (using the UCF-101/HMDB-51 training data), and again evaluate on the UCF-101/HMDB-51 test sets.

We also examine how important it is to pre-train on ImageNet+Kinetics instead of just Kinetics.
Figure 4. All 64 conv1 filters of each Inflated 3D ConvNet after training on Kinetics (the filter dimensions are 7 × 7 × 7, and the 7
time dimensions are shown left-to-right across the figure). The sequence on top shows the flow network filters, the one in the middle
shows filters from the RGB I3D network, and the bottom row shows the original Inception-v1 filters. Note that the I3D filters possess rich
temporal structure. Curiously the filters of the flow network are closer to the original ImageNet-trained Inception-v1 filters, while the filters
in the RGB I3D network are no longer recognizable. Best seen on the computer, in colour and zoomed in.

The results are given in table 4. The clear outcome is that all architectures benefit from pre-training on the additional video data of Kinetics, but some benefit significantly more than others – notably the I3D-ConvNet and 3D-ConvNet (although the latter starts from a much lower base). Training just the last layers of the models after pre-training on Kinetics (Fixed) also leads to much better performance than directly training on UCF-101 and HMDB-51 for I3D models.

One explanation for the significantly better transferability of features of I3D models is their high temporal resolution – they are trained on 64-frame video snippets at 25 frames per second and process all video frames at test time, which makes it possible for them to capture fine-grained temporal structure of actions. Stated differently, methods with sparser video inputs may benefit less from training on this large video dataset because, from their perspective, videos do not differ as much from the images in ImageNet. The difference over the C3D-like model can be explained by our I3D models being much deeper, while having far fewer parameters, by leveraging an ImageNet warm-start, by being trained on 4× longer videos, and by operating on 2× higher spatial resolution videos.

The performance of the two-stream models is surprisingly good even when trained from scratch (without ImageNet or Kinetics), mainly due to the accuracy of the flow stream, which seems much less prone to overfitting (not shown). Kinetics pre-training helps significantly more than ImageNet.

5.1. Comparison with the State-of-the-Art

We show a comparison of the performance of I3D models and previous state-of-the-art methods in table 5, on UCF-101 and HMDB-51. We include results when pre-training on the Kinetics dataset (with and without ImageNet pre-training). The conv1 filters of the trained models are shown in fig. 4.

Many methods get similar results, but the best-performing method on these datasets is currently the one by Feichtenhofer and colleagues [7], which uses ResNet-50 models on RGB and optical flow streams, and gets 94.6% on UCF-101 and 70.3% on HMDB-51 when combined with the dense trajectories model [33]. We benchmarked our methods using the mean accuracy over the three standard train/test splits. Either of our RGB-I3D or Flow-I3D models alone, when pre-trained on Kinetics, outperforms all previously published results from any model or model combination.
UCF-101 HMDB-51
Architecture Original Fixed Full-FT Original Fixed Full-FT
(a) LSTM 81.0 / 54.2 88.1 / 82.6 91.0 / 86.8 36.0 / 18.3 50.8 / 47.1 53.4 / 49.7
(b) 3D-ConvNet – / 51.6 – / 76.0 – / 79.9 – / 24.3 – / 47.0 – / 49.4
(c) Two-Stream 91.2 / 83.6 93.9 / 93.3 94.2 / 93.8 58.3 / 47.1 66.6 / 65.9 66.6 / 64.3
(d) 3D-Fused 89.3 / 69.5 94.3 / 89.8 94.2 / 91.5 56.8 / 37.3 69.9 / 64.6 71.0 / 66.5
(e) Two-Stream I3D 93.4 / 88.8 97.7 / 97.4 98.0 / 97.6 66.4 / 62.2 79.7 / 78.6 81.2 / 81.3

Table 4. Performance on the UCF-101 and HMDB-51 test sets (split 1 of both) for architectures starting with / without ImageNet pretrained
weights. Original: train on UCF-101 or HMDB-51; Fixed: features from Kinetics, with the last layer trained on UCF-101 or HMDB-51;
Full-FT: Kinetics pre-training with end-to-end fine-tuning on UCF-101 or HMDB-51.

Model                                                          UCF-101   HMDB-51
Two-Stream [27] 88.0 59.4
IDT [33] 86.4 61.7
Dynamic Image Networks + IDT [2] 89.1 65.2
TDD + IDT [34] 91.5 65.9
Two-Stream Fusion + IDT [8] 93.5 69.2
Temporal Segment Networks [35] 94.2 69.4
ST-ResNet + IDT [7] 94.6 70.3
Deep Networks [15], Sports 1M pre-training 65.2 -
C3D one network [31], Sports 1M pre-training 82.3 -
C3D ensemble [31], Sports 1M pre-training 85.2 -
C3D ensemble + IDT [31], Sports 1M pre-training 90.1 -
RGB-I3D, Imagenet+Kinetics pre-training 95.6 74.8
Flow-I3D, Imagenet+Kinetics pre-training 96.7 77.1
Two-Stream I3D, Imagenet+Kinetics pre-training 98.0 80.7
RGB-I3D, Kinetics pre-training 95.1 74.3
Flow-I3D, Kinetics pre-training 96.5 77.3
Two-Stream I3D, Kinetics pre-training 97.8 80.9

Table 5. Comparison with state-of-the-art on the UCF-101 and HMDB-51 datasets, averaged over three splits. First set of rows contains
results of models trained without labeled external data.

Our combined two-stream architecture widens the advantage over previous models considerably, bringing overall performance to 98.0% on UCF-101 and 80.9% on HMDB-51, which corresponds to 63% and 35% reductions in misclassification, respectively, compared to the best previous model [7] (the error rates fall from 5.4% to 2.0% on UCF-101 and from 29.7% to 19.1% on HMDB-51).

The difference between Kinetics pre-trained I3D models and prior 3D ConvNets (C3D) is even larger, although C3D is trained on more videos, 1M examples from Sports-1M plus an internal dataset, and even when ensembled and combined with IDT. This may be explainable by the better quality of Kinetics, but also because of I3D simply being a better architecture.

6. Discussion

We return to the question posed in the introduction, “is there a benefit in transfer learning from videos?”. It is evident that there is a considerable benefit in pre-training on (the large video dataset) Kinetics, just as there have been such benefits in pre-training ConvNets on ImageNet for so many tasks. This demonstrates transfer learning from one dataset (Kinetics) to another dataset (UCF-101/HMDB-51) for a similar task (albeit for different action classes). However, it still remains to be seen if there is a benefit in using Kinetics pre-training for other video tasks such as semantic video segmentation, video object detection, or optical flow computation. We plan to make publicly available I3D models trained on the official Kinetics dataset's release to facilitate research in this area.

Of course, we did not perform a comprehensive exploration of architectures – for example we have not employed action tubes [11, 17] or attention mechanisms [20] to focus in on the human actors. Recent works have proposed imaginative methods for determining the spatial and temporal extent (detection) of actors within the two-stream architectures, by incorporating linked object detections in
time [24, 26]. The relationship between space and time is a mysterious one. Several very creative papers have recently gone out of the box in attempts to capture this relationship, for example by learning frame ranking functions for action classes and using these as a representation [9], by making analogies between actions and transformations [36], or by creating 2D visual snapshots of frame sequences [2] – this idea is related to the classic motion history work of [3]. It would be of great value to also include these models in our comparison but we could not, due to lack of time and space.

Acknowledgements: We would like to thank everyone on the Kinetics project and in particular Brian Zhang and Tim Green for help setting up the data for our experiments, and Karen Simonyan for helpful discussions.

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould. Dynamic image networks for action recognition. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[3] A. F. Bobick and J. W. Davis. The recognition of human movement using temporal templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3):257–267, 2001.
[4] T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
[5] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015.
[6] A. Fathi and G. Mori. Action recognition by learning mid-level motion features. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
[7] C. Feichtenhofer, A. Pinz, and R. P. Wildes. Spatiotemporal residual networks for video action recognition. arXiv preprint arXiv:1611.02155, 2016.
[8] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[9] B. Fernando, E. Gavves, J. M. Oramas, A. Ghodrati, and T. Tuytelaars. Modeling video evolution for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5378–5387, 2015.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
[11] G. Gkioxari and J. Malik. Finding action tubes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 759–768, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2016.
[13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[14] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2013.
[15] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
[16] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
[17] A. Kläser, M. Marszalek, C. Schmid, and A. Zisserman. Human focused action localization in video. In International Workshop on Sign, Gesture, Activity, ECCV 2010, 2010.
[18] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[19] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
[20] Z. Li, E. Gavves, M. Jain, and C. G. Snoek. VideoLSTM convolves, attends and flows for action recognition. arXiv preprint arXiv:1607.01794, 2016.
[21] E. Mansimov, N. Srivastava, and R. Salakhutdinov. Initialization strategies of spatio-temporal convolutional neural networks. CoRR, abs/1503.07274, 2015.
[22] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. International Journal of Computer Vision, 79(3):299–318, 2008.
[23] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 1717–1724, 2014.
[24] X. Peng and C. Schmid. Multi-region two-stream R-CNN for action detection. In European Conference on Computer Vision, pages 744–759. Springer, 2016.
[25] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
[26] S. Saha, G. Singh, M. Sapienza, P. H. Torr, and F. Cuzzolin. Deep learning for detecting multiple space-time action tubes in videos. British Machine Vision Conference (BMVC), 2016.
[27] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568–576, 2014.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[29] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[30] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In European Conference on Computer Vision, pages 140–153. Springer, 2010.
[31] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 4489–4497. IEEE, 2015.
[32] G. Varol, I. Laptev, and C. Schmid. Long-term temporal convolutions for action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[33] H. Wang and C. Schmid. Action recognition with improved trajectories. In International Conference on Computer Vision, 2013.
[34] L. Wang, Y. Qiao, and X. Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4305–4314, 2015.
[35] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: towards good practices for deep action recognition. In European Conference on Computer Vision, 2016.
[36] X. Wang, A. Farhadi, and A. Gupta. Actions ~ transformations. arXiv preprint arXiv:1512.00795, 2015.
[37] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4694–4702, 2015.
[38] C. Zach, T. Pock, and H. Bischof. A duality based approach for realtime TV-L1 optical flow. Pattern Recognition, pages 214–223, 2007.
