Abstract
Human action recognition plays a key role in a variety of computer vision applications, yet it remains challenging because of the demands on accuracy and robustness. Most current methods design classifiers on top of handcrafted features, which are complex and inflexible. To automatically extract both spatial and temporal features, we propose a human action recognition method based on sub-data learning that combines a proposed 3D convolutional neural network (3DCNN) with the One-versus-One (OvO) algorithm. We also employ effective data augmentation to reduce overfitting. We evaluate our method on the KTH and UCF Sports datasets and achieve promising results.
T. Wang—This work is partially supported by the ANR AutoFerm project and the Platform CAPSEC funded by Région Champagne-Ardenne and FEDER, the Fundamental Research Funds for the Central Universities (YWF-14-RSC-102), the Aeronautical Science Foundation of China (2016ZC51022), the National Natural Science Foundation of China (U1435220, 61503017).
1 Introduction
The widespread use of surveillance cameras has given rise to an explosive demand for recognizing objects, and especially human actions, in massive amounts of video. Accurate human action recognition has found a variety of applications in daily life, such as intelligent video surveillance, smart homes and somatic gaming. However, the task is full of challenges due to obstacles such as background clutter, scale variation, object occlusion and viewpoint shifts. Over the last decade, many efforts have been made in this area, but most of them focus on designing classifiers over handcrafted features, which is inflexible when both accuracy and robustness are taken into consideration. Schuldt et al. [18] constructed local space-time feature representations of videos and used a support vector machine (SVM) for action recognition. Scovanner et al. [19] introduced the 3-dimensional (3D) SIFT descriptor to action recognition and improved performance considerably. To achieve better results, Wong and Cipolla [26] took the global information encoded in videos into consideration: they studied the organization of pixels in video sequences and proposed a detector based on a set of interest points. In addition, combinations of effective feature descriptors have been widely used, such as HOG (Histogram of Oriented Gradients) + HOF (Histogram of Optical Flow) + MBH (Motion Boundary Histogram) [22] and DT (Dense Trajectories) + BOF (Bag of Features) [24].
Convolutional neural networks (CNNs) have shown their advantages in computer vision: many tasks, such as image segmentation [16] and object tracking [15], have been handled successfully by CNNs, which learn a hierarchy of features. For human action recognition, spatial-temporal features must be constructed. Ji et al. [6] developed a novel 3D CNN architecture that fuses useful spatial-temporal features, although their model takes gradient and optical flow features as input to the network. Karpathy et al. [8] and other researchers [7, 11, 25] have also proposed new architectures for action recognition. These complex architectures need a considerable amount of training data and tend to suffer from overfitting when the training sample size is small.
Many useful training strategies have been proposed to improve deep learning models, a representative one being data augmentation. To prevent overfitting, Krizhevsky et al. [9] applied translations, horizontal reflections and RGB intensity alterations to the training images to generate more samples. Jung et al. [7] added image rotation to obtain 14 times more training data. To further reduce overfitting and improve robustness, Molchanov et al. [11] introduced two additional data augmentation methods: spatial elastic deformation and pixel drop-out.
In this paper, we address human action recognition on the KTH dataset [18] and the UCF Sports dataset [17]. We build our model on 3D convolutional neural networks, apply effective data augmentation to the input video volumes to reduce overfitting, and incorporate the One-versus-One (OvO) algorithm to further boost performance, which leads to competitive results.
2 Methodology
This section is organized as follows. Section 2.1 briefly introduces the datasets used in our work, Sect. 2.2 provides the background needed for our model, Sect. 2.3 details the data preprocessing, Sect. 2.4 describes the 3DCNN framework, and Sect. 2.5 gives the training details.
2.1 Datasets
We construct our model on the KTH dataset and the UCF Sports dataset. The KTH action database contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping), each performed by 25 people in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes and indoors. There are 100 videos for each action except hand clapping, which has only 99. All videos were shot by a static camera at 25 fps and last about four seconds. To reduce computational cost, the videos are down-sampled to a resolution of \(160 \times 120\) pixels.
The UCF Sports database consists of ten human action types (diving, golf swing, kicking, lifting, horse riding, running, skateboarding, swing-bench, swing-side and walking). The video sequences are collected from sports scenes broadcast on television channels. There are 150 videos in total, with a resolution of \(720 \times 480\) pixels and unconstrained background environments (Figs. 1 and 2).
2.2 Background
3D Convolution. In a typical 2D CNN, convolutions are applied only in the spatial dimensions and therefore cannot capture the temporal features useful for action recognition. 3D convolutions extend the convolution operation to the temporal dimension: by convolving 3D kernels over the given spatial-temporal video volumes, the network can capture the temporal dynamics encoded in several adjacent frames, which is needed for action recognition.
Before the 3D convolutions, we first extract several contiguous frames from the original video sequence and stack them along the frame dimension to form a video volume of size \(w \times h \times d\), where \(w\), \(h\) and \(d\) denote the width, height and depth (temporal length) respectively. We then apply 3D convolutional kernels of size \(w^\prime \times h^\prime \times d^\prime \) across the volume to obtain a number of feature maps. The output value in a feature map corresponding to the input position (x, y, z) is computed as

$$v_{xyz} = f\left( \sum _{i}\sum _{j}\sum _{k} w_{ijk}\, k_{(x+i)(y+j)(z+k)} + b \right)$$

where f denotes the activation function, \(w_{ijk}\) denotes the weight of the 3D kernel at index (i, j, k), \(k_{(x+i)(y+j)(z+k)}\) denotes the input value at position \((x + i, y + j, z + k)\) and b denotes the bias.
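For concreteness, the following is a minimal NumPy sketch of a single-kernel 3D convolution following this formula (valid padding; the activation choice is illustrative, not the one used in our network):

```python
import numpy as np

def conv3d_single_kernel(volume, kernel, bias, activation=np.tanh):
    """Naive single-kernel 3D convolution (valid padding) following the
    formula above. volume: (W, H, D) array, kernel: (w', h', d') array."""
    W, H, D = volume.shape
    kw, kh, kd = kernel.shape
    out = np.zeros((W - kw + 1, H - kh + 1, D - kd + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                patch = volume[x:x + kw, y:y + kh, z:z + kd]
                out[x, y, z] = activation(np.sum(patch * kernel) + bias)
    return out

# Example: a 40x60x16 volume convolved with a 5x5x3 kernel.
vol = np.random.rand(40, 60, 16)
k = np.random.rand(5, 5, 3)
print(conv3d_single_kernel(vol, k, bias=0.1).shape)  # (36, 56, 14)
```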
One-versus-One Algorithm. The One-versus-One (OvO) algorithm is a classic multiclass classification algorithm in machine learning [1]. Its core idea is to reduce a multiclass classification problem to multiple binary classification problems. In the reduction, we train \(\frac{N\times (N-1)}{2}\) binary classifiers, each of which receives the data samples of one pair of classes from the original training dataset. At the training stage, each classifier learns to distinguish between its two classes. At the prediction stage, we feed the test data to all of these classifiers to obtain \(\frac{N\times (N-1)}{2}\) results for each sample and apply a majority voting strategy: the class that receives the most votes is selected as the predicted class.
In our sub-data learning work, instead of directly designing a multiclass classifier, we train 15 and 45 binary classifiers on the two datasets according to the OvO algorithm, and finally aggregate all the validation results to produce the overall classification result.
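As an illustration, here is a minimal sketch of the OvO reduction and majority voting. The `train_fn(X, y)` callable and the `predict` interface of the returned models are generic placeholders, not the actual training code of our networks:

```python
from itertools import combinations
import numpy as np

def train_ovo(classes, train_fn, data, labels):
    """Train one binary model per unordered class pair (N*(N-1)/2 in total).
    train_fn(X, y) is assumed to return a fitted binary classifier."""
    models = {}
    for a, b in combinations(classes, 2):
        mask = np.isin(labels, [a, b])          # sub-data for this pair
        models[(a, b)] = train_fn(data[mask], labels[mask])
    return models

def predict_ovo(models, x, classes):
    """Majority voting over all pairwise classifiers for one sample."""
    votes = {c: 0 for c in classes}
    for (a, b), model in models.items():
        pred = model.predict(x)                 # assumed to return a or b
        votes[pred] += 1
    return max(votes, key=votes.get)            # class with the most votes
```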
2.3 Data Preprocessing
Target Area Segmentation. To reduce computational cost and improve precision, we do not feed our convolutional networks with volumes built from the original video frames. Considering that some of the video sequences contain scale variation and that the person occupies only a small fraction of the frame in most cases, we instead adopt a human detector based on the HOG-SVM algorithm described in detail in [2]. From the detection results, we crop out of each original frame an area in which the moving person stays in the center and resize it to \(40 \times 60\) pixels, as shown in Fig. 3. As for the temporal length of the volumes, it should be long enough to capture a complete human motion yet as short as possible to reduce computation; based on our observations, 16 frames is appropriate for this purpose. The extracted target area therefore has a size of \(40 \times 60 \times 16\) pixels.
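A minimal sketch of this detection-and-crop step is given below, using OpenCV's built-in HOG pedestrian detector as a stand-in for the HOG-SVM detector of [2]; the detection parameters, the fallback to the full frame and the crop orientation are our assumptions:

```python
import cv2
import numpy as np

# Default OpenCV HOG descriptor with the built-in pedestrian SVM,
# a stand-in for the detector of [2].
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def crop_person(frame, out_size=(40, 60)):
    """Detect the person, crop the strongest detection and resize it to the
    40x60 target area (cv2.resize takes (width, height); orientation assumed)."""
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) == 0:
        return cv2.resize(frame, out_size)       # fall back to the full frame
    x, y, w, h = rects[int(np.argmax(weights))]  # keep the most confident box
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, out_size)
```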
Data Augmentation. Note that the KTH dataset contains only 599 video clips and the UCF Sports dataset just 150, which is far from enough to prevent overfitting during training. As mentioned in [9], label-preserving transformations of the original dataset are the easiest way to reduce overfitting. Motivated by that work, we use two data augmentation methods to enlarge the training data while keeping the test data unchanged.
In the first method, we extract four \(35 \times 55\) pixel patches from the four corners of the \(40 \times 60\) pixel target area obtained in the target area segmentation step and translate each of them along the diagonal direction (\(\pm 1\) pixel along the x axis and \(\pm 1\) pixel along the y axis). We also extract the \(35 \times 55\) pixel center patch of the target area. Together, these operations increase the number of training patches by a factor of 9.
The second method applies a reversal along the temporal dimension to the output of the first method, which doubles the amount of training data; for example, from an original clip of a man running from right to left we create a new clip of the man running from left to right. In total, we therefore increase the sample size by a factor of 18.
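The following sketch implements both methods under our reading of the corner translations (the exact offsets are not fully specified above, so the 1-pixel inward diagonal shift is an assumption):

```python
import numpy as np

def spatial_patches(vol, ph=35, pw=55):
    """Nine 35x55 crops from a 40x60xT volume: the four corner patches, each
    also shifted 1 pixel diagonally toward the center (assumed reading of the
    +/-1 diagonal translation), plus the center patch."""
    H, W, T = vol.shape
    cr, cc = (H - ph) // 2, (W - pw) // 2                    # center offset
    corners = [(0, 0), (0, W - pw), (H - ph, 0), (H - ph, W - pw)]
    offsets = []
    for (r, c) in corners:
        offsets.append((r, c))
        offsets.append((r + np.sign(cr - r), c + np.sign(cc - c)))  # 1 px inward
    offsets.append((cr, cc))
    return [vol[r:r + ph, c:c + pw, :] for (r, c) in offsets]

def augment(vol):
    """First method (9 spatial crops), then the second method (temporal
    reversal), giving 18 training volumes per original 40x60x16 volume."""
    patches = spatial_patches(vol)
    return patches + [p[:, :, ::-1] for p in patches]        # reverse time axis
```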
Note that we apply the data augmentation scheme only to the training data; without it, our model suffers severely from overfitting in the experiments. Since the augmentation reduces the spatial size of the network input, at validation time we also extract 5 patches (the four corners and the center) from the original test data and feed them to our model.
2.4 Our Model Architecture
As mentioned above, our model combines the 3D convolutional neural network (3DCNN) with the One-versus-One (OvO) algorithm. We therefore divide the dataset into sub-groups, and on each sub-group we train a 3DCNN that learns to distinguish between the corresponding two actions. The 3DCNN architecture, which effectively captures the temporal and spatial features useful for classification, is described in detail below.
As depicted in Fig. 4, the proposed 3DCNN architecture contains six layers, of which the first five are 3D convolutional or max-pooling layers and the last one is a fully connected layer. The input consists of volumes of size \(35 \times 55 \times 16\), corresponding to a frame size of \(35 \times 55\) and a temporal length of 16. We first apply a 3D convolution with a kernel size of \(5 \times 5 \times 3\) (\(5 \times 5\) in the spatial dimensions and 3 in the temporal dimension) to the input volume. Since one kernel yields only one feature map, we apply 16 different kernels to increase the number of feature maps, and to prevent the output size of the first layer from shrinking too sharply we pad zeros around the image borders during convolution. A \(2 \times 2 \times 1\) 3D max-pooling operation on each of the feature maps then yields 16 feature maps of reduced size \(17 \times 27 \times 16\) in layer S2. We subsequently perform a 3D convolution on the feature maps of S2 with 32 kernels of size \(6 \times 7 \times 3\), producing 32 feature maps in C3, followed by a \(3 \times 3 \times 1\) max pooling on the output of C3. At this point the spatial size of the output is small (\(4 \times 7\)), so the next layer applies only 2D convolutions: C4 is obtained with 64 kernels of size \(4 \times 7\). After all the convolution and subsampling operations, we flatten the feature maps of C4 and concatenate them into an 896-dimensional feature vector containing the useful motion information. Next comes a fully connected layer FC with 1024 nodes, connected to every unit of the feature vector. Finally, we set the number of outputs to 2, equal to the number of action classes handled by each binary classifier, and the two output values give the probability of each motion hypothesis via the softmax function.
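A minimal Keras sketch of this architecture is shown below, assuming TensorFlow/Keras (the framework is not specified in the paper). The kernel sizes, pooling sizes and channel counts follow the description above; the use of a depth-1 Conv3D kernel for the final 2D convolution and the dropout placement are our choices:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_3dcnn():
    """Sketch of the proposed 3DCNN for one binary (two-action) classifier."""
    model = models.Sequential([
        layers.Input(shape=(35, 55, 16, 1)),                 # H, W, T, channels
        layers.Conv3D(16, (5, 5, 3), padding='same',
                      activation='relu'),                    # C1: 16 kernels, 5x5x3
        layers.MaxPooling3D(pool_size=(2, 2, 1)),            # S2 -> 17x27x16
        layers.Conv3D(32, (6, 7, 3), activation='relu'),     # C3 -> 12x21x14
        layers.MaxPooling3D(pool_size=(3, 3, 1)),            # -> 4x7x14
        layers.Conv3D(64, (4, 7, 1), activation='relu'),     # C4: 2D conv per slice -> 1x1x14
        layers.Flatten(),                                    # 896-D feature vector
        layers.Dense(1024, activation='relu'),               # FC with 1024 nodes
        layers.Dropout(0.5),                                 # placement assumed
        layers.Dense(2, activation='softmax'),               # two-way softmax output
    ])
    return model
```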
2.5 Details of Training
To train the network, we choose the average cross-entropy as the loss function and minimize

$$L = \frac{1}{N}\sum _{i=1}^{N} H\bigl (P(x^i), Q(x^i)\bigr ) = -\frac{1}{N}\sum _{i=1}^{N}\sum _{c} P(c \mid x^i)\log Q(c \mid x^i)$$

where N is the total number of samples, \(x^i\) denotes the ith sample of the dataset, and P and Q denote respectively the inherent (true) probability distribution and the probability distribution of \(x^i\) predicted by the model.
In our experiments, the weights of each layer are initialized from a truncated normal distribution centered on 0 with standard deviation \( std=\sqrt{\frac{2}{n}} \), where n denotes the number of input or output connections of the layer. We choose the ReLU activation function and set the biases of all layers to 0, following [4]. During training, we apply drop-out [21] with probability 0.5 after several layers and L2 regularization on the weights to counteract possible overfitting. To accelerate training, we also apply batch normalization [5] to the response of each layer.
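A hedged sketch of this training setup in Keras, reusing `build_3dcnn()` from the sketch in Sect. 2.4, is given below. The optimizer, learning rate and L2 coefficient are illustrative assumptions; He-style initialization, zero biases, ReLU, batch normalization and the averaged cross-entropy follow the text above:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# How each convolutional layer of the network would be configured:
# He-style initialization (std = sqrt(2/n)), zero biases, ReLU and
# L2 regularization on the weights; batch normalization follows it.
conv = layers.Conv3D(16, (5, 5, 3), padding='same', activation='relu',
                     kernel_initializer='he_normal',
                     bias_initializer='zeros',
                     kernel_regularizer=regularizers.l2(1e-4))  # coefficient assumed
bn = layers.BatchNormalization()

# Compile with the averaged cross-entropy loss; the optimizer and learning
# rate are assumptions, as they are not specified here.
model = build_3dcnn()                                 # sketch from Sect. 2.4
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```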
3 Experimental Results
To evaluate the effectiveness of our model, we conduct experiments on two benchmark datasets: the KTH dataset and the UCF Sports dataset. We first read the original video sequences frame by frame and down-sample them to a resolution of \(40 \times 60\) pixels, taking memory and computation overhead into consideration. We then extract the \(35 \times 55\) pixel target areas following the method of the Target Area Segmentation section. For the train-test split, we randomly sample 10% of the reformed data as the validation part on the KTH dataset; the percentage is 33% on the UCF Sports dataset. After data augmentation, the KTH training data contains 9702 video volumes of size \(35 \times 55 \times 16\) pixels and the test data contains 600 volumes of the same size; for the UCF Sports dataset, the corresponding numbers are 1800 and 250.
According to the OvO algorithm, we generate 15 binary classifiers for the KTH dataset and 45 for the UCF Sports dataset. Each classifier uses the proposed 3DCNN architecture and is fed with the sub-data from the corresponding pair of classes. After rounds of training and a final fine-tuning process, the classification results of these binary classifiers on the two datasets are reported in Tables 1 and 2. By aggregating all these results and applying the majority voting strategy on the validation data, we obtain an overall validation accuracy of 94.0% on the KTH dataset and 95.6% on the UCF Sports dataset. The confusion matrices are shown in Figs. 5 and 6, and a comparison of our work with peer work is given in Table 3.
4 Conclusions
In this paper, we focus on the action recognition problem on the KTH and UCF Sports datasets. Rather than crafting features by hand as most researchers do, we develop a 3D convolutional neural network (3DCNN) to automatically capture useful spatial-temporal features. To boost the model's performance, we adopt a sub-data learning method that incorporates the One-versus-One (OvO) algorithm into our 3DCNN architecture. We achieve correct classification rates of 94.0% on the KTH dataset and 95.6% on the UCF Sports dataset, which is quite competitive compared with peer work.
References
Aly, M.: Survey on multiclass classification methods. Neural Netw. 19, 1–9 (2005)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005) (2005)
Ghodrati, A., Diba, A., Pedersoli, M., Tuytelaars, T., Gool, L.V.: DeepProposals: hunting objects and actions by cascading deep convolutional layers. Int. J. Comput. Vis., pp. 1–17 (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034 (2015)
Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: ICML (2015)
Ji, S., Xu, W., Yang, M., Yu, K.: 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 221–231 (2013)
Jung, H., Lee, S., Yim, J., Park, S., Kim, J.: Joint fine-tuning in deep neural networks for facial expression recognition. In: 2015 IEEE International Conference on Computer Vision (ICCV) (2015)
Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (2014)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS (2012)
Liu, L., Shao, L., Rockett, P.: Boosted key-frame selection and correlated pyramidal motion-feature representation for human action recognition. Pattern Recogn. 46, 1810–1818 (2013)
Molchanov, P., Gupta, S., Kim, K., Kautz, J.: Hand gesture recognition with 3D convolutional neural networks. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2015)
Niebles, J.C., Wang, H., Fei-Fei, L.: Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vis. 79, 299–318 (2006)
O’Hara, S., Draper, B.A.: Scalable action recognition with a subspace forest. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1210–1217 (2012)
Reddy, K.K., Shah, M.: Recognizing 50 human action categories of web videos. Mach. Vis. Appl. 24, 971–981 (2012)
Redmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A.: You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: NIPS (2015)
Rodriguez, M.D., Ahmed, J., Shah, M.: Action mach a spatio-temporal maximum average correlation height filter for action recognition. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), vol. 3, pp. 32–36. IEEE (2004)
Scovanner, P., Ali, S., Shah, M.: A 3-dimensional SIFT descriptor and its application to action recognition. In: ACM Multimedia (2007)
Shao, L., Zhen, X., Tao, D., Li, X.: Spatio-temporal Laplacian pyramid coding for action recognition. IEEE Trans. Cybern. 44(6), 817–827 (2014)
Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014)
Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Action recognition by dense trajectories. In: CVPR (2011)
Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vis. 103, 60–79 (2012)
Wang, H., Schmid, C.: Action recognition with improved trajectories. In: 2013 IEEE International Conference on Computer Vision (2013)
Wang, L., Xiong, Y., Wang, Z., Qiao, Y.: Towards good practices for very deep two-stream convnets. CoRR abs/1507.02159 (2015)
Wong, S.F., Cipolla, R.: Extracting spatiotemporal interest points using global information. In: 2007 IEEE 11th International Conference on Computer Vision (2007)
Zhang, Z., Wang, C., Xiao, B., Zhou, W., Liu, S.: Action recognition using context-constrained linear coding. IEEE Signal Process. Lett. 19, 439–442 (2012)