Abstract
This work proposes a new motion descriptor that characterizes the main movement of a video. We combine two methods for estimating the motion between frames: block matching and the image brightness gradient. A variable-size block matching algorithm is used to extract displacement vectors as motion information, and the cross product between each block matching vector and the gradient yields the final displacement vectors. These vectors are computed over a frame sequence, producing block trajectories that carry the temporal information. The block matching vectors are also used to cluster the sparse trajectories according to their shape. The proposed method aggregates this information into orientation tensors, which generate the final descriptor. The resulting global tensor descriptor is evaluated by classifying the KTH, UCF11 and Hollywood2 video datasets with a non-linear SVM classifier. Results indicate that our sparse trajectory method is competitive with the well-known dense trajectories approach based on orientation tensors, while requiring less computational effort.
M.B. Vieira—The authors thank FAPEMIG, CAPES and UFJF for funding.
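To make the pipeline concrete, the sketch below illustrates one plausible way to accumulate an orientation tensor from sparse block trajectories, assuming the per-block frame-to-frame displacement vectors are already available from block matching. The function name, the (T, 2) displacement layout, and the normalization step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def orientation_tensor_descriptor(trajectories):
    """Minimal sketch: build an orientation tensor descriptor from
    sparse block trajectories.

    `trajectories` is assumed to be an iterable of (T, 2) arrays,
    one per tracked block, holding 2-D frame-to-frame displacements.
    This is a simplified illustration, not the paper's exact scheme.
    """
    tensor = np.zeros((2, 2))
    for traj in trajectories:
        for d in traj:                    # displacement (dx, dy)
            tensor += np.outer(d, d)      # accumulate rank-1 orientation tensor
    # normalize so videos of different lengths remain comparable
    norm = np.linalg.norm(tensor)
    if norm > 0:
        tensor /= norm
    # the symmetric 2x2 tensor has 3 independent entries
    return np.array([tensor[0, 0], tensor[0, 1], tensor[1, 1]])

# hypothetical usage: two short block trajectories
example = [np.array([[1.0, 0.0], [0.5, 0.5]]),
           np.array([[0.0, -1.0]])]
print(orientation_tensor_descriptor(example))
```

In practice, one such tensor could be accumulated per trajectory cluster and the per-cluster descriptors concatenated before feeding the non-linear SVM, but the clustering and weighting details follow the paper rather than this sketch.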
Copyright information
© 2016 Springer International Publishing Switzerland
About this paper
Cite this paper
de Oliveira Figueiredo, A.M., Caniato, M., Mota, V.F., de Souza Silva, R.L., Vieira, M.B. (2016). A Video Self-descriptor Based on Sparse Trajectory Clustering. In: Gervasi, O., et al. (eds.) Computational Science and Its Applications – ICCSA 2016. Lecture Notes in Computer Science, vol. 9787. Springer, Cham. https://doi.org/10.1007/978-3-319-42108-7_45
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-42107-0
Online ISBN: 978-3-319-42108-7