Human activity recognition via optical flow: decomposing activities into basic actions

Published in Neural Computing and Applications (IAPR-MedPRAI collection)

Abstract

Recognizing human activities with automated methods has recently emerged as a pivotal research theme for security-related applications. In this paper, an optical flow descriptor is proposed for recognizing human actions using only features derived from motion. The signature of a human action is composed as a histogram of kinematic features covering both local and global traits. Experiments on the Weizmann and UCF101 databases confirm the potential of the proposed approach, with attained classification rates of 98.76% and 70%, respectively, for distinguishing between different human actions. For comparative and performance analysis, several classifiers, including k-NN, decision trees, SVM and deep learning, are applied to the proposed descriptors. Further analysis assesses the descriptors under different resolutions and frame rates. The obtained results align with early psychological studies reporting that human motion alone is adequate for the perception of human activities.
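The core idea of the abstract, a motion signature built as a histogram over the optical-flow field, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact method: the bin counts, magnitude cap, and L1 normalization are assumptions, and the flow field is taken as precomputed (u, v) displacement pairs rather than estimated from frames.

```python
import math

def flow_histogram(flow, n_orient=8, n_mag=4, mag_max=10.0):
    """Build a joint orientation/magnitude histogram from a dense
    optical-flow field given as an iterable of (u, v) displacements.
    Bin counts and normalization are illustrative assumptions."""
    hist = [0.0] * (n_orient * n_mag)
    for u, v in flow:
        mag = math.hypot(u, v)
        ang = math.atan2(v, u) % (2 * math.pi)  # orientation in [0, 2*pi)
        # Quantize orientation and (clipped) magnitude into bins.
        ob = min(int(ang / (2 * math.pi) * n_orient), n_orient - 1)
        mb = min(int(mag / mag_max * n_mag), n_mag - 1)
        hist[ob * n_mag + mb] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # L1-normalized action signature

# Usage: three flow vectors produce a 32-bin normalized signature.
sig = flow_histogram([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)])
```

Such a normalized histogram can then be fed to any of the classifiers mentioned above (k-NN, decision tree, SVM, or a deep network) as a fixed-length feature vector per video clip.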




Author information


Corresponding author

Correspondence to Ammar Ladjailia.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ladjailia, A., Bouchrika, I., Merouani, H.F. et al. Human activity recognition via optical flow: decomposing activities into basic actions. Neural Comput & Applic 32, 16387–16400 (2020). https://doi.org/10.1007/s00521-018-3951-x
