
Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

Conference paper in Computer Vision – ECCV 2020 (ECCV 2020)

Abstract

We present an autoencoder-based semi-supervised approach to classifying perceived human emotions from walking styles obtained from videos or motion-captured data and represented as sequences of 3D poses. Given the motion of each joint in the pose at each time step, extracted from the 3D pose sequences, we hierarchically pool these joint motions in a bottom-up manner in the encoder, following the kinematic chains in the human body. We also constrain the latent embeddings of the encoder to contain the space of psychologically motivated affective features underlying the gaits. We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings. For the annotated data, we also train a classifier to map the latent embeddings to emotion labels. Our semi-supervised approach achieves a mean average precision of 0.84 on the Emotion-Gait benchmark dataset, which contains both labeled and unlabeled gaits collected from multiple sources. We outperform current state-of-the-art algorithms for both emotion recognition and action recognition from 3D gaits by 7%–23% in absolute terms. More importantly, we improve the average precision by 10%–50% in absolute terms on classes that each make up less than 25% of the labeled part of the Emotion-Gait benchmark dataset.

This project has been supported by ARO grant W911NF-19-1-0069.

Code and additional materials are available on the project webpage: https://gamma.umd.edu/taew.
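To make the pipeline in the abstract concrete, the following is a minimal PyTorch sketch of the described components: per-chain recurrent encoders pooled bottom-up into a single latent embedding, a head that constrains the latent space by regressing affective features, a top-down decoder that reconstructs per-joint motions at every time step, and a classifier applied only to labeled gaits. The joint-to-chain grouping (CHAINS), the GRU and linear layer sizes, the 29-dimensional affective-feature vector, and the unweighted loss sum are all illustrative assumptions, not the authors' released implementation; the official code is at the project webpage above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical grouping of a 16-joint skeleton into five kinematic
    # chains (trunk, left/right arm, left/right leg).
    CHAINS = [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]]

    class GaitAutoencoder(nn.Module):
        def __init__(self, motion_dim=3, chain_h=32, latent_dim=64,
                     num_affective=29, num_classes=4):
            super().__init__()
            # Bottom-up pooling: one recurrent encoder per kinematic chain.
            self.chain_encoders = nn.ModuleList(
                nn.GRU(len(c) * motion_dim, chain_h, batch_first=True)
                for c in CHAINS)
            # Pool the chain features into a whole-body latent embedding.
            self.to_latent = nn.Linear(len(CHAINS) * chain_h, latent_dim)
            # Constrain the latent space by regressing affective features.
            self.to_affective = nn.Linear(latent_dim, num_affective)
            # Top-down decoder reconstructs per-joint motion per time step.
            self.decoder = nn.GRU(latent_dim, len(CHAINS) * chain_h,
                                  batch_first=True)
            self.to_motion = nn.Linear(
                len(CHAINS) * chain_h,
                sum(len(c) for c in CHAINS) * motion_dim)
            # Emotion classifier, trained on the labeled subset only.
            self.classifier = nn.Linear(latent_dim, num_classes)

        def forward(self, motions):
            # motions: (batch, time, joints, motion_dim)
            B, T = motions.shape[:2]
            feats = []
            for enc, chain in zip(self.chain_encoders, CHAINS):
                x = motions[:, :, chain, :].reshape(B, T, -1)
                _, h = enc(x)            # final hidden state of the chain
                feats.append(h[-1])
            z = self.to_latent(torch.cat(feats, dim=-1))
            dec_out, _ = self.decoder(z.unsqueeze(1).repeat(1, T, 1))
            recon = self.to_motion(dec_out).reshape(
                B, T, -1, motions.shape[-1])
            return z, recon, self.to_affective(z), self.classifier(z)

    def semi_supervised_loss(model, motions, affective, labels=None):
        _, recon, aff_pred, logits = model(motions)
        loss = F.mse_loss(recon, motions)              # all gaits
        loss = loss + F.mse_loss(aff_pred, affective)  # affective constraint
        if labels is not None:                         # labeled gaits only
            loss = loss + F.cross_entropy(logits, labels)
        return loss

The classifier term is simply dropped for unlabeled gaits, which is what makes the scheme semi-supervised: the reconstruction and affective losses see every sequence, while the emotion loss sees only the annotated ones. For a labeled batch, semi_supervised_loss(model, gaits, affective, labels) sums all three terms; for an unlabeled batch, labels is left as None.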



Author information

Correspondence to Uttaran Bhattacharya.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 68877 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Bhattacharya, U., et al. (2020). Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12355. Springer, Cham. https://doi.org/10.1007/978-3-030-58607-2_9


  • DOI: https://doi.org/10.1007/978-3-030-58607-2_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58606-5

  • Online ISBN: 978-3-030-58607-2

  • eBook Packages: Computer Science, Computer Science (R0)
