
Affect representation and recognition in 3D continuous valence–arousal–dominance space


Abstract

Research on human affect recognition has shifted its focus from the six basic emotions to complex affect recognition in continuous two- or three-dimensional space, driven by the following challenges: (i) the difficulty of representing and analyzing a large number of emotions within one framework, (ii) the problem of representing complex emotions in such a framework, (iii) the lack of validation of the framework through measured signals, and (iv) the limited applicability of the selected framework to other aspects of affective computing. This paper presents a Valence–Arousal–Dominance (VAD) framework for representing emotions, capable of representing complex emotions in a continuous 3D space. To validate the model, an affect recognition technique is proposed that analyzes spontaneous physiological (EEG) and visual cues. The DEAP dataset, a multimodal emotion dataset containing video and physiological signals together with Valence, Arousal and Dominance ratings, is used for multimodal analysis and recognition of human emotions. The results demonstrate the correctness and sufficiency of the proposed framework. The model is also compared with two-dimensional models, and its capacity to represent many more complex emotions is discussed.
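
To make the representation concrete, the following minimal sketch (not the authors' implementation) shows how emotions can be treated as points in a continuous 3D VAD space and how a predicted (valence, arousal, dominance) triple, for example one estimated from EEG and video features, can be mapped to the nearest labelled emotion. The prototype coordinates and the [1, 9] SAM-style rating scale (the scale used in DEAP) are illustrative assumptions, not values reported in the paper.

```python
# A minimal sketch (not the authors' implementation): emotions as points in a
# continuous 3D valence-arousal-dominance (VAD) space, and a predicted VAD
# triple mapped to the nearest labelled emotion by Euclidean distance.
# The prototype coordinates below are illustrative placeholders on a [1, 9]
# SAM-style rating scale (as used in DEAP), not values from the paper.

import numpy as np

# Hypothetical reference points (valence, arousal, dominance) for a few labels.
EMOTION_PROTOTYPES = {
    "happy":   (7.5, 6.5, 6.0),
    "angry":   (2.5, 7.0, 6.5),
    "sad":     (2.5, 3.0, 3.0),
    "relaxed": (7.0, 3.0, 5.5),
    "fearful": (2.0, 6.5, 3.0),
}

def nearest_emotion(vad, prototypes=EMOTION_PROTOTYPES):
    """Return the label whose prototype is closest to the given VAD triple."""
    point = np.asarray(vad, dtype=float)
    labels = list(prototypes)
    coords = np.array([prototypes[label] for label in labels], dtype=float)
    distances = np.linalg.norm(coords - point, axis=1)
    return labels[int(np.argmin(distances))]

# Example: a VAD triple that might be predicted from EEG/video features.
print(nearest_emotion((6.8, 3.4, 5.1)))  # -> "relaxed" for these prototypes
```

A complete recognizer along these lines would first regress the VAD values from the multimodal features; the nearest-prototype step above only illustrates how a continuous 3D representation can be read back out as discrete emotion labels.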



Author information

Corresponding author

Correspondence to Gyanendra K. Verma.

About this article

Cite this article

Verma, G.K., Tiwary, U.S. Affect representation and recognition in 3D continuous valence–arousal–dominance space. Multimed Tools Appl 76, 2159–2183 (2017). https://doi.org/10.1007/s11042-015-3119-y

