DOI: 10.5555/1889075.1889120

Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners

Published: 20 September 2010

Abstract

This paper focuses on dimensional prediction of emotions from spontaneous conversational head gestures, a topic on which there has been virtually no prior research. It maps the amount and direction of head motion, together with occurrences of head nods and shakes, onto the arousal, expectation, intensity, power and valence levels of the observed subject. Preliminary experiments show that it is possible to predict emotions automatically along these five dimensions from conversational head gestures alone. Dimensional and continuous emotion prediction from spontaneous head gestures has been integrated into the SEMAINE project [1], which aims to achieve sustained, emotionally-colored interaction between a human user and Sensitive Artificial Listeners.
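As a rough illustration of the mapping the abstract describes, the sketch below treats each of the five emotion dimensions as a separate regression problem over simple per-window head-gesture statistics, using support vector regression in the spirit of the LIBSVM tooling cited in [21]. The feature layout, window statistics, and training values here are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: predicting five continuous emotion dimensions
# (arousal, expectation, intensity, power, valence) from per-window
# head-gesture statistics, one support vector regressor per dimension.
# Feature layout and training values are assumptions for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

DIMENSIONS = ["arousal", "expectation", "intensity", "power", "valence"]

# Each row: [mean motion magnitude, mean horizontal direction,
#            mean vertical direction, nod count, shake count]
# summarizing one analysis window of head tracking output.
X_train = np.array([
    [0.8, 0.1, 0.6, 3, 0],   # animated nodding
    [0.7, 0.9, 0.1, 0, 4],   # vigorous shaking
    [0.1, 0.0, 0.1, 0, 0],   # near-still head
])

# One continuous label per dimension per window,
# e.g. annotator traces scaled to [-1, 1].
y_train = {
    "arousal":     np.array([0.6,  0.5, -0.4]),
    "expectation": np.array([0.2, -0.1,  0.0]),
    "intensity":   np.array([0.7,  0.6, -0.5]),
    "power":       np.array([0.3, -0.2,  0.0]),
    "valence":     np.array([0.5, -0.6,  0.0]),
}

# Train one feature-scaled SVR per emotion dimension.
models = {
    dim: make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0)).fit(
        X_train, y_train[dim]
    )
    for dim in DIMENSIONS
}

# Predict all five dimensions for a new window of head-gesture features.
window = np.array([[0.5, 0.2, 0.4, 2, 0]])
prediction = {dim: float(models[dim].predict(window)[0]) for dim in DIMENSIONS}
print(prediction)
```

In practice, the motion and nod/shake features would come from detectors like those cited in [8]-[11], and the regressors would be trained on continuous annotator traces such as those in the SEMAINE database [18].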

References

[1]
The SEMAINE project, http://www.semaine-project.eu/
[2]
Kendon, A.: Some functions of the face in a kissing round. In: Facial Expression of Emotion, pp. 117-152. Cambridge University Press, Cambridge (1990).
[3]
McClave, E.Z.: Linguistic functions of head movements in the context of speech. J. of Pragmatics 32, 855-878 (2000).
[4]
Zeng, Z., et al.: A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Trans. on PAMI 31, 39-58 (2009).
[5]
Russell, J.A.: A circumplex model of affect. J. of Personality and Social Psychology 39, 1161-1178 (1980).
[6]
Scherer, K.: Psychological models of emotion. In: The Neuropsychology of Emotion, pp. 137-162. Oxford University Press, Oxford (2000).
[7]
Fontaine, J.R., et al.: The world of emotion is not two-dimensional. Psychological Science 18, 1050-1057 (2007).
[8]
Kawato, S., Ohya, J.: Real-time detection of nodding and head-shaking by directly detecting and tracking the between-eyes. In: IEEE FGR, pp. 40-45 (2000).
[9]
Kapoor, A., Picard, R.W.: A real-time head nod and shake detector. In: Workshop on Perceptive User Interfaces (2001).
[10]
Tan, W., Rong, G.: A real-time head nod and shake detector using HMMs. Expert Systems with Applications 25(3), 461-466 (2003).
[11]
Morency, L.-P., et al.: Contextual recognition of head gestures. In: ICMI, pp. 18-24 (2005).
[12]
Glowinski, D., et al.: Technique for automatic emotion recognition by body gesture analysis. In: CVPR Workshops, pp. 1-6 (2008).
[13]
Kulic, D., Croft, E.A.: Affective state estimation for human-robot interaction. IEEE Trans. on Robotics 23(5), 991-1000 (2007).
[14]
Chanel, G., et al.: Valence-arousal evaluation using physiological signals in an emotion recall paradigm. In: IEEE SMC, pp. 2662-2667 (2007).
[15]
Wöllmer, M., et al.: Abandoning emotion classes - towards continuous emotion recognition with modelling of long-range dependencies. In: Interspeech, pp. 597-600 (2008).
[16]
Gunes, H., Pantic, M.: Automatic, dimensional and continuous emotion recognition. Int. Journal of Synthetic Emotions 1(1), 68-99 (2010).
[17]
Douglas-Cowie, E., et al.: The HUMAINE database: addressing the needs of the affective computing community. In: Paiva, A.C.R., Prada, R., Picard, R.W. (eds.) ACII 2007. LNCS, vol. 4738, pp. 488-500. Springer, Heidelberg (2007).
[18]
The SEMAINE database, http://semaine-db.eu/
[19]
Schröder, M., et al.: A demonstration of audiovisual sensitive artificial listeners. In: ACII, pp. 263-264 (2009).
[20]
Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: IEEE CVPR, pp. 511-518 (2001).
[21]
Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines (2001), http://www.csie.ntu.edu.tw/~cjlin/libsvm

Published In

IVA'10: Proceedings of the 10th international conference on Intelligent virtual agents
September 2010
490 pages
ISBN: 3642158919

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 20 September 2010

Author Tags

  1. dimensional emotion prediction
  2. spontaneous head movements
  3. virtual character-human interaction

Qualifiers

  • Article


Cited By

  • (2019) Increased affect-arousal in VR can be detected from faster body motion with increased heart rate. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 1-6. DOI: 10.1145/3306131.3317022 (21-May-2019)
  • (2017) The NoXi database: multimodal recordings of mediated novice-expert interactions. Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 350-359. DOI: 10.1145/3136755.3136780 (3-Nov-2017)
  • (2017) The Conflict Escalation Resolution (CONFER) Database. Image and Vision Computing 65(C), 37-48. DOI: 10.1016/j.imavis.2016.12.001 (1-Sep-2017)
  • (2016) High-Level Geometry-based Features of Video Modality for Emotion Prediction. Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge, pp. 51-58. DOI: 10.1145/2988257.2988262 (16-Oct-2016)
  • (2015) Semi-feature Level Fusion for Bimodal Affect Regression Based on Facial and Bodily Expressions. Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 1557-1565. DOI: 10.5555/2772879.2773350 (4-May-2015)
  • (2015) Head Movement Dynamics during Play and Perturbed Mother-Infant Interaction. IEEE Transactions on Affective Computing 6(4), 361-370. DOI: 10.1109/TAFFC.2015.2422702 (23-Nov-2015)
  • (2014) Intra- and Interpersonal Functions of Head Motion in Emotion Communication. Proceedings of the 2014 Workshop on Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges, pp. 19-22. DOI: 10.1145/2666253.2666258 (16-Nov-2014)
  • (2014) Context-Sensitive Affect Recognition for a Robotic Game Companion. ACM Transactions on Interactive Intelligent Systems 4(2), 1-25. DOI: 10.1145/2622615 (1-Jun-2014)
  • (2013) Categorical and dimensional affect analysis in continuous input. Image and Vision Computing 31(2), 120-136. DOI: 10.1016/j.imavis.2012.06.016 (1-Feb-2013)
  • (2012) Robust continuous prediction of human emotions using multiscale dynamic cues. Proceedings of the 14th ACM international conference on Multimodal interaction, pp. 501-508. DOI: 10.1145/2388676.2388783 (22-Oct-2012)
