Research article
Open access

ElderReact: A Multimodal Dataset for Recognizing Emotional Response in Aging Adults

Published: 14 October 2019

Abstract

Automatic emotion recognition plays a critical role in technologies such as intelligent agents and social robots and is increasingly being deployed in applied settings such as education and healthcare. Most research to date has focused on recognizing the emotional expressions of young and middle-aged adults and, to a lesser extent, children and adolescents. Very few studies have examined automatic emotion recognition in older adults (i.e., elders), who represent a large and growing population worldwide. Given that aging causes many changes in facial shape and appearance and has been found to alter patterns of nonverbal behavior, there is strong reason to believe that automatic emotion recognition systems may need to be developed specifically (or augmented) for the elder population. To promote and support this type of research, we introduce a newly collected multimodal dataset of elders reacting to emotion elicitation stimuli. Specifically, it contains 1323 video clips of 46 unique individuals with human annotations of six discrete emotions (anger, disgust, fear, happiness, sadness, and surprise) as well as valence. We present a detailed analysis of the most indicative features for each emotion. We also establish several baselines using unimodal and multimodal features on this dataset. Finally, we show that models trained on a dataset of another age group do not generalize well to elders.
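
The abstract does not spell out how the unimodal and multimodal baselines are constructed. As a purely illustrative sketch of one common approach, the Python snippet below builds an early-fusion baseline: per-clip visual and acoustic feature vectors are concatenated, and one binary classifier is trained per emotion. Everything here is an assumption for illustration (the random placeholder features, their dimensions, the train/test split, and the placeholder labels); it is not the paper's actual pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Illustrative split over the dataset's 1323 clips (not the paper's split).
n_train, n_test = 1000, 323
n_clips = n_train + n_test

# Random placeholders standing in for per-clip features, e.g. visual
# descriptors from a face-analysis toolkit and acoustic descriptors from
# a speech-analysis toolkit; both dimensions are made up for the example.
visual = rng.normal(size=(n_clips, 256))
acoustic = rng.normal(size=(n_clips, 74))

# Early fusion: concatenate modality features into one vector per clip.
fused = np.hstack([visual, acoustic])

emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
# Placeholder binary labels; a clip may show several emotions at once,
# so each emotion gets its own independent classifier.
labels = rng.integers(0, 2, size=(n_clips, len(emotions)))

for i, emotion in enumerate(emotions):
    clf = SVC(kernel="linear", class_weight="balanced")
    clf.fit(fused[:n_train], labels[:n_train, i])
    pred = clf.predict(fused[n_train:])
    print(f"{emotion}: F1 = {f1_score(labels[n_train:, i], pred):.2f}")

A baseline of this kind is typically compared against the same classifier trained on each modality alone, which is what gives the unimodal-versus-multimodal comparison in the abstract its meaning.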

Published In

ICMI '19: 2019 International Conference on Multimodal Interaction
October 2019
601 pages
ISBN: 9781450368605
DOI: 10.1145/3340555
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. emotion recognition
  2. elders
  3. nonverbal behavior analysis

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '19

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%

Article Metrics

  • Downloads (Last 12 months): 845
  • Downloads (Last 6 weeks): 115
Reflects downloads up to 12 Nov 2024

Cited By

  • (2024) Multimodal User Enjoyment Detection in Human-Robot Conversation: The Power of Large Language Models. Proceedings of the 26th International Conference on Multimodal Interaction, 469-478. DOI: 10.1145/3678957.3685729. Online publication date: 4-Nov-2024.
  • (2024) Emotion recognition to support personalized therapy in the elderly: an exploratory study based on CNNs. Research on Biomedical Engineering 40, 3-4, 811-824. DOI: 10.1007/s42600-024-00363-6. Online publication date: 1-Jul-2024.
  • (2024) Design of an Emotion Care System for the Elderly Based on Precisely Detecting Emotion States. Human Aspects of IT for the Aged Population, 331-346. DOI: 10.1007/978-3-031-61546-7_21. Online publication date: 1-Jun-2024.
  • (2023) Multi-label Emotion Analysis in Conversation via Multimodal Knowledge Distillation. Proceedings of the 31st ACM International Conference on Multimedia, 6090-6100. DOI: 10.1145/3581783.3612517. Online publication date: 26-Oct-2023.
  • (2023) Affective Computing for Human-Robot Interaction Research: Four Critical Lessons for the Hitchhiker. 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1565-1572. DOI: 10.1109/RO-MAN57019.2023.10309450. Online publication date: 28-Aug-2023.
  • (2023) Biometrics-Based Mobile User Authentication for the Elderly: Accessibility, Performance, and Method Design. International Journal of Human–Computer Interaction 40, 9, 2153-2167. DOI: 10.1080/10447318.2022.2154903. Online publication date: Jan-2023.
  • (2023) THFN: Emotional health recognition of elderly people using a Two-Step Hybrid feature fusion network along with Monte-Carlo dropout. Biomedical Signal Processing and Control 86, 105116. DOI: 10.1016/j.bspc.2023.105116. Online publication date: Sep-2023.
  • (2023) A Survey on Facial Emotion Recognition for the Elderly. Digital Technologies and Applications, 561-575. DOI: 10.1007/978-3-031-29857-8_57. Online publication date: 29-Apr-2023.
  • (2022) Searching for Best Predictors of Paralinguistic Comprehension and Production of Emotions in Communication in Adults With Moderate Intellectual Disability. Frontiers in Psychology 13. DOI: 10.3389/fpsyg.2022.884242. Online publication date: 8-Jul-2022.
  • (2022) Survey on Emotion Recognition Databases. 2022 22nd International Conference on Control, Automation and Systems (ICCAS), 1173-1178. DOI: 10.23919/ICCAS55662.2022.10003935. Online publication date: 27-Nov-2022.
